AI in Finance
Why Finance Teams Resist AI (And What Actually Fixes It)
4 May 2026
Finance professionals resist AI. Not all of them. Not always. But often enough that resistance is the norm in early-stage AI adoption rather than the exception.
The standard response is to treat this as a communications problem. A change management gap to be closed with a town hall presentation and a few “AI champion” appointments. The assumption is that people are afraid of something they do not understand, and that better information will fix it.
That assumption is wrong. Finance professionals are trained to manage risk. They are trained to identify what can go wrong, maintain audit trails, and apply professional scepticism to information before acting on it. A new AI tool that changes how their function works is, by any reasonable definition, a risk. Their instinct to question it carefully is not a dysfunction. It is the professional training doing exactly what it should.
Understanding why finance teams resist AI, at the level of specific, named concerns rather than a general “fear of change”, is more useful than dismissing those concerns. Different types of resistance require different responses. Applying the wrong response to the wrong type makes things worse.
The Three Types of Resistance
Type one: fear of job displacement
This is the most common type and the most often mishandled. It is frequently unstated. A finance professional who is worried about whether their role will exist in 18 months does not typically say that in a project meeting. They raise concerns about data security. They ask detailed questions about how the tool handles exceptions. They point out that the process being automated has a nuance the vendor has not addressed.
All of those concerns may be legitimate on their own terms. But their function is sometimes to slow things down, and the driver is sometimes the job security question that has not been named.
The standard communication response is to say that AI will free up the team for higher-value work. This is often true. It is also often counterproductive as a communication strategy. It sounds like the language of a managed redundancy process. It says “your current role will not exist” before it says anything concrete about what will exist instead.
What actually helps is different. Honesty about what the role looks like in 12 months, not a reassurance that everything will be fine. Specific involvement in designing the new version of the role: if the AP clerk’s invoice matching work is going to be automated, what does the AP clerk do instead, and do they have meaningful input into defining that? And evidence, where it exists, from organisations where AI adoption changed what people do rather than whether they have jobs.
The organisations that have navigated this well have not done it by communicating better. They have done it by involving frontline staff earlier, being specific about the post-automation role before the automation goes live, and making the development pathway visible before the change happens rather than promising it will emerge afterwards.
Type two: professional accountability anxiety
Finance professionals are personally accountable for the accuracy of the work they sign off on. That accountability does not transfer to an AI tool. If the management accounts are wrong and it turns out an AI system produced the first draft, the question is not whether the AI system failed. The question is why the finance professional who reviewed and approved the output did not catch the error.
“If the AI is wrong, I am wrong” is not an irrational concern. It is an accurate description of how professional accountability works. The response cannot be reassurance that the AI is accurate enough. The AI will sometimes be wrong. Every tool is sometimes wrong. The question is whether the governance structure makes the accountability chain clear, visible, and manageable.
This is a governance design problem, not a communications problem. The solution is not to tell the team the AI is accurate. It is to design the human-in-the-loop controls that make the accountability structure explicit. What is the human specifically reviewing? What decisions require human sign-off regardless of the AI output? What is the escalation path when the output looks wrong? When those questions are answered in the tool’s configuration and the process design, the professional accountability anxiety has somewhere to go. It becomes a governance question with a governance answer rather than a vague concern with a vague reassurance.
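To make that concrete, here is a minimal sketch of what an explicit review-routing rule might look like in code. Everything in it is an assumption for illustration: the thresholds, the materiality limit, and the three routes ('auto', 'sign_off', 'escalate') are hypothetical, not taken from any specific tool. The point is only that the answers to "what does the human review, what requires sign-off, what escalates" can be written down as rules rather than left as vague intent.

```python
from dataclasses import dataclass

# Illustrative thresholds -- assumptions for this sketch, not recommended values.
AUTO_ACCEPT_CONFIDENCE = 0.98
SIGN_OFF_CONFIDENCE = 0.80

@dataclass
class AiOutput:
    confidence: float   # the tool's own confidence score for this item
    is_exception: bool  # flagged as a known exception type
    amount: float       # monetary value of the item

def review_route(output: AiOutput, materiality_limit: float = 10_000.0) -> str:
    """Decide who owns the decision on a single AI-produced item.

    Returns one of:
      'auto'     -- low-risk item, AI output accepted (and logged for sample review)
      'sign_off' -- a named human must review and approve before posting
      'escalate' -- routed to the escalation path (e.g. the financial controller)
    """
    # Anything material or flagged as an exception always gets a human,
    # regardless of how confident the tool is: accountability stays human.
    if output.is_exception or output.amount >= materiality_limit:
        return "escalate" if output.confidence < SIGN_OFF_CONFIDENCE else "sign_off"
    if output.confidence >= AUTO_ACCEPT_CONFIDENCE:
        return "auto"
    if output.confidence >= SIGN_OFF_CONFIDENCE:
        return "sign_off"
    return "escalate"
```

Whether the rule lives in the tool's configuration or in the surrounding process documentation matters less than the fact that it exists, is visible, and can be challenged in a governance review.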
The AI governance framework covers this in detail. The short version: accountability does not transfer to the tool, so the governance design needs to make human accountability operationally real, not just theoretically present.
Type three: competence anxiety
“I do not understand how this works and I do not know how to tell when it is wrong.”
This is the most solvable of the three and the one most often addressed inadequately. Generic AI training, the kind that explains what large language models are or gives a general overview of machine learning, does not address this concern. The concern is specific: can I use this tool competently in my job, and can I identify when it is producing outputs I should not rely on?
The answer has to be tool-specific and job-specific. The reconciliation analyst needs to understand how this reconciliation tool works, what its common failure modes are in the context of this organisation’s data, and what a suspicious output looks like in practice. Generic training does not provide that. Hands-on time with the specific tool in the context of the specific job, with explicit instruction on how to identify problems, does.
Transparency about the tool’s limitations matters here. A team that is told the tool is accurate, and then encounters an obviously wrong output, loses confidence in both the tool and the people who deployed it. A team told “the tool handles standard matching very well and struggles with split invoices from these three supplier types” knows where to focus attention and is less likely to be blindsided.
What Does Not Work
Several approaches are tried again and again, and they consistently produce poor results.
Town hall announcements of the new tool
The format signals that the decision has been made, feedback is not being solicited, and the audience’s role is to receive information and comply. Even when that is the honest situation, announcing it this way activates resistance rather than addressing it.
Generic AI training that is not specific to how the tool works in the team’s job
The gap between what generic AI training provides and what a finance professional needs to use a specific tool confidently in a specific role is significant. Training that does not close that gap wastes time and leaves the underlying concern unaddressed.
Mandating adoption without addressing the concerns that drive resistance
A team required to use a tool they do not trust will use it in ways that do not reflect its actual capabilities. They will add manual checks that duplicate the tool’s work. They will use the output as a starting point and then redo the analysis manually. They will appear to have adopted the tool while effectively working around it.
This is the worst outcome: an implementation that looks successful in the deployment metrics and produces no material improvement in how the finance function works. The tool is technically live. No one trusts it enough to actually rely on it.
Framing resistance as a performance issue
The finance professional who raises detailed questions about how the tool handles exceptions is doing their job. Treating that as obstruction is a category error. It damages trust and produces exactly the working-around-the-tool behaviour described above.
What Actually Works
Involvement before announcement
The people who will use a tool daily know things about the current process that no one else does. They know which supplier always sends non-standard invoice formats. They know which account codes get misclassified. They know the manual adjustment that gets made every month and why. Their input in pilot design makes the tool better in practice, not just in theory. It creates genuine ownership in a way that consultation-after-the-fact does not.
The consistent predictor of successful AI adoption in the finance functions I have worked with is whether frontline team members were involved in the pilot before it went live. Not consulted afterwards. Involved in the design.
Small wins first, and make them visible
Start with the use case where the AI is most clearly better than the manual process. Not the use case with the biggest theoretical value. The one where the team can see for themselves, in their own work, that the tool produces a better outcome faster. The first experience of an AI tool working well is more powerful than any amount of communication about how well it will work. Once one person on the team has had that experience, they become a more credible advocate than any external message.
Honest timeline conversations
If a role is going to change significantly in 12 months, say so now and describe what it will change to. The uncertainty of not knowing is worse than the discomfort of knowing something difficult. Finance professionals are capable of handling difficult information if it is given directly. They are not well-served by reassurances that turn out to have been misleading.
The post on change management in AI finance adoption has the full framework. The post on finance transformation without losing your team covers the parallel dynamic in broader transformation programmes. The principles are the same.
The Senior Resistance Problem
This tends to get less attention than the frontline resistance problem. It is at least as significant.
Finance directors and financial controllers who have built professional authority partly on the complexity they manage have a specific stake in that complexity continuing. Deep knowledge of how the reconciliation works. Expert navigation of the audit process. Strong control over the close. These are real skills that have taken years to develop and have created genuine professional value.
AI adoption makes some of those skills less scarce. The reconciliation expertise is less differentiating when the reconciliation is handled by an AI layer. The close process mastery is less central when the close has been restructured around automation.
This dynamic does not typically present as “I do not want this project to succeed.” It presents as caution, which is professionally virtuous for a finance professional. It presents as concerns about data security, which are sometimes legitimate. It presents as governance questions that require further analysis, which is sometimes the right call and sometimes delay.
The single most common reason I see AI pilots stalling in extended review phases rather than moving to production: the most senior person in the process has concerns that are not being named and addressed. The pilot met its criteria. The governance review keeps finding new questions. The launch date keeps slipping.
The solution is the same as for frontline resistance: involvement, honesty, and a clear picture of what the senior role looks like in an AI-assisted finance function. What is the FD actually doing when the reconciliation and the close and the board pack assembly are handled by the AI layer? The answer (more commercial engagement, more strategic judgment, more of the work the role was always supposed to be doing) is a good answer. But it needs to be said explicitly. And it needs to be said to the FD directly, not implied in a project communication.
A Practical Change-Readiness Checklist
Before any AI tool goes live in a finance function, five questions should have clear answers.
Have you involved at least two frontline team members in the pilot design: people who will use the tool daily, not project sponsors? If not, do it before the pilot, not during.
Have you had an honest conversation with the team about what the role looks like post-implementation, specific about what changes, not a general “higher-value work” reassurance? If not, have it before the announcement, not after.
Is the accountability structure explicit in how the tool is configured: what the human reviews, what requires human sign-off, what the escalation path is when the output looks wrong? If not, design it before go-live.
Have you agreed what success looks like for the team, not just for the project? Success metrics for the project (accuracy rates, processing time) are not the same as success from the team’s perspective (confidence in the tool, clarity about the role, reasonable workload). Both matter.
Have you identified the most sceptical person on the team and given them a meaningful role in the pilot: not a nominal one, but actual input into design and evaluation? The sceptic’s questions are the ones the implementation will face in production. Answering them in the pilot is significantly cheaper than answering them after go-live.
Resistance Is Information
Resistance is not obstruction. It is information about the gap between the current state and where the tool is asking the team to go.
When the AP clerk raises concerns about how the tool handles exceptions, that is information about which exception types are complex and need careful attention in the tool design. When the financial controller asks detailed questions about audit defensibility, that is information about what the governance design needs to address. When the FD keeps finding new questions that delay the production decision, that is information about an underlying concern that needs to be named and addressed directly.
The organisations that navigate AI adoption well treat resistance as a signal. They understand what the signal is pointing to. They address the underlying concern rather than the surface-level behaviour.
That is change management. It is not a soft activity that sits alongside the real work of the implementation. It is the work that determines whether the AI investment returns anything.
The posts on building an AI-ready finance function and the full AI in finance strategy are the right places to take this thinking further.
Maebh Collins is a Fellow Chartered Accountant (FCA, ICAEW) with Big 4 training and twenty years of operational experience as a founder and senior finance leader. She writes about AI in finance transformation from the inside out.