AI in Finance
Why Finance Teams Resist AI (And What to Do About It)
30 March 2026
Resistance to AI adoption in finance teams is rational. This is the starting point that most AI change management programmes miss, and it is why most of them fail.
Finance professionals are trained to identify and manage risk. A new AI tool is a risk: to the accuracy of the numbers they are responsible for, to the professional judgment they have spent years developing, and possibly to the security of their employment. None of those concerns are irrational. They are the appropriate response of a trained professional to a significant change in their operating environment.
The standard change management playbook does not engage with any of this. Announce the tool. Explain the benefits. Run some training sessions. Monitor adoption. The result is surface compliance and low actual adoption.
Understanding the specific nature of the resistance is the starting point for managing it effectively.
The Three Sources of Resistance
Finance teams resist AI for three distinct reasons. The three often coexist in the same individual, and each requires a different response.
Fear of job displacement is the most obvious source and often the one organisations are least willing to address directly. The corporate instinct is to reassure. The standard script: this will free you up for higher-value work. The problem with this script is that the people receiving it have read the same articles their managers have read. They know AI is automating tasks at scale. They have watched what happened in other industries. “This will free you up for higher-value work” sounds, to a finance professional processing invoices, like their job is going away and being replaced with something unspecified that may or may not exist.
The concern is real and deserves an honest response, not a corporate one. Some roles will change significantly. Some tasks will be automated. In a well-managed finance transformation, this creates capacity for work that is more valuable to the organisation and more professionally interesting to the individual. But this outcome is not automatic. It requires deliberate planning: what the new version of this role actually looks like, what skills it requires, and what development support will be provided. Finance teams can navigate change when they can see where they are going. They resist it when the direction is obscured by optimistic generalities.
Professional accountability anxiety is subtler and more specifically relevant to finance. The accountant signing off the numbers is professionally and legally responsible for those numbers. The AI tool that produced an element of those numbers is not. If the AI is wrong and the accountant approved the output without adequate scrutiny, the accountant is wrong.
This creates a legitimate governance design question. How is human accountability preserved in a process that includes AI-generated outputs? What does adequate scrutiny look like when the AI is processing thousands of transactions per day? At what point is the human reviewer genuinely in control, and at what point are they providing a rubber stamp on volume they cannot meaningfully review?
These are not hypothetical concerns. They are the correct questions from a professionally trained risk manager. The answer to this source of resistance is not communication. It is governance design: explicit human-in-the-loop checkpoints, clear definitions of what the human is reviewing and approving, and accountability structures that are visible in how the tool operates. Finance teams need to be able to see where their accountability sits in the new process, not just be told it has not changed.
Competence anxiety is the third source. Finance professionals are experts in their domain. They understand what they are doing and why. An AI tool introduces a layer of processing they do not understand and cannot fully interrogate. If the tool produces an output that looks wrong, they may not be able to identify why it is wrong. If the tool produces an output that looks right but is wrong, they may not catch it. Both scenarios threaten the professional competence their role depends on.
This is a training and transparency problem. Vendor-delivered system training tells people how to operate the tool. It does not build the understanding needed to know when to trust it, when to question it, and how to identify when it is producing unreliable outputs. Building that understanding requires a different kind of investment.
What Not to Do
Three patterns of mistake are common enough in AI change management that they are worth naming explicitly.
Telling the team this will free them for higher-value work is the most common mistake. As noted above, this is heard as a threat rather than reassurance. The intent may be genuine; the effect is the opposite. If you want to communicate meaningfully about role evolution, be specific: this is what the role looks like after this implementation, these are the skills it will require, and here is how we are going to help you develop them. Vague reassurance is worse than silence because it signals the concern is not being taken seriously.
Running a pilot without involving the people who will use the tool daily is a design failure with adoption consequences. Pilots built by the implementation team and the vendor are optimised for the vendor’s preferred use case and the implementation team’s understanding of the process. The actual users know things that neither group knows: the edge cases the standard workflow does not handle, the process steps the system specification missed, the ways in which real-world inputs differ from the clean examples used in testing. Their input makes the tool better. Their involvement creates ownership. Pilots designed without them produce tools the daily users will work around rather than adopt.
Communicating the change at announcement rather than throughout design means that by the time the tool is presented to the finance team, the decisions have been made. The team’s reaction at that point is irrelevant to the outcome because the implementation is already in train. The resistance forms before announcement and calcifies after it. Involvement needs to happen during the design phase, not after it.
What Works
Involve the team in the pilot design. Not as a token gesture. As genuine contributors to the implementation. They know the edge cases. They know the problems in the current process that the AI tool could address and the ones it cannot. Their input makes the tool more likely to work. Their involvement creates a group of people inside the finance team who have ownership of the outcome rather than suspicion of it.
Be honest about job impact. AI will change what finance teams do. This is true. It is not a reason to panic, but it is worth planning for seriously. Help people understand what the new version of their role looks like. Be specific about which tasks will change, which will remain, and which new capabilities will become relevant. Where roles are genuinely at risk, address that honestly and early, not after the tool is live and the anxiety has compounded.
Design accountability into the system explicitly. The human is reviewing, approving, and accountable. Not the AI. This is true, and it needs to be visible in how the tool operates. The workflow should make the human checkpoint clear. The training should explain what the reviewer is expected to do at each checkpoint. The governance documentation should record where human oversight sits. This does not just manage accountability anxiety. It also produces a better-controlled system.
Run the governance framework before the tool is live. The finance team should know exactly when human judgment is required, what escalation paths exist, and how errors or anomalies are handled. These should not be discovered after go-live. Finance professionals can work confidently within a governance framework they understand. They struggle to work confidently in a system where the rules of engagement are unclear.
Both the general change management framework for finance transformation and the AI-specific governance considerations are worth reviewing before you begin: change management in finance transformation covers the broader context, AI governance for finance teams covers the specific framework.
The Skills Gap
Finance teams need AI literacy. Not technical knowledge of how machine learning works. Practical literacy: what does this output mean, when should I trust it, when should I question it, how do I identify when it is producing unreliable results.
This is different from system training. System training teaches you how to operate the tool. AI literacy teaches you how to evaluate what the tool produces. A finance professional with good AI literacy can look at an AI-generated reconciliation and identify whether the output is credible, whether the exceptions have been handled correctly, and whether anything in the output warrants further investigation. Without that literacy, they are either over-relying on the tool or avoiding it. Neither is the outcome you are trying to achieve.
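What evaluating an output can look like in practice is worth making concrete. The sketch below is purely illustrative: the field names, thresholds, and checks are hypothetical, not drawn from any real tool. It shows the kind of structured scepticism AI literacy builds, such as checking that totals tie back to the source and treating an implausibly clean result as itself a warning sign.

```python
# Hypothetical sketch: the kind of credibility checks a literate reviewer
# applies to an AI-generated reconciliation. Field names and thresholds
# are illustrative only.
def credibility_checks(recon: dict) -> list[str]:
    """Return reasons to question an AI-generated reconciliation summary."""
    concerns = []

    # The matched and exception totals should tie back to the source ledger.
    tie_out = recon["matched_total"] + recon["exceptions_total"]
    if abs(tie_out - recon["source_total"]) > 0.01:
        concerns.append("totals do not tie to the source ledger")

    # An output that looks too clean deserves scrutiny: zero exceptions
    # on a large batch can signal over-eager auto-matching, not accuracy.
    if recon["exception_count"] == 0 and recon["item_count"] > 500:
        concerns.append("zero exceptions on a large batch warrants investigation")

    return concerns


flags = credibility_checks({
    "matched_total": 98_500.00,
    "exceptions_total": 1_500.00,
    "source_total": 100_000.00,
    "exception_count": 0,
    "item_count": 1_204,
})
# Here the totals tie out, but the zero-exception result on 1,204 items
# is flagged for investigation rather than accepted at face value.
```

The checks themselves are trivial to compute. The literacy is in knowing which questions to ask of the output, and that is what system training alone does not teach.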
Building AI literacy should be treated as a specific development priority, timed before go-live. Not a one-off training session. A programme of structured development that builds over the weeks preceding and following the tool launch, and continues to develop as the team gains experience. It should be built around the specific tools the team will use, the specific outputs they will review, and the specific error patterns those tools are known to produce.
Teams that receive this kind of preparation adopt AI tools at meaningfully higher rates and use them more effectively. The investment is not large relative to the total project cost. The impact on adoption and on outcomes is disproportionate to the cost.
Building an AI-ready finance function covers the full capability development picture, including AI literacy alongside data readiness and process design.
The people side of AI adoption is not soft work. It is the work that determines whether the tool produces the return it was bought to produce. Finance teams treated as intelligent professionals with legitimate concerns will engage with AI tools constructively. Finance teams managed with corporate change management scripts will comply and underperform. The choice of approach is yours.
Maebh Collins is a Fellow Chartered Accountant (FCA, ICAEW) with Big 4 training and twenty years of operational experience as a founder and senior finance leader. She writes about AI in finance transformation from the inside out.