AI in Finance
What clients get wrong about AI in accounting
14 April 2026
77% of accountants and bookkeepers have seen an increase in clients using AI tools such as ChatGPT for finance or tax advice, according to research by Dext. Of those clients, 72% are using AI outputs to challenge professional advice, and 68% of accountants have been told directly by clients that AI could replace their services.
These are not fringe experiences. They are the mainstream reality of professional accounting practice in 2026.
The client expectation that AI can replace accounting services is being tested against actual outcomes. The results are straightforward: just 5% of accounting firms surveyed by Dext have never had to correct AI-generated mistakes. Of the 95% that have corrected errors, 93% estimate they spend 10 hours a month doing so.
The tools are useful. They are not yet good enough to do what clients think they can do. And the firms that are not being clear about this are creating problems for themselves and their clients.
Where the expectation gap comes from
The expectation that AI reduces junior hours is being driven, in part, by what is visible at the top of the market. The Big Four cut graduate intakes by between 6% and 29% in 2025. The reduction is real and visible, and it is influencing what mid-market and smaller clients expect from their own advisers.
A partner at a mid-tier firm describes a direct example: “We were told that our junior hours were too high, and we weren’t using AI enough. The expectation was that a lot of the junior work would be done by AI. They told us we needed to reduce junior hours, and our overall fees.”
The partner’s assessment of the actual situation is precise: “AI can certainly make junior roles more interesting. It gets rid of some of the really dull tasks, but it complements the junior’s work and makes them more efficient. But it doesn’t replace the role of a junior; the tools just aren’t good enough at present.”
That gap between what clients expect and what the tools actually deliver is the central problem. Firms that do not address it explicitly, either because the conversation is uncomfortable or because they want to appear more AI-advanced than they are, bear the cost of the correction work without being able to explain what they are doing or why.
The 10 hours a month problem
The 93% of error-correcting firms that estimate 10 hours a month fixing AI-generated mistakes are absorbing a significant hidden cost.
Ten hours of a senior professional's time applied to fixing mistakes that the tool made with confidence is not a productivity gain. It is a productivity transfer: from correctable junior errors, which were visible and manageable, to AI errors, which are harder to spot because they arrive as polished, authoritative outputs.
I have seen this pattern in finance functions adopting AI tools. The early productivity gains are real. Reconciliations that took a day now take an hour. But the time saved on routine work gets partially consumed by reviewing outputs that look authoritative but require careful checking. The net gain depends heavily on how disciplined the review process is.
This is the AI hype cycle reality that does not appear in vendor demonstrations. AI tools perform well on clean data, well-structured problems, and tasks they have been specifically trained for. They perform less reliably on edge cases, novel situations, and data that does not fit the training patterns. In professional accounting, those edge cases are frequently the most consequential.
Clients using AI to challenge professional advice
The 72% of AI-using clients who challenge professional advice with AI outputs deserve specific attention.
A client who arrives at an accounting conversation with a ChatGPT-generated position on a tax question is not a difficult client. They are a client who has done research with a tool that is confidently wrong often enough to create a problem. The professional response is not to dismiss the AI output. It is to engage with it: understanding what the tool said, explaining where it is accurate and where it is not, and demonstrating the professional value that comes from understanding the context the tool does not have.
This requires a different kind of client communication than most accounting firms have historically needed. The client with an AI-generated opinion is asking an implicit question: can you add value beyond what this tool provides? For a professional with genuine expertise, the answer is yes. But it needs to be demonstrated, not assumed.
Training is moving in this direction. Several firms are now explicitly developing critical thinking and professional scepticism as core skills in an AI-enabled environment, alongside AI tool proficiency. The two are not in tension. They are complementary requirements.
What realistic adoption looks like
The firms navigating this well are not the ones avoiding AI tools, nor the ones deploying them without governance. They are the ones treating AI as a capability that requires the same discipline as any other professional tool.
The mid-tier firm from the earlier account rolled out Copilot training across the business. They found genuine efficiencies: reviewing board minutes that run to hundreds of pages is now faster. They identified risks: hallucination in specific use cases, junior staff using outputs without checking them. They put safeguards in place and added training on ethical AI use.
“We are finding that there are a lot of efficiencies and better-quality work,” the partner says. But the efficiency gains come with active governance, not passive deployment.
The firms that are struggling are caught between two failure modes. The first: not adopting AI at all, losing genuine efficiency gains and appearing behind the market to clients who expect AI use. The second: deploying AI to satisfy client expectations about reduced junior hours, without the governance to catch the errors that the tools reliably produce.
Professional ethics obligations apply equally in a practice context: the duty to verify outputs, maintain confidentiality of client data, and exercise independent judgment does not change because the tool is sophisticated.
The client expectation gap is real and is not closing on its own. The firms that address it explicitly will use AI to strengthen their position rather than erode it: being clear about what AI tools do and do not replace in professional accounting work, demonstrating the value of human judgment where the tools fail, and building the governance to ensure that efficiency gains do not come with hidden error-correction costs.
Research cited: Dext survey of accountants and bookkeepers on AI use in practice, 2026.