AI in Finance: What I Got Wrong (And What I Would Do Differently)
20 April 2026
I have been writing about AI in finance for three years. In that time I have worked with finance functions at different stages of readiness, reviewed more AI vendor demos than I can accurately count, and watched the technology move from interesting to genuinely consequential.
I have also been wrong about some things. Not about the fundamentals: I still believe AI adoption in finance is the most significant structural change to the function since the first ERP systems. But about the specifics. The pace, the blockers, the dynamics that determine whether a deployment succeeds or stalls.
Writing only about what you got right is not useful to anyone. The more useful exercise is to be specific about where the initial assessments were off and what those errors imply for people doing the work now.
I Underestimated How Fast It Would Move
Two years ago, my honest assessment of most AI in finance use cases was this: the technology was not production-ready at the scale vendors were claiming. The demos were impressive. The production deployments, particularly at scale and in organisations without unusually clean data, were more limited.
That was true at the time. The error was in how long I expected it to remain true.
The reconciliation and document processing capabilities in particular moved from “impressive demo, limited production” to “genuinely deployable with the right foundations” in roughly 18 months. That is faster than I expected. The gap between what vendors were showing and what was actually working in production at mid-market scale closed more quickly than my initial projections suggested it would.
The practical implication matters. If you were waiting for the technology to mature before taking readiness seriously, you are already behind. The readiness work (data governance, process documentation, team capability development) takes 12 to 18 months to do properly. If the technology was not ready when you started that timeline, it will be by the time you finish. If you start the readiness work only after the technology is clearly ready, you are 18 months behind the organisations that started earlier.
I should have made this point more forcefully two years ago: readiness work is not contingent on the technology being ready. It is contingent on the technology being plausibly on its way.
The AI readiness assessment framework exists precisely because the foundations need to be in place before you select a tool, not after.
I Overestimated How Much Data Quality Would Slow Things Down
The data quality problem in finance is real. I have written about it extensively and I stand by the importance of addressing it. Data quality is the foundation that determines whether AI works. That position has not changed.
But I expected it to be a larger blocker in practice than it has turned out to be in a number of the deployments I have observed. Some of the better AI tools have more tolerance for data imperfection than I initially credited. Not infinite tolerance. Not a reason to skip the foundation work. But more practical resilience than I expected to the kind of messy, real-world data that finance functions actually have.
Specifically: the best-performing document processing tools in 2024 and 2025 showed better handling of inconsistent supplier invoice formats, partial OCR data, and non-standard chart of accounts structures than the 2022 and 2023 tools that informed my initial assessments. The tolerance for real-world imperfection improved.
This does not change the recommendation to fix data quality before deploying AI. It changes the urgency gradient slightly. The worst-case scenario I was modelling, that data quality problems would stall most mid-market AI deployments entirely, has not materialised in the way I expected.
What has materialised instead is a more nuanced picture: tools deployed on poor data foundations still work poorly and create more problems than they solve, but what counts as a “poor” foundation has shifted slightly, because the tools can now handle somewhat messier data than their predecessors could. The work is still necessary. The consequence of skipping it is still significant. But the binary “it will fail” assessment I was using was too stark.
The honest lesson: calibrate expected blockers against current tool capabilities, not the capabilities from two years ago. The technology is moving and the assessment needs to move with it.
I Underestimated the Change Management Challenge at the Senior Level
This is the one I got most wrong in terms of where to focus attention.
I was thinking clearly about frontline team resistance, which is real and well-documented and worth significant change management effort. The concerns of the AP clerk or the management accountant about what AI means for their role are legitimate, they are frequently unstated, and they require direct and honest handling. I wrote about this and I believe that analysis was correct.
What I did not pay enough attention to was resistance at the senior level. Finance directors and CFOs who have built their professional authority partly on deep knowledge of systems and processes have a genuine stake in those systems and processes remaining complex and human-intensive. This is not always a conscious position. It rarely presents itself as “I do not want this technology to succeed.” It shows up differently.
It shows up as excessive caution about data security: legitimate in some cases, but disproportionate in others. It shows up as concerns about audit defensibility that are framed as governance questions but function as delay mechanisms. It shows up most often as the tendency to move a successful pilot into an indefinite review phase rather than into production. The pilot worked. The results met the criteria. But the next step keeps getting deferred.
This is change management. It is just change management at a different level of the organisation than the one I was focused on. The dynamics are different because the stakes are different: this is not job security anxiety, it is authority and identity. But the underlying mechanism is the same. The change threatens something the person has built their value on, and the response is resistance that does not identify itself as resistance.
The practical implication is that change management in AI finance adoption needs to explicitly address the senior level, not just the frontline. The most important person to involve early is not always the most enthusiastic one. Sometimes it is the most senior sceptic, and the involvement needs to be substantive, not performative.
I Got the Sequencing Wrong in One Engagement
I want to be specific about a mistake in practice, not just in analysis.
In one engagement, the readiness assessment happened after the vendor had already been selected. The organisation had gone through a procurement process, seen a compelling demo, made a commercial decision, and then brought me in to help with implementation. The readiness assessment was scoped as part of implementation planning, not as a precondition for vendor selection.
The readiness assessment revealed data quality problems that had to be addressed before the selected tool would produce reliable outputs. The tool’s performance was highly sensitive to chart of accounts consistency, and the client’s chart of accounts was, in practice, inconsistently applied across three subsidiary entities.
Two months were lost. Not because the problem was unfixable: it was. But fixing it after vendor selection meant renegotiating the implementation timeline, managing a vendor relationship that had been built on an optimistic launch date, and explaining to internal stakeholders why the tool that had been approved and announced was not live on schedule.
The same two months spent on data governance before vendor selection would have produced a better outcome in every dimension. The vendor selection would have been better informed. The implementation timeline would have been realistic. The internal credibility of the project would not have taken a hit before the tool had produced a single output.
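To make “inconsistently applied” concrete, the sketch below shows one form the problem can take: the same account code carrying different names in different subsidiaries. It is purely illustrative, not the method used in the engagement; the entity names, column names, and data are invented, and it assumes ledger extracts already loaded into pandas.

    # Purely illustrative: entities, columns, and data are invented for this sketch.
    # Flags account codes whose names differ across subsidiary entities, one concrete
    # form of an inconsistently applied chart of accounts.
    import pandas as pd

    ledger = pd.DataFrame([
        {"entity": "Sub A", "account_code": "6100", "account_name": "Travel"},
        {"entity": "Sub B", "account_code": "6100", "account_name": "Travel & Subsistence"},
        {"entity": "Sub C", "account_code": "6100", "account_name": "Motor & Travel"},
        {"entity": "Sub A", "account_code": "4000", "account_name": "Sales"},
        {"entity": "Sub B", "account_code": "4000", "account_name": "Sales"},
    ])

    # A code is inconsistent if it carries more than one name across entities.
    name_variants = ledger.groupby("account_code")["account_name"].nunique()
    inconsistent = name_variants[name_variants > 1]
    print("Account codes applied inconsistently:", list(inconsistent.index))

A check like this is cheap to run. The expensive part, as the engagement showed, is agreeing the corrected mapping and applying it across entities before the tool goes live.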
The right sequence is not complicated. Assess. Fix the foundations. Select the tool with the foundations in mind. Pilot with clear criteria. Scale from a position of evidence. The first 90 days framework is built around this sequence because the alternative, which is what happens most often, produces the outcome I described above.
What I Would Tell Myself If I Were Starting Again
Three things. Not principles. Specific actions.
First: start the data governance conversation earlier than feels necessary. The right time to raise data quality is before anyone has started looking at AI tools. It will feel premature. It will seem like scope creep into an IT or operations question that has not been asked yet. Do it anyway. Data governance takes longer than the time available once a vendor selection is underway. There is no version of this where starting earlier is wrong.
Second: involve the most sceptical senior person in the pilot design, not just the most enthusiastic one. The enthusiast will advocate for the tool and help move things forward. The sceptic will ask the questions that the implementation will face in production: the ones about edge cases, exception handling, audit defensibility, and what happens when the tool is wrong. Getting those questions answered in the pilot phase, before they become production problems, is worth more than the smoother pilot you get by working only with people who want it to succeed.
Third: set the success criteria in writing before the pilot starts. Include what you will do if the results are below expectation. The absence of pre-agreed criteria is how a failing pilot becomes a “learning exercise” rather than a decision point. It is how sunk cost takes over from evidence. Writing the criteria and the decision logic before the pilot begins is not pessimism. It is the discipline that makes the results mean something.
Where This Leaves Things
The technology is real. The value is real in the right conditions. The conditions are buildable by organisations prepared to do the foundation work before they get to the interesting part.
The mistakes I have made, and the ones I have observed in organisations going through this, cluster around the same failure modes. Moving too fast on tools and too slow on foundations. Underestimating senior-level dynamics. Skipping the readiness work because the vendor made the tool look ready even when the organisation was not.
These are fixable mistakes. They are also predictable ones. The organisations that understand the patterns in advance are the ones that avoid them.
The AI in Finance Strategy page has the full framework. If you are about to start this process, the starting point is always the same: understand where you are before you decide where you are going. That has not changed, and it is the part I would not take back.
Maebh Collins is a Fellow Chartered Accountant (FCA, ICAEW) with Big 4 training and twenty years of operational experience as a founder and senior finance leader. She writes about AI in finance transformation from the inside out.