AI in Finance

The CFO's Guide to AI Strategy in 2026

15 December 2025

A McKinsey survey published in late 2024 found that 85% of CFOs identified AI as central to their strategy for the coming year. A separate Gartner study found that over 90% of finance leaders said they lacked confidence in their ability to execute on AI. That gap between intent and execution is not a technology problem. It is a strategy problem.

I have built finance functions from scratch, led transformations inside large organisations, and sat through a significant number of vendor pitches over the past two years. The CFOs getting real results from AI are not the ones who moved fastest. They are the ones who were clearest about what they were doing and why.

This post is a framework for that clarity.


The gap is not budget or technology

Most finance functions struggling to execute on AI are not struggling because they lack money or because the technology is not mature enough. They are struggling because they have not answered three basic questions: What specifically are we trying to do? What do we need to have in place first? How will we know if it is working?

Those questions sound obvious. They are surprisingly hard to answer when you are being pitched by vendors, reading analyst reports, and fielding questions from a board that has heard AI mentioned in every earnings call for two years.

The framework below is structured around those three questions.


Part one: assess before you buy

The first step in any coherent AI strategy is an honest assessment of where your finance function actually is, not where you would like it to be.

AI readiness is about process maturity and data quality, not budget. A finance function with a modest technology budget but clean data, coherent processes, and a capable team is more AI-ready than a well-funded function with inconsistent data and processes that have accumulated years of workarounds.

I wrote about this in AI Won’t Fix a Broken Finance Function. The core argument is that AI amplifies what already exists. If your foundation is sound, AI accelerates good outcomes. If it is not, AI accelerates bad ones.

The readiness assessment has three components.

Process maturity. Can you document, at a process level, how your core finance activities actually work? Not the ideal version. The actual version, including the manual steps, the workarounds, the spreadsheets that sit outside the system. If you cannot document it, you cannot automate it meaningfully. The documentation exercise itself usually surfaces problems worth fixing before AI gets involved at all.

Data quality. What is the state of the data your processes run on? Is it consistent across systems? Is it complete? Is it trusted by the people who use it, or do they routinely adjust it before circulating reports? Data quality problems are the most common cause of AI project failure in finance. They are also the most solvable, but only if you face them squarely before you start.

Team capability and adoption. Do your finance team members actually use your existing systems, or do they work around them? A team with low adoption of the ERP implemented four years ago is not going to adopt an AI layer on top of it. This is a change management question as much as a technology question. See the full readiness assessment framework for a structured approach.


Part two: prioritise by value, not by hype

Once you have a clear picture of your readiness, the next step is prioritisation. This is where most AI strategies go wrong. Finance leaders prioritise based on what they have seen in demos or read in vendor materials, rather than based on where the real value is in their specific function.

The most useful prioritisation question is: which finance processes have the highest volume of repetitive decisions?

Repetitive decisions are where AI creates value. Not because repetitive decisions are unimportant, but because they have the highest return on automation and the lowest risk. Judgment calls, novel situations, complex problems: those still need people. The tenth invoice from the same supplier following the same pattern does not.

Apply this lens to your finance function and identify the five highest-volume repetitive decision processes. Typical candidates include: invoice matching and approval, expense coding and categorisation, bank reconciliation, intercompany elimination, and period-end accrual calculations.

Now assess each one against your readiness findings. The right candidates for early AI investment combine high volume, clear process documentation, clean data, and a team ready to change how they work. That intersection is usually narrower than people expect. That is fine. Narrow and successful beats broad and failed.
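The intersection test above can be made explicit with a simple scoring sheet. The sketch below is illustrative only: the process names, the 1-to-5 scores, and the scoring rule are all hypothetical placeholders, not a prescription. The rule deliberately penalises a weak link, so a process that scores well on average but has one poor criterion drops down the ranking, which matches the "narrow and successful beats broad and failed" logic.

```python
# Sketch: score candidate processes against the four readiness criteria.
# Names, scores (1-5), and the scoring rule are illustrative assumptions.
CRITERIA = ("volume", "documentation", "data_quality", "team_readiness")

candidates = {
    "invoice_matching":    {"volume": 5, "documentation": 4, "data_quality": 4, "team_readiness": 3},
    "expense_coding":      {"volume": 4, "documentation": 3, "data_quality": 2, "team_readiness": 4},
    "bank_reconciliation": {"volume": 3, "documentation": 5, "data_quality": 5, "team_readiness": 2},
}

def priority(scores: dict) -> float:
    """Average the criteria, then scale by the weakest one so a single
    weak link (e.g. poor data) drags the whole score down."""
    avg = sum(scores[c] for c in CRITERIA) / len(CRITERIA)
    return avg * min(scores[c] for c in CRITERIA) / 5

ranked = sorted(candidates, key=lambda p: priority(candidates[p]), reverse=True)
for name in ranked:
    print(f"{name}: {priority(candidates[name]):.2f}")
```

With these hypothetical scores, invoice matching ranks first despite expense coding's higher average adoption score, because expense coding's data quality is the weak link.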


Part three: build the data foundation

Most AI projects fail in the data layer, not the model layer. This is one of the most consistent findings from implementations I have seen, and it is consistently underweighted in early planning.

The reason is simple: the data layer is unglamorous and the problems in it are not visible in a demo. A vendor demonstration uses clean, curated data. Your production environment uses the data you actually have. The gap between those two things is often significant.

The data foundation for finance AI has four components.

Consistency. The same entity, transaction type, or cost category should be represented the same way across all systems that feed your AI processes. If your supplier master has three variations of the same supplier name because different team members entered it differently over the years, an AI matching tool will fail to consolidate them reliably. This is not an AI problem. It is a data governance problem that needs fixing before AI gets involved.
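A first pass at the supplier-master problem does not need an AI tool at all. The sketch below flags likely duplicates using only the Python standard library; the supplier names, the normalisation rules, and the 0.85 similarity threshold are assumptions to be tuned against your own master data.

```python
# Sketch: flag near-duplicate supplier names in a master file before any
# AI matching runs. Names and the 0.85 threshold are assumptions.
from difflib import SequenceMatcher
from itertools import combinations

suppliers = ["Acme Ltd", "ACME Limited", "Acme Ltd.", "Northwind Traders"]

def normalise(name: str) -> str:
    # Lowercase, strip punctuation, collapse a common abbreviation.
    cleaned = "".join(ch for ch in name.lower() if ch.isalnum() or ch == " ")
    return cleaned.replace("limited", "ltd").strip()

def likely_duplicates(names, threshold=0.85):
    pairs = []
    for a, b in combinations(names, 2):
        if SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold:
            pairs.append((a, b))
    return pairs

print(likely_duplicates(suppliers))
```

Here the three variants of the same supplier are flagged as one cluster for a human to merge, while the genuinely distinct name is left alone. That human review step is the governance part: the script surfaces candidates, it does not decide.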

Completeness. Missing data is as damaging as inconsistent data, and often more so. An AI tool making recommendations based on incomplete records will make wrong recommendations with confidence. Identify the completeness gaps in your core datasets before you start.
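Completeness gaps are easy to quantify once you look. The sketch below computes the share of populated values per field; the record layout and the records themselves are hypothetical placeholders standing in for an export from your own systems.

```python
# Sketch: measure completeness per field in a core dataset.
# The record layout and values are illustrative assumptions.
records = [
    {"supplier": "Acme Ltd",  "cost_centre": "CC-100", "vat_number": None},
    {"supplier": "Northwind", "cost_centre": None,     "vat_number": "GB123"},
    {"supplier": "Acme Ltd",  "cost_centre": "CC-100", "vat_number": "GB456"},
]

def completeness(rows):
    """Share of non-missing values per field, from 0.0 to 1.0."""
    fields = rows[0].keys()
    return {f: sum(r[f] is not None for r in rows) / len(rows) for f in fields}

for field, share in completeness(records).items():
    print(f"{field}: {share:.0%} complete")
```

A one-page table of these percentages, run against each core dataset, tells you where an AI tool would be reasoning from gaps before you spend anything on the tool itself.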

Lineage. Can you trace where each data element came from and what transformations it has been through? In a finance context, this matters for audit. An AI recommendation needs to be explainable, and explainability requires knowing the provenance of the data that recommendation was based on.

Governance. Who is responsible for data quality in each domain? What is the process for identifying and resolving data quality issues? Data foundations degrade without active governance. Build the governance structure before you build the AI layer on top of it.


Part four: pilot narrow, measure hard, scale deliberately

The most common mistake in AI pilots is making them too broad. A pilot covering three processes across two systems is not a pilot. It is a mini-implementation. It takes longer, costs more, and produces results that are harder to interpret.

A good pilot is narrow enough to control the variables. One process, clearly defined inputs and outputs, a team of two to four people, a defined measurement period, and agreed success criteria before you start.

Return on investment for finance AI is not “saved time” in the abstract. It is specific numbers: hours per week eliminated from a defined process, error rate reduction from a measured baseline, cycle time reduction in a process you can track.
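The arithmetic behind those specific numbers is simple, and writing it down forces the inputs to be measured rather than assumed. Every figure in the sketch below is a hypothetical placeholder: the hours saved must come from a pre-pilot baseline, the hourly cost from your own loaded labour rates, and the tool cost from the actual contract.

```python
# Sketch: turn pilot measurements into a hard ROI number.
# All inputs are hypothetical placeholders, not benchmarks.
hours_saved_per_week = 12      # measured against the pre-pilot baseline
loaded_hourly_cost = 55.0      # fully loaded cost per finance-team hour
weeks_per_year = 46            # working weeks, net of leave and close freezes
annual_tool_cost = 18_000.0    # licence plus support

annual_benefit = hours_saved_per_week * loaded_hourly_cost * weeks_per_year
net_return = annual_benefit - annual_tool_cost
roi = net_return / annual_tool_cost

print(f"Annual benefit: {annual_benefit:,.0f}")
print(f"Net return:     {net_return:,.0f}")
print(f"ROI:            {roi:.0%}")
```

The point is not the formula, which is trivial, but the discipline: if any input line cannot be filled with a measured number, the pilot has not produced a defensible return.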

If you cannot define the success metric before the pilot, you are not ready to pilot. That sounds strict. It protects you from the most common outcome: a pilot that “went well” without anyone being able to say specifically why or what the return was.

What a good pilot looks like: a single process selected based on the prioritisation criteria above, clean data confirmed before start, success metrics agreed in writing, a measurement period of 60 to 90 days minimum, and a debrief that includes what did not work as well as what did.

What a bad pilot looks like: selected for visibility rather than value, data quality assumed rather than verified, success defined vaguely as “improved efficiency”, measurement period cut short because the board wants a positive result to report, debrief that focuses on the wins and explains away the problems.

The difference between those two things is discipline. It is also the difference between an AI investment that scales and one that stalls after the pilot.


The strategy on one page

For a CFO who needs to articulate an AI strategy in 2026, this is how I would frame it.

Invest in AI in a focused way, starting from where you actually are rather than where you wish you were. Complete a process and data readiness assessment in the first quarter. Select two to three pilot candidates based on volume of repetitive decisions, process maturity, and data quality. Run a 90-day pilot with hard success metrics on the highest-priority candidate. Scale what works, fix what does not, and report back to the board with numbers, not impressions.

That is a strategy. It is not exciting. It will produce results. When you are ready to talk to vendors with that framework in place, the conversation will be very different. See the upcoming post on how to evaluate AI vendors as a CFO for the next step.

Explore the full AI in Finance Strategy for a deeper treatment of where this fits in the broader transformation agenda.


Maebh Collins is a Fellow Chartered Accountant (FCA, ICAEW) with Big 4 training and twenty years of operational experience as a founder and senior finance leader. She writes about AI in finance transformation from the inside out.
