AI in Finance

Building an AI-Ready Finance Function: The Roadmap

23 February 2026

There is a version of AI readiness that treats it as a separate project. You run your finance function as it currently operates, and alongside it you run an AI readiness workstream: assessing data quality, documenting processes, evaluating systems. When the workstream is complete, you introduce the AI tools.

This framing is wrong. Not wrong in a subtle way. Wrong in a way that creates unnecessary work and misses the point entirely.

AI readiness and finance function quality are the same thing. A finance function with clean data, coherent processes, documented controls, and genuine system adoption is AI-ready. A finance function without those things is not ready for AI. It is also not performing as well as it should be, independent of AI entirely.

The work is the same work. Do the finance transformation work properly. AI readiness follows.


Assess your current state accurately

Before any roadmap, you need an accurate picture of where you actually are. Not where the ERP implementation project plan said you would be by now. Not where you would be if the team used the system the way it was designed. Where you actually are.

The AI readiness assessment framework covers this in detail. The five dimensions that matter are: process maturity, data quality, system adoption, team capability, and governance. Score yourself accurately across all five.

Process maturity means: are your core processes documented, consistent, and operating as designed? Or are they operated from institutional memory, with workarounds embedded so deeply that nobody remembers why they exist?

Data quality means: is the data in your systems accurate, complete, and consistently structured? Or are there known issues with duplicates, gaps, and inconsistent coding that everybody works around?

System adoption means: is your ERP being used for what it was implemented to do? Or are significant parts of the finance function still running in spreadsheets alongside the system, because the system never fully replaced the manual process?

Team capability means: does your team have the skills to work with AI tools critically, not just operationally? Can they interpret AI outputs, identify errors, and exercise judgment about when to override?

Governance means: do you have the framework to control AI tools safely? Data access, human review thresholds, error correction, audit trail.

Score each dimension. Be honest. A score of three out of five that reflects reality is more useful than a score of five out of five that reflects aspiration.
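
One lightweight way to keep the assessment honest is to record the scores in a structure that flags every gap explicitly. The sketch below is illustrative only: the 1-to-5 scale comes from the text, but the readiness threshold of 4 and the example scores are my assumptions, not a standard.

```python
# Illustrative readiness scorecard. The five dimensions come from the
# assessment framework above; the threshold of 4 and the example
# scores are assumptions for this sketch, not recommendations.
READINESS_THRESHOLD = 4

scores = {
    "process maturity": 3,
    "data quality": 2,
    "system adoption": 3,
    "team capability": 4,
    "governance": 2,
}

def readiness_gaps(scores, threshold=READINESS_THRESHOLD):
    """Return the dimensions scoring below the threshold, worst first."""
    gaps = {d: s for d, s in scores.items() if s < threshold}
    return sorted(gaps.items(), key=lambda item: item[1])

for dimension, score in readiness_gaps(scores):
    print(f"{dimension}: {score}/5 - below readiness threshold")
```

The point of writing it down this way is that an aspirational five in any dimension simply disappears from the gap list, which is exactly the failure mode the honest scoring is meant to prevent.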


Process documentation and redesign

Start with your five highest-volume processes. For most finance functions these will be: accounts payable, bank reconciliation, month-end close, expense management, and accounts receivable. Document each process as it actually runs, not as it should run.

The gap between those two things is where the problems live.

In practice, process documentation of this kind almost always reveals the same things. Spreadsheets that exist outside the system because the system cannot do something, or because someone did not know the system could do it. Manual steps that were meant to be temporary and became permanent. Approval steps that exist on paper but are routinely bypassed because the approval workflow was never built in the system.

Document it all. Then redesign before you automate.

This sequencing matters. Automating a broken process produces faster broken outcomes. The manual steps in a process that look like inefficiency are sometimes genuine inefficiency. They are also sometimes control steps, workarounds for system limitations, or quality checks that someone introduced after a previous problem. Understand what each step is before you decide whether to eliminate or automate it.

The three-tier automation framework is the right mental model for this phase. Tier one is systematic automation of high-volume, rule-based tasks. Tier two is AI-assisted decision support for judgment-heavy processes. Tier three is human judgment where context, risk, or complexity requires it. Design your processes to operate across all three tiers deliberately, not by accident.
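
To make the deliberate part concrete, the three tiers can be expressed as an explicit routing rule rather than an accident of implementation. The sketch below is a hypothetical illustration: the attribute names (`volume`, `rule_based`, `high_risk`) and the volume threshold are my assumptions, not part of the framework itself.

```python
# Hypothetical tier router for the three-tier automation framework.
# Attribute names and the volume threshold are illustrative assumptions.
def route_task(volume: int, rule_based: bool, high_risk: bool) -> int:
    """Return the automation tier (1, 2, or 3) for a finance task."""
    if high_risk:
        return 3  # tier three: context, risk, or complexity requires human judgment
    if rule_based and volume >= 1000:
        return 1  # tier one: systematic automation of high-volume, rule-based work
    return 2      # tier two: AI-assisted decision support for judgment-heavy work

# Example: high-volume invoice matching routes to tier one.
tier = route_task(volume=50_000, rule_based=True, high_risk=False)
```

Note the order of the checks: risk is evaluated first, so a high-risk task can never fall into tier one on volume alone. That ordering is the "deliberately, not by accident" part.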

Process redesign is not glamorous work. It requires sitting with the team and mapping what actually happens, step by step. It requires asking why the workaround exists and being willing to hear an answer that implicates a previous implementation decision. Do it properly. What you build after this foundation holds. What you build without it does not.


Data quality remediation

The data quality audit comes before any AI implementation. Not during it, not after it. Before.

The work has four stages. First, audit: understand the actual state of your data across the dimensions that matter. Completeness (are all records present?), accuracy (do records reflect reality?), consistency (is the same concept represented the same way across the data set?), and timeliness (is data current?).

Second, prioritise. You will not fix everything before an AI implementation. Fix the data quality issues that will most directly affect the AI tools you are planning to deploy. For AP automation, this means supplier master data: duplicates, inconsistent naming, missing bank details, inactive suppliers not marked as inactive. For forecasting, it means historical transaction data: consistent account coding, no gaps in the time series, no unexplained large adjustments.
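
For supplier master data, the audit stage can start as a simple script that surfaces the known issue categories. This is a minimal sketch under assumed conditions: the field names (`name`, `bank_details`) and the example records are my inventions, and real deduplication needs fuzzier matching than a normalised-name comparison.

```python
# Minimal supplier master data audit sketch. Field names and records
# are illustrative assumptions; real matching needs to be fuzzier.
from collections import Counter

def audit_suppliers(suppliers):
    """Flag likely duplicates (by normalised name) and missing bank details."""
    normalised = [s["name"].strip().lower() for s in suppliers]
    counts = Counter(normalised)
    duplicates = sorted(name for name, n in counts.items() if n > 1)
    missing_bank = sorted(
        s["name"] for s in suppliers if not s.get("bank_details")
    )
    return {"duplicates": duplicates, "missing_bank_details": missing_bank}

suppliers = [
    {"name": "Acme Ltd", "bank_details": "GB00TEST0001"},
    {"name": "ACME LTD ", "bank_details": None},  # likely duplicate, no bank details
    {"name": "Widget Co", "bank_details": ""},    # missing bank details
]
report = audit_suppliers(suppliers)
```

Even a crude check like this turns "we have known issues with supplier data" into a countable list, which is what the prioritisation stage needs.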

Third, remediate. Deduplication, standardisation, governance rules, data validation controls at point of entry. This is not exciting work. It is the work that determines whether any subsequent AI investment returns anything.

Fourth, maintain. Data quality that was remediated and then allowed to deteriorate is worse than data quality that was never addressed, because the AI tools will have been calibrated on clean data and will perform poorly on the degraded data that follows. Build data governance into your ongoing processes. Validation rules in the system. Approval controls for master data changes. Regular quality audits.

The data quality in AI finance post has the full framework for this work. The principle that applies here is plain: AI amplifies the quality of your data. Good data in produces valuable outputs. Poor data in produces confident, fast, wrong outputs. Fix the data before the AI, not after.


System adoption

If your ERP is materially underused, understand why before you add an AI layer.

Underused ERPs are underused for specific, diagnosable reasons. The three most common are: process design failure (the system was implemented to support a process that nobody actually follows), change management failure (the team never properly adopted the new way of working because the implementation focused on go-live, not on sustained adoption), and capability gaps (people are working around the system because they do not know how to use it for specific functions).

Each of these has a different solution. Process design failure requires redesigning the process and configuring the system to support the redesigned version. Change management failure requires a sustained adoption programme. Capability gaps require training and, often, identifying the internal champion who already uses the system well and building their knowledge into the team.

The ERP versus spreadsheets post covers the decision framework for when to push ERP adoption harder and when the spreadsheets reflect a genuine system limitation. Most of the time the answer is adoption, not replacement. But the diagnosis has to come first.

Adding an AI tool to a poorly adopted ERP creates a specific failure mode. The AI tools need to connect to your systems of record. If your system of record is incomplete because half the finance function is still processing in parallel spreadsheets, the AI is working with incomplete data. The automation rate will be lower than projected. The exception rate will be higher. The team will cite the AI's inconsistency as justification for continuing to use the spreadsheets. The AI implementation joins the list of tools the team has that nobody fully uses.

Fix the adoption before you add the AI layer. The sequence matters.


Team capability building

AI literacy for finance teams is not about understanding how the underlying models work. Nobody needs to understand transformer architecture to use an AI reconciliation tool effectively.

What the team needs is a specific, practical set of capabilities. They need to understand what the AI tools can and cannot do: where they are reliable and where they are not. They need to know how to interpret AI outputs critically, not just accept them. They need to recognise the categories of error that AI tools make and understand when an AI output needs to be questioned rather than acted on. They need to maintain their own professional judgment as the final layer, even when the AI is confident.

This last point is the hardest to build, because it runs counter to the efficiency narrative around AI. If the AI is 94% accurate and the human review step adds time, the pressure to reduce human review is real. The finance teams that get this right understand that the 6% error rate is not uniformly distributed: the errors cluster in specific transaction types, edge cases, and novel situations. The human judgment layer is not reviewing the 94%. It is catching the 6%.

Build AI literacy into your team development programme. Not as a one-off training event. As an ongoing capability that is developed, tested, and maintained. Include specific scenarios: here is an AI output that looks correct but is wrong, how would you identify it? Here is an AI output that looks wrong but is correct, how do you evaluate it? Here is a situation where the AI is confident and the answer is ambiguous. What is your process?

The team that uses AI tools well is the team that trusts them appropriately. That means neither over-trusting nor under-trusting. That calibrated trust is built through practice and explicit training, not through exposure alone.


Governance framework

Build the governance framework before you deploy the first AI tool, not after. The AI governance post has the full framework. The elements that must be in place before go-live are: data access controls, human review thresholds, error correction processes, and model change management.

Data access controls define what data each AI tool can access and on what basis. Least-privilege: the tool accesses the minimum data required for its function.

Human review thresholds define at what value or exception type human sign-off is mandatory regardless of AI confidence. Write these down. Review them quarterly in the first year.
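
Written down, a review threshold can be as plain as the rule sketched below. The value of 10,000 and the exception types are placeholders for illustration; the actual thresholds belong in your governance framework, not in this sketch.

```python
# Illustrative human-review rule. The threshold value and the
# exception types are placeholder assumptions, not recommendations.
REVIEW_VALUE_THRESHOLD = 10_000  # currency units; placeholder value
MANDATORY_REVIEW_EXCEPTIONS = {"new_supplier", "bank_detail_change"}

def requires_human_review(value, exception_type, ai_confidence):
    """Human sign-off is mandatory above the value threshold or for
    listed exception types, regardless of AI confidence."""
    del ai_confidence  # deliberately ignored: confidence never waives review
    if value >= REVIEW_VALUE_THRESHOLD:
        return True
    return exception_type in MANDATORY_REVIEW_EXCEPTIONS
```

The design choice worth noticing is that the function accepts the AI's confidence and then ignores it. That is the "regardless of AI confidence" clause made structural: nobody can later wire confidence into the waiver logic without it being visible in review.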

Error correction processes define what happens when the AI is wrong: who identifies it, who corrects it, how the correction is documented, and how the feedback reaches the tool or the team responsible for it.

Model change management defines your validation process when the vendor updates the model. Notification obligation, validation run, rollback procedure.
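
A validation run can be sketched as a regression check against a set of previously human-confirmed cases: the updated model must meet a pass threshold and must not regress against the current model before it goes live. Everything in this sketch, including the 98% pass threshold and the example cases, is an illustrative assumption.

```python
# Illustrative model-update validation run. The golden set, the
# predict functions, and the 98% pass threshold are all assumptions.
PASS_RATE_THRESHOLD = 0.98  # placeholder, not a recommendation

def validation_run(golden_cases, predict):
    """Fraction of human-confirmed (input, expected) cases reproduced."""
    matches = sum(1 for inputs, expected in golden_cases
                  if predict(inputs) == expected)
    return matches / len(golden_cases)

def approve_update(golden_cases, current_predict, updated_predict):
    """Approve only if the update meets the pass threshold and does not
    regress against the current model; otherwise roll back."""
    current = validation_run(golden_cases, current_predict)
    updated = validation_run(golden_cases, updated_predict)
    return updated >= PASS_RATE_THRESHOLD and updated >= current
```

The rollback procedure is then the `False` branch: if the update fails either condition, the current model stays in place and the failure is documented back to the vendor.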

Build it once. Maintain it. The governance framework is not a project deliverable that gets filed after implementation. It is an operational control that requires ongoing attention.


The roadmap in sequence

The full roadmap, in the order it must be executed:

One: assess current state across all five dimensions. Score accurately. Identify the specific gaps.

Two: document your five highest-volume processes as they actually run. Redesign before you automate.

Three: audit and remediate data quality in the areas relevant to your planned AI implementation.

Four: address ERP adoption issues. Fix the process design, change management, or capability gaps that are keeping the team in spreadsheets.

Five: build AI literacy into the team. Calibrated trust, critical interpretation, maintained judgment.

Six: implement the governance framework. Data access, human review thresholds, error correction, model change management.

Seven: pilot AI. Start narrow, measure precisely, use the first 90 days framework to manage the implementation properly.

Not in parallel. In sequence. Each step creates the foundation for the next.

Finance functions that skip the early steps and go directly to AI implementation are the ones that produce the case studies about AI implementations that failed to deliver. The technology was not the problem. The foundation was not there.

Do the foundation work. The AI layer, when it comes, will return what it promises.


Maebh Collins is a Fellow Chartered Accountant (FCA, ICAEW) with Big 4 training and twenty years of operational experience as a founder and senior finance leader. She writes about AI in finance transformation from the inside out.
