AI in Finance

Is your finance function ready for AI? A five-pillar self-assessment

4 April 2026

Every AI vendor has a readiness assessment. It always ends with the same conclusion: you are ready to buy their product. The questions are designed to surface the problems the product addresses. The scoring is calibrated to create urgency rather than accuracy.

This assessment is designed to do something different: to give you an honest starting position, because an honest baseline is what saves money and time. A finance function that discovers it is not ready for AI before it buys an AI tool avoids a failed implementation, a team whose trust in technology has been damaged, and six months of explaining to the board why the production automation rate is 40 points below what the pilot achieved.

The assessment covers five pillars. Score yourself 1 to 4 on each. Add your scores. The total gives you an honest baseline.

1 = Not in place. 2 = Partially in place. 3 = Largely in place. 4 = Fully in place.


Pillar one: data readiness

Data readiness covers the quality, availability, consistency, and integration of the data your finance function produces and relies on. It asks whether your data is accurate enough to trust, structured consistently enough to use across systems, and accessible to the tools that need it.

The diagnostic question: If you pulled the same revenue figure from your ERP, your CRM, and your management accounts for the same period, would all three match?

Score 4: Core financial data is consistent across systems. Reconciliation between systems is automated or near-automated. Key metrics are defined uniformly and produce the same result regardless of who runs the report. Data quality issues are identified, logged, and owned.

Score 3: Data is largely consistent, with documented exceptions. Most reconciliations run reliably. There are known quality issues that are tracked and being remediated.

Score 2: You have a general sense of data quality from day-to-day experience. There are known problems in specific areas: the supplier master has duplicates, the chart of accounts has legacy codes, some cost centre names are inconsistent post-acquisition. These have not been formally assessed and not all have been remediated.

Score 1: Data exists in multiple systems with no single source of truth. Reconciliation between systems is a manual monthly task. Key metrics are defined differently depending on who produced the report.

This is the dimension where self-assessment error is most common. Finance teams tend to score themselves higher here than a formal assessment would confirm. The issues visible in daily work are a fraction of the total. AI amplifies whatever data quality you have, for better and for worse.
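For teams that want to run the diagnostic rather than estimate it, a minimal sketch of the cross-system check follows. The source names, figures, and tolerance are hypothetical; substitute exports of the same figure for the same period from your own ERP, CRM, and management accounts.

    # Illustrative cross-system consistency check for the pillar-one diagnostic.
    # The source names, figures, and tolerance below are hypothetical.

    from decimal import Decimal

    # Revenue for the same period as reported by each system (hypothetical exports).
    revenue_by_source = {
        "erp": Decimal("1204500.00"),
        "crm": Decimal("1198200.00"),
        "management_accounts": Decimal("1204500.00"),
    }

    # Zero tolerance is the ideal; in practice you might allow rounding noise.
    TOLERANCE = Decimal("0.00")

    baseline = revenue_by_source["erp"]
    for source, figure in revenue_by_source.items():
        variance = figure - baseline
        status = "match" if abs(variance) <= TOLERANCE else f"variance {variance:+,}"
        print(f"{source:>20}: {figure:>14,}  ({status})")

If all three figures match within tolerance, you have evidence for a 3 or 4. If the script surfaces variances nobody can explain, that is your answer too.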


Pillar two: process maturity

Process maturity covers whether your finance function’s core processes are standardised, documented, and stable enough to build on. A process that is different every month depending on who is running it cannot be automated. A process that exists only in someone’s memory cannot be handed over, improved, or audited.

The diagnostic question: If your most experienced finance team member left tomorrow, could the month-end close still run on time?

Score 4: Core processes are documented to a level that allows someone new to follow them. The close runs consistently regardless of who is in the team. Manual workarounds are identified as exceptions, not embedded as standard practice. Documentation is reviewed and updated at least annually.

Score 3: Most critical processes are documented and largely stable. Some workarounds exist but are logged. The close runs reliably most months.

Score 2: Some processes are documented. Coverage is patchy. Documentation exists for some steps but not all, is not current, or lives in a format nobody uses in practice.

Score 1: Key processes are undocumented, or documentation exists but does not reflect what actually happens. Close timelines extend when specific people are absent. Process knowledge lives with individuals.

The reason this pillar matters for AI: AI tools are configured against documented processes. If those documented processes do not exist, the configuration work creates them from scratch during implementation, at vendor consulting rates. That cost and delay are avoidable.


Pillar three: technology foundations

Technology foundations cover the ERP infrastructure, cloud readiness, and API capability that AI tools connect to, draw data from, and write outputs back into. If the ERP is old, poorly configured, or unable to expose data through an API, the integration will fail or require expensive custom development.

The diagnostic question: Do you know which version of your ERP you are running, and when it was last meaningfully updated?

Score 4: ERP is current or recently updated, cloud-hosted or cloud-capable, and can expose data to external tools without manual intervention. API integration capability exists and has been tested. Someone in the finance team can specify what data the ERP can expose and what it cannot.

Score 3: ERP is on a reasonably current version. Data can be extracted reliably, though some manual steps remain. Basic integration capability exists.

Score 2: System adoption is incomplete, with between 70% and 90% of transactions going through the ERP. There are known bypass patterns: a team that always submits expenses outside the system, a supplier category processed manually because the ERP cannot handle the invoice format. These are tolerated rather than addressed.

Score 1: ERP is on a legacy version with significant customisations that complicate upgrades. Data extraction requires manual exports. No API integration capability exists or has been tested.

Low system adoption is the problem upstream of data quality: systems that are not consistently used do not have consistent data.
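One way to turn "API integration capability exists and has been tested" into evidence is a smoke test along the lines of the sketch below. The endpoint, parameters, and authentication are hypothetical placeholders, not any real ERP's API; replace them with whatever your ERP's documentation actually specifies.

    # Illustrative smoke test for the pillar-three diagnostic: can the ERP
    # expose data to an external tool without manual intervention?
    # The endpoint, auth scheme, and field names are hypothetical.

    import requests

    ERP_API_BASE = "https://erp.example.com/api/v1"   # hypothetical base URL
    API_TOKEN = "replace-with-a-real-token"           # hypothetical credential

    response = requests.get(
        f"{ERP_API_BASE}/trial-balance",
        params={"period": "2026-03"},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()

    rows = response.json()
    print(f"Retrieved {len(rows)} trial balance rows for 2026-03 via API.")

If nobody in or around the finance team can fill in the two placeholder values, that is itself a score signal.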


Pillar four: people and skills

People readiness covers whether your finance team has the data literacy, analytical capability, and change readiness to work effectively with AI tools. The technology can be perfect and still fail at this layer. Finance teams that have been through failed technology implementations carry scepticism that is rational and earned. Addressing it requires more than a training session.

The diagnostic question: Could you name the person in your finance team who would champion an AI pilot, and would their colleagues trust them to do it?

Score 4: At least one person in the finance team has genuine analytical curiosity and the confidence to experiment. The team understands why AI is being considered and has been consulted on it. There is a named internal sponsor for technology change with credibility among the team.

Score 3: Some members of the team are actively engaged with AI tools. There is a clear potential champion. Awareness is present, if uneven.

Score 2: Engagement is limited to one or two individuals. Others are waiting to see whether it is useful. There is no identified internal sponsor.

Score 1: The team has low confidence with data outside familiar reports. Previous technology changes were imposed rather than co-developed. There is no internal sponsor for change with credibility among the team.

A score of 1 here is not a reason to avoid AI adoption. It is a reason to invest three months in capability building before tool deployment. Deploying tools into a team that does not understand them creates resistance that is far harder to address after the fact.


Pillar five: governance and controls

Governance readiness covers the strength of your internal controls framework, the quality of your audit trail capability, and your awareness of the regulatory obligations that apply to AI use in finance. AI does not reduce the need for controls: it changes where the control points are. An AI system that processes journal entries needs the same segregation of duties, approval workflows, and exception reporting as a manual process.

The diagnostic question: If an AI system produced an incorrect journal entry that was posted and paid, how quickly would your current controls catch it, and who would be accountable?

Score 4: The internal controls over financial reporting (ICFR) framework is documented, tested, and current. Audit trail capability is robust. You have a policy governing AI use in finance processes, and at least one person understands the EU AI Act obligations that apply from August 2026.

Score 3: Controls are documented and largely applied consistently. Audit trails are adequate. Awareness of regulatory obligations exists but formal mapping has not been completed.

Score 2: Some governance elements exist: informal data access policies, default review thresholds. There is no documentation that would satisfy an auditor, and no defined escalation path for AI errors.

Score 1: Controls are informal or inconsistently applied. Audit trails are incomplete. There is no policy governing AI use in finance processes, and no awareness of EU AI Act obligations.
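As an illustration of what a control point looks like when the preparer is an AI system, the sketch below applies a segregation-of-duties check and a review threshold to a batch of journal entries. The entry structure, threshold, and identifiers are hypothetical, not a prescription for any particular ERP.

    # Illustrative exception report for AI-posted journal entries, applying the
    # same segregation-of-duties logic you would apply to a manual process.
    # The entry structure, threshold, and user IDs are hypothetical.

    REVIEW_THRESHOLD = 50_000  # hypothetical materiality threshold

    journal_entries = [
        {"id": "JE-1041", "amount": 12_400, "prepared_by": "ai-agent", "approved_by": "jsmith"},
        {"id": "JE-1042", "amount": 87_000, "prepared_by": "ai-agent", "approved_by": "jsmith"},
        {"id": "JE-1043", "amount": 3_150,  "prepared_by": "ai-agent", "approved_by": "ai-agent"},
    ]

    for entry in journal_entries:
        exceptions = []
        if entry["prepared_by"] == entry["approved_by"]:
            exceptions.append("segregation of duties: preparer approved own entry")
        if entry["amount"] >= REVIEW_THRESHOLD:
            exceptions.append(f"amount {entry['amount']:,} exceeds review threshold")
        if exceptions:
            print(f"{entry['id']}: " + "; ".join(exceptions))

The point is not the code. It is that someone must own the threshold, the escalation path, and the accountability for what the report catches.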

The EU AI Act (Regulation 2024/1689) has obligations that apply to finance functions from August 2026. Most mid-market finance teams have not begun the mapping exercise required. Chapter 5 of the AI-Ready Finance guide covers what finance leaders need to know.


Your score and what it means

Add your five pillar scores. The total runs from 5 to 20.

17 to 20: Ready to pilot.

Your foundations are strong enough to support a controlled AI pilot in a well-defined, high-value process. Start where data quality is highest and process documentation is clearest. Measure automation rate and error rate in production. Build from evidence.

12 to 16: Specific gaps to address before piloting.

You are close. Identify your lowest-scoring pillar and address it first. This is not a delay: it is the faster route to AI that actually delivers. A finance function that spends eight weeks improving data quality before it pilots typically achieves its automation targets in production. One that skips that work spends six months post-go-live trying to close the gap between pilot and production performance.

8 to 11: Foundation building is the priority.

AI is not the right investment for your finance function right now. Not because AI is wrong for finance. Because the foundations that determine whether AI delivers are not in place. Deploying AI into this environment produces an expensive, confidence-damaging failure that sets back adoption by two years.

The right priority is data quality remediation, process documentation, and system adoption discipline. These investments pay back independently of AI. A finance function with documented processes, clean data, and high system adoption is a better finance function regardless of whether AI is layered on top.

5 to 7: Start with diagnosis.

Before deciding where to invest, you need a clear picture of where the gaps actually are. The five diagnostic questions above give you starting points. Working through them honestly with your finance team takes half a day and produces a more accurate picture than most vendor assessments provide in a week.
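For completeness, the band arithmetic above reduces to a few lines of code. This is a sketch of the quick assessment only, with hypothetical example scores; the full scorecard in the guide uses 20 statements and an 80-point scale.

    # Illustrative scoring helper for the quick assessment: five pillar scores,
    # each 1 to 4, summed and mapped to the bands described above.

    def readiness_band(pillar_scores: dict[str, int]) -> tuple[int, str]:
        if len(pillar_scores) != 5 or not all(1 <= s <= 4 for s in pillar_scores.values()):
            raise ValueError("Expect five pillar scores, each between 1 and 4.")
        total = sum(pillar_scores.values())
        if total >= 17:
            band = "Ready to pilot"
        elif total >= 12:
            band = "Specific gaps to address before piloting"
        elif total >= 8:
            band = "Foundation building is the priority"
        else:
            band = "Start with diagnosis"
        return total, band

    # Hypothetical example scores:
    scores = {"data": 3, "process": 2, "technology": 3, "people": 2, "governance": 2}
    total, band = readiness_band(scores)
    print(f"Total {total}/20: {band}")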


The full assessment

This quick version gives you a directional read. The full AI Readiness Self-Assessment Scorecard, which appears in AI-Ready Finance: The Practitioner’s Guide for UK and European Mid-Market Finance Teams, covers 20 statements across the five pillars, scored out of 80. It is designed to be completed with your finance team rather than by you alone, and to produce a pillar-by-pillar breakdown that tells you exactly where to focus.

The guide is free. It covers the full five-pillar framework, the use cases that actually work in mid-market finance, the governance obligations finance leaders cannot afford to miss in 2026, and a 90-day readiness action plan. Download the AI-Ready Finance guide here.


What to do with your score

The assessment is a diagnostic tool, not a verdict. A score of 9 today does not mean a score of 9 in six months. Every pillar is improvable with focused effort.

The most common mistake after a readiness assessment is treating the score as discouraging. A score of 10 means you have specific pillars below 3. Each of those pillars represents a defined, addressable problem. That is useful information: a list of things to fix, not a reason to delay.

If you want help interpreting your scores and building a remediation plan, or if you want a more rigorous version of this assessment conducted against your actual data and processes rather than self-assessment, that is work I can support directly.

Work With Me | Get in Touch


Maebh Collins is a Fellow Chartered Accountant (FCA, ICAEW) with Big 4 training and twenty years of operational experience as a founder and senior finance leader. She writes about AI in finance transformation from the inside out.