AI in Finance
Is Your Finance Function Ready for AI? A Self-Assessment
2 February 2026
Every AI vendor has a readiness assessment. It always ends with the same conclusion: you are ready to buy their product. The questions are designed to surface the problems the product addresses, and the scoring is calibrated to create urgency rather than accuracy.
This assessment is designed to do something different. It is designed to tell you the truth about where your finance function actually is, because the truth is what saves you money and time. A finance function that discovers it is not ready for AI before it buys an AI tool saves itself a failed implementation, a damaged relationship between the team and its technology, and six months of explaining to leadership why the automation rate is 40 points below the pilot.
The assessment covers five dimensions. Score yourself 1, 2, or 3 on each. The total gives you an honest starting position.
Dimension one: process documentation
The question: can you document your five most manual finance processes at a task level?
Not at a summary level. At the level of individual tasks: who does what, in what system, with what input, producing what output, passing to whom. A documented process at this level is specific enough for an AI tool to be configured against it. A summary-level description is not.
Score 3: Your five most manual processes are documented at task level, version-controlled, and reviewed when the process changes. The documentation exists somewhere accessible to the team, not in someone’s head or in a file last updated 18 months ago.
Score 2: Some processes are documented. Coverage is patchy. Documentation exists for some steps but not all, is not current, or lives in a format nobody uses in practice.
Score 1: Process knowledge lives primarily with individuals. If the person who does the task were unavailable for a month, you would need to reconstruct the process from scratch. Documentation, where it exists, does not reflect how things actually work.
The reason this dimension matters for AI: AI tools are configured against documented processes. The configuration work requires you to specify inputs, outputs, decision rules, and exception paths. If these do not exist in documented form, the configuration work creates them from scratch, during the implementation, at vendor consulting rates. That cost and delay are avoidable.
The finance transformation without losing the team post covers why process documentation is also a change management tool, not just a technical input.
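To make "task level" concrete, here is a minimal sketch of one documented task expressed as structured data. The field names and the invoice-matching example are illustrative assumptions, not a standard; what matters is that every field in this record is something an AI tool's configuration will ask you for.

```python
from dataclasses import dataclass

# A minimal sketch of task-level process documentation as structured data.
# Field names are illustrative, not a standard; adapt to your own template.

@dataclass
class ProcessTask:
    step: int            # position in the process
    owner: str           # who performs the task
    system: str          # where it is performed
    inputs: str          # what the task consumes
    outputs: str         # what the task produces
    handoff_to: str      # who receives the output
    decision_rule: str   # how choices and exceptions are handled

# One step of a hypothetical supplier invoice process, documented at task level.
match_invoice = ProcessTask(
    step=3,
    owner="AP analyst",
    system="ERP - accounts payable module",
    inputs="Scanned invoice plus open purchase orders",
    outputs="Matched invoice ready for approval",
    handoff_to="AP manager (approval queue)",
    decision_rule="Auto-match within 2% tolerance; otherwise route to exceptions",
)
```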
Dimension two: data quality
The question: across your top three data domains, how consistently does your data meet the five quality dimensions?
The five dimensions are completeness (no missing values in critical fields), consistency (same thing represented the same way across systems), accuracy (data reflects reality), timeliness (data is current enough to be useful), and lineage (you can trace where data comes from). The data quality post covers each in detail.
Score 3: You have done a data quality assessment of your highest-priority AI candidate data domains in the last 12 months. Known issues have been remediated. You have governance rules that prevent new quality problems from being created at source. Your data error rate in these domains is below 5%.
Score 2: You have a general sense of your data quality based on day-to-day experience. There are known problems in specific areas: the supplier master has duplicates, the chart of accounts has legacy codes, some cost centre names are inconsistent post-acquisition. These have not been formally assessed and not all have been remediated. Your estimate of your own data quality may not be reliable.
Score 1: Data quality is unknown or known to be problematic. You regularly encounter data issues in your daily work: invoices that cannot be matched because of naming inconsistencies, reconciliation variances that trace back to coding errors, reports that require manual adjustment before they are usable. You have not done a systematic assessment of the underlying causes.
This is the dimension where the most common self-assessment error occurs. Finance teams tend to score themselves higher here than a formal assessment would confirm. The issues visible in daily work are only a fraction of the total. The AI won’t fix a broken finance function post is directly relevant here: AI amplifies data quality, for better and for worse.
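For illustration, here is a minimal sketch of the kind of check a formal assessment runs, using pandas and an invented four-row supplier master. The data, the field names, and the crude duplicate rule are assumptions for the example; a real assessment covers whole domains and uses fuzzier matching.

```python
import pandas as pd

# Illustrative supplier master with the kinds of issues described above:
# a missing VAT number (completeness) and one supplier entered under two
# spellings (consistency). Data and field names are invented.
suppliers = pd.DataFrame({
    "supplier_name": ["Acme Ltd", "ACME Limited", "Bolt & Co", "Crane plc"],
    "vat_number":    ["GB123456789", "GB123456789", None, "GB987654321"],
})

# Completeness: records missing a critical field.
missing_vat = suppliers["vat_number"].isna()

# Consistency: records sharing a VAT number under different names
# (a crude duplicate check; real matching is fuzzier than this).
duplicated_vat = (
    suppliers.duplicated(subset="vat_number", keep=False)
    & suppliers["vat_number"].notna()
)

error_rate = (missing_vat | duplicated_vat).mean()
print(f"Error rate: {error_rate:.0%}")  # 75% here; the Score 3 bar is below 5%
```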
Dimension three: system adoption
The question: what percentage of finance transactions are processed through your systems without manual bypass?
Manual bypass means a transaction that should go through the system but does not. POs raised after the invoice arrives. Journal entries posted directly, skipping the defined workflow. Payments processed outside the AP system. Expense claims submitted by email and posted manually. Every bypass is a gap in your data completeness and a gap in your controls.
Score 3: Over 90% of finance transactions are processed through defined system workflows without manual bypass. Bypasses that do occur are exceptions with documented justification. Your system adoption is high enough that your data reliably reflects what is happening in the business.
Score 2: Between 70% and 90% of transactions go through the system. There are known bypass patterns: a team that always submits expenses outside the system, a supplier category always processed manually because they send invoices in a format the system cannot handle, a recurring journal entered directly because the automation was never set up. These are known and tolerated rather than addressed.
Score 1: Below 70% system adoption. Significant transaction volume is processed manually or outside defined workflows. This is often the case in finance functions that have outgrown their systems, or where the team has worked around system limitations long enough that the workarounds have become the process.
Low system adoption is the predecessor problem to data quality issues. Systems that are not consistently used do not have consistent data. The ERP versus spreadsheets post covers the relationship between system discipline and data reliability.
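The adoption percentage itself is simple arithmetic once you can count the bypasses. A minimal sketch, with invented categories and counts:

```python
# A minimal sketch of the adoption calculation. The categories and counts
# are invented; in practice each bypass type comes from a different source
# (retrospective PO flags, journal audit logs, off-system payment records).
transactions_total = 12_400          # all finance transactions in the period
bypasses = {
    "retrospective_pos": 610,        # PO raised after the invoice arrived
    "direct_journals": 230,          # journals outside the defined workflow
    "off_system_payments": 85,       # payments outside the AP system
    "emailed_expenses": 140,         # expense claims posted manually
}

adoption_rate = 1 - sum(bypasses.values()) / transactions_total
print(f"System adoption: {adoption_rate:.1%}")  # 91.4% -> Score 3 territory
```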
Dimension four: team capability
The question: does your finance team understand how AI tools work at a conceptual level, and have they used AI tools in a work context?
This dimension is not about technical expertise. Nobody on your finance team needs to understand how a transformer model works to use AI tools well. The capability that matters is: does the team understand what AI can do, what it cannot do, when to trust its outputs and when to question them, and how to work alongside it rather than defer to it uncritically or resist it reflexively.
Score 3: Most of the finance team uses AI tools as part of their normal work, even informally. There is active curiosity about AI applications in finance. People understand that AI makes errors and know what to do when they encounter one. There is no significant resistance to AI-assisted work, though there may be healthy scepticism about specific tools.
Score 2: Some members of the team are engaged with AI tools. Others are not. Awareness is uneven. There may be one or two people driving exploration and a larger group waiting to see whether it is useful. Conceptual understanding is present but not consistent across the team.
Score 1: AI tools are not used in the team’s work. Awareness is low. There is either no particular interest or active resistance. The team has not had training or exposure to AI tools in a finance context. You would be introducing both the technology and the concept simultaneously, which significantly increases the change management challenge.
A team at score 1 is not a reason to avoid AI adoption. It is a reason to invest in capability building before tool deployment. Three months of structured exposure, use of AI tools in low-stakes contexts, and open conversation about the technology creates the foundation for effective adoption. Deploying tools into a team that does not understand them creates resistance that is far harder to address after the fact.
Dimension five: governance readiness
The question: do you have defined data access controls, human review policies, and error escalation paths for AI tools?
This is the governance dimension. The AI governance post covers the full framework. For this assessment, the question is whether the minimum viable governance infrastructure exists.
Score 3: You have a documented AI use policy for the finance function, a data access register listing what AI tools have access to which systems and data, defined human review thresholds, and a documented escalation path for AI errors. These documents exist, are current, and are known to the team.
Score 2: Some governance elements exist. You have thought about data access and have informal policies, but they are not documented in a way that would satisfy an auditor or be consistently applied without individual judgment. You have human review thresholds but they are defaults rather than decisions. You have not documented what happens when an AI tool makes an error.
Score 1: No governance framework exists for AI in the finance function. Tools may be in use without formal data access documentation. Decisions about what to review and what to auto-approve are made individually at the point of decision. There is no defined process for handling AI errors.
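As an illustration of what "minimum viable" looks like, here is a sketch of one data access register entry and a review rule expressed as structured data. The tool name, fields, and threshold are invented for the example; the real register is a policy document the team can read, not code.

```python
# A minimal sketch of a data access register as structured data. Tool name,
# fields, and thresholds are invented for illustration.
access_register = [
    {
        "tool": "InvoiceBot",                      # hypothetical AP matching tool
        "systems": ["ERP - AP module"],
        "data": ["supplier master", "open POs", "invoice images"],
        "review_threshold": "auto-approve matches under 1,000 GBP",
        "error_escalation": "AP manager, then finance systems lead",
    },
]

def requires_human_review(amount_gbp: float, threshold_gbp: float = 1_000) -> bool:
    """Hypothetical review rule: anything at or above the threshold is reviewed.
    The point is that the threshold is a documented decision, not a default."""
    return amount_gbp >= threshold_gbp
```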
Your score and what it means
Add your five dimension scores. The total runs from 5 to 15.
13 to 15: Ready to pilot.
Your foundations are strong enough to support an AI pilot in a defined, high-value process. Start with the process where your data quality is highest and your documentation is clearest. Run a controlled pilot, measure automation rate and error rate in production, and build from evidence rather than assumption. The CFO guide to AI strategy covers how to structure that pilot measurement framework.
9 to 12: One or two quarters of foundation work.
You are close but not there. Identify your lowest-scoring dimension and start there. This is not a delay: it is the faster route to AI that actually delivers. A finance function that spends two months improving data quality before an AI pilot generally goes on to achieve its automation targets in production. One that skips that work typically spends six months post-go-live trying to close the gap between pilot and production performance.
Prioritise the lowest score first. If it is data quality, run the data quality audit and remediate. If it is process documentation, spend a month documenting the five highest-priority processes at task level. If it is governance, build the minimum governance documents described under dimension five. These are each finite bodies of work with defined end states.
5 to 8: Foundation building is the priority.
AI is not the right investment for your finance function right now. Not because AI is wrong for finance. Because the foundations that determine whether AI delivers are not in place. Deploying AI into this environment will produce an expensive, confidence-damaging failure that sets back adoption by two years.
The right priority is process documentation, data quality remediation, and system adoption discipline. These investments pay back independently of AI. A finance function with documented processes, clean data, and high system adoption is a better finance function regardless of whether AI is layered on top of it. The building a finance function from scratch post covers what this foundation work looks like in practice.
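For completeness, here is the scoring logic of this assessment as a minimal sketch. The example scores are invented and happen to land in the lowest band; the band boundaries are exactly those described above.

```python
# A minimal sketch of the scoring logic. Dimension names are from this post;
# the example scores are invented.
scores = {
    "process_documentation": 2,
    "data_quality": 1,
    "system_adoption": 2,
    "team_capability": 1,
    "governance": 1,
}

total = sum(scores.values())  # 7 in this example

if total >= 13:
    band = "Ready to pilot"
elif total >= 9:
    band = "One or two quarters of foundation work"
else:
    band = "Foundation building is the priority"

print(f"Total {total}/15: {band}")  # Total 7/15: Foundation building is the priority
```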
What to do with your score
The assessment is a diagnostic tool, not a verdict. A score of 8 today does not mean a score of 8 in six months. Every dimension is improvable with focused effort.
The most common mistake after a readiness assessment is treating the score as discouraging. A score of 7 means at least four of your five dimensions sit below 3. Each of those dimensions represents a specific, addressable problem. That is useful information: a defined list of things to fix. It is considerably more valuable than the vendor's readiness assessment that told you that you were ready to buy.
If you want help interpreting your scores and building the remediation plan, or if you want a more rigorous version of this assessment conducted against your actual data and processes rather than self-assessment, that is something I can support directly. See Work With Me for how to get in touch.
For the full framework within which this assessment sits, see AI in Finance Strategy.
Maebh Collins is a Fellow Chartered Accountant (FCA, ICAEW) with Big 4 training and twenty years of operational experience as a founder and senior finance leader. She writes about AI in finance transformation from the inside out.