AI in Finance
The ethical bypass problem: how accountants are using AI without thinking it through
14 April 2026
The tool does something useful. You use it again. Then again. At some point, the outputs stop being checked the way the first ones were. That is when the ethical problems begin.
Dr Giles Cuthbert, an AI and ethics specialist who contributed to the Consultative Committee of Accountancy Bodies (CCAB) podcast on this topic, names the pattern directly: “There’s this view that somehow we can have a sort of ethical bypass when we’re looking at AI.” The bypass is real. It is also a professional conduct issue, not a risk management one.
I have seen this in finance teams adopting AI tools. Cautious early use, with careful review. Successful outputs build confidence. Scrutiny gradually reduces. At some point, comfortable reliance replaces professional scepticism. The tool is trusted because it has worked before, not because this particular output has been verified.
The fundamental principles do not have an AI exception
The CCAB Ethics Group, chaired by Professor Susan Smith, is developing a formal Statement to the Profession on ethical AI use, along with a series of case studies. The message, even before that statement is published, is this: the fundamental ethical principles that govern accounting apply when AI produces work that you would otherwise produce yourself.
Integrity, objectivity, professional competence and due care, confidentiality, professional behaviour. The mode of production does not change the professional obligation attached to the output.
This sounds obvious. The practical application is not, which is exactly why the CCAB is producing a formal statement rather than simply noting that the principles exist.
Integrity: transparency about how work is produced
Integrity requires being straightforward and honest in professional work. In the AI context, this means considering the level of transparency you provide to clients about how work is produced. If AI tools are drafting tax advice, preparing analysis, or generating documentation that a client believes reflects your professional judgement, the question of disclosure is not trivial.
The ICAEW position is clear: members should consider what transparency to provide about AI use for each piece of work. Not a blanket requirement to disclose in every case. A requirement to think about it deliberately, for each engagement.
My position is that the right test is whether the client would consider the AI's role material if they knew of it. If the answer is yes, tell them. If you are uncertain, that uncertainty is itself informative.
Confidentiality: what goes into the tool
Client data entered into third-party AI tools without appropriate contractual protections is not confidential in the way data held in your own systems is.
Professor Smith is explicit: “Is it permitted to load it into a publicly available tool? Probably not. So it’s about being mindful, but also making sure that employees are aware and are appropriately trained.”
Most general-purpose AI tools use input data to improve their models unless you are operating under an enterprise agreement with explicit data handling terms. Entering client financial data, personal data, or commercially sensitive information into these tools without client consent and without understanding the data handling terms is a confidentiality breach. Not a potential one. An actual one.
This is the area where I observe the most unreflective behaviour: finance teams taking tools that are perfectly appropriate for generic work and applying them to client-specific data without thinking through the confidentiality implications. The professional obligation is to check the terms, understand what happens to the data, and either obtain appropriate consent or not use the tool for that purpose.
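One practical safeguard is a pre-submission check that refuses to send obviously client-identifiable strings to a tool that lacks contractual data-handling protections. The sketch below, in Python, is illustrative only: the pattern list, the safe_to_submit helper, and the enterprise-terms flag are all assumptions, not a prescribed control, and a real deployment would use the firm's own data classification rules.

    import re

    # Illustrative patterns only; a real control would reflect the firm's
    # own data classification policy, not a hard-coded regex list.
    BLOCKED_PATTERNS = [
        re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),             # sort-code-like strings
        re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),  # IBAN-like strings
        re.compile(r"\b\d{10}\b"),                        # UTR-style tax references
    ]

    def safe_to_submit(text: str, tool_has_enterprise_terms: bool) -> bool:
        """Block text that looks client-identifiable from reaching a tool
        without explicit contractual data-handling protections."""
        if tool_has_enterprise_terms:
            return True
        return not any(p.search(text) for p in BLOCKED_PATTERNS)

A check like this does not replace the obligation to read the tool's terms or obtain consent; it only catches the careless case where client data is pasted into a general-purpose tool out of habit.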
Objectivity: the bias you cannot see
AI tools trained on historical data reflect historical patterns. In finance and accounting, those patterns include systematic biases in credit decisions, valuations, anomaly detection thresholds, and treatment of certain industries or geographies.
The ICAEW framing: be aware that there could be bias, and take a step back to think about what the tool is telling you. Objectivity requires not just avoiding bias yourself but being alert to the possibility that the tool is introducing it.
I have reviewed AI-generated analysis built on assumptions that were not wrong on their face but were directionally misleading in context. The tool does not know the context. Professional judgement does. Objectivity means exercising that judgement rather than delegating it to an output.
This is where the AI governance framework for finance functions becomes practical. The governance question is not only “do we have controls?” It is “are the controls catching the cases where the tool is confidently wrong?”
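One way to make that question concrete is a review control whose sampling floor does not decay with the tool's track record. The minimal sketch below is an assumption for illustration (the function name, review rate, and risk flag are mine, not CCAB or ICAEW guidance):

    import random

    def requires_full_review(high_risk: bool, base_rate: float = 0.2) -> bool:
        """Decide whether an AI output gets full human verification.
        The floor is deliberately fixed: letting it fall because the tool
        'has usually been right before' is the ethical bypass itself."""
        if high_risk:  # e.g. tax positions, legal citations, unusual inputs
            return True
        return random.random() < base_rate

The design point is that the review rate is a policy constant, not a function of past accuracy.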
Professional competence: hallucination as a professional liability
AI models hallucinate. They produce confident, plausible-sounding outputs that are factually wrong. Cuthbert puts the underlying issue plainly: “Always remember that we might talk about artificial intelligence, but it’s very real, it’s operating in the real world and it’s also not terribly intelligent. It follows commands, it looks for goals. Don’t set expectations which it will never be able to deliver.”
In professional accounting, hallucination is not a curiosity. It is a liability. Tax rules, case law references, regulatory requirements, numerical calculations involving unusual inputs: these are the categories where AI tools produce confident errors with the most consequence. Professional competence requires understanding which categories carry the most hallucination risk and verifying those outputs with appropriate rigour.
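For numerical outputs, that verification can often be an independent recomputation rather than a judgement call. A minimal sketch, assuming a simple line-item total (verify_ai_total and its tolerance are illustrative names, not an established check):

    from decimal import Decimal

    def verify_ai_total(line_items: list[Decimal],
                        ai_reported_total: Decimal,
                        tolerance: Decimal = Decimal("0.01")) -> bool:
        """Recompute the total independently rather than trusting the
        model's arithmetic, which fails most often on unusual inputs."""
        return abs(sum(line_items, Decimal("0")) - ai_reported_total) <= tolerance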
AI cannot fix a broken finance function, and it cannot substitute for professional judgement. The professional competence obligation is to understand what the tools can reliably do and what they cannot.
The governance and culture question
Boards and leadership teams need to understand the purpose of the AI tools in use, the intended use within the organisation, how people are actually using them, and what safeguards are in place, Professor Smith notes. This is not a one-time implementation check. It is ongoing governance, because the tools are developing and the use cases are expanding continuously.
The ethical bypass Cuthbert describes is a cultural pattern, not a policy failure. It develops through the gradual normalisation of reduced scrutiny because the tool has usually been right before. It takes hold in organisations whose boards and finance leaders treat AI governance as a completed project rather than an active one.
The CCAB will publish its Statement to the Profession and case studies this autumn, with a webinar to discuss the profession’s response. Those materials will be worth engaging with carefully when they arrive.
The professional obligations they will articulate are already in effect.
The CCAB Ethics Group’s work on ethical AI use is ongoing, with a Statement to the Profession and case studies planned for publication this year. More at icaew.com.