AI in Finance

What the board needs to understand about AI governance

14 April 2026

Peter Lee, a partner at Simmons & Simmons who leads their AI Governance Advisory Practice, offered a useful measure of AI’s current pace at ICAEW’s Corporate Governance conference in March: “I recently met with a founder who said that a year ago, their business may have needed 50 staff. Now, thanks to vibe coding, they can develop in a day with just four people what once would have taken them two months.”

That is the operational tempo AI is enabling. It is also a governance challenge that most boards are not yet equipped to address.

Pauline Norstrom, CEO of Anekanta AI, describes the typical board response: “Isn’t that with IT? What’s that got to do with us?” Board roles have not yet adapted to include meaningful AI oversight. The consequence is that governance is happening at the wrong level, or not happening at all.


Why AI governance is a board responsibility

There is a direct line between how an organisation uses AI and its purpose, values, and stakeholder relationships. Lee frames this precisely: “Whether you’re compliant with the EU AI Act is a matter of law. But whether you’re operating in a way that your employees and stakeholders are ethically comfortable with is another matter.”

The legal and the ethical are both board responsibilities. Compliance with the EU AI Act requires understanding which AI systems are high risk and what obligations attach to their use. Operating in a way that stakeholders are ethically comfortable with requires broader judgment that cannot be delegated to a legal team or an IT function.

Finance leaders sit at the intersection of both requirements. The AI governance framework for finance functions addresses the internal controls dimension. The board’s responsibility goes beyond that, to the strategic and reputational questions that only board-level engagement can resolve.


What boards need to understand about the risks

Lee identifies several AI risks that boards need to be informed about, not just vaguely aware of.

Bias and hallucinations. These are technical characteristics of AI tools that have real operational and reputational consequences. A board that does not understand the difference between an AI output that is confidently wrong and one that is reliably accurate cannot make meaningful decisions about where AI should and should not be used.

High-risk use cases. Some applications of AI create liability, regulatory exposure, or ethical concerns that require board-level scrutiny rather than operational sign-off. Boards need a framework for identifying which use cases these are and who has accountability for them.

Drift. Lee highlights a risk that is underappreciated outside technical circles: “There’s the concept of drift, where a tool’s rationale can shift away from how it was first designed, in ways that can accentuate any bias in the underlying Large Language Model.” An AI tool that was appropriate when deployed may behave differently over time. Governance frameworks need to account for this, not just sign off on initial deployment.

Agentic AI. Where AI operates without a human in the loop, the governance questions become significantly more complex. The agentic AI implications for finance teams require specific oversight frameworks rather than the same controls applied to tool-assisted human work.

Critical thinking erosion. Lee raises a less obvious risk: AI can blunt the critical thinking of knowledge workers. Finance functions where professional scepticism has been quietly replaced by uncritical acceptance of AI output are functions where the professional value of the human layer has been systematically eroded. This is a governance issue, not just a training one. The pattern is already visible in accounting practices, where professionals spend significant time correcting AI errors that were accepted without adequate review.


Board members need to use the tools

Tuomas Syrjänen, Co-founder and Chair of Futurice, makes a point that boards tend to resist: “Board members should start not merely by asking people’s opinions on AI tools, but by actually using them. Walk the talk.”

The board member who has no direct experience of AI tools is making governance decisions about technology they do not understand. The consequence is a default to one of two failure modes: blanket restriction, of the kind Norstrom describes, where CIOs are instructed to lock everything down and staff are cut off from genuinely useful tools; or uncritical endorsement of whatever AI strategy management presents, because the board has no independent basis for assessment.

Both are failures of governance. Both are currently common.


AI literacy as a governance foundation

AI literacy was raised repeatedly at the ICAEW conference as the foundation for sound governance. Norstrom frames it in terms directly useful for finance professionals: knowing what is under the hood of a model allows you to apply professional scepticism to its outputs.

“For example,” she says, “it’s fairly well known that OpenAI was trained on Reddit. If you have that level of awareness, you can use your professional scepticism and say, ‘What this model is telling me doesn’t align with my expectations.’”

This is exactly the kind of literacy that accountants, trained in professional scepticism, are well placed to apply. The challenge is building it deliberately rather than assuming that general digital competence is sufficient.

From a legal perspective, AI literacy is mandated in the EU AI Act for organisations subject to it. Esther Mallowah of ICAEW suggests using the Act’s framework as a governance guide even for organisations not legally required to comply: “If you look at ISO 42001, that’s really helpful, as is the text of the EU AI Act. Even if you don’t have to comply with it, it can be a helpful guide.”


The sustainability and diversity dimensions

Two dimensions of AI governance are underrepresented in most board discussions: sustainability and diversity.

On sustainability, Lee urges organisations to distinguish tasks that require deep-reasoning AI models from those that could be addressed with simpler tools. Most people do not think about AI’s energy consumption; for organisations with sustainability commitments, undifferentiated AI use is a governance gap that will eventually require explanation.

On diversity, Norstrom is direct: “Right now, if you look at board composition, it’s still skewed towards one demographic. That means their data is going to be skewed, and their insights and ability to spot potential issues will also be skewed.” Boards that lack diversity are making AI governance decisions with limited perspectives. The bias risks in AI outputs are, in part, a reflection of the bias risks in the humans overseeing them.


I have seen this dynamic in the finance functions I have worked with. Boards that delegate AI governance entirely to IT end up with two problems: IT makes deployment decisions without understanding the commercial and regulatory risk, and finance teams adopt tools without the governance framework that professional obligations require. The gap between them is where the liability accumulates. Closing it is a board responsibility, not a byproduct of someone else’s roadmap.

The ethical obligations around AI use in accounting apply at the professional level. The governance questions apply at the board level. Both require active engagement rather than the assumption that someone else in the organisation is handling it.

For most organisations, nobody currently is.