AI in Finance
The security risks that arrive with AI tools in finance
14 April 2026
A security researcher working with a BBC journalist recently demonstrated something worth understanding if you are deploying AI tools in a finance context. The journalist used a vibe coding platform to create a simple game. The researcher was able to hack the platform, access and edit the journalist’s code, and gain access to their computer. The journalist had done nothing wrong. There was no malicious link to click, no software to install. The attack surface was the AI tool itself.
This is a new category of cyber risk that finance leaders are taking on without fully understanding it. Not phishing. Not password attacks. These vectors emerge from the specific architecture of AI tools: agents that require access to systems, generated code with unreviewed security gaps, and tools that handle sensitive data in ways organisations have not thought through.
According to the Department for Science, Innovation and Technology’s longitudinal study published this year, 82% of businesses experienced a cyber incident in the previous 12 months. That is not a worst-case scenario figure. It is the baseline.
The vibe coding problem
Vibe coding, the practice of generating code from natural-language prompts, is gaining adoption among finance functions that want automation without traditional development resources. The appeal is real: faster development, no need for programming expertise, and accessibility to finance professionals who can describe what they want in plain language.
The security risk is real too. AI-generated code requires the same review as human-written code, and often gets less scrutiny because it was generated quickly and looks functional.
The BBC demonstration above was not a sophisticated attack. It exploited gaps that appropriate security review would have identified. Earlier this year, a vulnerability was found in Moltbook, a social networking site built entirely with vibe coding, where a misconfigured database exposed 1.5 million authentication tokens, 35,000 email addresses, and private messages. The site owner acknowledged publicly that he “didn’t write one line of code.” Nor, apparently, did he conduct any security testing.
Finance functions building automation tools, workflow applications, or reporting systems using AI-generated code need to treat that code as software requiring security testing, not a productivity shortcut that bypasses normal controls.
The AI agent problem
AI agents are more directly relevant to finance teams than vibe coding. Agents embedded in tools finance professionals are already using, including Microsoft Copilot, require access to a user’s systems, applications, and accounts to perform their tasks. That access, if misconfigured or inadequately secured, creates exploitable attack surfaces.
In February, a bug was reported in Copilot Chat, the AI assistant embedded in Word, Excel, PowerPoint, and Outlook. The bug resulted in confidential emails being summarised and returned by the chat interface, even when those emails were labelled as sensitive. Microsoft confirmed the bug and stated it had been fixed.
The significance is not the specific bug, which was fixed. It is the pattern: AI tools embedded in financial workflows, handling sensitive data, processing information that the organisations deploying them have not fully audited for security implications. The AI governance framework for finance functions needs to include a specific assessment of what access each AI tool has and what it can do with that access. “It’s embedded in our existing tools” is not a security assessment.
Supply chain cyber risk as a finance governance issue
The DSIT study identifies supply chain management as a continued area of weakness. Fewer than one-third of organisations formally assess their suppliers’ cyber security. Over the same period, 77% of charities experienced a cyber incident, alongside the 82% figure for businesses. These numbers are not sector-specific. They are baseline statistics across UK organisations.
For finance leaders, supply chain cyber risk is both a direct exposure and a governance responsibility. Direct: finance systems connect to supplier systems through ERP integrations, payment processing, and data sharing, all of which create exposure when a supplier is compromised. The attacker does not need to breach your systems directly if they can move laterally from a supplier’s compromised system into yours.
Governance: the audit of supplier cyber practices is increasingly expected as part of responsible financial controls. In November 2025, ministers wrote to large businesses asking them to require Cyber Essentials certification in their supply chains. The National Cyber Security Centre published a Supply Chain Playbook in December 2025 to support implementation.
I have built system integrations between finance tools and external platforms, including supplier portals, payment processors, and data providers. The security of those integrations is as important as the security of the core systems they connect to. An integration that works perfectly but allows an attacker to move through a supplier’s compromised system into a finance system is not a secure integration. It is an unreviewed one.
What finance leaders should be doing
The lessons from recent incidents translate into practical recommendations.
AI-generated code needs security testing before deployment. The fact that a tool generated it quickly does not mean it is secure. Conduct appropriate security testing, document the process, and implement review before automation built on AI-generated code runs in a production environment with access to financial data.
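As a concrete illustration of what a first pass at that review might look like, the sketch below scans a piece of generated code for a few obvious red flags. The check names and patterns here are invented examples; real security testing means proper static analysis, dependency scanning, and penetration testing, not a regex pass.

```python
import re

# Hypothetical minimal pre-deployment lint for AI-generated code.
# These three regex checks are illustrative only; a real review would
# use dedicated security tooling rather than pattern matching.
CHECKS = {
    "hardcoded secret": re.compile(r"(?i)(api_key|password|token)\s*=\s*['\"][^'\"]+['\"]"),
    "eval/exec call": re.compile(r"\b(eval|exec)\s*\("),
    "SQL built by string concat": re.compile(r"(?i)(select|insert|update|delete)\b.*['\"]\s*\+"),
}

def scan_generated_code(source: str) -> list[str]:
    """Return a finding per line that matches an obvious red-flag pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for issue, pattern in CHECKS.items():
            if pattern.search(line):
                findings.append(f"line {lineno}: {issue}")
    return findings

snippet = 'api_key = "sk-live-1234"\nquery = "SELECT * FROM users WHERE id=" + user_id\n'
for finding in scan_generated_code(snippet):
    print(finding)  # flags the hardcoded secret and the concatenated SQL
```

The point is not the specific checks but the discipline: generated code passes through a documented gate before it touches financial data, the same as any other software.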
AI agent access should be audited. For each AI tool embedded in a finance workflow, understand what systems it can access, what actions it can take autonomously, and what happens if that access is exploited. “I’m not sure” is not an acceptable answer for tools handling financial data.
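A minimal sketch of what such an audit record could look like in practice. The tool names, systems, and actions below are invented examples; the point is that each agent’s access and autonomy are written down and queryable, so “I’m not sure” stops being possible.

```python
from dataclasses import dataclass

# Hypothetical access inventory for AI agents in a finance workflow.
# Tool names and scopes are illustrative, not anyone's real configuration.
@dataclass
class AgentAccess:
    tool: str
    systems: list               # systems the agent can read from
    autonomous_actions: list    # actions it can take without a human in the loop
    reviewed: bool = False      # has this entry had a documented security review?

inventory = [
    AgentAccess("copilot-chat", ["email", "documents"], ["summarise content"], reviewed=True),
    AgentAccess("invoice-bot", ["ERP", "bank portal"], ["initiate payment"], reviewed=False),
]

def unreviewed_autonomous(entries: list) -> list[str]:
    """Flag agents that can act autonomously but have no documented review."""
    return [e.tool for e in entries if e.autonomous_actions and not e.reviewed]

print(unreviewed_autonomous(inventory))  # the unreviewed payment agent is flagged
```

Even a table this simple answers the three questions in the paragraph above: what the tool can access, what it can do on its own, and whether anyone has actually looked.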
Supplier cyber assessments should be formalised. Finance leaders who manage supplier relationships and payment processing need to include cyber security criteria in supplier assessments, not leave this to IT or procurement alone.
Default to enabling security features, not opting out. On AI tools where privacy and security settings are configurable, review and set them deliberately rather than accepting defaults. The Copilot bug above illustrates what happens when organisations rely on configurations that have not been actively reviewed.
The cyber risk landscape for finance functions has not changed in kind. It has added new vectors. The tools that improve finance team efficiency create attack surfaces that require the same discipline, documentation, and controls as any other system introduction. The finance leader who treats AI tool deployment as a productivity decision and delegates the security question to IT is leaving a governance gap that will eventually be found.
The Cyber Essentials scheme, supply chain playbook, and freely available resources mentioned above are available through the National Cyber Security Centre at ncsc.gov.uk.