Lack of AI governance leaves financial services vulnerable, senior leaders warn

Finance firms warn that the UK financial services sector is deploying AI faster than it can be governed, leaving a critical gap in AI governance standards.

Lucy Whalen | Editorial Assistant, Financial Reporter
29th April 2026

Senior leaders across financial services have warned of a critical gap in AI governance standards, leaving the UK exposed to systemic risk, new research from Zango reports.

It comes as the Bank of England prepares to convene the Treasury, FCA and National Cyber Security Centre to assess the risks posed by Anthropic’s Mythos model.

Lord Clement-Jones, Liberal Democrat spokesperson for science, innovation and technology in the House of Lords and co-chair of the All-Party Parliamentary Group on AI, writes in the foreword to the report: "What is immediately missing is the translation of high-level regulatory principles into day-to-day operational practice. We cannot simply wait for the aftermath of the first major AI-fuelled financial scandal to force us into action."

The Future of AI Governance & Compliance in Financial Services, coordinated by compliance technology firm Zango AI, draws on interviews with 27 C-suite and senior leaders across risk, compliance and AI governance at UK and European financial institutions, and four industry roundtables with 60 additional senior practitioners.

Contributors to the report include senior leaders from Santander, St James’s Place, Stripe, Standard Chartered, Lloyds Banking Group, Monzo, Allica Bank, Commerzbank, Revolut, and Ecommpay, alongside John Glen MP, member of the Treasury Committee.

The findings highlight a shift in the AI systems adopted by UK financial institutions: from tools that produced predictable outputs to generative and agentic systems whose context-dependent outputs cannot be fully validated in advance, a change that alters what governance must do.

That shift is creating a widening oversight gap. Business and technology teams are deploying AI far faster than the risk and compliance functions responsible for overseeing those deployments, and several institutions were unable to identify all the AI tools in use across their own organisations.

Criminal organisations are already exploiting that gap: global fraud losses hit $579 billion in 2025, with 90% of financial professionals reporting an increase in AI-enabled attacks.

Leaders cited a lack of operational guidance as a significant gap in the UK compared to the US. The US published a practical Financial Services AI Risk Management Framework in February 2026, developed by a Treasury-led public-private collaboration involving 108 financial institutions, with input from agencies including the National Institute of Standards and Technology. The Singapore regulator, the Monetary Authority of Singapore, published an equivalent in March. No comparable standard exists in the UK or EU.

The report warns that without shared operational guidance, firms are solving the same governance problems independently, producing inconsistent control standards and oversight gaps that can be exploited at scale. That dynamic sits at the heart of the AI-enabled risks regulators are now urgently examining.

The report also calls for practitioner-built, sector-specific implementation guidance, developed with regulator engagement and modelled on the precedent set by the Joint Money Laundering Steering Group, the industry-developed standard for financial crime compliance that carries government endorsement without being mandated by regulators. No equivalent exists for AI.

Ritesh Singhania, CEO of Zango, said: "Compliance teams are trying to keep pace with AI systems their own colleagues have deployed, and with criminal networks scaling faster than anyone's defences. Weak governance doesn't just create individual risk; it creates systemic vulnerability across the entire sector. What's missing is a shared implementation standard that gives firms a consistent basis for governing AI as they adopt it."

Dean Nash, adviser to Zango and global chief operating officer (legal) at Santander, said: "The kinds of AI systems now being adopted across financial services don't behave the way the systems we built our governance frameworks around behaved. They make judgements, produce different outputs in different contexts, and cannot be fully tested in advance. This poses a significant accountability problem. Right now, most firms are trying to solve it alone, without a shared standard to work from."
