AI in financial services: FCA principles and supervisory expectations

Artificial intelligence and machine learning are being adopted across financial services at pace, in credit decisioning, fraud detection, customer segmentation, algorithmic trading, compliance monitoring, and customer service. The FCA has been broadly supportive of innovation in this area while making increasingly clear that its existing principles-based framework already captures uses of AI, and that firms must apply the same governance standards to AI-assisted decisions as to any other decision with regulatory implications. The FCA's 2022 guidance on AI and machine learning, updated through subsequent Dear CEO letters and thematic reviews, provides the clearest available articulation of regulatory expectations, though the landscape is evolving rapidly.

The FCA's framework for AI governance rests on several interlocking principles. First, explainability: firms must be able to explain how their AI models work, what data they use, and how they produce outputs, both for internal governance purposes and when a customer or regulator asks for an explanation of a decision. This does not necessarily require technical interpretability of every model — the FCA acknowledges that some models are inherently complex — but it does require firms to have a meaningful level of understanding and to be able to articulate the key factors that influenced a decision. A 'black box' model whose outputs cannot be interrogated is unlikely to be consistent with the FCA's SYSC obligations.
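
By way of illustration, the sketch below shows one way a firm might surface the key factors behind an individual decision from a tree-based credit model using SHAP values. It assumes a scikit-learn model and the shap library; the feature names and data are invented for illustration and do not reflect any FCA-prescribed method.

```python
# Minimal sketch: surfacing the key factors behind a single credit decision.
# Assumes a tree-based scikit-learn model and the shap library; feature names
# (income, debt_ratio, etc.) are illustrative, not drawn from any FCA guidance.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["income", "debt_ratio", "months_at_address", "prior_defaults"]
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 4)), columns=feature_names)
y = (X["debt_ratio"] + 0.5 * X["prior_defaults"] > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer gives per-feature contributions (SHAP values) for one applicant,
# which can underpin a plain-language explanation of the decision.
explainer = shap.TreeExplainer(model)
applicant = X.iloc[[0]]
contributions = explainer.shap_values(applicant)[0]

# Rank factors by the size of their contribution to this decision.
ranked = sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1]))
for name, value in ranked:
    print(f"{name}: {value:+.3f}")
```

An output like this does not make a complex model fully interpretable, but it gives the firm a defensible basis for articulating the key factors behind a given decision when a customer or the regulator asks.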

Second, bias and fairness: AI models trained on historical data can perpetuate or amplify discriminatory patterns present in that data. Firms using AI in credit decisioning, insurance underwriting, or customer segmentation must assess whether their models produce outcomes that are disparate across protected characteristics (as defined by the Equality Act 2010) and, if so, whether those disparities are justified by legitimate factors or represent unlawful discrimination. The FCA has flagged this as an area of growing supervisory concern and expects firms to have documented processes for testing model outputs for bias.
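
A minimal sketch of one such documented test follows: comparing approval rates across groups and flagging any that fall below a threshold ratio. The 0.8 threshold (the 'four-fifths rule' borrowed from US employment practice) and the field names are illustrative assumptions, not FCA standards.

```python
# Minimal sketch: comparing approval rates across groups to flag potential
# disparate outcomes. The 0.8 threshold (the 'four-fifths rule' borrowed from
# US employment practice) is illustrative only; it is not an FCA standard.
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame, group_col: str, approved_col: str) -> pd.Series:
    """Approval rate of each group divided by the highest group approval rate."""
    rates = df.groupby(group_col)[approved_col].mean()
    return rates / rates.max()

# Hypothetical decision log with a protected characteristic recorded for testing.
decisions = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "approved": [1, 0, 1, 1, 0, 1],
})

ratios = adverse_impact_ratios(decisions, "age_band", "approved")
flagged = ratios[ratios < 0.8]
print(ratios)
if not flagged.empty:
    print("Groups below threshold; investigate whether legitimate factors explain the gap:")
    print(flagged)
```

A flagged disparity is the start of the analysis, not the end of it: the documented process should record whether the gap is explained by legitimate factors or indicates unlawful discrimination.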

Third, accountability: there must be a human in the loop. AI can assist, inform, and support decisions, but where a decision has significant regulatory or consumer impact, a firm must be able to identify the individual(s) responsible for that decision and demonstrate that they exercised genuine oversight and judgement — not merely ratification of an algorithmic output. Under SMCR, AI governance is increasingly being mapped to specific SMF responsibilities, and boards should ensure that their governance frameworks clearly allocate responsibility for AI risk.
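
One way to evidence genuine oversight rather than mere ratification is a structured decision record that captures the model recommendation, the reviewing individual, and their rationale. The sketch below is hypothetical; the field names and the SMF mapping are invented for illustration.

```python
# Hypothetical sketch of a decision record designed to evidence genuine human
# oversight rather than rubber-stamping: the reviewer, their rationale, and
# whether they departed from the model output are all captured and auditable.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIDecisionRecord:
    case_id: str
    model_id: str                # which model version produced the recommendation
    model_recommendation: str    # e.g. "decline"
    final_decision: str          # may differ from the recommendation
    reviewer: str                # accountable individual (mapped to an SMF owner)
    rationale: str               # free-text judgement, required even on agreement
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def overridden(self) -> bool:
        return self.final_decision != self.model_recommendation

record = AIDecisionRecord(
    case_id="C-1042",
    model_id="credit-risk-v3.1",
    model_recommendation="decline",
    final_decision="approve",
    reviewer="j.smith",
    rationale="Recent income change not reflected in bureau data.",
)
print(record.overridden)  # True: an override, with the reasoning preserved
```

Requiring a rationale even where the reviewer agrees with the model is the point: a log of bare approvals is indistinguishable from ratification.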

The FCA's planned AI supervisory exercise — signalled in its 2025/26 Business Plan — will involve thematic data collection from a range of firms about their AI use cases, governance arrangements, and risk management processes. Firms should use this period to conduct a self-assessment against the FCA's published AI principles and to ensure that their AI governance framework is documented, proportionate to the AI uses deployed, and subject to appropriate board oversight.
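
As a starting point for such a self-assessment, a firm might maintain a per-use-case register and check it for completeness ahead of any data request. The sketch below is hypothetical; the required fields and the example values are assumptions, not a list published by the FCA.

```python
# Hypothetical sketch: a per-use-case AI register checked for completeness
# ahead of a thematic data request. The field names are illustrative, not a
# list published by the FCA.
REQUIRED_FIELDS = {
    "use_case", "business_owner", "smf_owner", "model_type",
    "customer_impact", "bias_testing", "explainability_approach",
    "last_board_review",
}

ai_register = [
    {
        "use_case": "retail credit decisioning",
        "business_owner": "Head of Lending",
        "smf_owner": "SMF4",
        "model_type": "gradient boosting",
        "customer_impact": "high",
        "bias_testing": "quarterly adverse-impact review",
        "explainability_approach": "per-decision SHAP summaries",
        # "last_board_review" deliberately omitted to show the gap check
    },
]

for entry in ai_register:
    missing = REQUIRED_FIELDS - entry.keys()
    if missing:
        print(f"{entry['use_case']}: missing {sorted(missing)}")
```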

Generative AI and large language models

The use of generative AI and large language models (LLMs) in financial services introduces risks that existing model governance frameworks may not fully address. LLMs can generate plausible but factually incorrect content ('hallucinations'), may have training data cutoffs that make them unreliable for current regulatory questions, and raise particular data protection concerns when processing customer information. Firms deploying LLMs in customer-facing or compliance contexts should ensure that their governance frameworks address these risks, including accuracy testing, human review of outputs, and appropriate data minimisation in model inputs.
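
On data minimisation specifically, the sketch below shows a coarse first-line control: redacting obvious customer identifiers before text reaches a model. The patterns are illustrative assumptions and would miss many identifier formats; regex redaction alone is not a complete data protection control.

```python
# Minimal sketch: redacting obvious customer identifiers before text is sent
# to an external LLM. Regex redaction is a coarse first line of defence only;
# the patterns below are illustrative and would miss many identifier formats.
import re

REDACTIONS = [
    (re.compile(r"\b\d{2}-\d{2}-\d{2}\b"), "[SORT_CODE]"),      # UK sort code
    (re.compile(r"\b\d{8}\b"), "[ACCOUNT_NUMBER]"),             # 8-digit account no.
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:\+44|0)7\d{9}\b"), "[PHONE]"),           # UK mobile
]

def minimise(text: str) -> str:
    """Replace matched identifiers with placeholder tokens before model input."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

prompt = "Customer j.doe@example.com (acct 12345678, sort 20-00-00) disputes a fee."
print(minimise(prompt))
# Customer [EMAIL] (acct [ACCOUNT_NUMBER], sort [SORT_CODE]) disputes a fee.
```

A production control would layer this with structured-field filtering, access controls, and contractual restrictions on how the model provider handles inputs; the redaction step simply ensures the most obvious identifiers never leave the firm.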