Artificial intelligence (AI) is increasingly integrated into the operations of UK financial institutions, from risk management and fraud detection to customer service and product design. In practice, this involves the use of AI systems and tools (whether developed in-house or supplied by third-party vendors) within regulated activities and supporting business processes. Its deployment engages overlapping regulatory regimes, in particular the Financial Conduct Authority (FCA) framework and the UK data protection regime.
1. FCA Regulatory Considerations for AI Deployment in the UK
We summarise below the FCA’s approach to the supervision of firms’ use of AI systems and tools under existing regulatory frameworks and highlight the practical implications for governance, outsourcing/third-party risk and operational resilience.
The FCA Approach
The FCA has published its approach to the supervision of firms’ use of AI on its website, stating that it supports firms in experimenting with, developing and testing AI to drive innovation, benefit consumers and markets (whilst balancing the risks) and support UK growth and competitiveness.
The FCA’s principles-based approach is focused on outcomes, allowing firms flexibility to adapt to technological change and market developments. The FCA’s position is that firms’ use of AI systems and tools is supervised through the existing FCA framework, rather than through a bespoke or standalone AI-specific rulebook.
Accordingly, when AI systems or tools are used within regulated activities or customer journeys, the relevant FCA rules and expectations apply. In particular:
- Consumer Duty (PRIN 2A): where AI is used to design or distribute products, set prices, assess eligibility, detect fraud, manage servicing or complaints, or generate customer communications, firms must ensure that they have considered customer outcomes; and
- SM&CR and governance (including SYSC): the deployment and use of AI systems or tools require clear ownership, and firms must have appropriate systems and controls in place, with defined escalation routes for the risks to which they may be exposed.
The FCA is focused on supporting the development of AI models and solutions in a safe and responsible way. In October 2024, it launched the AI Lab as a pathway for engagement with firms on practical AI deployment and testing, including through:
- The Supercharged Sandbox to support early-stage experimentation through enhanced compute and datasets;
- AI Live Testing to support supervised testing of AI systems with regulatory engagement, and to inform expectations and emerging good practice; and
- AI Spotlight projects to develop practical insight into how firms are using AI in financial services.
The FCA has indicated it will continue to engage with the Information Commissioner’s Office (ICO) where firms’ use of AI intersects with data protection requirements.
Legal Implications for FCA Regulated Firms
From a legal and compliance perspective, the key question is whether the AI system or tool creates conduct, governance, third-party or operational resilience risks that the FCA already regulates. The main implications are as follows:
- Governance and accountability (SM&CR / SYSC): firms should be able to evidence who owns the AI-enabled process, what approvals were obtained before deployment, what controls apply (including change control), and how performance is monitored and escalated (including for bias, drift, errors, outages and customer harm).
- Consumer Duty evidence pack (PRIN 2A): where AI is used in customer-facing journeys or decisions, firms should record how customer outcomes were considered at design/deployment (including testing) and how the firm monitors for and remediates poor outcomes for customers.
- Third-party and outsourcing: Many AI systems are embedded within outsourced or third-party arrangements. Where AI capability is delivered through a supplier (including cloud or model-as-a-service), firms should treat the AI dependency as part of the relevant third-party service and ensure the contract and oversight model supports FCA expectations (including due diligence, service continuity, audit rights, access to data and models, incident reporting, and exit/transition/termination rights).
- Operational resilience: where an AI-enabled system or third-party AI service supports an important business service, firms should map the dependency and its upstream/downstream components, test plausible failure modes against impact tolerances, and ensure incident response covers AI-specific failure scenarios (including data quality degradation and model performance drift).
- Record-keeping and auditability: firms should retain proportionate records demonstrating governance decisions, testing/validation outcomes and monitoring MI for AI-enabled processes, so they can evidence compliance in supervisory engagement or in response to customer complaints and disputes.
- Regulatory engagement strategy: where firms use FCA initiatives (e.g., AI Live Testing), they should define the purpose (assurance evidence, governance learning, or control validation) and document how outcomes are considered in internal risk and governance, including remediation actions and updated controls.
2. Data Protection Considerations for Deployment of AI in the UK
We outline first the key UK data protection requirements most commonly engaged by AI systems and tools before setting out the practical compliance implications for contracting, security, individual rights and international data flows.
The UK Data Protection Legal Framework
Where AI systems and tools process personal data, the UK GDPR applies (supported by the Data Protection Act 2018 (DPA 2018)), and the Data (Use and Access) Act 2025 (DUA Act) amends parts of that framework, subject to commencement.
- Lawfulness, fairness and transparency are the core requirements for AI deployments involving personal data (including profiling and inferences). Controllers must identify a UK GDPR Article 6 lawful basis and, where relevant, a UK GDPR Article 9 condition for special category data. They must also comply with purpose limitation and data minimisation by defining (and controlling) how personal data is collected and used, including for training and any secondary uses. Articles 13–14 require clear, intelligible information about the processing, including how the AI system is used in the relevant context and how it affects individuals.
- Automated decision-making: firms should assess whether an AI system or tool is used to make (or materially inform) decisions about individuals that produce legal or similarly significant effects, and whether that decision is solely automated (i.e. without meaningful human involvement). Where that threshold is met, Article 22 UK GDPR applies, requiring appropriate safeguards (including the ability to obtain human intervention, express a view and contest the decision). The DUA Act replaces Article 22 with new Articles 22A–22D once commenced; until then, firms should apply the current Article 22 regime and ICO guidance. A Data Protection Impact Assessment (DPIA) will be required where the AI processing is likely to result in a high risk to individuals’ rights and freedoms.
- Explanation expectations: where AI informs decisions about individuals, firms should ensure they can provide meaningful explanations consistent with the ICO and Alan Turing Institute guidance (“Explaining decisions made with AI”), and that the privacy and transparency information provided under Articles 13–14 UK GDPR is consistent with what the firm can evidence in practice.
Even where an AI use-case is lawful and transparent in design, compliance risk typically crystallises in contracts, security measures, and the practical handling of individual rights.
Legal Implications
In practice, this risk usually crystallises in (i) role allocation and contracts with suppliers, (ii) security and incident handling, (iii) the operational handling of individual rights, and (iv) international data transfers.
Role allocation and contracting: Firms should determine and document whether relevant parties are acting as controllers, processors or joint controllers and ensure the contractual framework reflects that allocation. This typically includes appropriate data processing terms, audit/assurance rights, restrictions on supplier re-use (including for training where applicable), retention/deletion provisions, sub-processing controls, and incident notification and cooperation obligations. Where a supplier acts as a processor, the contract must include UK GDPR Article 28 terms.
Security and cyber alignment: Firms must implement appropriate technical and organisational measures under UK GDPR Article 32 when deploying AI systems/tools that process personal data (including access controls, logging, and measures to prevent unauthorised disclosure). Incident response procedures should cover personal data breaches (assessment and, where required, notification) and align with the firm’s FCA operational resilience arrangements where the AI-enabled service supports an important business service.
Rights handling: Firms must ensure subject access request (DSAR) processes are workable (including third-party rights/redactions), and put in place a clear process to handle objections, rectification requests and complaints where AI-generated inferences or outcomes are challenged, including investigation and remediation where appropriate.
International data flows: Firms should identify when the AI stack triggers cross-border processing or access and ensure transfer compliance is operationally workable and proportionate. Where AI systems or vendors involve cross-border data flows, organisations must implement appropriate transfer mechanisms (adequacy, the IDTA or the UK Addendum to the EU SCCs, as applicable).
Deploying AI within UK financial services requires a dual compliance framework: (i) FCA compliance: embedding AI systems and tools within existing conduct, governance and resilience obligations, ensuring oversight and accountability; and (ii) Data protection compliance: demonstrating lawful, fair and transparent processing, adequate safeguards for automated decisions, where applicable, and robust technical and organisational controls. Firms should document these measures within governance records, risk frameworks, and DPIAs to evidence compliance to both the FCA and the ICO.