AI and the UK regulatory framework

May 15, 2023

In March 2023, the Department for Science, Innovation and Technology published a white paper detailing its proposed approach to regulating artificial intelligence (AI) in the UK. Its approach differs from that of the EU: a non-statutory, principles-based framework to be applied by regulators in each sector. By contrast, the EU AI Act will, once finalised, be prescriptive legislation applying across sectors.

The UK's aim is to foster innovation, driving growth and prosperity, while also enabling consumer trust and strengthening the UK's position as "a global leader in AI". It will do this by setting a framework of principles which sector-specific regulators will apply proportionately within their respective regulatory remits.

Due to the extraordinary pace of change in the sector, this non-statutory flexible approach is intended to be built up through guidance that can be issued and updated easily.

Central government will coordinate and ensure consistency of approach between regulators by giving guidance on how to apply the principles, including as to the risks posed in particular contexts and what measures should be applied to mitigate them, and by monitoring and evaluating the effectiveness of the framework.

Regulators must provide guidance as to how the principles interact with existing legislation, illustrate what compliance looks like, and produce joint guidance with other regulators where appropriate.

The UK framework will define AI by reference to two characteristics:

  • Adaptivity – the fact that training allows the AI to infer patterns often not easily discernible to humans or envisioned by the AI's programmers;
  • Autonomy – some AI systems can make decisions without the express intent or ongoing control of a human.

The intention is to regulate high-risk uses rather than all applications with these characteristics. The white paper does not pre-define these uses, so we must wait to see whether they coincide with those that will be most affected in the EU, such as credit scoring and life and health insurance underwriting.

Below are the principles and some initial thoughts on themes one might expect to see in future guidance from the financial services regulators.

Safety, security and robustness

AI systems should function in a secure and safe way throughout their lifecycle, and risks should be continually identified, assessed and managed. IT, data and cyber security are base-level expectations for the financial services regulators. This principle also echoes their focus on risk assessment and operational resilience, both of which are essential to the safety of users of financial services and of the financial system itself.

Firms might expect guidance on how to adapt these arrangements to the new combination of risks presented by AI.

Appropriate transparency and explainability

Transparency refers to the communication of information about an AI system and explainability refers to the extent to which it is possible for relevant parties to access, interpret and understand the decision-making process of the AI. The paper acknowledges that an appropriate degree of transparency and explainability should be proportionate to the risks presented by the AI system.

In financial services, this might depend on factors such as the type of decision and the customer concerned. These concepts also reflect existing rules and principles including the new Consumer Duty outcome requiring firms to ensure customers receive communications they can understand. This could be particularly challenging where firms use AI.

Fairness

The paper recognises that the concept of fairness is embedded across many areas of law and regulation and invites actors involved in the AI lifecycle to consider the definitions that are relevant to them. It anticipates that regulators may need to develop and publish descriptions and illustrations of fairness that apply within their regulatory remit together with technical standards.

The Financial Conduct Authority (FCA) should have a head start here given that several of its Principles for Businesses are based on fairness, and the Consumer Duty requires firms to apply the concept even more broadly, including in relation to fair value.

Accountability and governance

This is about effective oversight of the supply and use of AI systems with clear lines of accountability. Regulator guidance should reflect that accountability means individuals and organisations adopting appropriate measures to ensure the proper functioning of AI systems with which they interact. Financial services firms in the UK have done much work in this area over the last few years as they have implemented the Senior Managers and Certification Regime.

This would naturally accommodate AI as it does other initiatives, but the Prudential Regulation Authority (PRA) and FCA will want to see firms putting in place wider governance arrangements around all stages in the AI lifecycle. In fact, robust governance is likely to be one of the key means through which firms can give the regulators comfort about their evolving use of AI.

Contestability and redress

The paper provides that users, affected third parties and actors in the AI lifecycle should be able to contest a decision that creates a material risk of harm. Regulators are expected to encourage and guide regulated entities to make routes easily available and accessible. It will be interesting to see how the regulators approach this.

Customers of financial services have personal rights of action against firms in relatively limited circumstances, with the regulators more often acting against firms for wider systems and controls weaknesses. Customers of certain services may also be entitled to compensation in the event of a firm failure and this does not depend on the technology used to deliver the service.

Unlike the EU AI Act, the UK plan does not include penalties at this stage.

The white paper is open for consultation until June 21. In the next six months, the government is due to issue the principles and initial implementation guidance to regulators, and in the next 12 months the key regulators will publish guidance on how the principles will apply within their areas.

Given their resonance with existing regulatory expectations, firms developing their use of AI should be working within the spirit of these principles even now, while keeping abreast of the specific guidance to follow not only from the PRA and FCA, but also from other relevant regulators.

This article was originally published by Thomson Reuters Regulatory Intelligence.