UK AI: UK consults on non-statutory cross-sectoral guidance principles for regulating AI – final approach still some way off
On 18 July 2022, the UK Government put forward proposals on the future regulation of Artificial Intelligence (AI). The publications include: (i) National AI Strategy – AI Action Plan (Action Plan); and (ii) a policy paper, ‘Establishing a Pro-Innovation Approach to Regulating AI’ (Policy Paper or Pro-Innovation Approach).
The Action Plan sets out how the Government will meet its objectives in the National AI Strategy and can be found here. The Policy Paper is open for consultation and comment until 26 September 2022, and is a precursor to the Government’s yet-to-be-published White Paper on AI regulation. The Policy Paper is the focus of this post.
The approach anticipates a horizontal non-statutory overlay addressing key areas of AI regulatory novelty to attain consistency and coherence, while allowing sectoral flexibility. The Government will need to make a number of key choices in designing this overlay, balancing detail (which provides coherence and clarity to users) against principles and exceptions (which provide flexibility).
Although a non-statutory footing allows further changes in theory, it will be important to set the guidance correctly early, as businesses will not want to change tack (and will be tracking the EU AI regulation closely, as many will need to comply with both regimes). Many will be disappointed by such a high-level Policy Paper at this stage; it seems that real substance will not be forthcoming until the White Paper on AI regulation is published, presumably towards the end of 2022 or the beginning of 2023 – so it is still “watch this space” in the UK.
According to the Government’s Pro-Innovation Approach, the establishment of clear, innovation-friendly and flexible approaches to regulating AI will be core to achieving the UK’s ambition to unleash growth and innovation while safeguarding the UK’s fundamental values and keeping people safe and secure. Aiming to remain internationally competitive, the Government’s Pro-Innovation Approach is underpinned by a set of cross-sectoral principles tailored to the specific characteristics of AI, and is:
- Context-specific: To regulate AI based on its use and the impact it has on individuals, groups and businesses within a particular context, and delegate responsibility for designing and implementing proportionate regulatory responses to regulators. This will ensure that the UK’s approach is targeted and supports innovation.
- Pro-innovation and risk-based: To focus on addressing issues where there is clear evidence of real risk or missed opportunities. To avoid placing unnecessary barriers in the way of innovation, regulators will be encouraged to focus on high risk concerns rather than hypothetical or low risks associated with AI.
- Coherent: To establish a set of cross-sectoral principles tailored to the distinct characteristics of AI, with regulators asked to interpret, prioritise and implement these principles within their sectors and domains. In order to achieve coherence and support innovation by making the framework as easy as possible to navigate, the Government will look for ways to support and encourage regulatory coordination - for example, by working closely with the Digital Regulation Cooperation Forum and other regulators and stakeholders.
- Proportionate and adaptable: To set out the cross-sectoral principles on a non-statutory basis in the first instance so the approach remains adaptable (to be kept under review). Regulators will be encouraged to consider lighter touch options, such as guidance or voluntary measures, in the first instance. Further, regulators are encouraged to work with existing processes rather than create new ones.
Attributing the success of the UK’s AI ecosystem in part to the UK’s reputation for the quality of its regulators and its rule of law, the Pro-Innovation Approach emphasises that the rules that govern the development and use of AI must keep pace with evolving implications of AI technologies.
Since AI is a general-purpose technology, the development of a clear framework for regulating AI demands clarification of the scope of AI. The Pro-Innovation Approach starts by setting out the core characteristics of AI to inform the scope of the AI regulatory framework. Instead of proposing a universally applicable definition of AI, it allows regulators to evolve sector- or domain-specific definitions of AI.
The Core Characteristics of AI
The Pro-Innovation Approach sets out two key characteristics that: (1) underlie distinct regulatory issues which existing regulation may not be fully suited to address; and (2) form the basis of the scope of the approach:
- ‘adaptiveness’ of the technology - explaining intent or logic: AI systems often partially operate on the basis of instructions that have not been expressly programmed with human intent, having instead been ‘learnt’ on the basis of a variety of techniques. Further, AI systems are often ‘trained’ - once or continually - on data, and execute according to patterns and connections that are not easily discernible to humans. For regulatory purposes, this means that the logic or intent behind the output of systems can often be extremely hard to explain, and that errors or undesirable issues within the training data may be replicated in the outputs.
- ‘autonomy’ of the technology - assigning responsibility for action: AI often demonstrates a high degree of autonomy, operating in dynamic and fast-moving environments by automating complex cognitive tasks. Whether that is playing a video game or navigating on public roads, this ability to strategise and react is what fundamentally makes a system ‘intelligent’, but it also means that decisions are made without express intent or the ongoing control of a human.
While AI systems vary greatly, the Pro-Innovation Approach proposes that it is this combination of core characteristics that demands a bespoke regulatory response and informs the scope of the UK’s approach to regulating AI.
Concerned about issues such as a perceived lack of explainability when high-impact decisions are made about people using AI, the Government proposes developing a set of cross-sectoral principles that regulators will further build into sector or domain-specific AI regulation measures.
The UK’s cross-sectoral principles for AI regulation are tailored to the UK’s values and ambitions, and build on the OECD’s Principles on Artificial Intelligence. They are briefly set out as follows:
- Ensure that AI is used safely.
- Ensure that AI is technically secure and functions as designed.
- Make sure that AI is appropriately transparent and explainable.
- Embed considerations of fairness into AI.
- Define legal persons’ responsibility for AI governance.
- Clarify routes to redress or contestability.
For further details on the cross-sectoral principles, please click here.
The Pro-Innovation Approach proposes initially putting the cross-sectoral principles on a non-statutory footing. This is so the Government can monitor, evaluate and update its approach in order that it remains agile enough to respond to the rapid pace of change in the way that AI impacts upon society.
Regulators will lead the process of identifying, assessing, prioritising and contextualising the specific risks addressed by the cross-sectoral principles. The Government may issue supplementary or supporting guidance (for example, focussed on the interpretation of terms used within the principles, risk and proportionality), in order to support regulators in their application of the principles.
Collaboration between regulators - to ensure a streamlined approach - will be encouraged. As a practical outcome of collaboration between regulators, it is expected that organisations will not have to navigate multiple sets of guidance from several regulators all addressing the same principle.
On the international front, the Government:
- Envisages that the UK will continue to be an active player in organisations such as GPAI and the OECD, as well as acting as a pragmatic pro-innovation voice in ongoing Council of Europe negotiations.
- Intends to ensure that UK industry’s interests are properly represented in international standardisation - both to encourage interoperability, as well as to embed British values.
- Wishes to work with global partners to ensure international agreements embed British values so that progress in AI is achieved responsibly, according to democratic norms and the rule of law.
- Emphasises that it will reject efforts to adopt and apply AI technologies to support authoritarianism or discrimination.
Over the coming months, the Government will be considering how best to implement and refine its approach to drive innovation, boost consumer and investor confidence, and support the development and adoption of new AI systems. Specifically, the UK will be considering:
- The proposed Pro-Innovation Approach and whether it adequately addresses the UK’s prioritised AI-specific risks in a way tailored to the UK’s values and ambitions, while also enabling effective coordination with other international approaches.
- The practical implementation of the Pro-Innovation Approach, including the roles and powers of regulators, coordination across statutory and non-statutory regulators, the need for new institutional architecture to oversee the functioning of the landscape as a whole and to anticipate future challenges, the role of technical standards and assurance mechanisms as potential tools for implementing principles in practice, supporting industry, and enabling international trade.
- Monitoring of the Pro-Innovation Approach to ensure that it delivers the UK’s vision for regulating AI in the UK, is capable of foreseeing future developments, mitigating future risks and maximising the benefits of future opportunities. This includes designing a suitable monitoring and evaluation framework to monitor progress against the Government’s vision as well as criteria for future updates to the framework to ensure a robust approach to identifying and addressing evolving risks.
The Government invites stakeholder views about how the UK can best set the rules for regulating AI in a way that drives innovation and growth, while also protecting fundamental values. These views will inform the much-anticipated White Paper. The call for views and evidence will be open for 10 weeks, closing on 26 September 2022.
For more information about AI, see:
• Our information hub, Artificial intelligence.
• Our recent blogs on AI, AI Blogs.