Artificial Intelligence (Regulation) Bill: UK Private Members’ Bill underscores widespread regulatory concerns
A Private Members’ Bill, the Artificial Intelligence (Regulation) Bill (the Bill), has been introduced into the House of Lords (the upper House of the UK Parliament) and is currently at the second Parliamentary stage.
The King’s Speech, which set out the agenda for the current Parliamentary session, did not contain any proposals from the Government for legislation on AI, a point that was highlighted by the House of Commons Science, Innovation and Technology Committee. However, a Member of the House of Lords, Lord Holmes of Richmond, has proposed a Private Members’ Bill.
The purpose of the Bill is to make provision for the regulation of artificial intelligence (AI) and “for connected purposes”.
Currently the Bill is simply a framework for producing more detailed regulations (by statutory instrument), apparently leaving important areas for the Secretary of State to decide rather than Parliament: the Bill provides that some aspects provided for in such regulations would require approval by both Houses of Parliament, while others could simply be brought in but could subsequently be annulled by resolution of either House.
It should be noted that Private Members’ Bills rarely get enacted and there is no indication that this Bill is endorsed by the Government. However, sometimes such Bills get media attention and prompt governments to act.
What does the Bill provide for?
At a high level, the Bill provides for the following:
- A definition of AI.
- The establishment of an AI Authority by separate regulations.
- Regulatory principles that the AI Authority must have regard to in discharging its functions.
- A requirement that the AI Authority must collaborate with relevant regulators to construct regulatory sandboxes for AI.
- A requirement (to be established by separate regulations) that any business which develops, deploys or uses AI must have a designated AI officer.
- Requirements (to be established by separate regulations) relating to transparency, intellectual property and labelling.
- A requirement that the AI Authority must implement a programme for meaningful, long-term public engagement about the opportunities and risks presented by AI and consult with the public.
- Criminal offences, penalties and fines.
In this blog, we consider each of the above, highlighting implications for business were a proposal along these lines to proceed to legislation.
Definition of AI
As has already been seen in relation to the EU AI Act, one of the challenges in regulating AI is to settle upon a definition of AI that is both future proof and acceptable in scope (not too wide or narrow) – see our blog, The AI Act: A step closer to the first law on artificial intelligence.
How is AI defined?
The Bill defines AI to mean:
‘technology enabling the programming or training of a device or software to:
- perceive environments through the use of data;
- interpret data using automated processing designed to approximate cognitive abilities; and
- make recommendations, predictions or decisions;
with a view to achieving a specific objective.
AI includes generative AI, meaning deep or large language models able to generate text and other content based on the data on which they were trained.’
This is clearly a very broad definition. As drafted, it might extend to sophisticated robotic process automation, perhaps receiving data through internet of things devices, where autonomous decision-making may not actually feature. It is likely that the definition will require refinement should the legislation proceed.
The establishment of an AI Authority by separate regulations
The Bill gives the Secretary of State the power to create an AI Authority, which (among other things) would be charged with the following:
- Reviewing legislation applicable to AI, including product safety, privacy and consumer protection legislation, to assess its suitability to address the challenges and opportunities presented by AI.
- Reviewing whether regulators and their responsibilities adequately cover AI risks (essentially a gap analysis).
- Aligning the approach of regulators to AI and promoting interoperability with international regulatory frameworks.
- Monitoring emerging AI risks, consulting with industry, and assessing how the regulatory framework is responding to them.
- Supporting testbeds and sandboxes to get AI to market.
- Accrediting independent AI auditors.
- Implementing a programme of public engagement on opportunities and risks of AI.
As the AI Authority’s role will involve coordination and oversight with other regulators, it is presumably envisaged that regulation of AI will be carried out by existing regulators, as proposed in the Government’s white paper on AI regulation.
Regulatory principles that the AI Authority must have regard to in discharging its functions
The Bill provides that the AI Authority must have regard to some very high level AI regulatory principles – it appears these will be woven into what current regulators do and how they interpret their powers and duties.
They include the following:
- Standard AI good practice: (i) safety, security and robustness; (ii) appropriate transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress.
- Developers and deployers should: (i) be transparent about using AI; (ii) test it thoroughly and transparently; and (iii) comply with laws, including on data protection, privacy and intellectual property.
- AI should: (i) comply with equalities legislation; (ii) be inclusive by design; (iii) not discriminate unlawfully or perpetuate such discrimination from “input” data; (iv) meet the needs of those from lower socio-economic groups, older people and disabled people; and (v) generate data that is findable, accessible, interoperable and reusable.
- Any restrictions imposed should be proportionate to the benefits and risks of the AI application; the AI Authority must also consider the effect on UK competitiveness.
The standard AI good practice principles above are the same five principles as proposed in the Government’s white paper. The Government had planned not to put these principles on a statutory footing initially, but anticipated introducing a statutory duty on regulators to have due regard to the principles after an initial period of implementation.
The Bill adds additional context on expected application of the principles, and proposes that the Secretary of State can amend the principles following consultation.
A requirement that the AI Authority must collaborate with relevant regulators to construct regulatory sandboxes for AI
The AI Authority must collaborate with the regulators to set up AI regulatory sandboxes (which are innovation supportive and designed to give access to tests with real consumers).
A requirement to be established by separate regulations that any business which develops, deploys or uses AI must have a designated AI officer
The Secretary of State, in consultation with the AI Authority, must make regulations to require:
- All businesses which develop or deploy AI to have a designated AI officer whose duties are to ensure the safe, ethical and non-discriminatory use of AI.
- If the business is a listed company, it must include information about the identity of the AI officer, the AI activities it is involved in and the policies it deploys in its annual strategic report (which currently covers, among other things, ESG issues and must be sent to shareholders).
Requirements to be established by separate regulations relating to transparency, intellectual property and labelling
The Secretary of State, in consultation with the AI Authority, must make regulations to require:
- Any person involved in training AI:
- To supply the AI Authority with a record of all third party data and intellectual property used in the training.
- To assure the AI Authority that:
- They use all such data and intellectual property with informed consent.
- They comply with applicable intellectual property and copyright obligations.
- Any person supplying a product or service involving AI must give customers clear and unambiguous health warnings, labelling and opportunities to give or withhold informed consent in advance.
- Any business which develops, deploys or uses AI must allow independent third parties accredited by the AI Authority to audit its processes and systems.
There are some elements of the above requirements that do not make commercial sense at the moment (for example, giving users the opportunity to consent to provision of training data and use of AI), but they are likely to be removed or refined in debate should the proposal proceed further in the legislative process.
The provisions relating to intellectual property reflect concerns about unlicensed use of intellectual property in generative AI. The EU AI Act addresses such concerns with an obligation to disclose what intellectual property has been used to train the system.
A requirement that the AI Authority must implement a programme for meaningful, long-term public engagement about the opportunities and risks presented by AI and consult with the public
The Bill provides that the AI Authority must:
- Implement a programme for meaningful, long-term public engagement about the opportunities and risks presented by AI.
- Consult the general public and such persons as it considers appropriate as to the most effective frameworks for public engagement, having regard to international comparators.
Fines and penalties
The regulations the Secretary of State makes can include requirements to pay fees and to create offences, fines and penalties.
Our take
The most important things for businesses to note about the proposal are:
- The very high level regulatory principles.
- The fact that a compulsory AI officer is contemplated.
- That listed companies would have to set out what they are doing in relation to AI in their annual strategic reports.
- That businesses would have to permit accredited auditors to audit their AI compliance.
The Bill envisages that existing regulators (for example, the ICO and the CMA) will continue to regulate, but the AI Authority would sit above them, coordinating them.
The Bill’s future is uncertain, as the majority of Private Members’ Bills do not become law and there is limited Parliamentary time to debate them. The time available to debate the Bill may also be further limited depending on the timing of the next general election.
A previous AI-related Private Members’ Bill proposed in May 2023, the Artificial Intelligence (Regulation and Workers’ Rights) Bill, did not progress past the first Parliamentary stage. Its progress was halted when the new Parliamentary session began on 7 November 2023, as, unlike Government Bills, Private Members’ Bills cannot be carried over to the next session.
However, Private Members’ Bills can sometimes prompt the Government to take legislative action. In 2018, a Private Members’ Bill brought by Wera Hobhouse MP on “upskirting” was unable to progress following an objection in the House of Commons. The publicity generated prompted the Government to introduce a Bill which later received royal assent as the Voyeurism (Offences) Act 2019.
The Bill underscores that attempts to codify principles about how AI should be developed and used are no longer limited to the European Union. It also suggests that UK legislators will be guided by twin objectives: protecting the public (and data subjects/consumers in particular) on the one hand, while ensuring that innovation and global competitiveness are not hampered on the other.