The UK National AI Strategy: Regulation, data protection and IPR in the mix

September 27, 2021

The UK Government has published its National AI Strategy. In this blog we examine three discrete issues addressed in it (AI regulation, data protection and intellectual property rights), and we then step back from the detail to share a computer scientist’s view of the wider objectives of the Strategy and the context in which it has been published.

What does the National AI Strategy say about AI regulation?

The UK Government has previously expressed the view that blanket AI-specific regulation would be inappropriate and that existing sector-specific regulators are best placed to consider the impact on their sectors of any further regulation that may be needed. In the National AI Strategy the Government says it now intends to revisit that position and to consider whether:

  • The UK’s current approach is adequate.
  • There is a case for greater cross-cutting AI regulation or greater consistency across regulated sectors.

The Government will no doubt have in mind the EU’s recently proposed AI Regulation, and whether such an approach within a UK context would be likely to encourage innovation in AI or to stifle it. The EU proposal is for cross-sector, risk-based regulation of AI. Its view is that AI innovation depends on trust, and that trust requires a clear and certain regulatory environment.

The UK Government faces a difficult commercial judgement about the balance to be struck. It intends to issue a White Paper in early 2022 addressing the issue, in which it says it will consider these options (and potentially others):

  • Removing some existing regulatory burdens where there is evidence they are creating unnecessary barriers to innovation.
  • Retaining the existing sector-based approach, ensuring that individual regulators are empowered to work flexibly within their own remits to ensure AI delivers the right outcomes.
  • Introducing additional cross-sector principles or rules, specific to AI, to supplement the role of individual regulators and enable greater consistency across existing regimes.

What are the implications of the National AI Strategy for data protection in the UK?

The National AI Strategy simply endorses the proposals made in the Government’s consultation, Data: A new direction (see our blog, UK Government sets out proposals to shake up UK data protection laws), including:

  • Softening the conditions around use and reuse of personal data for research.
  • Disapplying the legitimate interests balancing test for the use of personal data for bias detection, and providing a ground for processing special category personal data and criminal offence data for the same purpose.
  • Considering whether data protection legislation and the ICO are the right forum and regulator for determining “fairness” in profiling and automated decision-making (and the role in this of other regulators such as the Competition and Markets Authority, the Equality and Human Rights Commission, the Financial Conduct Authority and the Medicines and Healthcare products Regulatory Agency).
  • Considering whether to clarify and extend Article 22 GDPR on wholly automated decision-making, in light of uncertainties around its operation, terminology and narrow scope, or to abolish it altogether, as recommended by the Taskforce on Innovation, Growth and Regulatory Reform.
  • Government leading by example through compulsory transparency measures for public bodies and for Government contractors using public data, which would mean providing information about the technical specifications of the algorithms used, the datasets involved and how ethical considerations are being addressed.
  • Emphasising the importance of championing international data flows to facilitate access to data for AI as part of future trade deals.

Finally, also echoing EU developments, the National AI Strategy cross-refers back to the UK National Data Strategy and its aims to make available: (1) more data, through greater attention to data formats at the point of creation; (2) more public sector open data; and (3) more data intermediary structures to facilitate sharing between public and private sector data creators and users.

The development of these proposals alongside a stricter EU regime could prove controversial, in terms of privacy and equality rights campaigns and the future of the UK’s EU data protection adequacy status.

What does the National AI Strategy have to say about intellectual property rights?

AI is giving rise to all kinds of difficult intellectual property rights issues – for example, can an AI be an inventor for the purposes of a patent? (See our detailed analysis of the issues, here, and our blog, Rise of the machines: Federal Court of Australia holds that artificial intelligence can be a patent ‘inventor’.) There are also difficult IPR issues around AI data, and whether intellectual property can vest in AI-generated data (see our analysis of the issues, Disruptive technology data: case studies).

Recognising this, the National AI Strategy says that the UK Government intends to launch a consultation before the end of the year on the copyright treatment of computer-generated works and of text and data mining, and on patents for AI-devised inventions.

The UK is not alone in considering such matters. A number of other countries are also addressing these issues right now – for example:

  • Singapore is in the process of enacting changes to its Copyright Act in relation to text and data mining.
  • The EU also intends to address some of these issues, probably in the form of its proposed Data Act and in a revision to the Database Directive.

An AI researcher responds

Professor Peter McBurney, Professor of Computer Science in the Department of Informatics, King’s College London, and Co-Head of Technology Consulting, Norton Rose Fulbright LLP, shares his views on the National AI Strategy.

The very first goal of the National AI Strategy is that “the UK experiences a significant growth in both the number and type of discoveries that happen in the UK, and are commercialised and exploited here.”

This goal is laudable. But where do ideas come from? In my experience as an AI researcher, good ideas only rarely come from people sitting in isolation under apple trees during pandemics. Much more often, good ideas come from interactions with others – chance encounters, discussions, debates, joint research projects, collaboration on interdisciplinary ventures, etc. In particular, ideas arise when people in one discipline have to explain their work to people in another, because this is when implicit assumptions or definitions are noticed and challenged, or when research methods are questioned and different methods are proposed.

In the case of AI and computer science, many of the key ideas and the continuing challenges have come from philosophy – for example, from the philosophy of reasoning (from which arose the formal logic used to program expert systems and to verify properties of computer systems), the philosophy of argumentation (from which arose the first generation of self-explanatory intelligent systems, back in the 1980s), and now, increasingly, the philosophy of ethics. But philosophy is not the only discipline acting as a source of ideas for AI. Economics, psychology, sociology, anthropology and even theology have been influential in AI.

If we want our automated decision-making machines to act ethically, they or their designers need to know about the philosophy of ethics and the law. If we want our AI systems to recognise and respond to other intelligent entities in their environment, whether human or machine, they or their designers need to know about the psychology and economics of self-interest, about social psychology and collective behaviours, about social structures and hierarchies, and about game theory and strategic interactions. If we want our AI systems to make decisions in artificial societies comprising many such entities, they or their designers need to know how to communicate, how to persuade and how to negotiate with one another: knowledge that falls within the scope of the philosophy of language, of linguistics and of rhetoric. A careful study of Shakespeare’s Julius Caesar will tell you more about persuasive argument than any number of lectures on AI deep learning.

If our intelligent systems are to make decisions fairly and transparently, they or their designers need to understand decision theory from economics, philosophy and marketing science; they need to know how to marshal evidence, from the study of rhetoric and of law; and they need to know how to make decisions jointly as a group, from political philosophy and social choice theory. If we want our AI systems to recognise deception but not to engage in it themselves, our systems or their designers need to have a well-grounded theory of deception, and the only ones available today are found in psychology, in theology (particularly pre-modern theology), and in military strategy.

In summary, world-class AI research draws on a wide hinterland of the humanities, particularly philosophy, linguistics and theology; of economics and the other social sciences; of law and rhetoric; and of military and business strategy. AI research in Britain will only flourish if these other disciplines also flourish, and if there is major funding for interdisciplinary research projects, collaborative activities, and start-up incubators which put AI researchers and people from non-STEM (science, technology, engineering and mathematics) disciplines in the same rooms.