EU’s High-Level Expert Group on AI publishes draft Artificial Intelligence Ethics Guidelines

February 19, 2019

The European Commission’s High-Level Expert Group on Artificial Intelligence, made up of fifty-two academics and representatives from business and civil society groups, has published draft ethics guidelines on how to achieve trustworthy Artificial Intelligence (AI).

This guidance comes at a time when the EU considers it commercially vital to “up its game” in the international AI investment market. To catch up with the levels of AI spending in the U.S. and China, EU member states will need collectively to invest €20 billion in AI research by 2020, and the same amount per year over the following decade. The EU hopes that the creation of ethical guidelines will foster EU investment in AI and help European technology to become more competitive.

The guidance envisages that trustworthy AI ought to have two components:

  • it should respect fundamental rights, applicable regulation and core principles and values, ensuring an “ethical purpose”; and
  • it should be technically robust and reliable since (even with good intentions) a lack of technological mastery can cause unintentional harm.

We consider each of these components in turn:

Component one:

AI should respect fundamental rights, applicable regulation and core principles and values, ensuring an “ethical purpose”.

The guidelines outline some of the fundamental rights, principles and values that AI should respect in order to be trustworthy. These include respect for human dignity, an individual’s right to make decisions for themselves, and equal treatment.

The guidelines go on to outline five principles and correlated values that must be observed to ensure that AI is developed in a human-centric manner. These consist of:

  • do good;
  • do no harm;
  • preserve human agency;
  • be fair; and
  • operate transparently.

Component two:

AI should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm.

Component one’s “ethical purpose” is supported by a non-exhaustive list of requirements for achieving trustworthy AI. These address issues such as accountability, governance, respect for privacy and human autonomy, safety and transparency. The guidance also sets out technical and non-technical methods for achieving trustworthy AI.

The technical methods: these include embedding ethical values into the design of an AI system, creating architectures for trustworthy AI, extensive testing and validation, and the systematic documentation of the decisions an AI system makes and of the process by which each decision was reached, in order to ensure:

  • transparency and the traceability of the decision; and
  • a clear explanation of the reasoning behind the decision.
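For readers wanting a concrete sense of what “systematic documentation” of AI decisions might look like in practice, the short Python sketch below illustrates one possible approach: each automated decision is appended to a log together with its inputs, output, rationale and model version, so that it can later be traced and explained. This is purely illustrative; the names used (DecisionRecord, DecisionLogger, the JSON-lines log file) are our own assumptions and are not a format prescribed by the guidelines.

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """One audited decision made by an AI system (illustrative schema only)."""
    inputs: dict          # the inputs the system considered
    output: str           # the decision that was returned
    rationale: str        # human-readable explanation of the reasoning
    model_version: str    # which model or configuration produced the decision
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class DecisionLogger:
    """Appends each decision to a JSON-lines file so it can be traced and audited later."""

    def __init__(self, path: str = "decisions.jsonl"):
        self.path = path

    def log(self, record: DecisionRecord) -> str:
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(record)) + "\n")
        return record.decision_id


# Example usage: record a hypothetical credit decision alongside its rationale.
logger = DecisionLogger()
decision_id = logger.log(
    DecisionRecord(
        inputs={"income": 42000, "existing_debt": 15000},
        output="application_declined",
        rationale="Debt-to-income ratio above configured threshold of 0.25",
        model_version="credit-scoring-v1.3",
    )
)
print(f"Logged decision {decision_id} for later audit")
```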

The non-technical methods: these envisage a framework of regulation, guidance, standardisation and accountability.

The guidance states that trustworthy AI also requires responsibility mechanisms that, when harm does occur, ensure an appropriate remedy can be put in place.

Critical concerns raised by AI:

The guidelines have identified a number of critical concerns worthy of further public consultation:

  • Identification without consent: new AI-powered software could enable mass tracking and mass surveillance of citizens using biometric data, which could be open to serious abuse. The guidelines state that informed consent should be a precondition of deploying AI in this way. However, how such consent would be obtained has yet to be determined, and may prove difficult to achieve in practice in all cases;
  • Covert AI systems: this refers to software and robots that successfully pretend to be human. The guidance notes that “their inclusion in human society might change our perception of humans and humanity, and have multiple consequences such as attachment, influence, or reduction of value of the human being”. Such systems could easily manipulate humans on an unprecedented scale, and therefore the guidance suggests that AI of this type must identify itself as AI;
  • Normative and mass citizen scoring without consent, infringing fundamental rights: AI can use algorithms to allocate scores to large numbers of citizens, based on certain defined criteria. The guidelines suggest that “whenever citizen scoring is applied in a limited social domain, a fully transparent procedure should be available to citizens, providing them with information on the process, purpose and methodology of the scoring, and ideally providing them with the possibility to opt-out of the scoring mechanism”; and
  • Lethal autonomous weapons systems (LAWS): while the guidance acknowledges that LAWS “can reduce collateral damage, e.g. saving selectively children”, they nonetheless raise major ethical issues concerning, for example, the ability of robots “to decide whom, when and where to fight without human intervention”, which could lead “to an uncontrollable arms race on a historically unprecedented level, and can create military contexts in which human control is almost entirely relinquished, and risks of malfunctions not addressed”.

Assessing trustworthy AI:

Usefully, the final chapter of the guidance sets out a checklist of questions to help AI developers assess whether they are meeting the requirements for trustworthy AI.

The questions cover topics such as accountability, data governance, non-discrimination, respect for privacy, robustness, transparency and safety.

Our take:

Following consultation, the High-Level Expert Group envisages that the final guidelines will be submitted to the European Commission for consideration in March 2019.

More broadly, the development of such guidelines reflects thinking among many governments that “something needs to be done” about AI. Significant AI vendors have established fora for the development of self-regulatory practices and standards. To what extent governments and regulators will consider self-regulation to be sufficient remains to be seen. If not, they will need to consider a further question: should regulation be general (that is, regulating the technology itself regardless of the domain of application), sector-specific (regulating particular domains), or a combination of both?

As at the date of publication, and in light of Brexit, it is unclear whether the UK will engage in any future EU initiatives concerning AI. In any event, the UK has been pursuing its own AI initiatives, such as:

  • the House of Lords Select Committee on Artificial Intelligence’s report, published in 2018, outlining a strategy for an ethical code for the use and development of AI; and
  • the establishment of the UK Centre for Data Ethics and Innovation, which the UK government envisages as an advisory body that will investigate and advise on how the use of data and data-enabled technologies, including AI, is governed.

 

The author would like to thank Holly Tunnah, Knowledge Paralegal, for her assistance in preparing this article.