Artificial intelligence: Steps towards an ethical and legal framework

April 18, 2018

The European Group on Ethics in Science and New Technologies (EGE) recently published a statement calling upon the European Commission to launch a process to develop an internationally recognised ethical and legal framework for the design, production, use and governance of artificial intelligence, robotics, and autonomous systems. 

The EGE stressed the importance of establishing such a framework, as the pace at which AI develops threatens to eclipse attempts to address ethical, legal and societal questions.

Citing examples such as self-driving cars and autonomous weapons systems, the statement focuses on considerations of: (1) safety, security, and the prevention of harm/mitigation of risks; (2) questions of human moral responsibility/moral agency; (3) regulation, monitoring, testing and certification; (4) big data, behavioural science, and data privacy; and (5) the explainability and transparency of AI.

Building upon the statement, the European Commission has taken a first step by forming an expert group on artificial intelligence, which will advise on the development of a European AI Alliance, support the implementation of upcoming European initiatives on AI, and produce draft guidelines for the ethical development and use of AI based on European fundamental rights.

What does this mean for business?

While the development of AI is a major opportunity for business, those considering a move towards the use or development of AI need to address the accompanying ethical and legal issues. Why? One need only consider the case of the driverless vehicle. The first pedestrian death caused by a driverless car recently prompted the immediate suspension of several autonomous vehicle pilots in the U.S. Such an outcome illustrates that businesses have both a commercial and a legal imperative to address the ethical and legal issues:

  • commercially, standards of ethical and legal compliance will be required in order to achieve overall market acceptance for the AI-enabled product/service. It becomes a question of public trust; and
  • legally, factoring in ethical considerations is likely to help mitigate liability. To understand how, it is necessary to have some appreciation of the impact of AI on ethical decision-making. AI acting autonomously expands the scope of ethical decision-making:
      • new ethical judgments: decisions that would have been made by a split-second human reaction are made autonomously by AI. Instead of an instant, unpremeditated judgment, there is a precise calculation based on principles articulated when the AI was designed and built. An instantaneous reaction becomes an ethical judgment;
      • ethical judgments by manufacturers, not users: decisions that would have been made by the user at the time the situation arises are instead made by the manufacturer when the AI is designed and built; and
      • consistency of ethical judgments: decisions that would have been made at different times by different users are made all at once by a single manufacturer. Consistency (a key requirement of ethical judgments) becomes measurable for the first time.

In the light of these factors, it is hardly surprising that the EGE and the European Commission should focus so much on the ethical implications of AI, for AI gives rise to multiple scenarios demanding ethical judgments that are difficult to foresee, must be agreed in advance, and must be consistent.

How can a business deal with this new kind of risk? It is not possible to eliminate the risk by guaranteeing that all ethical judgments will be correct. In fact, as people will disagree about the correct decision, it is futile to seek judgments that command universal approval.

Instead, businesses should consider creating a defensible process for making ethical judgments. Elements of such an approach are set out in our Ethics Risk Toolkit. In response to a query about an ethical judgment, the business could point, not to the correctness of the decision, but rather to the robustness of the process which led to that decision.

The author would like to thank Matthew Gregson, Trainee Solicitor, for his assistance in preparing this post.