Executive summary
A decision to adopt AI can raise fundamental and moral issues for society. These are complex and vital issues that are not typically the domain of lawyers.
AI is a field of computer science that includes machine learning, natural language processing, speech processing, expert systems, robotics, and machine vision.
AI will need to meet certain minimum ethical standards to achieve sufficient end user uptake, varying according to the type of AI and sector of deployment.
Courts in a number of countries have already had to address a range of legal questions in relation to the automatic nature of machines and systems.
The key question businesses need to consider is whether deploying AI will result in a shift of ethical and legal responsibility within their supply chain.
Part of our Artificial Intelligence briefing
Global | Publication | June 2017
Assuming a business intends to make a decision in principle to use or develop AI, there is a key question it will need to consider: will the AI result in a shift of ethical and legal responsibility within the business’s supply chain?
A clear example of AI’s potential to do this has already been mentioned: the driverless car. In the absence of mechanical fault with the car, the driver is typically liable for loss it causes. The introduction of AI, however, has the potential to shift that liability up the supply chain to the manufacturer or to another AI Supply Chain Participant.
Changes like these are likely to have a profound impact on business models over time, and any business that might be affected will need to address the ethical-legal consequences. In what follows we introduce an Ethics Risk Toolkit for managing the distinctive ethical-legal problems that arise in the development or use of AI. As the potential for AI to reallocate ethical and legal responsibility will vary according to the type of AI at issue and the relevant sector, the Toolkit is a framework only, and will require detailed customisation to accommodate the circumstances of a particular business and its sector.
AI acting autonomously expands the scope of ethical decision-making:
1. new ethical judgments: decisions that would have been made by a human in a split-second reaction are instead made by the AI autonomously. Instead of an instant, unpremeditated judgment, there is a precise calculation based on principles articulated when the AI was designed and built. An instantaneous reaction becomes an ethical judgment.
The AI programmer’s dilemma
Justin Moore, AI, Autonomous Cars and Moral Dilemmas, Techcrunch.com, 19 October 2016
“Take, for example, an autonomous car self-driving along the road when another car comes flying through an intersection. The imminent t-bone crash has a 90 percent chance of killing the self-driving car’s passenger, as well as the other driver. If it swerves to the left, it’ll hit a child crossing the street with a ball. If it swerves to the right, it’ll hit an old woman crossing the street in a wheelchair.”
Here the programming of the AI within the car determines the outcome even before the driver has turned the key to start the engine. There is no split-second reaction: the outcome follows from the programming, which was settled at the time of coding. The programming itself therefore involves an ethical judgment.
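To make this concrete, the sketch below is a purely hypothetical illustration (the manoeuvres, probabilities and harm weights are invented, and no real vehicle system is described) of how an outcome-weighting table fixed at coding time pre-determines the choice the car appears to make in the moment.

```python
# Hypothetical sketch only: shows how outcome weightings fixed at design time
# pre-determine the "decision" an autonomous vehicle appears to make at the
# moment of a crash. All values are invented for illustration.

# Harm weights chosen by the design team -- this table IS the ethical judgment.
HARM_WEIGHTS = {
    "passenger_fatality": 1.0,
    "child_pedestrian": 1.0,
    "elderly_pedestrian": 1.0,
}

MANOEUVRES = {
    # manoeuvre: list of (outcome, probability) pairs, assumed for illustration
    "continue": [("passenger_fatality", 0.9)],
    "swerve_left": [("child_pedestrian", 0.9)],
    "swerve_right": [("elderly_pedestrian", 0.9)],
}

def expected_harm(outcomes):
    """Probability-weighted harm score for one candidate manoeuvre."""
    return sum(HARM_WEIGHTS[outcome] * p for outcome, p in outcomes)

def choose_manoeuvre():
    """Select the manoeuvre with the lowest expected harm."""
    return min(MANOEUVRES, key=lambda m: expected_harm(MANOEUVRES[m]))

if __name__ == "__main__":
    # Whatever this prints was settled when HARM_WEIGHTS was written,
    # not in the split second before impact.
    print(choose_manoeuvre())
```

The point is not the particular numbers: it is that someone has to choose them in advance, and that choice is then applied automatically in every equivalent situation.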
2. ethical judgments by manufacturers not users: decisions that would have been made by the user at the time the situation arises are instead made by the manufacturer (or other AI Supply Chain Participant) when the AI was designed and built. This is easily obscured by the fact that an AI system appears to make the judgment when the situation arises (as would an ordinary user). However, an AI system does not have any independent agency. It is simply carrying out the processes built into it by the manufacturer.
Shift in ethical responsibility
Because the introduction of AI has the potential to shift responsibility up the supply chain, in the case of, say, a driverless car it will be the manufacturer (or other responsible AI Supply Chain Participant), rather than the driver, who is forced to make these new ethical judgments.
3. consistency of ethical judgments: decisions that would have been made at different times by different users are made all at once by a single manufacturer. Consistency (a key requirement of ethical judgments) becomes measurable for the first time.
Same facts, same result
In State v Loomis1 the Wisconsin Supreme Court recently approved a trial court’s use of an algorithmic risk assessment in relation to the assessment of risk of criminal recidivism for sentencing purposes. Historically there has been a wide disparity in sentencing terms for the same offences, in part because of subjective and sometimes wildly varying human prejudices when applying sentencing guidelines. An argument in favour of using such AI is that, based on the same set of facts, it produces the same sentencing term, and so achieves consistency of treatment.
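Consistency of this kind can also be measured directly. The sketch below is a toy illustration only (the scoring function and case data are invented and bear no relation to the tool considered in Loomis): it simply checks that identical fact patterns always produce identical scores, a property that cannot readily be tested across many individual human decision-makers.

```python
# Hypothetical sketch: measuring the consistency of an automated assessment.
# The scoring function and the inputs are invented for illustration only.

def risk_score(prior_offences: int, age: int) -> int:
    """Toy deterministic risk score; stands in for any assessment model."""
    return min(10, prior_offences * 2 + (1 if age < 25 else 0))

def is_consistent(cases) -> bool:
    """True if every repeat of the same facts produces the same score."""
    seen = {}
    for facts in cases:
        score = risk_score(*facts)
        if facts in seen and seen[facts] != score:
            return False
        seen[facts] = score
    return True

if __name__ == "__main__":
    cases = [(3, 22), (0, 40), (3, 22), (1, 31), (0, 40)]
    print(is_consistent(cases))  # True: same facts, same result
```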
How can a business deal with this new risk: that is, multiple scenarios demanding ethical judgments that are difficult to foresee, must be agreed in advance, and must be consistent? It is not possible to eliminate this risk by guaranteeing that all ethical judgments will be correct. In fact, as people will disagree about the correct decision, it is futile to seek judgments that command universal approval.
Instead, businesses should consider creating a defensible process for making ethical judgments. Elements of such an approach are set out in this Toolkit. In response to a query about an ethical judgment, the business could point, not to the correctness of the decision, but rather to the robustness of the process which led to that decision.
The Toolkit has three elements: Unmasking, Process and Validation.
Unmasking
Ethical judgments are often mixed in with other parts of the automation process. Weighting of different actions or outcomes may be delegated to programmers or accountants, and verification3 may be carried out by statisticians. If the AI is an artificial neural network (ANN), these decisions may be implicit in the architecture and accessible only indirectly through its parameters or hyper-parameters.
A robust process cannot be created until all these different elements are exposed and categorised, so that they can be adjusted, verified, and audited. This calls for an interdisciplinary approach. Lawyers, coders, accountants, statisticians, and philosophers will all be needed to tease apart the existing design of the AI and to identify where ethical judgments are made.
“Bringing together a multidisciplinary and diverse group of individuals will ensure that all potential ethical issues are covered.”
IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, version one, 2016, page 44
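As a purely illustrative sketch (the parameter names, values and sign-off references are invented), one way of exposing these judgments is to move them out of the surrounding code into a single register that lawyers, statisticians and the Ethics Board can read, challenge and audit.

```python
# Hypothetical sketch: surfacing ethically significant parameters so that they
# can be reviewed and audited, rather than left buried as "magic numbers".
from dataclasses import dataclass

@dataclass(frozen=True)
class EthicalParameter:
    name: str          # what the value controls
    value: float       # the number actually used by the system
    rationale: str     # why this value was chosen
    approved_by: str   # who signed it off (e.g. the Ethics Board)

# Illustrative register of value-laden settings; names and values are invented.
ETHICS_REGISTER = [
    EthicalParameter(
        "false_positive_weight", 5.0,
        "Wrongly flagging a person is treated as five times worse than missing a case",
        "Ethics Board minute 12"),
    EthicalParameter(
        "minimum_confidence_for_automation", 0.95,
        "Below this confidence the decision is referred to a human reviewer",
        "Ethics Board minute 14"),
]

def audit_report():
    """Render the register in a form that non-programmers can review."""
    for p in ETHICS_REGISTER:
        print(f"{p.name} = {p.value}")
        print(f"  rationale: {p.rationale}")
        print(f"  approved:  {p.approved_by}")

if __name__ == "__main__":
    audit_report()
```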
Process
Once the locus of ethical decision-making in an AI system has been identified, it is necessary to impose a process on those decisions.
A process imposed in relation to decision-making undertaken by AI will typically involve supervision by an Ethics Board and a set of principles and rules recorded in an Ethics Policy.
The supervision of the Ethics Board and the principles and rules set out in the Ethics Policy aim to address the consistency issue: ensuring that all similar ethical judgments are made in a similar way. The Ethics Policy can expressly deal with issues like bias and discrimination, although these will still have to be verified by monitoring outcomes at the next stage (see Validation).
The Ethics Policy will be able to take account of industry-wide guidance (such as standards and codes of practice). Similarly, the Ethics Board can take advantage of work by other Ethics Boards.
The Process stage ends with an AI system that has been built to comply with ethical rules, together with a set of associated documentation (from high-level Ethics Board principles down to coding functional specifications) recording how those rules were arrived at and how they have been implemented.
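As a hypothetical illustration of how that documentation chain might be kept verifiable, the sketch below maps invented Ethics Policy clause references to the automated checks said to implement them, and flags any clause left uncovered.

```python
# Hypothetical sketch: tracing each coded check back to the Ethics Policy
# clause it implements, so gaps in coverage are visible. The clause numbers
# and check names are invented for illustration.

POLICY_TRACE = {
    "EP-3.1 (non-discrimination)": ["test_equal_approval_rates_by_group"],
    "EP-4.2 (human referral below confidence threshold)": ["test_low_confidence_referred"],
    "EP-5.4 (audit trail for every decision)": [],   # gap: no check written yet
}

def untraced_clauses(trace):
    """Policy clauses that no automated check currently covers."""
    return [clause for clause, checks in trace.items() if not checks]

if __name__ == "__main__":
    for clause in untraced_clauses(POLICY_TRACE):
        print("No verification yet for", clause)
```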
Validation
“The most effective way to minimise the risk of unintended outcomes is through extensive testing.”10 Testing the ethics frameworks resulting from the Unmasking and Process stages, together with ongoing verification, also requires an interdisciplinary team. For instance, examination of the underlying code will need programmers, analysis of test data will need statisticians, and comparison with laws and regulations will need lawyers.
“Technologists should be able to characterise what their algorithms or systems are going to do via transparent and traceable standards. … Similar to the idea of the flight recorder in the field of aviation, algorithmic traceability can provide insights on what computations led to specific results ending up in questionable or dangerous behaviours. Even where such processes remain somewhat opaque, technologists should seek indirect means of validating results and detecting harms.”
IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, version one, 2016, page 48
There are two broad approaches to Validation: intrinsic Validation, which is built into the design of the AI system itself, and extrinsic Validation, which tests the system's behaviour from the outside.
As part of intrinsic Validation, ideally AI systems “should generate audit trails recording the facts and law supporting decisions”11, which can then be used as part of the Validation exercise.
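The sketch below is a hypothetical illustration of such an audit trail: a toy rules-based decision that records the facts it received and the rules it applied, so that the record is available for later review. The rules, field names and storage are invented.

```python
# Hypothetical sketch of an intrinsic audit trail: each decision records the
# facts relied upon and the rules applied. All rules and fields are invented.
import datetime
import json

AUDIT_LOG = []  # in practice this would be durable, append-only storage

def decide(facts: dict) -> bool:
    """Toy rules-based decision that logs its own reasoning."""
    rules_applied = []
    approved = True
    if facts.get("age", 0) < 18:
        rules_applied.append("Rule 1: applicants must be 18 or over")
        approved = False
    if facts.get("income", 0) < 10_000:
        rules_applied.append("Rule 2: income must be at least 10,000")
        approved = False
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "facts": facts,
        "rules_applied": rules_applied or ["No refusal rule triggered"],
        "outcome": approved,
    })
    return approved

if __name__ == "__main__":
    decide({"age": 17, "income": 25_000})
    print(json.dumps(AUDIT_LOG, indent=2))
```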
In practice, a combination of both intrinsic and extrinsic Validation will be used. For ANN-based systems, judicious tuning of parameters will allow some insights into the internal models used to control behaviour. Even for a transparent rules-based system, a swift double-check using so-called “Monte Carlo” methods (that is, repeated random sampling to obtain numerical results) will provide extra comfort.
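A minimal sketch of what such a sampling-based check might look like is set out below; the decision function, the invariant tested and the number of trials are all invented for illustration.

```python
# Hypothetical sketch of extrinsic "Monte Carlo" Validation: feed the system
# many randomly sampled scenarios and check that an ethical invariant holds.
import random

def decide(income: float, group: str) -> bool:
    """Stand-in for the system under test."""
    return income >= 20_000  # note: ignores 'group', as the invariant requires

def invariant_holds(trials: int = 10_000, seed: int = 0) -> bool:
    """Check that membership of a protected group never changes the outcome."""
    rng = random.Random(seed)
    for _ in range(trials):
        income = rng.uniform(0, 100_000)
        if decide(income, "group_a") != decide(income, "group_b"):
            return False
    return True

if __name__ == "__main__":
    print("Invariant held over random sample:", invariant_holds())
```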
Validation of an AI system operating within a wider IT eco-system which includes other AI systems can give rise to particular problems. An AI system might work well on its own, but may have unexpected interaction effects in relation to other AI systems. Peter McBurney, Professor of Computer Science at King’s College London, gives this example:
“Suppose that driverless cars communicate with CCTV cameras to detect the presence of pedestrians, and both sets of devices communicate with traffic lights to optimise traffic flow. Something goes wrong and a driverless car hits a pedestrian. Each intelligent device was working perfectly. Some system-level interaction property instead caused a problem.”
AI that is designed for “situatedness” may give rise to additional challenges in terms of Validation (and attribution of liability). “Situatedness” connotes the idea that successful operation of an AI system may require a sophisticated awareness of the environment within which the system operates, including predictions (and models) about how other entities in the environment are likely to act. Testing other than in a live environment would produce a less than complete picture of the performance of the AI system.
The above example illustrates that intrinsic Validation of an AI system will not typically be sufficient. Extrinsic Validation will also be required.
So-called “automation bias” will need to be measured and corrected as part of the Validation stage where humans are inserted into a decision-making process involving AI (see Inserting Humans into the Loop).
Automation bias is the name given to the tendency of humans to: (1) give undue weight to the conclusions presented by automated decision-makers; and (2) ignore evidence suggesting that a different decision should be made.
For example, an automated diagnostician may assist a GP in reaching a diagnosis, but it may also have the effect of reducing the GP's own independent competence, as he or she may be unduly influenced by the automated diagnostician. The result may be that the overall augmented system is not as much of an improvement as had been expected.
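One way the Validation team might quantify this effect, sketched below with invented data, is to compare how often reviewers agree with the AI when deciding blind against how often they agree after seeing the AI's suggestion; a sharp rise in agreement is a signal of possible over-reliance.

```python
# Hypothetical sketch: quantifying automation bias by comparing reviewer
# agreement with the AI before and after the AI's suggestion is shown.
# All decision data are invented for illustration.

def agreement_rate(human_decisions, ai_decisions) -> float:
    """Fraction of cases on which the human agreed with the AI."""
    agreed = sum(h == a for h, a in zip(human_decisions, ai_decisions))
    return agreed / len(ai_decisions)

if __name__ == "__main__":
    ai          = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    blind       = [1, 1, 1, 0, 0, 1, 1, 0, 0, 1]  # human decided without seeing the AI
    ai_assisted = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # same reviewer after seeing the AI
    print("Agreement when blind:      ", agreement_rate(blind, ai))
    print("Agreement when AI-assisted:", agreement_rate(ai_assisted, ai))
    # A jump from 0.6 to 1.0 on the same cases would prompt a closer look.
```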
One-off Validation exercises should be carried out periodically to audit the functioning of the Toolkit and to give a “snapshot” of progress. In essence, however, Validation is continuous. The results are fed back into the Process stage, so that the Process and Validation stages together form an iterative development cycle.
Once embedded in the development of an AI system, a multidisciplinary team applying the Toolkit will give the business confidence that the ethical-legal risks are being addressed.
State v Loomis, 881 N.W.2d 749 (Wis. 2016).
Our use of the description “Validation” here is intended to include verification (checking that a system conforms to a design) as well as the traditional idea of validation (checking that a system acts as desired/intended).
Verification means checking that a system conforms to a design.
IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, version one, 2016, page 93.
IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, version one, 2016, page 38.
IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, version one, 2016, page 22.
European Parliament, Resolution with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), 16 February 2017.
Designing autonomous systems that do not try to work around external obstacles such as ethical rules (‘corrigible systems’) is an active area of research in computer science: see, for instance, Orseau and Armstrong, Safely Interruptible Agents in Proceedings of the Thirty-Second Conference in Artificial Intelligence, 2016.
Executive Office of the President, National Science and Technology Council Committee on Technology, Preparing for the Future of Artificial Intelligence, October 2016, page 3.
Executive Office of the President, National Science and Technology Council Committee on Technology, Preparing for the Future of Artificial Intelligence, October 2016, page 32.