Executive summary
A decision to adopt AI can raise fundamental and moral issues for society. These are complex and vital issues that are not typically the domain of lawyers.
Part of our Artificial Intelligence briefing
Global | Publication | June 2017
In this briefing we do not deal with wider societal and public policy factors relating to the use and deployment of AI, but limit ourselves to those ethical considerations (we call these Addressable Ethical Considerations) in connection with which businesses are:
For AI to be accepted for use in a given market (for example, by achieving sufficient end user uptake), as a matter of commercial reality the use of AI will need to be perceived by the participants in that market as meeting certain minimum ethical standards. What these are will vary according to the type of AI at issue and the relevant sector in which it is deployed.
Because legal and ethical responsibility are inextricably linked, that same commercial reality in effect imposes an imperative on businesses to address the corresponding legal issues (quite apart from a desire to limit risk). Accordingly we examine:
Take an accounting software package. It can be used to reconcile accounts, but in the wrong hands it can also be used to commit corporate fraud. Assuming programming designed to facilitate such fraud was not deliberately included in the coding, the morally objectionable outcomes from its use (fraud) are determined by the user.
AI, like an accounting software package, performs functions determined by its programmers. However, unlike an accounting software package, AI can learn, determine on what basis (or criteria) it is to make decisions, and make autonomous decisions based on that learning and such criteria.
Its autonomous actions and outcomes are determined not by the user, but flow inexorably from ethical judgments: (1) made at the time the system was programmed; and (2) inherent in the training data to which it was exposed.
AI systems are both the outcome of, and result in, a movement of ethical decision-making to an earlier stage in a system’s life-cycle. This can have profound implications for where ethical and legal responsibility can lie in a given AI supply chain. The Toolkit examines how businesses can mitigate the risks that arise as a result.
The idea that AI systems should be designed at inception to embed human values in order to avoid breaching human rights and creating bias, commonly known as “ethics-bounded optimisation”, is increasingly accepted within the AI industry.1 An example of ethics-bounded optimisation in, say, the Life Sciences and Healthcare sector would be the coding of AI systems to reflect the “first do no harm” principle of the Hippocratic Oath (and its modern equivalents).
AI will revolutionise many activities within the Life Sciences and Healthcare sector. AI in combination with Big Data analytics already plays an important role in clinical trials and diagnostics, and in the future will be expected to significantly augment (and perhaps replace aspects of) human decision-making in primary healthcare and surgical procedures. Stacey Martinez, Global Co-Head of Life Sciences and Healthcare
The ability of AI to predict healthcare-related trends at the macro level (within national, regional or global populations) will also help with private and public sector investment decisions within the industry.
It is expected that there will be significant adoption of the technology within the sector going forward. Because humans are the focus of that technology, it will be crucial that the technology reflects and respects human values. To do otherwise would not only put uptake of the technology within the sector at risk, but could also give rise to serious liability and regulatory concerns for those manufacturing and using it.
There are many formulations of human values, rights, and freedoms. Examples include:
AI will not change the fact that those who breach legal obligations in relation to human rights will still be responsible for such breaches (although it may make determining who is responsible more complex).
Addressing such risks by attempting to embed human values in AI may, however, be extremely difficult. This is because:
AI may also learn from real-time interactions with society. Such interaction may lead to unfortunate results.
Tech Giant “Deletes ‘Teen Girl’ AI After it Became a Hitler-loving Sex Robot Within 24 Hours”
Helen Horton, The Telegraph, 24 March 2016
In 2016 a well-known tech giant launched a chatbot through the messaging platforms Twitter, Kik, and GroupMe. The chatbot was intended to mimic the way a nineteen-year-old American girl might speak. The chatbot owner’s aim was reportedly to conduct research on conversational understanding. The chatbot was programmed to respond to messages in an entertaining way, and to impersonate the audience she was created to target: American eighteen- to twenty-year-olds.
Hours after the chatbot’s launch, among other offensive things, she was providing support for Hitler’s views, and agreeing that 9/11 was probably an inside job. She seemed consistently to choose the most inflammatory responses possible. By the evening of her launch, the chatbot was taken offline.
As the launch of this particular chatbot shows, AI can result in unexpected, unwanted outcomes and reputational damage. The chatbot’s responses were modelled on those she got from humans, so her evolution simply reflected the data sets to which she was exposed. What AI learns from can determine whether its outputs are perceived as intelligent or unhelpful.
The recently published Asilomar AI Principles (promulgated for AI research, AI ethics and values, and longer term AI deployment) offer a credible formulation for embedding human values within AI.2
In some instances, the values to embed in an AI system might need to be specific to the relevant community or stakeholders, particularly for vulnerable groups or users impacted directly by the AI system (such as operators of AI-enabled robotics).
Designers, developers, and manufacturers of AI should avoid creating unacceptable bias from the data sets or algorithms. To mitigate the risk of bias:
AI and AI-enabled products and services will need to incorporate a degree of ethical transparency in order to engender trust (otherwise market uptake may be impeded). This will be particularly important when an AI system’s autonomous decision-making has a direct impact on the lives of the market participants.
How can such ethical transparency be achieved? There are two separate elements. AI should:
The decision-making of an AI system should be open, understandable and consistent to minimise bias in decision-making. This may be easier said than done: an AI system acting or operating autonomously may not indicate how or why it acted or operated a certain way. It is currently rare for AI systems to be set up to provide a reason for reaching a particular decision.3
For example:
The algorithms behind intelligent or autonomous systems are not subject to consistent oversight. This lack of transparency causes concern because end users have no context to know how a certain algorithm or system came to its conclusions. IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, version one, 2016, page 45
Most products involving machine learning or AI rely heavily on proprietary datasets that are often not released. Keeping the data sets proprietary can provide implicit defensibility against competitors. An AI developer might seek to rely on the law relating to trade secrets/confidentiality and other intellectual property rights in relation to the data sets (see What IP Protection is available for AI?).
The proprietary nature of its data sets may be particularly important to an AI developer in circumstances where open source software might otherwise lower barriers to entry by competitors.
Where open data sets:
The operation of an AI system should be transparent to ensure that the AI designer, developer, manufacturer, or other responsible person can explain how and why the system made a decision or executed an action.
Several obstacles may need to be overcome to achieve this objective. For example:
Creators of AI systems should therefore consider implementing a process that automatically logs or reports operations and decisions for a human user, to enable audits and increase transparency. Any assumptions relied on should also be included in the log. The logic and rules of AI systems should also be made available as needed. (See the Toolkit for how such functionality could operate in conjunction with an ethics policy.)
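By way of illustration only, a minimal sketch of such a decision log is set out below. The class and field names are hypothetical and are not drawn from any particular framework or standard; the point is simply that each automated decision is recorded together with its inputs, the assumptions relied on and a human-readable rationale, so that it can be audited later.

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)


@dataclass
class DecisionRecord:
    """One auditable entry: what the system decided, and on what basis."""
    timestamp: str
    model_version: str
    inputs: dict
    assumptions: list   # e.g. default values or imputed data the decision relied on
    decision: str
    rationale: str      # human-readable explanation of the rule or score applied


class DecisionLogger:
    """Appends structured decision records to an audit log for later review."""

    def __init__(self, model_version: str, path: str = "decision_audit.log"):
        self.model_version = model_version
        self.path = path

    def record(self, inputs, decision, rationale, assumptions=None):
        entry = DecisionRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            model_version=self.model_version,
            inputs=inputs,
            assumptions=assumptions or [],
            decision=decision,
            rationale=rationale,
        )
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(entry)) + "\n")
        logging.info("Logged decision: %s", decision)
        return entry


# Example: a hypothetical triage model logs each recommendation it makes.
audit = DecisionLogger(model_version="triage-model-0.1")
audit.record(
    inputs={"age": 54, "symptom_score": 7},
    decision="refer_to_specialist",
    rationale="symptom_score above referral threshold of 5",
    assumptions=["missing blood-pressure reading imputed from population mean"],
)
```

Writing each record as a structured, append-only entry is one way of making the resulting log usable both by human reviewers and by downstream audit tooling.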
AI systems can cause physical damage and economic loss. Legal systems will inevitably need to consider how to allocate legal responsibility for such loss or damage. As AI systems proliferate and are allowed to control more sensitive functions, unintended actions are likely to become increasingly dangerous. There should accordingly be program-level accountability, explaining why an AI system reached a particular decision, in order to address questions of legal responsibility. We explain in more detail in the Toolkit how this can be achieved in practice, including by the use of an ethics compliance log.
AI processes can include an element of randomness (for example, the outcome of tossing a coin). While randomness might help to reduce the risk of bias, it is unlikely to enhance:
The complexity of AI systems in combination with emerging phenomena they encounter mean that constant monitoring of AI systems, and keeping humans “in the loop”, may be required. One-time due diligence in advance of implementation may not be sufficient. Introducing (or maintaining) an element of human involvement in AI autonomous decision-making may well assist in demonstrating accountability where program-level accountability might otherwise be problematic.
Businesses should therefore identify how their AI may fail and what human safety nets can be implemented in the case of such failures. Where AI systems are used in safety-critical situations, a human safety override capability should be embedded into the AI’s functionality.
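One illustrative way of embedding such an override, sketched below with hypothetical names and a deliberately simplified safety check, is a supervisory layer that sits between the AI planner and its actuators, checks every proposed action against human-set constraints, and can be tripped by a human operator at any time.

```python
import threading


class SafetyOverride:
    """Human-controlled gate placed between an AI planner and its actuators."""

    def __init__(self):
        self._halted = threading.Event()

    def halt(self):
        """Called by a human operator (e.g. a physical or on-screen 'stop' control)."""
        self._halted.set()

    def resume(self):
        self._halted.clear()

    def execute(self, action, actuator):
        """Run an action only if no human halt is in force and the action passes checks."""
        if self._halted.is_set():
            raise RuntimeError("Human safety override engaged; action blocked.")
        if not self.is_within_safe_limits(action):
            raise ValueError(f"Action {action!r} rejected by safety constraints.")
        return actuator(action)

    @staticmethod
    def is_within_safe_limits(action):
        # Placeholder constraint: in practice, encode sector-specific safety rules here.
        return action.get("force", 0) <= 10


# Example: a hypothetical robotic actuator controlled through the override layer.
override = SafetyOverride()
result = override.execute({"move": "arm_left", "force": 3}, actuator=lambda a: f"executed {a}")
override.halt()  # a human operator stops further autonomous actions
```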
So-called “interactive machine learning” places interaction with humans at the centre of developing machine learning systems. It includes building in functionality that enables: (1) an AI system to “explain” its decision-making to a human; and (2) the human to give feedback on the system’s performance and decision outcomes.
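The sketch below illustrates such a loop in simplified form. The function names, the toy data and the use of a simple logistic regression model are assumptions made purely for illustration and do not describe any particular product: the system surfaces the basis for each prediction, and any human correction is fed back into the training data before the model is refit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: two features, two classes.
X = np.array([[0.2, 1.0], [0.9, 0.1], [0.4, 0.8], [0.8, 0.3]])
y = np.array([0, 1, 0, 1])
model = LogisticRegression().fit(X, y)


def explain(features):
    """Very simple 'explanation': each feature's contribution to the decision score."""
    contributions = model.coef_[0] * features
    return {f"feature_{i}": round(float(c), 3) for i, c in enumerate(contributions)}


def predict_with_feedback(features, human_label=None):
    """Predict, show the basis for the prediction, and learn from any human correction."""
    global X, y, model
    prediction = int(model.predict([features])[0])
    print("Prediction:", prediction, "| basis:", explain(np.array(features)))
    if human_label is not None and human_label != prediction:
        # Human feedback: add the corrected example and refit the model.
        X = np.vstack([X, features])
        y = np.append(y, human_label)
        model = LogisticRegression().fit(X, y)
    return prediction


predict_with_feedback([0.5, 0.5])                 # the system explains its decision
predict_with_feedback([0.5, 0.5], human_label=0)  # the model is refit if the human's label differs
```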
However, while keeping humans “in the loop” may help to achieve accountability, it may also limit the intended benefits of autonomous decision-making. Not having to involve humans may have been the reason the AI was implemented in the first place.
Legislatures and courts will need to clarify liability issues for AI systems to help designers, developers, manufacturers, and other persons responsible to understand their rights and obligations. Depending on the type of AI and the particular sector, legislative measures might include:
This approach can be vulnerable to being undermined by the autonomous nature of the technology (for the reasons discussed in more detail in the Ethics Risk Toolkit).
Future of Life Institute, Asilomar AI Principles, 2017.
House of Commons Science and Technology Committee, Robotics and Artificial Intelligence, 5th Report of Session 2016 – 2017, HC 145, 12 October 2016, page 17.
IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, version one, 2016, page 90.
AI is a field of computer science that includes machine learning, natural language processing, speech processing, expert systems, robotics, and machine vision.
AI will need to meet certain minimum ethical standards to achieve sufficient end user uptake, varying according to the type of AI and sector of deployment.
Courts in a number of countries have already had to address a range of legal questions in relation to the automatic nature of machines and systems.
The key question businesses need to consider is whether deploying AI will result in a shift of ethical and legal responsibility within their supply chain.