Executive summary
A decision to adopt AI can raise fundamental ethical and moral issues for society. These are complex and vital issues that are not typically the domain of lawyers.
Part of our Artificial Intelligence briefing
Global | Publication | July 2017
AI is a field of computer science that includes machine learning, natural language processing, speech processing, expert systems, robotics, and machine vision. Although many people assume that AI means “Artificial General Intelligence” (that is, machine intelligence that can perform any intellectual task as well as, or better than, a human), current AI systems remain some way off this level of sophistication. They comprise a range of methodologies and are deployed in an array of applications (see AI Encompasses A Wide Spectrum Of Technologies).
For AI systems to be accepted for use in a given market, as a matter of commercial reality the use of AI will need to be perceived by the participants in that market as meeting certain minimum ethical standards. Because legal and ethical responsibility are inextricably linked, that same commercial reality in effect imposes an imperative on businesses to address the corresponding legal issues (quite apart from a desire to limit risk). Some important ethical considerations include:
For more detail in relation to the ethical issues, see AI & ethics.
AI’s autonomous nature, more than any other characteristic, needs to be factored into any legal risk assessment of the technology. Such risk assessment should cover a range of considerations, including:
For more detail in relation to the legal issues, see What are the key legal risks?
How can a business deal with the distinctive ethical-legal risks that arise in relation to AI - that is, multiple scenarios demanding ethical judgments that are difficult to foresee, yet must be agreed in advance and applied consistently? It is not possible to eliminate these risks by guaranteeing that every ethical judgment will be correct. Instead, businesses should consider creating a defensible process for making ethical judgments. The elements of such an approach are set out in the Toolkit.
We bring together a range of sector-specific AI use cases, some already deployed and others likely to be adopted in the future. They demonstrate that AI will affect most, if not all, industry sectors in significant (and at times highly disruptive) ways.
AI is a field of computer science that includes machine learning, natural language processing, speech processing, expert systems, robotics, and machine vision.
AI will need to meet certain minimum ethical standards to achieve sufficient end user uptake, varying according to the type of AI and sector of deployment.
Courts in a number of countries have already had to address a range of legal questions arising from the autonomous nature of machines and systems.
The key question businesses need to consider is whether deploying AI will shift ethical and legal responsibility within their supply chain.