AI is a field of computer science that includes machine learning, natural language processing, speech processing, robotics, and machine vision. Although many people assume that AI means “Artificial General Intelligence” (that is, machine intelligence that performs any intellectual task as well as, or better than, a human can), current AI systems remain some way off from this level of sophistication. They draw on a range of methodologies and are deployed in an array of applications (see AI Encompasses A Wide Spectrum Of Technologies).
For AI systems to be accepted for use in a given market, as a matter of commercial reality the use of AI will need to be perceived by the participants in that market as meeting certain minimum ethical standards. Because legal and ethical responsibility are inextricably linked, that same commercial reality in effect imposes an imperative on businesses to address the corresponding legal issues (quite apart from a desire to limit risk). Some important ethical considerations include:
- Why are ethics a key issue for the AI industry?
- Can human values be embedded in AI?
- What steps should be taken to minimise the risk of bias?
- How can transparency be achieved?
- What steps are required to achieve accountability?
- How do we keep humans in the loop?
For more detail in relation to the ethical issues, see AI & ethics.
AI’s autonomous nature, more than any other characteristic, needs to be factored into any legal risk assessment of the technology. Such risk assessment should cover a range of considerations, including:
- Can responsibility for loss or damage be attributed to someone?
- What types of liability might be at issue?
- What is the potential impact on people?
- What is the impact on the supply of goods and services and on obtaining insurance, and what are the potential antitrust implications?
- What impact will AI have on a business’s intellectual property rights strategy?
For more detail in relation to the legal issues, see What are the key legal risks?
How can a business deal with the distinctive ethical-legal risks that arise in relation to AI, namely multiple scenarios demanding ethical judgments that are difficult to foresee, must be agreed in advance, and must be consistent? It is not possible to eliminate these risks by guaranteeing that every ethical judgment will be correct. Instead, businesses should consider creating a defensible process for making ethical judgments. The elements of such an approach are set out in the Toolkit.
We gather a range of sector-specific AI use cases that are already deployed, together with some that are likely to be adopted in the future. They demonstrate that AI will affect most, if not all, industry sectors in significant (and at times highly disruptive) ways.