AI: Balancing the opportunities and the risks
In its most recent budget announcement and the accompanying Industrial Strategy, the UK government announced that it will allocate funding towards the development of Artificial Intelligence (AI) in order to “put the UK at the forefront of the AI and data revolution.”1 Through these announcements, the government has made clear that it sees great potential in digital technology and its transformative powers, not only for the economy, but also for the ways in which we live and work. Indeed, the budget announcement cited research suggesting that “AI, for example, has the potential to increase productivity by up to 30% in some industries”,2 fuelling the creation of 80,000 new jobs annually.3 Additionally, it could “benefit households across the UK by up to £2,300 per year by 2030, and increase GDP by 10%.”4
With AI will come robots, smart software, driverless cars and automated decision-making, to name just a few examples, and these will in turn create many new industries, business models and exciting opportunities. But as AI creates these opportunities, it will also create new challenges for people, places and businesses, and we must be prepared to respond to those challenges as they arise.
As lawyers, we must be prepared, in particular, to respond to a range of potential new legal issues, including questions of liability. Because AI can act unexpectedly, or in ways that cannot be explained (for example, how it arrived at a particular decision), the courts will probably have to deal with new cases concerning how harm caused by AI fits within existing principles of liability. Can one blame a decision on an algorithm? Or will liability move further up the supply chain and rest with those responsible for creating the algorithm?
Businesses will want to consider whether any new types of civil or criminal liability will arise in connection with their supply chains, and whether certain liabilities will need to be reallocated through new contractual liability or indemnification arrangements. They will also want to consider the data privacy and cyber security risks associated with the use of AI. These questions are already being examined: in 2017 the European Parliament passed a resolution urging the European Commission to consider legislation, and a potential special legal status for AI robots, in order to determine responsibility for loss or damage caused by AI. While governments weigh these questions, businesses that choose to adopt AI in their operations will need to factor these risks into their own risk assessments.
As we move into what has been called the “fourth industrial revolution” and continue to embrace AI, we must remain cognisant that technology is not infallible and needs to be regulated. A recent newspaper article highlighted an important principle: “Robots and their software must remain in the service of humans, able to be switched off. Their ability to reason and to appear sentient needs regulating. Humans possess a creativity still denied to machines.”5 In chasing higher economic productivity and new technological advances, we must not lose sight of this principle, nor of the potential legal issues that may arise.
The author would like to thank Catherine Johnson (Trainee) for her assistance in preparing this legal update.
Footnotes
- HM Government, Industrial Strategy, November 2017, page 10.
- Bank of America Merrill Lynch, Robot Revolution – Global Robot & AI Primer, December 2015.
- McKinsey, Shaping the Future of Work in Europe’s Nine Digital Front-Runner Countries, 2017, https://www.mckinsey.com/global-themes/europe/shaping-the-future-of-work-in-europes-nine-digital-front-runner-countries.
- PwC, The Economic Impact of Artificial Intelligence on the UK Economy, 2017.
- The Guardian, “The Guardian view on productive capitalism: it needs humans, more than robots”, 24 November 2017, https://www.theguardian.com/commentisfree/2017/nov/24/the-guardian-view-on-productive-capitalism-it-needs-humans-more-than-robots.