Executive summary
A decision to adopt AI can raise fundamental and moral issues for society. These are complex and vital issues that are not typically the domain of lawyers.
Part of our Artificial Intelligence briefing
Global | Publication | June 2017
Courts in a number of countries have already had to address a range of legal questions in relation to the automatic nature of machines and systems. For example, in the U.S. the courts have heard claims against auto-pilot manufacturers and operators.1 However, AI’s defining characteristic is not simply that it can act and make decisions automatically (that is, by automation), but that it can do so autonomously. That is to say, the AI can learn, adapt, and make decisions without direct human instruction or intervention.
It would currently be unusual for a legal system to confer separate legal personality on AI.
Because AI does not have separate legal personality, a key issue in relation to establishing liability in connection with AI is to determine who is to be attributed with liability, if such liability can otherwise be established.
The “AI Supply Chain Participants”
Who might potentially be attributed with liability?
Like all IT systems, the development of AI will involve a supply chain and numerous “AI Supply Chain Participants”. Software developers and manufacturers will source expertise by subcontracting. Open source software or third party software components might also be included in the AI coding, raising the question of whether those involved in the development and supply of such components are similarly at risk of liability.
Contractually, those in the supply chain may need to address more complex liability allocations than would be the case in a non-AI IT supply arrangement. Take the position of a driverless car. In the absence of mechanical fault, the driver is typically liable for the loss they cause. AI, however, has the potential to shift that liability up the supply chain to the manufacturer. AI therefore potentially has a disruptive effect on traditional allocations of liability in some supply chains. For more information, see Norton Rose Fulbright, Autonomous Vehicles.
Attribution of both civil and criminal liability in many jurisdictions is determined by rules commonly known as “rules of attribution”. In common law jurisdictions the rules of attribution include agency principles in contract and vicarious liability in tort. They typically attribute liability to a legal person when that liability arises through the acts or omissions of a natural person (that is, a human).
The rules of attribution, separate legal personality, and AI
How does AI fit within the rules of attribution? History may help to provide an answer. “Joint stock companies”, or companies and corporations as we know them today, were incorporated to house risk, share the spoils of joint enterprise, and to create transferrable ownership interests (stocks and shares). Early on the courts in England decided that a company had a separate legal personality from its shareholders, and this model has been widely replicated in legal systems globally.6 The courts have, until comparatively recently, only rarely “pierced the corporate veil” to attribute liability directly to shareholders (although legislation in many jurisdictions can permit them to do so in certain circumstances).
Separate legal personality was conferred on companies and corporations for compelling public policy and commercial reasons. It may not be such a leap of faith to suggest that legislators may find compelling reasons to do so in the case of AI.
However, conferring separate legal personality on a company or corporation has given rise to a number of problems, not least being the problem of whether legal liability can be attributed to something that is deemed to exist by a legal fiction.
For example, it took some time for common law principles to develop which enabled civil and criminal liability to be attributed to a company or corporation, even though it is a non-human entity that acts only through the “arms and legs” or the “directing mind and will” of natural persons (humans).7
In the absence of separate legal personality for AI, could the acts or omissions of AI acting autonomously be attributed to a company or corporation? The law simply has not had to address these questions yet. To take the common law as an example, the courts would have to extend the rules of attribution to attribute liability to a legal person when that liability arises through the acts or omissions of something other than a natural person.
Because of these kinds of problems, the attribution to a legal person of liability for the autonomous acts or omissions of AI is likely to remain uncertain for some time.
However, a person may be independently civilly or criminally liable for its own acts or omissions without the need for attribution. Should AI – even if it acts entirely autonomously – be treated for liability purposes just like any other technology or inanimate object in the hands of the user? Or will the courts consider that, the more autonomous AI technologies are, the less they “can be considered as simple tools in the hands of other actors (such as the manufacturer, the owner, the user, etc)”?12
These types of issues (which we examine in more detail below) may first come up for consideration by the courts and legislators in relation to an AI technology we already have: autonomous vehicles.
Autonomous vehicles present a number of challenges for legal systems. They are going to change regulation of, and insurance in relation to, transportation. They may also change where liability sits in the supply chain for transport manufacturing. Harry Theochari, Global Head of Transport, Norton Rose Fulbright
Criminal law:
“My Robot Bought Illegal Drugs”
Rose Eveleth, My Robot Bought Illegal Drugs, BBC, 21 July 2015
In a recent incident in Switzerland, a bot named Random Darknet Shopper was given USD 100 of Bitcoin a week to spend and, using algorithmic decision-making, used the Bitcoin to buy drugs and a Hungarian passport over the Internet. The bot and its purchases were seized by the Swiss police.
Because AI is not defined as a legal person, a potential criminal wrong-doer cannot enter into a criminal conspiracy with AI, but in many jurisdictions the potential wrong-doer can commit an offence where the AI is used as a simple tool to commit criminal activity.
Examples of such offences might include using equipment for stealing, computer misuse offences, and possession or control of an article for use in fraud (there are of course jurisdictional variations in the way that such offences may be formulated).
Are there limits on when AI can be taken into account?
There are likely to be limits on when AI can be taken into account in relation to allegedly criminal activity – for example, where the AI’s acts cannot be treated as a mere extension of a human’s acts, or where the necessary mental element cannot be attributed to a human or corporate defendant.
Such limitations will present significant obstacles for a prosecutor, including where the AI system at issue acted of its own volition (perhaps by autonomously making a decision to do something, and then doing it), as the AI system’s own acts may not be a mere extension or consequence of a human’s acts.
For example, in bringing a prosecution against an individual or corporation, it may be extremely difficult to demonstrate that it had the necessary mental element in the commission of an act (for example, in the case of fraud, an intent to make a gain or cause a loss) where the relevant state of mind is to be found (if at all) in the opaque sub-routines of an AI system.
AI autonomous decision-making may push the boundaries of civil liability norms. Civil liability connotes a wide spectrum of non-criminal actionable wrongs which vary cross-jurisdictionally. In some respects the liability issues raised by AI are not new: “computers already operate independent of direct human supervision and make decisions that [cannot be predicted] by their designers or programmers”.14
Can analogies be drawn from other areas of law?
Other areas of existing law may be helpful in considering the potential civil liabilities in relation to AI. For example, it is possible the courts might draw analogies with, say, the liability of owners or keepers for the acts of animals, or the liability of principals and employers for the acts of their agents and employees.
What is new in terms of liability for the courts and legislators is the wholly autonomous nature in which AI technologies can make decisions or (in combination with robotics) undertake acts or omissions. Where a causative link is required between the alleged wrongdoing and the damage that has resulted, a key issue for the courts will be whether that characteristic breaks the chain of causation between the wrongdoing and the damage. That may be as much a factual inquiry as a legal one.
Such an inquiry may present challenges for the courts. “Many artificial intelligence processes can be opaque, which make it hard for an individual to take direct responsibility for them.”20 Expert evidence may be required.
Identifying the actionable AI “fault”
The courts will have to identify a contractual breach or other actionable conduct in relation to AI autonomous decision-making in order to establish civil liability in connection with it. The focus of enquiry may be on factors that were determined months or even years before the AI system caused the harm.
The code, the ethical parameters within which the AI system was set to operate, and the training data to which the AI system was exposed (among other things) may have determined the system’s “decision” that gave rise to the harm before the system was even set in live operation.
The scope of possible civil liability is potentially as broad as the range of parties who could be liable. AI Supply Chain Participants will need to consider the question of their own potential liability.
What could potential civil liability include?
Although not exhaustive, potential civil liability might include:
Product liability: in many jurisdictions, legislation provides that a manufacturer (and sometimes those distributing down the chain of supply) has strict liability for a defect in its products. The claimant does not generally have to prove fault (for example, negligence). In this sense liability is said to be strict. The role of product liability – and the responsibility that falls to businesses manufacturing these products – will probably grow when human actors become less responsible for the actions of a machine.21
The courts will need to consider issues such as whether AI is a “product” and whether its programming could be said to be a “defect” when it results in unexpected outcomes. The answers may vary by jurisdiction and on the facts.
Regulations: these may impose other forms of strict liability in relation to AI. For example, the European Parliament Resolution with Recommendations to the Commission on Civil Law Rules on Robotics22 recommends that civil liability for damage caused by robots be addressed through either a strict liability or a risk management approach, supported by a compulsory insurance scheme and a compensation fund.
Tort: in order to establish the tort of negligence, a claimant typically has to show a duty of care, a breach of that duty, and damage caused by the breach – each of which may be difficult to apply where the harm arises from AI autonomous decision-making.
The sheer scale in which AI systems are going to be used by banks and financial institutions in risk evaluations, pricing, lending, investment decisions, and robo-advice - just to name a few activities - means that it will be inevitable that disputes and litigation will arise involving AI autonomous decision-making in the Financial Institutions sector.
In addition, regulators are becoming increasingly focused on the systemic (or macro) implications of such technology in both the wealth management and retail sub-sectors.
Banks and financial institutions will need to undertake robust regulatory and compliance reviews of their proposed AI deployments, and ensure that their operational, contractual, and regulatory commitments can be met. James Bateson, Global Head of Financial Institutions
Requirements of civil law jurisdictions
In Germany, for example, in order to establish actionable negligence there needs to be a breach of a duty of care, an infringement of a legally protected interest (such as life, body, health, or property), causation between the breach and the damage, and fault on the part of the defendant.
Is the risk of harm foreseeable?
The prospect that AI will behave “in ways designers do not expect challenges the prevailing assumption within tort law that courts only compensate for foreseeable injuries. Courts might arbitrarily assign liability to a human actor even when liability is better located elsewhere for reasons of fairness or efficiency. Alternatively, courts could refuse to find liability because the defendant before the court did not, and could not, foresee the harm that the AI caused.”29
The risk that AI will act in an unexpected (and therefore unforeseeable) way may be particularly acute when AI interacts with other software systems. Could negligence be established when neither system working in isolation would have led to the harmful result?
Other forms of tort liability: breach of statutory duty might arise where a statutory obligation is wide enough to encompass AI. In some jurisdictions, strict liability in tort can also arise where a person loses control of something dangerous (it “escapes”) and it harms another.30 It is not clear whether such principles would be sufficiently wide so that “escape” could be equated with the lack of control characteristic of autonomous action by AI or, say, whether it would need to occur physically in conjunction with robotics.
Contract: AI also raises liability issues under contract law. Smart contracts deployed on distributed ledger (or blockchain) technology may act autonomously by purporting to enter into other, separate “follow-on” contracts. In e-commerce, software variously known as “electronic agents”, “intelligent software agents”, or simply “bots”, can act autonomously and purport to enter into contracts on a user's behalf (potentially without the user’s knowledge or involvement). Such technologies have the potential to give rise to liability where a party disputes that a binding contract has been formed. Whether that is the case may vary cross-jurisdictionally, and as between common law and civil law jurisdictions (see Norton Rose Fulbright’s and R3’s publication, Can Smart Contracts be Legally Binding Contracts for more details).
Regulation in relation to AI could include: legislation specifically addressing AI; codes of practice and compulsory disclosure requirements; and rules encoded directly within the operation of the AI technology itself.
Examples of the three approaches are in the early stages of development across a range of jurisdictions. In Europe, for example, the European Commission has so far not focused specifically on AI, although aspects of its digital single market programme address related issues (such as the Internet of Things and Big Data). However, the Commission will probably address AI issues more directly in the future.
The European Parliament:
“The future legislative instrument should be based on an in-depth evaluation by the Commission determining whether the strict liability or the risk management approach should be applied”
European Parliament, Resolution with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), 16 February 2017
The European Parliament recommended that the measures to be developed by the European Commission should include a number of specific elements, including mandatory registration of AI-enabled robotics and mandatory access to source code to investigate accidents and damage.
The European Parliament also proposed a so-called “Charter on Robotics”, a “Code of Ethical Conduct for Robotics Engineers”, a “Code for Research Ethics Committees”, and licensing criteria for designers and users.
The European Commission has not so far committed to act on the European Parliament’s recommendations. Given the growing interest in this area and the Commission’s strong focus on the digital economy, however, further developments can be expected at the EU level in the coming years.
Codes of practice and compulsory disclosure
Other organisations, such as the Institute of Electrical and Electronics Engineers and Oxford University’s Towards a Code of Ethics for Artificial Intelligence Research project, are working to develop codes of ethical practice in relation to AI.
A so-called “Turing Red Flag Law” has also been proposed under which an AI bot would be required to inform its users that it is a bot, and not a human.27
Regulation, or rules reflecting accepted modes of operation, may ultimately be able to be encoded within the operation of AI technology, so that the code both operates according to, and self-reinforces, applicable regulations.
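By way of a purely illustrative sketch (not drawn from this publication – the rule names, the threshold, and the Action structure are assumptions made for the example), the following Python fragment shows one way such encoded rules might operate: a hypothetical compliance layer checks an AI-proposed action against a set of rules before the action is executed.

```python
# Purely illustrative sketch: a hypothetical compliance layer that checks an
# AI-proposed action against encoded rules before the action is executed.
# Rule names, the 10% threshold, and the Action structure are assumptions
# made for this example, not a reference to any actual regulatory scheme.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Action:
    description: str
    price_change_pct: float
    involves_personal_data: bool

# Each rule returns None if satisfied, or a short reason if violated.
Rule = Callable[[Action], Optional[str]]

def max_price_move(action: Action) -> Optional[str]:
    if abs(action.price_change_pct) > 10.0:  # assumed cap for the example
        return "price change exceeds the permitted threshold"
    return None

def personal_data_needs_basis(action: Action) -> Optional[str]:
    if action.involves_personal_data:
        return "processing of personal data requires a documented lawful basis"
    return None

def execute_if_compliant(action: Action, rules: List[Rule]) -> bool:
    violations = [reason for reason in (rule(action) for rule in rules) if reason]
    if violations:
        print("Blocked:", action.description, "-", "; ".join(violations))
        return False
    print("Executed:", action.description)
    return True

execute_if_compliant(
    Action("reprice product line", price_change_pct=12.5, involves_personal_data=False),
    [max_price_move, personal_data_needs_basis],
)
```

The design point is simply that the constraint sits inside the decision path, so the system refuses (and records why it refuses) rather than relying on after-the-fact review.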
There will be big winners and losers as … robots and artificial intelligence transform the nature of work. Societies will face further challenges in directing and investing in technologies that benefit humanity instead of destroying it or intruding on basic human rights of privacy and freedom of access to information. Shaping Tomorrow, Artificial Intelligence – Impacts on Society, 30 September 2014
AI has the potential to raise significant ethical-legal questions about human rights. AI-enabled healthcare, weapons systems, criminal justice, and welfare are all areas where human involvement in decision-making may be replaced by autonomous decision-making. For example, human rights can be significantly affected by AI that can undertake suspect identification (based on Big Data analytics) and criminal sentencing (based on predicted recidivism).
“State v. Loomis: Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessments in Sentencing”
State v. Loomis, 130 Harv. L. Rev. 1530, 10 March 2017
Recidivism has become a major focus for criminal sentencing. In the U.S., pre-sentencing investigation reports often assess recidivism risk. The Wisconsin Supreme Court28 has recently held that a trial court’s use of an algorithmic risk assessment in sentencing did not violate the defendant’s due process rights. This was so even though the methodology used to provide the assessment was not disclosed (this was because the technology vendor considered it to be a trade secret).
The court said that warnings should in future accompany the use of algorithmic risk assessments. Independent testing of the technology has, however, revealed that offenders of colour were more likely to receive higher risk ratings than white offenders.
“There are currently few studies analysing these assessments, in part because the lack of methodological transparency from data assessment companies has made study difficult.”
Although not strictly a human rights issue, businesses may similarly be affected - for example, AI may predict or identify likely tax irregularities or potential corporate fraud, exposing businesses to significant investigatory costs where human intervention could have discounted the need for an investigation.
Existing laws protecting human rights globally vary vastly in the level of protection conferred. They may be ill-adapted to address the very human concerns that may arise from the operation of AI. Businesses will need to consider both the legal implications and the wider reputational issues of failing to factor in the impact that AI used by them may have on human rights.
New data privacy laws, such as the EU General Data Protection Regulation (GDPR), are beginning to deal with AI explicitly. Under such privacy laws, key issues include whether the data used by an AI system constitutes personal data, whether the requirements relating to profiling can be met, and whether restrictions on solely automated decision-making apply.
As computing power increases with time and as AI algorithms improve, information that was once thought to be private can be linked to individuals at a later stage in time. This raises data protection concerns for AI deployments.
In addition, new data protection laws are addressing decision-making undertaken by AI explicitly, and imposing specific requirements. These will need to be factored into AI systems by design at the outset. Nick Abrahams, Global Head of Technology and Innovation
Typically, AI systems require large amounts of data to make intelligent decisions. Although some of this information alone might not be considered to be personal data, a large amount of it in combination with different data sources might make it possible to identify an individual and so breach data privacy laws.
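As a purely illustrative sketch (the records, field names, and datasets below are fabricated for the example and are not drawn from this publication), the following Python fragment shows how two individually innocuous datasets can be linked on shared “quasi-identifiers” to single out an individual:

```python
# Illustrative sketch only: combining a notionally "anonymised" dataset with a
# public one on shared quasi-identifiers (postcode, birth year, gender) can
# re-identify an individual. All records below are fabricated.
health_records = [  # no names, so superficially "anonymous"
    {"postcode": "AB1 2CD", "birth_year": 1980, "gender": "F", "diagnosis": "condition X"},
    {"postcode": "EF3 4GH", "birth_year": 1975, "gender": "M", "diagnosis": "condition Y"},
]
public_profiles = [  # e.g. drawn from a publicly available source
    {"name": "Jane Example", "postcode": "AB1 2CD", "birth_year": 1980, "gender": "F"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "gender")

def link(records, profiles):
    for record in records:
        key = tuple(record[k] for k in QUASI_IDENTIFIERS)
        for profile in profiles:
            if tuple(profile[k] for k in QUASI_IDENTIFIERS) == key:
                yield profile["name"], record["diagnosis"]

for name, diagnosis in link(health_records, public_profiles):
    print(name, "can be linked to", diagnosis)
```

The same linkage logic scales to the large, multi-source datasets on which AI systems are typically trained, which is why combination of data sources matters for the personal data analysis.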
Profiling
The use of AI raises data protection issues in relation to profiling. As users of AI systems frequently struggle to understand how such systems arrive at particular outputs or decisions, it is likely they will find it difficult to meet obligations under data privacy laws to explain to data subjects the logic involved in, and the significance and consequences of, such profiling.
It comes as no surprise, therefore, that organisations working in the field of AI, such as the Institute of Electrical and Electronics Engineers, expect there to be an increasing need for rationale and explanation as to how a decision was reached by the algorithm.28
“Artificial Intelligence Poses Data Privacy Challenges”
Stephen Gardner, Artificial Intelligence Poses Data Privacy Challenges, Bloomberg, 26 October 2016
“International privacy regulators are increasingly concerned about the need to balance innovation and consumer protection in artificial intelligence and other data driven technologies. … Data protection officials from more than 60 countries expressed their concerns over challenges posed by the emerging fields of robotics, artificial intelligence and machine learning due to the new tech's unpredictable outcomes. The global privacy regulators also discussed the difficulties of regulating encryption standards and how to balance law enforcement agency access to information with personal privacy rights. … For example, data-driven machines may have the ability to analyse sensitive medical data and make medical diagnoses, thereby potentially revolutionising the health-care industry. … Machines that learn would act like a chef: they see the ingredients and come up with something new.”
Data privacy laws may give rise to other hurdles that will need to be addressed in the context of AI and personal data. For example, the GDPR provides that (subject to a few exceptions) individuals “shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. As AI systems become more complex and draw on increasingly vast amounts of data, it may become very challenging for those operating AI systems to explain how a decision was actually reached.
What are the pre-product design implications for businesses?
A business should consider whether it would be useful or appropriate for a human to have the power to intervene before a decision by an AI system is finalised in order to check the decision, or to have the ability to override the decision afterwards.
New data privacy laws increasingly emphasise accountability and the idea of privacy by design and default. Designing AI to meet data privacy requirements from the outset is likely to be more cost-effective in the long run. This could include, for example, designing AI systems that can generate logs demonstrating what data was considered and what factors were taken into account.
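By way of a purely illustrative sketch (the scoring function, thresholds, and log format are assumptions made for the example, not a prescribed approach), the following Python fragment shows how an automated decision could be logged with the data considered and the factors relied on, and how borderline cases could be routed to a human reviewer:

```python
# Illustrative sketch only: a hypothetical wrapper that records which inputs
# and factors an automated decision relied on, and refers borderline decisions
# to a human reviewer. The scoring function, thresholds, and log format are
# assumptions for the example.
import json
import time

def decide_with_audit_trail(applicant: dict, score_fn, approve_threshold=0.7,
                            review_band=0.1, log_path="decision_log.jsonl"):
    score = score_fn(applicant)
    needs_human_review = abs(score - approve_threshold) < review_band
    decision = "refer_to_human" if needs_human_review else (
        "approve" if score >= approve_threshold else "decline")
    entry = {
        "timestamp": time.time(),
        "inputs_considered": sorted(applicant.keys()),  # what data was used
        "score": round(score, 3),                       # factor behind the outcome
        "decision": decision,
    }
    with open(log_path, "a") as f:  # append-only audit log
        f.write(json.dumps(entry) + "\n")
    return decision

# Toy scoring function standing in for an AI model.
decision = decide_with_audit_trail(
    {"income": 42000, "existing_debt": 5000, "years_at_address": 3},
    score_fn=lambda a: min(1.0, a["income"] / (a["existing_debt"] * 10 + 1)),
)
print(decision)
```

Building the log and the human-review hook into the decision path from the outset is one way of giving practical effect to accountability and privacy by design, and it also supports the explanation obligations discussed above.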
Where AI comes to be regarded as integral to critical national infrastructure or national security, government and security agencies may in time wish to have access to, say, data generated by AI, its algorithms, decision outputs, and explanations, and may use existing or new laws to obtain such access.
As with any IT system, AI is susceptible to cyber intrusion and “hacking”. Given the potential for both economic loss and physical damage (for example, AI-enabled robotics, or AI-controlled infrastructure) as a result of such incidents, businesses will need to ensure that all appropriate steps are implemented to: (1) guard against such risks; and (2) mitigate any breaches, in each case in accordance with applicable legal requirements and industry practices.
The Infrastructure, Mining and Commodities sector already uses AI in a number of areas. Cyber intrusion and hacking in this context could have particularly serious consequences. Take, for example, autonomous drills and haulage, drones for exploration, surveys, and mine operations, and AI-enabled robotics used in infrastructure. Any loss of control of such technology through cyber intrusion or hacking could result in significant financial losses, personal injury, and reputational damage to a business.
Such risks need to be factored into the risk assessment when deploying AI, with safeguards built in at the design stage of the technology. Nick Merritt, Global Head of Infrastructure, Mining and Commodities
Insurance was an important element of the solutions discussed in the European Parliament Resolution on Civil Law Rules on Robotics.30 However, the implications of AI for insurability have not yet been explored in any depth.
An important element for the pricing of risks is that the potential insured events occur within a certain predictable range and that there is a causal relationship. Although “cyber” policies (covering data destruction, hacking, and liability for damage to third parties caused, for example, by errors and omissions) are available in the market, policy coverage will need to be reviewed to ensure that AI risks do not fall within policy exclusions.
Insurance may not be able to address (or appropriately price) the risks where the “behaviour” of AI goes beyond a predictable range of foreseeable risks – for example, where an individual AI system behaves in a way its designers could not have predicted, or where disparate AI systems interacting with each other produce unexpected “swarm” effects.
What is an AI swarm?
A metaphor taken from nature (for example, the collective behaviour of ant colonies, bird flocking, and fish schooling), “swarm” effects can arise out of the collective behaviour of disparate AI systems interfacing with each other. There is no centralised control structure dictating how individual AI systems should behave, and local interactions between them may lead to "intelligent" global behaviour.
All data-driven systems are susceptible to bias based on factors such as the choice of training data sets, which are likely to reflect subconscious cultural biases. House of Commons Science and Technology Committee, Robotics and Artificial Intelligence, 5th Report of Session 2016 – 2017, HC 145, 12 October 2016, page 18, quoting Drs Koene and Hatada from the University of Nottingham
A key area of concern in the employment context is the use of AI in the recruitment process. It would be reasonable to assume that an autonomous decision-making process for shortlisting applicants would be more objective and less prone to bias (in particular, unconscious bias) than the traditional process. The reality, however, is that AI in the context of human resources and recruitment can suffer from undetectable bias:
“AI Programs Exhibit Racial and Gender Biases, Research Reveals”
Hannah Devlin, AI Programs Exhibit Racial and Gender Biases, Research Reveals, The Guardian, 13 April 2017
Research published in the journal Science31 has shown that an AI tool that helps computers to interpret natural language exhibits marked gender and racial biases. As AI gets closer to acquiring human language abilities, it is also absorbing the ingrained biases within the language that humans use.
The subject of the research was AI that used “word embedding”. This approach is already used in web searches and in translation. It interprets language by building a mathematical representation using “vectors” between words (based on words that most frequently appear alongside the word in question).
The research shows that troubling human biases can be acquired by this methodology: certain groups of names, for example, were found to be more strongly associated with “pleasant” words than others. The research suggests that AI dependent on word embedding which is used to select candidates for interview based on their CVs might therefore be more likely to choose those candidates whose names are associated with the pleasant words. The algorithm picked up social prejudices inherent in the data training sets to which it was exposed.
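For illustration only (the tiny hand-made vectors below are fabricated for the example; the cited research used embeddings trained on large text corpora, such as word2vec or GloVe models), the following Python fragment shows the mechanics of measuring such an association bias using cosine similarity between word vectors:

```python
# Illustrative sketch only: measuring association bias in word embeddings with
# cosine similarity, in the spirit of the study cited above. The 3-dimensional
# "embeddings" below are fabricated purely to show the mechanics.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

vectors = {
    "name_group_a": [0.9, 0.1, 0.2],
    "name_group_b": [0.2, 0.8, 0.3],
    "pleasant":     [0.85, 0.15, 0.25],
    "unpleasant":   [0.25, 0.75, 0.35],
}

def association(target):
    # Positive value: the target sits closer to "pleasant" than to "unpleasant".
    return (cosine(vectors[target], vectors["pleasant"])
            - cosine(vectors[target], vectors["unpleasant"]))

for group in ("name_group_a", "name_group_b"):
    print(group, round(association(group), 3))
```

A systematic gap between the two groups’ association scores is the kind of signal such research relies on; the bias originates in the training corpus, not in any explicit rule written by the developer.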
Employers in many jurisdictions have a legal responsibility to avoid discrimination and bias in the recruitment process. They will therefore need to consider whether it is prudent to accept the results of delegating the decision-making process to an AI system and, if they choose to do so, whether there is any action they can take to avoid or reduce the risk of bias or discrimination.
Robot bosses
AI-enabled “robot bosses” are now in active operation, empowered to make decisions about employee task allocations for day-to-day activities with the object of improving workforce efficiencies.32 What would be the position if, say, less career-enhancing or more mundane tasks were consistently allocated to a group of people sharing a particular protected characteristic (such as race or gender)? The issues which arise in relation to the use of AI in recruitment are equally relevant here.
Bias and provision of goods and services
Laws may prohibit bias in the provision of goods and services. For example, in the European Union discrimination in the provision of goods and services is prohibited on grounds of sex and race under the Directive Implementing the Principle of Equal Treatment between Men and Women in the Access to and Supply of Goods and Services (2004/113/EC) and the Race Directive (2000/43/EC).
Some member states of the European Union have “gold plated” such requirements. For example, the United Kingdom’s Equality Act 2010 prohibits discrimination by service providers in relation to goods, services, and facilities in relation to protected characteristics. The protected characteristics are age, disability, gender reassignment, pregnancy and maternity, race, religion or belief, sex, and sexual orientation.
Goods, services, and facilities covered by the legislation could include facilities by way of banking or insurance or for grants, loans, credit or finance; facilities for transport or travel; access to and use of means of communication; and access to and use of information services (among others).
AI systems that make decisions on behalf of a service provider in relation to the provision of goods, services, and facilities could potentially put the service provider in breach of the 2010 Act if, say, the AI results in the service provider, on the basis of one or more of the protected characteristics, not providing a person with the service, goods, or facilities at all, or providing them on less favourable terms or to a lower standard than they are provided to others.
The nature of AI gives rise to challenges under existing IP legal frameworks, many of which do not address the IP position in relation to machine-created works and autonomous actions that infringe IP rights.
AI as IP creator: AI can create new works and inventions. Who owns the IP rights in relation to machine-generated original works? Can a machine be an inventor under the applicable patent law? If the machine cannot be an inventor, then another party may be the inventor or owner of the rights. Given the uncertainty regarding ownership, businesses should address the ownership of machine-generated works and inventions expressly in their agreements with AI Supply Chain Participants.
AI as IP infringer: a machine may act or operate autonomously in a manner that infringes third party IP rights. If existing laws do not extend liability to a machine, then a related stakeholder (such as the owner, developer, operator, or another AI Supply Chain Participant) may be responsible.
What IP protection is available for AI?
AI products and services may be protected by different types of IP rights, including patents, copyright in the underlying software, database rights, and trade secrets or confidential information.
An IP strategy: businesses developing or using AI will need to consider developing and rolling out a “layered” approach to IP rights in connection with their AI in order to protect different aspects of the innovation and to reduce the risk of infringement claims against them by third parties. There should be clear agreements in place with those working with a business in connection with AI systems (for example, consultants and suppliers).
The product of AI based on the machine learning algorithm known as the “Generative Adversarial Network” is being cited as one of the most sophisticated AIs ever created. This type of AI gives us a glimpse of what is to come – a world where the machine creates new and better things autonomously. But what is the role of IP in such a scenario? Understanding how AI operates, how current IP law is applied to AI and its generated works, and how this impacts upon the ability of industry to protect its AI-related work is important, and will require close monitoring of industry voices and governmental policies over the years ahead.
Read our in-depth article on this topic here.
Antitrust authorities, including the European Commission, are increasingly sensitive to the risk that AI, and algorithms more generally, can play a part in antitrust violations.
I don't think competition enforcers need to be suspicious of everyone who uses an automated system for pricing. But we do need to be alert. Because automated systems could be used to make price-fixing more effective. That may be good news for cartelists. But it's very bad news for the rest of us. Margrethe Vestager, EU Commissioner for Competition, Algorithms and competition, Bundeskartellamt 18th Conference on Competition, 16 March 2017
In recent years, the Commission and other European competition authorities have become increasingly concerned about the competitive implications of the growing use of algorithms. While the authorities recognise the pro-competitive uses of algorithms to help consumers find the lowest prices, they have already raised a number of concerns.
These concerns include the possibility that automated systems can help make price-fixing more effective, for instance by helping monitor deviations from price-fixing agreements or to implement price-fixing agreements in the first place, or facilitating other antitrust violations.
The growth of AI is likely to exacerbate these concerns. More speculatively, competitors’ independent use of AI may lead to parallel behaviour without any coordination. Such behaviour is not currently prohibited, but can be expected to attract increasing antitrust scrutiny.
Although not directly related to AI, the Commission has looked closely at Big Data issues in a number of merger reviews, including Microsoft/LinkedIn, Facebook/WhatsApp and Google/Doubleclick. Although the Commission has so far found no concerns, the role of data is set to become an increasingly important factor in the Commission’s assessment of how a merger affects competition.
Checklist – What key legal questions should in-house counsel be asking in relation to AI?
See Stephen W. Wu, Unmanned Vehicles and U.S. Product Liability Law [2012] JILawInfoSci 15; and David C. Vladeck, Machines without Principals: Liability Rules and Artificial Intelligence, 89 Wash. L. Rev. 117 (2014).
Software Solutions Partners Ltd, R (on the application of) v HM Customs & Excise [2007] EWHC 971, at paragraph 67.
United States of America v. Athlone Industries, Inc., 746 F.2d 977, 979 (3d Cir. 1984), U.S. Court of Appeals for the Third Circuit.
BGH, judgment of 16 October 2012 – X ZR 37/12.
See BGH, judgment of 7 November 2001 – VIII ZR 13/01; BGH, judgment of 26 January 2005 – VIII ZR 79/04; BGH, judgment of 16 October 2012 – X ZR 37/12.
Salomon v Salomon and Co Ltd [1897] AC 22 is the seminal English authority establishing the separate legal personality of a company; and Santa Clara Cnty. v S. Pac. R. Co., 118 U.S. 394 (1886) is U.S. authority for the proposition that corporations are persons within the intent of the clause in section 1 of the Fourteenth Amendment to the Constitution of the United States (which forbids a state to deny to any person within its jurisdiction the equal protection of the laws).
For common law examples, see Meridian Global Funds Management Asia Ltd v The Securities Commission [1995] UKPC 5; Jetivia SA v Bilta (UK) Ltd [2015] UKSC 23; and Howmet Ltd v Economy Devices Ltd [2016] EWCA Civ 847.
Report of the 2015 Study Panel, Stanford University, Artificial Intelligence and Life in 2030, page 47.
The Electronic Transactions Act 1999 (Cth) (Australia).
The Electronic Communications and Transactions Act 2002 (South Africa).
European Parliament, Resolution with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), 16 February 2017.
European Parliament Committee on Legal Affairs, Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics, 2015/2103 (INL), 31 May 2016, page 5.
Tesco Supermarkets Ltd v Nattrass [1971] UKHL 1.
Wendell Wallach, Implementing Moral Decision-Making Faculties in Computers and Robots, AI & Soc (2008) 22: 463 – 475, page 464.
See Stephen W. Wu, Unmanned Vehicles and U.S. Product Liability Law [2012] JILawInfoSci 15; and David C. Vladeck, Machines without Principals: Liability Rules and Artificial Intelligence, 89 Wash. L. Rev. 117 (2014).
Clerk & Lindsell on Torts, 21st ed., chapter 21 (Animals), paragraph 21-01.
See, for example, Corinthian Pharmaceutical Sys., Inc. v. Lederle Labs., 724 F. Supp. 605, 610 (S.D. Ind. 1989); and State Farm Mut. Auto. Ins. Co. v. Bockhorst, 453 F.2d 533 (10th Cir. 1972).
See, for example, Software Solutions Partners Ltd, R (on the application of) v HM Customs & Excise [2007] EWHC 971.
In relation to the issue of contract formation (in relation to email auto-replies), the German courts have held that actions by machines/software are to be regarded as a declaration of intent by a human, in cases where the machine/software merely executes actions that have previously been determined by the individual. Accordingly, such human behaviour causes the actions by the machine/software: BGH, judgment of 16 October 2012 – X ZR 37/12; BGH, judgment of 26 January 2005 – VIII ZR 79/04; Regional Court of Cologne, judgment of 16 April 2003 – 9 S 289/02. Similarly, in relation to offering agents at online auctions, according to relevant German case law it is irrelevant whether a declaration by a machine is made due to specific programming or due to other external effects to which the machine is reacting: District Court of Hannover, judgment of 7 September 2001 – 501 C 1510/01.
Government Office for Science (UK), Artificial Intelligence: Opportunities and Implications for the Future of Decision-Making, GS/16/19, 2015, page 15.
Report of the 2015 Study Panel, Stanford University, Artificial Intelligence and Life in 2030, page 46.
European Parliament, Resolution with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), 16 February 2017.
Report of the 2015 Study Panel, Stanford University, Artificial Intelligence and Life in 2030, page 46 (emphasis added).
For example, Rylands v Fletcher [1868] UKHL 1.
See, for example, the proposals of the House of Lords Science and Technology Select Committee, Connected and Autonomous Vehicles: The Future, 2nd report of session 2016 – 2017, HL Paper 115, 15 March 2017.
European Parliament, Resolution with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), 16 February 2017.
Michael Byrne, AI Professor Proposes 'Turing Red Flag Law', Motherboard.com, 6 June 2016.
State v Loomis, 881 N.W.2d 749 (Wis. 2016).
IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, version one, 2016, page 34.
European Parliament, Resolution with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), 16 February 2017.
Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan, Semantics Derived Automatically From Language Corpora Contain Human-Like Biases, Science, volume 356, issue 6334, pages 183 to 186, 14 April 2017.
AI is a field of computer science that includes machine learning, natural language processing, speech processing, expert systems, robotics, and machine vision.
AI will need to meet certain minimum ethical standards to achieve sufficient end user uptake, varying according to the type of AI and sector of deployment.
The key question businesses need to consider is whether deploying AI will result in a shift of ethical and legal responsibility within their supply chain.