Legal risk assessment
AI’s autonomous nature, more than any other characteristic, needs to be factored into any legal risk assessment of the technology. Such an assessment should include a range of considerations, including those set out below.
Can responsibility for loss or damage caused by AI be attributed to someone?
It would currently be unusual for a legal system to confer separate legal personality on AI. For example:
- English law: in English common law an automated system – even a robot – cannot currently be regarded as an agent, because only a person with a mind can be an agent in law;
- U.S. position: a U.S. court has observed that “robots cannot be sued” for similar reasons; and
- Germany: in the context of contract formation, according to the case law of the Federal Supreme Court (Bundesgerichtshof (BGH)) and the prevailing opinion in German legal literature, machines and software cannot make a valid declaration of intent (a required element of contract formation). In the courts’ opinion, the subjective element of a declaration of intent requires human behaviour and legal capacity, which machines and software lack and which cannot be replaced by AI. Accordingly, machines and software are also unable to act as agents, as they lack the necessary legal capacity. Instead, the person responsible for the machine or software is deemed to act with a general intention to declare intent, which is then attributed to the actions of the machine or software.
Because AI does not have separate legal personality, a key issue in relation to establishing liability in connection with AI is to determine who is to be attributed with liability, if such liability can otherwise be established.
The “AI Supply Chain Participants”
Who might potentially be attributed with liability?
- the commissioner of the AI system: the person paying for its design – for example, a business customer
- the designer of the system: this may be a software developer or manufacturer
- the person who writes the AI design/technical/functional specifications: typically a software developer or manufacturer
- the software programmer: typically a software developer
- the person who licenses or distributes the system: typically the developer or manufacturer, but it could equally be another person
- the person who integrates, installs, or configures the system on the hardware: typically a software developer or manufacturer, but it could include a distributor or even the customer
- the person who tests the system: typically a software developer or manufacturer, but it could include a distributor or even the customer
- the person who trains the system and/or who supplies data sets to train it: this could include the software developer or manufacturer, a distributor, or even the customer
- a subcontractor: any person to whom any of the above activities is subcontracted
- the owner of the AI system or the hardware within which it operates: typically the end user or customer
- the operator of the AI system: an end user or customer may not own the system but may be sublicensed to use it by the owner, or to operate it on the owner’s behalf
Like all IT systems, AI will be developed through a supply chain involving numerous “AI Supply Chain Participants”. Software developers and manufacturers will source expertise by subcontracting. Open source software or third party software components might also be included in the AI coding (will those involved in the development and supply of such components similarly be at risk of liability?).
Contractually, those in the supply chain may need to address more complex liability allocations than would be the case in a non-AI IT supply arrangement. Take the position of a driverless car. In the absence of mechanical fault, the driver is typically liable for any loss they cause. AI, however, has the potential to shift that liability up the supply chain to the manufacturer. AI therefore potentially has a disruptive effect on traditional allocations of liability in some supply chains. For more information, see Norton Rose Fulbright, Autonomous Vehicles.
Attribution of both civil and criminal liability in many jurisdictions is determined by rules commonly known as “rules of attribution”. In common law jurisdictions the rules of attribution include agency principles in contract and vicarious liability in tort. They typically attribute liability to a legal person when that liability arises through the acts or omissions of a natural person (that is, a human).
The rules of attribution, separate legal personality, and AI
How does AI fit within the rules of attribution? History may help to provide an answer. “Joint stock companies”, or companies and corporations as we know them today, were incorporated to house risk, share the spoils of joint enterprise, and to create transferrable ownership interests (stocks and shares). Early on the courts in England decided that a company had a separate legal personality from its shareholders, and this model has been widely replicated in legal systems globally. The courts have, until comparatively recently, only rarely “pierced the corporate veil” to attribute liability directly to shareholders (although legislation in many jurisdictions can permit them to do so in certain circumstances).
Separate legal personality was conferred on companies and corporations for compelling public policy and commercial reasons. It may not be such a leap of faith to suggest that legislators may find compelling reasons to do so in the case of AI.
However, conferring separate legal personality on a company or corporation has given rise to a number of problems, not least the problem of whether legal liability can be attributed to something that is deemed to exist by a legal fiction.
For example, it took some time for common law principles to develop which enabled civil and criminal liability to be attributed to a company or corporation, even though it is a non-human entity that acts only through the “arms and legs” or the “directing mind and will” of natural persons (humans).
In the absence of separate legal personality for AI, could the acts or omissions of AI acting autonomously be attributed to a company or corporation? The law simply has not had to address these questions yet. To take the common law as an example, the courts would have to extend the rules of attribution to attribute liability to a legal person when that liability arises through the acts or omissions of something other than a natural person.
Because of these kinds of problems:
- regulatory bodies in the United States, Canada, and elsewhere are setting the conditions under which, for example, software can enter into a binding contract on behalf of a person. Australia and South Africa already have legislation addressing this issue; and
- the European Parliament has recommended that, in the long run, autonomous AI in conjunction with robotics could be given the status of electronic persons, perhaps in conjunction with a compulsory insurance scheme or a compensation fund.
However, a person may be independently civilly or criminally liable for its own acts or omissions without the need for attribution. Should AI – even if it acts entirely autonomously – be treated for liability purposes just like any other technology or inanimate object in the hands of the user? Or will the courts consider that, the more autonomous AI technologies are, the less they “can be considered as simple tools in the hands of other actors (such as the manufacturer, the owner, the user, etc)”?
These types of issues (which we examine in more detail below) may first come up for consideration by the courts and legislators in relation to an AI technology we already have: autonomous vehicles.
Autonomous vehicles present a number of challenges for legal systems. They are going to change regulation of, and insurance in relation to, transportation. They may also change where liability sits in the supply chain for transport manufacturing. Harry Theochari, Global Head of Transport, Norton Rose Fulbright
Is there a risk of criminal liability?
The criminal law in many jurisdictions:
- often requires intent (or another similar mental element) in order for criminal liability to be established (a mental element is not required for strict liability offences). In the future, where harm is caused by AI, the courts may need to determine whether a person has the necessary intent relating to the harm in order to be criminally liable for it. This is likely to be a complex task; and
- generally recognises that offences can be committed by individuals or by corporate entities. Because (as far as we are aware at the date of publication) AI is not legally defined as a legal person for all purposes in any developed legal system, an AI system itself cannot be prosecuted for criminal activity. However, AI can nonetheless be used in criminal activity.
“My Robot Bought Illegal Drugs”
Rose Eveleth, My Robot Bought Illegal Drugs, BBC, 21 July 2015
In a recent incident in Switzerland, a bot named Random Darknet Shopper was given USD 100 of Bitcoin to spend each week and, with algorithmic decision-making, used the Bitcoin to buy drugs and a Hungarian passport over the Internet. The bot and its purchases were seized by the Swiss police.
Because AI is not defined as a legal person, a potential criminal wrong-doer cannot enter into a criminal conspiracy with AI, but in many jurisdictions the potential wrong-doer can commit an offence where the AI is used as a simple tool to commit criminal activity.
Examples of such offences might include using equipment for stealing, computer misuse offences, and possession or control of an article for use in fraud (there are of course jurisdictional variations in the way that such offences may be formulated).
Are there limits on when AI can be taken into account?
There are likely to be limits on when AI can be taken into account in relation to allegedly criminal activity. For example:
- under English criminal law, the acts of an AI system can only be taken into account where the AI constitutes a mere extension of the alleged human offender’s acts; the AI system itself will not be subject to separate criminal liability; and
- although it is already well established that the criminally guilty intent of a person acting as the directing mind and will of a company can be imputed to that company as a matter of common law, as noted in Can Responsibility for Loss or Damage Caused by AI be Attributed to Someone?, the common law rules of attribution do not currently attribute liability to a legal person when that liability arises through the acts or omissions of something other than a natural person (that is, a human).
Such limitations will present significant obstacles for a prosecutor, including where the AI system at issue acted of its own volition (perhaps by autonomously making a decision to do something, and then doing it), as the AI system’s own acts may not be a mere extension or consequence of a human’s acts.
For example, in bringing a prosecution against an individual or corporation, it may be extremely difficult to demonstrate that it had the necessary mental element in the commission of an act (for example, in the case of fraud, an intent to make a gain or cause a loss) where the relevant state of mind is to be found (if at all) in the opaque sub-routines of an AI system.
What are the potential civil liabilities?
AI autonomous decision-making may push the boundaries of civil liability norms. Civil liability connotes a wide spectrum of non-criminal actionable wrongs which vary cross-jurisdictionally. In some respects the liability issues raised by AI are not new: “computers already operate independent of direct human supervision and make decisions that [cannot be predicted] by their designers or programmers”.
Can analogies be drawn from other areas of law?
Other areas of existing law may be helpful in considering the potential civil liabilities in relation to AI. For example, it is possible the courts might draw analogies with, say:
- the U.S. auto-pilot cases: which involve claims against manufacturers or operators of auto-pilot-enabled equipment. Many such claims have failed on the grounds of lack of evidence of manufacturing defects or lack of proof of causation;
- the law relating to escaping pets/animals: which in many jurisdictions treats an owner of an animal (such as a dog) - not the breeder who sells it to the owner – as having legal responsibility for the actions of their animal. This may be under statute or common law, or both (depending on the jurisdiction). “There still remains a separate body of law peculiar to animals, despite the fact that animals are, like other chattels, merely agents and instruments of damage, albeit animate and automotive.” If applied to AI, it would mean that the owner, operator or end user of the AI, rather than the developer or another AI Supply Chain Participant, might be liable; or
- U.S., English and German authorities dealing with contracting systems that operate automatically: some of these authorities suggest that, even though a contract may have been entered into automatically by software on behalf of a party, it might still be binding on that party.
What is new in terms of liability for the courts and legislators is the wholly autonomous nature in which AI technologies can make decisions or (in combination with robotics) undertake acts or omissions. Where a causative link is required between the alleged wrongdoing and the damage that has resulted, a key issue for the courts will be whether that characteristic breaks the chain of causation between the wrongdoing and the damage. That may be as much a factual inquiry as a legal one.
Such an inquiry may present challenges for the courts. “Many artificial intelligence processes can be opaque, which make it hard for an individual to take direct responsibility for them.” Expert evidence may be required.
Identifying the actionable AI “fault”
The courts will have to identify a contractual breach or other actionable conduct in relation to AI autonomous decision-making in order to establish civil liability in connection with it. The focus of enquiry may be on factors that were determined months or even years before the AI system caused the harm.
The code, the ethical parameters within which the AI system was set to operate, and the training data to which the AI system was exposed (among other things) may have determined the system’s “decision” that gave rise to the harm before the system was even set in live operation.
The scope of possible civil liability is potentially as broad as those who could be potentially liable. AI Supply Chain Participants will need to consider the question of their own potential liability.
What could potential civil liability include?
Although not exhaustive, potential civil liability might include:
- product liability;
- strict liability under statute/regulation;
- liability in tort for negligence;
- liability in tort for breach of statutory duty or (in some jurisdictions) for strict liability for the “escape” of something harmful (perhaps in conjunction with robotics); and
- liability in contract.
Product liability: in many jurisdictions, legislation provides that a manufacturer (and sometimes those distributing down the chain of supply) has strict liability for a defect in its products. The claimant does not generally have to prove fault (for example, negligence). In this sense liability is said to be strict. The role of product liability – and the responsibility that falls to businesses manufacturing these products – will probably grow when human actors become less responsible for the actions of a machine.
The courts will need to consider issues such as whether AI is a “product” and whether its programming could be said to be a “defect” when it results in unexpected outcomes. The answers may vary by jurisdiction and on the facts.
Regulations: these may impose other forms of strict liability in relation to AI. For example, the European Parliament Resolution with Recommendations to the Commission on Civil Law Rules on Robotics includes the following recommendations in relation to civil liability:
- basing a legislative instrument on an in-depth evaluation by the European Commission determining whether strict liability or a risk management approach should be applied;
- apportioning liability among the various responsible parties according to the actual level of instructions given to the technology and its degree of autonomy;
- the creation of legal personhood for AI-enabled robots, backed by a compensation fund; and
- those who produce the technology should be permitted to limit their liability where the compensation fund provides alternative redress for those suffering harm or where insurance is jointly taken out to guarantee compensation.
Tort: the tort of negligence:
- exists in one formulation or another in most developed legal systems. Although the precise requirements vary, generally for the tort to be established there needs to be: (1) a duty of care owed to the claimant (which might be established on the basis that it is reasonably foreseeable that the claimant would be harmed by a failure to take reasonable care); and (2) a failure to take such care;
- in some civil law jurisdictions has slightly different requirements; and
- in the context of AI, might include negligent design, programming or training errors, negligent failure to warn about the risks or weaknesses in the AI system, negligent operation, and negligent misstatement (that is, advisory liability, including perhaps flawed investment decisions, credit risk evaluations by AI, or so-called “robo-advice”).
The sheer scale on which AI systems are going to be used by banks and financial institutions in risk evaluations, pricing, lending, investment decisions, and robo-advice – to name just a few activities – makes it inevitable that disputes and litigation will arise involving AI autonomous decision-making in the Financial Institutions sector.
In addition, regulators are becoming increasingly focused on the systemic (or macro) implications of such technology in both the wealth management and retail sub-sectors.
Banks and financial institutions will need to undertake robust regulatory and compliance reviews of their proposed AI deployments, and ensure that their operational, contractual, and regulatory commitments can be met. James Bateson, Global Head of Financial Institutions
Requirements of civil law jurisdictions
In Germany, for example, in order to establish actionable negligence there needs to be:
- a breach of an obligation or duty that exists for the benefit of another party;
- established causality between that breach and a financial loss;
- at least negligence on the part of the defendant (there are limited exceptions to this – for example, in product liability cases); and
- proof that the actual amount a party lost or did not gain is a direct result of the respective act or omission.
Is the risk of harm foreseeable?
The prospect that AI will behave “in ways designers do not expect challenges the prevailing assumption within tort law that courts only compensate for foreseeable injuries. Courts might arbitrarily assign liability to a human actor even when liability is better located elsewhere for reasons of fairness or efficiency. Alternatively, courts could refuse to find liability because the defendant before the court did not, and could not, foresee the harm that the AI caused.”
The risk that AI will act in an unexpected (and therefore unforeseeable) way may be particularly acute when AI interacts with other software systems. Could negligence be established when neither system working in isolation would have led to the harmful result?
Other forms of tort liability: breach of statutory duty might arise where a statutory obligation is wide enough to encompass AI. In some jurisdictions, strict liability in tort can also arise where a person loses control of something dangerous (it “escapes”) and it harms another. It is not clear whether such principles would be sufficiently wide so that “escape” could be equated with the lack of control characteristic of autonomous action by AI or, say, whether it would need to occur physically in conjunction with robotics.
Contract: AI also raises liability issues under contract law. Smart contracts deployed on distributed ledger (or blockchain) technology may act autonomously by purporting to enter into other, separate “follow-on” contracts. In e-commerce, software variously known as “electronic agents”, “intelligent software agents”, or simply “bots”, can act autonomously and purport to enter into contracts on a user's behalf (potentially without the user’s knowledge or involvement). Such technologies have the potential to give rise to liability where a party disputes that a binding contract has been formed. Whether that is the case may vary cross-jurisdictionally, and as between common law and civil law jurisdictions (see Norton Rose Fulbright’s and R3’s publication, Can Smart Contracts be Legally Binding Contracts for more details).
Are there AI or industry-specific regulations?
Regulation in relation to AI could include:
- regulation of the technology itself;
- industry-specific regulation (for example, the regulation of AI-enabled autonomous vehicles); or
- self-regulation (for example, voluntary codes of practice), or a combination of these things.
Examples of the three approaches are in the early stages of development across a range of jurisdictions. In Europe, for example, the European Commission has so far not focused specifically on AI, although aspects of its digital single market programme address related issues (such as the Internet of Things and Big Data). However, the Commission will probably address AI issues more directly in the future.
The European Parliament:
- has adopted a Resolution on Civil Law Rules Relating to Robotics recommending that the European Commission propose a guiding ethical framework for the design, production, and use of AI-enabled robotics and calling for the creation of a European agency for robotics and AI; and
- noted particular issues relating to autonomous vehicles, care robots, medical robots, human repair and enhancement, and drones, and called for the Commission to propose a new law on liability issues raised by AI-enabled robots (see What are the Potential Civil Liabilities?) and an obligatory insurance scheme.
“The future legislative instrument should be based on an in-depth evaluation by the Commission determining whether the strict liability or the risk management approach should be applied”
European Parliament, Resolution with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), 16 February 2017
The European Parliament recommended that the measures to be developed by the European Commission should include a number of specific elements, including mandatory registration of AI-enabled robotics and mandatory access to source code to investigate accidents and damage.
The European Parliament also proposed a so-called “Charter on Robotics”, a “Code of Ethical Conduct for Robotics Engineers”, a “Code for Research Ethics Committees”, and licensing criteria for designers and users.
The European Commission has not so far committed to act on the European Parliament’s recommendations. Given the growing interest in this area and the Commission’s strong focus on the digital economy, however, further developments can be expected at the EU level in the coming years.
Codes of practice and compulsory disclosure
Other organisations, such as the Institute of Electrical and Electronics Engineers and Oxford University’s Towards a Code of Ethics for Artificial Intelligence Research project, are working to develop codes of ethical practice in relation to AI.
A so-called “Turing Red Flag Law” has also been proposed under which an AI bot would be required to inform its users that it is a bot, and not a human.
Regulation, or rules reflecting accepted modes of operation, may ultimately be able to be encoded within the operation of AI technology, so that the code both operates according to, and self-reinforces, applicable regulations.
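The idea of encoding rules within an AI system’s operation can be sketched as a pre-action compliance check. The following is a minimal, hypothetical illustration only – the rule names, the spending threshold, and the data structures are invented for the example and are not drawn from any actual regulation (the first rule loosely echoes the proposed “Turing Red Flag Law” discussed above):

```python
# Illustrative sketch only: encoding regulatory rules as machine-checkable
# constraints that an AI system consults before acting. All names and
# thresholds here are hypothetical.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ProposedAction:
    description: str
    discloses_bot_status: bool  # cf. the proposed "Turing Red Flag Law"
    max_spend_usd: float

# Each rule returns None if satisfied, or a reason string if violated.
Rule = Callable[[ProposedAction], Optional[str]]

def turing_red_flag(action: ProposedAction) -> Optional[str]:
    if not action.discloses_bot_status:
        return "Bot must disclose that it is not a human"
    return None

def spending_cap(action: ProposedAction) -> Optional[str]:
    if action.max_spend_usd > 100:
        return "Spend exceeds authorised limit"
    return None

def check_compliance(action: ProposedAction, rules: list) -> list:
    """Return all rule violations; an empty list means the action may proceed."""
    return [reason for rule in rules if (reason := rule(action)) is not None]

violations = check_compliance(
    ProposedAction("purchase item online", discloses_bot_status=False, max_spend_usd=250),
    [turing_red_flag, spending_cap],
)
print(violations)  # both rules are violated by this proposed action
```

A system built this way both operates according to the encoded rules and produces a record of why an action was blocked, which is one way the self-reinforcement described above might work in practice.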
Are there any human rights considerations?
There will be big winners and losers as … robots and artificial intelligence transform the nature of work. Societies will face further challenges in directing and investing in technologies that benefit humanity instead of destroying it or intruding on basic human rights of privacy and freedom of access to information. Shaping Tomorrow, Artificial Intelligence – Impacts on Society, 30 September 2014
AI has the potential to raise significant ethical-legal questions about human rights. AI-enabled healthcare, weapons systems, criminal justice, and welfare are all areas where human involvement in decision-making may be replaced by autonomous decision-making. For example, human rights can be significantly affected by AI that can undertake suspect identification (based on Big Data analytics) and criminal sentencing (based on predicted recidivism).
“State v. Loomis: Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessments in Sentencing”
State v Loomis, Harvard Law Review, 130 Harv. L. Rev. 1530, 10 March 2017
Recidivism has become a major focus for criminal sentencing. In the U.S., pre-sentencing investigation reports often assess recidivism risk. The Wisconsin Supreme Court has recently held that a trial court’s use of an algorithmic risk assessment in sentencing did not violate the defendant’s due process rights. This was so even though the methodology used to provide the assessment was not disclosed (this was because the technology vendor considered it to be a trade secret).
The court said that warnings should in future accompany the use of algorithmic risk assessments. Independent testing of the technology, however, revealed that offenders of colour were more likely to receive higher risk ratings than white offenders.
“There are currently few studies analysing these assessments, in part because the lack of methodological transparency from data assessment companies has made study difficult.”
Although not strictly a human rights issue, businesses may similarly be affected - for example, AI may predict or identify likely tax irregularities or potential corporate fraud, exposing businesses to significant investigatory costs where human intervention could have discounted the need for an investigation.
Existing laws protecting human rights globally vary vastly in the level of protection conferred. They may be ill-adapted to address the very human concerns that may arise from the operation of AI. Businesses will need to consider both the legal implications and the wider reputational issues of failing to factor in the impact that AI used by them may have on human rights.
What are the key data privacy issues?
New data privacy laws, such as the EU General Data Protection Regulation (GDPR), are beginning to deal with AI explicitly. Under such privacy laws, key issues include whether:
- all personal information used by an AI system has been collected with the data subject’s valid consent; and
- such consent covers all purposes for which the AI uses the information.
As computing power increases and AI algorithms improve, information that was once thought to be private may be linked to individuals at a later point in time. This raises data protection concerns for AI deployments.
In addition, new data protection laws are addressing decision-making undertaken by AI explicitly, and imposing specific requirements. These will need to be factored into AI systems by design at the outset. Nick Abrahams, Global Head of Technology and Innovation
Typically, AI systems require large amounts of data to make intelligent decisions. Although some of this information alone might not be considered to be personal data, a large amount of it in combination with different data sources might make it possible to identify an individual and so breach data privacy laws.
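The re-identification risk described above can be illustrated with a toy example. The datasets, fields, and names below are invented: neither source alone names an individual alongside the sensitive data, but joining them on shared quasi-identifiers can single one person out.

```python
# Hypothetical illustration of re-identification by combining data sources.
# Neither dataset links a name to the sensitive attribute, but the
# combination of postcode, birth year, and sex acts as a quasi-identifier.

health_records = [  # "anonymised": no names
    {"postcode": "SW1A", "birth_year": 1980, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "N1",   "birth_year": 1975, "sex": "M", "diagnosis": "diabetes"},
]

public_register = [  # publicly available, no health data
    {"name": "A. Smith", "postcode": "SW1A", "birth_year": 1980, "sex": "F"},
    {"name": "B. Jones", "postcode": "E2",   "birth_year": 1990, "sex": "M"},
]

def link(records, register):
    """Join the two sources on the shared quasi-identifier fields."""
    keys = ("postcode", "birth_year", "sex")
    matches = []
    for rec in records:
        for person in register:
            if all(rec[k] == person[k] for k in keys):
                matches.append({"name": person["name"], "diagnosis": rec["diagnosis"]})
    return matches

print(link(health_records, public_register))
# A unique match re-identifies an individual and reveals sensitive data.
```

The larger and more varied the data sources an AI system draws on, the more such quasi-identifier overlaps become available, which is why data that seems innocuous in isolation can still engage data privacy laws.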
The use of AI raises data protection issues in relation to profiling. As users of AI systems frequently struggle to understand how such systems arrive at particular outputs or decisions, it is likely they will find it difficult to meet obligations under data privacy laws as regards:
- providing a meaningful description of the relevant program logic explaining how an output or decision was reached (generic descriptions may not meet the criteria of new data privacy laws); and
- other information provision requirements.
It comes as no surprise, therefore, that organisations working in the field of AI, such as the Institute of Electrical and Electronics Engineers, expect there to be an increasing need for rationale and explanation as to how a decision was reached by an algorithm.
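What a meaningful description of the relevant program logic might look like is easiest to see for a deliberately simple model. The sketch below uses invented weights, inputs, and a hypothetical approval threshold; it pairs a decision with the ranked factors that produced it. Real AI systems are far less transparent, which is precisely the difficulty described above.

```python
# A minimal, hypothetical sketch of producing a per-decision rationale.
# For a simple linear scoring model, each feature's contribution to the
# score can be reported alongside the decision itself.

weights = {"income": 0.5, "existing_debt": -0.8, "years_at_address": 0.2}
applicant = {"income": 4.0, "existing_debt": 2.5, "years_at_address": 3.0}
THRESHOLD = 0.0  # hypothetical approval threshold

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
decision = "approve" if score >= THRESHOLD else "decline"

# The rationale pairs the decision with the factors that drove it,
# ranked by the size of their influence on the score.
rationale = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
print(decision, rationale)
```

For opaque models (deep neural networks, for instance) no such exact decomposition exists, and operators may need approximation techniques to generate anything resembling this output.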
“Artificial Intelligence Poses Data Privacy Challenges”
Stephen Gardner, Artificial Intelligence Poses Data Privacy Challenges, Bloomberg, 26 October 2016
“International privacy regulators are increasingly concerned about the need to balance innovation and consumer protection in artificial intelligence and other data driven technologies. … Data protection officials from more than 60 countries expressed their concerns over challenges posed by the emerging fields of robotics, artificial intelligence and machine learning due to the new tech's unpredictable outcomes. The global privacy regulators also discussed the difficulties of regulating encryption standards and how to balance law enforcement agency access to information with personal privacy rights. … For example, data-driven machines may have the ability to analyse sensitive medical data, make medical diagnoses, thereby potentially revolutionising the health-care industry. … Machines that learn would act like a chef: see the ingredients and come up with something new.”
Data privacy laws may give rise to other hurdles that will need to be addressed in the context of AI and personal data. For example, the GDPR provides that (subject to a few exceptions) individuals “shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. As AI systems become more complex and draw on increasingly vast amounts of data, it may become very challenging for those operating AI systems to explain how a decision was actually reached.
What are the pre-product design implications for businesses?
A business should consider whether it would be useful or appropriate for a human to have the power to intervene before a decision by an AI system is finalised in order to check the decision, or to have the ability to override the decision afterwards.
New data privacy laws increasingly emphasise accountability and the idea of privacy by design and default. Designing AI to meet data privacy requirements from the outset is likely to be more cost-effective in the long run. This could include, for example, designing AI systems that can generate logs demonstrating what data was considered and what factors were taken into account.
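The two measures just described – a log of what data was considered, and a human able to intervene before a decision takes effect – can be sketched together. This is a hypothetical illustration only; the function names, log fields, and review policy are invented for the example.

```python
# Hypothetical sketch of two "privacy by design" measures:
# (1) a decision log recording what data was considered, and
# (2) a hook allowing a human to review and override a decision
#     before it takes effect.

import json
from datetime import datetime, timezone

decision_log = []

def decide_with_audit(inputs, automated_decision, human_review=None):
    """Record the inputs and outcome; optionally let a human override."""
    final = automated_decision
    overridden = False
    if human_review is not None:
        reviewed = human_review(inputs, automated_decision)
        overridden = reviewed != automated_decision
        final = reviewed
    decision_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data_considered": sorted(inputs.keys()),
        "automated_decision": automated_decision,
        "human_override": overridden,
        "final_decision": final,
    })
    return final

# A reviewer who refers automated rejections for manual checking:
reviewer = lambda inputs, decision: "refer" if decision == "reject" else decision

result = decide_with_audit({"age": 41, "postcode": "N1"}, "reject", reviewer)
print(result)  # the human override took effect
print(json.dumps(decision_log[-1], indent=2))
```

A log of this kind may also assist with the explanation obligations discussed earlier, since it records which data categories fed into each decision.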
Where AI comes to be regarded as integral to critical national infrastructure or national security, government and security agencies may in time wish to have access to, say, data generated by AI, its algorithms, decision outputs, and explanations, and may use existing or new laws to obtain such access.
As with any IT system, AI is susceptible to cyber intrusion and “hacking”. Given the potential for both economic loss and physical damage (for example, AI-enabled robotics, or AI-controlled infrastructure) as a result of such incidents, businesses will need to ensure that all appropriate steps are implemented to: (1) guard against such risks; and (2) mitigate any breaches, in each case in accordance with applicable legal requirements and industry practices.
The Infrastructure, Mining and Commodities sector already uses AI in a number of areas. Cyber intrusion and hacking in this context could have particularly serious consequences. Take, for example, autonomous drills and haulage, drones for exploration, surveys, and mine operations, and AI-enabled infrastructure robotics. Any loss of control of such technology through cyber intrusion or hacking could result in significant financial losses, personal injury, and reputational damage to a business.
Such risks need to be factored into the risk assessment when deploying AI, with safeguards built in at the design stage of the technology.
Nick Merritt, Global Head of Infrastructure, Mining and Commodities
Could insurance be a solution?
Insurance was an important element of the solutions discussed by the European Parliament Resolution on Civil Law Rules Relating to Robotics. However, the implications of AI for insurability have not been explored in any depth so far.
An important element for the pricing of risks is that the potential insured events occur within a certain predictable range and that there is a causal relationship. Although “cyber” policies (covering data destruction, hacking, and liability for damage to third parties caused, for example, by errors and omissions) are available in the market, policy coverage will need to be reviewed to ensure that AI risks do not fall within policy exclusions.
Insurance may not be able to address (or appropriately price) the risks where the “behaviour” of AI goes beyond a predictable range of foreseeable risks - for example, where:
- an AI-enabled device changes its risk profile as a result of “learning” or “teaching”;
- several AI-enabled devices spread a risk through autonomous replication; or
- there are unanticipated “swarm” and interaction effects between multiple AI systems.
What is an AI swarm?
A metaphor taken from nature (for example, the collective behaviour of ant colonies, bird flocking, and fish schooling), “swarm” effects can arise out of the collective behaviour of disparate AI systems interfacing with each other. There is no centralised control structure dictating how individual AI systems should behave, and local interactions between them may lead to "intelligent" global behaviour.
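The idea that purely local interactions can produce global behaviour can be illustrated with a toy simulation. In the sketch below (illustrative only; the one-dimensional positions, neighbourhood radius, and update rate are assumptions), each agent adjusts toward the average of its nearby neighbours, with no central controller, yet coherent groups emerge.

```python
def step(positions, radius=2.0, rate=0.5):
    """One update: each agent moves toward the mean position of its
    local neighbours only. There is no centralised control."""
    updated = []
    for i, p in enumerate(positions):
        neighbours = [q for j, q in enumerate(positions)
                      if j != i and abs(q - p) <= radius]
        if neighbours:
            target = sum(neighbours) / len(neighbours)
            p = p + rate * (target - p)
        updated.append(p)
    return updated

agents = [0.0, 1.0, 2.0, 9.0, 10.0, 11.0]
for _ in range(20):
    agents = step(agents)
# Two tight clusters emerge from purely local interactions,
# one near 1.0 and one near 10.0.
```

Real swarm effects between heterogeneous AI systems are far less predictable than this toy consensus dynamic, which is precisely why they complicate risk pricing.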
What are the potential employment law issues?
All data-driven systems are susceptible to bias based on factors such as the choice of training data sets, which are likely to reflect subconscious cultural biases.
House of Commons Science and Technology Committee, Robotics and Artificial Intelligence, 5th Report of Session 2016 – 2017, HC 145, 12 October 2016, page 18, quoting Drs Koene and Hatada from the University of Nottingham
A key area of concern in the employment context is the use of AI in the recruitment process. It would be reasonable to assume that an autonomous decision-making process for shortlisting applicants would be more objective and less prone to bias (in particular, unconscious bias) than the traditional process. The reality, however, is that AI in the context of human resources and recruitment can suffer from undetectable bias:
- first, every time an algorithm is written for AI, embedded within it may be all the biases of the humans who created it, which will then be perpetuated by the AI itself. Choices over what data sets to train AI with may also have this effect. Over time what was once “acceptable” bias may cease to be so, meaning that an AI system’s choices – learned from historical data sets, for example - may cease to satisfy current standards; and
- secondly, AI has the ability to learn from previous experience and to teach itself to carry out pattern recognition tasks without being explicitly programmed to do so – so even where the humans writing the programme or choosing data sets have no intention of creating a biased system, any bias inherent in the original recruitment process could be learned over time and repeated in autonomous decisions - and employers may not even be aware of this (or be able fully to understand the basis on which such decisions have been made). Data sets sourced from different countries, cultures, or age demographics may contain different implicit biases.
“AI Programs Exhibit Racial and Gender Biases, Research Reveals”
Hannah Devlin, AI Programs Exhibit Racial and Gender Biases, Research Reveals, 13 April 2017
Research published in the journal Science has shown that an AI tool that helps computers to interpret natural language exhibits marked gender and racial biases. As AI gets closer to acquiring human language abilities it is also absorbing the ingrained biases within the language that humans use.
The subject of the research was AI that used “word embedding”. This approach is already used in web searches and in translation. It interprets language by building a mathematical representation using “vectors” between words (based on words that most frequently appear alongside the word in question).
The research shows that troubling human biases can be acquired by this methodology. For example:
- the words “female” and “woman” were associated with arts and humanities occupations and with home. On the other hand, the words “male” and “man” were associated more closely with maths and engineering professions; and
- the AI was more likely to associate European American names with pleasant words (such as “gift” and “happy”), while African American names were more commonly associated with unpleasant words.
The research suggests that an AI system dependent on word embedding and used, for example, to select candidates for interview based on their CVs might be more likely to choose those candidates whose names are associated with the pleasant words. The algorithm picks up the social prejudices inherent in the data sets on which it was trained.
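The mechanism behind these findings can be illustrated with a toy version of the word-embedding approach. The sketch below uses invented three-dimensional vectors (real systems learn vectors with hundreds of dimensions from large text corpora); the specific numbers are assumptions chosen to encode the kind of bias the research describes, not data from the study itself.

```python
import math

def cosine(u, v):
    """Similarity between two word vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings" (illustrative only). The geometry encodes the bias
# present in the training signal: "woman" points in a similar
# direction to "arts", and "man" to "maths".
vec = {
    "woman": [0.9, 0.1, 0.2],
    "man":   [0.1, 0.9, 0.2],
    "arts":  [0.8, 0.2, 0.1],
    "maths": [0.2, 0.8, 0.1],
}

def association(word, attribute):
    return cosine(vec[word], vec[attribute])

print(association("woman", "arts") > association("woman", "maths"))  # closer to arts
print(association("man", "maths") > association("man", "arts"))      # closer to maths
```

A downstream system ranking CVs by similarity to past successful hires would inherit whatever associations are baked into these vectors, without any explicit instruction to discriminate.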
Employers in many jurisdictions have a legal responsibility to avoid discrimination and bias in the recruitment process, so it will be necessary for them to consider whether it is prudent to accept the results of delegating the decision-making process to an AI system and, if they choose to do so, whether there is any action they can take to avoid or reduce the risk of bias or discrimination.
AI-enabled “robot bosses” are now in active operation, empowered to make decisions about employee task allocations for day-to-day activities with the object of improving workforce efficiencies. What would be the position if, say, less career-enhancing or more mundane tasks were consistently allocated to a group of people sharing a particular protected characteristic (such as race or gender)? The issues that arise in relation to AI in recruitment are equally relevant here.
Bias and provision of goods and services
Laws may prohibit bias in the provision of goods and services. For example, in the European Union discrimination in the provision of goods and services is prohibited on grounds of sex and race under the Directive Implementing the Principle of Equal Treatment between Men and Women in the Access to and Supply of Goods and Services (2004/113/EC) and the Race Directive (2000/43/EC).
Some member states of the European Union have “gold plated” such requirements. For example, the United Kingdom’s Equality Act 2010 prohibits discrimination by service providers in relation to goods, services, and facilities in relation to protected characteristics. The protected characteristics are age, disability, gender reassignment, pregnancy and maternity, race, religion or belief, sex, and sexual orientation.
Goods, services, and facilities covered by the legislation could include facilities by way of banking or insurance or for grants, loans, credit or finance; facilities for transport or travel; access to and use of means of communication; and access to and use of information services (among others).
AI systems that make decisions on behalf of a service provider in relation to the provision of goods, services, and facilities could potentially put the service provider into breach of the 2010 Act if, say, the AI results in the service provider, on the basis of one or more of the protected characteristics, not providing a person with the service, goods, or facilities:
- at all (or terminating their provision);
- of the quality usually provided; or
- on the terms on which they are provided to the rest of the public.
How does AI impact upon a business’s intellectual property (IP) rights?
The nature of AI gives rise to challenges under existing IP legal frameworks, many of which do not address the IP position in relation to machine-created works and autonomous actions that infringe IP rights.
AI as IP creator: AI can create new works and inventions. Who owns the IP rights in relation to machine-generated original works? Can a machine be an inventor under the applicable patent law? If the machine cannot be an inventor, then another party may be the inventor or owner of the rights. Given the uncertainty regarding ownership:
- developers and operators of AI systems that create new works and inventions should use agreements with clear terms regarding ownership of IP rights; and
- IP law may need to be revised to address machine-created works in some jurisdictions.
AI as IP infringer: a machine may act or operate autonomously in a manner that infringes third party IP rights. If existing laws do not extend liability to a machine, then a related stakeholder (such as the owner, developer, operator, or another AI Supply Chain Participant) may be responsible.
What IP protection is available for AI?
AI products and services are protected by different types of IP rights:
- copyright: in many jurisdictions copyright automatically extends to computer code, visual interface features, audio, video guides, application programming interface structures, and other works;
- data sets: AI systems use and generate large data sets. Some jurisdictions provide specific provision for the protection of databases. Contractual terms with end users and third parties should clearly specify permitted use and ownership of such data sets;
- trade secrets: trade secrets (known as the law of confidentiality in some jurisdictions) protect confidential back-end server processes, algorithms, data, code, and “secret sauce”. As discussed earlier (see AI and Ethics), transparency and accountability are important for the deployment of ethical AI, so a business may face difficult choices in attempting to reconcile these against its overriding commercial imperatives to keep its algorithms and data secret. A business facing such a choice might be able to deliver transparency and accountability by demonstrating how, functionally, its AI reaches decisions, without actually having to disclose the algorithms themselves;
- Brands: a brand may constitute a word mark, logo, or icon protected as registered or unregistered trade-marks. Accurate technology and algorithmic accountability (discussed in AI and Ethics) can help a business develop goodwill for its brand and help to engender trust in relation to the use of AI by business;
- Industrial designs: in many jurisdictions, designs can be used to protect visual features of physical articles and robotics, such as computer interfaces, animations, and icons;
- Patents: patents protect new, useful and non-obvious processes and machines. Considerations relating to AI include:
- patents prevent others from making, using, or selling the patented invention, which may help businesses to obtain or maintain market share and protect research and development investments in connection with AI;
- patent publications can generally be cited against subsequently filed applications to prevent grant. Given the quickly evolving AI market, obtaining early priority dates is important in view of the patent system’s ‘‘first to file” approach; and
- computer-implemented inventions are under greater scrutiny and not all AI-related innovations are per se patentable. The jurisprudence is continually evolving, and patent offices, along with the courts in many jurisdictions, have struggled to establish clear delineations of what is patentable and what is not.
An IP strategy: businesses developing or using AI will need to consider developing and rolling out a “layered” approach to IP rights in connection with their AI in order to protect different aspects of the innovation and to reduce the risk of infringement claims against them by third parties. There should be clear agreements in place with those working with a business in connection with AI systems (for example, consultants and suppliers).
Who controls the works produced by artificial intelligence?
The product of AI, based on the machine learning algorithm “Generative Adversarial Network,” is being cited as one of the most sophisticated AIs ever created. This type of AI gives us a glimpse of what is to come – a world where the machine creates new and better things autonomously. But what is the role of IP in such a scenario? Understanding how AI operates, how current IP law is applied to AI and generated works, and how it impacts upon the ability of industry to protect its AI-related work is important, and will require close monitoring of industry voices and governmental policies over the years ahead.
Read our in-depth article on this topic here.
When might AI raise competition/antitrust issues?
Antitrust authorities, including the European Commission, are increasingly sensitive to the risk that AI, and algorithms more generally, can play a part in antitrust violations.
I don't think competition enforcers need to be suspicious of everyone who uses an automated system for pricing. But we do need to be alert. Because automated systems could be used to make price-fixing more effective. That may be good news for cartelists. But it's very bad news for the rest of us.
Margrethe Vestager, EU Commissioner for Competition, Algorithms and competition, Bundeskartellamt 18th Conference on Competition, 16 March 2017
In recent years, the Commission and other European competition authorities have become increasingly concerned about the competitive implications of the growing use of algorithms. While the authorities recognise the pro-competitive uses of algorithms to help consumers find the lowest prices, they have already raised a number of concerns.
These concerns include the possibility that automated systems can help make price-fixing more effective, for instance by helping monitor deviations from price-fixing agreements or to implement price-fixing agreements in the first place, or facilitating other antitrust violations.
The growth of AI is likely to exacerbate these concerns. More speculatively, competitors’ independent use of AI may lead to parallel behaviour without any coordination. Such behaviour is not currently prohibited, but can be expected to attract increasing antitrust scrutiny.
Although not directly related to AI, the Commission has looked closely at Big Data issues in a number of merger reviews, including Microsoft/LinkedIn, Facebook/WhatsApp and Google/DoubleClick. Although the Commission has so far found no concerns, the role of data is set to become an increasingly important factor in the Commission’s assessment of how a merger affects competition.
Checklist – What key legal questions should in-house counsel be asking in relation to AI?