Artificial Intelligence and Separate Legal Personality

November 12, 2019

Written by James Russell and his trainee, Paimon Abedi.

The International Bar Association recently published a report, “The Future Of Work” (the IBA Report), in which it discussed (among other things) the attribution of liability for damage caused by Artificial Intelligence (AI) in a workplace context. We have highlighted similar problems ourselves (see, for example, our publication, Artificial Intelligence and the Future of Work), but of course the issue of liability and AI goes far beyond the workplace (we discuss liability and AI more generally on our site, Artificial Intelligence).

As AI becomes more ubiquitous, the scope for economic loss or physical damage – whether on a B2B or B2C basis – inevitably grows too. One need only think of fully autonomous vehicles to understand why. Who will be legally responsible for the choices such a system makes in a given situation? (For a consideration of these issues, see our site, Autonomous Vehicles.)

Attributing liability for loss or damage to an AI system itself would be highly problematic in most legal systems.  Typically:

  • Liability attaches to a legal person; and
  • AI does not have legal personality.

As such, the source of the loss or damage must be found elsewhere. The IBA Report suggests that this “accountability gap” could be filled by assigning legal personality to AI, and that there is “no a priori reason to prevent autonomous AI machines from being granted with a legal status, as there was no reason to, in principle, prevent corporations and other legal fictions”.

Speaking generally, where legal personality is recognised by a legal system, this means that the legal person can:

  • Assert rights: for example, a company can enter into and enforce contracts; or
  • Have liability attributed directly to it: for example, a company can be directly liable for loss or damage caused by it.

Various legal systems have already conferred legal personality on inanimate things in order to allow rights to be asserted on their behalf (often by human guardians with legal standing to take legal action). For example:

  • Legal personality has been conferred by statute onto New Zealand’s Whanganui River in order to enable indigenous rights of protection to be asserted on its behalf by guardians; and
  • In India the courts have recognised that religious idols may in some circumstances have rights as legal persons, to be asserted on their behalf by humans.

While one might see a policy rationale for providing a route for asserting rights on behalf of an inanimate object in some circumstances, attributing liability to it raises a very different set of issues:

  • Would attributing liability (via legal personality) to the resulting AI, and thereby insulating those involved in its development or training, disincentivise them from taking appropriate care in that process?
  • What would it actually mean to hold an AI system liable as a separate legal person? With companies, the directors and shareholders can ultimately be held to account for the company’s decisions or actions (whether through a reduction in business value as a result of damages awards, fines and the like, or through personal liability of directors in some circumstances). Such an approach does not really work with AI, whose decisions may be made completely independently of its manufacturers, programmers, engineers or users.

The European Parliament suggested some time ago that such problems could be addressed by an underlying compensation scheme. To date, there has been no further action at European level.

The IBA Report considered a number of other potential legal solutions to deal with attributing responsibility for loss or damage caused by AI.  These include: 

  • Applying product liability rules where damage results from human error (other than by the user);
  • Holding manufacturers liable where no human error is involved (manufacturers are most familiar with the technology and are better placed to absorb the costs);
  • A strict liability approach holding users of the technology responsible for loss or damage caused; or
  • Vicarious liability, holding the owner or user of the technology liable to the extent that the AI deviates from the intended instructions or use.

Our view

It is likely that many legal systems will eventually address the question of liability in relation to AI specifically through legislation. They may even do so on a sectoral basis, for particular industries.

In the meantime, the courts will need to grapple with novel fact situations brought about by novel technologies. We can already see this in cases where the courts have had to consider liability in relation to automated trading – in one case the court drew an interesting distinction between deterministic computing (“the same outcome with the same inputs”) and AI (“having a mind of its own”) – see our client briefing, Singapore Court’s Cryptocurrency Decision: Implications for Cryptocurrency Trading, Smart Contracts and AI.

There are many steps AI stakeholders can take to reduce the risk of liability.  For more information, see our site, Artificial Intelligence.