Artificial Intelligence and the future of work

June 20, 2018

Introduction

This article was originally published in Employment Law Journal.

Following a recent panel discussion organised by his firm, Paul Griffin considers the legal challenges posed by AI, particularly in the recruitment process

Artificial Intelligence (AI) will have an increasing impact on the workplace in the coming years – indeed, some have compared its significance with the impact the industrial revolution had on work. In looking at the dark side of technology, Charlie Brooker, the creator of the TV series Black Mirror, has said: ‘technology is like a drug and we need to understand its side effects’. The drug in question here is AI.

The predictions about AI’s impact on the workplace are contingent on many variables, including the level of employees’ education and training, the cost of the technology, its adoption rate, regulation, ethics and how far AI creates new jobs. Those who think it is too early to discuss the issue should reflect on the AI developed by Alphabet Inc’s DeepMind and Oxford University, which, having ingested a data set of thousands of BBC programmes, can now reportedly lip-read better than humans. Similarly, AI developed by scientists at Stanford University can apparently read X-rays better than human beings, while fake skin developed by scientists at the Georgia Institute of Technology can, we are told, recognise objects by touch.

AI’s potential applications in the workplace seem to be limitless and evolving all the time. However, one of the uses of particular interest to employment and in-house lawyers is for recruitment purposes, where it creates a number of legal complexities.

AI and recruitment: the issue of bias

According to a December 2016 article in the Harvard Business Review, AI is being used by businesses to screen out up to 70 per cent of job applicants without any of the candidates having interacted with a human being. If you were overlooked by an algorithm for a potential job, wouldn’t you like to know how the algorithm made that decision?

Some of the AI used for recruitment now even evaluates candidates’ facial expressions and body language as part of a wider data set to determine their suitability for positions. However, the scientific community is not in agreement over the appropriateness of ascribing particular meanings to the expression of emotions. According to How Emotions Are Made: The Secret Life of the Brain by Professor Lisa Feldman Barrett, it appears that how people express emotions is not innate but learned, so our perception of others’ emotions can be very inaccurate.

The point, then, is not that human expressions of emotion, as interpreted by AI, have no meaning; rather, those expressions may not lend themselves to a standard interpretation. This is just one example of why purchasers of such AI applications need to be cautious.

This problem highlights a general issue with the use of AI in the workplace: to reduce the risk of liability associated with its deployment, a business is likely to need to understand at a basic level how the algorithm is working and what its fallibilities are. If it has no such understanding, how can it take steps to mitigate the risks?

For example, an employer might use a third party developer’s AI software to reject unsuitable candidates automatically, without owning the software itself. If a candidate then claims the decision to exclude them is based on a protected characteristic (such as race, sex or disability), the employer may be vicariously liable for unlawful discrimination. It may attempt to rely on the statutory defence available under the Equality Act 2010 and claim that it took all reasonable steps to prevent the discrimination from occurring. Putting aside the inherent difficulties in successfully arguing the statutory defence, it could only ever be a real possibility if the employer had attempted to understand how the AI was making decisions.

It is unlikely that the software developer would allow the employer to delve into its coding in any meaningful way, as allowing access could damage its commercial interests. However, even if such access were authorised, many employers may face what is called the ‘black box’ issue. This term describes the inability of many types of AI software to explain the logic used in reaching decisions. In some cases, separate software has to be developed to run alongside the AI application to interpret, in a way that humans can understand, how decisions have been reached. Commentary in this area suggests that this solution is by no means straightforward.
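To illustrate the kind of interpretive tooling commentators have in mind, the sketch below (a minimal Python example using synthetic data and invented feature names, not any particular vendor’s product) trains a simple, human-readable ‘surrogate’ model to approximate the decisions of a more opaque screening model, so a reviewer can at least see which inputs appear to drive its outputs.

```python
# Illustrative only: a shallow "surrogate" model trained to mimic a black-box
# screening model so that its approximate logic can be read by a human reviewer.
# The feature names and data are invented for this sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
features = ["years_experience", "test_score", "cv_keyword_match", "video_interview_score"]
X = rng.random((500, len(features)))                   # hypothetical candidate data
y = (0.6 * X[:, 1] + 0.4 * X[:, 2] > 0.5).astype(int)  # hypothetical "progress to interview" label

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's own predictions, not the ground truth,
# so it approximates how the black box decides rather than how candidates "should" score.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=features))
```

A surrogate of this kind only approximates the underlying model, which is one reason the solution is less straightforward than it may first appear.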

If a developer assumed responsibility for ensuring that the AI avoids making unlawful, biased decisions, it would open itself up to significant costs. It may be reluctant to warrant that the operation (and therefore the decision-making) of the AI complies with all laws. If the relevant AI uses machine learning, the developer may argue that the provider of the data sets used to teach the system, not the developer, ought to be responsible. For example, at the client’s request, the developer may customise the AI using the client’s own data sets relating to what the business considers a successful recruitment outcome. If those data sets contain implicit biases, these may be reflected in the decision-making of the AI, and the developer might argue that it ought not to be blamed for the outcome. If, for example, the AI decides the ideal candidate is a white, able-bodied, male employee in his thirties practising what might be perceived to be a mainstream religion, this is likely to reflect the business’s inherent bias.
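One practical check an employer could run before relying on such a tool is a simple comparison of selection rates across groups. The sketch below is illustrative only: the groups and figures are invented, and the four-fifths ratio used to raise a flag is a widely cited rule of thumb for spotting possible adverse impact, not a legal test under the Equality Act 2010.

```python
# Illustrative only: comparing shortlisting rates across groups in a hypothetical
# screening run. The groups and numbers are invented; the 0.8 threshold is a
# rule of thumb for flagging possible adverse impact, not a legal test.
from collections import Counter

# Each tuple records (group, shortlisted?) for one candidate
outcomes = ([("group_a", True)] * 60 + [("group_a", False)] * 40
            + [("group_b", True)] * 30 + [("group_b", False)] * 70)

shortlisted = Counter(group for group, ok in outcomes if ok)
totals = Counter(group for group, _ in outcomes)
rates = {group: shortlisted[group] / totals[group] for group in totals}

benchmark = max(rates.values())
for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "review for possible adverse impact" if ratio < 0.8 else "no flag raised"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} ({flag})")
```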

Risk of reputational damage and unexpected outcomes

Businesses will need to address such issues as a commercial and legal imperative, particularly given the reputational damage that can follow. For example, in 2016, a well-known tech giant launched a chatbot through messaging platforms, which was intended to mimic the way a 19-year-old American girl might speak. The aim was reportedly to conduct research on conversational understanding. The chatbot was programmed to respond to messages in an entertaining way and to impersonate the audience she was created to target: American 18 to 20 year olds.

Hours after the chatbot’s launch, among other offensive things, she was providing support for Hitler’s views and agreeing that 9/11 was probably an inside job. She seemed consistently to choose the most inflammatory responses possible. By the evening of her launch, the chatbot was taken offline.

As this shows, AI can result in unexpected and unwanted outcomes. The chatbot’s responses were modelled on those she got from humans, so her evolution simply reflected the data sets to which she was exposed. What AI learns from can determine whether its outputs are perceived as intelligent or unhelpful.

Some commentators have suggested that one way to safeguard against inappropriate outcomes in the use of AI is to ‘stress test’ the software by pushing it beyond its normal operational capacity. It may also be appropriate to introduce a human element earlier in the decision-making chain to consider the AI’s proposed outcomes, and to measure and correct ‘automation bias’. Automation bias is the name given to the tendency of humans to give undue weight to the conclusions presented by automated decision-makers and to ignore other evidence suggesting that they should make a different decision.

For example, an automated diagnostician may help a GP reach a diagnosis, but it may also reduce the GP’s own independent competence, as they may be unduly influenced by the AI. The result is that the augmented system may not be as much of an overall improvement as had been expected.
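Returning to the idea of stress testing, the sketch below shows one very simple form it could take: re-scoring the same candidate with inputs pushed outside their normal range, or with details that should be irrelevant changed, and flagging any decision that flips. The score_candidate function is a stand-in invented for the illustration; a real tool’s logic would not normally be visible in this way.

```python
# Illustrative only: a very simple "stress test" that re-scores one candidate with
# inputs pushed outside their normal range, or with irrelevant details changed, and
# flags any change in outcome. score_candidate is a stand-in invented for the sketch;
# a real vendor's scoring logic would not normally be visible like this.

def score_candidate(candidate: dict) -> bool:
    # Stand-in scoring rule, used only so the sketch runs end to end
    return candidate["test_score"] > 0.6 and candidate["years_experience"] >= 2

baseline = {"test_score": 0.75, "years_experience": 3, "postcode": "AB1 2CD"}
perturbations = [
    {"years_experience": 45},   # implausibly long career
    {"test_score": -1.0},       # out-of-range input
    {"postcode": "ZZ9 9ZZ"},    # detail that should be irrelevant to merit
]

baseline_result = score_candidate(baseline)
for change in perturbations:
    result = score_candidate({**baseline, **change})
    if result != baseline_result:
        print(f"Decision changed under {change}: investigate before relying on the tool")
    else:
        print(f"Decision stable under {change}")
```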

Data protection issues

Pursuant to the current Data Protection Act 1998, individuals have the right to object to an organisation reaching a decision about them based solely on automated means. However, the position under the forthcoming General Data Protection Regulation (GDPR) is more difficult. Article 22(1) of the GDPR says:

“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

In other words, without making any active objection, individuals have the right not to be subjected to a decision evaluating their personal aspects that is based solely on automated processing.

Meaning of ‘solely’

An important point to consider is that these provisions only catch decisions based solely on automated decision-making, meaning that there is no human intervention in the process. This may well be the case in recruitment exercises where the employer routinely uses AI to filter out large numbers of candidates before creating a shortlist.

The Article 29 Data Protection Working Party (a group made up of the EU’s national data protection watchdogs) recently issued guidance on the level of human involvement required to avoid the prohibition. It gave the following example:

“An automated process produces what is in effect a recommendation concerning a data subject. If a human being reviews and takes account of other factors in making the final decisions, that decision would not be ‘based solely’ on automated processing.”

However, the working party went on to say that a data controller cannot avoid the prohibition:

“... by fabricating human involvement. For example, if someone routinely applies automatically generated profiles to individuals without any actual influence on the result, this would still be a decision based solely on automated processing. To qualify as human involvement, the controller must ensure that any oversight of the decision is meaningful, rather than just a token gesture. It should be carried out by someone who has the authority and competence to change the decision. As part of the analysis, they should consider all the relevant data.”
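By way of illustration only, the sketch below shows one way an employer might structure its records so that the AI’s output is treated as a recommendation and an identified reviewer, with authority to depart from it, logs the final decision and their reasons. The field names are assumptions made for the example, and whether any given process amounts to ‘meaningful’ human involvement is ultimately a legal question rather than a technical one.

```python
# Illustrative only: recording the AI output as a recommendation while an identified
# human reviewer, with authority to depart from it, logs the final decision and reasons.
# Field names are invented; whether this amounts to "meaningful" human involvement
# under the GDPR is a legal question, not something code can guarantee.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ScreeningRecord:
    candidate_id: str
    ai_recommendation: str          # e.g. "reject" or "shortlist"
    ai_rationale: str               # summary of the factors the tool reports
    reviewer: Optional[str] = None
    final_decision: Optional[str] = None
    reviewer_notes: str = ""
    decided_at: Optional[datetime] = None

    def record_human_decision(self, reviewer: str, decision: str, notes: str) -> None:
        # Records the reviewer's own decision and reasons; the reviewer may depart
        # from the AI recommendation
        self.reviewer = reviewer
        self.final_decision = decision
        self.reviewer_notes = notes
        self.decided_at = datetime.now(timezone.utc)

record = ScreeningRecord("C-1042", "reject", "low keyword match against the role profile")
record.record_human_decision(
    reviewer="j.smith (recruiting manager)",
    decision="shortlist",
    notes="Relevant experience described in the covering letter was not captured by the tool.",
)
print(record.final_decision, "-", record.reviewer_notes)
```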

Exceptions to the prohibition on automated decision-making

Despite the general prohibition under the GDPR (and as is the case under the existing UK data protection legislation), a data subject could still be subject to an automated decision based on profiling if this is:

  • necessary for entering into or performing a contract;
  • authorised by EU or member state law which lays down suitable measures to safeguard the data subject’s legitimate interests; or
  • based on the data subject’s explicit consent (Article 22(2)).

Businesses using AI during recruitment may not be able to rely on the first exception (that the processing is necessary for entering into the employment contract), because they can assess candidates using alternative methods (such as standard interview processes). As the working party observed:

“If other effective and less intrusive means to achieve the same goal exist, then it would not be ‘necessary’.”

Necessity, in other words, should be interpreted narrowly.

As there are no other general legal provisions enabling such decision-making, the question then arises whether it is possible for a candidate to give explicit consent to the recruitment method instead.

Article 4(11) of the GDPR defines consent as a freely given, specific, informed and unambiguous indication of the data subject’s wishes by which they, through a statement or clear affirmative action, signify their agreement to the processing of their personal data. Guidance provided by the Article 29 working party suggests that consent in this context should be through a clear written statement.

So, for a candidate to give informed consent, they would need to know what the automated processing involves. The working party’s guidelines suggest that the organisation must:

  • tell the individual that it is using automated decision-making for these purposes;
  • provide meaningful information about the logic involved (for example, by explaining the rationale behind, or the criteria relied on in reaching, a decision); and
  • explain the significance and envisaged consequences of the processing.

The employer may need to seek information from the AI platform provider so it can communicate the logic behind the decision-making to the job applicant. The black box issue referred to above, as well as commercial sensitivity, may be an impediment to explaining this properly. Similarly, the learning component of the AI may be difficult to map and explain.

So even if one of the exceptions applies (the processing is necessary to enter into an employment contract or based on the data subject’s explicit consent), the employer must put adequate safeguards in place to protect the candidate’s rights, freedoms and legitimate interests. At the very least, these must enable the individual to:

  • obtain human intervention in the decision-making process;
  • express their point of view; and
  • contest the decision.

The provider of the platform and the employer will therefore have to offer alternatives to the prospective employee if required to do so. However, it remains to be seen how many candidates will actually avail themselves of this right in a mass recruitment exercise, especially if they feel they are asking to be treated differently from other applicants.

Benefits and risks

Many organisations will find the business case for AI compelling, particularly in recruitment. The time and effort involved in interviews is significant for both the employer and the candidate (who may have to attend an assessment centre for a job they stand little chance of getting). However, there may be significant legal challenges to AI as its use becomes more common. Businesses must avoid blindly adopting technology and ensure they understand what it is doing so they can mitigate these legal risks.

 

For more posts on Artificial Intelligence and the Future of Work, click on the Employment topic on the blog.