Data privacy: AI and the GDPR

November 02, 2017

The EU General Data Protection Regulation (GDPR) is not only one of the biggest changes to the regulatory framework of data protection – applying to businesses established in the EU as well as to businesses outside the EU that offer goods or services to individuals in the EU – but it also specifically addresses automated individual decision-making, including profiling. The European data protection authorities – working together in the so-called Article 29 Data Protection Working Party (WP 29) – have recently adopted Guidelines on Automated Decision-making, which are likely to profoundly impact AI-based business models. Automated decision-making is commonly defined as the ability to make decisions by technological means without human involvement. This definition also covers decision-making by AI systems, including so-called artificial general intelligence.

According to Article 22(1) GDPR, any person – the data subject – has the right “not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. Accordingly, the prohibition applies only where a decision based solely on automated processing, including profiling, has a legal effect on, or similarly significantly affects, someone.

According to the WP 29’s interpretation of Article 22(1) GDPR, the reference to “based solely” means that there is no human involvement in the decision-making process. In addition, a controller cannot avoid the Article 22 GDPR prohibition by fabricating human involvement, e.g. by having a human nominally review the decision without any actual influence on the result.

The GDPR does not define “legal effect” or “similarly significant effect”. In its guidance, the WP 29 adopted a broad interpretation of these terms. The WP 29 argued that “a legal effect suggests a processing activity that has an impact on someone’s legal rights, such as the freedom to associate with others, vote in an election or take legal action”, and that it may also be something that affects a person’s legal status or their rights under a contract.

But even if a decision-making process does not affect people’s legal rights, it could still fall within the scope of the Article 22 GDPR prohibition if it produces a negative or positive effect that is equivalent or similarly significant in its impact. The WP 29 argued that “for data processing to significantly affect someone the effects of the processing must be more than trivial and must be sufficiently great or important to be worthy of attention”, i.e. the decision must have the potential to significantly influence the circumstances, behaviour or choices of the individuals concerned.[1]

Article 22(2) GDPR provides for exceptions from the prohibition on automated decision-making. Data processing for automated decision-making is permitted if such processing is: (a) necessary for entering into, or the performance of, a contract; (b) authorised by Union or Member State law; or (c) based on the data subject’s explicit consent.

Article 22(3) GDPR adds a further layer of protection by giving the data subject “at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision”; the controller must provide a simple way for the data subject to exercise these rights.

Additionally, with regard to automated decision-making, a data subject has the right to be informed about its existence (Articles 13(2)(f) and 14(2)(g) GDPR) and a right of access to information about it (Article 15(1)(h) GDPR). Data controllers making automated decisions within the scope of Article 22(1) GDPR must:

  • Tell the data subject that they are engaging in this type of activity;
  • Provide meaningful information about the logic involved; and
  • Explain the significance and envisaged consequences of the processing.

Our take

The recent WP 29 guidance poses additional challenges for AI-based services in interpreting the new EU legal framework on data privacy. In particular, the data subjects’ information and access rights – notably the requirement to provide “meaningful information about the logic involved” – are difficult to comply with.

The WP 29 acknowledges that “the growth and complexity of machine-learning can make it challenging to understand how an automated decision-making process works”, but it nevertheless expects data controllers using such processes to “find simple ways to tell the data subject about the rationale behind, or the criteria relied on in reaching, the decision without necessarily always attempting a complex explanation of the algorithms used or disclosure of the full algorithm” and stresses that “complexity is no excuse for failing to provide information to the data subject”.

However, AI decision-making is often opaque: AI systems may not be able to indicate how a decision was reached,[2] data logs are not typically produced, and proprietary data sets (often protected as trade and business secrets) may determine the outcome of such automated decision-making. These problems are likely to be exacerbated as AI-based services become more sophisticated. Furthermore, as AI systems often rely on machine learning, disclosure of the underlying algorithm alone does not provide a full picture of how a decision was reached, because the learned component is not captured by the algorithm itself. While requiring human intervention may mitigate data protection risks, it may also negate the intended benefits of using AI.

Even though the WP 29 guidance creates more challenges for AI-based service providers, the GDPR offers a framework flexible enough to accommodate the difficulties described above. Businesses should, however, take these requirements into account from the design stage of AI product development.

Footnotes

[1] Recital 71 provides the following typical examples: “automatic refusal of an online credit application” or “e-recruiting practices without any human intervention”.

[2] “It is currently rare for AI systems to be set up to provide a reason for reaching a particular decision”, House of Commons Science and Technology Committee, Robotics and Artificial Intelligence, 2016.