ICO blog post on AI and solely automated decision-making
The ICO has published a blog post on the role of “meaningful” human reviews in AI systems to prevent them from being categorised as “solely automated decision-making” under Article 22 of the GDPR. That Article imposes strict conditions on decisions based on personal data which produce legal or similarly significant effects and which involve no human input, or only limited human input (e.g. where a decision is merely “rubber-stamped”).
The blog post recognises that guidance on these issues has already been published by both the ICO and the EDPB, advising (among other things) that human reviewers should be involved in checking the system’s recommendation and should not “routinely” apply the automated recommendation to an individual. The post goes on to consider additional issues that may arise in complex AI systems, specifically the risk areas of:
- “automation bias” (when human users, trusting an AI system’s output because it is based on maths and data, stop using their own judgment); and
- “lack of interpretability” (when human users struggle to understand the output, particularly from deep learning AI systems, and simply agree with the recommendation without challenge).
Mitigating “automation bias”
Controls to mitigate automation bias should be in place from the design and build phase.
System designers should consider what features they expect the AI system to take into account and which additional factors human reviewers should weigh before finalising their decision. For instance, in a recruitment context, the AI system could assess measurable properties such as how many years’ experience a job applicant has, while a human reviewer assesses skills that cannot be captured in application forms.
Human reviewers should have access to information which is not considered by the AI system in order to make a meaningful decision; otherwise, there is a tendency to rely on the AI system’s output and their review may not be genuinely meaningful. Using the example above, human reviewers might hold a face-to-face interview with the candidate and consider the results of that interview alongside the recommendation from the AI system.
Mitigating a “lack of interpretability”
This should also be considered from the design phase, allowing explanation tools to be developed as part of the system if required. Designers should consider tools such as ‘local’ explanations, generated using methods like Local Interpretable Model-agnostic Explanations (LIME), which explain a specific output rather than the model as a whole, or the provision of confidence scores alongside each output, which could help a human reviewer in their own decision-making.
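By way of illustration, the sketch below shows how a local explanation and a confidence score might be surfaced to a human reviewer alongside the system’s recommendation. It assumes the open-source `lime` Python package and scikit-learn, and the recruitment features and model are invented for the example; it is a minimal sketch of the kinds of tools the ICO describes, not a prescribed implementation.

```python
# Minimal sketch: presenting a 'local' explanation (via LIME) and a
# confidence score to a human reviewer. The recruitment features, toy
# data and model below are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["years_experience", "qualifications_score", "test_score"]

# Toy training data standing in for historical application records.
rng = np.random.default_rng(0)
X_train = rng.random((500, 3))
y_train = (X_train.sum(axis=1) > 1.5).astype(int)  # 1 = "shortlist"

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["reject", "shortlist"],
    mode="classification",
)

def review_summary(applicant_row):
    """Return what a human reviewer might see alongside the recommendation:
    the model's confidence and a local explanation of this single output."""
    proba = model.predict_proba(applicant_row.reshape(1, -1))[0]
    explanation = explainer.explain_instance(
        applicant_row, model.predict_proba, num_features=3
    )
    return {
        "recommendation": "shortlist" if proba[1] >= 0.5 else "reject",
        "confidence": float(proba.max()),
        "local_explanation": explanation.as_list(),  # per-feature contributions
    }

print(review_summary(X_train[0]))
```

In practice, the explanation and confidence score would need to be presented in a form the reviewer can readily interpret, and designers would still need to satisfy themselves that the chosen explanation method is appropriate for the model in use.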
Training and monitoring
Training is pivotal to ensuring that decisions made with an AI system are not considered solely automated. As a starting point, human reviewers should, for example, be trained to approach the AI system’s output with a healthy level of scepticism and be given a sense of how often the system could be wrong. It is important that human reviewers have the authority to override the output generated by the AI system and are confident that they will not be penalised for doing so.
Regular risk monitoring reports also need to flag where human reviewers are routinely agreeing with the AI system’s outputs and cannot demonstrate that they have genuinely assessed them, as their decisions may effectively be classed as solely automated under the GDPR.
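As a rough illustration of the kind of metric such a report might track, the sketch below computes how often each reviewer simply adopts the AI recommendation and flags unusually high agreement rates for further scrutiny. The record structure and the 90% threshold are assumptions for the example, not figures taken from the ICO’s post.

```python
# Minimal sketch: flagging reviewers who agree with the AI system's
# recommendation so often that their review may not be meaningful.
# The record fields and 90% agreement threshold are illustrative only.
from collections import defaultdict

def agreement_report(decision_log, max_agreement_rate=0.90):
    """decision_log: iterable of dicts with 'reviewer', 'ai_recommendation'
    and 'final_decision' keys. Returns reviewers whose rate of agreement
    with the AI recommendation exceeds the threshold."""
    counts = defaultdict(lambda: {"total": 0, "agreed": 0})
    for record in decision_log:
        stats = counts[record["reviewer"]]
        stats["total"] += 1
        if record["final_decision"] == record["ai_recommendation"]:
            stats["agreed"] += 1

    flagged = {}
    for reviewer, stats in counts.items():
        rate = stats["agreed"] / stats["total"]
        if rate > max_agreement_rate:
            flagged[reviewer] = round(rate, 2)
    return flagged

# Example usage with a toy decision log.
log = [
    {"reviewer": "A", "ai_recommendation": "shortlist", "final_decision": "shortlist"},
    {"reviewer": "A", "ai_recommendation": "reject", "final_decision": "reject"},
    {"reviewer": "B", "ai_recommendation": "reject", "final_decision": "shortlist"},
]
print(agreement_report(log))  # e.g. {'A': 1.0}
```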
Our take
The ICO is inviting comments and feedback on this topic, which it considers to be a key area of focus in its proposed AI Auditing Framework.
The guidance provided so far is useful in understanding the requirements for “meaningful” human involvement, but issues relating to interpretability are likely to remain challenging in the context of true machine learning-based systems where correlations between the input and output data are harder to identify. This may mean that many of these systems cannot ensure the type of human involvement contemplated by the ICO and so are considered to be “solely automated”. Whether the stricter Article 22 requirements apply to these systems will then turn on the nature of the decisions taken.