Algorithmic decision-making and the UK ICO's Guidance on AI

September 08, 2020

Algorithmic decision-making has been in the news of late, from Ofqual’s downgrading of students’ A-level results[1] to the complaint lodged by None of Your Business (noyb) against the credit rating agency CRIF for failing (amongst other things) to be transparent about the reasons why a particular applicant had been given a negative rating[2]. We have been reminded of the potential backlash that can result when decisions perceived as incorrect or unfair are made by algorithms whose workings are largely unknown to the individuals they affect. This presents challenges for organisations that are increasingly adopting Artificial Intelligence-based solutions to make decisions more efficiently. It is timely, then, that the UK Information Commissioner’s Office (ICO) has recently issued its “Guidance on AI and data protection”[3] to assist organisations in navigating the complex trade-offs that the use of an AI system may require.

Click here to read more about the background of the guidance and key takeaways.

 

[1] Financial Times, “Were this year’s A-level results fair?”: https://www.ft.com/content/425a4112-a14c-4b88-8e0c-e049b4b6e099

[2] Data Guidance, “Austria: NOYB issues complaint against CRIF for violating right to information, data correctness, and transparency”: https://www.dataguidance.com/news/austria-noyb-issues-complaint-against-crif-violating-right-information-data-correctness-and

[3] ICO, “Guidance on AI and data protection”: https://ico.org.uk/for-organisations/guide-to-data-protection/key-data-protection-themes/guidance-on-ai-and-data-protection/