Explain yourself: The legal requirements governing explainability
Agentic AI promises a system that makes a range of decisions autonomously. It has been proposed as the way forward for some of the most impactful decisions in our lives: interacting with customers and actioning their requests, triaging requests for medical appointments, and hiring candidates, to name a few.
But many of the models on which we are looking to build agentic applications (or to assist decisions in other ways) are black boxes: users can provide the system with data and receive corresponding output, but cannot see the logic connecting the two. As a result, organizations may find themselves saying "computer says no" without being able to pinpoint why. This opacity creates organizational and ethical challenges as well as legal ones.
This article examines the obligations to explain decisions to affected persons from both data protection and AI-specific perspectives across three legal regimes in the UK and EU. It considers the EU's General Data Protection Regulation (EU GDPR), the UK's post-Brexit assimilated version (UK GDPR) (together, the GDPR), and the more recent EU AI Act, as well as guidance from EU and UK regulators and relevant case law. The article provides practical tips for compliance and for incorporating explainability into your wider governance program.