AI regulation and the right to meaningful explanation. Pt 1. Why (not)?
11 November 2024

Ask anyone except a gun rights activist, and they will agree that the rise of new technologies requires the implementation of new regulations. Not too long ago, a law against human cloning would have sounded ridiculous. Since we are now actually able to clone human embryos,[1] we should expect legislators to consider which forms of human cloning, if any, are permissible.
The seemingly omnipresent use of improved AI technologies[2] in decision-making imposes similar demands. Having an automated decision algorithm rank hundreds of applicants for a job or assess thousands of mortgage applications was something we simply did not have to worry about until recently. But now lawmakers must figure out how the use of such AI systems should be regulated.
The recent EU AI Act[3] provides a wide-ranging and important test case for legislating AI use. The 2024 final draft restricts the allowable use of biometric information, sets out fines for breaches, and promises to provide us with a right to receive ‘meaningful’ explanations of important decisions made with the help of AI technologies.
The promise of a right to meaningful explanations is an interesting one. A previous attempt at consolidating such a right in the General Data Protection Regulation (GDPR) has been criticized for delivering it in letter only.[4] Earlier drafts of the EU AI Act risked doing even worse. As late as February 2024, the draft mentioned a right to request meaningful explanations, without any mention of a right to obtain one. A right to request is an incredibly weak right and does not deliver a right to obtain, even in letter. I take it I already have a right to request that Google, Microsoft, OpenAI, and almost anyone else pay off my mortgage and walk my dog. This right to request leaves them with the right to politely say no, or (more realistically) to ignore my request entirely. And notably, the EU’s press release already promised a right to explanation in 2023, when the actual text only formulated a right to request one.[5]
The current EU AI Act draft finally lives up to the promise of the earlier press release by replacing the right to request explanations in Article 86[6] with a right to obtain them. Even setting aside the duty to live up to one’s promises, this is a laudable step forward. Philosophers, civil society, expert panels, and popular science writers have all argued that, if AI is to be implemented ethically, its decisions had better be explainable to us. Their arguments should make us hope that the current formulation of the EU AI Act indeed delivers a robust right to explanation, and that other regulatory bodies follow suit.
Two considerations speak heavily in favour of such a right. The first is the possibility of hidden mistakes in opaque decision procedures.[7] The second is the nature of explanation and its importance to planning ahead in life.[8]
The risk of hidden mistakes in opaque decision procedures should be immediately apparent. Suppose we know which criteria a college uses to assess its applicants. This knowledge is crucial for assessing whether any of them are problematic. Gender and taste in music should be irrelevant to one’s chances of making it into one’s college of choice. If the explanation for my not getting accepted into my college of choice involves any of these features, I know that I was treated unfairly. Transparency allows us to check whether the decision procedure is unfairly prejudiced against female applicants or fans of the Red Hot Chili Peppers. By making explanations of decisions available, those who make them can be held accountable for their fairness, and those affected by the decisions can trust that any unfair treatment will be detected. If we keep the explanations of outcomes hidden, fairness, trust, and accountability are put at considerable risk.
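To make the audit idea concrete, here is a minimal sketch of how transparency enables such a check. Everything in it is hypothetical: the scorer, the feature names, the weights, and the threshold are invented for illustration and drawn from no real admissions system. The point is only that, once the criteria are visible, a simple counterfactual test can flag whether a supposedly irrelevant feature ever flips the outcome.

```python
# A toy, hypothetical admissions scorer. For illustration only, not any real system.
# The criteria and weights are visible, so the procedure can be audited.
WEIGHTS = {"math_grade": 0.6, "community_service": 0.4, "gender": 0.0, "music_taste": 0.0}
THRESHOLD = 5.0  # hypothetical admission cut-off

def score(applicant: dict) -> float:
    """Weighted sum of the applicant's features."""
    return sum(WEIGHTS[feature] * value for feature, value in applicant.items())

def outcome_flips(applicant: dict, feature: str, alternative: float) -> bool:
    """Counterfactual audit: does changing a supposedly irrelevant
    feature change the admission outcome?"""
    original = score(applicant) >= THRESHOLD
    changed = score({**applicant, feature: alternative}) >= THRESHOLD
    return original != changed  # True would flag unfair reliance on the feature

applicant = {"math_grade": 9.0, "community_service": 7.0, "gender": 1.0, "music_taste": 1.0}
print(outcome_flips(applicant, "gender", 0.0))       # False: gender does not flip the decision
print(outcome_flips(applicant, "music_taste", 0.0))  # False: neither does taste in music
```

With an opaque procedure, no such check is available to applicants or auditors; that is precisely the worry.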
Even if hidden mistakes could be avoided, transparency and explainability play an important moral, political, and social role. Suppose no untoward criteria show up in the decision procedure for college admission. Math grades and service to the community are natural candidates for criteria in university applications. As a future applicant, I have a distinct interest in knowing that these factors play a role in the decision procedure. This knowledge is crucial if I am to plan ahead effectively when preparing myself for university. The same holds true for other life-changing decision procedures that the EU AI Act seeks to legislate. I have a distinct interest in knowing which criteria determine my chances of getting a mortgage, citizenship, or social benefits: not only because I want to know whether I will be treated fairly, but also because I want to plan ahead effectively.
Governmental bodies have a duty to regulate society so that it promotes human flourishing. That is why we expect them to provide a right to education, housing, and proper care. It is for similar reasons that we should expect them to provide a right to meaningful explanation of important decisions, such as those pertaining to college admission, job applications, access to social benefits and citizenship, and so on. Human flourishing requires planning ahead, and proper planning requires transparent procedures for life-changing decisions.
Why, then, have the GDPR and the earlier drafts of the AI Act been so coy about delivering such a right?
Two arguments against a right to explanation show up in the debates. The first is that transparency also has drawbacks. Divulging too much information about the workings of decision algorithms might violate the proprietary rights of the companies that develop them. Transparency might help bad actors game the system,[9] or force us to make decision procedures that ought to be very complex problematically simple.[10] The second is that providing explanations for the outcomes of complex AI algorithms is simply impossible.
With regard to the first, I can only agree that these are valid concerns. But they do not make the goods of providing explanations go away. For each case, we will need to tally up the goods and the bads of providing the explanations and make our best guess as to how they weigh up. Conflicting rights and goods are a challenging business, but they are not unique to this issue, and there is no avoiding them.
The second argument concerns me less. For one thing, the claim that complex AI decisions defy meaningful explanation strikes me as undersupported, and I will dedicate my second contribution to this blog to arguing my case. But suppose AI decisions indeed cannot be explained. This would not affect the moral, social, or political value of explainability in high-stakes decision scenarios. Instead, it would mean that such AI algorithms should not be used in those scenarios at all.[11] If AI decisions defy explanation, the right to explanation would not lose its importance. Quite the contrary.
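One reason the impossibility claim looks undersupported: the work cited in note [11] argues that high-stakes decisions can often be served by inherently interpretable models, whose explanation just is the model itself. As a minimal sketch, assuming an invented point-based scoring rule (the conditions, point values, and cut-off below are mine, not taken from that paper or any real system), the decision and its explanation fall out of the same trace:

```python
# A toy point-based scoring rule in the spirit of the interpretable models
# defended in note [11]; the conditions and point values are invented.
RULES = [
    ("math grade of 7 or higher", lambda a: a["math_grade"] >= 7, 2),
    ("at least 50 service hours", lambda a: a["service_hours"] >= 50, 1),
    ("prerequisites completed",   lambda a: a["prerequisites_done"], 2),
]

def decide(applicant: dict, cutoff: int = 3) -> tuple[bool, list[str]]:
    """Return the decision together with the reasons that produced it."""
    fired = [(name, points) for name, test, points in RULES if test(applicant)]
    total = sum(points for _, points in fired)
    reasons = [f"+{points} points: {name}" for name, points in fired]
    return total >= cutoff, reasons

admitted, reasons = decide({"math_grade": 8, "service_hours": 60, "prerequisites_done": False})
print(admitted, reasons)  # True, and the explanation is simply the rule trace
```

Whether such simple models can match black-box accuracy in every domain is an open empirical question; the point here is only that "complex AI" and "explainable decision procedure" are not forced into opposition.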
So why did it take so long to introduce a robust right to explanation? I couldn’t really tell you. I’m just happy the EU AI Act finally contains a trace of a serious effort to protect such a right, and hope that other legislative bodies follow in its footsteps.
[1] https://www.scientificamerican.com/article/the-first-human-cloned-em/ (accessed July 24th, 2024)
[2] https://research.aimultiple.com/ai-usecases/#ai-use-cases-for-hr (accessed July 24th, 2024)
[3] https://www.aiact-info.eu/ (accessed July 24th, 2024)
[4] https://academic.oup.com/idpl/article/7/2/76/3860948?login=false (accessed July 24th, 2024)
[5] https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai (accessed February 17th, 2024)
[6] https://www.aiact-info.eu/article-86-right-to-explanation-of-individual-decision-making-2/ (accessed July 24th, 2024)
[7] https://www.harvardmagazine.com/2021/08/meredith-broussard-ai-bias-documentary (accessed July 24th, 2024)
[8] https://link.springer.com/article/10.1007/s13347-022-00577-5 (accessed July 24th, 2024)
[9] https://en.wikipedia.org/wiki/Campbell%27s_law (accessed July 24th, 2024)
[10] https://newworkinphilosophy.substack.com/p/c-thi-nguyen-university-of-utah-transparency (accessed July 24th, 2024)
[11] https://arxiv.org/abs/1811.10154 (accessed July 24th, 2024)