Meaningful Human Control to Detect Algorithmic Errors
Abstract
Is human control an effective way to detect algorithmic errors? For the CJEU, the systematic verification of algorithmic results by a human is an important safeguard to reduce the number of false positives, i.e. the number of people wrongly targeted by an algorithm designed to detect terrorism risks. But are we sure that human controls can actually detect these errors? What kinds of errors are we talking about? And how should human controls be organized to detect errors most effectively? Beyond the CJEU's 6 October 2020 ruling, many other legal instruments and court decisions require human control over algorithmic decisions: the EU's proposed AI Act, the GDPR, the Council of Europe's Convention 108+, the State of Washington's law on facial recognition, the EU's PNR Directive, international humanitarian law on lethal autonomous weapon systems, the CJEU's Opinion of 26 July 2017 on PNR passenger data, and the EU regulation on online terrorist content, among others.
From the standpoint of protecting individual rights, human control serves two functions. First, it helps reduce the number of algorithmic errors, the objective referred to by the CJEU in its 6 October 2020 decision. Second, it helps guarantee a procedure that is respectful of individual rights. For the first function, the value of human control depends solely on its success in reducing errors; it has a purely instrumental value. For the second function, human control has an intrinsic value of its own, related to the quality of the decision-making process for the people affected: process values. Having a human decision maker in the loop makes the process fairer and more respectful of human values, regardless of whether the human intervention actually reduces errors. This article focuses on the first function of human control, i.e. the correction of algorithmic errors. I will not discuss the other objectives linked to human control, including its role in allocating liability and in demonstrating compliance within an accountability framework.