Journal article — EURO journal on decision processes, 2023

Fairness and explainability in automatic decision-making systems. A challenge for computer science and law

Abstract

The paper contributes to the interdisciplinary analysis of fairness issues in automatic algorithmic decisions. Section 1 shows that technical choices in supervised learning have social implications that need to be considered. Section 2 proposes a contextual approach to the issue of unintended group discrimination, i.e. decision rules that are facially neutral but generate disproportionate impacts across social groups (e.g., gender, race or ethnicity). The contextualization focuses on the legal systems of the United States on the one hand and Europe on the other; in particular, legislation and case law tend to promote different standards of fairness on the two sides of the Atlantic. Section 3 is devoted to the explainability of algorithmic decisions; it confronts and attempts to cross-reference legal concepts (in European and French law) with technical concepts, and highlights the plurality, even polysemy, of European and French legal texts relating to the explainability of algorithmic decisions. The conclusion proposes directions for further research.
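As a purely illustrative aside (not taken from the paper), the abstract's notion of a facially neutral rule with a disproportionate impact can be made concrete by comparing selection rates across groups, in the spirit of the four-fifths rule used in US employment-discrimination guidance. The following is a minimal Python sketch; the function names, group labels and data are hypothetical, not the authors' method.

def selection_rate(decisions):
    # Share of positive decisions (1 = selected, 0 = rejected).
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(decisions_by_group):
    # Ratio of the lowest group selection rate to the highest one;
    # a value below 0.8 is often read as a sign of adverse impact.
    rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical outcomes of an automated decision rule for two groups.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # selection rate 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 0, 1],   # selection rate 0.375
}

ratio, rates = disparate_impact_ratio(outcomes)
print(rates)            # {'group_a': 0.75, 'group_b': 0.375}
print(round(ratio, 2))  # 0.5, well below the 0.8 threshold

Here the rule itself never refers to group membership, yet the ratio of selection rates falls below the conventional 0.8 threshold, which is the kind of disproportionate impact the paper discusses.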
Main file: interfaire02.pdf (524.69 KB)
Origin: files produced by the author(s)

Dates and versions

hal-04239867 , version 1 (27-11-2022)
hal-04239867 , version 2 (12-10-2023)

Cite

Thierry Kirat, Olivia Tambou, Virginie Do, Alexis Tsoukias. Fairness and explainability in automatic decision-making systems. A challenge for computer science and law. EURO journal on decision processes, 2023, 11, pp.100036. ⟨10.1016/j.ejdp.2023.100036⟩. ⟨hal-04239867v2⟩