Operational fairness when coding facial authentication
Abstract
When dealing with machine learning, engineers tend to focus on improving certain aspects of their system's performance, such as efficiency, possibly dismissing other important criteria, like fairness. This mindset can have dreadful consequences for companies as well as for end users and may yield discrimination, for instance when it results in automated facial recognition systems that work better for white men than for women of color (Buolamwini & Gebru, 2018). Researchers have long reduced fairness to a data issue: if the training data is unbalanced, the system is quite likely to be biased. But this belief overlooks other parameters and coding choices that are also likely to affect fairness. Which coding choices really affect fairness, and what are the trade-offs with efficiency? In this paper, focusing on facial recognition, various choices are considered regarding data sampling, normalization and augmentation, neural network depth, loss function margin, learning rate, and the authentication threshold. All of these choices have been tested against different metrics for efficiency and fairness. The results show that all of them affect fairness to varying degrees. The best choice for fairness is not always the best for efficiency, and trade-offs are sometimes necessary. Ethical discussions should therefore accompany the design of machine learning systems, making such conflicts explicit and guiding decisions during coding and software maintenance.
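As an illustration of the kind of evaluation the abstract describes, the sketch below shows one possible way to measure how the authentication threshold trades efficiency against fairness: for a set of genuine-pair similarity scores split by demographic group, it computes the overall false non-match rate (FNMR) and the largest inter-group FNMR gap at several thresholds. This is a minimal sketch, not the paper's own code or metric; the group names, score distributions, and function names are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): measure how an
# authentication threshold affects overall error (efficiency) and the
# inter-group error gap (fairness) for a face-verification system.
import numpy as np

def fnmr(genuine_scores: np.ndarray, threshold: float) -> float:
    """Fraction of genuine pairs rejected at the given similarity threshold."""
    return float(np.mean(genuine_scores < threshold))

def fairness_gap(scores_by_group: dict, threshold: float):
    """Return (overall FNMR, max inter-group FNMR gap) at one threshold."""
    per_group = {g: fnmr(s, threshold) for g, s in scores_by_group.items()}
    overall = fnmr(np.concatenate(list(scores_by_group.values())), threshold)
    gap = max(per_group.values()) - min(per_group.values())
    return overall, gap

# Illustrative synthetic similarity scores for genuine pairs, one array per group.
rng = np.random.default_rng(0)
scores = {
    "group_a": rng.normal(0.75, 0.10, 1000),
    "group_b": rng.normal(0.70, 0.12, 1000),
}
for t in (0.5, 0.6, 0.7):
    overall, gap = fairness_gap(scores, t)
    print(f"threshold={t:.2f}  FNMR={overall:.3f}  inter-group gap={gap:.3f}")
```

In such a setup, raising the threshold typically lowers false matches but increases false non-matches, and the groups' error rates can diverge, which is one way the efficiency/fairness trade-off mentioned above becomes concrete.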
Domains
Artificial intelligence [cs.AI]
Main file
Operational_fairness_when_coding_facial_authentication_GORNET_KIRCHNER_TESSIER.pdf (1.16 MB)
Origin: Files produced by the author(s)