Mitigating Gender Bias of Pre-Trained Face Recognition Models with an Ethical Module - Archive ouverte HAL
Conference paper, Year: 2022

Mitigating Gender Bias of Pre-Trained Face Recognition Models with an Ethical Module

Abstract

Despite the high performance and reliability of deep learning algorithms in a wide range of everyday applications, many investigations have shown that numerous models exhibit biases, discriminating against specific subgroups of the population (e.g. gender, ethnicity). This urges practitioners to develop fair systems with uniform, comparable performance across sensitive groups. In this work, we investigate the gender bias of deep Face Recognition networks. To measure this bias, we introduce two new metrics, BFAR and BFRR, which better reflect the inherent deployment needs of Face Recognition systems. Motivated by geometric considerations, we mitigate gender bias through a new post-processing methodology that transforms the deep embeddings of a pre-trained model to give more representation power to discriminated subgroups. It consists of training a shallow neural network by minimizing a Fair von Mises-Fisher loss whose hyperparameters account for the intra-class variance of each gender. Interestingly, we empirically observe that these hyperparameters are correlated with our fairness metrics, and a careful selection of them significantly reduces gender bias. A previous version of this paper was accepted at ICML 2022.
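To make the post-processing idea concrete, below is a minimal PyTorch sketch, assuming a frozen pre-trained encoder that outputs 512-dimensional embeddings. The class names (EthicalModule, FairVMFLoss) and the per-gender concentration hyperparameters kappa_f and kappa_m are hypothetical placeholders for the quantities mentioned in the abstract; the loss below drops the von Mises-Fisher normalizing constant, so it reduces to a scaled-cosine cross-entropy and is only an approximation of the paper's exact formulation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EthicalModule(nn.Module):
        """Shallow network applied on top of frozen pre-trained face embeddings."""
        def __init__(self, dim=512, hidden=1024):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
            )

        def forward(self, x):
            # Re-project the transformed embedding onto the unit hypersphere.
            return F.normalize(self.net(x), dim=1)

    class FairVMFLoss(nn.Module):
        """Simplified fair von Mises-Fisher-style loss (sketch).

        Each identity j has a learnable mean direction mu_j on the sphere; the
        concentration kappa applied to a sample depends on the gender of its
        identity (kappa_f, kappa_m). The vMF normalizing constant is omitted,
        so this is only an approximation of the loss described in the paper.
        """
        def __init__(self, dim, num_ids, kappa_f=20.0, kappa_m=30.0):
            super().__init__()
            self.mu = nn.Parameter(torch.randn(num_ids, dim))
            self.register_buffer("kappas", torch.tensor([kappa_f, kappa_m]))

        def forward(self, z, identity, gender):
            # Cosine similarity between each (normalized) embedding and each
            # identity centroid, scaled by the gender-specific concentration.
            cos = z @ F.normalize(self.mu, dim=1).t()
            logits = self.kappas[gender].unsqueeze(1) * cos
            return F.cross_entropy(logits, identity)

    # Usage sketch (frozen_embeddings: (B, 512) tensor from the pre-trained model):
    #   module = EthicalModule(dim=512)
    #   criterion = FairVMFLoss(dim=512, num_ids=10_000, kappa_f=20.0, kappa_m=30.0)
    #   loss = criterion(module(frozen_embeddings), identity_labels, gender_labels)

In such a setup, only the shallow network and the identity centroids would be trained while the pre-trained backbone stays frozen; tuning kappa_f and kappa_m then acts as the fairness lever discussed in the abstract.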
No file deposited

Identifiers

  • HAL Id: hal-03773368, version 1

Cite

Jean-Rémy Conti, Nathan Noiry, Vincent Despiegel, Stéphane Gentric, Stephan Clémençon. Mitigating Gender Bias of Pre-Trained Face Recognition Models with an Ethical Module. Workshop on Trustworthy Artificial Intelligence, part of the ECML/PKDD 2022 program, IRT SystemX, Sep 2022, Grenoble, France. ⟨hal-03773368⟩
