Conference paper, 2019

Obtaining fairness using optimal transport theory

Abstract

Statistical algorithms increasingly support decisions in many aspects of our lives. But how do we know whether such an algorithm is biased and unfairly discriminates against a particular group of people, typically a minority? Fairness is generally studied in a probabilistic framework in which a protected variable is assumed to exist, whose use as an input of the algorithm may entail discrimination. Several definitions of fairness appear in the literature; in this paper we focus on two of them, Disparate Impact (DI) and Balanced Error Rate (BER), both based on the algorithm's outcomes across the groups determined by the protected variable, and we study the relationship between these two notions. The goals of this paper are to detect when a binary classification rule lacks fairness and to mitigate the potential discrimination attributable to it. This can be done by modifying either the classifier or the data itself; our work falls into the second category and modifies the input data using optimal transport theory.
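By way of illustration, the sketch below (Python; the function names and the toy data are ours, not the paper's) computes the two fairness criteria for a binary classifier and performs a one-dimensional "total repair" of a feature toward the Wasserstein-2 barycenter of the two group-conditional distributions. In one dimension the barycenter reduces to a weighted average of quantile functions, which is one standard way to realize an optimal-transport repair; the paper's actual procedure covers more general settings, so treat this only as a minimal sketch under these simplifying assumptions.

```python
import numpy as np

def disparate_impact(y_pred, s):
    """DI = P(g(X)=1 | S=0) / P(g(X)=1 | S=1), with S=0 the protected group.
    The classical four-fifths rule flags DI < 0.8 as potentially discriminatory."""
    return y_pred[s == 0].mean() / y_pred[s == 1].mean()

def balanced_error_rate(y_pred, s):
    """BER of g(X) viewed as a predictor of S: the average of the two
    group-conditional error rates. Values near 1/2 mean the outcome carries
    little information about the protected variable."""
    return 0.5 * ((y_pred[s == 0] == 1).mean() + (y_pred[s == 1] == 0).mean())

def total_repair_1d(x, s):
    """Push both group-conditional distributions of a scalar feature onto
    their Wasserstein-2 barycenter: in 1-D the barycenter's quantile function
    is the group-frequency-weighted average of the groups' quantile functions,
    so each point moves to that average evaluated at its within-group rank."""
    pi0, pi1 = (s == 0).mean(), (s == 1).mean()
    x_rep = np.empty_like(x, dtype=float)
    for grp in (0, 1):
        mask = s == grp
        xg = x[mask]
        # empirical CDF level of each point within its own group, in (0, 1]
        t = (np.argsort(np.argsort(xg)) + 1) / len(xg)
        # weighted average of the two groups' empirical quantiles at level t
        x_rep[mask] = pi0 * np.quantile(x[s == 0], t) + pi1 * np.quantile(x[s == 1], t)
    return x_rep

# Toy illustration: a feature shifted across groups yields a low DI; repairing
# the feature before thresholding removes most of the disparity.
rng = np.random.default_rng(0)
s = rng.binomial(1, 0.7, 5000)                  # S=1 majority, S=0 protected
x = rng.normal(loc=np.where(s == 1, 1.0, 0.0))  # biased scalar feature
g = lambda z: (z > 0.8).astype(int)             # fixed threshold classifier
print("DI  before repair:", disparate_impact(g(x), s))
print("BER before repair:", balanced_error_rate(g(x), s))
x_rep = total_repair_1d(x, s)
print("DI  after  repair:", disparate_impact(g(x_rep), s))
```

Because the barycenter of one-dimensional distributions is obtained by quantile averaging, no general optimal-transport solver is needed in this sketch; for multivariate features a genuine transport computation would take the place of the hypothetical `total_repair_1d`.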

Dates and versions

hal-01806912, version 1 (25-06-2018)

Licence

Copyright

Identifiers

  • HAL Id: hal-01806912, version 1

Cite

Eustasio del Barrio, Fabrice Gamboa, Paula Gordaliza, Jean-Michel Loubes. Obtaining fairness using optimal transport theory. International Conference on Machine Learning, Jun 2019, Los Angeles, United States. ⟨hal-01806912⟩