Making ML models fairer through explanations: the case of LimeOut - Archive ouverte HAL
Conference paper, Year: 2020

Making ML models fairer through explanations: the case of LimeOut

Guilherme Alves
Vaishnavi Bhargava
Miguel Couceiro
Amedeo Napoli

Abstract

Algorithmic decisions are now made on a daily basis and rely on Machine Learning (ML) processes that may be complex and biased. This raises several concerns given the critical impact that biased decisions may have on individuals and on society as a whole. Not only do unfair outcomes affect human rights, they also undermine public trust in ML and AI. In this paper we address fairness issues of ML models based on decision outcomes, and we show how the simple idea of "feature dropout" followed by an "ensemble approach" can improve model fairness. To illustrate, we revisit the case of "LimeOut", which was proposed to tackle "process fairness": a measure of a model's reliance on sensitive or discriminatory features. Given a classifier, a dataset, and a set of sensitive features, LimeOut first assesses whether the classifier is fair by checking its reliance on sensitive features using "Lime explanations". If deemed unfair, LimeOut then applies feature dropout to obtain a pool of classifiers. These are then combined into an ensemble classifier that was empirically shown to be less dependent on sensitive features without compromising accuracy. We present experiments on multiple datasets and several state-of-the-art classifiers, which show that LimeOut's classifiers improve (or at least maintain) not only process fairness but also other fairness metrics such as individual and group fairness, equal opportunity, and demographic parity.
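The abstract outlines LimeOut's two mechanisms: feature dropout and ensembling. Below is a minimal sketch of those two steps, assuming scikit-learn and purely numeric features. The function names `feature_dropout_pool` and `ensemble_predict_proba` are illustrative, not the authors' API, and the LIME-based assessment that decides whether dropout is needed at all is abbreviated to a comment.

```python
import numpy as np
from sklearn.base import clone
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


def feature_dropout_pool(base_clf, X, y, sensitive_idx):
    """Feature dropout: train one copy of the classifier per dropped
    sensitive feature, plus one copy with all sensitive features dropped."""
    drop_sets = [[i] for i in sensitive_idx]
    if len(sensitive_idx) > 1:
        drop_sets.append(list(sensitive_idx))
    pool = []
    for drop in drop_sets:
        keep = [j for j in range(X.shape[1]) if j not in drop]
        clf = clone(base_clf).fit(X[:, keep], y)
        pool.append((clf, keep))
    return pool


def ensemble_predict_proba(pool, X):
    """Ensemble step: average the probability outputs of the pool."""
    probs = [clf.predict_proba(X[:, keep]) for clf, keep in pool]
    return np.mean(probs, axis=0)


# Toy usage on synthetic data; column 0 plays the role of a sensitive
# feature. In LimeOut proper, a LIME-based check on the original
# classifier's explanations would first decide whether dropout is
# needed; that step is omitted here.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 1] + X[:, 2] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pool = feature_dropout_pool(RandomForestClassifier(random_state=0),
                            X_tr, y_tr, sensitive_idx=[0])
y_pred = ensemble_predict_proba(pool, X_te).argmax(axis=1)
print("ensemble accuracy:", (y_pred == y_te).mean())
```

Since the ensemble averages probabilities from models that never saw the dropped features, its output cannot depend directly on them, which is the intuition behind the improved process fairness reported in the abstract.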
Main file
LimeOut_AIST2020-Final.pdf (406.76 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02864059, version 1 (10-06-2020)
hal-02864059, version 2 (11-06-2020)
hal-02864059, version 3 (30-07-2020)
hal-02864059, version 4 (08-10-2020)
hal-02864059, version 5 (27-10-2020)

Identifiers

  • HAL Id: hal-02864059, version 5

Cite

Guilherme Alves, Vaishnavi Bhargava, Miguel Couceiro, Amedeo Napoli. Making ML models fairer through explanations: the case of LimeOut. 9th International Conference on Analysis of Images, Social Networks, and Texts 2020 (AIST 2020), Oct 2020, Moscow, Russia. ⟨hal-02864059v5⟩
423 Views
669 Downloads
