LimeOut: An Ensemble Approach To Improve Process Fairness
Abstract
Artificial Intelligence and Machine Learning are becoming increasingly present in several aspects of human life, especially those dealing with decision making. Many of these algorithmic decisions are made without human supervision, through decision-making processes that are not transparent. This raises concerns about the potential bias of these processes towards certain groups of society, which may entail unfair results and, possibly, violations of human rights. Dealing with such biased models is a major concern for maintaining public trust. In this paper, we address the question of process or procedural fairness. More precisely, we consider the problem of making classifiers fairer by reducing their dependence on sensitive features while increasing (or at least maintaining) their accuracy. To achieve both goals, we draw inspiration from "dropout" techniques in neural-network-based approaches and propose a framework that relies on "feature dropout" to tackle process fairness. We make use of LIME explanations to assess a classifier's fairness and to determine the sensitive features to remove. This produces a pool of classifiers (through feature dropout) whose ensemble is shown empirically to be less dependent on sensitive features while improving, or at least maintaining, accuracy.
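The following is a minimal sketch of the workflow the abstract describes, assuming a scikit-learn-style classifier, an all-numeric feature matrix, and the `lime` package. The sample size, the top-k cutoff for "influential" features, and ensembling by averaged probabilities are illustrative choices, not the authors' exact implementation.

```python
# Minimal sketch (not the authors' implementation): use LIME to rank global
# feature influence, drop sensitive features that rank highly, and combine
# the resulting classifiers by averaging their predicted probabilities.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.base import clone


def global_importance(model, X, feature_names, n_samples=50, seed=0):
    """Average absolute LIME weights over a random sample of instances.

    Assumes an all-numeric X so that, with discretization disabled, the
    LIME explanation entries are plain feature names."""
    explainer = LimeTabularExplainer(
        X, feature_names=feature_names, discretize_continuous=False
    )
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(n_samples, len(X)), replace=False)
    totals = {name: 0.0 for name in feature_names}
    for i in idx:
        exp = explainer.explain_instance(
            X[i], model.predict_proba, num_features=len(feature_names)
        )
        for name, weight in exp.as_list():
            totals[name] += abs(weight)
    return {name: total / len(idx) for name, total in totals.items()}


def limeout_ensemble(base_model, X, y, feature_names, sensitive, top_k=10):
    """Train one variant per influential sensitive feature (that feature
    dropped) plus one with all of them dropped; predict by averaging."""
    base_model.fit(X, y)
    scores = global_importance(base_model, X, feature_names)
    ranked = sorted(scores, key=scores.get, reverse=True)[:top_k]
    to_drop = [f for f in sensitive if f in ranked]
    if not to_drop:  # no sensitive feature is influential: keep the original
        return base_model.predict_proba

    drop_sets = [[f] for f in to_drop]
    if len(to_drop) > 1:  # also drop all influential sensitive features at once
        drop_sets.append(to_drop)

    members = []
    for drop_set in drop_sets:
        keep = [i for i, n in enumerate(feature_names) if n not in drop_set]
        member = clone(base_model).fit(X[:, keep], y)
        members.append((member, keep))

    def predict_proba(X_new):
        return np.mean(
            [m.predict_proba(X_new[:, keep]) for m, keep in members], axis=0
        )

    return predict_proba
```

Averaging probabilities is one simple way to realize the ensemble; the point of the construction is that no single sensitive feature can dominate the combined decision, since each member either omits it or omits all of them.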