Journal article in Data Science and Engineering, Year: 2020

Achieving Fairness with Decision Trees: An Adversarial Approach

Vincent Grari
Boris Ruf
Sylvain Lamprier
Marcin Detyniecki

Abstract

Fair classification has become an important topic in machine learning research. While most bias mitigation strategies focus on neural networks, we noticed a lack of work on fair classifiers based on decision trees, even though they have proven very efficient. In an up-to-date comparison of state-of-the-art classification algorithms on tabular data, tree boosting outperforms deep learning (Zhang et al. in Expert Syst Appl 82:128–150, 2017). For this reason, we have developed a novel approach to adversarial gradient tree boosting. The objective of the algorithm is to predict the output Y with gradient tree boosting while minimizing the ability of an adversarial neural network to predict the sensitive attribute S. At each iteration, the approach incorporates the gradient of the adversarial neural network directly into the gradient tree boosting. We empirically assess our approach on four popular data sets and compare it against state-of-the-art algorithms. The results show that our algorithm achieves higher accuracy while obtaining the same level of fairness, as measured by a set of common fairness definitions.
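To make the idea in the abstract concrete, the following is a minimal sketch of how an adversary's gradient can be folded into the pseudo-residuals of a boosting round. It is not the authors' implementation: the function name adversarial_gbt, the hyperparameters n_rounds, lr, lambda_adv and max_depth, and the simple logistic-regression adversary used here in place of the paper's adversarial neural network are all assumptions made for illustration.

    # Illustrative sketch only, assuming binary y and binary sensitive attribute s.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def adversarial_gbt(X, y, s, n_rounds=100, lr=0.1, lambda_adv=1.0, max_depth=3):
        # Boost regression trees to predict y while penalising any information
        # about the sensitive attribute s carried by the additive score F(x).
        f = np.zeros(len(y))      # current additive score F(x)
        a, b = 0.0, 0.0           # toy adversary: s ~ sigmoid(a * F(x) + b)
        trees = []
        for _ in range(n_rounds):
            # 1) one gradient-ascent step for the adversary on its log-likelihood
            p_s = sigmoid(a * f + b)
            a += 0.1 * np.mean((s - p_s) * f)
            b += 0.1 * np.mean(s - p_s)
            # 2) pseudo-residuals: negative gradient of the log-loss on y ...
            grad_y = y - sigmoid(f)
            # ... minus the adversary's gradient w.r.t. the score, so the next
            # tree also pushes F(x) towards being uninformative about s
            grad_s = a * (s - sigmoid(a * f + b))
            residuals = grad_y - lambda_adv * grad_s
            # 3) fit the next tree on the combined pseudo-residuals
            tree = DecisionTreeRegressor(max_depth=max_depth)
            tree.fit(X, residuals)
            f += lr * tree.predict(X)
            trees.append(tree)
        return trees, f

    # Example usage (hypothetical data): trees, scores = adversarial_gbt(X_train, y_train, s_train)
    # Final predictions: (sigmoid(scores) > 0.5).astype(int)

The hyperparameter lambda_adv plays the usual role of a fairness/accuracy trade-off knob: larger values weight the adversary's gradient more heavily in each boosting round.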

Dates and versions

hal-03923322, version 1 (04-01-2023)

Identifiers

Cite

Vincent Grari, Boris Ruf, Sylvain Lamprier, Marcin Detyniecki. Achieving Fairness with Decision Trees: An Adversarial Approach. Data Science and Engineering, 2020, 5 (2), pp.99-110. ⟨10.1007/s41019-020-00124-2⟩. ⟨hal-03923322⟩