Towards Formal Fairness in Machine Learning - Archive ouverte HAL
Conference paper, 2020

Towards Formal Fairness in Machine Learning

Abstract

One of the challenges of deploying machine learning (ML) systems is fairness. Datasets often include sensitive features, which ML algorithms may unwittingly use to create models that exhibit unfairness. Past work on fairness offers no formal guarantees on its results. This paper proposes to exploit formal reasoning methods to tackle fairness. Starting from an intuitive criterion for the fairness of an ML model, the paper formalises it and shows how fairness can be cast as a decision problem, given a logic representation of the ML model. The same criterion can also be applied to assess bias in training data. Moreover, we propose a reasonable set of axiomatic properties that this definition of dataset bias satisfies and that no alternative definition can. The paper also investigates the relationship between fairness and explainability, showing that approaches for computing explanations can serve to assess the fairness of particular predictions. Finally, the paper proposes SAT-based approaches for learning fair ML models, even when the training data exhibits bias, and reports on experimental trials.
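To make the "fairness as a decision problem" idea concrete, the sketch below checks whether there exists an input whose prediction changes when only a sensitive feature is flipped. This is a minimal illustration, not the paper's actual encoding: the `classifier` and `is_fair` names are hypothetical, and the decision problem is solved here by exhaustive enumeration over a toy Boolean model, whereas the paper poses it as a satisfiability query over a logic representation of the model.

```python
# Minimal sketch (not the paper's encoding): decide whether a Boolean
# classifier is fair w.r.t. a sensitive feature, i.e. whether some input
# exists whose prediction flips when ONLY the sensitive feature is flipped.

from itertools import product

def classifier(x):
    """Hypothetical toy model over Boolean features x = (x0, x1, x2).
    Feature x2 is sensitive and leaks into the decision, so the model
    is unfair by the criterion checked below."""
    x0, x1, x2 = x
    return (x0 and x1) or x2

def is_fair(model, n_features, sensitive):
    """Fairness as a decision problem: the model is fair iff no input
    exists where flipping the sensitive feature alone flips the output."""
    for x in product([False, True], repeat=n_features):
        x_flipped = list(x)
        x_flipped[sensitive] = not x_flipped[sensitive]
        if model(x) != model(tuple(x_flipped)):
            return False, x  # counterexample witnessing unfairness
    return True, None

fair, witness = is_fair(classifier, n_features=3, sensitive=2)
print("fair" if fair else f"unfair, witness: {witness}")
```

Enumeration is exponential in the number of features; the point of a logic encoding, as in the paper, is that the same existential query can be handed to a SAT solver, which decides it without enumerating all inputs.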

Dates and versions

hal-02950860 , version 1 (28-09-2020)

Identifiers

Cite

Alexey Ignatiev, Martin Cooper, Mohamed Siala, Emmanuel Hébrard, Joao Marques-Silva. Towards Formal Fairness in Machine Learning. 26th International Conference on Principles and Practice of Constraint Programming (CP 2020), Sep 2020, Louvain (online), Belgium. pp.846-867, ⟨10.1007/978-3-030-58475-7_49⟩. ⟨hal-02950860⟩