Conference paper, 2023

Differential Privacy has Bounded Impact on Fairness in Classification

Abstract

We theoretically study the impact of differential privacy on fairness in classification. We prove that, given a class of models, popular group fairness measures are pointwise Lipschitz-continuous with respect to the parameters of the model. This result is a consequence of a more general statement on accuracy conditioned on an arbitrary event (such as membership in a sensitive group), which may be of independent interest. We use this Lipschitz property to prove a non-asymptotic bound showing that, as the number of samples increases, the fairness level of private models approaches that of their non-private counterparts. This bound also highlights the role of a model's confidence margin in the disparate impact of differential privacy.
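As a schematic illustration of these two results (the notation here is assumed for exposition and is not the paper's exact statement: $F$ denotes a group fairness measure, $\theta$ the parameters of a model in the class, $L(\theta)$ a pointwise Lipschitz constant that grows as the confidence margin shrinks, and $\hat{\theta}_{\mathrm{priv}}$, $\hat{\theta}$ the private and non-private learned models):

$$ |F(\theta) - F(\theta')| \;\le\; L(\theta)\, \lVert \theta - \theta' \rVert \quad \text{(pointwise Lipschitz continuity of fairness),} $$

so that whenever private training outputs $\hat{\theta}_{\mathrm{priv}}$ close to the non-private $\hat{\theta}$,

$$ \big| F(\hat{\theta}_{\mathrm{priv}}) - F(\hat{\theta}) \big| \;\le\; L(\hat{\theta})\, \lVert \hat{\theta}_{\mathrm{priv}} - \hat{\theta} \rVert, $$

which vanishes as the sample size grows, since private learners recover the non-private solution at a rate decreasing in the number of samples.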
Main file: paper.pdf (870.61 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03902203, version 1 (27-01-2023)
hal-03902203, version 2 (18-09-2023)

Identifiers

HAL Id: hal-03902203

Cite

Paul Mangold, Michaël Perrot, Aurélien Bellet, Marc Tommasi. Differential Privacy has Bounded Impact on Fairness in Classification. International Conference on Machine Learning, Jul 2023, Honolulu, United States. ⟨hal-03902203v2⟩