Conference Paper, Year: 2023

Empirical Analysis of Bias in Federated Learning

Abstract

Federated Learning (FL) is a machine learning paradigm that allows distributed clients to collaboratively train a global model without having to share their local data, thereby preserving data privacy. However, it presents new challenges, including the potential for models to exhibit bias towards specific demographic groups. Motivated by this inherent issue, we conduct an extensive empirical analysis in which we measure FL bias through disparity in model quality and through demographic parity. First, we run an empirical evaluation on four widely used datasets to assess the impact of data size and heterogeneity on FL model bias. Then, we analyze the actual effectiveness of state-of-the-art bias mitigation methods on different datasets. Our findings reveal that an increase in data size or heterogeneity level comes with an increase in FL bias. They also show that bias mitigation mechanisms are more effective for datasets with less FL bias.
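For reference, demographic parity is commonly formulated as the gap in positive-prediction rates across demographic groups; the sketch below gives this standard textbook definition, not necessarily the exact disparity metric used in the paper.

% Standard formulation of the demographic parity gap (assumption: binary
% classification with a sensitive attribute A taking values a and b).
\[
  \mathrm{DP\text{-}gap} \;=\;
  \bigl|\, \Pr[\hat{Y} = 1 \mid A = a] \;-\; \Pr[\hat{Y} = 1 \mid A = b] \,\bigr|
\]
% \hat{Y} is the model's prediction and A the demographic group attribute;
% a gap of 0 means the positive-prediction rate is identical across groups.

Under this reading, a larger DP-gap on the global FL model indicates stronger bias against one of the groups.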
No file deposited

Dates and versions

hal-04394744, version 1 (15-01-2024)

Identifiers

  • HAL Id: hal-04394744, version 1

Cite

Nawel Benarba, Sara Bouchenak. Empirical Analysis of Bias in Federated Learning. Conférence francophone d'informatique en Parallélisme, Architecture et Système, Jul 2023, Annecy, France. ⟨hal-04394744⟩