Empirical Analysis of Bias in Federated Learning
Abstract
Federated Learning (FL) is a machine learning paradigm that allows distributed clients to collaboratively train a global model without sharing their local data, thereby preserving data privacy. However, it introduces new challenges, including the potential for models to exhibit bias against specific demographic groups. Motivated by this inherent issue, we conduct an extensive empirical analysis in which we measure FL bias through disparities in model quality and demographic parity. First, we evaluate the impact of data size and heterogeneity on FL model bias using four widely used datasets. Then, we analyze the effectiveness of state-of-the-art bias mitigation methods across these datasets. Our findings indicate that an increase in data size or heterogeneity level comes with an increase in FL bias. They also show that bias mitigation mechanisms are more effective on datasets exhibiting less FL bias.
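For reference, one standard way to quantify the demographic parity disparity mentioned above is sketched below; the notation $\Delta_{\mathrm{DP}}$ and the sensitive attribute $A$ are illustrative assumptions, and the paper's exact formulation may differ:

% Illustrative notation (not from the paper): $\hat{Y}$ is the model's prediction, $A$ the sensitive demographic attribute
\[
\Delta_{\mathrm{DP}} = \bigl|\Pr(\hat{Y} = 1 \mid A = 0) - \Pr(\hat{Y} = 1 \mid A = 1)\bigr|
\]

Under this reading, a larger $\Delta_{\mathrm{DP}}$ indicates a larger gap in positive prediction rates across demographic groups, i.e., more FL bias with respect to demographic parity.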