Quantifying fairness of federated learning LPPM models
Abstract
Artificial Intelligence offers great potential in the context of smart mobility, but it also raises the challenge of preserving users' privacy. Federated Learning (FL) has gained popularity as a privacy-friendly approach; however, an equally important aspect, rarely addressed in the literature, is its fairness. In this work, we audit an FL-based privacy-preserving model. We use entropy to measure similarity within the system's input data and compare it against the entropy of the output to detect unfair treatment.
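As a rough illustration of the entropy comparison described above, the sketch below estimates the Shannon entropy of an empirical input distribution and of the corresponding output distribution, and reports the gap between them. All names (shannon_entropy, fairness_gap), the binning scheme, and the synthetic data are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from scipy.stats import entropy


def shannon_entropy(values, bins=20):
    """Shannon entropy (bits) of an empirical distribution (hypothetical helper)."""
    counts, _ = np.histogram(values, bins=bins)
    probs = counts / counts.sum()
    return entropy(probs, base=2)


def fairness_gap(input_values, output_values, bins=20):
    """Compare the entropy of the system's inputs against that of its outputs.

    A large gap suggests that similar inputs (a homogeneous, low-entropy group)
    received dissimilar treatment in the output, which is flagged as a
    potential fairness issue. The metric and threshold are assumptions.
    """
    h_in = shannon_entropy(input_values, bins)
    h_out = shannon_entropy(output_values, bins)
    return abs(h_out - h_in)


# Illustrative usage with synthetic data standing in for per-user mobility
# features (inputs) and per-user utility after the LPPM (outputs).
rng = np.random.default_rng(0)
inputs = rng.normal(0.0, 1.0, size=1000)
outputs = rng.normal(0.0, 2.5, size=1000)
print(f"entropy gap: {fairness_gap(inputs, outputs):.3f}")
```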
Domains
Computer Science [cs]