Personalized Privacy-Preserving Federated Learning
Abstract
Federated Learning (FL) enables collaborative model training among several participants while keeping local data private. However, FL remains vulnerable to membership inference attacks (MIAs), which allow adversaries to infer whether specific records were part of a participant's training data. Existing defense mechanisms against MIAs degrade model performance and utility, and incur significant overheads. In this paper, we propose DINAR, a novel FL middleware for privacy-preserving neural networks that addresses these issues. DINAR leverages personalized FL and follows a fine-grained approach that specifically protects the neural network layers that leak more private information than the others, thus efficiently shielding the FL model against MIAs in a non-intrusive way while compensating for any potential loss in model accuracy. The paper presents our extensive empirical evaluation of DINAR, conducted with six widely used datasets and four neural networks, and comparing against five state-of-the-art FL privacy protection mechanisms. The evaluation results show that DINAR reduces the membership inference attack success rate to its optimal value, without hurting model accuracy and without inducing computational overhead. In contrast, existing FL defense mechanisms incur overheads of up to +35% on FL client-side and up to +3,000% on FL server-side computation times.
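The abstract describes DINAR's core idea only at a high level: personalized FL in which the layers that leak the most private information are kept local rather than shared with the server. The sketch below illustrates that general idea under stated assumptions; the toy model, the choice of `classifier` as the privacy-sensitive layer, and the plain FedAvg aggregation are illustrative placeholders, not DINAR's actual design.

```python
from collections import OrderedDict

import torch
import torch.nn as nn

# Layer name prefixes assumed (hypothetically) to leak the most membership
# information; their parameters stay on the client and are never shared.
PRIVATE_LAYERS = ("classifier",)

def make_model() -> nn.Module:
    # Toy model with named layers so parameters can be filtered by prefix.
    return nn.Sequential(OrderedDict([
        ("features", nn.Linear(32, 64)),
        ("act", nn.ReLU()),
        ("classifier", nn.Linear(64, 2)),  # kept local (personalized)
    ]))

def shared_state(model: nn.Module) -> dict:
    # Parameters shared with the server: everything except private layers.
    return {k: v.detach().clone()
            for k, v in model.state_dict().items()
            if not k.startswith(PRIVATE_LAYERS)}

def local_update(model, global_state, data, target) -> dict:
    # Load the aggregated shared layers; strict=False leaves the client's
    # private layers untouched. Then run one local training step.
    model.load_state_dict(global_state, strict=False)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss = nn.functional.cross_entropy(model(data), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return shared_state(model)

def fedavg(states) -> dict:
    # Uniform FedAvg over the shared (non-private) parameters only.
    return {k: torch.stack([s[k] for s in states]).mean(dim=0)
            for k in states[0]}

# One toy federation: two clients, three rounds, random data.
clients = [make_model() for _ in range(2)]
global_state = shared_state(clients[0])
for _ in range(3):
    updates = [local_update(m, global_state,
                            torch.randn(8, 32), torch.randint(0, 2, (8,)))
               for m in clients]
    global_state = fedavg(updates)
print("shared keys:", sorted(global_state))  # no 'classifier.*' keys leave the client
```

In this sketch, the private layer never leaves the client, so the server-side attack surface for membership inference is reduced, while each client retains a personalized head, consistent with the abstract's claim of compensating for potential accuracy loss.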
Domains
Computer Science [cs]
Main file
Personalized Privacy-Preserving Federated Learning.pdf (1.76 MB)
Origin
Publication funded by an institution