Conference paper. Year: 2021

What else is leaked when eavesdropping Federated Learning?

Abstract

In this paper, we initiate the study of local model reconstruction attacks for federated learning, where an honest-but-curious adversary eavesdrops on the messages exchanged between the client and the server and reconstructs the client's local model. The success of this attack enables better performance of other known attacks, such as membership inference and attribute inference attacks. We provide analytical guarantees for the success of this attack when training a linear least squares problem with full batch size and an arbitrary number of local steps. We also propose a heuristic to generalize the attack to other machine learning problems. Experiments conducted on logistic regression tasks show high reconstruction quality, especially when clients' datasets are highly heterogeneous (as is common in federated learning).
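The mechanics behind the analytical result can be illustrated concretely. Below is a minimal sketch, not the authors' implementation: it assumes a FedAvg-style exchange in which the eavesdropper observes both the model sent to a client and the updated model the client returns. For full-batch gradient descent on linear least squares, each local pass is an affine map of the incoming model, so enough observed (input, output) pairs determine the map, and its fixed point is the client's local optimum. All names, dimensions, and the random probing inputs are illustrative assumptions; in a real run the pairs would come from successive training rounds.

# Hypothetical sketch (not the paper's code): reconstruct a client's local
# model from eavesdropped (incoming model, returned model) pairs, assuming
# full-batch gradient descent on a linear least-squares objective.
import numpy as np

rng = np.random.default_rng(0)
d, n, K, eta = 5, 100, 3, 0.05   # model dim, samples, local steps, step size (illustrative)

# Client's private data; local objective f(w) = ||X w - y||^2 / (2 n).
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

def client_update(w):
    # K full-batch GD steps; each step is affine in w: w <- (I - eta*A) w + eta*b,
    # with A = X^T X / n and b = X^T y / n, so the whole K-step pass is affine too.
    for _ in range(K):
        w = w - eta * (X.T @ (X @ w - y)) / n
    return w

# Eavesdropper's observations: pairs (w_in, w_out). Random probes stand in
# here for the models actually exchanged over successive rounds.
pairs = [(w, client_update(w)) for w in rng.normal(size=(d + 1, d))]
W_in = np.stack([np.append(w, 1.0) for w, _ in pairs])  # trailing 1 captures the offset c
W_out = np.stack([w_out for _, w_out in pairs])

# Fit the affine map w_out = M w_in + c by least squares.
Mc = np.linalg.lstsq(W_in, W_out, rcond=None)[0]
M, c = Mc[:d].T, Mc[d]

# The local model is the map's fixed point: w* = M w* + c  =>  (I - M) w* = c.
w_reconstructed = np.linalg.solve(np.eye(d) - M, c)
w_true = np.linalg.lstsq(X, y, rcond=None)[0]  # client's actual least-squares solution
print(np.allclose(w_reconstructed, w_true))    # True: exact reconstruction

Fitting the affine map needs only d + 1 independent (input, output) pairs in this noiseless setting; with fewer or noisier observations, a least-squares fit over more rounds plays the same role.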
Main file: What_else_is_leaked_in_FL_ACM (2).pdf (592.55 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03364766, version 1 (04-10-2021)
hal-03364766, version 2 (05-10-2021)

Cite

Chuan Xu, Giovanni Neglia. What else is leaked when eavesdropping Federated Learning?. CCS Workshop on Privacy Preserving Machine Learning (PPML), Nov 2021, Nice, France. ⟨10.1145/1122445.1122456⟩. ⟨hal-03364766v1⟩
474 views
608 downloads
