Conference Paper, 2021

What else is leaked when eavesdropping Federated Learning?

Abstract

In this paper, we initiate the study of local model reconstruction attacks in federated learning, where an honest-but-curious adversary eavesdrops on the messages exchanged between a client and the server and reconstructs the client's local model. A successful reconstruction improves the performance of other known attacks, such as membership inference and attribute inference attacks. We provide analytical guarantees for the success of this attack when training a linear least-squares problem with full batch size and an arbitrary number of local steps. We also propose a heuristic that generalizes the attack to other machine learning problems. Experiments on logistic regression tasks show high reconstruction quality, especially when clients' datasets are highly heterogeneous (as is common in federated learning).
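
The key observation behind the linear least-squares case is that a client running full-batch gradient descent applies an affine map to the incoming global model, so an eavesdropper who records enough (input, output) model pairs can identify that map and recover the client's local optimum as its fixed point. The NumPy sketch below is our own illustration of this idea, not the paper's exact algorithm; it assumes the observed client message is the client's updated weight vector.

# Sketch of local model reconstruction for linear least squares:
# with full-batch GD, the client's update is an affine map
# w_out = M w_in + c, so d + 1 eavesdropped rounds determine (M, c),
# and the client's local optimum is the fixed point of that map.
import numpy as np

rng = np.random.default_rng(0)
d, n, eta, local_steps = 5, 100, 0.01, 3

# Client's private least-squares problem: min_w ||X w - y||^2 / n
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
w_local = np.linalg.lstsq(X, y, rcond=None)[0]  # ground-truth local model

def client_update(w):
    # Several full-batch GD steps: an affine function of the input w.
    for _ in range(local_steps):
        w = w - eta * (X.T @ (X @ w - y)) / n
    return w

# Eavesdropper records (w_in, w_out) pairs over d + 1 communication rounds.
w_ins = [rng.normal(size=d) for _ in range(d + 1)]
w_outs = [client_update(w) for w in w_ins]

# Solve w_out = M w_in + c for M and c, stacking one equation per round.
A = np.stack([np.append(w, 1.0) for w in w_ins])   # shape (d+1, d+1)
B = np.stack(w_outs)                               # shape (d+1, d)
sol = np.linalg.lstsq(A, B, rcond=None)[0]         # rows: [M^T; c^T]
M, c = sol[:d].T, sol[d]

# The affine map's fixed point is GD's fixed point: the local minimizer.
w_hat = np.linalg.solve(np.eye(d) - M, c)
print("reconstruction error:", np.linalg.norm(w_hat - w_local))

With d-dimensional models, d + 1 observed rounds suffice to pin down the d^2 + d unknowns of (M, c); additional rounds yield an overdetermined system, still solvable in the least-squares sense.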
Main file

What_else_is_leaked_in_FL_ACM(1).pdf (454.63 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03364766, version 1 (04-10-2021)
hal-03364766, version 2 (05-10-2021)

Identifiers

HAL Id: hal-03364766
DOI: 10.1145/1122445.1122456

Cite

Chuan Xu, Giovanni Neglia. What else is leaked when eavesdropping Federated Learning?. CCS workshop Privacy Preserving Machine Learning (PPML), Nov 2021, Seoul, South Korea. ⟨10.1145/1122445.1122456⟩. ⟨hal-03364766v2⟩