Conference paper · Year: 2024

Privacy Attacks in Decentralized Learning

Abstract

Decentralized Gradient Descent (D-GD) allows a set of users to perform collaborative learning without sharing their data by iteratively averaging local model updates with their neighbors in a network graph. The absence of direct communication between non-neighbor nodes might lead to the belief that users cannot infer precise information about the data of others. In this work, we demonstrate the opposite, by proposing the first attack against D-GD that enables a user (or set of users) to reconstruct the private data of other users outside their immediate neighborhood. Our approach is based on a reconstruction attack against the gossip averaging protocol, which we then extend to handle the additional challenges raised by D-GD. We validate the effectiveness of our attack on real graphs and datasets, showing that the number of users compromised by a single or a handful of attackers is often surprisingly large. We empirically investigate some of the factors that affect the performance of the attack, namely the graph topology, the number of attackers, and their position in the graph.
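To illustrate the linear-algebraic idea behind the gossip reconstruction attack described above: in gossip averaging, the state evolves as x_t = W^t x_0 with a public gossip matrix W, so every value an attacker observes is a known linear function of the private inputs x_0, and stacking enough observations yields a solvable linear system. The following is a minimal sketch of our own, not the authors' code; the path graph, Metropolis weights, synchronous rounds, and all variable names are illustrative assumptions.

```python
import numpy as np

# Path graph 0-1-2-3-4 with Metropolis weights; the attacker sits at node 0
# and only ever exchanges messages with node 1 (illustrative setup).
n = 5
W = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    W[i, j] = W[j, i] = 1.0 / 3.0
np.fill_diagonal(W, 1.0 - W.sum(axis=1))  # doubly stochastic gossip matrix

rng = np.random.default_rng(42)
x0 = rng.normal(size=n)            # private scalar value held by each node

# Over T gossip rounds, node 0 observes its own value and its neighbor's.
# Each observation equals (e_i^T W^t) x0, i.e. a known row applied to x0.
T = 5
rows, obs = [], []
x = x0.copy()
Wt = np.eye(n)
for _ in range(T):
    for i in (0, 1):               # node 0 itself and its only neighbor
        rows.append(Wt[i])         # row i of W^t (publicly computable)
        obs.append(x[i])           # value actually seen in this round
    x = W @ x                      # one synchronous gossip round
    Wt = W @ Wt

# Solve the (over)determined linear system to recover all private inputs,
# including those of nodes 2, 3, 4, which never talk to the attacker.
x0_rec, *_ = np.linalg.lstsq(np.array(rows), np.array(obs), rcond=None)
print(np.allclose(x0_rec, x0, atol=1e-8))   # True: exact reconstruction
```

In D-GD, local gradient steps are interleaved with the averaging, so the observed values are no longer a fixed linear function of the inputs; handling this is one of the additional challenges the paper's extended attack addresses.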

Dates and versions

hal-04610652, version 1 (13-06-2024)

Identifiers

HAL Id: hal-04610652
DOI: 10.48550/arXiv.2402.10001

Cite

Abdellah El Mrini, Edwige Cyffers, Aurélien Bellet. Privacy Attacks in Decentralized Learning. ICML 2024 - Forty-first International Conference on Machine Learning, Jul 2024, Vienna, Austria. ⟨10.48550/arXiv.2402.10001⟩. ⟨hal-04610652⟩