Protection against Source Inference Attacks in Federated Learning using Unary Encoding and Shuffling
Abstract
Federated Learning (FL) enables clients to train a joint model without disclosing their local data. Instead, they share their local model updates with a central server that moderates the process and creates a joint model. However, FL is susceptible to a series of privacy attacks. Recently, the source inference attack (SIA) has been proposed, in which an honest-but-curious central server tries to identify exactly which client owns a specific data record.
In this work, we propose a defense against SIAs that uses a trusted shuffler, without compromising the accuracy of the joint model. We employ a combination of unary encoding and shuffling, which effectively blends all clients' model updates and prevents the central server from inferring information about any individual client's model update. To address the increased communication cost of unary encoding, we employ quantization. Our preliminary experiments show promising results: the proposed mechanism notably decreases the accuracy of SIAs without compromising the accuracy of the joint model.
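To make the pipeline concrete, below is a minimal, hypothetical Python sketch of the idea described above: each client quantizes its model update into discrete buckets, unary-encodes (one-hot) each quantized coordinate, and a trusted shuffler permutes the per-coordinate reports across clients before the server aggregates them, so the server only sees an anonymous multiset of reports. All function names and parameters here are illustrative assumptions, not the paper's actual implementation; in particular, the paper's scheme may add randomization to the unary bits, which this simplified sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(update, levels):
    """Map each coordinate of an update in [-1, 1] to one of
    `levels` uniform buckets (this addresses the communication
    cost of unary encoding, as in the abstract)."""
    clipped = np.clip(update, -1.0, 1.0)
    return np.floor((clipped + 1.0) / 2.0 * (levels - 1) + 0.5).astype(int)

def unary_encode(bucket, levels):
    """Unary (one-hot) encoding of a bucket index."""
    vec = np.zeros(levels, dtype=int)
    vec[bucket] = 1
    return vec

def client_report(update, levels):
    """A client's report: one unary vector per model coordinate."""
    return [unary_encode(b, levels) for b in quantize(update, levels)]

def shuffle_and_aggregate(reports_per_client, levels, dim):
    """Trusted shuffler + server. The shuffler permutes the clients'
    reports per coordinate, unlinking each report from its sender;
    the server then sums the unary vectors into a bucket histogram
    and decodes the mean update. Note the shuffle does not change
    the sum, so model accuracy is unaffected."""
    n = len(reports_per_client)
    centers = np.linspace(-1.0, 1.0, levels)  # bucket -> value
    mean_update = np.zeros(dim)
    for j in range(dim):
        reports = [reports_per_client[i][j] for i in range(n)]
        rng.shuffle(reports)               # hides report origin
        counts = np.sum(reports, axis=0)   # histogram over buckets
        mean_update[j] = counts @ centers / n
    return mean_update

# Toy example: 3 clients, 4-dimensional updates, 8 quantization levels.
clients = [rng.uniform(-1, 1, size=4) for _ in range(3)]
agg = shuffle_and_aggregate([client_report(u, 8) for u in clients], 8, 4)
print("aggregated:", agg)
print("true mean: ", np.mean(clients, axis=0))
```

The intuition this sketch captures is that the server only learns per-bucket counts for each coordinate, never which client contributed which value, which is what blunts a source inference attack.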
Keywords
Source Inference Attack
Unary Encoding
Shuffling
Security and privacy
Machine learning
Federated Learning
CCS Concepts
• Security and privacy
• Computing methodologies → Machine learning

Keywords: Federated Learning, Source Inference Attack, Unary Encoding, Shuffling
Domains
Cryptography and Security [cs.CR]

Origin: Publisher files allowed on an open archive