Towards Mitigating Poisoning Attacks in Federated Learning
Abstract
Federated learning is an emerging machine learning paradigm that, driven by privacy goals, distributes training across multiple participants who collaboratively train a model without sharing their data. Nonetheless, it is vulnerable to a variety of attacks, such as data and model poisoning. In these attacks, adversaries attempt to inject a backdoor task into the model alongside its main task during the training phase. The injected backdoor is then exploited at inference time given a specific trigger. Many state-of-the-art mechanisms that rely on model update auditing have been proposed to mitigate poisoning attacks. We show in this paper that attackers are still capable of evading such detectors by crafting model updates that mimic benign ones. We therefore propose ARMOR, a novel mechanism that successfully detects these backdoor attacks in federated learning. We describe the design principles of ARMOR, which is based on generative adversarial networks, and present its evaluation results on a real-world dataset, demonstrating that it outperforms its competitors.
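For intuition, here is a minimal sketch of the setting the abstract describes: a federated-averaging round in which the server audits client model updates before aggregating them. This is not ARMOR's GAN-based mechanism; the `fedavg` and `audit_updates` functions, the median/MAD norm test, and all parameters are illustrative assumptions. The example also hints at the evasion issue raised above: a blatantly scaled poisoned update is easy to flag, while an update crafted to mimic benign statistics passes such an audit.

```python
import numpy as np

def fedavg(updates):
    """Average client model updates into a single global update (plain FedAvg)."""
    return np.mean(np.stack(updates), axis=0)

def audit_updates(updates, threshold=3.0):
    """Naive auditing defence (illustrative only): reject updates whose L2 norm
    deviates from the median norm by more than `threshold` median absolute deviations."""
    norms = np.array([np.linalg.norm(u) for u in updates])
    median = np.median(norms)
    mad = np.median(np.abs(norms - median)) + 1e-12  # avoid division by zero
    accepted = [u for u, n in zip(updates, norms) if abs(n - median) / mad <= threshold]
    return accepted if accepted else updates  # keep the round alive if everything is flagged

rng = np.random.default_rng(0)
dim = 1_000
benign = [rng.normal(0.0, 0.01, dim) for _ in range(9)]

# A crudely scaled poisoned update is easy to flag with a norm-based audit ...
blatant_attack = rng.normal(0.0, 0.01, dim) * 20
# ... whereas an update crafted to mimic benign statistics slips through it.
stealthy_attack = rng.normal(0.0, 0.01, dim)

filtered = audit_updates(benign + [blatant_attack, stealthy_attack])
global_update = fedavg(filtered)
print(f"accepted {len(filtered)} of {len(benign) + 2} updates")
```

Running this sketch shows the blatant update being rejected while the stealthy one is accepted, which is precisely the gap in update-auditing defences that motivates ARMOR.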
Domains
Computer Science [cs]
Origin: Files produced by the author(s)