Conference paper, Year: 2021

Towards Mitigating Poisoning Attacks in Federated Learning

Abstract

Federated learning is a new machine learning trend that, driven by privacy goals, distributes learning across multiple participants who train a model collaboratively without sharing their data. Nonetheless, it is vulnerable to a variety of attacks, such as data and model poisoning. In these attacks, adversaries attempt to inject a backdoor task into the model alongside its main task during the training phase; the injected backdoor is then exploited at inference time through a specific trigger. Many state-of-the-art mechanisms that rely on model-update auditing have been proposed to mitigate poisoning attacks. In this paper, we show that attackers are still capable of evading such detectors by crafting model updates that mimic benign ones. We then propose ARMOR, a novel mechanism that successfully detects these backdoor attacks in federated learning. We describe the design principles of ARMOR, which is based on generative adversarial networks, and we present ARMOR's evaluation results on a real-world dataset, demonstrating that it outperforms its competitors.
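The abstract's claim that auditing-based detectors can be evaded by updates that mimic benign ones can be illustrated with a minimal, hypothetical Python sketch. This is not the paper's code: the update dimension, the noise levels, and the simple norm-based auditor are all assumptions made for illustration.

# Minimal sketch (assumed setup, not the paper's code): a poisoned model
# update is rescaled so a simple norm-based auditor cannot distinguish it
# from benign updates. Shapes and thresholds are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
dim = 1000                      # flattened model-update dimension (assumption)

# Benign client updates: small perturbations around zero.
benign_updates = rng.normal(0.0, 0.01, size=(9, dim))

# Malicious update: a benign-looking gradient plus a boosted backdoor direction.
backdoor_dir = rng.normal(0.0, 1.0, size=dim)
malicious = rng.normal(0.0, 0.01, size=dim) + 0.5 * backdoor_dir

def norm_auditor(update, reference, k=2.0):
    # Flag an update whose L2 norm deviates too much from the benign mean.
    mean_norm = np.linalg.norm(reference, axis=1).mean()
    return np.linalg.norm(update) > k * mean_norm

print("raw malicious flagged:     ", norm_auditor(malicious, benign_updates))

# Evasion: rescale the malicious update to match the typical benign norm
# ("mimic benign ones"), trading attack strength for stealth.
target_norm = np.linalg.norm(benign_updates, axis=1).mean()
stealthy = malicious * (target_norm / np.linalg.norm(malicious))

print("stealthy malicious flagged:", norm_auditor(stealthy, benign_updates))

It is this kind of stealthy, benign-looking update that, per the abstract, motivates ARMOR's stronger, GAN-based detection of backdoor attacks.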
Main file

ComPAS_2021_ARMOR.pdf (256.77 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03815244, version 1 (14-10-2022)

Identifiers

  • HAL Id: hal-03815244, version 1

Cite

Fatima Elhattab, Rania Talbi, Sara Bouchenak, Vlad Nitu. Towards Mitigating Poisoning Attacks in Federated Learning. ComPAS'2021 : Parallélisme / Architecture / Système, MILC-Lyon, Jul 2021, Lyon, France. ⟨hal-03815244⟩
66 views
145 downloads
