Conference Papers, Year: 2021

Towards Mitigating Poisoning Attacks in Federated Learning

Abstract

Federated learning is an emerging machine learning paradigm that, guided by privacy goals, distributes training across multiple participants who collaboratively train a model without sharing their data. Nonetheless, it is vulnerable to a variety of attacks, such as data and model poisoning. In these attacks, adversaries attempt to inject a backdoor task into the model, alongside its main task, during the training phase. The injected backdoor is then exploited at inference time through a specific trigger. Many state-of-the-art mechanisms that rely on model update auditing have been proposed to mitigate poisoning attacks. We show in this paper that attackers are still capable of evading such detectors by crafting model updates that mimic benign ones. We propose ARMOR, a novel mechanism that successfully detects these backdoor attacks in federated learning. We describe the design principles of ARMOR, which is based on generative adversarial networks, and present ARMOR's evaluation results on a real-world dataset, which demonstrate that it outperforms its competitors.
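For illustration, the model-update-auditing defenses mentioned in the abstract typically compare each client's submitted update against the other clients' updates and discard outliers before aggregation. The sketch below is a minimal, hypothetical example of such a distance-based filter on top of federated averaging; it is not the ARMOR mechanism described in the paper, and the function names, the robust z-score test, and the threshold are illustrative assumptions.

# Minimal sketch of model-update auditing in one federated averaging round.
# NOT the ARMOR mechanism: a generic distance-to-median filter over client
# updates, of the kind the abstract refers to as "model update auditing".
import numpy as np

def flatten(update):
    """Concatenate a client's per-layer weight deltas into one vector."""
    return np.concatenate([layer.ravel() for layer in update])

def audit_and_aggregate(global_weights, client_updates, z_thresh=2.5):
    """Flag suspicious updates, then average the rest (FedAvg-style)."""
    vectors = np.stack([flatten(u) for u in client_updates])
    median = np.median(vectors, axis=0)
    # Distance of each client's update to the coordinate-wise median update.
    dists = np.linalg.norm(vectors - median, axis=1)
    # Robust z-score: flag clients whose distance deviates strongly from the
    # typical distance (possible poisoned updates). Threshold is illustrative.
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
    z = (dists - np.median(dists)) / (1.4826 * mad)
    accepted = [u for u, s in zip(client_updates, z) if s < z_thresh]
    if not accepted:  # fall back to keeping everything if all were flagged
        accepted = client_updates
    # FedAvg: apply the mean of the accepted updates to the global model.
    mean_update = [np.mean(layers, axis=0) for layers in zip(*accepted)]
    return [w + d for w, d in zip(global_weights, mean_update)], len(accepted)

if __name__ == "__main__":
    # Toy usage: 10 clients, one of which submits an out-of-distribution update.
    rng = np.random.default_rng(0)
    global_weights = [rng.normal(size=(4, 4)), rng.normal(size=4)]
    updates = [[rng.normal(scale=0.01, size=w.shape) for w in global_weights]
               for _ in range(9)]
    updates.append([rng.normal(scale=1.0, size=w.shape) for w in global_weights])
    new_weights, kept = audit_and_aggregate(global_weights, updates)
    print(f"kept {kept} of {len(updates)} client updates")

An adaptive attacker can evade this kind of filter by constraining its poisoned update to stay within the distribution of benign updates, which is precisely the evasion the abstract points out and the gap ARMOR aims to close.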
Main file: ComPAS_2021_ARMOR.pdf (256.77 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03815244, version 1 (14-10-2022)

Identifiers

  • HAL Id: hal-03815244, version 1

Cite

Fatima Elhattab, Rania Talbi, Sara Bouchenak, Vlad Nitu. Towards Mitigating Poisoning Attacks in Federated Learning. ComPAS’2021 : Parallélisme / Architecture / Système, MILC-Lyon, Jul 2021, Lyon, France. ⟨hal-03815244⟩