Conference paper, Year: 2022

Towards Mitigation of Edge-Case Backdoor Attacks in Federated Learning

Abstract

Federated Learning (FL) allows many data owners to train a joint model without sharing their training data. However, FL is vulnerable to poisoning attacks, in which malicious workers attempt to inject a backdoor task into the model at training time, alongside the main task the model is being trained for. Recent works show that FL is particularly sensitive to edge-case backdoors, which are introduced through data points with unusual, out-of-distribution features. Such attacks are among the most difficult to counter in today's robust FL systems. In this paper, we first implement two poisoning attacks and show that state-of-the-art robust FL systems, which are meant to counter malicious behavior, are actually vulnerable to this type of attack. We then propose a defense mechanism called ARMOR that uses Generative Adversarial Networks (GANs) to uncover edge-case backdoor attacks. Instead of monitoring the statistical shape of users' model updates, as most existing defense mechanisms do, ARMOR extracts data features from the model updates in order to identify backdoor patterns. In addition, ARMOR is the first FL defense mechanism against targeted poisoning attacks that is compatible with secure aggregation, thus providing better privacy than its competitors. Our extensive experimental evaluation with different datasets and neural network models shows that ARMOR counters edge-case backdoors and outperforms existing robust FL systems by +48% to +100% in terms of resilience to attacks, while providing equivalent model quality.
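To make the approach more concrete, the sketch below illustrates, in a minimal and hypothetical way, the general idea of extracting data features from a model update with a GAN-style generator: the suspect updated model is frozen, and a small generator is trained to synthesize inputs that the model confidently assigns to a chosen class, so that the synthesized samples can then be inspected for edge-case backdoor patterns. This is not ARMOR's actual algorithm, which the abstract does not detail; the PyTorch architecture, 28x28 input size, and hyperparameters are illustrative assumptions.

# Hypothetical sketch only; not ARMOR's published algorithm.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps a latent vector to a 1x28x28 image in [0, 1] (assumed input shape)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

def synthesize_class_samples(updated_model, target_class, latent_dim=64,
                             steps=500, batch_size=32, device="cpu"):
    """Train a generator against the frozen updated model and return samples
    that the model assigns to `target_class`, for manual or automated inspection."""
    updated_model.eval()
    for p in updated_model.parameters():
        p.requires_grad_(False)  # only the generator is trained

    gen = Generator(latent_dim).to(device)
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    labels = torch.full((batch_size,), target_class, dtype=torch.long, device=device)

    for _ in range(steps):
        z = torch.randn(batch_size, latent_dim, device=device)
        logits = updated_model(gen(z))
        # Push the generator towards inputs the model confidently labels as target_class.
        loss = F.cross_entropy(logits, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        z = torch.randn(batch_size, latent_dim, device=device)
        return gen(z)  # candidate "data features" to inspect for backdoor patterns

In this sketch, samples synthesized for each class from a suspect update could be compared against those obtained from a clean reference model; unusual, out-of-distribution patterns would hint at an edge-case backdoor.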
Main file: eurodw22-final26.pdf (685.02 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03834460, version 1 (29-10-2022)

License

Attribution (CC BY)

Identifiers

  • HAL Id: hal-03834460, version 1

Cite

Fatima Elhattab. Towards Mitigation of Edge-Case Backdoor Attacks in Federated Learning. 16th EuroSys Doctoral Workshop, Apr 2022, Rennes, France. ⟨hal-03834460⟩