Towards Mitigation of Edge-Case Backdoor Attacks in Federated Learning
Abstract
Federated Learning (FL) allows many data owners to train a joint model without sharing their training data. However, FL is vulnerable to poisoning attacks, where malicious workers attempt to inject a backdoor task into the model at training time, alongside the main task that the model is trained for. Recent works show that FL is particularly sensitive to edge-case backdoors, which are introduced through data points with unusual, out-of-distribution features. Such attacks are among the most difficult to counter in today's robust FL systems. In this paper, we first implement two poisoning attacks and show that state-of-the-art robust FL systems, which are meant to counter malicious behavior, are actually vulnerable to this type of attack. We then propose a defense mechanism called ARMOR that uses Generative Adversarial Networks to uncover edge-case backdoor attacks. Instead of monitoring the statistical shape of users' model updates, as most existing defense mechanisms do, ARMOR extracts data features from the model updates in order to identify backdoor patterns. In addition, ARMOR is the first FL defense mechanism against targeted poisoning attacks that is compatible with secure aggregation, thus providing better privacy than its competitors. Our extensive experimental evaluation with different datasets and neural network models shows that ARMOR is able to counter edge-case backdoors and outperforms existing robust FL systems by +48% to +100% in terms of resilience to attacks, while providing equivalent model quality.
Domains
Computer Science [cs]
Origin: Files produced by the author(s)