Security of Federated Learning: Attacks, Defensive Mechanisms, and Challenges
Abstract
Recently, a new Artificial Intelligence (AI) paradigm, known as Federated Learning (FL), has been introduced. It is a decentralized approach that applies Machine Learning (ML) on-device without risking the disclosure or tracing of sensitive and private information. Instead of training the global model on a centralized server (by aggregating the clients' private data), FL trains a globally shared model by aggregating only the clients' locally computed updates, while the clients' private data remains distributed across their devices. However, as secure as FL may seem, it alone does not provide the levels of privacy and security required by today's distributed systems. This paper seeks to provide a holistic view of FL's security concerns. We outline the most important attacks and vulnerabilities that are highly relevant to FL systems. Then, we present recently proposed defensive mechanisms. Finally, we highlight the outstanding challenges and discuss possible future research directions.
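To make the aggregation idea in the abstract concrete, the following is a minimal, illustrative sketch of FedAvg-style federated averaging (not the paper's implementation): each client computes an update on its own data and sends only model weights, which the server averages. The helper names (`local_update`, `federated_average`) and the toy least-squares objective are assumptions for illustration.

```python
# Minimal sketch of federated averaging (FedAvg-style), assuming all clients
# return weight vectors of the same shape. Names and the toy task are illustrative.
import numpy as np

def local_update(global_weights, client_data, lr=0.1):
    """One hypothetical local step: the client refines the global model on its
    own private data (a toy least-squares gradient) and returns only weights."""
    X, y = client_data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def federated_average(client_weights, client_sizes):
    """Server aggregates locally computed updates, weighted by client dataset
    size; the raw client data never leaves the devices."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0])
    clients = []
    for _ in range(3):  # three clients, each with its own private dataset
        X = rng.normal(size=(50, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=50)
        clients.append((X, y))

    global_w = np.zeros(2)
    for _ in range(20):  # communication rounds
        updates = [local_update(global_w, c) for c in clients]
        global_w = federated_average(updates, [len(c[1]) for c in clients])
    print("learned weights:", global_w)
```

Note that in this scheme only the weight updates are shared; the security issues surveyed in the paper arise because those updates can still be poisoned or used to infer information about the clients' data.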