Shielding Federated Learning Systems against Inference Attacks with ARM TrustZone
Abstract
Federated Learning (FL) opens new perspectives for training machine learning models while keeping personal data on the users' premises. Specifically, in FL, models are trained on the users' devices, and only model updates (i.e., gradients) are sent to a central server for aggregation purposes.
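As background, the sketch below illustrates one such FL round: clients compute gradients locally and only those gradients reach the server, which averages them into the global model. The linear model, MSE loss, and function names (`local_update`, `server_round`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def local_update(weights, data):
    """Client side: compute a gradient on local data.
    Placeholder objective: mean squared error on a linear model."""
    X, y = data
    preds = X @ weights
    return 2 * X.T @ (preds - y) / len(y)  # dMSE/dw

def server_round(global_weights, client_datasets, lr=0.01):
    """Server side: aggregate client gradients (never raw data)
    and apply one update to the global model."""
    grads = [local_update(global_weights, d) for d in client_datasets]
    avg_grad = np.mean(grads, axis=0)
    return global_weights - lr * avg_grad
```

The privacy risk addressed by the paper stems from exactly these shared gradients, which inference attacks can exploit to reconstruct private training data.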
However, the long list of inference attacks that leak private data from gradients, published in recent years, has emphasized the need to devise effective protection mechanisms in order to incentivize the adoption of FL at scale.
While there exist solutions to mitigate these attacks on the server side, little has been done to protect users from attacks performed on the client side.
In this context, the use of Trusted Execution Environments (TEEs) on the client side is among the most promising solutions.
However, existing frameworks (e.g., DarkneTZ) require statically placing a large portion of the machine learning model inside the TEE to effectively protect against complex attacks or combinations of attacks.
We present GradSec, a solution that protects only the sensitive layers of a machine learning model inside a TEE, either statically or dynamically, hence reducing both the Trusted Computing Base (TCB) size and the overall training time by up to 30% and 56%, respectively, compared to state-of-the-art competitors.
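To make the layer-partitioning idea concrete, here is a minimal sketch of routing only sensitive layers through a trusted world. Everything here is hypothetical: `tee_forward` stands in for a world switch into ARM TrustZone (e.g., via an OP-TEE trusted application), and the `SENSITIVE` set is an assumed configuration, not GradSec's actual code.

```python
# Hypothetical sketch: execute only layers marked sensitive inside the TEE.
SENSITIVE = {"conv1", "fc_out"}  # assumed: layers whose gradients leak most

def tee_forward(layer, x):
    # In a real system this would marshal `x` into secure memory and run
    # the layer inside the trusted world; here we simply call it.
    return layer(x)

def forward(model_layers, x):
    """Forward pass where only sensitive layers run in the TEE,
    keeping the Trusted Computing Base small."""
    for name, layer in model_layers:  # model_layers: list of (name, callable)
        x = tee_forward(layer, x) if name in SENSITIVE else layer(x)
    return x
```

Keeping the untrusted layers in the normal world is what shrinks the TCB and the training-time overhead relative to placing a large static portion of the model in the TEE.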