Heterogeneous SplitFed: Federated Learning with Trainable and Untrainable Clients
Abstract
With the advent of edge computing and distributed learning paradigms, integrating low-resource devices and embedded systems into these frameworks has become a focal point of numerous research initiatives. Their resource constraints, particularly in memory and computational capacity, are exacerbated by the increasingly complex neural network models deployed on such devices. SplitFed Learning (SFL) combines two prominent distributed machine learning strategies, federated learning (FL) and split learning (SL), enabling resource-limited clients to use a shared model while preserving their privacy. However, neither SFL, FL, nor SL accounts for devices on which training is difficult or impossible, such as devices based on Field Programmable Gate Arrays (FPGAs). These devices could nonetheless benefit from the federation and contribute their own data to it. In this paper, we therefore introduce Heterogeneous SplitFed Learning (HSFL), a new federated learning approach for deep neural networks that supports low-resource clients capable only of model inference and that copes with heterogeneous data, enabling such clients to participate actively. This enhances privacy while improving model performance in collaboration with clients that own more substantial computational resources. We demonstrate empirically, on image classification benchmarks with common deep learning models, that HSFL matches the performance of other FL approaches that accommodate heterogeneous data while including clients with limited resources.
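The core idea, low-resource clients that only run inference on their portion of a split model while the server trains the rest, can be illustrated with a minimal sketch. This is not the paper's implementation: the PyTorch model, split point, and training loop below are illustrative assumptions, not HSFL as specified by the authors.

```python
import torch
import torch.nn as nn

# Hypothetical split of a small CNN: a client-side feature extractor
# and a server-side classifier (split point chosen for illustration).
client_model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
)
server_model = nn.Sequential(
    nn.Flatten(), nn.Linear(16 * 16 * 16, 10),
)

# Untrainable client: its weights are frozen, so the device only ever
# performs forward passes, never backpropagation.
for p in client_model.parameters():
    p.requires_grad = False

optimizer = torch.optim.SGD(server_model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # Client side: inference only; the activations ("smashed data")
    # are what would be transmitted to the server.
    with torch.no_grad():
        activations = client_model(images)
    # Server side: trains its portion of the model on the received
    # activations, so the client's data never leaves the device.
    optimizer.zero_grad()
    loss = criterion(server_model(activations), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with a dummy CIFAR-10-sized batch (3x32x32 images, 10 classes).
loss = train_step(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)))
```

In a full SplitFed-style system, trainable clients would additionally update their client-side weights and a server would aggregate them; the sketch above only shows how an inference-only client can still contribute gradients to the server-side portion through its data.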
| Origin | Files produced by the author(s) |
|---|---|
| License | Copyright (All rights reserved) |
| Comment | © 2024 IEEE. This is the author’s version of the work. It is posted here for personal use, not for redistribution. The definitive version was published in the 2024 2nd International Conference on Federated Learning Technologies and Applications (FLTA). |