Other Scientific Publication, Year: 2024

Verifiable cross-silo federated learning

Abstract

Federated Learning (FL) is a widespread approach that enables training machine learning (ML) models with data distributed across multiple devices. In cross-silo FL, which often appears in domains like healthcare or finance, the number of participants is moderate, and each party typically represents a well-known organization. However, malicious agents may still attempt to disrupt the training procedure in order to obtain certain benefits, for example a biased result or a reduced computational load. While one can easily detect a malicious agent when the data used for training is public, the problem becomes much more acute when the privacy of the training dataset must be maintained. To address this issue, there has recently been growing interest in developing verifiable protocols, where one can check that parties do not deviate from the training procedure and perform computations correctly. In this paper, we conduct a comprehensive analysis of such protocols and fit them into a taxonomy. We compare the efficiency and threat models of the various approaches. We then identify research gaps and discuss potential directions for future scientific work.
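To make the setting concrete, below is a minimal sketch of one cross-silo training round, written as an illustration only and not taken from the paper: a FedAvg-style weighted average of local updates on a toy linear model. The function names (local_update, federated_round), the learning rate, and the synthetic data are assumptions introduced for this example.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One silo's local training: plain gradient descent on a linear least-squares model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of the squared loss
        w -= lr * grad
    return w

def federated_round(global_w, silos):
    """One round: every silo trains locally, the server averages the returned
    updates weighted by local dataset size (FedAvg-style aggregation)."""
    updates, sizes = [], []
    for X, y in silos:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

# Toy example: three silos, each holding private linear-regression data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
silos = []
for n in (40, 60, 50):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.05 * rng.normal(size=n)
    silos.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, silos)
print("estimated weights:", w)  # should end up close to [2, -1]
```

In this plain protocol the server simply trusts each returned update: a dishonest silo could send back an arbitrary vector, and because the raw data stays private, the deviation is not directly detectable. Closing that gap, by letting participants check that each step of the procedure was computed correctly, is exactly what the verifiable protocols surveyed in the paper aim to do.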
Main file
ProtectIT24.pdf (186.01 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04612305, version 1 (14-06-2024)

Identifiers

  • HAL Id: hal-04612305, version 1

Cite

Aleksei Korneev, Jan Ramon. Verifiable cross-silo federated learning. 2024. ⟨hal-04612305⟩
