Sample Optimality and All-for-all Strategies in Personalized Federated and Collaborative Learning - Archive ouverte HAL
Preprint / Working Paper, Year: 2022

Sample Optimality and All-for-all Strategies in Personalized Federated and Collaborative Learning

Abstract

In personalized Federated Learning, each member of a potentially large set of agents aims to train a model minimizing its loss function averaged over its local data distribution. We study this problem through the lens of stochastic optimization. Specifically, we introduce information-theoretic lower bounds on the number of samples required from all agents to approximately minimize the generalization error of a fixed agent. We then provide strategies matching these lower bounds, in the all-for-one and all-for-all settings, where one or all agents, respectively, aim to minimize their own local function. Our strategies are based on a gradient filtering approach: given prior knowledge of some notion of distance or discrepancy between local data distributions or functions, an agent filters and aggregates the stochastic gradients received from other agents, in order to achieve an optimal bias-variance trade-off.
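To make the gradient-filtering idea concrete, here is a minimal, hypothetical sketch in Python: a target agent averages the stochastic gradients of the other agents only when a known discrepancy measure falls below a threshold, trading the bias introduced by dissimilar agents against the variance reduction from using more samples. The threshold, the discrepancy values, and the quadratic toy losses are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def filtered_aggregate(grads, discrepancies, threshold):
    """Average the stochastic gradients of agents whose discrepancy to the
    target agent is at most `threshold`; the target agent itself has
    discrepancy 0 and is therefore always included."""
    mask = discrepancies <= threshold
    return grads[mask].mean(axis=0)

# Toy run for a single target agent (agent 0) with quadratic local losses
# f_j(x) = 0.5 * (x - m_j)^2, whose stochastic gradient is (x - m_j) + noise.
rng = np.random.default_rng(0)
means = np.array([0.0, 0.1, -0.1, 3.0, 3.2])   # agents 3 and 4 are dissimilar
discrepancies = np.abs(means - means[0])        # assumed prior knowledge
x, lr = 5.0, 0.1
for _ in range(200):
    noisy_grads = (x - means) + 0.5 * rng.standard_normal(means.shape)
    x -= lr * filtered_aggregate(noisy_grads, discrepancies, threshold=0.5)
print(f"estimate for agent 0: {x:.3f} (target 0.0)")
```

With this choice of threshold, only the three similar agents contribute, so the iterate converges near agent 0's optimum with lower variance than using its own gradients alone; raising the threshold would include the dissimilar agents and bias the estimate.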
Main file
personalized_arxiv.pdf (740.3 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03550407, version 1 (01-02-2022)

Identifiers

  • HAL Id: hal-03550407, version 1

Cite

Mathieu Even, Laurent Massoulié, Kévin Scaman. Sample Optimality and All-for-all Strategies in Personalized Federated and Collaborative Learning. 2022. ⟨hal-03550407⟩
45 Views
34 Downloads
