Conference paper. Year: 2020

Sparse Asynchronous Distributed Learning

Abstract

In this paper, we propose an asynchronous distributed learning algorithm in which parameter updates are performed by worker machines simultaneously, each on a local sub-part of the training data. These workers send their updates to a master machine that coordinates all received parameters in order to minimize a global empirical loss. The communication exchanges between the workers and the master machine are generally the bottleneck of most asynchronous scenarios. We propose to reduce this communication cost by a sparsification mechanism which, for each worker machine, consists of randomly and independently choosing some local update entries that will not be transmitted to the master. We provably show that if the probability of choosing such local entries is high and the global loss is strongly convex, then the whole process is guaranteed to converge to the minimum of the loss. In the case where this probability is low, we empirically show on three datasets that our approach converges to the minimum of the loss in most cases, with a better convergence rate and far fewer parameter exchanges between the master and the worker machines than without our sparsification technique.
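To make the sparsification mechanism concrete, a minimal sketch is given below. It is not the authors' code: it assumes a synchronous loop over workers (the asynchrony is omitted for brevity), a least-squares loss, and illustrative names such as p_keep, local_gradient, and sparsify that do not come from the paper. Each worker draws an independent random mask over its update entries, and only the surviving coordinates are transmitted to and applied by the master.

```python
# Hypothetical sketch of the sparsified worker-to-master exchange.
import numpy as np

def local_gradient(w, X, y):
    """Least-squares gradient on one worker's local data shard."""
    return X.T @ (X @ w - y) / len(y)

def sparsify(update, p_keep, rng):
    """Randomly and independently keep each entry with probability p_keep."""
    mask = rng.random(update.shape) < p_keep
    return mask, update[mask]          # only these coordinates leave the worker

rng = np.random.default_rng(0)
d, n_workers, lr, p_keep = 10, 4, 0.1, 0.8
shards = [(rng.normal(size=(50, d)), rng.normal(size=50)) for _ in range(n_workers)]
w = np.zeros(d)                        # parameter vector held by the master

for step in range(200):
    for X, y in shards:                # asynchrony omitted in this sketch
        mask, values = sparsify(local_gradient(w, X, y), p_keep, rng)
        w[mask] -= lr * values         # master applies only the received entries
```

Lowering p_keep reduces the number of coordinates exchanged per update at the cost of noisier steps, which is the communication/convergence trade-off studied in the paper.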
Main file: main.pdf (414.23 Ko)
Origin: files produced by the author(s)

Dates and versions

hal-04763780, version 1 (02-11-2024)

Identifiers

Cite

Dmitry Grischenko, Franck Iutzeler, Massih-Reza Amini. Sparse Asynchronous Distributed Learning. Neural Information Processing - 27th International Conference, ICONIP, Nov 2020, Bangkok, Thailand. pp.429-438, ⟨10.1007/978-3-030-63823-8_50⟩. ⟨hal-04763780⟩