Preprint / Working Paper. Year: 2024

WASH: Train your Ensemble with Communication-Efficient Weight Shuffling, then Average

Abstract

The performance of deep neural networks is enhanced by ensemble methods, which average the output of several models. However, this comes at an increased inference cost. Weight averaging methods aim to balance the generalization of ensembling with the inference speed of a single model by averaging the parameters of an ensemble of models. Yet, naive averaging results in poor performance, as models converge to different loss basins, and aligning the models to improve the performance of the average is challenging. Alternatively, inspired by distributed training, methods like DART and PAPA have been proposed to train several models in parallel such that they end up in the same basin, resulting in good averaging accuracy. However, these methods either compromise ensembling accuracy or demand significant communication between models during training. In this paper, we introduce WASH, a novel distributed method for training model ensembles for weight averaging that achieves state-of-the-art image classification accuracy. WASH maintains models within the same basin by randomly shuffling a small percentage of weights between models during training, resulting in diverse models and lower communication costs compared to standard parameter averaging methods.
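To make the high-level idea concrete, here is a minimal PyTorch-style sketch of the shuffle-then-average pattern described above. It is an illustration under assumptions, not the authors' implementation: the function names (wash_shuffle, average_models), the per-coordinate permutation rule, and the shuffle probability p are placeholders for the paper's actual procedure and hyperparameters.

import copy
import torch

def wash_shuffle(models, p=0.01):
    # For a random fraction p of coordinates, permute that weight across the
    # ensemble, so models stay close (same basin) while remaining diverse.
    with torch.no_grad():
        for tensors in zip(*(m.parameters() for m in models)):
            stacked = torch.stack(tensors)                         # (num_models, *shape)
            mask = torch.rand_like(tensors[0]) < p                 # coordinates to shuffle
            perm = torch.argsort(torch.rand_like(stacked), dim=0)  # independent permutation per coordinate
            shuffled = torch.gather(stacked, 0, perm)
            for i, t in enumerate(tensors):
                t[mask] = shuffled[i][mask]

def average_models(models):
    # Final step: uniformly average the ensemble's parameters into one model.
    avg = copy.deepcopy(models[0])
    with torch.no_grad():
        for name, param in avg.named_parameters():
            param.copy_(torch.stack(
                [dict(m.named_parameters())[name] for m in models]).mean(dim=0))
    return avg

In this sketch, each worker would take its usual optimizer step on its own data and then call wash_shuffle(models, p), so only a small fraction of weights is exchanged per step; after training, average_models(models) produces the single model used at inference.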
Main file
hal.pdf (528.08 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04588075, version 1 (25-05-2024)

Identifiers

HAL Id: hal-04588075

Cite

Louis Fournier, Adel Nabli, Masih Aminbeidokhti, Marco Pedersoli, Eugene Belilovsky, et al. WASH: Train your Ensemble with Communication-Efficient Weight Shuffling, then Average. 2024. ⟨hal-04588075⟩
71 Views
63 Downloads

