Conference paper. Proceedings of the 37th International Conference on Machine Learning. Year: 2020

Decoupled Greedy Learning of CNNs

Abstract

A commonly cited inefficiency of neural network training by back-propagation is the update locking problem: each layer must wait for the signal to propagate through the full network before updating. Several alternatives that can alleviate this issue have been proposed. In this context, we consider a simpler, but more effective, substitute that uses minimal feedback, which we call Decoupled Greedy Learning (DGL). It is based on a greedy relaxation of the joint training objective, recently shown to be effective in the context of Convolutional Neural Networks (CNNs) on large-scale image classification. We consider an optimization of this objective that permits us to decouple the layer training, allowing layers or modules in the network to be trained in parallel, with a potentially linear speedup in the number of layers. Using a replay buffer, we show that this approach can be extended to asynchronous settings, where modules can operate with possibly large communication delays. We show theoretically and empirically that this approach converges, and we empirically find that it can lead to better generalization than sequential greedy optimization. We demonstrate the effectiveness of DGL against alternative approaches on the CIFAR-10 dataset and on the large-scale ImageNet dataset.
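
The mechanism the abstract describes lends itself to a compact illustration: each block is paired with a small auxiliary classifier and its own optimizer, and only a detached activation is passed forward, so no block waits on a global backward pass through the full network. The following is a minimal, synchronous PyTorch sketch of that idea; the GreedyBlock and train_step names are illustrative assumptions, this is not the authors' released implementation, and the asynchronous replay-buffer variant is omitted.

    # Minimal sketch of decoupled greedy layer-wise training (hypothetical names,
    # not the reference implementation). Each block trains against its own local
    # auxiliary classification loss, so blocks never wait on a global backward pass.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GreedyBlock(nn.Module):
        def __init__(self, in_ch, out_ch, num_classes):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            # Auxiliary head providing the local (greedy) classification loss.
            self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(out_ch, num_classes))

        def forward(self, x):
            z = self.body(x)
            return z, self.head(z)

    def train_step(blocks, optimizers, x, y):
        """One decoupled update: every block optimizes its local loss and passes
        a detached representation forward, so no cross-block gradients flow."""
        h = x
        for block, opt in zip(blocks, optimizers):
            z, logits = block(h)
            loss = F.cross_entropy(logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
            h = z.detach()  # stop gradients: the next block sees a constant input
        return loss.item()  # local loss of the last block

    if __name__ == "__main__":
        blocks = nn.ModuleList([GreedyBlock(3, 32, 10), GreedyBlock(32, 64, 10)])
        optimizers = [torch.optim.SGD(b.parameters(), lr=0.1) for b in blocks]
        x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
        print(train_step(blocks, optimizers, x, y))

In the asynchronous setting mentioned in the abstract, each block would instead write its forward outputs to a replay buffer that the next block consumes at its own pace, tolerating large communication delays.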

Dates and versions

hal-02945327, version 1 (22-09-2020)

Cite

Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon. Decoupled Greedy Learning of CNNs. International Conference on Machine Learning, Jul 2020, Vienna (virtual), Austria. pp.5368-5377. ⟨hal-02945327⟩