Journal article in Neurocomputing, 2022

Spherical Perspective on Learning with Normalization Layers

Abstract

Normalization Layers (NLs) are widely used in modern deep-learning architectures. Despite their apparent simplicity, their effect on optimization is not yet fully understood. This paper introduces a spherical framework to study the optimization of neural networks with NLs from a geometric perspective. Concretely, the radial invariance of groups of parameters, such as filters for convolutional neural networks, allows the optimization steps to be translated onto the L2 unit hypersphere. This formulation and the associated geometric interpretation shed new light on the training dynamics. First, the first expression of the effective learning rate of Adam is derived. Second, it is shown within this framework that, in the presence of NLs, performing Stochastic Gradient Descent (SGD) alone is actually equivalent to a variant of Adam constrained to the unit hypersphere. Finally, the analysis outlines phenomena that previous variants of Adam act on, and their importance in the optimization process is validated experimentally.
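To make the radial invariance mentioned in the abstract concrete, below is a minimal PyTorch sketch (not taken from the paper's code; layer sizes and the scaling factor are arbitrary illustrative choices). It shows that rescaling the filters of a convolution followed by BatchNorm leaves the output unchanged, so only the direction of the filter weights, i.e. a point on the unit hypersphere, matters.

```python
import torch
import torch.nn as nn

# Radial invariance: for a convolution followed by BatchNorm, multiplying
# the filter weights by any positive constant does not change the output,
# since BatchNorm divides by the (equally rescaled) batch standard deviation.
torch.manual_seed(0)
conv = nn.Conv2d(3, 8, kernel_size=3, bias=False)
bn = nn.BatchNorm2d(8)
bn.train()  # normalize with batch statistics, as during training

x = torch.randn(4, 3, 16, 16)
y_ref = bn(conv(x))

with torch.no_grad():
    conv.weight.mul_(10.0)  # rescale all filters by an arbitrary lambda > 0

y_scaled = bn(conv(x))
print(torch.allclose(y_ref, y_scaled, atol=1e-4))  # True, up to numerics
```

A known consequence of this invariance is that the gradient with respect to such a group of parameters scales inversely with the group's norm, so an SGD step of size η on w behaves like a step of effective size η/‖w‖² on the unit hypersphere; the paper derives the analogous, previously unknown, expression for Adam.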
Main file: 2006.13382.pdf (13.87 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04496492, version 1 (8 March 2024)

Identifiers

HAL Id: hal-04496492
DOI: 10.1016/j.neucom.2022.02.021

Cite

Simon Roburin, Yann de Mont-Marin, Andrei Bursuc, Renaud Marlet, Patrick Pérez, et al. Spherical Perspective on Learning with Normalization Layers. Neurocomputing, 2022, 487, pp. 66-74. ⟨10.1016/j.neucom.2022.02.021⟩. ⟨hal-04496492⟩