Spherical Perspective on Learning with Normalization Layers
Abstract
Normalization Layers (NLs) are widely used in modern deep-learning architectures. Despite
their apparent simplicity, their effect on optimization is not yet fully understood. This paper
introduces a spherical framework to study the optimization of neural networks with NLs from
a geometric perspective. Concretely, the radial invariance of groups of parameters, such as
filters for convolutional neural networks, makes it possible to translate optimization steps onto the L2 unit
hypersphere. This formulation and the associated geometric interpretation shed new light on the
training dynamics. Firstly, the first expression of the effective learning rate of Adam is derived. Then,
within this framework, it is demonstrated that, in the presence of NLs, performing Stochastic Gradient
Descent (SGD) alone is actually equivalent to a variant of Adam constrained to the unit hypersphere.
Finally, the analysis outlines phenomena that previous variants of Adam act on, and their importance in
the optimization process is experimentally validated.
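To make the radial-invariance property concrete, here is a minimal NumPy sketch (an illustration added here, not code from the paper): rescaling a whole group of parameters feeding a normalization layer leaves the normalized output unchanged, so only the direction of the group, i.e. its projection onto the unit hypersphere, matters.

```python
import numpy as np

# Minimal sketch (assumed setup, not from the paper): one neuron followed by a
# batch-norm-style normalization. Scaling its weight vector by any r > 0 does
# not change the normalized output, which is the radial invariance that lets
# optimization steps be studied on the L2 unit hypersphere.

rng = np.random.default_rng(0)
x = rng.normal(size=(128, 16))   # a batch of 128 inputs with 16 features
w = rng.normal(size=16)          # weights of one radially invariant group

def normalize(z, eps=1e-5):
    # Zero-mean, unit-variance normalization over the batch dimension
    return (z - z.mean()) / np.sqrt(z.var() + eps)

out = normalize(x @ w)
out_scaled = normalize(x @ (3.7 * w))   # rescale the whole parameter group

print(np.allclose(out, out_scaled))     # True: the output is radially invariant
```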