Gaussian Pre-Activations in Neural Networks: Myth or Reality? - Archive ouverte HAL
Preprint / Working paper. Year: 2023

Gaussian Pre-Activations in Neural Networks: Myth or Reality?

Abstract

The study of feature propagation at initialization in neural networks lies at the root of numerous initialization designs. A very common assumption in the field is that the pre-activations are Gaussian. Although this convenient Gaussian hypothesis can be justified when the number of neurons per layer tends to infinity, it is challenged by both theoretical and experimental works for finite-width neural networks. Our major contribution is to construct a family of pairs of activation functions and initialization distributions that ensure that the pre-activations remain Gaussian throughout the network's depth, even in narrow neural networks. In the process, we discover a set of constraints that a neural network should fulfill to ensure Gaussian pre-activations. Additionally, we provide a critical review of the claims of the Edge of Chaos line of work and build an exact Edge of Chaos analysis. We also propose a unified view on pre-activation propagation, encompassing the framework of several well-known initialization procedures. Finally, our work provides a principled framework for answering the much-debated question: is it desirable to initialize the training of a neural network whose pre-activations are ensured to be Gaussian?
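The abstract contrasts the infinite-width justification of the Gaussian hypothesis with its breakdown at finite width. As a purely illustrative sketch (not the construction proposed in the paper), the following Python snippet estimates the distribution of a last-layer pre-activation in a narrow, randomly initialized network; the width, depth, tanh activation, and 1/fan_in weight variance are arbitrary choices made only to make the check concrete.

import numpy as np
from scipy import stats

def preactivation_samples(widths, n_samples=10000, seed=0):
    """Propagate standard-Gaussian inputs through a freshly initialized
    tanh MLP and return the pre-activations of one unit in the last layer."""
    rng = np.random.default_rng(seed)
    h = rng.standard_normal((n_samples, widths[0]))
    for d_in, d_out in zip(widths[:-1], widths[1:]):
        # i.i.d. Gaussian weights with variance 1/fan_in (illustrative choice)
        W = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)
        z = h @ W          # pre-activations of the current layer
        h = np.tanh(z)     # post-activations fed to the next layer
    return z[:, 0]         # samples of a single pre-activation unit

# Narrow, deep network: the Gaussian hypothesis is expected to degrade with depth.
z = preactivation_samples(widths=[4] * 10)
print(stats.kstest(z / z.std(), "norm"))  # Kolmogorov-Smirnov test vs N(0, 1)

Under this setup, a small p-value for a narrow network (and a larger one when the widths are increased) illustrates the finite-width departure from Gaussianity that motivates the paper's construction.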


Main file
2205.12379v3.pdf (3.12 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03933169 , version 1 (10-01-2023)
hal-03933169 , version 2 (03-03-2023)

Licence

Attribution

Identifiers

HAL Id: hal-03933169

Cite

Pierre Wolinski, Julyan Arbel. Gaussian Pre-Activations in Neural Networks: Myth or Reality?. 2023. ⟨hal-03933169v2⟩
