Growing Neural Networks have Flat Optima and Generalize Better
Abstract
In this work, we study the loss landscape of growing neural networks and show that they reach flatter minima than the same networks trained with all of their parameters from random initialization. We then evaluate and compare the generalization properties of growing and non-growing models using, alongside standard measures such as training loss and validation accuracy, a less common approximation of the population risk. Our results suggest that growing models generalize better, which supports the side of the ongoing debate arguing that flatness of the loss landscape positively correlates with generalization. We validate our approach on a wide range of binary Natural Language Processing tasks with large state-of-the-art deep learning models. Our theoretical and experimental results open new perspectives for studying these questions through the prism of growing neural networks and risk approximations.
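The abstract does not specify how flatness is measured; a common proxy is the average loss increase under small random perturbations of the trained weights, where smaller increases indicate a flatter minimum. The sketch below illustrates that idea only; the function name, the perturbation radius, and the toy model are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

def sharpness_estimate(model, loss_fn, data, targets, radius=1e-3, n_samples=10):
    """Illustrative flatness proxy: average loss increase under small random
    weight perturbations. Lower values suggest a flatter minimum.
    (Assumption: this is NOT necessarily the measure used in the paper.)"""
    base_loss = loss_fn(model(data), targets).item()
    params = [p for p in model.parameters() if p.requires_grad]
    increases = []
    for _ in range(n_samples):
        originals = [p.detach().clone() for p in params]
        with torch.no_grad():
            # perturb each parameter tensor with isotropic Gaussian noise
            for p in params:
                p.add_(radius * torch.randn_like(p))
            perturbed_loss = loss_fn(model(data), targets).item()
            # restore the original weights before the next sample
            for p, orig in zip(params, originals):
                p.copy_(orig)
        increases.append(perturbed_loss - base_loss)
    return sum(increases) / len(increases)

# Hypothetical usage on a small binary classifier with random data
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
x, y = torch.randn(64, 16), torch.randint(0, 2, (64,))
print(sharpness_estimate(model, nn.CrossEntropyLoss(), x, y))
```

Under this kind of proxy, a grown model and a fully parameterized baseline trained to similar training loss can be compared by their sharpness estimates at convergence.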
Domains
Computer Science [cs]

Origin | Files produced by the author(s)
---|---
Licence |