Preprint / Working paper, Year: 2024

Growing Neural Networks have Flat Optima and Generalize Better

Paul Caillon
  • Role: Author
  • PersonId: 1096067

Abstract

In this work, we study the loss landscape of growing neural networks and show that they reach flatter minima than networks trained with all of their parameters from random initialization. We then evaluate and compare the generalization properties of growing and non-growing models using, alongside standard measures such as the training loss and the validation accuracy, an uncommon approximation of the population risk. Our results suggest that growing models generalize better, which, in the ongoing debate about the role of flatness, supports the argument that flatness of the loss positively correlates with generalization. We validate our approach on a wide range of binary Natural Language Processing tasks with large state-of-the-art deep learning models. Our theoretical and experimental results open new perspectives for studying these questions through the prism of growing neural networks and risk approximations.
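
As an illustration only (this is not the authors' code, and the paper's actual flatness measure may differ), the following minimal Python sketch shows one common way to probe the flatness of a trained minimum: measuring how much the loss rises when the weights are perturbed by small random noise, where a smaller increase suggests a flatter minimum. It assumes PyTorch and a user-provided model, loss function, and data loader yielding (inputs, targets) pairs.

    # Illustrative sketch: flatness probe via random weight perturbations.
    # Assumes a mean-reduced loss function and a (inputs, targets) data loader.
    import copy
    import torch

    def sharpness_estimate(model, loss_fn, data_loader, sigma=1e-3,
                           n_samples=10, device="cpu"):
        """Average loss increase under Gaussian weight perturbations.

        Lower values indicate a flatter minimum around the current weights.
        """
        model = model.to(device).eval()

        def mean_loss(m):
            total, count = 0.0, 0
            with torch.no_grad():
                for x, y in data_loader:
                    x, y = x.to(device), y.to(device)
                    total += loss_fn(m(x), y).item() * len(y)
                    count += len(y)
            return total / count

        base_loss = mean_loss(model)
        increases = []
        for _ in range(n_samples):
            perturbed = copy.deepcopy(model)
            with torch.no_grad():
                for p in perturbed.parameters():
                    # Perturb each parameter with small isotropic noise.
                    p.add_(sigma * torch.randn_like(p))
            increases.append(mean_loss(perturbed) - base_loss)
        return sum(increases) / n_samples

Comparing this estimate for a grown model and a model trained from scratch on the same data would give a rough, perturbation-based view of the relative flatness of their minima.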
Main file

paper.pdf (498.49 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04697428, version 1 (13-09-2024)

Identifiers

  • HAL Id: hal-04697428, version 1

Cite

Paul Caillon, Christophe Cerisara. Growing Neural Networks have Flat Optima and Generalize Better. 2024. ⟨hal-04697428⟩