Growing Neural Networks Achieve Flatter Minima
Abstract
Deep neural networks of the sizes commonly encountered in practice have been proven to converge towards a global minimum of the training loss. The flatness of the loss surface in a neighborhood of such minima is often linked with better generalization performance. In this paper, we present a new growing neural network model, in which neurons are incrementally added throughout the training phase. We study the characteristics of the minima reached by such a network compared to those obtained with standard feedforward neural networks. The results of this analysis show that a neural network grown with our procedure converges towards a flatter minimum than a standard neural network with the same number of parameters trained from scratch. Furthermore, our results confirm the link between flatter minima and better generalization performance, as the grown models tend to outperform the standard ones. We validate this approach both with small neural networks and with large deep learning models that are state-of-the-art in Natural Language Processing tasks.
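To make the idea of "growing" a network concrete, the sketch below widens a hidden layer during training by adding neurons while preserving the weights already learned. This is only a minimal illustration under our own assumptions, not the procedure proposed in the paper: the `widen` helper, the growth schedule, and the near-zero initialization of the new neurons are all hypothetical choices.

```python
"""Minimal sketch (not the authors' procedure): grow a hidden layer
mid-training by adding neurons and copying over the existing weights."""
import torch
import torch.nn as nn

def widen(model, extra):
    """Return a copy of `model` whose hidden layer has `extra` more neurons.
    Learned weights are kept; new neurons start with small random weights."""
    old_h, old_out = model[0], model[2]
    new_h = nn.Linear(old_h.in_features, old_h.out_features + extra)
    new_out = nn.Linear(old_h.out_features + extra, old_out.out_features)
    with torch.no_grad():
        # keep the parameters of the existing neurons
        new_h.weight[: old_h.out_features] = old_h.weight
        new_h.bias[: old_h.out_features] = old_h.bias
        new_out.weight[:, : old_h.out_features] = old_out.weight
        new_out.bias.copy_(old_out.bias)
        # scale the new connections down so the function barely changes
        new_h.weight[old_h.out_features:].mul_(0.01)
        new_out.weight[:, old_h.out_features:].mul_(0.01)
    return nn.Sequential(new_h, nn.ReLU(), new_out)

# toy regression data
x = torch.randn(256, 10)
y = x.sum(dim=1, keepdim=True)

model = nn.Sequential(nn.Linear(10, 4), nn.ReLU(), nn.Linear(4, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.05)

for step in range(300):
    if step in (100, 200):  # hypothetical growth schedule
        model = widen(model, extra=4)
        opt = torch.optim.SGD(model.parameters(), lr=0.05)
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final hidden width:", model[0].out_features, "loss:", loss.item())
```

Starting the new connections near zero is one common way to keep the grown network close to the function computed before the growth step, so training resumes smoothly from the current point on the loss surface.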
Domains
Artificial Intelligence [cs.AI]