Preprint, Working Paper. Year: 2024

Geometry-induced Implicit Regularization in Deep ReLU Neural Networks

Abstract

It is well known that neural networks with many more parameters than training examples do not overfit. Implicit regularization phenomena, which are still not well understood, occur during optimization: "good" networks are favored. Thus, the number of parameters is not an adequate measure of complexity if we consider not all possible networks, but only the "good" ones.

To better understand which networks are favored during optimization, we study the geometry of the output set as the parameters vary. When the inputs are fixed, we prove that the dimension of this set changes and that the local dimension, called the batch functional dimension, is almost surely determined by the activation patterns in the hidden layers. We prove that the batch functional dimension is invariant to the symmetries of the network parameterization: neuron permutations and positive rescalings. Empirically, we establish that the batch functional dimension decreases during optimization, so that optimization leads to parameters with low batch functional dimension. We call this phenomenon geometry-induced implicit regularization.

The batch functional dimension depends on both the network parameters and the inputs. To understand the impact of the inputs, we study, for fixed parameters, the largest attainable batch functional dimension as the inputs vary. We prove that this quantity, called the computable full functional dimension, is also invariant to the symmetries of the network parameterization and is determined by the achievable activation patterns. We also provide a sampling theorem showing fast convergence of the estimate of the computable full functional dimension for a random input of increasing size. Empirically, we find that the computable full functional dimension remains close to the number of parameters, which is related to the notion of local identifiability. This differs from the observed values of the batch functional dimension computed on training and test inputs; the latter are influenced by geometry-induced implicit regularization.
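Concretely, the batch functional dimension discussed above is the rank of the Jacobian of the map from the network parameters to the stacked outputs on a fixed batch of inputs. The sketch below shows one way to estimate it numerically; it is an illustration under assumptions, not the authors' code, and the architecture, layer sizes, batch size, and rank tolerance are all arbitrary choices made for the demo.

```python
# Minimal sketch (not the authors' code): estimating the batch functional
# dimension as the numerical rank of the Jacobian of the
# parameters-to-outputs map, for a fixed batch of inputs.
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def mlp(params, x):
    # A small fully connected ReLU network with a linear output layer.
    (W1, b1), (W2, b2), (W3, b3) = params
    h1 = jax.nn.relu(W1 @ x + b1)
    h2 = jax.nn.relu(W2 @ h1 + b2)
    return W3 @ h2 + b3

def batch_functional_dimension(params, X, tol=1e-6):
    # Rank of the Jacobian of theta -> (f_theta(x_1), ..., f_theta(x_n))
    # at the given parameters: the local dimension of the output set as
    # the parameters vary, for the fixed inputs X.
    theta0, unravel = ravel_pytree(params)
    def flat_outputs(theta):
        p = unravel(theta)
        return jax.vmap(lambda x: mlp(p, x))(X).reshape(-1)
    J = jax.jacobian(flat_outputs)(theta0)
    s = jnp.linalg.svd(J, compute_uv=False)
    return int(jnp.sum(s > tol * s[0]))  # numerical rank via SVD

key = jax.random.PRNGKey(0)
k1, k2, k3, kx = jax.random.split(key, 4)
d_in, width, d_out, n = 3, 8, 2, 20  # sizes chosen arbitrarily
params = [
    (jax.random.normal(k1, (width, d_in)), jnp.zeros(width)),
    (jax.random.normal(k2, (width, width)), jnp.zeros(width)),
    (jax.random.normal(k3, (d_out, width)), jnp.zeros(d_out)),
]
X = jax.random.normal(kx, (n, d_in))
print("batch functional dimension:", batch_functional_dimension(params, X))

# Positive-rescaling symmetry: multiplying a hidden layer's incoming
# weights and biases by c > 0 and its outgoing weights by 1/c leaves the
# realized function, hence the batch functional dimension, unchanged.
c = 2.0
(W1, b1), (W2, b2), (W3, b3) = params
rescaled = [(c * W1, c * b1), (W2 / c, b2), (W3, b3)]
print("after positive rescaling:", batch_functional_dimension(rescaled, X))
```

Because ReLU is positively homogeneous, the rescaling in the last lines leaves the realized function unchanged, so the two printed ranks should coincide, in line with the invariance result stated in the abstract.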
Main file: depot_HAL.pdf (1.18 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04452356, version 1 (12-02-2024)


Cite

Joachim Bona-Pellissier, François Malgouyres, François Bachoc. Geometry-induced Implicit Regularization in Deep ReLU Neural Networks. 2024. ⟨hal-04452356⟩