Local Identifiability of Deep ReLU Neural Networks: the Theory - Archive ouverte HAL
Preprint, Working Paper Year: 2022

Local Identifiability of Deep ReLU Neural Networks: the Theory

Abstract

Is a sample rich enough to determine, at least locally, the parameters of a neural network? To answer this question, we introduce a new local parameterization of a given deep ReLU neural network by fixing the values of some of its weights. This allows us to define local lifting operators whose inverses are charts of a smooth manifold of a high-dimensional space. The function implemented by the deep ReLU neural network composes the local lifting with a linear operator which depends on the sample. From this convenient representation we derive a geometrical necessary and sufficient condition of local identifiability. Looking at tangent spaces, the geometrical condition provides: (1) a sharp and testable necessary condition of identifiability, and (2) a sharp and testable sufficient condition of local identifiability. The validity of the conditions can be tested numerically using backpropagation and matrix rank computations.
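The sketch below is a minimal, hypothetical illustration (in JAX) of the kind of numerical test the abstract alludes to: use backpropagation to form the Jacobian of the network outputs at the sample points with respect to the parameters, then compute a matrix rank. It is not the paper's actual criterion; the network sizes, the helper names (init_params, forward, jacobian_rank), and the number of sample points are all illustrative assumptions.

```python
# Illustrative sketch only: Jacobian-rank computation for a small ReLU network.
# The paper's conditions are stated for a local parameterization obtained by
# fixing some weights; here we simply show the backprop + rank mechanics.
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree


def init_params(key, sizes):
    """Random weights and biases for a small ReLU network (illustrative sizes)."""
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        key, k_w, k_b = jax.random.split(key, 3)
        params.append((jax.random.normal(k_w, (n_out, n_in)) / jnp.sqrt(n_in),
                       jax.random.normal(k_b, (n_out,)) * 0.1))
    return params


def forward(params, x):
    """Deep ReLU network: ReLU on hidden layers, linear output layer."""
    for w, b in params[:-1]:
        x = jax.nn.relu(w @ x + b)
    w, b = params[-1]
    return w @ x + b


def jacobian_rank(params, samples):
    """Rank of the Jacobian of all sample outputs w.r.t. all parameters."""
    flat, unravel = ravel_pytree(params)

    def outputs(theta):
        p = unravel(theta)
        return jnp.concatenate([forward(p, x) for x in samples])

    jac = jax.jacrev(outputs)(flat)  # reverse-mode autodiff, i.e. backpropagation
    return jnp.linalg.matrix_rank(jac), flat.size


key = jax.random.PRNGKey(0)
params = init_params(key, sizes=[3, 5, 5, 2])
samples = jax.random.normal(jax.random.PRNGKey(1), (20, 3))
rank, n_params = jacobian_rank(params, samples)
print(f"Jacobian rank {rank} vs {n_params} raw parameters")
```

Note that, because of the positive rescaling symmetries of ReLU networks, this raw Jacobian rank cannot be compared directly to the total parameter count; the paper's conditions are formulated against the local parameterization in which some weights are fixed, which is precisely what removes these symmetries.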
Main file
Local_identifiability_of_deep_relu_neural_networks_the_theory.pdf (612.25 KB) Download the file
Origin: Files produced by the author(s)

Dates and versions

hal-03687395 , version 1 (14-06-2022)
hal-03687395 , version 2 (21-11-2022)

Identifiers

Cite

Joachim Bona-Pellissier, François Malgouyres, François Bachoc. Local Identifiability of Deep ReLU Neural Networks: the Theory. 2022. ⟨hal-03687395v1⟩
181 Views
95 Downloads

