Parameter identifiability of a deep feedforward ReLU neural network - Archive ouverte HAL
Journal article in Machine Learning, 2023

Parameter identifiability of a deep feedforward ReLU neural network

Abstract

Whether one can recover the parameters (weights and biases) of a neural network from knowledge of the function it implements on a subset of the input space can be, depending on the situation, a curse or a blessing. On the one hand, recovering the parameters enables stronger adversarial attacks and could also disclose sensitive information about the dataset used to construct the network. On the other hand, if the parameters of a network can be recovered, the user is guaranteed that the features in the latent spaces can be interpreted. It also provides foundations for obtaining formal guarantees on the performance of the network. It is therefore important to characterize the networks whose parameters can be identified and those whose parameters cannot. In this article, we provide a set of conditions on a deep fully-connected feedforward ReLU neural network under which the parameters of the network are uniquely identified, up to permutation and positive rescaling, from the function it implements on a subset of the input space.
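To make the permutation and positive-rescaling invariances concrete, here is a minimal NumPy sketch (an illustration, not taken from the paper): because ReLU is positively homogeneous, multiplying a hidden unit's incoming weights and bias by some lambda > 0 while dividing its outgoing weights by lambda, and then permuting the hidden units consistently, leaves the implemented function unchanged. All variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer ReLU network: f(x) = W2 @ relu(W1 @ x + b1) + b2
W1 = rng.standard_normal((4, 3))
b1 = rng.standard_normal(4)
W2 = rng.standard_normal((2, 4))
b2 = rng.standard_normal(2)

def relu(z):
    return np.maximum(z, 0.0)

def net(x, W1, b1, W2, b2):
    return W2 @ relu(W1 @ x + b1) + b2

# Positive rescaling: scale each hidden unit's incoming weights and bias by
# lambda_i > 0 and divide its outgoing weights by lambda_i.
lam = rng.uniform(0.5, 2.0, size=4)
W1s, b1s = lam[:, None] * W1, lam * b1
W2s = W2 / lam[None, :]

# Permutation: reorder the hidden units together with the matching columns of W2.
perm = rng.permutation(4)
W1p, b1p, W2p = W1s[perm], b1s[perm], W2s[:, perm]

# The transformed parameters implement exactly the same function.
x = rng.standard_normal(3)
assert np.allclose(net(x, W1, b1, W2, b2), net(x, W1p, b1p, W2p, b2))
```

These two operations are exactly the ambiguities that cannot be resolved from the function alone, which is why identifiability in the article is stated modulo permutation and positive rescaling.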
Main file
Parameter_identifiability_of_a_deep_ReLU_NN.pdf (966.05 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03501784 , version 1 (23-12-2021)
hal-03501784 , version 2 (21-03-2023)

Cite

Joachim Bona-Pellissier, François Bachoc, François Malgouyres. Parameter identifiability of a deep feedforward ReLU neural network. Machine Learning, 2023, 112, pp.4431-4493. ⟨hal-03501784v2⟩
