Journal article in SIAM Journal on Mathematics of Data Science, 2020

Lipschitz Certificates for Layered Network Structures Driven by Averaged Activation Operators

Abstract

Obtaining sharp Lipschitz constants for feed-forward neural networks is essential to assess their robustness in the face of perturbations of their inputs. We derive such constants in the context of a general layered network model involving compositions of nonexpansive averaged operators and affine operators. By exploiting this architecture, our analysis finely captures the interactions between the layers, yielding tighter Lipschitz constants than those resulting from the product of individual bounds for groups of layers. The proposed framework is shown to cover in particular most practical instances encountered in feed-forward neural networks. Our Lipschitz constant estimates are further improved in the case of structures employing scalar nonlinear functions, which include standard convolutional networks as special cases.
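For concreteness, the baseline that the paper's certificates sharpen is the classical product bound: for a network x ↦ R_m(W_m ⋯ R_1(W_1 x + b_1) ⋯ + b_m) with nonexpansive (1-Lipschitz) activations R_i, the product of the spectral norms of the W_i is a valid Lipschitz constant. The following is a minimal Python sketch of that baseline only, not the paper's method; the function name and the random weights are illustrative assumptions.

```python
import numpy as np

def naive_lipschitz_bound(weights):
    """Classical Lipschitz certificate for a feed-forward network
    with nonexpansive (1-Lipschitz) activations: the product of
    the spectral norms of the linear layers. This is the loose
    baseline bound; the paper's constants are tighter because
    they exploit the averagedness of the activation operators
    and the interactions between layers.
    """
    # np.linalg.norm(W, 2) on a 2-D array is the largest singular
    # value of W, i.e., its spectral norm.
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

# Illustrative 3-layer example with random weights (hypothetical
# sizes, for demonstration only).
rng = np.random.default_rng(0)
weights = [
    rng.standard_normal((8, 16)),  # layer 1: R^16 -> R^8
    rng.standard_normal((8, 8)),   # layer 2: R^8  -> R^8
    rng.standard_normal((4, 8)),   # layer 3: R^8  -> R^4
]
print(naive_lipschitz_bound(weights))
```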
Main file: https://pcombet.math.ncsu.edu/simods1.pdf (437.14 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02428111, version 1 (05-01-2020)
hal-02428111, version 2 (09-02-2021)

Identifiers

HAL Id: hal-02428111
DOI: 10.1137/19M1272780

Cite

Patrick L. Combettes, Jean-Christophe Pesquet. Lipschitz Certificates for Layered Network Structures Driven by Averaged Activation Operators. SIAM Journal on Mathematics of Data Science, 2020, ⟨10.1137/19M1272780⟩. ⟨hal-02428111v2⟩
