Conference paper, 2021

Separation and Concentration in Deep Networks

Abstract

Numerical experiments demonstrate that deep neural network classifiers progressively separate class distributions around their mean, achieving linear separability on the training set and increasing the Fisher discriminant ratio. We explain this mechanism with two types of operators. We prove that a rectifier without biases applied to sign-invariant tight frames can separate class means and increase Fisher ratios. Conversely, soft-thresholding on tight frames can reduce within-class variabilities while preserving class means. Variance reduction bounds are proved for Gaussian mixture models. For image classification, we show that separation of class means can be achieved with rectified wavelet tight frames that are not learned; this defines a scattering transform. Learning 1 × 1 convolutional tight frames along scattering channels and applying soft-thresholding reduces within-class variabilities. The resulting scattering network reaches the classification accuracy of ResNet-18 on CIFAR-10 and ImageNet, with fewer layers and no learned biases.
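To make the two operators concrete, here is a minimal NumPy sketch of (1) a bias-free rectifier applied to a sign-invariant tight frame, which preserves sign information and can separate class means, and (2) soft-thresholding of tight frame coefficients, which shrinks small coefficients to reduce within-class variability. The frame construction, dimensions, and threshold value below are illustrative assumptions, not the architecture used in the paper.

```python
# Illustrative sketch of the paper's two operator types.
# The frame W, the signal x, and the threshold lam are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
d = 8

# A tight frame W (rows w_k, W^T W = I): take an orthogonal basis and
# duplicate it with opposite signs, so that if w is a row, so is -w.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # orthogonal matrix
W = np.concatenate([Q, -Q], axis=0) / np.sqrt(2)   # sign-invariant tight frame

x = rng.standard_normal(d)                         # a sample signal

# (1) Rectifier without bias on the sign-invariant frame:
# ReLU(W x) keeps both ReLU(w.x) and ReLU(-w.x), which jointly encode |w.x|,
# so no sign information is lost and class means can be separated.
def relu(u):
    return np.maximum(u, 0.0)

separated = relu(W @ x)

# (2) Soft-thresholding of tight frame coefficients:
# small coefficients are shrunk toward zero, reducing within-class
# variability; the tight-frame adjoint W^T maps back to signal space.
def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

lam = 0.1                                          # assumed threshold
concentrated = W.T @ soft_threshold(W @ x, lam)
```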
Main file: paper_ICLR2021.pdf (372.86 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03169904, version 1 (15-03-2021)

Identifiers

  • HAL Id: hal-03169904, version 1

Cite

John Zarka, Florentin Guth, Stéphane Mallat. Separation and Concentration in Deep Networks. ICLR 2021 - 9th International Conference on Learning Representations, May 2021, Vienna / Virtual, Austria. ⟨hal-03169904⟩
