Preprint, Working Paper. Year: 2018

Bayesian neural networks increasingly sparsify their units with depth

Abstract

We investigate deep Bayesian neural networks with Gaussian priors on the weights and ReLU-like nonlinearities, shedding light on novel sparsity-inducing mechanisms at the level of the units of the network, both pre- and post-nonlinearity. The main thrust of the paper is to establish that the prior distribution of the units becomes increasingly heavy-tailed with depth. We show that first-layer units are Gaussian, second-layer units are sub-Exponential, and we introduce sub-Weibull distributions to characterize the units of deeper layers. Bayesian neural networks with Gaussian priors are well known to induce a weight decay penalty on the weights. In contrast, our result indicates a more elaborate regularization scheme at the level of the units, ranging from convex penalties for the first two layers (weight decay for the first, Lasso for the second) to non-convex penalties for deeper layers. Thus, although weight decay does not allow weights to be set exactly to zero, sparse solutions tend to be selected for the units from the second layer onward. This result provides new theoretical insight into deep Bayesian neural networks, underpinning their natural shrinkage properties and practical potential.
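The abstract's central claim, increasingly heavy-tailed unit priors with depth, can be checked empirically by sampling networks from the prior and tracking a tail statistic of one unit's marginal distribution. The sketch below is not from the paper; it is a minimal Monte Carlo illustration assuming i.i.d. Gaussian weight priors with a 1/sqrt(width) scaling, a fixed all-ones input, and plain ReLU nonlinearities (the width, sample size, and scaling are our choices, not the paper's). Under the abstract's result, excess kurtosis should be near zero at layer one (Gaussian units) and grow from layer two onward (sub-Exponential, then sub-Weibull).

```python
import numpy as np

rng = np.random.default_rng(0)

def unit_prior_samples(depth, width=30, n_samples=10_000):
    """Draw n_samples networks from the prior and return the marginal
    pre-nonlinearity value of one unit at the given depth, for a fixed input."""
    x = np.ones((n_samples, width))          # fixed input, one row per prior draw
    for _ in range(depth):
        # Independent Gaussian weights per prior draw; the 1/sqrt(width)
        # scaling keeps the pre-activation variance stable across layers.
        w = rng.normal(0.0, width ** -0.5, size=(n_samples, width, width))
        pre = np.einsum("nij,nj->ni", w, x)  # pre-nonlinearity units
        x = np.maximum(pre, 0.0)             # ReLU
    return pre[:, 0]

for depth in (1, 2, 3, 4):
    u = unit_prior_samples(depth)
    # Excess kurtosis is 0 for a Gaussian; growth with depth signals heavier tails.
    k = np.mean((u - u.mean()) ** 4) / u.var() ** 2 - 3.0
    print(f"depth {depth}: excess kurtosis ~ {k:.2f}")
```

For reference, the penalty pattern the abstract describes (weight decay, i.e. |u|^2, at layer one; Lasso, i.e. |u|, at layer two) suggests a non-convex |u|^(2/l) unit-level penalty at layer l >= 3; this exponent is extrapolated here from the abstract's two convex examples and should be checked against the full PDF.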
Main file
BNN.pdf (733.98 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01950657, version 1 (11-12-2018)

Identifiers

  • HAL Id: hal-01950657, version 1

Cite

Mariia Vladimirova, Julyan Arbel, Pablo Mesejo. Bayesian neural networks increasingly sparsify their units with depth. 2018. ⟨hal-01950657⟩
99 Views
112 Downloads
