Conference Paper, 2019

Understanding Priors in Bayesian Neural Networks at the Unit Level

Abstract

We investigate deep Bayesian neural networks with Gaussian priors on the weights and a class of ReLU-like nonlinearities. Gaussian priors on the weights are well known to induce an L2 ("weight decay") regularization. Our results indicate a more intricate regularization effect at the level of the unit activations. Our main result establishes that the induced prior distribution on the units, before and after activation, becomes increasingly heavy-tailed with the depth of the layer. We show that first-layer units are Gaussian, second-layer units are sub-exponential, and units in deeper layers are characterized by sub-Weibull distributions. Our results provide new theoretical insight into deep Bayesian neural networks, which we corroborate with simulation experiments.
Main file: paper_arXiv_ICML2019-3.pdf (596.69 KB). Origin: files produced by the author(s).

Dates and versions

hal-02177151, version 1 (08-07-2019)

Cite

Mariia Vladimirova, Jakob Verbeek, Pablo Mesejo, Julyan Arbel. Understanding Priors in Bayesian Neural Networks at the Unit Level. ICML 2019 - 36th International Conference on Machine Learning, Jun 2019, Long Beach, United States. pp.6458-6467. ⟨hal-02177151⟩