Conference paper, Year: 2021

Bayesian neural network unit priors and generalized Weibull-tail property

Abstract

The connection between Bayesian neural networks and Gaussian processes has gained a lot of attention in the last few years. Hidden units are proven to converge to a Gaussian process in the limit of infinite layer width. Recent work has suggested that finite Bayesian neural networks may outperform their infinite counterparts because they adapt their internal representations flexibly. To establish solid ground for future research on finite-width neural networks, our goal is to study the prior induced on hidden units. Our main result is an accurate description of the tails of hidden units, showing that unit priors become heavier-tailed going deeper in the network, thanks to the notion of generalized Weibull-tail distributions that we introduce. This finding sheds light on the behavior of hidden units of finite Bayesian neural networks.
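The heavier-tails-with-depth claim can be probed numerically. Below is a minimal simulation sketch, not taken from the paper: it draws weights from iid Gaussian priors, propagates a fixed input through a ReLU network, and reports the excess kurtosis of one unit's pre-activation at each layer as a crude tail-heaviness summary. The width, depth, ReLU activation, and kurtosis summary are illustrative choices, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples = 50_000   # number of prior draws of the weights (illustrative)
width = 20           # common layer width (illustrative)
depth = 4            # number of hidden layers (illustrative)
x = np.ones(width)   # a fixed network input

h = np.tile(x, (n_samples, 1))          # current layer output, one row per prior draw
for layer in range(1, depth + 1):
    # iid standard Gaussian priors on the weights, with 1/sqrt(width) scaling
    W = rng.standard_normal((n_samples, width, width)) / np.sqrt(width)
    g = np.einsum('nij,nj->ni', W, h)   # pre-activations of this layer
    h = np.maximum(g, 0.0)              # ReLU nonlinearity

    # Excess kurtosis of one unit's prior as a crude tail-heaviness summary;
    # it tends to grow with depth at moderate widths, consistent with heavier tails.
    u = g[:, 0]
    kurt = np.mean((u - u.mean()) ** 4) / np.var(u) ** 2 - 3.0
    print(f"layer {layer}: excess kurtosis of a unit pre-activation = {kurt:.2f}")
```

In such a simulation the first-layer pre-activation is exactly Gaussian (excess kurtosis near zero), while deeper layers typically show increasingly positive values, in line with the heavier-tails-with-depth picture described in the abstract.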
Main file: ACML___Accurate_tails_of_BNN_unit_priors-3.pdf (767.1 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03368522, version 1 (06-10-2021)

Cite

Mariia Vladimirova, Julyan Arbel, Stéphane Girard. Bayesian neural network unit priors and generalized Weibull-tail property. ACML 2021 - 13th Asian Conference on Machine Learning, Nov 2021, virtual event. pp. 1-16, ⟨10.48550/arXiv.2110.02885⟩. ⟨hal-03368522⟩