A Symmetry-Aware Exploration of Bayesian Neural Network Posteriors
Abstract
The distribution of the weights of modern deep neural networks (DNNs) - crucial for uncertainty quantification and robustness - is an eminently complex object due to its extremely high dimensionality. This paper proposes one of the first large-scale explorations of the posterior distribution of deep Bayesian Neural Networks (BNNs), expanding its study to real-world vision tasks and architectures. Specifically, we investigate the optimal approach for approximating the posterior, analyze the connection between posterior quality and uncertainty quantification, delve into the impact of modes on the posterior, and explore methods for visualizing the posterior. Moreover, we uncover weight-space symmetries as a critical aspect for understanding the posterior. To this end, we develop an in-depth assessment of the impact of both permutation and scaling symmetries, which tend to obfuscate the Bayesian posterior. While the first type of transformation is known to duplicate modes, we explore the relationship between the latter and L2 regularization, challenging previous misconceptions. Finally, to help the community improve its understanding of the Bayesian posterior, we release the \href{https://huggingface.co/datasets/torch-uncertainty/Checkpoints}{first large-scale checkpoint dataset}, comprising thousands of real-world models, along with our \href{https://github.com/ENSTA-U2IS-AI/torch-uncertainty}{code}.
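As a brief illustration of the weight-space symmetries mentioned above, the sketch below (not taken from the paper; a minimal NumPy example with hypothetical names) shows that permuting the hidden units of a small ReLU MLP, or rescaling a unit's incoming and outgoing weights by alpha and 1/alpha, leaves the network function unchanged; permutations therefore duplicate posterior modes, while scaling symmetries form continuous directions that interact with L2 regularization.

```python
# Illustrative sketch only: weight-space symmetries of a toy two-layer ReLU MLP.
import numpy as np

rng = np.random.default_rng(0)

# Toy MLP: x -> relu(W1 @ x + b1) -> W2 @ h + b2
d_in, d_hidden, d_out = 4, 8, 3
W1, b1 = rng.normal(size=(d_hidden, d_in)), rng.normal(size=d_hidden)
W2, b2 = rng.normal(size=(d_out, d_hidden)), rng.normal(size=d_out)

def forward(x, W1, b1, W2, b2):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden layer
    return W2 @ h + b2

x = rng.normal(size=d_in)

# Permutation symmetry: reorder hidden units and the matching columns of W2.
perm = rng.permutation(d_hidden)
W1_p, b1_p = W1[perm], b1[perm]   # permute rows feeding the hidden units
W2_p = W2[:, perm]                # permute the corresponding outgoing columns
assert np.allclose(forward(x, W1, b1, W2, b2),
                   forward(x, W1_p, b1_p, W2_p, b2))

# Scaling symmetry of ReLU units: scale incoming weights/bias by alpha > 0
# and outgoing weights by 1/alpha; the function is unchanged, but the L2 norm
# of the weights is not, which is where the interplay with weight decay arises.
alpha = 2.5
W1_s, b1_s = alpha * W1, alpha * b1
W2_s = W2 / alpha
assert np.allclose(forward(x, W1, b1, W2, b2),
                   forward(x, W1_s, b1_s, W2_s, b2))
```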