A General Framework for the Disintegration of PAC-Bayesian Bounds
Abstract
PAC-Bayesian bounds are known to be tight and informative when studying the generalization ability of randomized classifiers. However, when applied to some families of deterministic models such as neural networks, they require a loose and costly derandomization step. As an alternative to this step, we introduce new PAC-Bayesian generalization bounds that are disintegrated, i.e., they provide guarantees for a single hypothesis rather than for the usual averaged (randomized) classifier. Our bounds are easy to optimize and can be used to design learning algorithms. We illustrate the benefits of our results on neural networks and show a significant practical improvement over the state-of-the-art framework.
Domains
Machine Learning [stat.ML]