Risk upper bounds for general ensemble methods with an application to multiclass classification
Abstract
This paper generalizes a pivotal result from the PAC-Bayesian literature, the C-bound, primarily designed for binary classification, to the general case of ensemble methods of voters with arbitrary outputs. We provide a generic version of the C-bound, an upper bound on the risk of models expressed as a weighted majority vote that is based on the first and second statistical moments of the vote's margin. On the one hand, this bound may advantageously be applied to more complex outputs than mere binary labels, such as multiclass and multilabel outputs; on the other hand, it allows us to consider margin relaxations. We provide a specialization of the bound to multiclass classification, together with empirical evidence that the presented theoretical result tightly bounds the risk of the majority vote classifier. We also give insights as to how the proposed bound may be of use to characterize the risk of multilabel predictors.
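For reference, a minimal sketch of the classical binary C-bound that this work generalizes, stated under the standard PAC-Bayesian conventions; the notation here (distribution D, margin M_Q of the Q-weighted majority vote B_Q, risk R_D) is assumed background and may differ from the paper's own.

% Classical binary C-bound (first and second moments of the vote's margin);
% the paper's generic version extends this idea to arbitrary voter outputs.
\[
  R_D(B_Q) \;\le\; 1 \;-\; \frac{\big(\mathbf{E}_{(X,Y)\sim D}\,[M_Q(X,Y)]\big)^2}
                                 {\mathbf{E}_{(X,Y)\sim D}\,[M_Q(X,Y)^2]},
  \qquad \text{provided } \mathbf{E}_{(X,Y)\sim D}\,[M_Q(X,Y)] > 0.
\]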