Boosting mixture models for semi-supervised learning
Abstract
This paper introduces MixtBoost, a variant of AdaBoost designed to solve problems in which both labeled and unlabeled data are available. We propose several definitions of loss for unlabeled data, from which margins are defined. The resulting boosting schemes use mixture models as base classifiers. Preliminary experiments are analyzed and the relevance of the loss choices is discussed. MixtBoost improves on both mixture models and AdaBoost provided that classes are structured, and is otherwise similar to AdaBoost.
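As a rough illustration of the kind of scheme sketched above, the following Python snippet runs an AdaBoost-style loop in which a two-class Gaussian mixture serves as base classifier and unlabeled points receive a pseudo-margin |f(x)|. The function names, the weighted Gaussian fit, and this particular unlabeled-margin choice are assumptions made for illustration; they are not the losses actually proposed in the paper.

```python
# Hypothetical sketch: AdaBoost-style loop with a mixture-model base classifier
# and a pseudo-margin |f(x)| on unlabeled points (illustrative choice only).
import numpy as np


def fit_gaussian_base(X_lab, y_lab, w_lab):
    """Fit weighted class-conditional Gaussians (diagonal covariance).

    Returns f(x) in [-1, 1]: the difference of class posteriors under equal priors.
    """
    params = {}
    for c in (-1.0, 1.0):
        wc = w_lab * (y_lab == c)
        wc = wc / wc.sum()
        mu = wc @ X_lab
        var = wc @ (X_lab - mu) ** 2 + 1e-6
        params[c] = (mu, var)

    def log_density(X, mu, var):
        return -0.5 * np.sum((X - mu) ** 2 / var + np.log(2 * np.pi * var), axis=1)

    def f(X):
        lp = log_density(X, *params[1.0])
        lm = log_density(X, *params[-1.0])
        p_plus = 1.0 / (1.0 + np.exp(lm - lp))  # posterior of class +1
        return 2.0 * p_plus - 1.0

    return f


def boost_with_unlabeled(X_lab, y_lab, X_unl, n_rounds=10):
    """AdaBoost-like loop: labeled margin y*f(x), unlabeled pseudo-margin |f(x)|."""
    n_lab, n_unl = len(X_lab), len(X_unl)
    w = np.full(n_lab + n_unl, 1.0 / (n_lab + n_unl))
    classifiers, alphas = [], []
    for _ in range(n_rounds):
        f = fit_gaussian_base(X_lab, y_lab, w[:n_lab])
        pred = np.sign(f(X_lab))
        err = np.clip(np.sum(w[:n_lab] * (pred != y_lab)) / np.sum(w[:n_lab]),
                      1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        # Weight update exp(-alpha * margin): unlabeled margins reward confident
        # predictions, standing in for the paper's unlabeled losses.
        margin = np.concatenate([y_lab * f(X_lab), np.abs(f(X_unl))])
        w *= np.exp(-alpha * margin)
        w /= w.sum()
        classifiers.append(f)
        alphas.append(alpha)

    def F(X):
        return np.sign(sum(a * f(X) for a, f in zip(alphas, classifiers)))

    return F
```

Under this illustrative pseudo-margin, unlabeled points on which the current base classifier is unsure keep relatively high weight in later rounds, while confidently classified ones are downweighted, mirroring the qualitative role the abstract assigns to unlabeled losses.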