Learning from partially supervised data using mixture models and belief functions - Archive ouverte HAL
Journal article in Pattern Recognition, 2009

Learning from partially supervised data using mixture models and belief functions

Abstract

This paper addresses classification problems in which the class membership of training data is only partially known. Each learning sample is assumed to consist of a feature vector $\boldsymbol{x}_i \in \mathcal{X}$ and an imprecise and/or uncertain "soft" label $m_i$, defined as a Dempster-Shafer basic belief assignment over the set of classes. This framework thus generalizes many kinds of learning problems, including supervised, unsupervised and semi-supervised learning. Here, the feature vectors are assumed to be generated from a mixture model. Using the Generalized Bayesian Theorem, an extension of Bayes' theorem in the belief function framework, we derive a criterion generalizing the likelihood function. A variant of the EM algorithm dedicated to the optimization of this criterion is proposed, allowing us to compute estimates of the model parameters. Experimental results demonstrate the ability of this approach to exploit partial information about class labels.
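
To make the abstract concrete, the following is a minimal sketch of one way such an EM variant could look, assuming Gaussian mixture components and assuming each soft label $m_i$ enters the E-step through the plausibilities it assigns to the individual classes (all-ones plausibilities recover unsupervised learning, one-hot plausibilities recover supervised learning). The function `soft_label_em` and all implementation details below are illustrative assumptions, not code taken from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def soft_label_em(X, pl, n_iter=100, seed=0):
    """Sketch of EM-style estimation of a Gaussian mixture from features
    X (n, d) and per-sample class plausibilities pl (n, K) derived from
    the soft labels. pl[i, k] = 1 for all k corresponds to an unlabeled
    sample; a one-hot row corresponds to a fully supervised sample."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    K = pl.shape[1]
    # Initialise mixture proportions, means and covariances.
    prop = np.full(K, 1.0 / K)
    means = X[rng.choice(n, K, replace=False)]
    covs = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(K)])

    for _ in range(n_iter):
        # E-step: component densities, weighted by the class plausibilities
        # induced by each soft label.
        dens = np.column_stack([
            multivariate_normal.pdf(X, means[k], covs[k]) for k in range(K)
        ])
        t = pl * prop * dens                      # (n, K)
        t /= t.sum(axis=1, keepdims=True)
        # M-step: standard weighted Gaussian-mixture parameter updates.
        Nk = t.sum(axis=0)
        prop = Nk / n
        means = (t.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - means[k]
            covs[k] = (t[:, k, None] * diff).T @ diff / Nk[k] + 1e-6 * np.eye(d)
    return prop, means, covs
```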
Main file: prup.pdf (412.8 KB)
Origin: files produced by the author(s)

Dates and versions

hal-00446583, version 1 (13-01-2010)

Identifiers

Cite

Etienne Côme, Latifa Oukhellou, Thierry Denoeux, Patrice Aknin. Learning from partially supervised data using mixture models and belief functions. Pattern Recognition, 2009, 42 (3), pp. 334-348. ⟨10.1016/j.patcog.2008.07.014⟩. ⟨hal-00446583⟩
197 views
410 downloads

