Self-training: A survey
Abstract
Self-training methods have gained significant attention in recent years due to their effectiveness in leveraging small labeled datasets together with large collections of unlabeled observations for prediction tasks. These methods identify decision boundaries in low-density regions, relying on the confidence scores of a learned classifier and making no additional assumptions about the data distribution. The core principle of self-training is to iteratively assign pseudo-labels to unlabeled samples whose confidence scores exceed a certain threshold, enrich the labeled dataset with them, and retrain the classifier. This paper presents self-training methods for binary and multi-class classification, along with their variants and related approaches such as consistency-based methods and transductive learning. We also briefly describe self-supervised learning and reinforced self-training. Furthermore, we highlight popular applications of self-training and discuss the importance of dynamic thresholding and of reducing pseudo-label noise for improving performance. To the best of our knowledge, this is the first thorough and complete survey on self-training.
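For illustration, the following is a minimal sketch of the generic pseudo-labeling loop described in the abstract, not the authors' specific procedure; the fixed confidence threshold, the logistic-regression base classifier, and the function name `self_train` are assumptions chosen only to make the example concrete.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, max_rounds=10):
    """Generic self-training loop (illustrative sketch, not the survey's exact algorithm).

    At each round: fit a classifier on the labeled set, pseudo-label the
    unlabeled samples whose predicted-class probability exceeds `threshold`,
    move them into the labeled set, and retrain.
    """
    X_lab, y_lab, X_unlab = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    clf = LogisticRegression(max_iter=1000)  # assumed base classifier
    for _ in range(max_rounds):
        clf.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        confidence = proba.max(axis=1)
        pseudo_labels = clf.classes_[proba.argmax(axis=1)]
        mask = confidence >= threshold
        if not mask.any():
            break  # no unlabeled sample passes the confidence threshold
        # Move confident samples from the unlabeled pool to the labeled set.
        X_lab = np.vstack([X_lab, X_unlab[mask]])
        y_lab = np.concatenate([y_lab, pseudo_labels[mask]])
        X_unlab = X_unlab[~mask]
    return clf
```

A dynamic threshold, as discussed in the abstract, would replace the fixed `threshold` with a value adjusted per round or per class to limit pseudo-label noise.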
Domains
Computer Science [cs]
Origin: Files produced by the author(s)