Journal article in Neurocomputing, 2025

Self-training: A survey

Abstract

Self-training methods have gained significant attention in recent years due to their effectiveness in leveraging small labeled datasets together with large amounts of unlabeled data for prediction tasks. These models identify decision boundaries in low-density regions using the confidence scores of a learned classifier, without additional assumptions about the data distribution. The core principle of self-training is to iteratively assign pseudo-labels to unlabeled samples whose confidence scores exceed a certain threshold, enrich the labeled dataset with them, and retrain the classifier. This paper presents self-training methods for binary and multi-class classification, along with variants and related approaches such as consistency-based methods and transductive learning. We also briefly describe self-supervised learning and reinforced self-training. Furthermore, we highlight popular applications of self-training and discuss the importance of dynamic thresholding and of reducing pseudo-label noise for performance improvement. To the best of our knowledge, this is the first thorough and complete survey on self-training.
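To make the pseudo-labeling loop described above concrete, here is a minimal sketch in Python. It uses scikit-learn's LogisticRegression as a stand-in base classifier; the function name `self_train`, the fixed confidence `threshold`, and the `max_iter` cap are illustrative assumptions for this sketch, not the survey's prescribed algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_l, y_l, X_u, threshold=0.9, max_iter=10):
    """Self-training sketch (illustrative): iteratively pseudo-label
    unlabeled samples whose predicted-class probability exceeds
    `threshold`, add them to the labeled set, and refit the classifier."""
    clf = LogisticRegression(max_iter=1000)
    for _ in range(max_iter):
        clf.fit(X_l, y_l)
        if len(X_u) == 0:
            break
        proba = clf.predict_proba(X_u)       # class-probability scores
        confidence = proba.max(axis=1)
        mask = confidence >= threshold       # keep high-confidence samples only
        if not mask.any():
            break                            # nothing left to pseudo-label
        pseudo_labels = clf.classes_[proba[mask].argmax(axis=1)]
        X_l = np.vstack([X_l, X_u[mask]])    # enrich the labeled set
        y_l = np.concatenate([y_l, pseudo_labels])
        X_u = X_u[~mask]                     # drop pseudo-labeled samples
    clf.fit(X_l, y_l)                        # final fit on the enriched set
    return clf
```

In practice, the threshold is often adapted across iterations (dynamic thresholding) and pseudo-label noise is filtered or down-weighted, two points the abstract emphasizes; scikit-learn also provides a comparable generic wrapper in sklearn.semi_supervised.SelfTrainingClassifier.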
Main file: self-training_survey.pdf (631.39 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04796226, version 1 (21-11-2024)

Identifiers

Cite

Massih-Reza Amini, Vasilii Feofanov, Loïc Pauletto, Liès Hadjadj, Émilie Devijver, et al. Self-training: A survey. Neurocomputing, 2025, pp. 128904. ⟨10.1016/j.neucom.2024.128904⟩. ⟨hal-04796226⟩