Self-Training and Multi-Task Learning for Limited Data: Evaluation Study on Object Detection - Archive ouverte HAL
Conference paper. Year: 2023

Self-Training and Multi-Task Learning for Limited Data: Evaluation Study on Object Detection

Abstract

Self-training allows a network to learn from the predictions of a more complex model, and thus typically requires a well-trained teacher model and a mixture of teacher and student data, while multi-task learning jointly optimizes different targets to learn their salient interrelationships and requires multi-task annotations for every training example. Despite being particularly data-demanding, both frameworks have potential for better data exploitation if these assumptions can be relaxed. In this paper, we compare self-training for object detection when teacher training data are deficient, so that students are trained on examples unseen by the teacher, with multi-task learning on partially annotated data, i.e. a single task annotation per training example. Each scenario has its own limitations, but both are potentially helpful when annotated data are limited. Experimental results show improved performance when a weak teacher with unseen data is used to train a multi-task student. Despite the limited setup, we believe these results demonstrate the potential of multi-task knowledge distillation and self-training, which could be beneficial for future study. Source code and data splits are available at https://lhoangan.github.io/multas
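To make the setup concrete, below is a minimal, hypothetical sketch (not the authors' code) of the combination described in the abstract: a weak teacher produces pseudo-labels on examples it was not trained on, and a multi-task student is trained on a mix of ground-truth single-task annotations and teacher pseudo-labels for the missing task. All module and variable names (TwoHeadNet, head_det, head_aux, the toy shapes and losses) are illustrative assumptions standing in for a full detection pipeline.

```python
# Hypothetical sketch: self-training with a weak teacher whose pseudo-labels
# supervise a multi-task student when the annotation for one task is missing.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadNet(nn.Module):
    """Toy multi-task model: a shared backbone with two task heads."""
    def __init__(self, in_dim=128, n_classes=10, n_aux=5):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.head_det = nn.Linear(64, n_classes)  # stands in for the detection head
        self.head_aux = nn.Linear(64, n_aux)      # stands in for the auxiliary task head

    def forward(self, x):
        h = self.backbone(x)
        return self.head_det(h), self.head_aux(h)

teacher = TwoHeadNet()   # assumed pre-trained on its own (small) split
student = TwoHeadNet()
optimizer = torch.optim.SGD(student.parameters(), lr=1e-2)

# Fake partially annotated batch: each example carries a label for only one task.
x = torch.randn(8, 128)
y_det = torch.randint(0, 10, (8,))                                   # ground truth for task 1
has_det = torch.tensor([1, 1, 1, 1, 0, 0, 0, 0], dtype=torch.bool)   # which examples have it

with torch.no_grad():                       # weak teacher fills in the missing task
    _, aux_logits_t = teacher(x)
    y_aux_pseudo = aux_logits_t.argmax(dim=1)

det_logits, aux_logits = student(x)
loss_det = F.cross_entropy(det_logits[has_det], y_det[has_det])           # supervised task
loss_aux = F.cross_entropy(aux_logits[~has_det], y_aux_pseudo[~has_det])  # distilled task
loss = loss_det + loss_aux

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In a real detection setting the toy linear heads and cross-entropy losses would be replaced by the detector's box/classification losses, but the split between ground-truth supervision and teacher pseudo-supervision follows the same pattern.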
No file deposited

Dates and versions

hal-04357151, version 1 (21-12-2023)

Identifiers

  • HAL Id: hal-04357151, version 1

Cite

Hoàng-Ân Lê, Minh-Tan Pham. Self-Training and Multi-Task Learning for Limited Data: Evaluation Study on Object Detection. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, Oct 2023, Paris, France. pp.6580-6583. ⟨hal-04357151⟩