Entropy Based Feature Regularization to Improve Transferability of Deep Learning Models - Archive ouverte HAL
Conference Paper Year: 2023

Entropy Based Feature Regularization to Improve Transferability of Deep Learning Models

Abstract

When dealing with signals, labeling a classification dataset implies defining classes that may only approximate a smoother and more complicated ground truth. For example, natural images may contain multiple objects, only one of which is labeled in many vision datasets, or classes may result from the discretization of a regression problem where targets are continuous. Using cross-entropy to train deep models on such coarse labels is likely to cut roughly through the feature space, potentially discarding the most meaningful features and, in particular, losing information about the underlying fine-grained task. In this paper we are interested in the problem of solving fine-grained classification or regression using a model trained on coarse-grained labels only. We show that standard cross-entropy can lead to overfitting on coarse-related features. We introduce an entropy-based regularization that promotes more diversity in the feature space of trained models, and we empirically demonstrate that this methodology reaches better performance on fine-grained problems. Our results are supported by theoretical developments and empirical validation.
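To make the idea concrete, below is a minimal PyTorch sketch of one plausible instantiation of such a loss: cross-entropy on the coarse labels combined with an entropy bonus on intermediate features, where each feature vector is softmax-normalized and its Shannon entropy is rewarded. This is an illustration only, not the authors' exact formulation (the abstract does not specify it); the regularizer form, the weight lambda_reg, and all function names are assumptions.

```python
import torch
import torch.nn.functional as F

def entropy_feature_regularizer(features: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Mean Shannon entropy of softmax-normalized feature vectors.

    `features` has shape (batch, feature_dim). Each feature vector is
    mapped to a probability distribution with a softmax; the entropy of
    that distribution is then averaged over the batch. Higher values
    mean the features spread their mass over more dimensions.

    NOTE: this particular normalization is an assumption for
    illustration, not necessarily the paper's definition.
    """
    p = F.softmax(features, dim=1)
    entropy = -(p * torch.log(p + eps)).sum(dim=1)  # per-sample entropy
    return entropy.mean()

def training_loss(logits: torch.Tensor,
                  features: torch.Tensor,
                  targets: torch.Tensor,
                  lambda_reg: float = 0.1) -> torch.Tensor:
    """Cross-entropy on coarse labels minus an entropy bonus on features.

    Subtracting the entropy term rewards feature vectors that do not
    collapse onto the few directions sufficient to separate the coarse
    classes, keeping more information for downstream fine-grained tasks.
    `lambda_reg` is a hypothetical trade-off weight.
    """
    ce = F.cross_entropy(logits, targets)
    reg = entropy_feature_regularizer(features)
    return ce - lambda_reg * reg
```

In a typical training loop, `features` would be the penultimate-layer activations of the backbone and `logits` the classifier head's output; the sign convention (subtracting the entropy) means gradient descent simultaneously fits the coarse labels and increases feature diversity.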
No file deposited

Dates and versions

hal-04173117, version 1 (28-07-2023)

Identifiers

Cite

Raphael Baena, Lucas Drumetz, Vincent Gripon. Entropy Based Feature Regularization to Improve Transferability of Deep Learning Models. ICASSP 2023: IEEE International Conference on Acoustics, Speech and Signal Processing, Jun 2023, Rhodes Island, Greece. pp.1-5, ⟨10.1109/ICASSP49357.2023.10095195⟩. ⟨hal-04173117⟩