Conference paper, 2021

Encouraging Intra-Class Diversity Through a Reverse Contrastive Loss for Better Single-Source Domain Generalization

Abstract

Traditional deep learning algorithms often fail to generalize when they are tested outside of the domain of the training data. The issue can be mitigated by using unlabeled data from the target domain at training time, but because data distributions can change dynamically in real-life applications once a learned model is deployed, it is critical to create networks robust to unknown and unforeseen domain shifts. In this paper we focus on one of the reasons behind this lack of robustness: deep networks rely only on the most obvious, potentially spurious, cues to make their predictions and remain blind to useful but slightly less efficient or more complex patterns. This behaviour has been identified before, and several methods have partially addressed the issue. To investigate their effectiveness and limits, we first design a publicly available MNIST-based benchmark that precisely measures the ability of an algorithm to find the "hidden" patterns. We then evaluate state-of-the-art algorithms on our benchmark and show that the issue remains largely unsolved. Finally, we propose a partially reversed contrastive loss that encourages intra-class diversity and helps find less strongly correlated patterns; its effectiveness is demonstrated by our experiments.
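To make the central idea concrete, the sketch below illustrates one way a "reversed" contrastive term can encourage intra-class diversity: instead of pulling same-class embeddings together, it penalizes their similarity so they are pushed apart. This is a minimal illustration of the general principle only, not the paper's exact loss; the function name `reverse_contrastive_penalty`, the `temperature` parameter, the weighting factor `lam`, and the assumption that the model exposes its feature embeddings are all hypothetical choices made for the example.

```python
import torch
import torch.nn.functional as F

def reverse_contrastive_penalty(features, labels, temperature=0.5):
    """Penalize similarity between same-class embeddings so the network
    is encouraged to keep diverse intra-class patterns (illustrative sketch)."""
    z = F.normalize(features, dim=1)                       # unit-norm embeddings
    sim = z @ z.t() / temperature                          # pairwise cosine similarities
    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = same_class & ~self_mask                     # same-class pairs, excluding self-pairs
    if not pos_mask.any():
        return sim.new_zeros(())                           # degenerate batch: nothing to push apart
    # Minimizing the mean same-class similarity spreads features of a class apart,
    # the reverse of the usual contrastive "pull together" objective.
    return sim[pos_mask].mean()

# Hypothetical usage alongside a standard classification loss:
#   logits, feats = model(x)          # assuming the model returns features as well
#   loss = F.cross_entropy(logits, y) + lam * reverse_contrastive_penalty(feats, y)
```

In such a setup the weight `lam` would balance diversity against classification accuracy; the paper's actual formulation and training procedure should be taken from the PDF below.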
Main file
main.pdf (7.79 MB)
RCL_ICCV-AROW2021.zip (601.46 KB)
main.bbl (11.86 KB)
main.log (29.77 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03260124 , version 1 (14-06-2021)
hal-03260124 , version 2 (13-10-2022)
hal-03260124 , version 3 (31-01-2023)
hal-03260124 , version 4 (23-02-2023)

Identifiers

Cite

Thomas Duboudin, Emmanuel Dellandréa, Corentin Abgrall, Gilles Hénaff, Liming Chen. Encouraging Intra-Class Diversity Through a Reverse Contrastive Loss for Better Single-Source Domain Generalization. ICCV - Workshop on Adversarial Robustness In the Real World 2021, Oct 2021, Virtual, France. ⟨hal-03260124v4⟩
