Preprint / working paper, Year: 2024

Layerwise Early Stopping for Test Time Adaptation

Abstract

Test Time Adaptation (TTA) addresses the problem of distribution shift by enabling pretrained models to learn new features on an unseen domain at test time. However, maintaining a balance between learning new features and retaining useful pretrained features remains a significant challenge. In this paper, we propose Layerwise EArly STopping (LEAST) for TTA to address this problem. The key idea is to stop adapting individual layers during TTA if the features being learned do not appear beneficial for the new domain. For that purpose, we propose a novel gradient-based metric that measures the relevance of the currently learned features to the new domain without the need for supervised labels. More specifically, we use this metric to determine dynamically when to stop updating each layer during TTA. This enables a more balanced adaptation, restricted to the layers that benefit from it, and only for a certain number of steps. Such an approach also has the added effect of limiting the forgetting of pretrained features that remain useful on the new domain. Through extensive experiments, we demonstrate that Layerwise Early Stopping improves the performance of existing TTA approaches across multiple datasets, domain shifts, model architectures, and TTA losses.
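To illustrate the control flow the abstract describes, here is a minimal sketch in PyTorch: the model is adapted on unlabeled test batches with an unsupervised TTA loss, and any layer whose per-layer score falls below a threshold is frozen for the remaining steps. The entropy loss, the gradient-norm relevance score, and the threshold used here are placeholder assumptions for illustration only; they are not the paper's actual metric or procedure.

```python
# Sketch of layerwise early stopping during test-time adaptation (TTA).
# Assumptions (not from the abstract): PyTorch, an entropy-minimization TTA loss,
# and a placeholder per-layer relevance score based on gradient norms.
import torch
import torch.nn as nn


def entropy_loss(logits: torch.Tensor) -> torch.Tensor:
    """Unsupervised TTA objective: mean prediction entropy over the batch."""
    probs = logits.softmax(dim=-1)
    return -(probs * probs.log().clamp(min=-100)).sum(dim=-1).mean()


def layer_relevance(layer: nn.Module) -> float:
    """Hypothetical relevance score: average gradient norm of the layer's
    parameters. A stand-in for the paper's label-free, gradient-based metric."""
    norms = [p.grad.norm().item() for p in layer.parameters() if p.grad is not None]
    return sum(norms) / max(len(norms), 1)


def adapt_with_layerwise_early_stopping(model, test_loader, steps=10,
                                        lr=1e-3, threshold=1e-3):
    """Adapt `model` on unlabeled test batches, freezing layers whose
    relevance score drops below `threshold` (a tunable assumption)."""
    layers = list(model.children())
    active = {id(layer): True for layer in layers}
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)

    data_iter = iter(test_loader)
    for _ in range(steps):
        try:
            x, _ = next(data_iter)  # labels are ignored at test time
        except StopIteration:
            break
        optimizer.zero_grad()
        loss = entropy_loss(model(x))
        loss.backward()

        for layer in layers:
            if not active[id(layer)]:
                # Early-stopped layer: drop its gradients so it stays frozen.
                for p in layer.parameters():
                    p.grad = None
            elif layer_relevance(layer) < threshold:
                # Stop adapting this layer for the remaining steps.
                active[id(layer)] = False
                for p in layer.parameters():
                    p.grad = None
        optimizer.step()
    return model
```

In practice the relevance metric, threshold, and layer granularity would follow the paper's method; this sketch only illustrates the idea of freezing individual layers once their score suggests further adaptation is no longer beneficial.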
Main file

LEAST_for_TTA_arxiv.pdf (960.68 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04533467, version 1 (04-04-2024)

Identifiers

  • HAL Id: hal-04533467, version 1

Cite

Sabyasachi Sahoo, Mostafa Elaraby, Jonas Ngnawe, Yann Pequignot, Frédéric Precioso, et al. Layerwise Early Stopping for Test Time Adaptation. 2024. ⟨hal-04533467⟩
