Transductive Information Maximization For Few-Shot Learning - Archive ouverte HAL
Conference paper, Year: 2020

Transductive Information Maximization For Few-Shot Learning

Malika Boudiaf
  • Role: Author
Imtiaz Masud Ziko
  • Role: Author
Jérôme Rony
  • Role: Author
Jose Dolz
  • Role: Author
Ismail Ben Ayed
  • Role: Author

Abstract

We introduce Transductive Information Maximization (TIM) for few-shot learning. Our method maximizes the mutual information between the query features and their label predictions for a given few-shot task, in conjunction with a supervision loss based on the support set. Furthermore, we propose a new alternating-direction solver for our mutual-information loss, which substantially speeds up transductive-inference convergence over gradient-based optimization while yielding similar accuracy. TIM inference is modular: it can be used on top of any base-training feature extractor. Following standard transductive few-shot settings, our comprehensive experiments demonstrate that TIM outperforms state-of-the-art methods significantly across various datasets and networks, even when used on top of a fixed feature extractor trained with simple cross-entropy on the base classes, without resorting to complex meta-learning schemes. It consistently brings a 2% to 5% improvement in accuracy over the best-performing method, not only on all the well-established few-shot benchmarks but also in more challenging scenarios with domain shifts and larger numbers of classes.
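To make the abstract's objective concrete, the PyTorch sketch below combines a cross-entropy term on the labelled support set with an entropy-based mutual-information term on the unlabelled query predictions (mutual information estimated as marginal entropy minus conditional entropy). This is a minimal illustration of the kind of loss described, not the authors' reference implementation: the function name `tim_loss` and the weights `alpha` and `lam` are illustrative assumptions, and the exact weighting and solver used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def tim_loss(support_logits, support_labels, query_logits, alpha=1.0, lam=0.1):
    """Illustrative TIM-style objective (sketch, not the paper's reference code).

    Minimizing this loss encourages:
      - correct predictions on the support set (cross-entropy),
      - confident predictions on each query sample (low conditional entropy),
      - diverse class usage across the query set (high marginal entropy),
    i.e. it maximizes an estimate of I(X_q; Y_q) = H(Y_q) - H(Y_q | X_q).
    """
    # Supervised term on the labelled support set
    ce = F.cross_entropy(support_logits, support_labels)

    # Soft predictions for the query set: shape (n_query, n_classes)
    q_probs = query_logits.softmax(dim=1)

    # Conditional entropy H(Y_q | X_q): mean per-sample prediction entropy
    cond_ent = -(q_probs * torch.log(q_probs + 1e-12)).sum(dim=1).mean()

    # Marginal entropy H(Y_q): entropy of the average prediction over queries
    marg = q_probs.mean(dim=0)
    marg_ent = -(marg * torch.log(marg + 1e-12)).sum()

    # Cross-entropy minus weighted mutual information (alpha, lam are illustrative)
    return ce - lam * (marg_ent - alpha * cond_ent)
```

In transductive inference of this kind, only a small set of parameters (e.g. the classifier weights on top of a fixed feature extractor) would be updated by minimizing such a loss over each few-shot task; the abstract notes that the paper additionally proposes an alternating-direction solver that converges faster than plain gradient descent on the mutual-information term.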
Main file
Transductive_Information_Maximization_for_Few_Shot_Learning__Arxiv_version_.pdf (917.05 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03979795 , version 1 (09-02-2023)
hal-03979795 , version 2 (22-06-2023)

Identifiers

  • HAL Id : hal-03979795 , version 1

Cite

Malika Boudiaf, Imtiaz Masud Ziko, Jérôme Rony, Jose Dolz, Pablo Piantanida, et al.. Transductive Information Maximization For Few-Shot Learning. Conference on Neural Information Processing Systems, Dec 2020, Vancouver, Canada. ⟨hal-03979795v1⟩