Sparse Context Transformer for Few-Shot Object Detection - Archive ouverte HAL
Conference paper, Year: 2024

Sparse Context Transformer for Few-Shot Object Detection

Mingyuan Jiu
Jie Mei
Hichem Sahbi
Xiaoheng Jiang
Mingliang Xu

Abstract

Few-shot detection is a major task in pattern recognition that seeks to localize objects using models trained with only a few labeled samples. One of the mainstream few-shot approaches is transfer learning, which consists of pretraining a detection model in a source domain prior to fine-tuning it in a target domain. However, fine-tuned models often struggle to identify new classes in the target domain, particularly when the underlying labeled training data are scarce. In this paper, we devise a novel sparse context transformer (SCT) that effectively leverages object knowledge from the source domain and automatically learns a sparse context from only a few training images in the target domain. As a result, it combines different relevant clues in order to enhance the discrimination power of the learned detectors and reduce class confusion. We evaluate the proposed method on two challenging few-shot object detection benchmarks, and empirical results show that it achieves competitive performance compared to related state-of-the-art methods.
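The abstract describes the approach only at a high level. As a rough, hypothetical sketch (not the authors' implementation), the core idea of attending to a small, learned set of context clues per region proposal can be illustrated with a top-k sparse attention block; the class name SparseContextAttention, the top_k parameter, and the tensor shapes below are assumptions made purely for illustration.

    # Hypothetical illustration only: a sparse attention block that fuses a small
    # number of context features into each region-proposal feature, so that
    # few-shot fine-tuning relies on a compact set of discriminative clues.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SparseContextAttention(nn.Module):
        def __init__(self, dim: int, top_k: int = 4):
            super().__init__()
            self.query = nn.Linear(dim, dim)  # projects proposal features
            self.key = nn.Linear(dim, dim)    # projects candidate context features
            self.value = nn.Linear(dim, dim)
            self.top_k = top_k                # number of context clues kept per proposal

        def forward(self, proposals: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
            # proposals: (N, dim) region-proposal features
            # context:   (M, dim) candidate context features from the same image
            q, k, v = self.query(proposals), self.key(context), self.value(context)
            scores = q @ k.t() / q.size(-1) ** 0.5          # (N, M) relevance scores
            # Sparsify: keep only the top-k context entries per proposal.
            top = scores.topk(min(self.top_k, scores.size(-1)), dim=-1)
            masked = torch.full_like(scores, float("-inf"))
            masked.scatter_(-1, top.indices, top.values)
            attn = F.softmax(masked, dim=-1)                # sparse attention weights
            return proposals + attn @ v                     # residual fusion of context

Restricting each proposal to its top-k context entries is one simple way to realize the "sparse context" idea mentioned in the abstract: with scarce target-domain data, attending to only the most relevant clues limits the noise that could otherwise increase class confusion.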
Main file
paper.pdf (5.02 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04771749, version 1 (07-11-2024)

Identifiers

  • HAL Id: hal-04771749, version 1

Cite

Mingyuan Jiu, Jie Mei, Hichem Sahbi, Xiaoheng Jiang, Mingliang Xu. Sparse Context Transformer for Few-Shot Object Detection. Pacific Rim International Conference on Artificial Intelligence (PRICAI), Nov 2024, Kyoto, Japan. ⟨hal-04771749⟩