Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment
Conference paper · Year: 2022

Abstract

Vision and Language Pretraining has become the prevalent approach for tackling multimodal downstream tasks. The current trend is to move towards ever larger models and pretraining datasets. This computational headlong rush does not seem sustainable in the long term and de facto excludes academic laboratories with limited resources. In this work, we propose a new framework, dubbed ViCHA, that efficiently exploits the input data to boost learning through: (a) a new hierarchical cross-modal alignment loss, (b) a new self-supervised scheme based on masked image modeling, and (c) leveraging image-level annotations, called Visual Concepts, obtained with existing foundation models such as CLIP, to boost the performance of the image encoder. Although pretrained on four times less data, our ViCHA strategy outperforms other approaches on several downstream tasks such as Image-Text Retrieval, VQA, Visual Reasoning, Visual Entailment and Visual Grounding. The code will be made publicly available here: https://github.com/mshukor/ViCHA
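As a minimal illustration of point (c), the sketch below shows one way image-level Visual Concepts could be obtained with an off-the-shelf CLIP model, by zero-shot scoring of a candidate concept vocabulary against the image. The vocabulary, checkpoint name, and top-k selection here are assumptions made for this example only, not the exact ViCHA extraction pipeline.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical concept vocabulary; the actual vocabulary and selection
# procedure used by ViCHA may differ.
CONCEPT_VOCAB = ["dog", "bicycle", "beach", "snow", "guitar", "pizza"]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def extract_visual_concepts(image_path: str, top_k: int = 3):
    """Score each concept word against the image with CLIP and keep the
    top-k matches as image-level visual concepts."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=CONCEPT_VOCAB, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image: similarity of the image to each concept prompt
    scores = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
    top = scores.topk(top_k).indices.tolist()
    return [CONCEPT_VOCAB[i] for i in top]

# Example usage (hypothetical image file):
# print(extract_visual_concepts("example.jpg"))
```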
Files

Main file: VLP_BMVC22-10.pdf (770.92 KB)
Long_version_arxiv.pdf (4.96 MB)

Dates and versions

hal-03811336, version 1 (11-10-2022)

Cite

Mustafa Shukor, Guillaume Couairon, Matthieu Cord. Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment. 33rd British Machine Vision Conference (BMVC), Nov 2022, London, United Kingdom. ⟨hal-03811336⟩