Conference paper. Year: 2024

Cross-modal Retrieval for Knowledge-based Visual Question Answering

Abstract

Knowledge-based Visual Question Answering about Named Entities is a challenging task that requires retrieving information from a multimodal Knowledge Base. Named entities have diverse visual representations and are therefore difficult to recognize. We argue that cross-modal retrieval may help bridge the semantic gap between an entity and its depictions, and is, above all, complementary to mono-modal retrieval. We provide empirical evidence through experiments with a multimodal dual encoder, namely CLIP, on the recent ViQuAE, InfoSeek, and Encyclopedic-VQA datasets. Additionally, we study three different strategies to fine-tune such a model: mono-modal, cross-modal, or joint training. Our method, which combines mono- and cross-modal retrieval, is competitive with billion-parameter models on the three datasets, while being conceptually simpler and computationally cheaper.
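As a rough illustration of the retrieval setup the abstract describes, the sketch below combines mono-modal (image-to-image) and cross-modal (image-to-text) retrieval with a CLIP dual encoder, using the HuggingFace transformers implementation of CLIP. The knowledge-base entries, file paths, and the score-fusion weight alpha are illustrative assumptions, not details taken from the paper.

    # Minimal sketch: hybrid mono-modal + cross-modal retrieval with CLIP.
    # The KB entries, image paths, and fusion weight `alpha` are assumptions
    # made for illustration, not the authors' exact pipeline.
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def embed_images(images):
        # L2-normalized CLIP image embeddings
        inputs = processor(images=images, return_tensors="pt")
        with torch.no_grad():
            feats = model.get_image_features(**inputs)
        return feats / feats.norm(dim=-1, keepdim=True)

    def embed_texts(texts):
        # L2-normalized CLIP text embeddings
        inputs = processor(text=texts, return_tensors="pt",
                           padding=True, truncation=True)
        with torch.no_grad():
            feats = model.get_text_features(**inputs)
        return feats / feats.norm(dim=-1, keepdim=True)

    # Hypothetical multimodal KB: each named entity has a textual label
    # and a reference image.
    entity_names = ["Eiffel Tower", "Big Ben", "Statue of Liberty"]
    entity_images = [Image.open(f"kb/{name}.jpg") for name in entity_names]

    text_index = embed_texts(entity_names)     # queried cross-modally (image -> text)
    image_index = embed_images(entity_images)  # queried mono-modally (image -> image)

    # The image attached to the visual question serves as the query.
    query = embed_images([Image.open("question.jpg")])

    cross_modal = query @ text_index.T  # bridges entity/depiction semantic gap
    mono_modal = query @ image_index.T  # matches against known depictions
    alpha = 0.5                         # assumed interpolation weight
    scores = alpha * mono_modal + (1 - alpha) * cross_modal
    print(entity_names[scores.argmax(dim=-1).item()])

Linear interpolation of the two similarity scores is just one plausible way to combine the retrieval modes; the abstract states that the two are combined but does not specify the fusion mechanism, nor which of the three fine-tuning strategies (mono-modal, cross-modal, or joint) the encoder would use here.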
Main file

camera_ecir_2024_cross_modal_arXiv.pdf (8.85 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04384431, version 1 (10-01-2024)

Identifiers

  • HAL Id: hal-04384431, version 1

Cite

Paul Lerner, Olivier Ferret, Camille Guinaudeau. Cross-modal Retrieval for Knowledge-based Visual Question Answering. 46th European Conference on Information Retrieval (ECIR 2024), 2024, Glasgow, United Kingdom. ⟨hal-04384431⟩