Report (Research Report) Year: 2019

Image search using multilingual texts: a cross-modal learning approach between image and text

Maxime Portaz
  • Role: Author
Adrien Nivaggioli
  • Role: Author
  • PersonId: 1044559
Estelle Maudet
  • Role: Author
  • PersonId: 1044558

Abstract

Multilingual (or cross-lingual) embeddings represent several languages in a single vector space. A common embedding space enables shared semantics between words from different languages. In this paper, we propose to embed images and texts into a single distributional vector space, enabling image search with text queries that express information needs related to the (visual) content of images, as well as search by image similarity. Our framework forces the representation of an image to be similar to the representation of the text that describes it. Moreover, by using multilingual embeddings we ensure that words from two different languages have close descriptors and are thus attached to similar images. We provide experimental evidence of the effectiveness of our approach on two datasets: Common Objects in COntext (COCO) [19] and Multi30K [7].
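To make the framework concrete, here is a minimal sketch (not the authors' code) of the kind of cross-modal alignment the abstract describes: an image encoder and a text encoder project into one shared space, and a hinge-based triplet loss pulls matching image-caption pairs together. All module names, feature dimensions, and the margin value are illustrative assumptions; the report's exact architecture and loss are not specified in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalEmbedder(nn.Module):
    """Projects image features and (multilingual) text features
    into one shared embedding space. Dimensions are assumptions."""
    def __init__(self, img_dim=2048, txt_dim=300, joint_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, joint_dim)
        self.txt_proj = nn.Linear(txt_dim, joint_dim)

    def forward(self, img_feats, txt_feats):
        # L2-normalize so cosine similarity reduces to a dot product.
        img = F.normalize(self.img_proj(img_feats), dim=-1)
        txt = F.normalize(self.txt_proj(txt_feats), dim=-1)
        return img, txt

def triplet_loss(img, txt, margin=0.2):
    """Hinge-based triplet loss with in-batch hard negatives."""
    # Similarity matrix between all images and all captions in the batch;
    # the diagonal holds the matching (positive) pairs.
    scores = img @ txt.t()
    pos = scores.diag().unsqueeze(1)
    # Hardest negative caption for each image, and hardest
    # negative image for each caption (diagonal masked out).
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    neg_txt = scores.masked_fill(mask, -1e9).max(dim=1).values
    neg_img = scores.masked_fill(mask, -1e9).max(dim=0).values
    return (F.relu(margin + neg_txt.unsqueeze(1) - pos)
            + F.relu(margin + neg_img.unsqueeze(1) - pos)).mean()

# Toy usage: a batch of 8 image/caption feature pairs.
model = CrossModalEmbedder()
img_feats = torch.randn(8, 2048)  # stand-in for CNN image features
txt_feats = torch.randn(8, 300)   # stand-in for multilingual text embeddings
img, txt = model(img_feats, txt_feats)
loss = triplet_loss(img, txt)
loss.backward()
```

The in-batch hard-negative mining follows common practice in image-caption retrieval (e.g. VSE++-style objectives); it is a plausible stand-in here, not a detail confirmed by the abstract.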
Main file
egpaper_for_review.pdf (2 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02077556, version 1 (25-03-2019)

Identifiers

Cite

Maxime Portaz, Hicham Randrianarivo, Adrien Nivaggioli, Estelle Maudet, Christophe Servan, et al. Image search using multilingual texts: a cross-modal learning approach between image and text. [Research Report] Qwant Research. 2019. ⟨hal-02077556⟩
343 views
211 downloads
