Conference Paper, Year: 2022

Contrastive Visual and Language Learning for Visual Relationship Detection

Abstract

Visual Relationship Detection (VRD) aims to understand real-world objects' interactions by grounding visual concepts to compositional visual relation triples, written in the form (subject, predicate, object). Previous work explored the use of contrastive learning to implicitly predict predicates (representing relations) from the relevant image regions. However, these models often directly leverage in-distribution spatial and language co-occurrence biases during training, preventing them from generalizing to out-of-distribution compositions. In this work, we examine whether contrastive vision and language models, pre-trained on large-scale external image and text datasets, can assist the detection of compositional visual relations. To this end, we propose a contrastive fine-tuning approach for the VRD task. Our results show that larger models outperform their smaller counterparts, while models pre-trained on larger datasets do not necessarily yield the best performance.
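To make the fine-tuning idea concrete, the following is a minimal sketch, not the authors' published code. It assumes a CLIP backbone from Hugging Face transformers (the paper compares several pre-trained vision-language models), with crops of the subject-object union region contrastively matched against verbalized triples using the symmetric InfoNCE loss from CLIP pre-training. The model name, cropping scheme, and learning rate are illustrative assumptions.

```python
# Sketch of contrastive fine-tuning for VRD; details are assumptions,
# not the paper's exact setup.
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6)

def contrastive_step(region_crops, triples):
    """One InfoNCE step: image regions vs. verbalized relation triples.

    region_crops: list of PIL images (e.g., the union box of subject
                  and object, an illustrative choice)
    triples: list of (subject, predicate, object) strings, aligned
             index-for-index with region_crops
    """
    texts = [f"{s} {p} {o}" for s, p, o in triples]
    inputs = processor(text=texts, images=region_crops,
                       return_tensors="pt", padding=True)
    outputs = model(**inputs)
    # Similarity logits between every region and every triple text;
    # the matching (region, triple) pair sits on the diagonal.
    logits = outputs.logits_per_image
    targets = torch.arange(len(triples))
    # Symmetric contrastive loss, as in CLIP pre-training.
    loss = (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference, predicate prediction can then be cast as retrieval: score a region crop against candidate triples that share the detected subject and object, and take the highest-scoring predicate.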
Main file
2022.alta-1.23.pdf (413.6 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04216168, version 1 (23-09-2023)

Identifiers

  • HAL Id: hal-04216168, version 1

Cite

Thanh Tran, Maëlic Neau, Paulo E. Santos, David Powers. Contrastive Visual and Language Learning for Visual Relationship Detection. The 20th Annual Workshop of the Australasian Language Technology Association, Australasian Language Technology Association, Dec 2022, Adelaide, SA, Australia. pp.170-177. ⟨hal-04216168⟩