Contrastive Visual and Language Learning for Visual Relationship Detection
Abstract
Visual Relationship Detection (VRD) aims to understand interactions between real-world objects by grounding visual concepts to compositional visual relation triples, written in the form (subject, predicate, object). Previous work explored the use of contrastive learning to implicitly predict predicates (representing relations) from the relevant image regions. However, these models often directly exploit in-distribution spatial and language co-occurrence biases during training, preventing them from generalizing to out-of-distribution compositions. In this work, we examine whether contrastive vision and language models, pre-trained on large-scale external image and text datasets, can assist in detecting compositional visual relations. To this end, we propose a contrastive fine-tuning approach for the VRD task. Our results show that larger models yield better performance than their smaller counterparts, while models pre-trained on larger datasets do not necessarily perform best.
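To illustrate the general idea of contrastive fine-tuning described in the abstract, the following is a minimal sketch (not the authors' code) using a pre-trained CLIP model from Hugging Face Transformers. The prompt template, the `triple_to_prompt` helper, and the assumption that each training example pairs a region crop with a rendered relation triple are illustrative choices, not details taken from the paper.

```python
# Sketch: contrastive fine-tuning of a pre-trained vision-language model
# on VRD triples. Assumes a dataset yielding (region_crop, triple) pairs,
# e.g. a crop of the union of subject/object boxes with ("person",
# "rides", "horse"). All names below are hypothetical.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6)

def triple_to_prompt(subj, pred, obj):
    # Render a (subject, predicate, object) triple as a caption;
    # the exact template is an assumption for illustration.
    return f"a {subj} {pred} a {obj}"

def fine_tune_step(region_crops, triples):
    # region_crops: list of PIL images cropped to the relevant regions;
    # triples: list of (subject, predicate, object) tuples.
    texts = [triple_to_prompt(*t) for t in triples]
    batch = processor(text=texts, images=region_crops,
                      return_tensors="pt", padding=True)
    # return_loss=True makes CLIPModel compute the symmetric InfoNCE
    # loss over the in-batch image/text similarity matrix, so matching
    # crop/triple pairs are pulled together and mismatched ones apart.
    outputs = model(**batch, return_loss=True)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```

At inference time, the same similarity scores could rank candidate predicates for a given region pair, though the paper's exact prediction scheme is not specified here.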
Domains
Computer Science [cs]

Origin: Files produced by the author(s)