Conference paper, 2022

When does CLIP generalize better than unimodal models? When judging human-centric concepts

Abstract

CLIP, a vision-language network trained with a multimodal contrastive learning objective on a large dataset of images and captions, has demonstrated impressive zero-shot ability on various tasks. However, recent work has shown that, compared to unimodal (purely visual) networks, CLIP's multimodal training does not improve generalization (e.g., few-shot or transfer learning) on standard visual classification tasks such as object, street-number, or animal recognition. Here, we hypothesize that CLIP's improved unimodal generalization abilities may be most prominent in domains that involve human-centric concepts (cultural, social, aesthetic, affective...); this is because CLIP's training dataset is mainly composed of image annotations made by humans for other humans. To evaluate this, we use three tasks that require judging human-centric concepts: sentiment analysis on tweets, and genre classification on books and on movies. We introduce and publicly release a new multimodal dataset for movie genre classification. We compare CLIP's visual stream against two visually trained networks and CLIP's textual stream against two linguistically trained networks, as well as multimodal combinations of these networks. We show that CLIP generally outperforms the other networks, whether using one or two modalities. We conclude that CLIP's multimodal training is beneficial for both unimodal and multimodal tasks that require classification of human-centric concepts.
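As a concrete illustration of this evaluation setting (a minimal sketch, not the paper's released code), one can treat CLIP's textual stream as a frozen feature extractor and fit a simple linear probe for a human-centric task such as tweet sentiment. The HuggingFace checkpoint name, the toy data, and the classifier choice below are assumptions for illustration only.

```python
# Minimal sketch of a few-shot evaluation: a linear probe trained on
# frozen CLIP text embeddings. Checkpoint, data, and hyperparameters
# are illustrative assumptions, not the authors' exact pipeline.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import CLIPModel, CLIPTokenizer

MODEL_NAME = "openai/clip-vit-base-patch32"  # assumed checkpoint
model = CLIPModel.from_pretrained(MODEL_NAME).eval()
tokenizer = CLIPTokenizer.from_pretrained(MODEL_NAME)

def embed_texts(texts):
    """Encode strings with CLIP's frozen textual stream."""
    inputs = tokenizer(texts, padding=True, truncation=True,
                       return_tensors="pt")
    with torch.no_grad():
        return model.get_text_features(**inputs).numpy()

# Placeholder few-shot data: (tweet, sentiment) pairs.
train_texts = ["what a wonderful day", "this was a terrible idea"]
train_labels = [1, 0]  # 1 = positive, 0 = negative

# Generalization is then measured by how well a simple classifier
# separates the classes in the frozen embedding space.
probe = LogisticRegression(max_iter=1000)
probe.fit(embed_texts(train_texts), train_labels)
print(probe.predict(embed_texts(["i loved every minute of it"])))
```

The same recipe applies to the visual stream (via the analogous get_image_features method), which is how unimodal streams from different networks can be compared on an equal footing.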
Main file: 2022.repl4nlp-1.4.pdf (795.3 KB)
Origin: publisher files authorized on an open archive

Dates and versions

hal-03874203, version 1 (27-11-2022)

Cite

Romain Bielawski, Benjamin Devillers, Tim van de Cruys, Rufin VanRullen. When does CLIP generalize better than unimodal models? When judging human-centric concepts. 7th Workshop on Representation Learning for NLP (RepL4NLP 2022), ACL Special Interest Group on Representation Learning (SIGREP), May 2022, Dublin, Ireland. pp. 29-38, ⟨10.18653/v1/2022.repl4nlp-1.4⟩. ⟨hal-03874203⟩