Preprint, Working Paper. Year: 2022

Weakly-supervised segmentation of referring expressions

Abstract

Visual grounding localizes regions (boxes or segments) in an image corresponding to given referring expressions. In this work we address image segmentation from referring expressions, a problem that has so far only been addressed in a fully-supervised setting. A fully-supervised setup, however, requires pixel-wise supervision and is hard to scale given the expense of manual annotation. We therefore introduce a new task of weakly-supervised image segmentation from referring expressions and propose Text grounded semantic SEGmentation (TSEG), which learns segmentation masks directly from image-level referring expressions without pixel-level annotations. Our transformer-based method computes patch-text similarities and guides the classification objective during training with a new multi-label patch assignment mechanism. The resulting visual grounding model segments image regions corresponding to given natural language expressions. TSEG demonstrates promising results for weakly-supervised referring expression segmentation on the challenging PhraseCut and RefCOCO datasets, and shows competitive performance when evaluated in a zero-shot setting for semantic segmentation on Pascal VOC.
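The core idea sketched in the abstract, computing patch-text similarities and pooling them into an image-level score that can be trained with a multi-label classification loss, can be illustrated with a short PyTorch snippet. This is a minimal sketch under stated assumptions, not the authors' implementation: the function name, tensor shapes, and temperature value are hypothetical, and simple max pooling stands in for TSEG's actual multi-label patch assignment mechanism, which is more involved.

```python
import torch
import torch.nn.functional as F

def image_level_scores(patch_feats, text_feats, temperature=0.07):
    """Hypothetical sketch of weakly-supervised text grounding.

    patch_feats: (B, N, D) patch embeddings from a vision transformer
    text_feats:  (C, D)    embeddings of C referring expressions
    Returns (B, C) image-level scores trainable with a multi-label
    loss, plus the (B, N, C) patch-text similarity map that can serve
    as a segmentation mask at inference time.
    """
    patch_feats = F.normalize(patch_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    # Cosine similarity between every patch and every expression.
    sim = patch_feats @ text_feats.t() / temperature  # (B, N, C)
    # Pool patch scores into one score per expression; max pooling is
    # a stand-in for TSEG's learned multi-label patch assignment.
    scores = sim.max(dim=1).values  # (B, C)
    return scores, sim

# Training uses only image-level labels: targets[b, c] = 1 if
# expression c refers to something present in image b.
B, N, D, C = 2, 196, 512, 4
scores, sim = image_level_scores(torch.randn(B, N, D), torch.randn(C, D))
targets = torch.rand(B, C).round()
loss = F.binary_cross_entropy_with_logits(scores, targets)
```

At inference, thresholding the per-patch similarity map `sim` for a given expression yields a coarse segmentation without any pixel-level supervision ever being used in training, which is the premise of the weakly-supervised task the paper introduces.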
Main file
wseg_arxiv.pdf (7.85 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04071277, version 1 (17-04-2023)

Identifiers

  • HAL Id: hal-04071277, version 1

Cite

Robin Strudel, Ivan Laptev, Cordelia Schmid. Weakly-supervised segmentation of referring expressions. 2022. ⟨hal-04071277⟩
