Book chapter, 2016

A Global-Local Approach to Extracting Deformable Fashion Items from Web Images

Abstract

In this work we propose a new framework for extracting deformable clothing items from images using a three-stage global-local fitting procedure. First, a set of initial segmentation templates is generated from a handcrafted database. Then, each template initiates an object extraction process through a global alignment of the model, followed by a local search that minimizes a measure of misfit with respect to potential boundaries in its neighborhood. Finally, the results provided by each template are aggregated, using a global fitting criterion, to obtain the final segmentation. The method is validated on the Fashionista database and on a new database of manually segmented images. It compares favorably with the Paper Doll clothing parsing method and with the recent GrabCut in One Cut foreground extraction method. We quantitatively analyze the contribution of each component and show examples of both successful segmentations and difficult cases.
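
To make the three-stage procedure concrete, below is a minimal Python sketch of the template-driven global-local loop described in the abstract. It is not the authors' implementation: the centering-based alignment, the contour-vs-edge misfit measure, the small translation search, and the weighted-vote aggregation are simplified placeholders that only mirror the structure (template initialization, global alignment, local misfit minimization, aggregation), and the function names and the synthetic edge map in the demo are illustrative assumptions.

import numpy as np


def global_align(template_mask, image):
    """Placeholder global alignment: paste the (cropped) template mask at the
    image centre. This stub only reproduces the idea of placing the template
    model globally before local refinement."""
    h, w = image.shape[:2]
    t = template_mask[:h, :w]
    th, tw = t.shape
    aligned = np.zeros((h, w), dtype=bool)
    top, left = (h - th) // 2, (w - tw) // 2
    aligned[top:top + th, left:left + tw] = t
    return aligned


def boundary_misfit(mask, edges):
    """Placeholder misfit: fraction of the mask contour that does not lie on a
    detected edge pixel (lower is better)."""
    contour = (mask ^ np.roll(mask, 1, axis=0)) | (mask ^ np.roll(mask, 1, axis=1))
    n = contour.sum()
    return 1.0 if n == 0 else 1.0 - (contour & edges).sum() / n


def local_refine(mask, edges, radius=3):
    """Placeholder local search: try small translations and keep the one that
    minimizes the misfit with respect to nearby boundaries."""
    best, best_cost = mask, boundary_misfit(mask, edges)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
            cost = boundary_misfit(shifted, edges)
            if cost < best_cost:
                best, best_cost = shifted, cost
    return best, best_cost


def aggregate(candidates):
    """Placeholder aggregation: weight each candidate mask by how well it fit
    and keep the pixels supported by a majority of that weight."""
    weights = np.array([1.0 - cost for _, cost in candidates])
    stack = np.stack([m for m, _ in candidates]).astype(float)
    score = np.tensordot(weights, stack, axes=1) / max(weights.sum(), 1e-9)
    return score > 0.5


def extract_item(image, edges, templates):
    """Run the three stages for every template and fuse the results."""
    candidates = [local_refine(global_align(t, image), edges) for t in templates]
    return aggregate(candidates)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((64, 64))
    edges = image > 0.9                      # stand-in for a real edge map
    templates = [np.ones((20, 20), bool), np.ones((30, 15), bool)]
    print("segmented pixels:", int(extract_item(image, edges, templates).sum()))
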
Main file
yang16global-local.pdf (1.77 MB)
Origin: files produced by the author(s)

Dates and versions

hal-02435296 , version 1 (10-01-2020)

Identifiers

Cite

Lixuan Yang, Helena Rodriguez, Michel Crucianu, Marin Ferecatu. A Global-Local Approach to Extracting Deformable Fashion Items from Web Images. In: Enqing Chen, Yihong Gong, Yun Tie (eds.), Advances in Multimedia Information Processing - PCM 2016, Lecture Notes in Computer Science, vol. 10132, Springer, 2016, pp. 1-12, ISBN 978-3-319-48889-9. ⟨10.1007/978-3-319-48896-7_1⟩. ⟨hal-02435296⟩
