Conference Papers, Year: 2017

SCNet: Learning Semantic Correspondence

Abstract

This paper addresses the problem of establishing semantic correspondences between images depicting different instances of the same object or scene category. Previous approaches focus on either combining a spatial regularizer with hand-crafted features, or learning a correspondence model for appearance only. We propose instead a convolutional neural network architecture, called SCNet, for learning a geometrically plausible model for semantic correspondence. SCNet uses region proposals as matching primitives, and explicitly incorporates geometric consistency in its loss function. It is trained on image pairs obtained from the PASCAL VOC 2007 keypoint dataset, and a comparative evaluation on several standard benchmarks demonstrates that the proposed approach substantially outperforms both recent deep learning architectures and previous methods based on hand-crafted features.
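To illustrate the kind of geometry-aware matching score the abstract alludes to (appearance similarity between region proposals combined with a geometric-consistency term), here is a minimal NumPy sketch. It is not the authors' code: the function match_score, its offset-consensus term, and the parameter sigma are hypothetical and stand in for the geometric term SCNet learns end to end.

    import numpy as np

    def match_score(feat_a, feat_b, centers_a, centers_b, sigma=0.1):
        """feat_a: (N, D) L2-normalized proposal features from image A.
           feat_b: (M, D) L2-normalized proposal features from image B.
           centers_a, centers_b: (N, 2), (M, 2) normalized proposal centers.
           Returns an (N, M) score combining appearance and geometry."""
        # Appearance similarity: cosine similarity of proposal descriptors.
        app = feat_a @ feat_b.T                                  # (N, M)

        # Displacement implied by each putative match (center_b - center_a).
        offsets = centers_b[None, :, :] - centers_a[:, None, :]  # (N, M, 2)

        # Geometric consistency: reward matches whose displacement agrees
        # with the appearance-weighted consensus displacement of all matches.
        w = np.exp(app)
        consensus = (w[..., None] * offsets).sum(axis=(0, 1)) / w.sum()
        geom = np.exp(-np.linalg.norm(offsets - consensus, axis=-1) ** 2
                      / (2 * sigma ** 2))                         # (N, M)

        return app * geom

In SCNet itself the features are learned and the geometric term enters the training loss rather than being a fixed hand-designed consensus score; the sketch only conveys the general idea of weighting appearance matches by geometric agreement.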
Main file

SCNet_ICCV.pdf (5.68 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01576117 , version 1 (22-08-2017)

Identifiers

HAL Id: hal-01576117
DOI: 10.1109/ICCV.2017.203

Cite

Kai Han, Rafael S. Rezende, Bumsub Ham, Kwan-Yee K. Wong, Minsu Cho, et al. SCNet: Learning Semantic Correspondence. ICCV 2017 - International Conference on Computer Vision, Oct 2017, Venice, Italy. pp. 1849-1858, ⟨10.1109/ICCV.2017.203⟩. ⟨hal-01576117⟩