Spatially Localized Visual Dictionary Learning - Archive ouverte HAL
Conference paper Year: 2016

Spatially Localized Visual Dictionary Learning

Abstract

This paper addresses the challenge of devising new representation learning algorithms that overcome the lack of interpretability of classical visual models. To this end, it introduces a new recursive visual patch selection technique built on top of a Shared Nearest Neighbors embedding method. The main contribution of the paper is to drastically reduce the high dimensionality of such an over-complete representation through a recursive feature elimination method. We show that the number of spatial atoms of the representation can be reduced by up to two orders of magnitude without significantly degrading the encoded information. The resulting representations are shown to provide image classification performance competitive with the state of the art while enabling highly interpretable visual models to be learned.
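To make the pruning idea concrete, the sketch below illustrates recursive elimination of atoms from an over-complete image representation using scikit-learn's generic RFE with a linear SVM on synthetic data. It is an illustration only, not the authors' algorithm: the 5000-atom input, the 50-atom target, and the random features standing in for the Shared Nearest Neighbors embedding are all hypothetical.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 5000))   # hypothetical over-complete representation: one column per spatial atom
y = rng.integers(0, 10, 200)  # hypothetical image labels (10 classes)

# Recursively drop 10% of the atoms per iteration until only 50 remain,
# i.e. two orders of magnitude fewer than the initial 5000.
selector = RFE(LinearSVC(dual=False), n_features_to_select=50, step=0.1)
X_reduced = selector.fit_transform(X, y)

# Check that the pruned representation still supports classification.
score = cross_val_score(LinearSVC(dual=False), X_reduced, y, cv=3).mean()
print(X_reduced.shape, round(score, 3))

Running this prints the reduced shape (200, 50) together with a cross-validated accuracy, mirroring the paper's evaluation protocol of monitoring classification performance as atoms are eliminated.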
Main file: LeveauICMR2016.pdf (9.03 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01373778, version 1 (05-10-2016)

Identifiers

Cite

Valentin Leveau, Alexis Joly, Olivier Buisson, Patrick Valduriez. Spatially Localized Visual Dictionary Learning. ICMR: International Conference on Multimedia Retrieval, Jun 2016, New York, United States. pp.367-370, ⟨10.1145/2911996.2912070⟩. ⟨hal-01373778⟩
290 views
103 downloads
