Conference paper, 2012

Multimodal feature generation framework for semantic image classification

Abstract

The automatic attribution of semantic labels to unlabeled or weakly labeled images has received considerable attention but, given the complexity of the problem, remains a hard research topic. Here we propose a unified classification framework that mixes textual and visual information in a seamless manner. Unlike most recent work, computer vision techniques serve as inspiration for processing the textual information. To do so, we consider two complementary types of tag similarity, computed respectively from a conceptual hierarchy and from data collected from a photo-sharing platform. Visual content is processed using recent techniques for bag-of-visual-words feature generation. A central contribution of our work is to perform the coding step of the general bag-of-words framework with such similarities and to aggregate the resulting tag codes by max-pooling into a single representative vector (signature). Final image annotations are obtained via late fusion, where the three modalities (two text-based and one visual) are merged during the classification step. Experimental results on the Pascal VOC 2007 and MIR Flickr datasets show an improvement over state-of-the-art methods, while significantly decreasing the computational complexity of the learning system.
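
To make the coding-and-pooling idea concrete, the sketch below encodes each tag of an image against a tag codebook via a similarity function and max-pools the codes into one signature. All names here (tag_signature, the exact-match similarity) are illustrative assumptions, not the paper's implementation; in the paper the similarities come from a conceptual hierarchy and from Flickr data, and the visual modality is handled separately.

    import numpy as np

    def tag_signature(image_tags, codebook, similarity):
        # Coding step: one K-dimensional code per tag, where entry k
        # is the similarity between the tag and the k-th codebook tag.
        if not image_tags:
            return np.zeros(len(codebook))
        codes = np.array([[similarity(t, c) for c in codebook]
                          for t in image_tags])
        # Pooling step: max-pooling keeps, for each codebook tag, the
        # strongest response over all tags attached to the image.
        return codes.max(axis=0)

    # Toy usage with a hypothetical exact-match similarity; the paper
    # instead uses hierarchy-based and Flickr-based similarities.
    codebook = ["animal", "dog", "beach", "sunset"]
    exact = lambda a, b: 1.0 if a == b else 0.0
    print(tag_signature(["dog", "sunset"], codebook, exact))
    # -> [0. 1. 0. 1.]

In the full framework, the signature produced for each textual similarity would feed its own classifier, and the three modality scores would then be merged by late fusion at the classification step.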
No file deposited

Dates and versions

hal-00825190, version 1 (23-05-2013)

Identifiers

  • HAL Id: hal-00825190, version 1

Cite

Amel Znaidia, Aymen Shabou, Adrian Popescu, Hervé Le Borgne, Céline Hudelot. Multimodal feature generation framework for semantic image classification. ICMR '12: Proceedings of the 2nd ACM International Conference on Multimedia Retrieval, Hong Kong SAR China, 2012, article no. 38. ⟨hal-00825190⟩