Conference Paper, Year: 2013

Visual Concept Detection and Annotation via Multiple Kernel Learning of multiple models

Abstract

This paper presents a multi-model framework for the Visual Concept Detection and Annotation (VCDA) task based on Multiple Kernel Learning (MKL). Discriminative visual features are extracted to build visual kernels, while the tags associated with the images are used to build textual kernels. Finally, to benefit from both the visual and textual models, fusion is carried out efficiently by embedding both sets of kernels in MKL. Traditionally, a term-frequency model is used to capture this textual information. Its shortcoming is that performance depends heavily on how the dictionary is constructed, and valuable semantic information cannot be captured. To address this, we propose a textual feature construction approach based on WordNet distance. The advantages of this approach are threefold: (1) it is robust, because the feature construction does not depend on a dictionary; (2) it captures semantic information in the tags that a term-frequency model can hardly describe; (3) it efficiently fuses the visual and textual models. Experimental results on ImageCLEF 2011 show that our approach effectively improves recognition accuracy.
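To make the kernel construction concrete, below is a minimal sketch (not the authors' implementation) of a WordNet-distance textual feature and a weighted kernel combination, using Python with NLTK and NumPy. The concept list, tag lists, placeholder visual kernel, and fixed kernel weights are all illustrative assumptions; in an MKL setting the weights would be learned rather than fixed.

```python
import numpy as np
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def tag_concept_similarity(tag, concept):
    """Max WordNet path similarity between any synset pair of a tag and a concept."""
    sims = [s1.path_similarity(s2) or 0.0
            for s1 in wn.synsets(tag)
            for s2 in wn.synsets(concept)]
    return max(sims, default=0.0)

def textual_feature(tags, concepts):
    """One WordNet-distance feature per target concept (no fixed dictionary needed)."""
    return np.array([max((tag_concept_similarity(t, c) for t in tags), default=0.0)
                     for c in concepts])

def linear_kernel(X):
    return X @ X.T

# --- toy usage (hypothetical data) -----------------------------------------
concepts = ["dog", "beach", "car"]                 # illustrative concept list
image_tags = [["puppy", "sand"], ["automobile"]]   # tags attached to two images

X_text = np.vstack([textual_feature(t, concepts) for t in image_tags])
K_text = linear_kernel(X_text)

# A visual kernel would come from visual descriptors (e.g. a bag-of-features
# representation); a same-sized identity matrix stands in for it here.
K_visual = np.eye(len(image_tags))

# MKL learns the combination weights; a fixed convex combination stands in for that.
beta = np.array([0.6, 0.4])
K_combined = beta[0] * K_visual + beta[1] * K_text
```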

Dates and versions

hal-01339303 , version 1 (29-06-2016)

Identifiers

Cite

Yu Zhang, Stéphane Bres, Liming Chen. Visual Concept Detection and Annotation via Multiple Kernel Learning of multiple models. The International Conference on Image Analysis and Processing (ICIAP 2013), Sep 2013, Naples, Italy. pp.581-590, ⟨10.1007/978-3-642-41184-7_59⟩. ⟨hal-01339303⟩