Conference paper - Year: 2009

Exploiting Visual Concepts to Improve Text-Based Image Retrieval

Abstract

In this paper, we study how to automatically exploit visual concepts in a text-based image retrieval task. First, we use a Forest of Fuzzy Decision Trees (FFDTs) to automatically annotate images with visual concepts. Second, optionally using WordNet, we match the visual concepts against the textual query. Finally, we filter the text-based image retrieval result list using the FFDT annotations. This study is performed in the context of two tasks of the CLEF 2008 international campaign: the Visual Concept Detection Task (VCDT, 17 visual concepts) and the photographic retrieval task (ImageCLEFphoto, 39 queries and 20,000 images). Our best VCDT run ranks 4th among the 53 submitted runs. The ImageCLEFphoto results show a clear improvement in precision at 20 when the visual concepts explicitly appearing in the query are used.
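As a rough illustration of the filtering step described above, the sketch below shows how visual concepts matched in a query could be used to filter a text-based result list. This is a minimal Python sketch under assumed data structures (a concept vocabulary and per-image concept scores standing in for the FFDT outputs); the function names, data layout, and threshold are hypothetical and are not taken from the paper.

    # Hypothetical illustration of concept matching and result-list filtering.
    def match_concepts(query_terms, concept_vocabulary):
        """Return the visual concepts explicitly appearing in the query."""
        return [c for c in concept_vocabulary if c in query_terms]

    def filter_results(ranked_image_ids, concept_scores, concepts, threshold=0.5):
        """Keep images whose predicted score for every matched concept exceeds
        the threshold, preserving the original text-based ranking."""
        def keeps(image_id):
            return all(concept_scores[image_id].get(c, 0.0) >= threshold
                       for c in concepts)
        return [img for img in ranked_image_ids if keeps(img)]

    # Toy usage example (all values invented for illustration).
    vocabulary = ["sky", "water", "person", "night"]
    query = ["beach", "water", "sunset"]
    scores = {
        "img1": {"water": 0.9, "sky": 0.8},
        "img2": {"water": 0.2, "person": 0.7},
    }
    matched = match_concepts(query, vocabulary)                # ["water"]
    print(filter_results(["img1", "img2"], scores, matched))   # ["img1"]

In the paper's setting, the concept scores would come from the FFDT annotations and the matching step could additionally expand query terms through WordNet rather than requiring an exact string match.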

Dates and versions

hal-00402448 , version 1 (07-07-2009)

Identifiers

Cite

Sabrina Tollari, Marcin Detyniecki, Christophe Marsala, Ali Fakeri-Tabrizi, Massih-Reza Amini, et al.. Exploiting Visual Concepts to Improve Text-Based Image Retrieval. European Conference on Information Retrieval (ECIR), Apr 2009, Toulouse, France. pp.701 - 705, ⟨10.1007/978-3-642-00958-7_70⟩. ⟨hal-00402448⟩