UPMC/LIP6 at ImageCLEFphoto 2008: on the exploitation of visual concepts (VCDT)
Abstract
In this paper, we focus our efforts on the study of how to automatically extract and exploit visual concepts. First, in the Visual Concept Detection Task (VCDT), we look at the mutual exclusion and implication relations between VCDT concepts in order to improve the automatic image annotation by Forests of Fuzzy Decision Trees (FFDTs). In our experiments, the use of these relations neither improves nor worsens the quality of the annotation. Our best VCDT run ranked 4th out of 53 submitted runs (3rd team out of 11 teams). Second, in the Photo Retrieval Task (ImageCLEFphoto), we use the FFDTs learned in the VCDT task together with WordNet to improve image retrieval. We analyse the influence of the extracted visual concept models on diversity and precision. This study shows a clear improvement, in terms of precision at 20 or cluster recall at 20, when using the visual concepts that appear explicitly in the query.
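To make the idea of exploiting exclusion and implication relations concrete, the following is a minimal sketch, not the authors' exact method: it assumes hypothetical per-concept scores in [0, 1] produced by the FFDTs, illustrative relation lists, and simple resolution rules (suppress the weaker of two mutually exclusive concepts, propagate confidence along an implication).

```python
# Minimal illustrative sketch: post-processing per-concept scores with
# mutual-exclusion and implication relations. The scores, relation pairs,
# and resolution rules below are assumptions for illustration only.

def apply_relations(scores, exclusions, implications):
    """scores: dict concept -> confidence in [0, 1].
    exclusions: pairs (a, b) that cannot both hold (e.g. 'day' / 'night').
    implications: pairs (a, b) meaning 'a implies b' (e.g. 'tree' -> 'vegetation')."""
    adjusted = dict(scores)

    # Mutual exclusion: if both concepts are detected, keep the more
    # confident one and suppress the other.
    for a, b in exclusions:
        if adjusted.get(a, 0.0) > 0.5 and adjusted.get(b, 0.0) > 0.5:
            weaker = a if adjusted[a] < adjusted[b] else b
            adjusted[weaker] = 1.0 - adjusted[weaker]

    # Implication: the implied concept is at least as confident as its premise.
    for a, b in implications:
        adjusted[b] = max(adjusted.get(b, 0.0), adjusted.get(a, 0.0))

    return adjusted


if __name__ == "__main__":
    ffdt_scores = {"day": 0.8, "night": 0.6, "tree": 0.7, "vegetation": 0.4}
    print(apply_relations(ffdt_scores,
                          exclusions=[("day", "night")],
                          implications=[("tree", "vegetation")]))
```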