UPMC/LIP6 at ImageCLEF's WikipediaMM: An Image-Annotation Model for an Image Search-Engine
Abstract
In this paper, we present the LIP6 retrieval system, which automatically ranks the images of a textual-visual collection by their similarity to a given query composed of textual and/or visual information. The system first preprocesses the data set to remove stop-words and non-informative terms. For each query, it then produces a ranked list of the most similar images using only their textual information. Visual features are then used to obtain a second ranked list from a manifold, and a linear combination of these two ranked lists gives the final ranking of images.
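To make the final fusion step concrete, the sketch below shows one way the linear combination of the textual and visual rankings could be implemented. It is only an illustration: the function name `combine_rankings`, the per-image score dictionaries, and the weight `alpha` are hypothetical, and the paper does not specify the exact weighting or normalization used.

```python
# Minimal sketch of the late-fusion step described in the abstract.
# All names (combine_rankings, text_scores, visual_scores, alpha) are
# illustrative assumptions, not the authors' actual implementation.
def combine_rankings(text_scores, visual_scores, alpha=0.5):
    """Linearly combine textual and visual similarity scores per image.

    text_scores, visual_scores: dicts mapping image id -> similarity score.
    alpha: weight of the textual score (1 - alpha goes to the visual score).
    Returns image ids sorted from most to least similar.
    """
    all_images = set(text_scores) | set(visual_scores)
    combined = {
        img: alpha * text_scores.get(img, 0.0)
             + (1.0 - alpha) * visual_scores.get(img, 0.0)
        for img in all_images
    }
    return sorted(combined, key=combined.get, reverse=True)


# Toy usage with three images and made-up scores.
text_scores = {"img1": 0.9, "img2": 0.4, "img3": 0.1}
visual_scores = {"img1": 0.2, "img2": 0.8, "img3": 0.5}
print(combine_rankings(text_scores, visual_scores, alpha=0.6))
```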