Retina enhanced bag of words descriptors for video classification - Archive ouverte HAL
Conference Paper, Year: 2014

Retina enhanced bag of words descriptors for video classification

Abstract

This paper addresses the task of detecting diverse semantic concepts in videos. Within this context, the Bag of Visual Words (BoW) model, inherited from the analysis of sampled video keyframes, is among the most popular methods. However, in the case of image sequences, this model faces new difficulties such as the added motion information, the extra computational cost and the increased variability of content and concepts to handle. Considering this spatio-temporal context, we propose to extend the BoW model by introducing video preprocessing strategies based on a retina model, applied before extracting BoW descriptors. This preprocessing increases the robustness of local features to disturbances such as noise and lighting variations. Additionally, the retina model is used to detect potentially salient areas and to construct spatio-temporal descriptors. We experiment with three state-of-the-art local features, SIFT, SURF and FREAK, and evaluate our results on the TRECVid 2012 Semantic Indexing (SIN) challenge.
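To illustrate the kind of pipeline the abstract describes, here is a minimal sketch that applies a retina model as a preprocessing step before extracting local features and building BoW histograms. It assumes opencv-contrib-python with the bioinspired retina module; the vocabulary size, the use of SIFT and of the parvocellular channel only, and all function names are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: retina preprocessing + SIFT Bag of Visual Words.
# Assumes opencv-contrib-python (cv2.bioinspired). Parameters are illustrative.
import cv2
import numpy as np


def retina_channels(frames):
    """Run each frame through the retina model and yield the parvocellular
    (detail-enhanced) and magnocellular (transient/motion) outputs."""
    h, w = frames[0].shape[:2]
    retina = cv2.bioinspired.Retina_create((w, h))
    for frame in frames:
        retina.run(frame)
        yield retina.getParvo(), retina.getMagno()


def build_vocabulary(frames, vocab_size=256):
    """Cluster SIFT descriptors from retina-filtered frames into a visual vocabulary."""
    sift = cv2.SIFT_create()
    trainer = cv2.BOWKMeansTrainer(vocab_size)
    for parvo, _magno in retina_channels(frames):
        gray = cv2.cvtColor(parvo, cv2.COLOR_BGR2GRAY) if parvo.ndim == 3 else parvo
        _, desc = sift.detectAndCompute(gray, None)
        if desc is not None:
            trainer.add(np.float32(desc))
    return trainer.cluster()  # vocab_size x 128 matrix of visual words


def bow_descriptor(parvo_frame, vocabulary):
    """Encode one retina-filtered frame as a BoW histogram over the vocabulary."""
    sift = cv2.SIFT_create()
    bow = cv2.BOWImgDescriptorExtractor(sift, cv2.BFMatcher(cv2.NORM_L2))
    bow.setVocabulary(vocabulary)
    gray = cv2.cvtColor(parvo_frame, cv2.COLOR_BGR2GRAY) if parvo_frame.ndim == 3 else parvo_frame
    keypoints = sift.detect(gray, None)
    return bow.compute(gray, keypoints)  # 1 x vocab_size histogram, or None if no keypoints
```

In this sketch the magnocellular output, which responds to transient (moving) areas, is made available but unused; a saliency-based variant could keep only keypoints falling on high-magno regions before histogramming, in the spirit of the salient-area descriptors mentioned above.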
No file deposited

Dates and versions

hal-01096114, version 1 (16-12-2014)

Identifiers

  • HAL Id: hal-01096114, version 1

Cite

Tiberius Strat, Alexandre Benoit, Patrick Lambert. Retina enhanced bag of words descriptors for video classification. EUSIPCO 2014, Sep 2014, Lisbon, Portugal. ⟨hal-01096114⟩