Temporal Video Indexing Based on Early Vision Using Laguerre Filters
Abstract
The visual information in video spans both spatial and temporal extents. Most video indexing techniques, however, operate in the spatial extent: spatial features are extracted from individual frames, and temporal information is introduced only afterwards, by tracking their evolution over time to construct motion vectors that serve as temporal features. In this paper we present a novel approach to video indexing based on features extracted directly from the temporal extent. The approach relies on the Laguerre filters of the Laguerre transform, a polynomial transform that preserves the causality constraint in the temporal domain and models the early stages of the visual system (areas V1 and MT) responsible for the extraction and representation of visual motion (temporal events). The motion pathway is constructed by subsampling spatially low-pass-filtered versions of the frames (spatial integration) and then decomposing the local temporal signal at each spatial position. Our results support the suitability of the model for video indexing and retrieval.
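To make the pipeline described above concrete, the following is a minimal sketch, not the authors' implementation. It assumes the standard causal discrete Laguerre filter bank (a first-order low-pass section followed by a cascade of all-pass sections, parameterized by a pole `a`) as the temporal decomposition, and simple block averaging as the spatial low-pass/subsampling step; the function names and parameter values are illustrative only.

```python
import numpy as np
from scipy.signal import lfilter

def laguerre_filter_bank(signal, order=4, a=0.8):
    """Project a causal 1-D signal onto discrete Laguerre filters.

    Uses the standard cascade
        L_0(z) = sqrt(1 - a^2) / (1 - a z^-1)
        L_k(z) = L_{k-1}(z) * (z^-1 - a) / (1 - a z^-1),
    which is causal and orthonormal for |a| < 1.
    Returns an array of shape (order + 1, len(signal)).
    """
    gain = np.sqrt(1.0 - a * a)
    outputs = []
    # First stage: first-order low-pass section.
    y = lfilter([gain], [1.0, -a], signal)
    outputs.append(y)
    # Higher-order channels: apply an all-pass section to the previous output.
    for _ in range(order):
        y = lfilter([-a, 1.0], [1.0, -a], y)
        outputs.append(y)
    return np.stack(outputs)

def temporal_laguerre_features(frames, block=8, order=4, a=0.8):
    """Hypothetical feature extractor: spatially integrate each frame by
    block averaging, then decompose the temporal signal at every
    subsampled position with the Laguerre filter bank."""
    T, H, W = frames.shape
    Hs, Ws = H // block, W // block
    lowpass = frames[:, :Hs * block, :Ws * block].reshape(
        T, Hs, block, Ws, block).mean(axis=(2, 4))
    feats = np.empty((order + 1, T, Hs, Ws), dtype=np.float64)
    for i in range(Hs):
        for j in range(Ws):
            feats[:, :, i, j] = laguerre_filter_bank(lowpass[:, i, j], order, a)
    return feats

if __name__ == "__main__":
    video = np.random.rand(64, 64, 64).astype(np.float32)  # toy T x H x W clip
    features = temporal_laguerre_features(video)
    print(features.shape)  # (5, 64, 8, 8): 5 temporal channels per position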