Fast Pixelwise Adaptive Visual Tracking of Non-Rigid Objects
Abstract
In this paper, we present a new algorithm for real-time single-object tracking in videos in unconstrained environments. The algorithm comprises two components that are trained in a one-shot manner on the first video frame: a detector based on the generalized Hough transform with color and gradient descriptors, and a probabilistic segmentation method based on global models of the foreground and background color distributions. Both components operate at the pixel level and are used jointly for tracking, adapting each other in a co-training manner. Moreover, we propose an adaptive shape model as well as a new probabilistic method for updating the scale of the tracker. Through effective model adaptation and segmentation, the algorithm is able to track objects that undergo rigid and non-rigid deformations as well as considerable shape and appearance variations. The proposed tracking method has been thoroughly evaluated on challenging benchmarks and outperforms state-of-the-art tracking methods designed for the same task. Finally, a very efficient implementation of the proposed models enables extremely fast tracking.
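To make the segmentation component more concrete, the following is a minimal sketch, not the authors' implementation, of a pixelwise Bayesian foreground/background segmentation using global color histograms as the distribution models. The histogram bin count (`N_BINS`), the uniform prior, and the function names are illustrative assumptions.

```python
# Illustrative sketch (assumed details, not the paper's code): per-pixel
# P(foreground | color) from two global RGB color histograms via Bayes' rule.
import numpy as np

N_BINS = 16  # assumed quantization level per color channel


def color_histogram(pixels: np.ndarray) -> np.ndarray:
    """Normalized joint RGB histogram of an (N, 3) uint8 pixel array."""
    idx = (pixels.astype(int) // (256 // N_BINS))
    flat = idx[:, 0] * N_BINS * N_BINS + idx[:, 1] * N_BINS + idx[:, 2]
    hist = np.bincount(flat, minlength=N_BINS ** 3).astype(float)
    return hist / max(hist.sum(), 1.0)


def foreground_probability(image: np.ndarray,
                           hist_fg: np.ndarray,
                           hist_bg: np.ndarray,
                           prior_fg: float = 0.5) -> np.ndarray:
    """Per-pixel posterior P(fg | color) for an (H, W, 3) uint8 image."""
    idx = (image.astype(int) // (256 // N_BINS))
    flat = idx[..., 0] * N_BINS * N_BINS + idx[..., 1] * N_BINS + idx[..., 2]
    num = hist_fg[flat] * prior_fg          # P(color | fg) * P(fg)
    den = num + hist_bg[flat] * (1.0 - prior_fg)
    # Fall back to the prior where the color was unseen in both models.
    return np.where(den > 0, num / np.maximum(den, 1e-12), prior_fg)
```

In this sketch, the foreground histogram would be built from the pixels inside the initial bounding box of the first frame and the background histogram from a surrounding region; the resulting per-pixel probability map is the kind of soft segmentation that can then interact with a pixel-level Hough-voting detector.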