ATLAS: adaptive single object tracking using offline-learned motion and visual patterns
Abstract
In this paper we introduce ATLAS, a novel generic single object tracker based on two convolutional neural networks (CNNs) trained offline. The key principle is to alternate between tracking using motion information and predicting the object's location over time from visual similarity. The proposed tracker uses a regression-based approach to learn, offline, generic relationships between an object's appearance and its associated motion patterns. Then, by continuously updating the target appearance model, the system adaptively adjusts the position, size, and shape of the object's bounding box. Starting from the initial candidate location estimated from the motion patterns, the object's position is successively shifted within the context search area according to a patch similarity function that requires no manually designed features. The final track location corresponds to the instance that yields the maximum similarity value. The experimental evaluation, performed on the challenging datasets of the 2016 Visual Object Tracking (VOT) challenge (http://www.votchallenge.net/), demonstrates the performance of our technique compared with state-of-the-art methods. Our tracker runs at more than 20 fps using generic motion and visual patterns.
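To make the alternation described above concrete, the sketch below outlines a single tracking step under our own assumptions. It is illustrative only, not the authors' implementation: track_frame, crop, motion_net, similarity_net, and all parameter values (number of shifts, jitter scale) are hypothetical stand-ins; in ATLAS itself the two components are the offline-trained CNNs, not the dummy functions used here.

import numpy as np


def crop(frame, box):
    """Extract the patch covered by box = (x, y, w, h), clipped to the frame."""
    x, y, w, h = (int(round(v)) for v in box)
    x0, y0 = max(x, 0), max(y, 0)
    return frame[y0:y0 + max(h, 1), x0:x0 + max(w, 1)]


def track_frame(frame, prev_box, motion_net, similarity_net, template,
                n_shifts=16, shift_scale=2.0, rng=None):
    """One ATLAS-style tracking step (illustrative sketch only).

    motion_net:     stand-in for the offline-trained regression CNN mapping the
                    previous appearance patch to a box offset (dx, dy, dw, dh).
    similarity_net: stand-in for the offline-trained CNN scoring how similar a
                    candidate patch is to the target appearance template.
    """
    rng = rng or np.random.default_rng(0)

    # 1) Motion step: regress an initial candidate box from the previous patch.
    candidate = prev_box + motion_net(crop(frame, prev_box))

    # 2) Visual step: shift the candidate within the context search area and
    #    keep the shift with the highest patch similarity to the template.
    best_box = candidate
    best_score = similarity_net(crop(frame, candidate), template)
    for _ in range(n_shifts):
        box = candidate + rng.normal(scale=shift_scale, size=4)  # jitter x, y, w, h
        score = similarity_net(crop(frame, box), template)
        if score > best_score:
            best_box, best_score = box, score

    # 3) The appearance template would be updated here from best_box, so the
    #    model can follow changes in the target's size and shape.
    return best_box


if __name__ == "__main__":
    frame = np.zeros((240, 320, 3), dtype=np.float32)
    template = np.zeros((32, 32, 3), dtype=np.float32)
    # Dummy stand-ins: zero predicted motion; similarity as negative
    # mean-squared difference between the patch corner and the template.
    motion_net = lambda patch: np.zeros(4)
    similarity_net = lambda patch, tmpl: (-float(np.mean((patch[:32, :32] - tmpl) ** 2))
                                          if patch.shape[0] >= 32 and patch.shape[1] >= 32
                                          else -np.inf)
    box = track_frame(frame, np.array([100.0, 80.0, 40.0, 40.0]),
                      motion_net, similarity_net, template)
    print("tracked box (x, y, w, h):", box)

The structure mirrors the abstract: the motion regressor proposes a coarse candidate box, the similarity score selects the best local shift within the search area, and the maximum-similarity instance becomes the track location for the frame.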