Conference paper, Year: 2022

DeTracker: A Joint Detection and Tracking Framework

Abstract

We propose a unified network for simultaneous detection and tracking. Instead of basing the tracking framework on object detections, we focus our work directly on tracklet detection while still obtaining object detections. We take advantage of the spatio-temporal information and features from 3D CNNs and output a series of bounding boxes and their corresponding identifiers with the use of Graph Convolutional Neural Networks. In contrast to traditional tracking-by-detection methods, the major advantages of our formulation are the creation of more reliable tracklets, the enforcement of temporal consistency, and the absence of a data association mechanism for a given set of frames. We introduce DeTracker, a truly joint detection and tracking network. We enforce intra-batch temporal consistency of features by applying a triplet loss over our tracklets, guiding the features of tracklets with different identities to be clustered separately in the feature space. Our approach is demonstrated on two different datasets, one of natural images and one of synthetic images, and we obtain 58.7% on MOT and 56.79% on a subset of the JTA dataset.
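To make the intra-batch triplet loss concrete, the sketch below is a minimal PyTorch-style illustration, not the authors' implementation. It assumes tracklet box embeddings of shape (N, D) pooled over a batch of frames and integer track identities per box (both hypothetical inputs), and applies a batch-hard triplet objective that pulls same-identity features together across frames and pushes different identities apart in the feature space.

    # Minimal sketch (assumed, not the authors' code) of an intra-batch
    # triplet loss over tracklet embeddings.
    import torch
    import torch.nn.functional as F

    def tracklet_triplet_loss(embeddings: torch.Tensor,
                              identities: torch.Tensor,
                              margin: float = 0.3) -> torch.Tensor:
        # embeddings: (N, D) features of boxes from a batch of frames.
        # identities: (N,) integer track IDs for each box (hypothetical labels).
        # Pairwise Euclidean distances between all embeddings in the batch.
        dist = torch.cdist(embeddings, embeddings, p=2)                # (N, N)
        same_id = identities.unsqueeze(0) == identities.unsqueeze(1)   # (N, N) bool
        eye = torch.eye(len(identities), dtype=torch.bool,
                        device=embeddings.device)

        # Batch-hard mining: hardest positive (same ID, farthest) and
        # hardest negative (different ID, closest) for each anchor.
        pos_dist = dist.masked_fill(~same_id | eye, float('-inf')).max(dim=1).values
        neg_dist = dist.masked_fill(same_id, float('inf')).min(dim=1).values

        # Standard margin-based triplet objective.
        return F.relu(pos_dist - neg_dist + margin).mean()

    # Usage with random stand-in data: 16 box embeddings, 4 identities.
    emb = torch.randn(16, 128)
    ids = torch.randint(0, 4, (16,))
    loss = tracklet_triplet_loss(emb, ids)

Under this formulation, embeddings sharing a track identity are driven closer than any embedding of a different identity by at least the margin, which is one plausible way to realize the separate clustering of identities described in the abstract.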
Main file: VISAPP.pdf (361.24 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03541517, version 1 (24-01-2022)

Identifiers

  • HAL Id: hal-03541517, version 1

Cite

Juan Diego Gonzales Zuniga, Ujjwal Ujjwal, Francois F Bremond. DeTracker: A Joint Detection and Tracking Framework. VISAPP 2022 - 17th International Conference on Computer Vision Theory and Applications, Feb 2022, online, France. ⟨hal-03541517⟩
