Markerless 3D Human Pose Tracking in the Wild with Fusion of Multiple Depth Cameras: Comparative Experimental Study with Kinect 2 and 3
Abstract
Human-robot interaction requires a robust estimate of human motion in real time. This work presents a fusion algorithm for tracking joint center positions from multiple depth cameras, with the aim of improving the accuracy of human motion analysis. The main contribution is a sensor-independent algorithm that fuses body tracking measurements using an extended Kalman filter and anthropomorphic constraints. As an illustration of this algorithm, the paper directly compares joint center positions estimated with a reference stereophotogrammetric system against those estimated with the new Kinect 3 (Azure Kinect) sensor and its predecessor, the Kinect 2 (Kinect for Windows). The experiment was conducted in two parts, one for each Kinect model, comparing the raw body tracking data of two Kinects placed on either side with the data merged by the proposed algorithm. The proposed approach improves the body tracker data for the Kinect 3, whose characteristics differ from those of the Kinect 2. This study also shows the importance of defining good heuristics for merging data, depending on how the body tracking works. With proper heuristics, the joint center position estimates are improved by at least 14.6 %. Finally, we propose an additional comparison between the Kinect 2 and Kinect 3, exhibiting the pros and cons of the two sensors.
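To make the kind of per-joint fusion described above more concrete, the sketch below shows a minimal linear Kalman filter that merges one joint's position as seen by two cameras, inflating the measurement noise when a camera's tracking confidence is low. This is only an illustrative simplification, not the authors' method: it uses a constant-velocity model, omits the extended (nonlinear) formulation and the anthropomorphic constraints, and all parameter values, the `confidence` heuristic, and the camera readings are assumptions.

```python
import numpy as np

class JointKalmanFilter:
    """Constant-velocity Kalman filter for one joint center (x, y, z)."""

    def __init__(self, dt=1 / 30, q=1e-3, r=5e-3):
        # State: [x, y, z, vx, vy, vz]
        self.x = np.zeros(6)
        self.P = np.eye(6)
        # Constant-velocity transition model
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)
        self.Q = q * np.eye(6)                          # process noise (tuning assumption)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.R_base = r * np.eye(3)                     # nominal measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, confidence=1.0):
        """Fuse one camera's joint measurement; low confidence inflates R."""
        R = self.R_base / max(confidence, 1e-3)
        y = z - self.H @ self.x                         # innovation
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P


# Example frame: the same joint observed by two side-mounted cameras
# (hypothetical measurements in meters, already in a common frame).
kf = JointKalmanFilter()
cam_a = np.array([0.10, 1.20, 2.00])
cam_b = np.array([0.12, 1.18, 2.05])
kf.predict()
kf.update(cam_a, confidence=1.0)    # heuristic: joint fully tracked
kf.update(cam_b, confidence=0.4)    # heuristic: joint inferred / partly occluded
print(kf.x[:3])                     # fused joint center estimate
```

In this toy setup, the measurements are processed sequentially per frame, so each camera contributes in proportion to its assumed reliability; a full system would additionally register the cameras into a common frame and enforce segment-length (anthropomorphic) constraints, as the paper describes.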