Journal article in Machine Vision and Applications, 2017

Real-time 3D motion capture by monocular vision and virtual rendering

Abstract

Networked 3D virtual environments allow multiple users to interact over the Internet by means of avatars and to get some feeling of virtual telepresence. However, avatar control may be tedious. Motion capture systems based on 3D sensors have reached the consumer market, but webcams remain more widespread and cheaper. This work aims at animating a user's avatar by real-time motion capture using a personal computer and a plain webcam. Following a classical model-based approach, we register a 3D articulated upper-body model onto video sequences and propose a number of heuristics to accelerate particle filtering while robustly tracking user motion. Describing the body pose by the 3D positions of the wrists rather than by joint angles allows depth ambiguities to be handled efficiently in probabilistic tracking. We demonstrate experimentally the robustness of our 3D body tracking by real-time monocular vision, even in the case of partial occlusions and motion in the depth direction.
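
For illustration only, the sketch below shows the generic structure of such a particle filter for per-frame pose tracking: pose hypotheses are diffused by a random-walk prediction model, weighted by an image likelihood, and resampled. All names and parameters (the pose dimension, the placeholder likelihood, the noise level) are assumptions made for this sketch; it does not reproduce the authors' acceleration heuristics or their registration of the articulated upper-body model.

# Minimal sketch of per-frame particle filtering for monocular pose
# tracking. All names and parameters below are illustrative assumptions,
# not the authors' implementation.
import numpy as np

N_PARTICLES = 200
POSE_DIM = 6          # e.g. 3D positions of both wrists (2 x 3 coordinates)
MOTION_NOISE = 0.02   # std. dev. of the random-walk prediction model

def image_likelihood(pose, observation):
    # Placeholder: a real tracker would render the articulated model at
    # `pose` and score its overlap with the current webcam frame.
    return np.exp(-np.sum((pose - observation) ** 2))

def track_frame(particles, observation, rng):
    # 1. Predict: diffuse each pose hypothesis with random-walk noise.
    particles = particles + rng.normal(0.0, MOTION_NOISE, particles.shape)
    # 2. Weight: evaluate the image likelihood of every hypothesis.
    weights = np.array([image_likelihood(p, observation) for p in particles])
    weights /= weights.sum()
    # 3. Estimate: weighted mean pose for this frame.
    estimate = weights @ particles
    # 4. Resample: draw hypotheses in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], estimate

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 0.1, (N_PARTICLES, POSE_DIM))
observation = np.zeros(POSE_DIM)   # stand-in for per-frame image features
particles, estimate = track_frame(particles, observation, rng)
print("estimated pose:", estimate)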

Dates and versions

hal-01630015, version 1 (07-11-2017)

Identifiers

Cite

David Antonio Gómez Jáuregui, Patrick Horain. Real-time 3D motion capture by monocular vision and virtual rendering. Machine Vision and Applications, 2017, 28 (8), pp. 839-858. ⟨10.1007/s00138-017-0861-3⟩. ⟨hal-01630015⟩