Conference Papers, Year: 2006

Head and Facial Action Tracking: Comparison of two Robust Approaches

Abstract

In this work, we present a method that simultaneously tracks 3D head movements and facial actions, such as lip and eyebrow movements, in a video sequence. In a baseline framework, an adaptive appearance model is estimated online from a monocular video sequence. This method uses a 3D model of the face and an adaptive facial texture model. We then consider and compare two improved models designed to increase robustness to occlusions. In the first, robust statistics are used to downweight hidden regions and outlier pixels. In the second, mixture models provide a better handling of occlusions. Experiments demonstrate the benefit of the two robust models, which are compared under various occlusions.
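
The abstract refers to robust statistics that downweight hidden regions or outlier pixels during the online appearance-model update. The paper itself gives no code; the following is a minimal illustrative sketch in Python/NumPy, assuming a Huber-style M-estimator weight applied to normalized per-pixel texture residuals. The function name, tuning constant, and toy data are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def huber_weights(residuals, k=1.345):
    """Per-pixel Huber weights: 1 for small residuals, decaying as k/|r| beyond k.

    residuals are assumed to be normalized by a robust scale estimate;
    k = 1.345 is the usual Huber tuning constant.
    """
    a = np.abs(residuals)
    w = np.ones_like(a)
    outliers = a > k
    w[outliers] = k / a[outliers]
    return w

# Toy usage: residuals between an observed face patch and the current
# appearance model. Occluded pixels yield large residuals and therefore
# receive small weights in the online model update.
rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 1.0, size=100)
residuals[:10] += 8.0  # simulate an occluded region (outlier pixels)
mad = np.median(np.abs(residuals - np.median(residuals)))
scale = 1.4826 * mad   # robust (MAD-based) scale estimate
weights = huber_weights(residuals / scale)
print(weights[:10])    # occluded pixels receive weights well below 1
```

Under these assumptions, the weighted residuals would enter the appearance-model update in place of the raw ones, so occluded or corrupted pixels contribute less to the estimated texture.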
Main file: fg2006.pdf (2.7 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-00442753, version 1 (24-12-2009)

Identifiers

  • HAL Id: hal-00442753, version 1

Cite

Romain Hérault, Franck Davoine, Yves Grandvalet. Head and Facial Action Tracking: Comparison of two Robust Approaches. 7th IEEE International Conference on Automatic Face and Gesture Recognition, Apr 2006, Southampton, United Kingdom. pp. 287-292. ⟨hal-00442753⟩