Journal article. EURASIP Journal on Audio, Speech, and Music Processing, 2009

Lip-synching using speaker-specific articulation, shape and appearance models

Abstract

We describe here the control, shape and appearance models that are built using an original photogrammetric method to capture the characteristics of speaker-specific facial articulation, anatomy, and texture. Two original contributions are put forward: a trainable trajectory formation model that predicts articulatory trajectories of a talking face from phonetic input, and a texture model that computes a texture for each 3D facial shape according to articulation. Using motion capture data from different speakers and module-specific evaluation procedures, we show that this cloning system restores detailed idiosyncrasies and the global coherence of visible articulation. Results of a subjective evaluation of the complete system against competing trajectory formation models are also presented and discussed.
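
As an illustration of the kind of articulation-to-texture mapping described above, the following minimal sketch fits a linear texture model by least squares and predicts a texture from a vector of articulatory parameters. This is a generic, hypothetical example rather than the authors' implementation: the linear model, variable names, and array sizes are assumptions made for illustration only.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical motion-capture training data (stand-ins for real recordings):
#   A: articulatory parameters per frame (n_frames x n_params)
#   T: flattened textures per frame      (n_frames x n_pixels)
n_frames, n_params, n_pixels = 200, 6, 4096
A = rng.normal(size=(n_frames, n_params))
T = rng.normal(size=(n_frames, n_pixels))

# Fit a linear texture model T ~ T_mean + A @ W by least squares.
T_mean = T.mean(axis=0)
W, *_ = np.linalg.lstsq(A, T - T_mean, rcond=None)

def synthesize_texture(articulation: np.ndarray) -> np.ndarray:
    """Predict a texture for one frame from its articulatory parameters."""
    return T_mean + articulation @ W

# Usage: predict the texture for a new articulatory configuration.
texture = synthesize_texture(rng.normal(size=n_params))
print(texture.shape)  # (4096,)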
Main file
proof_jasmp_GB09_v2.pdf (954.93 KB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-00447061, version 1 (14-01-2010)

Identifiers

Cite

Gérard Bailly, Oxana Govokhina, Frédéric Elisei, Gaspard Breton. Lip-synching using speaker-specific articulation, shape and appearance models. EURASIP Journal on Audio, Speech, and Music Processing, 2009, Special issue on animating virtual speakers or singers from audio: Lip-synching facial animation, Article ID 769494. ⟨10.1155/2009/769494⟩. ⟨hal-00447061⟩
