Conference paper, 2011

Toward a multi-speaker visual articulatory feedback system

Abstract

In this paper, we present recent developments on the HMM-based acoustic-to-articulatory inversion approach that we are developing for a "visual articulatory feedback" system. In this approach, multi-stream phoneme HMMs are trained jointly on synchronous streams of acoustic and articulatory data acquired by electromagnetic articulography (EMA). Acoustic-to-articulatory inversion is achieved in two steps. Phonetic and state decoding is performed first. Articulatory trajectories are then inferred from the decoded phone and state sequence using the maximum-likelihood parameter generation algorithm (MLPG). We introduce here a new procedure for the re-estimation of the HMM parameters, based on the minimum generation error (MGE) criterion. We also investigate the use of model adaptation techniques based on maximum likelihood linear regression (MLLR), as a first step toward a multi-speaker visual articulatory feedback system.
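The MLPG step named in the abstract can be made concrete with a small numerical sketch. The snippet below is a minimal, illustrative implementation, assuming observation vectors of static plus delta articulatory features, per-frame diagonal Gaussians read off the decoded state sequence, and a simple ±1-frame delta window; the paper's actual feature configuration (e.g., delta-delta features) and solver are not specified here, so these choices are assumptions.

```python
import numpy as np

def mlpg(means, variances):
    """Maximum-likelihood parameter generation (MLPG) for one feature dimension.

    means, variances: (T, 2) arrays of per-frame Gaussian statistics over
    [static, delta] features, read off the decoded HMM state sequence.
    Returns the static trajectory c (length T) that maximizes the likelihood
    under the delta constraint delta_t = 0.5 * (c_{t+1} - c_{t-1}).
    """
    T = means.shape[0]
    # W maps the static trajectory c to the stacked [static; delta] features.
    W = np.zeros((2 * T, T))
    for t in range(T):
        W[2 * t, t] = 1.0                  # static row: s_t = c_t
        if t > 0:                          # delta row, truncated at the edges
            W[2 * t + 1, t - 1] = -0.5
        if t < T - 1:
            W[2 * t + 1, t + 1] = 0.5
    mu = means.reshape(-1)                 # interleaved [s_0, d_0, s_1, d_1, ...]
    prec = 1.0 / variances.reshape(-1)     # diagonal precisions
    WtP = W.T * prec                       # W' * diag(prec)
    # Normal equations of the weighted least-squares problem:
    # (W' P W) c = W' P mu
    return np.linalg.solve(WtP @ W, WtP @ mu)
```

In practice this would be applied independently to each EMA coil coordinate; the band structure of W' P W also permits a much faster banded or Cholesky solve than the dense solve used above.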
Main file: aby_IS11.pdf (71.99 KB)
Origin: files produced by the author(s)

Dates and versions

hal-00618781, version 1 (05-09-2011)

Identifiers

  • HAL Id: hal-00618781, version 1

Cite

Atef Ben Youssef, Thomas Hueber, Pierre Badin, Gérard Bailly. Toward a multi-speaker visual articulatory feedback system. Interspeech 2011 - 12th Annual Conference of the International Speech Communication Association, Aug 2011, Florence, Italy. pp.589-592. ⟨hal-00618781⟩
376 views
269 downloads
