Multimodal HMM-based NAM-to-speech conversion - Archive ouverte HAL
Conference paper, Year: 2009

Multimodal HMM-based NAM-to-speech conversion

Abstract

Although the segmental intelligibility of speech converted from silent speech using the direct signal-to-signal mapping proposed by Toda et al. [1] is quite acceptable, listeners sometimes have difficulty chunking the speech continuum into meaningful words because the output signals provide incomplete phonetic cues. This paper studies another approach, which combines HMM-based statistical speech recognition and synthesis techniques, together with training on aligned corpora, to convert silent speech into audible voice. By introducing phonological constraints, such systems are expected to improve the phonetic consistency of the output signals. Facial movements are used to improve the performance of both the recognition and synthesis procedures. The results show that including these movements improves the recognition rate by 6.2%, and a final improvement of 2.7% in spectral distortion is observed. The paper concludes with a comparison between the direct signal-to-signal and phonetic-based mappings.
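As a rough illustration of the phonetic-based mapping described in the abstract, the sketch below outlines a recognition-then-synthesis pipeline in which NAM (Non-Audible Murmur) spectral frames are fused with facial-movement features before HMM decoding. All function and object names (extract_features, recognize_phones, hmm_recognizer.decode, hmm_synthesizer.generate, etc.) are hypothetical placeholders, and the feature parametrization is assumed rather than taken from the paper; this is a conceptual sketch, not the authors' implementation.

import numpy as np

def extract_features(nam_signal, facial_markers):
    """Stack NAM spectral frames with synchronized facial-movement features.

    nam_signal: 1-D array holding the Non-Audible Murmur waveform.
    facial_markers: (n_frames, n_markers) array of lip/jaw marker positions.
    Returns an (n_frames, n_dims) multimodal observation matrix.
    """
    # Placeholder acoustic analysis: the actual spectral parametrization
    # used in the paper is not reproduced here.
    n_frames = facial_markers.shape[0]
    acoustic = np.random.randn(n_frames, 25)      # stand-in for NAM spectra
    return np.hstack([acoustic, facial_markers])   # simple feature fusion

def recognize_phones(observations, hmm_recognizer):
    """Decode the multimodal observations into a phone sequence (assumed decoder API)."""
    return hmm_recognizer.decode(observations)

def synthesize_speech(phone_sequence, hmm_synthesizer):
    """Generate an audible speech waveform from the decoded phones (assumed synthesis API)."""
    return hmm_synthesizer.generate(phone_sequence)

def nam_to_speech(nam_signal, facial_markers, recognizer, synthesizer):
    """Phonetic-based mapping: recognize first, then synthesize."""
    obs = extract_features(nam_signal, facial_markers)
    phones = recognize_phones(obs, recognizer)
    return synthesize_speech(phones, synthesizer)

In contrast to a direct signal-to-signal mapping, the intermediate phone sequence here imposes the phonological constraints mentioned in the abstract, at the cost of propagating any recognition errors into the synthesis step.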
Main file: vat_IS09.pdf (1.08 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-00419232, version 1 (23-09-2009)

Identifiers

  • HAL Id: hal-00419232, version 1

Cite

Viet-Anh Tran, Gérard Bailly, Hélène Loevenbruck, Tomoki Toda. Multimodal HMM-based NAM-to-speech conversion. Interspeech 2009 - 10th Annual Conference of the International Speech Communication Association, Sep 2009, Brighton, United Kingdom. pp.656-659. ⟨hal-00419232⟩
438 Views
175 Downloads
