From 3-D speaker cloning to text-to-audiovisual speech - Archive ouverte HAL
Conference paper, Year: 2008

From 3-D speaker cloning to text-to-audiovisual speech

Abstract

Visible speech movements were optically motion-captured and parameterized by means of a guided PCA. Co-articulated consonantal targets were extracted from VCVs; vocalic targets were extracted from these VCVs and from sustained vowels. Targets were selected or combined to derive target sequences for the phone chains of arbitrary German utterances. Parameter trajectories for these utterances were generated by interpolating targets with linear to quadratic functions that reflect the degree of co-articulatory influence. Videos of test words embedded in a carrier sentence were rendered from the parameter trajectories and evaluated in a rhyme test in noise. Results show that the synthetic videos, although intelligible only somewhat above chance level when played alone, significantly increase recognition scores from 45.6% in audio-only presentation to 60.4% in audiovisual presentation.
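The trajectory-generation step described in the abstract (blending successive articulatory targets with interpolation functions ranging from linear to quadratic according to co-articulatory strength) can be illustrated with a small sketch. The function name, parameter shapes, and power-law easing below are assumptions made for illustration, not the authors' implementation:

```python
import numpy as np

def interpolate_targets(targets, frames_between, exponents):
    """Illustrative sketch: blend successive articulatory parameter targets
    with a power-law easing whose exponent (1.0 = linear, 2.0 = quadratic)
    stands in for the degree of co-articulatory influence.
    Names, shapes, and conventions are assumptions, not the paper's API."""
    trajectory = [targets[0]]
    for i in range(len(targets) - 1):
        a, b = targets[i], targets[i + 1]
        p = exponents[i]                      # assumed range 1.0..2.0
        for k in range(1, frames_between + 1):
            t = k / frames_between            # normalized time in (0, 1]
            w = t ** p                        # easing weight toward next target
            trajectory.append((1.0 - w) * a + w * b)
    return np.vstack(trajectory)

# Toy usage: three targets in a 6-dimensional (guided-PCA) parameter space,
# 10 frames per transition, one linear and one quadratic blend.
targets = [np.zeros(6), np.ones(6), 0.5 * np.ones(6)]
traj = interpolate_targets(targets, frames_between=10, exponents=[1.0, 2.0])
print(traj.shape)  # (21, 6)
```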
Main file
sf_AVSP08.pdf (110.33 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00361888, version 1 (16-02-2009)

Identifiers

  • HAL Id: hal-00361888, version 1

Cite

Sascha Fagel, Gérard Bailly. From 3-D speaker cloning to text-to-audiovisual speech. AVSP 2008 - 7th International Conference on Auditory-Visual Speech Processing, Sep 2008, Moreton Island, Australia. pp.43-46. ⟨hal-00361888⟩
97 Views
38 Downloads
