A Multilinear Tongue Model Derived from Speech Related MRI Data of the Human Vocal Tract
Abstract
We present a multilinear statistical model of the human tongue that captures anatomical and tongue-pose-related shape variations separately. The model is derived from 3D magnetic resonance imaging data of 11 speakers sustaining speech-related vocal tract configurations. To extract model parameters, we use a minimally supervised method based on an image segmentation approach and a template fitting technique. Furthermore, we use image denoising to deal with potentially corrupted data, palate surface information reconstruction to handle palatal tongue contacts, and a bootstrap strategy to refine the obtained shapes. Our evaluation shows that, by limiting the degrees of freedom for the anatomical and speech-related variations to 5 and 4, respectively, we obtain a model that can reliably register unknown data while avoiding overfitting effects. We also show that the model can be used to generate plausible tongue animation by tracking sparse motion capture data.
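For illustration, the sketch below shows how such a multilinear model is typically evaluated: a core tensor, obtained from a Tucker-style decomposition of the registered training meshes, is contracted with an anatomy weight vector (5 modes) and a pose weight vector (4 modes), and the result is added to the mean shape. All dimensions, variable names, and the placeholder data are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

# Assumed dimensions: a mesh with 4000 vertices (flattened to 3 * 4000 coordinates),
# 5 anatomical modes, and 4 speech-related (pose) modes, matching the abstract.
num_vertex_coords = 3 * 4000
num_anatomy_modes = 5
num_pose_modes = 4

# Placeholder model data; in practice the mean shape and core tensor would come
# from a Tucker/HOSVD decomposition of the registered training meshes
# (speakers x poses x vertex coordinates).
mean_shape = np.zeros(num_vertex_coords)
core_tensor = np.random.randn(num_anatomy_modes, num_pose_modes, num_vertex_coords)

def reconstruct_tongue(anatomy_weights, pose_weights):
    """Contract the core tensor with both weight vectors and add the mean shape."""
    # Contract the anatomy mode first: result has shape (pose modes, coordinates).
    partial = np.tensordot(anatomy_weights, core_tensor, axes=([0], [0]))
    # Then contract the pose mode: result has shape (coordinates,).
    offset = np.tensordot(pose_weights, partial, axes=([0], [0]))
    return mean_shape + offset

# Example: reconstruct one speaker identity in one tongue pose.
anatomy_w = np.zeros(num_anatomy_modes); anatomy_w[0] = 1.0
pose_w = np.zeros(num_pose_modes); pose_w[0] = 1.0
vertices = reconstruct_tongue(anatomy_w, pose_w).reshape(-1, 3)
```

Keeping the two weight vectors separate is what lets anatomical variation (who is speaking) and speech-related variation (which vocal tract configuration is produced) be controlled independently.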
Main file
article.pdf (2.94 MB)

Ancillary files
anc/01MRIM_projection_68.mp4 (510.82 KB)
anc/01MRIM_projection_69.mp4 (512.59 KB)
anc/01MRIM_projection_70.mp4 (515.33 KB)
anc/01MRIM_projection_73.mp4 (511.8 KB)
anc/01MRIM_projection_74.mp4 (516.94 KB)
anc/01MRIM_projection_76.mp4 (512.42 KB)
anc/01MRIM_projection_77.mp4 (511.98 KB)
anc/VP05_fixed_anatomy.mp4 (11.04 MB)
anc/VP05_full.mp4 (10.37 MB)
Origin: Files produced by the author(s)