Can we use speaker embeddings on spontaneous speech obtained from medical conversations to predict intelligibility?
Abstract
The automatic prediction of speech intelligibility is a recurring problem in the context of pathological speech. Despite recent developments, such systems are typically applied to specific speech tasks recorded in clean conditions that do not necessarily reflect a healthcare environment. In the present paper, we test the reliability of an intelligibility predictor on data obtained in clinical conditions, in the specific case of head and neck cancer. To do so, we present a system based on speaker embeddings trained with a multi-task methodology to simultaneously predict speech intelligibility and speech disorder severity. The results obtained on the different evaluation tasks display correlations as high as 0.891 on a hospital patient set, showing robustness to the type of speech material used in these automatic assessments. Moreover, the use of spontaneous speech during evaluation sheds light on an understudied but more ecologically valid type of speech material, which yielded promising results. The reliability displayed across the different tasks suggests that the developed systems could be deployed directly in a hospital setting.
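To make the multi-task idea concrete, below is a minimal sketch, assuming pre-extracted speaker embeddings (e.g., x-vectors) and two continuous clinical scores. The class name, dimensions, and score scales are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: a shared trunk over a speaker embedding with two
# regression heads, trained jointly on intelligibility and severity.
import torch
import torch.nn as nn

class MultiTaskIntelligibilityHead(nn.Module):
    def __init__(self, embedding_dim: int = 512, hidden_dim: int = 128):
        super().__init__()
        # Shared layer over the speaker embedding
        self.shared = nn.Sequential(
            nn.Linear(embedding_dim, hidden_dim),
            nn.ReLU(),
        )
        # One regression head per clinical score
        self.intelligibility = nn.Linear(hidden_dim, 1)
        self.severity = nn.Linear(hidden_dim, 1)

    def forward(self, embedding: torch.Tensor):
        h = self.shared(embedding)
        return self.intelligibility(h), self.severity(h)

# Joint training step: the multi-task loss is the sum of per-task losses.
model = MultiTaskIntelligibilityHead()
criterion = nn.MSELoss()
emb = torch.randn(8, 512)             # batch of speaker embeddings (assumed dim)
target_intel = torch.rand(8, 1) * 10  # illustrative 0-10 intelligibility scores
target_sev = torch.rand(8, 1) * 10    # illustrative 0-10 severity scores
pred_intel, pred_sev = model(emb)
loss = criterion(pred_intel, target_intel) + criterion(pred_sev, target_sev)
loss.backward()
```

Sharing the trunk lets the severity target act as an auxiliary signal for the intelligibility head, which is the usual motivation for training both predictions jointly rather than as separate models.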