Conference paper, Year: 2010

Multiparametric speech assessment

Alain Ghio
Thierry Legou
  • Role: Author
  • PersonId: 850065
  • IdRef: 257835849

Abstract

Speech production is probably the most complex neuromotor activity in human behavior. It involves a large number of muscles executing particularly precise movements, driven by many motor units whose synchronization must be perfectly controlled to produce the speech sound. The acoustic signal is the result of different mechanisms involving respiration, phonation and articulation, all controlled by cognitive processes. Dysarthria refers not only to a deficit in articulation per se, but encompasses disturbances in the control of voice quality, speech rhythm, loudness, segmental articulation, pitch, fluency, etc.

Even though associations between deviant acoustic-phonetic dimensions and certain types of dysarthria have been made in clinical practice and in the clinical literature, descriptions of dysarthria are often based on perceptual assessments, as in the pioneering studies of Darley [1]. Perceptual analysis is indeed still considered the "gold standard": a patient is declared dysarthric because he or she is perceived as dysarthric [2]. However, instrumental analysis is increasingly recommended to provide complementary information for the assessment and to quantify descriptions of the speech patterns objectively [3,4]. A review of acoustic studies of dysarthric speech is available in [3]; it reports that "the great majority (of studies) focuses on a small set of measures and typically a very small number of subjects".

The goal of acoustic analysis is to correlate speech deviances with the neurological disturbance. This reverse process is not simple, because the relationship between a neuromotor dysfunction and the speech signal is not direct: some dysfunctions have no acoustic impact, and some acoustic information is not directly linked to a dysfunction. Moreover, individual compensation mechanisms can blur the speech analysis. Since speech is the result of complex processes, we have proposed for several years to go beyond acoustic analysis (which captures only the end of the speech production chain) and to use multisensor data acquisition systems for the investigation of speech production [5].

In this contribution we present several techniques that make it possible to observe and measure more directly the phenomena linked to the dynamics of the speech organs. Aerodynamic sensors (oral airflow, nasal airflow, subglottal pressure) are valuable for assessing phonation and articulation, as robust indicators of organ movements (laryngeal leakage, velar leakage, lip aperture) or of physiological mechanisms (pneumophonatory coordination, velopharyngeal port activity). Electrophysiological techniques are also efficient in terms of functional assessment: electroglottography for phonation, electromyography and electropalatography for articulation. Finally, electrokinesiography (i.e. electro-magneto-articulography) may be the best way to observe the movement disorders, but the complexity of such a system currently remains the biggest obstacle.
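As a concrete illustration of the "small set of measures" that most acoustic studies rely on, the sketch below computes two basic descriptors, per-frame intensity and a rough autocorrelation-based F0 estimate, from a speech recording. It is not taken from the paper: the file name, analysis settings and voicing threshold are hypothetical, and clinical work would use validated tools rather than this minimal code.

    # Illustrative sketch (not from the paper): two basic acoustic measures
    # often reported in dysarthria studies -- per-frame intensity and a rough
    # F0 estimate. Assumes a mono WAV file named "speech.wav" (hypothetical).
    import numpy as np
    from scipy.io import wavfile

    def frame_signal(x, frame_len, hop):
        """Slice the signal into overlapping frames."""
        n_frames = 1 + (len(x) - frame_len) // hop
        return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

    def intensity_db(frames, eps=1e-10):
        """Root-mean-square energy per frame, in dB (arbitrary reference)."""
        rms = np.sqrt(np.mean(frames ** 2, axis=1))
        return 20.0 * np.log10(rms + eps)

    def f0_autocorr(frame, sr, fmin=75.0, fmax=400.0):
        """Crude F0 estimate from the autocorrelation peak; NaN if likely unvoiced."""
        frame = frame - frame.mean()
        ac = np.correlate(frame, frame, mode="full")[len(frame) - 1 :]
        lag_min, lag_max = int(sr / fmax), int(sr / fmin)
        if lag_max >= len(ac) or ac[0] <= 0:
            return np.nan
        lag = lag_min + np.argmax(ac[lag_min:lag_max])
        return sr / lag if ac[lag] / ac[0] > 0.3 else np.nan

    sr, x = wavfile.read("speech.wav")          # hypothetical input file
    x = x.astype(np.float64)
    if x.ndim > 1:                              # keep a single channel
        x = x[:, 0]
    frames = frame_signal(x, frame_len=int(0.04 * sr), hop=int(0.01 * sr))
    energy = intensity_db(frames)
    f0 = np.array([f0_autocorr(f, sr) for f in frames])
    print(f"mean F0: {np.nanmean(f0):.1f} Hz, intensity range: {np.ptp(energy):.1f} dB")

Loudness and pitch are two of the dimensions listed above as disturbed in dysarthria, which is why these two descriptors were chosen for the illustration.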
_2_2010-DBS-GhioLegou-EvaluationInstrumentale.pdf (978.33 KB) Download the file
Origin: Files produced by the author(s)

Dates and versions

hal-01616672, version 1 (13-10-2017)

Identifiers

  • HAL Id: hal-01616672, version 1

Cite

Alain Ghio, Thierry Legou. Multiparametric speech assessment. International Symposium Basal Ganglia Speech Disorders & Deep Brain Stimulation, 2010, Aix-en-Provence, France. ⟨hal-01616672⟩
