Conference paper, 2010

Exploiting multimodal data fusion in robust speech recognition

Abstract

This article introduces automatic speech recognition based on Electro-Magnetic Articulography (EMA). Movements of the tongue, lips, and jaw are tracked by an EMA device and used as features to train Hidden Markov Models (HMMs) and recognize speech from articulation alone, that is, without any audio information. Automatic phoneme recognition experiments are also conducted to examine the contribution of the EMA parameters to robust speech recognition. Noisy audio speech was integrated with EMA data using feature fusion, multi-stream HMM fusion, and late fusion methods, and recognition experiments were conducted. The results show that integrating the EMA parameters significantly increases the accuracy of an audio speech recognizer in noisy environments.
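The abstract names three fusion strategies without detail. As a rough illustration, the Python/NumPy sketch below shows one common reading of each; the function names, the assumption of frame-synchronous features, and the stream weight of 0.7 are illustrative choices, not details taken from the paper.

    import numpy as np

    def feature_fusion(audio_feats, ema_feats):
        # Early (feature-level) fusion: concatenate the per-frame audio and
        # EMA feature vectors into one observation stream for a single HMM.
        # Both arrays are assumed time-aligned with shape (num_frames, dim).
        return np.concatenate([audio_feats, ema_feats], axis=1)

    def multistream_log_likelihood(ll_audio, ll_ema, w_audio=0.7):
        # Multi-stream HMM fusion: per-state log-likelihoods of the audio
        # and EMA streams are combined with exponent weights summing to one.
        # The weight 0.7 is an illustrative value, not taken from the paper.
        return w_audio * ll_audio + (1.0 - w_audio) * ll_ema

    def late_fusion(scores_audio, scores_ema, w_audio=0.7):
        # Late (decision-level) fusion: merge the phoneme scores produced by
        # two independent recognizers and return the best-scoring hypothesis.
        combined = (w_audio * np.asarray(scores_audio)
                    + (1.0 - w_audio) * np.asarray(scores_ema))
        return int(np.argmax(combined))

In the multi-stream and late-fusion cases the stream weight is typically tuned on held-out data, since the best balance between the audio and articulatory streams shifts with the signal-to-noise ratio of the audio.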
No file deposited

Dates and versions

hal-00508288, version 1 (02-08-2010)

Identifiers

  • HAL Id: hal-00508288, version 1

Cite

Panikos Heracleous, Pierre Badin, Gérard Bailly, Norihiro Hagita. Exploiting multimodal data fusion in robust speech recognition. ICME 2010 - IEEE International Conference on Multimedia and Expo, Jul 2010, Singapore. In press. ⟨hal-00508288⟩