Convolutive Bottleneck Network with Dropout for Dysarthric Speech Recognition
Journal article in Transactions on Machine Learning and Artificial Intelligence, 2014


Abstract

In this paper, we investigate the recognition of speech produced by a person with an articulation disorder resulting from athetoid cerebral palsy. The articulation of the first spoken words tends to become unstable due to strain on the speech muscles, which degrades speech recognition performance. We therefore propose a robust feature extraction method based on a convolutive bottleneck network (CBN) instead of the well-known MFCCs. The CBN stacks several types of layers, such as convolution layers, subsampling layers, and a bottleneck layer, to form a deep network. By applying the CBN to feature extraction for dysarthric speech, we expect it to reduce the influence of the unstable speaking style caused by the athetoid symptoms. Furthermore, we adopt dropout in the output layer, since the labels automatically assigned to the dysarthric speech are usually unreliable due to the ambiguous phonemes uttered by the speaker. We confirmed the effectiveness of the approach through word recognition experiments, in which the CBN-based feature extraction method outperformed the conventional feature extraction method.
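As a rough illustration of the architecture described in the abstract, the following PyTorch sketch stacks convolution and subsampling (pooling) layers, a low-dimensional bottleneck layer whose activations are used as features in place of MFCCs, and a dropout-regularized output layer for the (possibly noisy) phoneme labels. All layer sizes, kernel sizes, the 28x28 time-frequency patch shape, and the label count are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class ConvolutiveBottleneckNetwork(nn.Module):
    """Sketch of a CBN: convolution and subsampling stages, a small
    bottleneck layer providing the feature vector, and a dropout-
    regularized output (label) layer."""

    def __init__(self, n_labels, bottleneck_dim=30, dropout_p=0.5):
        super().__init__()
        # Convolution + subsampling over time-frequency patches
        # (sizes are illustrative; input assumed to be 1 x 28 x 28).
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.flatten = nn.Flatten()
        # Bottleneck layer: its activations serve as robust features
        # for the downstream word recognizer.
        self.bottleneck = nn.Linear(32 * 4 * 4, bottleneck_dim)
        # Dropout before the output layer mitigates the effect of
        # unreliable, automatically assigned training labels.
        self.dropout = nn.Dropout(dropout_p)
        self.output = nn.Linear(bottleneck_dim, n_labels)

    def forward(self, x):
        h = self.flatten(self.conv(x))
        feat = torch.tanh(self.bottleneck(h))    # bottleneck features
        logits = self.output(self.dropout(feat))
        return logits, feat


# Example usage with a batch of 8 illustrative 28x28 patches.
model = ConvolutiveBottleneckNetwork(n_labels=54)
logits, features = model(torch.randn(8, 1, 28, 28))
```

After training on label prediction, only the bottleneck activations would be kept and fed to the recognizer as frame-level features; the output layer exists solely to drive the supervised training signal.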

Dates and versions

hal-01301040, version 1 (27-03-2017)


Cite

Toru Nakashika, Toshiya Yoshioka, Tetsuya Takiguchi, Yasuo Ariki, Stefan Duffner, et al. Convolutive Bottleneck Network with Dropout for Dysarthric Speech Recognition. Transactions on Machine Learning and Artificial Intelligence, 2014, 2 (2), pp. 1-15. ⟨10.14738/tmlai.22.150⟩. ⟨hal-01301040⟩