Conference paper, Year: 2014

Dysarthric Speech Recognition Using a Convolutive Bottleneck Network

Toru Nakashika
  • Role: Author
Toshiya Yoshioka
  • Role: Author
Tetsuya Takiguchi
  • Role: Author
Yasuo Ariki
  • Role: Author

Abstract

In this paper, we investigate the recognition of speech produced by a person with an articulation disorder resulting from athetoid cerebral palsy. The articulation of the first spoken words tends to become unstable due to strain on the speech muscles, which degrades the performance of traditional speech recognition systems. We therefore propose a robust feature extraction method using a convolutive bottleneck network (CBN) instead of the well-known MFCC features. The CBN stacks layers of several types, such as a convolution layer, a subsampling layer, and a bottleneck layer, to form a deep network. By applying the CBN to feature extraction for dysarthric speech, we expect it to reduce the influence of the unstable speaking style caused by the athetoid symptoms. We confirmed its effectiveness through word-recognition experiments, in which the CBN-based feature extraction method outperformed the conventional feature extraction method.
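The abstract describes the CBN only at a high level. As a rough illustration, the sketch below shows, in PyTorch, how a network combining convolution, subsampling, and bottleneck layers might be structured. All layer sizes, the input patch shape, the class count, and the use of the bottleneck activations as features for a downstream recognizer are assumptions made for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' exact model) of a convolutive bottleneck
# network feature extractor. All dimensions below are illustrative assumptions.
import torch
import torch.nn as nn

class ConvolutiveBottleneckNet(nn.Module):
    def __init__(self, n_mels=40, n_frames=11, bottleneck_dim=30, n_classes=54):
        super().__init__()
        # Convolution + subsampling (pooling) stages over a mel-spectrogram patch
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # subsampling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        flat = 32 * (n_mels // 4) * (n_frames // 4)           # flattened conv output size
        # Narrow "bottleneck" layer whose activations serve as robust features
        self.bottleneck = nn.Linear(flat, bottleneck_dim)
        self.classifier = nn.Sequential(nn.ReLU(), nn.Linear(bottleneck_dim, n_classes))

    def forward(self, x):                 # x: (batch, 1, n_mels, n_frames)
        h = self.conv(x).flatten(1)
        bn = self.bottleneck(h)           # bottleneck features for the recognizer
        return self.classifier(bn), bn

# Example usage on a batch of 8 hypothetical mel-spectrogram patches
net = ConvolutiveBottleneckNet()
logits, features = net(torch.randn(8, 1, 40, 11))  # features: (8, 30)
```

In this kind of setup, the network is trained to predict frame or word labels, and the bottleneck activations are then extracted and used in place of MFCCs as input features to a conventional recognizer.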
No file deposited

Dates and versions

hal-01301122, version 1 (11-04-2016)

Identifiers

  • HAL Id: hal-01301122, version 1

Cite

Toru Nakashika, Toshiya Yoshioka, Tetsuya Takiguchi, Yasuo Ariki, Stefan Duffner, et al. Dysarthric Speech Recognition Using a Convolutive Bottleneck Network. IEEE International Conference on Signal Processing (ICSP), Oct 2014, HangZhou, China. pp.505-509. ⟨hal-01301122⟩
