Conference paper, 2021

Modeling and training strategies for language recognition systems

Abstract

Automatic speech recognition is complementary to language recognition. Language recognition systems exploit this complementarity by using frame-level bottleneck features extracted from neural networks trained on a phone recognition task. Recent methods instead use frame-level bottleneck features extracted from an end-to-end sequence-to-sequence speech recognition model. In this work, we study an integrated approach to training the speech recognition feature extractor and the language recognition module. We show that for both classical phone recognition features and end-to-end sequence-to-sequence features, sequential training of the two modules is not the optimal strategy. The feature extractor can be improved by supervision with the language identification loss, either in a fine-tuning step or in a multi-task training framework. Moreover, we observe that end-to-end sequence-to-sequence bottleneck features are on par with classical phone recognition bottleneck features, without requiring a forced alignment of the signal with target tokens. For sequence-to-sequence features, however, the architecture of the model plays an important role: the Conformer architecture leads to much better results than the conventional stacked-DNN approach, and can even be trained directly with the LID module in an end-to-end approach.
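The multi-task strategy described in the abstract can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch example, not the authors' implementation: the module names, dimensions, and the loss weight alpha are all illustrative assumptions. It shows a shared bottleneck feature extractor supervised jointly by a frame-level phone loss and an utterance-level LID loss computed on temporally pooled bottleneck features.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the multi-task training strategy: a shared
# bottleneck extractor receives gradients from both a phone recognition
# head and a language identification (LID) head. All names and the
# weight `alpha` are illustrative assumptions, not the paper's code.

class BottleneckExtractor(nn.Module):
    """Frame-level feature extractor (stand-in for a stacked DNN or Conformer encoder)."""
    def __init__(self, n_mels=80, d_bnf=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_mels, 512), nn.ReLU(),
            nn.Linear(512, d_bnf), nn.ReLU(),
        )

    def forward(self, x):           # x: (batch, frames, n_mels)
        return self.net(x)          # (batch, frames, d_bnf)

class MultiTaskModel(nn.Module):
    def __init__(self, n_phones=40, n_langs=8, d_bnf=256):
        super().__init__()
        self.extractor = BottleneckExtractor(d_bnf=d_bnf)
        self.phone_head = nn.Linear(d_bnf, n_phones)   # frame-level phone posteriors
        self.lid_head = nn.Linear(d_bnf, n_langs)      # utterance-level LID

    def forward(self, x):
        bnf = self.extractor(x)
        phone_logits = self.phone_head(bnf)            # (batch, frames, n_phones)
        pooled = bnf.mean(dim=1)                       # simple temporal pooling
        lid_logits = self.lid_head(pooled)             # (batch, n_langs)
        return phone_logits, lid_logits

# One illustrative training step with a weighted sum of the two losses.
model = MultiTaskModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()
alpha = 0.5  # assumed interpolation weight between the two tasks

x = torch.randn(4, 200, 80)                    # dummy batch of 4 utterances
phone_targets = torch.randint(0, 40, (4, 200)) # frame-level phone labels
lang_targets = torch.randint(0, 8, (4,))       # one language label per utterance

phone_logits, lid_logits = model(x)
loss = alpha * ce(phone_logits.transpose(1, 2), phone_targets) \
       + (1 - alpha) * ce(lid_logits, lang_targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Under these assumptions, the fine-tuning variant mentioned in the abstract would correspond to first training with the phone loss only and then continuing with LID supervision, while the end-to-end Conformer variant would replace the extractor with a Conformer encoder trained directly with the LID module.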
Main file

bnf-lid_interspeech2021_publication.pdf (169.36 KB)
Origin: files produced by the author(s)

Dates and versions

hal-03264085, version 1 (17-06-2021)

Identifiers

HAL Id: hal-03264085 · DOI: 10.21437/Interspeech.2021-277

Cite

Raphaël Duroselle, Md Sahidullah, Denis Jouvet, Irina Illina. Modeling and training strategies for language recognition systems. INTERSPEECH 2021, Aug 2021, Brno, Czech Republic. ⟨10.21437/Interspeech.2021-277⟩. ⟨hal-03264085⟩
