Conference Paper, Year: 2023

Semantic Information Investigation for Transformer-based Rescoring of N-best Speech Recognition

Abstract

This article proposes to improve an automatic speech recognition system by rescoring N-best recognition lists with models that enhance the semantic consistency of the hypotheses. We believe that in noisy speech segments, a semantic model can help resolve acoustic ambiguities. We estimate a pairwise score for each pair of hypotheses using BERT representations. The acoustic likelihood and LM scores are used as features, so that acoustic, language-model, and textual information are combined. In this work, we investigate two new ideas: using a fine-grained semantic representation at the word-token level and relying on the previously recognized sentences. On the TED-LIUM 3 dataset, in clean and noisy conditions, the best performance is obtained by leveraging context beyond the current utterance, which significantly outperforms rescoring with the state-of-the-art GPT-2 model and the approach of Fohr and Illina (2021).
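As an illustration of the kind of rescoring described in the abstract, the sketch below re-ranks an N-best list with a weighted combination of acoustic, LM, and BERT-based semantic scores. It is a minimal sketch, not the authors' implementation: the cosine-similarity proxy for semantic consistency, the choice of bert-base-uncased, the interpolation weights, and the toy hypotheses are all assumptions for illustration.

```python
# Minimal sketch of N-best rescoring with a BERT-based semantic score.
# All model names, weights, and the scoring heuristic are illustrative.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
bert.eval()

def semantic_score(context: str, hypothesis: str) -> float:
    """Cosine similarity between BERT [CLS] embeddings of the previously
    recognized context and the current hypothesis (a simple stand-in for
    a pairwise semantic-consistency score)."""
    embeddings = []
    with torch.no_grad():
        for text in (context, hypothesis):
            inputs = tokenizer(text, return_tensors="pt", truncation=True)
            cls = bert(**inputs).last_hidden_state[:, 0, :]  # [CLS] vector
            embeddings.append(torch.nn.functional.normalize(cls, dim=-1))
    return float((embeddings[0] * embeddings[1]).sum())

def rescore_nbest(nbest, context, w_ac=1.0, w_lm=0.5, w_sem=2.0):
    """nbest: list of (hypothesis_text, acoustic_logprob, lm_logprob).
    Returns hypotheses sorted by a weighted sum of the three scores;
    the weights are hypothetical and would be tuned on a dev set."""
    scored = [
        (hyp, w_ac * ac + w_lm * lm + w_sem * semantic_score(context, hyp))
        for hyp, ac, lm in nbest
    ]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Toy usage: the semantic term favors the hypothesis consistent with context.
nbest = [
    ("the whether is nice today", -120.3, -35.1),
    ("the weather is nice today", -121.0, -33.8),
]
print(rescore_nbest(nbest, context="they talked about the forecast"))
```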
Main file

Bertalsem_confLTC23_v9_nonanonym.pdf (564.22 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03965397, version 1 (01-02-2023)

Identifiers

  • HAL Id: hal-03965397, version 1

Cite

Irina Illina, Dominique Fohr. Semantic Information Investigation for Transformer-based Rescoring of N-best Speech Recognition. LTC 2023, Apr 2023, Poznan, Poland. ⟨hal-03965397⟩
