Conference paper, Year: 2022

End-to-end model for named entity recognition from speech without paired training data

Salima Mdhaffar
Jarod Duret
Titouan Parcollet
Yannick Estève

Abstract

End-to-end neural approaches have recently become very popular for spoken language understanding (SLU). The term end-to-end refers to the use of a single model optimized to extract semantic information directly from the speech signal. A major issue for such models is the lack of paired audio and textual data with semantic annotation. In this paper, we propose an approach to build an end-to-end neural model that extracts semantic information in a scenario in which no paired audio data is available. Our approach relies on an external model trained to generate a sequence of vectorial representations from text. These representations mimic the hidden representations that an end-to-end automatic speech recognition (ASR) model could generate internally when processing a speech signal. An SLU neural module is then trained using these representations as input and the annotated text as output. Finally, the SLU module replaces the top layers of the ASR model, completing the construction of the end-to-end model. Our experiments on named entity recognition, carried out on the QUAERO corpus, show that this approach is very promising, yielding better results than a comparable cascade approach or than the use of synthetic voices.
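To make the architecture described in the abstract concrete, here is a minimal PyTorch sketch of the general idea, not the authors' implementation: the module names (TextToASRHidden, SLUHead), dimensions, layer choices, and label count are illustrative assumptions. A text-driven encoder produces pseudo ASR hidden states, so the SLU head can be trained on annotated text alone; at inference time, that head is stacked on the hidden states of a real ASR encoder.

```python
import torch
import torch.nn as nn

# Hypothetical modules sketching the paper's idea; names and sizes are
# assumptions, not the authors' code.

class TextToASRHidden(nn.Module):
    """Maps a token sequence to vectors mimicking ASR hidden representations."""
    def __init__(self, vocab_size: int, hidden_dim: int = 512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.encoder = nn.LSTM(hidden_dim, hidden_dim,
                               num_layers=2, batch_first=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)   # (batch, text_len, hidden_dim)
        h, _ = self.encoder(x)
        return h                    # stands in for ASR encoder hidden states


class SLUHead(nn.Module):
    """Predicts a named-entity label for each frame of the representation."""
    def __init__(self, hidden_dim: int = 512, num_labels: int = 20):
        super().__init__()
        self.rnn = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, num_labels)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(hidden)
        return self.proj(h)         # per-frame NE label logits


# Training uses annotated text only: the mimic encoder supplies the input,
# so no paired (audio, semantic annotation) data is required.
mimic, slu = TextToASRHidden(vocab_size=5000), SLUHead()
token_ids = torch.randint(0, 5000, (4, 30))  # toy batch of token sequences
logits = slu(mimic(token_ids))               # optimize against NE annotations

# At inference time, the SLU head replaces the top layers of a trained
# end-to-end ASR model, sitting on its lower (encoder) layers:
#   asr_hidden = asr_encoder(speech_signal)  # hidden states from speech
#   logits = slu(asr_hidden)
```

The key property this sketch assumes is that the text-driven encoder and the ASR encoder emit vectors in the same representation space, which is what lets an SLU head trained purely on text transfer to speech input.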
Main file
IS22___Textual_injection_for_e2e_NER-3.pdf (652.13 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03701145, version 1 (21-06-2022)

Identifiers

  • HAL Id: hal-03701145, version 1

Cite

Salima Mdhaffar, Jarod Duret, Titouan Parcollet, Yannick Estève. End-to-end model for named entity recognition from speech without paired training data. Interspeech 2022, Sep 2022, Incheon, South Korea. ⟨hal-03701145⟩
67 Views
349 Downloads
