Conference paper, 2023

On the Use of Semantically-Aligned Speech Representations for Spoken Language Understanding

Abstract

In this paper, we examine the use of semantically-aligned speech representations for end-to-end spoken language understanding (SLU). We employ the recently introduced SAMU-XLSR model, which is designed to generate a single embedding that captures utterance-level semantics and is semantically aligned across languages. This model combines the acoustic frame-level speech representation model XLS-R with the Language Agnostic BERT Sentence Embedding (LaBSE) model. We show that using the SAMU-XLSR model instead of the initial XLS-R model significantly improves performance in an end-to-end SLU framework. Finally, we present the benefits of this model for language portability in SLU.
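To make the approach concrete, the sketch below illustrates, in simplified form, how such semantic alignment can be trained: frame-level XLS-R features are pooled into a single utterance vector and pulled toward the frozen LaBSE embedding of the paired transcription. This is a minimal sketch of the idea, not the paper's exact recipe; the mean-pooling, the linear projection, the cosine loss, and the Hugging Face checkpoint names are illustrative assumptions.

```python
# Minimal sketch of SAMU-XLSR-style training (NOT the authors' exact recipe):
# pool XLS-R frame-level features into one utterance embedding and pull it
# toward the frozen LaBSE embedding of the paired transcription. The pooling,
# projection, loss, and checkpoint names are illustrative assumptions.
import torch
import torch.nn.functional as F
from transformers import Wav2Vec2Model
from sentence_transformers import SentenceTransformer

speech_encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-xls-r-300m")
labse = SentenceTransformer("sentence-transformers/LaBSE")  # frozen text teacher

# Hypothetical projection from the XLS-R hidden size (1024) to LaBSE's (768).
proj = torch.nn.Linear(1024, 768)

def alignment_loss(waveforms: torch.Tensor, transcriptions: list[str]) -> torch.Tensor:
    """Cosine loss pulling pooled speech embeddings toward LaBSE text embeddings."""
    frames = speech_encoder(waveforms).last_hidden_state   # (B, T, 1024)
    utt = proj(frames.mean(dim=1))                         # mean-pool over time
    with torch.no_grad():                                  # teacher stays frozen
        text = labse.encode(transcriptions, convert_to_tensor=True)  # (B, 768)
    target = torch.ones(utt.size(0))                       # label +1: make pairs similar
    return F.cosine_embedding_loss(utt, text, target)
```

Keeping LaBSE frozen is the key design choice in this setup: the speech encoder is distilled into an existing multilingual text embedding space, so an utterance in any of XLS-R's languages lands near semantically equivalent text, which is what enables the language-portability results reported in the paper.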

Dates and versions

hal-04155025, version 1 (20-02-2024)

Identifiers

HAL Id: hal-04155025
DOI: 10.1109/SLT54892.2023.10023013

Cite

Gaëlle Laperrière, Valentin Pelloin, Mickaël Rouvier, Themos Stafylakis, Yannick Estève. On the Use of Semantically-Aligned Speech Representations for Spoken Language Understanding. 2022 IEEE Spoken Language Technology Workshop (SLT), Jan 2023, Doha, Qatar. pp. 361-368, ⟨10.1109/SLT54892.2023.10023013⟩. ⟨hal-04155025⟩