Conference paper, Year: 2024

A dual task learning approach to fine-tune a multilingual semantic speech encoder for Spoken Language Understanding

Abstract

Self-Supervised Learning (SSL) is widely used to efficiently represent speech for Spoken Language Understanding (SLU), gradually replacing conventional approaches. Meanwhile, textual SSL models have been proposed to encode language-agnostic semantics. The SAMU-XLSR framework exploits this semantic information to enrich multilingual speech representations. A recent study investigated in-domain semantic enrichment of SAMU-XLSR by specializing it on downstream transcriptions, leading to state-of-the-art results on a challenging SLU task. Our concern is that such specialization, carried out on closely related languages and without any SLU involvement, degrades multilingual performance and provides no task-specific semantic training. We also consider the loss of SAMU-XLSR's initial cross-lingual abilities caused by a separate SLU fine-tuning. This paper therefore proposes a dual task learning approach to improve SAMU-XLSR semantic enrichment while considering distant languages for multilingual and language portability experiments.
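To illustrate the kind of objective such a dual task setup could rely on, the following is a minimal PyTorch sketch that combines an SLU loss (CTC over semantic token sequences) with a SAMU-XLSR-style semantic alignment loss (cosine distance between a pooled speech embedding and a frozen multilingual sentence embedding). The module name DualTaskLoss, the weighting factor alpha, and the toy shapes are illustrative assumptions, not the exact recipe described in the paper.

    # Hypothetical sketch of a dual-task fine-tuning objective (assumed, not the paper's exact setup):
    # an SLU branch (CTC) and a semantic alignment branch (cosine distance to a frozen text embedding)
    # are combined into a single weighted loss.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class DualTaskLoss(nn.Module):
        """Joint SLU (CTC) + semantic alignment objective (illustrative)."""

        def __init__(self, alpha: float = 0.5, blank_id: int = 0):
            super().__init__()
            self.alpha = alpha  # assumed trade-off between the two objectives
            self.ctc = nn.CTCLoss(blank=blank_id, zero_infinity=True)

        def forward(self, log_probs, targets, input_lengths, target_lengths,
                    speech_emb, text_emb):
            # SLU branch: CTC over semantic token sequences, log_probs is (time, batch, vocab).
            slu_loss = self.ctc(log_probs, targets, input_lengths, target_lengths)
            # Semantic branch: pull the pooled utterance embedding toward the
            # frozen multilingual sentence embedding of its transcription.
            sem_loss = (1.0 - F.cosine_similarity(speech_emb, text_emb, dim=-1)).mean()
            return self.alpha * slu_loss + (1.0 - self.alpha) * sem_loss


    if __name__ == "__main__":
        # Toy example: 100 encoder frames, batch of 2, 32-token vocabulary,
        # 768-dimensional pooled speech and sentence embeddings.
        criterion = DualTaskLoss(alpha=0.5)
        log_probs = torch.randn(100, 2, 32, requires_grad=True).log_softmax(dim=-1)
        loss = criterion(
            log_probs,
            targets=torch.randint(1, 32, (2, 10)),
            input_lengths=torch.full((2,), 100, dtype=torch.long),
            target_lengths=torch.full((2,), 10, dtype=torch.long),
            speech_emb=torch.randn(2, 768, requires_grad=True),
            text_emb=torch.randn(2, 768),
        )
        loss.backward()
        print(float(loss))

In this sketch a single weighted sum balances the two objectives; in practice the weighting, the pooling strategy, and the choice of frozen text encoder would depend on the actual training setup.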
Main file
Interspeech_2024_Dual_specialization-3.pdf (464.08 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04615074, version 1 (18-06-2024)

Identifiers

  • HAL Id: hal-04615074, version 1

Cite

Gaëlle Laperrière, Sahar Ghannay, Bassam Jabaian, Yannick Estève. A dual task learning approach to fine-tune a multilingual semantic speech encoder for Spoken Language Understanding. Interspeech 2024, Sep 2024, Kos, Greece. ⟨hal-04615074⟩
87 Views
45 Downloads
