Journal article in IEEE Journal of Selected Topics in Signal Processing, 2022

SAMU-XLSR: Semantically-Aligned Multimodal Utterance-level Cross-Lingual Speech Representation

Abstract

We propose SAMU-XLSR: a Semantically-Aligned Multimodal Utterance-level Cross-Lingual Speech Representation learning framework. Unlike previous work on speech representation learning, which learns multilingual contextual speech embeddings at the resolution of an acoustic frame (10–20 ms), this work focuses on learning multimodal (speech-text) multilingual speech embeddings at the resolution of a sentence (5–10 s), such that the embedding vector space is semantically aligned across different languages. We combine the state-of-the-art multilingual acoustic frame-level speech representation learning model XLSR with the Language Agnostic BERT Sentence Embedding (LaBSE) model to create an utterance-level multimodal multilingual speech encoder, SAMU-XLSR. Although we train SAMU-XLSR with only multilingual transcribed speech data, cross-lingual speech-text and speech-speech associations emerge in its learned representation space. To substantiate our claims, we use the SAMU-XLSR speech encoder in combination with the pre-trained LaBSE text sentence encoder for cross-lingual speech-to-text translation retrieval, and SAMU-XLSR alone for cross-lingual speech-to-speech translation retrieval. We highlight these applications by performing several cross-lingual text and speech translation retrieval tasks across several datasets.
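The core idea described in the abstract can be illustrated with a short sketch: pool XLSR frame-level features into a single utterance vector and train it to match the (frozen) LaBSE embedding of the transcript under a cosine objective. This is a minimal sketch, not the authors' code; it assumes the public Hugging Face checkpoints `facebook/wav2vec2-xls-r-300m` and `sentence-transformers/LaBSE`, and it uses simple mean pooling plus a hypothetical linear projection `proj` in place of the paper's pooling/projection head.

```python
# Minimal sketch of the SAMU-XLSR training idea (not the authors' implementation).
import torch
import torch.nn.functional as F
from transformers import Wav2Vec2Model
from sentence_transformers import SentenceTransformer

# Frame-level multilingual speech encoder (trainable).
xlsr = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-xls-r-300m")
# Frozen text sentence encoder that defines the semantic target space.
labse = SentenceTransformer("sentence-transformers/LaBSE")

# Hypothetical projection head: XLS-R 300M frame features (1024-d)
# mapped into the 768-d LaBSE sentence-embedding space.
proj = torch.nn.Linear(1024, 768)

def speech_embedding(waveform: torch.Tensor) -> torch.Tensor:
    """Pool XLSR frame features into one unit-norm utterance-level vector."""
    frames = xlsr(waveform).last_hidden_state   # (batch, frames, 1024)
    pooled = frames.mean(dim=1)                 # mean pooling (a simplification)
    return F.normalize(proj(pooled), dim=-1)    # (batch, 768)

def alignment_loss(waveform: torch.Tensor, transcripts: list[str]) -> torch.Tensor:
    """Cosine loss pulling each speech embedding toward its transcript's LaBSE embedding."""
    with torch.no_grad():
        text_emb = torch.tensor(labse.encode(transcripts, normalize_embeddings=True))
    speech_emb = speech_embedding(waveform)
    return (1.0 - F.cosine_similarity(speech_emb, text_emb)).mean()
```

Because speech and text end up in the same LaBSE-defined space, the cross-lingual retrieval experiments reported in the paper reduce to nearest-neighbor search by cosine similarity between utterance embeddings and candidate sentence (or utterance) embeddings.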

Dates and versions

hal-03790203 , version 1 (28-09-2022)

Identifiers

Cite

Sameer Khurana, Antoine Laurent, James Glass. SAMU-XLSR: Semantically-Aligned Multimodal Utterance-level Cross-Lingual Speech Representation. IEEE Journal of Selected Topics in Signal Processing, 2022, pp. 1-13. ⟨10.1109/JSTSP.2022.3192714⟩. ⟨hal-03790203⟩