SAMU-XLSR: Semantically-Aligned Multimodal Utterance-level Cross-Lingual Speech Representation
Abstract
We propose SAMU-XLSR: a Semantically-Aligned Multimodal Utterance-level Cross-Lingual Speech Representation learning framework. Unlike previous work on speech representation learning, which learns multilingual contextual speech embeddings at the resolution of an acoustic frame (10–20 ms), this work focuses on learning multimodal (speech-text) multilingual speech embeddings at the resolution of a sentence (5–10 s), such that the embedding vector space is semantically aligned across different languages. We combine the state-of-the-art multilingual acoustic frame-level speech representation learning model XLSR with the Language Agnostic BERT Sentence Embedding (LaBSE) model to create an utterance-level multimodal multilingual speech encoder, SAMU-XLSR. Although we train SAMU-XLSR with only multilingual transcribed speech data, cross-lingual speech-text and speech-speech associations emerge in its learned representation space. To substantiate our claims, we use the SAMU-XLSR speech encoder in combination with the pre-trained LaBSE text sentence encoder for cross-lingual speech-to-text translation retrieval, and SAMU-XLSR alone for cross-lingual speech-to-speech translation retrieval. We highlight these applications on cross-lingual text and speech translation retrieval tasks across several datasets.
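To make the overall idea concrete, the sketch below illustrates one way the described architecture could be wired together: frame-level features from a pretrained XLSR-style encoder are pooled into a single utterance vector and trained to match the (frozen) LaBSE embedding of the utterance's transcript. The class names (ToyFrameEncoder, UtteranceSpeechEncoder, alignment_loss), the mean pooling, and the cosine-distance loss are illustrative assumptions, not the paper's exact design; the abstract only states that XLSR and LaBSE are combined into an utterance-level encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyFrameEncoder(nn.Module):
    """Stand-in for a pretrained XLSR model: maps raw waveform to frame vectors."""

    def __init__(self, frame_dim: int = 1024):
        super().__init__()
        self.conv = nn.Conv1d(1, frame_dim, kernel_size=400, stride=320)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:  # wav: (batch, n_samples)
        return self.conv(wav.unsqueeze(1)).transpose(1, 2)  # (batch, n_frames, frame_dim)


class UtteranceSpeechEncoder(nn.Module):
    """Sketch of an utterance-level encoder in the spirit of SAMU-XLSR:
    frame-level speech features are pooled into one vector per utterance and
    projected into the dimension of the LaBSE sentence-embedding space."""

    def __init__(self, frame_encoder: nn.Module, frame_dim: int, text_dim: int = 768):
        super().__init__()
        self.frame_encoder = frame_encoder           # pretrained multilingual speech model
        self.proj = nn.Linear(frame_dim, text_dim)   # project to the text embedding size

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        frames = self.frame_encoder(waveform)        # (batch, n_frames, frame_dim)
        pooled = frames.mean(dim=1)                  # assumed: simple mean pooling over frames
        return torch.tanh(self.proj(pooled))         # (batch, text_dim) utterance embedding


def alignment_loss(speech_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
    """Assumed training objective: pull each utterance's speech embedding toward
    the frozen LaBSE embedding of its transcript via cosine distance."""
    return (1.0 - F.cosine_similarity(speech_emb, text_emb, dim=-1)).mean()


if __name__ == "__main__":
    encoder = UtteranceSpeechEncoder(ToyFrameEncoder(1024), frame_dim=1024)
    wav = torch.randn(2, 16000 * 5)                          # two 5-second utterances at 16 kHz
    labse_emb = F.normalize(torch.randn(2, 768), dim=-1)     # placeholder for LaBSE transcript embeddings
    loss = alignment_loss(encoder(wav), labse_emb)
    print(f"alignment loss: {loss.item():.4f}")
```

Because both encoders map into the same semantically aligned space, cross-lingual speech-to-text retrieval can then reduce to nearest-neighbour search by cosine similarity between SAMU-XLSR utterance embeddings and LaBSE sentence embeddings.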