Conference paper, Year: 2022

Self-supervised speech processing

Gérard Chollet
Yingzhi Wang
  • Role: Author

Abstract

Most speech processing applications require a large set of labeled data to train their models. The labeling process is costly, error-prone, and time-consuming. The question is: is it possible to make use of unlabeled data to facilitate the training process? Additionally, is there a way to train a powerful feature embedding that can benefit downstream fine-tuning and multi-task training? Self-Supervised Learning (SSL) appears to answer these questions. Quoting Yann LeCun, who popularized this approach around 2019: "In Self-Supervised Learning, the system learns to predict part of its input from other parts of its input." This principle has been successfully applied to text processing and computer vision. This presentation focuses on recent developments of the same principle for speech applications. Self-supervised speech representation learning is closely related to acoustic word embedding and to learning with no lexical resources. The resulting vector embeddings can then be used for a variety of applications such as recognition, synthesis, speaker verification, emotion detection, etc. With their powerful expressive and adaptive abilities, self-supervised models have brought revolutionary improvements in performance on almost all of these speech tasks.
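As a concrete illustration of the workflow the abstract describes (extracting SSL embeddings and reusing them for downstream tasks), the minimal sketch below is not taken from the presentation itself; it assumes the Hugging Face transformers library and the public facebook/wav2vec2-base checkpoint, one of many pretrained self-supervised speech encoders that could stand in here.

```python
# Minimal sketch: extract self-supervised speech embeddings with a
# pretrained wav2vec 2.0 encoder, then pool them into a single
# utterance-level vector for downstream use (speaker verification,
# emotion detection, etc.). Assumes `torch` and `transformers` are
# installed; the checkpoint name is an assumption, not from the talk.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
model.eval()

# One second of dummy 16 kHz audio stands in for a real utterance.
waveform = torch.randn(16000)

inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Frame-level embeddings of shape (batch, frames, hidden); mean-pooling
# gives one fixed-size utterance embedding for a downstream classifier.
frame_embeddings = outputs.last_hidden_state
utterance_embedding = frame_embeddings.mean(dim=1)
print(utterance_embedding.shape)  # e.g. torch.Size([1, 768])
```

In practice the pooled vector would feed a small task-specific head that is fine-tuned on the (much smaller) labeled set, which is the downstream fine-tuning benefit the abstract refers to.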
Plenary Session_Pr. CHOLLET_ATSIP22.pdf (5.41 MB): Download the file
Origin: Files produced by the author(s)

Dates and versions

hal-04392097, version 1 (13-01-2024)

Identifiers

  • HAL Id: hal-04392097, version 1

Cite

Gérard Chollet, Yingzhi Wang. Self-supervised speech processing. The 6th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP'2022), ATMS, May 2022, Moncton, New Brunswick, Canada. ⟨hal-04392097⟩