Promises and Limitations of Self-supervised Learning for Automatic Speech Processing
Promesses et Limites de l’Apprentissage Auto-Supervisé pour le Traitement Automatique de la Parole
Abstract
Self-supervised learning (SSL) has recently been successfully introduced as a training strategy for Transformer-based neural models. Thanks to this approach, these models can now construct speech representations using only audio data, without any manual labels (i.e., no supervision). Once trained, they can be leveraged to train competitive end-to-end speech processing models with smaller amounts of annotated data. Moreover, when annotated data is plentiful, automatic speech recognition (ASR) and speech translation (AST) systems based on these SSL models now set the new state of the art. In this work, we are interested in their application to challenging settings that are relevant for security. We measure the robustness of an SSL model trained on French to African-accented speech, and we present some promising but limited results for speech translation without the use of transcriptions.