Conference paper - 2023

Fine-tuning strategies for faster inference using speech self-supervised models: a comparative study

Abstract

Self-supervised learning (SSL) has enabled substantial progress in Automatic Speech Recognition (ASR) performance in low-resource settings. In this context, it has been shown that larger self-supervised feature extractors are crucial for achieving lower downstream ASR error rates, so better performance tends to come at the cost of longer inference times. This article explores different approaches that may be deployed during fine-tuning to reduce the computation needed in the SSL encoder, leading to faster inference. We adapt a number of existing techniques to common ASR settings and benchmark them, reporting performance drops and gains in inference time. Interestingly, we find that, given enough downstream data, a simple downsampling of the input sequences outperforms the other methods, combining low performance degradation with high computational savings: it reduces computation by 61.3% with a word error rate (WER) increase of only 0.81 points. Finally, we analyze the robustness of the comparison to changes in dataset conditions, revealing sensitivity to dataset size.
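To make the best-performing strategy concrete, the sketch below illustrates one plausible way to downsample input sequences before a pre-trained speech SSL encoder. It is a minimal sketch rather than the authors' exact recipe: the torchaudio wav2vec 2.0 bundle and the downsampling factor of 2 are assumptions chosen for illustration only.

    # Illustrative sketch only (assumptions: torchaudio's WAV2VEC2_BASE bundle and a
    # downsampling factor of 2; the paper's exact setup may differ).
    import torch
    import torchaudio

    bundle = torchaudio.pipelines.WAV2VEC2_BASE
    model = bundle.get_model().eval()

    def downsampled_encode(waveform: torch.Tensor, factor: int = 2) -> torch.Tensor:
        # Shorten the input sequence before the SSL encoder: fewer input samples
        # mean fewer frames for the transformer layers, where most of the
        # inference cost lies. A strided slice is the simplest option; an
        # anti-aliased resampler (torchaudio.transforms.Resample) could be used instead.
        shorter = waveform[:, ::factor]
        with torch.inference_mode():
            features, _ = model.extract_features(shorter)
        return features[-1]  # representations from the last transformer layer

    wav = torch.randn(1, 16000)              # dummy 1-second, 16 kHz waveform
    reps = downsampled_encode(wav, factor=2)
    print(reps.shape)                        # roughly half as many frames as the full-rate input

With a factor of 2, the transformer processes about half as many frames, which is where the computational savings reported in the abstract come from; the exact performance/latency trade-off depends on the factor and on the amount of downstream fine-tuning data.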

Identifiers

  • HAL Id: hal-04076307, version 1 (20-04-2023)

Cite

Salah Zaiem, Robin Algayres, Titouan Parcollet, Slim Essid, Mirco Ravanelli. Fine-tuning strategies for faster inference using speech self-supervised models: a comparative study. ICASSP 2023 - International Conference on Acoustics, Speech, and Signal Processing, Jun 2023, Rhodes, Greece. ⟨hal-04076307⟩