Conference paper · Year: 2023

On the Benefits of Self-supervised Learned Speech Representations for Predicting Human Phonetic Misperceptions

Santiago Cuervo
Ricard Marxer

Abstract

Deep neural networks (DNNs) trained by self-supervised learning (SSL) have recently been shown to produce representations similar to brain activations for the same speech input. Can SSL representations help to explain human speech perception errors? Aiming to shed light on this question, we study their use for phonetic misperception prediction. We extract representations from wav2vec 2.0, a recent SSL architecture for speech, and use them to compute features for a model predicting the presence of phonetic perception errors in speech-in-noise signals. We perform our experiments on a corpus of over 3000 consistent word-in-noise confusions in English. We consider multiple SSL-based features and compare them against conventional acoustic baselines and features obtained from DNNs fine-tuned through supervised learning for ASR. Our results show the superiority of SSL representations when extracted from the proper layer, further suggesting their potential to model human speech perception.
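As a rough illustration of the kind of feature extraction described above, the sketch below pulls hidden-layer representations from a pretrained wav2vec 2.0 checkpoint with the Hugging Face transformers library; the checkpoint name, layer index, and mean pooling over frames are illustrative assumptions, not the paper's exact setup.

```python
# Illustrative sketch only: extract layer-wise wav2vec 2.0 representations for a
# (noisy) speech signal, which could then feed a misperception classifier.
# The checkpoint, layer index, and pooling are assumptions, not the paper's setup.
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

MODEL_NAME = "facebook/wav2vec2-base"  # assumed checkpoint, for illustration
extractor = Wav2Vec2FeatureExtractor.from_pretrained(MODEL_NAME)
model = Wav2Vec2Model.from_pretrained(MODEL_NAME)
model.eval()

def layer_features(waveform: np.ndarray, sample_rate: int = 16000, layer: int = 8) -> torch.Tensor:
    """Return time-averaged hidden states from one transformer layer."""
    inputs = extractor(waveform, sampling_rate=sample_rate, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True)
    # hidden_states is a tuple of (num_layers + 1) tensors of shape (1, frames, dim)
    hidden = outputs.hidden_states[layer]
    return hidden.mean(dim=1).squeeze(0)  # simple mean pooling over frames

# Example: features for one second of (here, random) 16 kHz audio
feats = layer_features(np.random.randn(16000).astype(np.float32))
print(feats.shape)  # torch.Size([768]) for the base model
```

In the paper's setting, such per-layer features are compared across layers and against conventional acoustic baselines and representations from ASR-fine-tuned models.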
Main file: cuervo23_interspeech.pdf (548.83 KB)
Origin: Publisher files authorized on an open archive

Dates and versions

hal-04194225, version 1 (02-09-2023)

Identifiers

  • HAL Id: hal-04194225
  • DOI: 10.21437/Interspeech.2023-1476

Cite

Santiago Cuervo, Ricard Marxer. On the Benefits of Self-supervised Learned Speech Representations for Predicting Human Phonetic Misperceptions. INTERSPEECH 2023, Aug 2023, Dublin, Ireland. pp.1788-1792, ⟨10.21437/Interspeech.2023-1476⟩. ⟨hal-04194225⟩