On the Benefits of Self-supervised Learned Speech Representations for Predicting Human Phonetic Misperceptions
Abstract
Deep neural networks (DNNs) trained by self-supervised learning (SSL) have recently been shown to produce representations similar to brain activations for the same speech input. Can SSL representations help explain human speech perception errors? Aiming to shed light on this question, we study their use for phonetic misperception prediction. We extract representations from wav2vec 2.0, a recent SSL architecture for speech, and use them to compute features for a model predicting the presence of phonetic perception errors in speech-in-noise signals. We perform our experiments on a corpus of over 3000 consistent word-in-noise confusions in English. We consider multiple SSL-based features and compare them against conventional acoustic baselines and against features obtained from DNNs fine-tuned through supervised learning for automatic speech recognition (ASR). Our results show the superiority of SSL representations when extracted from the proper layer, further suggesting their potential to model human speech perception.
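To make the feature-extraction step concrete, the sketch below shows one way to pull layer-wise hidden states from a pretrained wav2vec 2.0 model using the Hugging Face transformers library. The abstract does not specify the checkpoint, toolkit, layer index, or pooling strategy; the choices here ("facebook/wav2vec2-base", layer 6, mean pooling) are illustrative assumptions, not the paper's exact setup.

```python
import torch
from transformers import Wav2Vec2Model, Wav2Vec2FeatureExtractor

# Assumed checkpoint; the paper does not name the exact model used.
CHECKPOINT = "facebook/wav2vec2-base"

model = Wav2Vec2Model.from_pretrained(CHECKPOINT)
extractor = Wav2Vec2FeatureExtractor.from_pretrained(CHECKPOINT)
model.eval()

# Dummy input: 1 second of audio at 16 kHz (wav2vec 2.0's expected rate).
waveform = torch.randn(16000)
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# outputs.hidden_states is a tuple of (num_layers + 1) tensors, each of
# shape (batch, frames, hidden_dim). Which layer best predicts human
# misperceptions is the empirical question the paper investigates;
# layer 6 is an arbitrary placeholder here.
layer = 6
frame_features = outputs.hidden_states[layer].squeeze(0)  # (frames, hidden_dim)

# A simple utterance-level feature vector: mean-pool over frames. This
# pooled vector could then feed a downstream misperception classifier.
pooled = frame_features.mean(dim=0)  # (hidden_dim,)
```

Since `output_hidden_states=True` returns every layer's activations in a single forward pass, a layer-wise comparison of the kind the abstract alludes to amounts to repeating the pooling and classification over each element of `outputs.hidden_states`.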