Conference paper · Year: 2023

A vector quantized masked autoencoder for speech emotion recognition

Abstract

Recent years have seen remarkable progress in speech emotion recognition (SER), thanks to advances in deep learning techniques. However, the limited availability of labeled data remains a significant challenge in the field. Self-supervised learning has recently emerged as a promising solution to address this challenge. In this paper, we propose the vector quantized masked autoencoder for speech (VQ-MAE-S), a self-supervised model that is fine-tuned to recognize emotions from speech signals. The VQ-MAE-S model is based on a masked autoencoder (MAE) that operates in the discrete latent space of a vector quantized variational autoencoder. Experimental results show that the proposed VQ-MAE-S model, pre-trained on the VoxCeleb2 dataset and fine-tuned on emotional speech data, outperforms an MAE working on the raw spectrogram representation and other state-of-the-art methods in SER.
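To make the pre-training objective concrete, below is a minimal, hypothetical PyTorch sketch: discrete token indices, standing in for the output of a pre-trained VQ-VAE tokenizer, are partially masked, and a Transformer encoder is trained to recover the original indices. All module names, hyperparameters, and the random tokens are illustrative assumptions, not the authors' implementation; the actual VQ-MAE-S architecture follows the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedTokenMAE(nn.Module):
    # Hypothetical sketch of MAE-style pre-training over discrete
    # VQ-VAE token indices (not the authors' code).
    def __init__(self, codebook_size=512, dim=256, depth=4, heads=4, max_len=1024):
        super().__init__()
        self.mask_id = codebook_size                       # extra [MASK] index
        self.embed = nn.Embedding(codebook_size + 1, dim)  # codebook + [MASK]
        self.pos = nn.Parameter(torch.zeros(1, max_len, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, codebook_size)          # predict original index

    def forward(self, tokens, mask_ratio=0.5):
        # tokens: (batch, seq) integer indices from a pre-trained VQ-VAE.
        mask = torch.rand(tokens.shape, device=tokens.device) < mask_ratio
        corrupted = tokens.masked_fill(mask, self.mask_id)
        x = self.embed(corrupted) + self.pos[:, : tokens.size(1)]
        logits = self.head(self.encoder(x))
        # Cross-entropy on the masked positions only; fine-tuning for
        # SER would swap this head for an emotion classifier.
        return F.cross_entropy(logits[mask], tokens[mask])

# Stand-in for VQ-VAE indices of a speech spectrogram.
tokens = torch.randint(0, 512, (8, 100))
loss = MaskedTokenMAE()(tokens)
loss.backward()

Operating on discrete codebook indices rather than raw spectrogram patches is what distinguishes this objective from the standard MAE baseline the paper compares against.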
Main file

2304.11117.pdf (652.66 KB)
Origin: files produced by the author(s)

Dates and versions

hal-04080024, version 1 (24-04-2023)

Identifiers

hal-04080024

Cite

Samir Sadok, Simon Leglaive, Renaud Séguier. A vector quantized masked autoencoder for speech emotion recognition. IEEE ICASSP 2023 Workshop on Self-Supervision in Audio, Speech and Beyond (SASB), Jun 2023, Rhodes, Greece. ⟨hal-04080024⟩