Conference paper, Year: 2022

Dispeech: A Synthetic Toy Dataset for Speech Disentangling

Abstract

Recently, a growing interest in unsupervised learning of disentangled representations has been observed, with successful applications to both synthetic and real data. In speech processing, such methods have been able to disentangle speakers’ attributes from verbal content. To better understand disentanglement, synthetic data is necessary, as it provides a controllable framework for training models and evaluating disentanglement. We therefore introduce diSpeech, a corpus of speech synthesized with the Klatt synthesizer. Its first version is restricted to vowels synthesized from 5 generative factors based on pitch and formants. Experiments show the ability of variational autoencoders to disentangle these generative factors and assess the reliability of disentanglement metrics. In addition to providing a benchmark for speech disentanglement methods, diSpeech also enables the objective evaluation of disentanglement on real speech, which is, to our knowledge, unprecedented. To illustrate this methodology, we apply it to TIMIT’s isolated vowels.
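
To give a rough idea of how vowel stimuli with known pitch and formant factors can be produced, the sketch below implements a minimal cascade formant synthesizer in Python (an impulse-train source filtered by second-order resonators). It is only an illustrative approximation: the actual diSpeech corpus is generated with the full Klatt synthesizer, and the sample rate, formant frequencies and bandwidths shown here are assumed textbook values, not the corpus settings.

import numpy as np
from scipy.signal import lfilter

SR = 16000  # sample rate in Hz (assumed value, not necessarily that of diSpeech)

def resonator(x, freq, bandwidth, sr=SR):
    # Second-order IIR resonator (Klatt-style), normalized to unity gain at DC.
    r = np.exp(-np.pi * bandwidth / sr)
    theta = 2.0 * np.pi * freq / sr
    a = [1.0, -2.0 * r * np.cos(theta), r ** 2]
    b = [1.0 - 2.0 * r * np.cos(theta) + r ** 2]
    return lfilter(b, a, x)

def synth_vowel(f0, formants, bandwidths, dur=0.5, sr=SR):
    # The glottal source is approximated by an impulse train at the fundamental frequency f0.
    n = int(dur * sr)
    source = np.zeros(n)
    source[::int(round(sr / f0))] = 1.0
    out = source
    # Cascade of formant resonators shapes the spectral envelope.
    for f, bw in zip(formants, bandwidths):
        out = resonator(out, f, bw, sr)
    return out / np.max(np.abs(out))

# Example: an /a/-like vowel at 120 Hz (formant and bandwidth values are rough, illustrative figures).
wave = synth_vowel(f0=120, formants=[700, 1200, 2600, 3300], bandwidths=[80, 90, 120, 130])

Because the pitch and formant values are set explicitly at synthesis time, the generative factors of each stimulus are known by construction, which is what makes an objective evaluation of disentanglement possible on such data.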

Dates and versions

hal-03975832, version 1 (06-02-2023)

Identifiers

Cite

Olivier Zhang, Nicolas Gengembre, Olivier Le Blouch, Damien Lolive. Dispeech: A Synthetic Toy Dataset for Speech Disentangling. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2022, Singapore, Singapore. pp.8557-8561, ⟨10.1109/ICASSP43922.2022.9747011⟩. ⟨hal-03975832⟩