Exploring the multidimensional representation of unidimensional speech acoustic parameters extracted by deep unsupervised models - Archive ouverte HAL
Conference paper, Year: 2023

Exploring the multidimensional representation of unidimensional speech acoustic parameters extracted by deep unsupervised models

Abstract

Understanding the latent representations of speech encoded by deep unsupervised models is fundamental to unlocking the full potential of neural approaches for signal analysis, transformation, and generation. While prior studies have identified the directions of variation of individual acoustic parameters, such as fundamental frequency or formant frequencies, within deep latent spaces, it has also been demonstrated that these variations are often explained by multiple latent dimensions. This raises the following question: why are multiple dimensions needed to encode one-dimensional parameters within these latent spaces? Among the possible interactions between acoustic parameters, our hypothesis, explored in this study, is that the different dimensions may reflect the different sources of inter- and intra-individual variability of each acoustic parameter. In the framework of a variational autoencoder (VAE) trained on a multi-speaker database, this work proposes a novel methodology to identify the role of these intricate dimensions within the latent space. Specifically, to interpret the multi-dimensional representation of individual acoustic parameters, we: 1) tailored two test datasets, with either controlled variation of single acoustic parameters (synthetic speech) or uncontrolled co-variations of all acoustic parameters (natural speech); 2) analysed the directions of variation of those parameters in the latent space of the VAE with linear methods, including principal component analysis, linear discriminant analysis, and linear regression.

Our investigation first confirmed that each of the acoustic parameters mentioned above, essential in characterising speech, is encoded along multiple directions of the VAE's latent space. Among those directions, we showed that one of them directly encodes the global shape of the parameter distribution seen in the training set, highlighting the impact of the training dataset on the behaviour of our model. We then showed that parameter values belonging to each mode of the distribution are encoded along additional, distinct dimensions. In the particular case of the fundamental frequency, whose distribution is bimodal (corresponding to the two genders), values belonging to the two modes are encoded along two additional, distinct dimensions. Given those findings, we sought latent directions of variation of acoustic parameters within and between the modes of multi-modal distributions, and found disentangled directions in the VAE latent space that explain the between- and within-gender variations.

In summary, our research underscores the pivotal role of latent spaces in deep unsupervised models for speech representation learning. While several studies have used latent space dimension reduction, addressed the orthogonality of the different directions that explain a given parameter, or identified the variation of acoustic parameters in the latent space, this work is one of the few to interpret the multidimensional representation of each unidimensional acoustic parameter, by introducing a systematic methodology that combines specifically designed test sets with linear analysis methods.
We believe that our research highlights the need for more interpretable representations, and that our findings on the unsupervised representation of the inter- and intra-individual variability of each acoustic parameter are a first step towards finely controllable speech encoding-decoding models, crucial for speech analysis, transformation, and synthesis.
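To make the linear-analysis step of the abstract concrete, here is a minimal, hypothetical Python sketch (not the authors' code) of how one might probe a VAE latent space for the directions encoding a one-dimensional parameter such as F0, using the three linear tools named above (PCA, linear regression, LDA). The arrays `latents`, `f0`, and `gender` are synthetic stand-ins for real encoder outputs and annotations.

```python
# Hedged sketch: identifying latent direction(s) along which a one-dimensional
# acoustic parameter (here F0) varies in a trained VAE latent space, using the
# linear tools named in the abstract. All data below are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Stand-ins for real data: latent codes z = encoder(x) for a test set with
# known parameter values. Here: 1000 frames, 16 latent dimensions.
latents = rng.normal(size=(1000, 16))          # z-vectors from the VAE encoder
f0 = rng.normal(loc=150, scale=40, size=1000)  # per-frame F0 in Hz
gender = (f0 > 160).astype(int)                # toy bimodal split of the F0 modes

# 1) PCA on codes of a *controlled* test set (only F0 varies): the leading
#    components span the subspace in which F0 is encoded; several components
#    with non-negligible variance suggest a multidimensional encoding.
pca = PCA(n_components=3).fit(latents)
print("variance explained by top-3 directions:", pca.explained_variance_ratio_)

# 2) Linear regression from latent codes to the parameter: the weight vector
#    gives one direction of variation, and R^2 measures how much of the
#    parameter a single linear direction explains.
reg = LinearRegression().fit(latents, f0)
direction = reg.coef_ / np.linalg.norm(reg.coef_)
print("R^2 of one linear F0 direction:", reg.score(latents, f0))

# 3) LDA between the two modes of a bimodal parameter (here, gender): the
#    discriminant axis is a between-mode direction, which can be compared
#    (e.g. by cosine similarity) with within-mode regression directions
#    fitted separately on each mode.
lda = LinearDiscriminantAnalysis(n_components=1).fit(latents, gender)
between_axis = lda.coef_[0] / np.linalg.norm(lda.coef_[0])
print("cosine(between-mode axis, F0 direction):",
      float(np.dot(between_axis, direction)))
```

On real encoder outputs, a low per-direction R^2 combined with a good multi-dimensional fit, or near-orthogonal between- and within-mode axes, would be consistent with the multidimensional, disentangled encoding reported in the abstract.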
No file deposited

Dates and versions

hal-04416200, version 1 (26-01-2024)

Licence

Attribution

Identifiers

  • HAL Id: hal-04416200, version 1

Cite

Maxime Jacquelin, Maëva Garnier, Laurent Girin, Rémy Vincent, Olivier Perrotin. Exploring the multidimensional representation of unidimensional speech acoustic parameters extracted by deep unsupervised models. Journée commune AFIA-TLH / AFCP – “Extraction de connaissances interprétables pour l’étude de la communication parlée”, AFIA-TLH; AFCP, Dec 2023, Avignon, France. ⟨hal-04416200⟩
31 Views
4 Downloads
