A Closer Look at Latent Representations of End-to-end TTS Models - Archive ouverte HAL
Conference Poster, Year: 2023

A Closer Look at Latent Representations of End-to-end TTS Models

Abstract

In recent years, deep neural architectures have achieved groundbreaking performance in various speech processing areas, including Text-To-Speech (TTS), but at the expense of the interpretability of the intermediate representations they compute. Yet the statistical learning performed by these neural models constitutes a valuable source of information about speech and language. The present study develops statistical tools to narrow the gap between these new processing techniques and the speech sciences. By identifying phonetic and acoustic features in model representations, the proposed methods help us understand how neural TTS models organise speech information in an unsupervised manner, and provide new insights into the phonetic regularities captured by statistical learning on massive data.

We introduce a methodology for the analysis of any phonetic or acoustic feature in any intermediate representation of two state-of-the-art sequence-to-sequence TTS models, Tacotron2 and FastSpeech2, without the need for additional data or training. In particular, we show that acoustic features measured on the output synthetic speech can be approximated by multi-linear predictors from the output of any layer of these models. The direction of variation of each acoustic feature in an intermediate representation is given by the regression coefficients.

Analysis of the goodness of fit of the multi-linear regression (R²) for each model, each intermediate layer and each acoustic feature first demonstrated that segmental acoustic features (formant frequencies, spectral tilt, centre of gravity) are gradually encoded throughout both models, with the highest fit at the end of the decoder. This shows that segmental features are not completely encoded in the text encoder and that the decoder is needed to complement this information, likely by modelling co-articulation factors. The gradual encoding of segmental features also highlights the early computation of phonetic representations by the models. This hypothesis was confirmed by adapting the proposed method to a linear phoneme classification task from the output of each layer. Supra-segmental features (fundamental frequency, duration, energy), on the other hand, are mostly encoded at the output of the text encoder. The fundamental frequency and energy predictors natively implemented in FastSpeech2 constrain this behaviour, whereas Tacotron2 linearly encodes these features by default.

The identification of the intermediate layers that display the best linear representation of acoustic features opens the route towards designing more careful control architectures for neural TTS. As an example, we showed how explicit biases can be inferred from the direction of variation of each acoustic feature computed in intermediate representations, and added with a controllable gain to those representations to vary the corresponding acoustic feature value. This control mechanism was evaluated at various levels of internal representation, and we reached highly accurate control of acoustic features on the intermediate layers that displayed the highest regression goodness of fit. The localisation of phonetic representations in the model also allows for discrete control of phonological processes such as French liaisons and pauses. The combined control of continuous prosodic features and discrete representations was evaluated through a listening test, which showed the benefits of the proposed embedding bias method for manipulating the speaking rate.
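To make the probing recipe concrete, here is a minimal sketch in Python of a multi-linear probe over one intermediate layer. It assumes that layer activations have already been extracted and aligned with one acoustic feature measured on the synthesised speech; the function name probe_layer and the scikit-learn-based implementation are illustrative assumptions, not the authors' released code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

def probe_layer(hidden_states: np.ndarray, feature: np.ndarray):
    """Fit a multi-linear predictor from one layer's activations to one
    acoustic feature (e.g. a formant frequency, spectral tilt, or F0).

    hidden_states : (n_samples, hidden_dim) activations of one layer
    feature       : (n_samples,) feature values measured on the output speech
    """
    X_tr, X_te, y_tr, y_te = train_test_split(
        hidden_states, feature, test_size=0.2, random_state=0)
    reg = LinearRegression().fit(X_tr, y_tr)
    r2 = reg.score(X_te, y_te)   # goodness of fit (R²) on held-out samples
    direction = reg.coef_        # direction of variation of the feature
                                 # in this layer's latent space
    return r2, direction
```

Sweeping such a probe over every layer of each model, for each feature, yields the layer-wise R² profiles described above; replacing the regression with a linear classifier (e.g. logistic regression) over phoneme labels gives the linear phoneme classification probe.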
Overall, the proposed analysis highlighted how acoustic and phonetic features are linearly encoded in intermediate latent representations. The proposed methodology can be applied to any encoder-decoder architecture, as well as to any acoustic parameter, whether continuous or categorical, without the need for additional data or training, and paves the way towards more controllable speech generation systems.
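As a companion to the probe above, here is a minimal sketch of the embedding bias control mechanism, under the assumption that the feature is linearly encoded at the chosen layer; apply_bias is a hypothetical helper, not the authors' implementation.

```python
import numpy as np

def apply_bias(hidden_states: np.ndarray, direction: np.ndarray,
               gain: float) -> np.ndarray:
    """Shift a layer's activations along one acoustic feature's direction
    of variation, as estimated by the regression coefficients.

    hidden_states : (n_frames, hidden_dim) intermediate representation
    direction     : (hidden_dim,) regression coefficients for the feature
    gain          : desired change of the feature, in the feature's own
                    units, valid to the extent the encoding is linear
    """
    # Scale the direction so that a shift of `gain * unit` moves the linear
    # predictor's output by exactly `gain`.
    unit = direction / np.dot(direction, direction)
    return hidden_states + gain * unit
```

Applying such a bias at the layers with the highest regression goodness of fit (the end of the decoder for segmental features, the output of the text encoder for prosodic ones) is where the abstract reports the most accurate control.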
Main file: Lenglet2023_AFIA-AFCP_poster.pdf (4.19 MB)
Origin: Files produced by the author(s)
License: CC BY - Attribution

Dates and versions

hal-04269953, version 1 (23-01-2024)

Identifiers

  • HAL Id: hal-04269953, version 1

Cite

Martin Lenglet, Olivier Perrotin, Gérard Bailly. A Closer Look at Latent Representations of End-to-end TTS Models. Journée commune AFIA-TLH / AFCP – “Extraction de connaissances interprétables pour l’étude de la communication parlée”, Dec 2023, Avignon, France. ⟨hal-04269953⟩