Improving Latent Representation For End To End Multispeaker Expressive Text To Speech System
Abstract
The main goal of this work is to generate expressive speech in different speakers' voices for which no expressive speech data is available. To do that, we propose to use a multiclass N-pair loss in an end-to-end multispeaker expressive Text-To-Speech (TTS) system to improve the transfer of expressivity to the target speaker's voice. This augmentation of the loss function during training paves the way to enhancing the latent space representation of emotions. The presented approach conditions a Tacotron-based end-to-end system on latent representations extracted from an expressivity encoder. We jointly trained the end-to-end (E2E) TTS with the multiclass N-pair loss to discriminate between various emotions. We experimented with two neural network architectures for the expressivity encoder, namely global style tokens (GST) and a variational autoencoder (VAE). We transferred expressivity using the mean of the latent representations extracted from the expressivity encoder for each emotion. The obtained results show that adding multiclass N-pair loss based deep metric learning to the training process improves expressivity in the desired speaker's voice.
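To make the role of the multiclass N-pair loss concrete, the following is a minimal sketch, assuming a PyTorch setup, of how such a loss could be computed on emotion embeddings produced by the expressivity encoder. The function name, tensor shapes, and batch construction (one anchor and one positive per emotion class) are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def multiclass_n_pair_loss(anchors: torch.Tensor, positives: torch.Tensor) -> torch.Tensor:
    """Sketch of a multiclass N-pair loss over emotion embeddings.

    anchors:   (N, D) embeddings, one per emotion class in the batch
    positives: (N, D) embeddings of the same emotions from other utterances
    Assumes each row corresponds to a distinct emotion class.
    """
    # Similarity between every anchor and every positive: (N, N)
    logits = anchors @ positives.t()
    # The matching positive lies on the diagonal; off-diagonal entries
    # act as negatives drawn from the other emotion classes.
    targets = torch.arange(anchors.size(0), device=anchors.device)
    # Softmax cross-entropy over similarities is equivalent to
    # log(1 + sum_j exp(f·f_j^- - f·f^+)) averaged over the batch.
    return F.cross_entropy(logits, targets)
```

In a joint training setup such as the one described above, this term would typically be added to the usual TTS reconstruction loss with a weighting coefficient, encouraging the expressivity encoder to separate emotions in its latent space.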
Domains
Computer Science [cs]