Improving transfer of expressivity for end-to-end multispeaker text-to-speech synthesis
Abstract
The main goal of this work is to generate expressive speech in different speakers' voices for which no expressive speech data is available. The presented approach conditions Tacotron 2 speech synthesis on latent representations extracted from the text, the speaker identity, and a reference expressive Mel spectrogram. We propose to use the multiclass N-pair loss in end-to-end multispeaker expressive Text-To-Speech (TTS) to improve the transfer of expressivity to the target speaker's voice. We jointly trained the end-to-end (E2E) TTS with the multiclass N-pair loss to discriminate between the various emotions. This augmentation of the loss function during training improves the latent-space representation of emotions.
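For reference, a sketch of the multiclass N-pair loss in its standard form (Sohn, 2016), where f(·) denotes the embedding produced by the expressivity encoder, x is the anchor utterance, x⁺ shares the anchor's emotion, and the x_i⁻ are samples of the other N−1 emotions; the exact variant used in this work is not detailed in this abstract:

\[
\mathcal{L}_{\text{N-pair}}\big(x, x^{+}, \{x_i^{-}\}_{i=1}^{N-1}\big)
= \log\Big(1 + \sum_{i=1}^{N-1} \exp\big(f(x)^{\top} f(x_i^{-}) - f(x)^{\top} f(x^{+})\big)\Big)
\]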
We experimented with two neural network architectures for the expressivity encoder, namely the global style token (GST) and the variational autoencoder (VAE). We transferred expressivity using the mean of the latent representations extracted from the expressivity encoder for each emotion. The obtained results show that adding multiclass N-pair loss-based deep metric learning to the training process improves expressivity in the desired speaker's voice.
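As an illustration of the transfer step, here is a minimal sketch of how the per-emotion mean latent vector could be computed and then used to condition synthesis; the function and variable names are hypothetical, and the actual synthesiser interface is not described in this abstract.

import numpy as np

def emotion_centroids(latents, labels):
    """Average the expressivity-encoder latents per emotion.

    latents: (num_utterances, latent_dim) embeddings of reference Mel spectrograms
    labels:  (num_utterances,) emotion label of each reference utterance
    Returns a dict mapping each emotion label to its mean latent vector.
    """
    return {emo: latents[labels == emo].mean(axis=0) for emo in np.unique(labels)}

# Hypothetical usage at synthesis time: condition the decoder on the centroid
# of the desired emotion rather than on a reference spectrogram, so that the
# emotion can be rendered in a target speaker's voice lacking expressive data.
# mel = tacotron2.infer(text, speaker_id, style_embedding=centroids["joy"])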
Domains
Computer Science [cs]