Alternate Endings: Improving Prosody for Incremental Neural TTS with Predicted Future Text Input
Conference paper, 2021


Abstract

Inferring the prosody of a word in text-to-speech synthesis requires information about its surrounding context. In incremental text-to-speech synthesis, where the synthesizer produces an output before it has access to the complete input, the full context is often unknown, which can result in a loss of naturalness. In this paper, we investigate whether the use of predicted future text from a transformer language model can attenuate this loss in a neural TTS system. We compare several test conditions for the next future word: (a) unknown (zero-word), (b) language-model predicted, (c) randomly predicted and (d) ground-truth. We measure the prosodic features (pitch, energy and duration) and find that predicted text provides significant improvements over a zero-word lookahead, but only slight gains over a random-word lookahead. We confirm these results with a perceptual test.
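To make the comparison concrete, the sketch below (not the authors' code) shows how the four lookahead conditions could be constructed for a single word position. It assumes GPT-2 from the HuggingFace transformers library as the predictive language model; the example sentence, the random-word pool and the function names are purely illustrative, and the TTS synthesizer itself is out of scope.

# Minimal sketch of the four next-word lookahead conditions (a)-(d);
# GPT-2 stands in for the transformer language model assumed above.
import random
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def predicted_next_word(prefix: str) -> str:
    """Greedily decode a short continuation and keep only its first word."""
    ids = tokenizer.encode(prefix, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(ids, max_new_tokens=3, do_sample=False,
                             pad_token_id=tokenizer.eos_token_id)
    continuation = tokenizer.decode(out[0, ids.shape[1]:]).strip()
    return continuation.split()[0] if continuation else ""

sentence = "the weather tomorrow will be sunny".split()
i = 3                                    # synthesize up to and including word i
prefix = " ".join(sentence[: i + 1])

conditions = {
    "zero-word":    prefix,                                      # (a) no lookahead
    "lm-predicted": f"{prefix} {predicted_next_word(prefix)}",   # (b) LM-predicted lookahead
    "random-word":  f"{prefix} {random.choice(['table', 'blue', 'seven'])}",  # (c) random lookahead
    "ground-truth": f"{prefix} {sentence[i + 1]}",               # (d) true next word
}
for name, text in conditions.items():
    print(f"{name:>13}: {text}")         # each string would be passed to the TTS front end

In the incremental setting, only the prefix up to the current word plus at most one lookahead word is available to the synthesizer; each condition string above represents exactly that input.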
Main file: Alternate_Endings_Interspeech-2.pdf (304.97 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03372802, version 1 (11-10-2021)

Identifiers

Cite

Brooke Stephenson, Thomas Hueber, Laurent Girin, Laurent Besacier. Alternate Endings: Improving Prosody for Incremental Neural TTS with Predicted Future Text Input. Interspeech 2021 - 22nd Annual Conference of the International Speech Communication Association, Aug 2021, Brno, Czech Republic. pp.3865-3869, ⟨10.21437/Interspeech.2021-275⟩. ⟨hal-03372802⟩