Conference Papers, Year: 2021

Phoneme-to-Audio Alignment with Recurrent Neural Networks for Speaking and Singing Voice

Yann Teytaut
Axel Roebel

Abstract

Phoneme-to-audio alignment is the task of synchronizing voice recordings with their related phonetic transcripts. In this work, we introduce a new system for forced phonetic alignment with Recurrent Neural Networks (RNN). Using the Connectionist Temporal Classification (CTC) loss as the training objective, together with an additional reconstruction cost, we learn to infer relevant per-frame phoneme probabilities from which the alignment is derived. The core of the neural architecture is a context-aware attention mechanism between mel-spectrograms and side information. We investigate two contexts, given by either phoneme sequences (model PHATT) or the spectrograms themselves (model SPATT). Evaluations show that both models produce precise alignments for speaking and singing voice. The best results are obtained with the model PHATT, which outperforms the baseline reference with an average imprecision of 16.3 ms on speech and 29.8 ms on singing. The model SPATT also appears to be an interesting alternative, capable of aligning longer audio files without requiring phoneme sequences to be provided for small audio segments.
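
To make the CTC training objective mentioned in the abstract concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' code): a bidirectional RNN maps mel-spectrogram frames to per-frame phoneme log-probabilities trained with the CTC loss, and a greedy per-frame decode shows where an alignment could be read off. The paper's attention mechanism, reconstruction cost, and the PHATT/SPATT contexts are deliberately omitted; all names and sizes (FramePhonemeRNN, NUM_PHONEMES, etc.) are illustrative assumptions.

    # Minimal sketch (not the authors' code): a frame-level phoneme
    # classifier trained with the CTC loss, as in CTC-based aligners.
    import torch
    import torch.nn as nn

    NUM_PHONEMES = 40   # assumed phoneme inventory size (blank excluded)
    N_MELS = 80         # assumed number of mel-spectrogram bins

    class FramePhonemeRNN(nn.Module):
        """BiLSTM mapping mel frames to per-frame phoneme log-probabilities."""
        def __init__(self, n_mels=N_MELS, hidden=256, n_classes=NUM_PHONEMES + 1):
            super().__init__()
            self.rnn = nn.LSTM(n_mels, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, n_classes)  # +1 for the CTC blank

        def forward(self, mels):                # mels: (batch, frames, n_mels)
            h, _ = self.rnn(mels)
            return self.out(h).log_softmax(-1)  # (batch, frames, classes)

    model = FramePhonemeRNN()
    ctc = nn.CTCLoss(blank=0, zero_infinity=True)

    # Dummy batch: 2 utterances of 200 frames; targets are phoneme-id sequences.
    mels = torch.randn(2, 200, N_MELS)
    targets = torch.randint(1, NUM_PHONEMES + 1, (2, 30))
    input_lengths = torch.full((2,), 200, dtype=torch.long)
    target_lengths = torch.full((2,), 30, dtype=torch.long)

    log_probs = model(mels)                     # (batch, frames, classes)
    loss = ctc(log_probs.transpose(0, 1),       # CTCLoss expects (T, N, C)
               targets, input_lengths, target_lengths)
    loss.backward()

    # Greedy per-frame decoding: segment boundaries fall where the argmax
    # label changes. A forced aligner would instead run a constrained
    # Viterbi pass over the known phoneme sequence.
    frame_labels = log_probs.argmax(-1)         # (batch, frames)

Since each output step corresponds to one mel frame of known hop size, phoneme boundaries in seconds follow directly from the frame indices where labels change; the paper's reported imprecision (16.3 ms / 29.8 ms) is measured against such boundaries.
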
Main file: 1676anav.pdf (421.96 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03552964, version 1 (15-02-2022)

Identifiers

Cite

Yann Teytaut, Axel Roebel. Phoneme-to-Audio Alignment with Recurrent Neural Networks for Speaking and Singing Voice. Proceedings of Interspeech 2021, International Speech Communication Association, Aug 2021, Brno, Czech Republic. pp.61-65, ⟨10.21437/interspeech.2021-1676⟩. ⟨hal-03552964⟩