Conference paper, Year: 2024

PixIT: Joint Training of Speaker Diarization and Speech Separation from Real-world Multi-speaker Recordings

Abstract

A major drawback of supervised speech separation (SSep) systems is their reliance on synthetic data, leading to poor real-world generalization. Mixture invariant training (MixIT) was proposed as an unsupervised alternative that uses real recordings, yet it struggles with over-separation and with adapting to long-form audio. We introduce PixIT, a joint approach that combines permutation invariant training (PIT) for speaker diarization (SD) with MixIT for SSep. At the small extra cost of requiring SD labels during training, it solves the over-separation problem and allows local separated sources to be stitched together by leveraging existing work on clustering-based neural SD. We measure the quality of the separated sources by applying automatic speech recognition (ASR) systems to them. PixIT boosts the performance of various ASR systems across two meeting corpora, in terms of both speaker-attributed and utterance-based word error rates, without requiring any fine-tuning.
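
For illustration only, here is a minimal sketch (not the authors' code) of how a PixIT-style joint objective could be written in PyTorch: a permutation-invariant BCE loss over speaker-activity labels (the PIT/diarization branch) combined with a MixIT loss that assigns each estimated source to one of the two real recordings summed into a mixture of mixtures. The tensor shapes, the L1 reconstruction term, and the alpha weighting are assumptions made for this sketch; the paper's actual losses, model, and stitching procedure differ in detail.

# Sketch of a PixIT-style joint loss (illustrative; not the paper's implementation).
# Assumed shapes:
#   diar_logits : (batch, frames, n_spk)   speaker-activity logits
#   diar_labels : (batch, frames, n_spk)   0/1 reference activities (float)
#   est_sources : (batch, n_src, samples)  separated waveforms
#   mixtures    : (batch, 2, samples)      the two real recordings forming the MoM
import itertools
import torch
import torch.nn.functional as F

def pit_bce(diar_logits, diar_labels):
    """Permutation-invariant BCE over speaker-activity labels (PIT branch)."""
    n_spk = diar_logits.shape[-1]
    losses = []
    for perm in itertools.permutations(range(n_spk)):
        permuted = diar_labels[:, :, list(perm)]
        bce = F.binary_cross_entropy_with_logits(diar_logits, permuted, reduction="none")
        losses.append(bce.mean(dim=(1, 2)))            # per-example loss
    # keep, per example, the best speaker permutation
    return torch.stack(losses, dim=-1).min(dim=-1).values.mean()

def mixit_recon(est_sources, mixtures):
    """MixIT branch: try every binary assignment of estimated sources to the
    two input mixtures and keep, per example, the assignment with the lowest
    L1 error (a stand-in for the paper's reconstruction objective)."""
    n_src = est_sources.shape[1]
    best = None
    for bits in itertools.product((0.0, 1.0), repeat=n_src):
        a = torch.tensor(bits, dtype=est_sources.dtype, device=est_sources.device)
        remix = torch.stack(
            [(a[None, :, None] * est_sources).sum(dim=1),
             ((1.0 - a)[None, :, None] * est_sources).sum(dim=1)],
            dim=1,
        )                                              # (batch, 2, samples)
        loss = (remix - mixtures).abs().mean(dim=(1, 2))
        best = loss if best is None else torch.minimum(best, loss)
    return best.mean()

def pixit_loss(diar_logits, diar_labels, est_sources, mixtures, alpha=0.5):
    # alpha is a hypothetical weight balancing the two objectives
    return alpha * pit_bce(diar_logits, diar_labels) + (1.0 - alpha) * mixit_recon(est_sources, mixtures)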
Main file: kalda24_odyssey.pdf (926.17 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04649858, version 1 (16-07-2024)

Identifiers

Cite

Joonas Kalda, Clément Pagés, Ricard Marxer, Tanel Alumäe, Hervé Bredin. PixIT: Joint Training of Speaker Diarization and Speech Separation from Real-world Multi-speaker Recordings. The Speaker and Language Recognition Workshop (Odyssey 2024), Jun 2024, Quebec City, Canada. pp.115-122, ⟨10.21437/odyssey.2024-17⟩. ⟨hal-04649858⟩