Conference Papers, Year: 2023

Powerset multi-class cross entropy loss for neural speaker diarization

Abstract

Since its introduction in 2019, the whole end-to-end neural diarization (EEND) line of work has been addressing speaker diarization as a frame-wise multi-label classification problem with permutation-invariant training. Despite EEND showing great promise, a few recent works took a step back and studied the possible combination of (local) supervised EEND diarization with (global) unsupervised clustering. Yet, these hybrid contributions did not question the original multi-label formulation. We propose to switch from multi-label (where any two speakers can be active at the same time) to powerset multi-class classification (where dedicated classes are assigned to pairs of overlapping speakers). Through extensive experiments on 9 different benchmarks, we show that this formulation leads to significantly better performance (mostly on overlapping speech) and robustness to domain mismatch, while eliminating the detection threshold hyperparameter, critical for the multi-label formulation.
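
To make the reformulation concrete, here is a minimal sketch in Python/PyTorch of the powerset idea described in the abstract. It is not the authors' implementation: the speaker count K = 3, the at-most-two-overlapping-speakers setting, and helper names such as multilabel_to_powerset are illustrative assumptions, and permutation-invariant training is omitted for brevity.

from itertools import combinations

import torch
import torch.nn.functional as F

# Illustrative setting: K = 3 speakers, at most 2 active simultaneously.
# The powerset classes are all speaker subsets of size <= 2 (7 classes):
# {}, {0}, {1}, {2}, {0,1}, {0,2}, {1,2}
K = 3
POWERSET = [frozenset(c) for n in range(3) for c in combinations(range(K), n)]
CLASS_INDEX = {subset: i for i, subset in enumerate(POWERSET)}

def multilabel_to_powerset(target: torch.Tensor) -> torch.Tensor:
    """Map frame-wise multi-label targets of shape (frames, K), with values
    in {0, 1}, to powerset class indices of shape (frames,)."""
    return torch.tensor(
        [CLASS_INDEX[frozenset(torch.nonzero(frame).flatten().tolist())]
         for frame in target]
    )

# Toy targets: silence, one active speaker, two overlapping speakers.
target = torch.tensor([[0, 0, 0],
                       [1, 0, 0],
                       [1, 0, 1]])
labels = multilabel_to_powerset(target)      # tensor([0, 1, 5])

# Training: ordinary multi-class cross entropy over the 7 powerset classes.
logits = torch.randn(3, len(POWERSET))       # (frames, classes) from the model
loss = F.cross_entropy(logits, labels)

# Inference: a per-frame argmax replaces the per-speaker detection threshold
# required by the multi-label (sigmoid + threshold) formulation.
prediction = logits.argmax(dim=-1)

Because each frame now has exactly one correct class, training uses standard cross entropy and inference reduces to an argmax, which is why the detection threshold hyperparameter mentioned in the abstract disappears.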
Main file
2023___Interspeech___Powerset_speaker_diarization-4.pdf (324.1 KB)
Origin: Files produced by the author(s)
Licence: CC BY - Attribution

Dates and versions

hal-04233796, version 1 (16-10-2023)


Identifiers

HAL Id: hal-04233796
DOI: 10.21437/Interspeech.2023-205

Cite

Alexis Plaquet, Hervé Bredin. Powerset multi-class cross entropy loss for neural speaker diarization. 24th INTERSPEECH Conference (INTERSPEECH 2023), Aug 2023, Dublin, Ireland. pp.3222-3226, ⟨10.21437/Interspeech.2023-205⟩. ⟨hal-04233796⟩