Powerset multi-class cross entropy loss for neural speaker diarization
Conference paper, 2023

Abstract

Since its introduction in 2019, the end-to-end neural diarization (EEND) line of work has addressed speaker diarization as a frame-wise multi-label classification problem with permutation-invariant training. Although EEND shows great promise, a few recent works took a step back and studied the possible combination of (local) supervised EEND diarization with (global) unsupervised clustering. Yet, these hybrid contributions did not question the original multi-label formulation. We propose to switch from multi-label classification (where any two speakers can be active at the same time) to powerset multi-class classification (where dedicated classes are assigned to pairs of overlapping speakers). Through extensive experiments on 9 different benchmarks, we show that this formulation leads to significantly better performance (mostly on overlapping speech) and robustness to domain mismatch, while eliminating the detection threshold hyperparameter that is critical to the multi-label formulation.
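To make the formulation concrete, below is a minimal PyTorch sketch of the powerset encoding as described in the abstract: every subset of at most two simultaneously active speakers receives a dedicated class, frame-wise multi-label targets are mapped onto those classes, and training reduces to standard multi-class cross entropy. The helper names (build_powerset, multilabel_to_powerset) and all shapes are hypothetical illustrations, not the paper's or pyannote's actual API; permutation-invariant training is omitted for brevity.

import itertools

import torch
import torch.nn.functional as F

def build_powerset(num_speakers: int, max_simultaneous: int) -> torch.Tensor:
    """Binary matrix of shape (num_classes, num_speakers): one row per
    powerset class, i.e. per subset of at most `max_simultaneous`
    speakers (the empty subset, class 0, encodes non-speech)."""
    subsets = [
        combo
        for size in range(max_simultaneous + 1)
        for combo in itertools.combinations(range(num_speakers), size)
    ]
    mapping = torch.zeros(len(subsets), num_speakers)
    for k, speakers in enumerate(subsets):
        if speakers:
            mapping[k, list(speakers)] = 1.0
    return mapping

def multilabel_to_powerset(target: torch.Tensor, mapping: torch.Tensor) -> torch.Tensor:
    """Map frame-wise multi-label targets (batch, frames, speakers) to
    powerset class indices (batch, frames). Frames whose speaker set
    has no dedicated class (too many overlapping speakers) fall back
    to class 0 (non-speech) in this sketch."""
    hits = (target.unsqueeze(-2) == mapping).all(dim=-1)
    return hits.float().argmax(dim=-1)

# 3 local speakers, at most 2 active at once -> 7 classes:
# {}, {0}, {1}, {2}, {0,1}, {0,2}, {1,2}
mapping = build_powerset(num_speakers=3, max_simultaneous=2)

batch, frames = 8, 200
logits = torch.randn(batch, frames, mapping.shape[0])     # model output
target = torch.randint(0, 2, (batch, frames, 3)).float()  # toy multi-label ground truth

# Standard multi-class cross entropy over powerset classes.
loss = F.cross_entropy(
    logits.transpose(1, 2),                  # (batch, classes, frames)
    multilabel_to_powerset(target, mapping)  # (batch, frames)
)

# Inference: argmax over classes, then decode back to per-speaker
# activity -- no detection threshold involved.
speaker_activity = mapping[logits.argmax(dim=-1)]  # (batch, frames, speakers)

Because each frame is assigned exactly one class, the argmax over classes at inference time decodes directly into hard per-speaker activity, which is how the powerset formulation dispenses with the detection threshold required by the multi-label formulation.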
Main file: 2023___Interspeech___Powerset_speaker_diarization-4.pdf (324.1 KB)
Origin: Files produced by the author(s)
License: CC BY (Attribution)

Dates and versions

hal-04233796 , version 1 (16-10-2023)

Identifiers

HAL ID: hal-04233796
DOI: 10.21437/Interspeech.2023-205

Cite

Alexis Plaquet, Hervé Bredin. Powerset multi-class cross entropy loss for neural speaker diarization. 24th INTERSPEECH Conference (INTERSPEECH 2023), Aug 2023, Dublin, Ireland. pp.3222-3226, ⟨10.21437/Interspeech.2023-205⟩. ⟨hal-04233796⟩