Deep complementary features for speaker identification in TV broadcast data - Archive ouverte HAL
Conference paper, 2016

Deep complementary features for speaker identification in TV broadcast data

Abstract

This work investigates the use of a Convolutional Neural Network approach and its fusion with a more traditional system, the Total Variability Space, for speaker identification in TV broadcast data. The former is trained on spectrograms, while the latter is based on MFCC features. The dataset poses several challenges, such as significant class imbalance and background noise and music. Even though the Convolutional Neural Network alone performs below the state of the art, it complements the traditional system and yields better results through fusion. Different fusion techniques, both early and late, are evaluated.
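As a minimal sketch of the late (score-level) fusion idea mentioned in the abstract, the snippet below combines per-segment scores from the two systems with a weighted sum and picks the highest-scoring speaker. The score matrices, the weight alpha, and the function name are hypothetical illustrations, not code from the paper.

import numpy as np

def late_fusion(cnn_scores, tvs_scores, alpha=0.5):
    """Weighted-sum score fusion of two speaker-ID systems.

    cnn_scores, tvs_scores: hypothetical arrays of shape
    (n_segments, n_speakers), assumed to hold comparable
    (e.g. normalized) per-segment speaker scores.
    alpha: weight given to the CNN system; (1 - alpha) goes to
    the Total Variability Space system.
    Returns the predicted speaker index for each segment.
    """
    fused = alpha * cnn_scores + (1.0 - alpha) * tvs_scores
    return fused.argmax(axis=1)

# Toy example: 3 segments, 4 candidate speakers.
cnn = np.array([[0.10, 0.60, 0.20, 0.10],
                [0.30, 0.30, 0.20, 0.20],
                [0.25, 0.25, 0.25, 0.25]])
tvs = np.array([[0.20, 0.30, 0.40, 0.10],
                [0.60, 0.10, 0.20, 0.10],
                [0.10, 0.10, 0.70, 0.10]])
print(late_fusion(cnn, tvs, alpha=0.4))

Early fusion, by contrast, would concatenate or combine the feature representations (spectrogram-derived and MFCC-based) before classification rather than merging the final scores.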
Main file: odyssey-deep-complementary.pdf (215.79 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01350068, version 1 (29-07-2016)

Identifiers

Cite

Mateusz Budnik, Laurent Besacier, Ali Khodabakhsh, Cenk Demiroglu. Deep complementary features for speaker identification in TV broadcast data. Odyssey Workshop 2016, Jun 2016, Bilbao, Spain. ⟨10.21437/Odyssey.2016-21⟩. ⟨hal-01350068⟩
238 Views
617 Downloads

