Multimodal speaker clustering in full length movies - Archive ouverte HAL
Journal article, Multimedia Tools and Applications, Year: 2016

Multimodal speaker clustering in full length movies

I Kapsouras
  • Role: Author
A Tefas
  • Role: Author
Nikos Nikolaidis
  • Role: Author
I Pitas
  • Role: Author

Abstract

Multimodal speaker clustering/diarization tries to answer the question "who spoke when" by using audio and visual information. Diarization consists of two steps: first, segmentation of the audio stream and detection of the speech segments, and then clustering of the speech segments so as to group them by speaker. This task has mainly been studied on audiovisual data from meetings, news broadcasts or talk shows. In this paper, we use visual information to aid speaker clustering and introduce a new video-based feature, called actor presence, that can be used to enhance audio-based speaker clustering. We tested the proposed method on three full-length stereoscopic movies, i.e. a scenario much more difficult than those used so far, where there is no certainty that speech segments and video appearances of actors will always overlap. The results show that the visual information can improve speaker clustering accuracy and hence the diarization process.
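To make the fusion idea concrete, the following is a minimal illustrative sketch, not the authors' pipeline: it assumes each detected speech segment is described by an audio embedding (e.g. averaged MFCCs) and a hypothetical actor-presence vector (the fraction of the segment's frames in which each tracked actor is visible on screen), fuses the two modalities by weighted concatenation, and clusters the segments with off-the-shelf agglomerative clustering. The feature definitions, the weighting parameter alpha and the use of scikit-learn are assumptions made for illustration only.

```python
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.cluster import AgglomerativeClustering


def cluster_speech_segments(audio_embeddings, actor_presence, n_speakers, alpha=0.5):
    """Cluster speech segments into speakers from fused audio/visual descriptors.

    audio_embeddings : (n_segments, d_audio) array, one audio vector per speech segment.
    actor_presence   : (n_segments, n_actors) array, hypothetical fraction of each
                       segment's frames in which each tracked actor appears.
    alpha            : assumed weight of the visual modality in the fused feature.
    """
    # L2-normalise each modality so neither dominates, then weight and concatenate.
    fused = np.hstack([
        (1.0 - alpha) * normalize(audio_embeddings),
        alpha * normalize(actor_presence),
    ])
    # Agglomerative clustering over the fused segment descriptors.
    return AgglomerativeClustering(n_clusters=n_speakers).fit_predict(fused)


# Toy usage: 6 speech segments, 8-dim audio embeddings, 3 tracked actors, 3 speakers.
rng = np.random.default_rng(0)
audio = rng.random((6, 8))       # one audio embedding per speech segment
presence = rng.random((6, 3))    # hypothetical actor-presence vector per segment
print(cluster_speech_segments(audio, presence, n_speakers=3))
```

The intuition captured by such a fusion is that segments whose actor-presence vectors disagree are less likely to be merged into the same cluster, which is how a visual cue can help audio-only speaker clustering even when speech and on-screen appearances do not always overlap.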
No file deposited

Dates and versions

hal-01261696 , version 1 (25-01-2016)

Identifiers

Cite

I Kapsouras, A Tefas, Nikos Nikolaidis, Geoffroy Peeters, Elie-Laurent Benaroya, et al. Multimodal speaker clustering in full length movies. Multimedia Tools and Applications, 2016, ⟨10.1007/s11042-015-3181-5⟩. ⟨hal-01261696⟩