Multimodal Emotion Recognition for AVEC 2016 Challenge - HAL open archive
Conference paper, Year: 2016

Multimodal Emotion Recognition for AVEC 2016 Challenge

Filip Povolny
  • Role: Author
Pavel Matejka
  • Role: Author
Michal Hradis
  • Role: Author
Anna Popková
  • Role: Author
Lubomir Otrusina
  • Role: Author
Pavel Smrz
  • Role: Author
Ian Wood
  • Role: Author
Cecile Robin
  • Role: Author

Abstract

This paper describes a system for emotion recognition and its application to the dataset from the AV+EC 2016 Emotion Recognition Challenge. The system was built and submitted to the AV+EC 2016 evaluation, making use of all three modalities (audio, video, and physiological data). Our work primarily focused on features derived from audio. The original audio features were complemented with bottleneck features and with text-based emotion recognition, which transcribes the audio with an automatic speech recognition system and applies resources such as word-embedding models and sentiment lexicons. Our multimodal fusion reached CCC = 0.855 on the development set for arousal and 0.713 for valence. On the test set, the CCC is 0.719 for arousal and 0.596 for valence.
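The CCC figures above refer to the Concordance Correlation Coefficient, the official AVEC 2016 evaluation metric. As an illustration only (not part of the HAL record), a minimal NumPy sketch of the standard CCC definition could look as follows; the function name concordance_cc and the toy arrays are assumptions made for this example.

import numpy as np

def concordance_cc(predictions, gold):
    """Concordance Correlation Coefficient (CCC), the AVEC 2016 metric:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    computed with population (biased) variance and covariance."""
    x = np.asarray(predictions, dtype=float)
    y = np.asarray(gold, dtype=float)
    mean_x, mean_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()                   # ddof=0 (population variance)
    cov_xy = ((x - mean_x) * (y - mean_y)).mean()     # population covariance
    return 2.0 * cov_xy / (var_x + var_y + (mean_x - mean_y) ** 2)

# Example: a constant offset lowers CCC even when the correlation is perfect.
gold = np.array([0.1, 0.2, 0.3, 0.4])
print(concordance_cc(gold + 0.2, gold))   # ~0.38 instead of 1.0, due to the offset

Unlike plain Pearson correlation, CCC penalizes both scale and offset mismatches between predictions and gold annotations, which is why it is preferred for continuous arousal/valence prediction.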
No file deposited

Dates and versions

hal-01837203, version 1 (12-07-2018)

Identifiers

  • HAL Id: hal-01837203, version 1

Cite

Filip Povolny, Pavel Matejka, Michal Hradis, Anna Popková, Lubomir Otrusina, et al. Multimodal Emotion Recognition for AVEC 2016 Challenge. Audio/Visual Emotion Challenge, ACM, Oct 2016, Amsterdam, Netherlands. ⟨hal-01837203⟩
