Conference paper, Year: 2017

Modeling multimodal cues in a deep learning-based framework for emotion recognition in the wild

Abstract

In this paper, we propose a multimodal deep learning architecture for emotion recognition in video, developed for our participation in the audio-video based sub-challenge of the Emotion Recognition in the Wild 2017 challenge. Our model combines cues from multiple video modalities, including static facial features, motion patterns related to the evolution of the human expression over time, and audio information. Specifically, it is composed of three sub-networks trained separately: the first and second ones extract static visual features and dynamic patterns through 2D and 3D Convolutional Neural Networks (CNN), while the third one consists of a pretrained audio network which is used to extract useful deep acoustic features from video. In the audio branch, we also apply Long Short-Term Memory (LSTM) networks in order to capture the temporal evolution of the audio features. To identify and exploit possible relationships among different modalities, we propose a fusion network that merges cues from the different modalities into one representation. The proposed architecture outperforms the challenge baselines (38.81% and 40.47%): we achieve an accuracy of 50.39% and 49.92% on the validation and the testing data, respectively.
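The architecture described above amounts to three modality-specific feature extractors (2D CNN for faces, 3D CNN for motion, a pretrained audio network followed by an LSTM) feeding a fusion network that maps the merged representation to emotion classes. The following PyTorch sketch illustrates this fusion idea only; the feature dimensions, single-layer LSTM, hidden size, dropout rate, and seven-class output are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a multimodal fusion classifier, assuming precomputed
# per-modality features. All sizes below are placeholders.
import torch
import torch.nn as nn

class LateFusionEmotionNet(nn.Module):
    def __init__(self, face_dim=512, motion_dim=512, audio_dim=256,
                 hidden_dim=256, num_classes=7):
        super().__init__()
        # Audio branch: LSTM over a sequence of deep acoustic features;
        # the last hidden state summarizes their temporal evolution.
        self.audio_lstm = nn.LSTM(audio_dim, hidden_dim, batch_first=True)
        # Fusion network: concatenated modality embeddings -> emotion logits.
        self.fusion = nn.Sequential(
            nn.Linear(face_dim + motion_dim + hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, face_feat, motion_feat, audio_seq):
        # face_feat:   (B, face_dim)      from a 2D CNN on face crops
        # motion_feat: (B, motion_dim)    from a 3D CNN on frame clips
        # audio_seq:   (B, T, audio_dim)  acoustic features per audio segment
        _, (h_n, _) = self.audio_lstm(audio_seq)
        audio_feat = h_n[-1]                              # (B, hidden_dim)
        fused = torch.cat([face_feat, motion_feat, audio_feat], dim=1)
        return self.fusion(fused)                         # (B, num_classes)

# Example: a batch of 4 videos with 10 audio segments each.
model = LateFusionEmotionNet()
logits = model(torch.randn(4, 512), torch.randn(4, 512), torch.randn(4, 10, 256))
print(logits.shape)  # torch.Size([4, 7])
```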

Dates and versions

hal-02065973, version 1 (13-03-2019)

Identifiers

Cite

Stefano Pini, Olfa Ben Ahmed, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara, et al. Modeling multimodal cues in a deep learning-based framework for emotion recognition in the wild. Proceedings of the 19th ACM International Conference on Multimodal Interaction (ICMI 2017), Nov 2017, Glasgow, United Kingdom. pp. 536-543, ⟨10.1145/3136755.3143006⟩. ⟨hal-02065973⟩

Collections

EURECOM