Conference Paper Year: 2017

Towards Semantic Multimodal Emotion Recognition for Enhancing Assistive Services in Ubiquitous Robotics

N. Ayari
Hazem Abdelkawy
Abdelghani Chibani
Yacine Y. Amirat

Abstract

This paper studies the problem of endowing ubiquitous robots with cognitive capabilities for recognizing the emotions, sentiments, affects, and moods of humans in their context. A hybrid approach for emotion-aware robotic systems is proposed, combining a multilayer perceptron (MLP) neural network with n-ary ontologies. In particular, an algorithm based on hybrid-level fusion and an expressive emotional knowledge representation and reasoning model are introduced to recognize the complex and non-observable emotional context of the user. Empirical experiments on a real-world dataset corroborate the effectiveness of the approach.
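The abstract mentions a hybrid-level fusion feeding multimodal cues into an MLP. Below is a minimal sketch of the sub-symbolic part only, not the authors' implementation: it assumes pre-extracted facial and audio feature vectors and uses scikit-learn's MLPClassifier; the feature dimensions, number of emotion classes, and network architecture are all illustrative assumptions.

```python
# Illustrative sketch: feature-level fusion of two modalities into an MLP.
# Synthetic placeholder data stands in for real pre-extracted features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical pre-extracted features: 64-d facial + 32-d audio per sample.
n_samples = 500
facial = rng.normal(size=(n_samples, 64))
audio = rng.normal(size=(n_samples, 32))
labels = rng.integers(0, 4, size=n_samples)  # e.g. 4 emotion classes (assumed)

# Feature-level fusion: concatenate the modality vectors into one input.
fused = np.concatenate([facial, audio], axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.2, random_state=0)

# Single hidden layer MLP; the architecture is an assumption for illustration.
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

In the paper's hybrid approach, such MLP predictions would presumably then be asserted into the n-ary ontology-based knowledge representation for contextual reasoning about non-observable emotional states; that symbolic layer is beyond this sketch.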
No file deposited

Dates and versions

hal-01637275, version 1 (17-11-2017)

Identifiers

  • HAL Id: hal-01637275, version 1

Cite

N. Ayari, Hazem Abdelkawy, Abdelghani Chibani, Yacine Y. Amirat. Towards Semantic Multimodal Emotion Recognition for Enhancing Assistive Services in Ubiquitous Robotics. Proc. of the AAAI 2017 Fall Symposium Series, Nov 2017, Arlington, United States. pp.2-9. ⟨hal-01637275⟩

Collections

LISSI UPEC
153 Views
0 Downloads
