Multilayer framework combining body movements and contextual descriptors for human activity understanding
Abstract
A deep understanding of human activity is essential for successful Human-Robot Interaction (HRI). Translating sensed human behavioral signals/cues and context descriptors into an encoded human activity remains a challenge because of the complex nature of human actions. We propose a multilayer framework for human activity understanding that is suitable for implementation on a mobile robot. It is based on the ideomotor theory, which argues that each human action can be seen as a goal-directed movement that causes intended effects in the environment [16]. The perception layer collects data related to the kinematics and dynamics of the human body and to the environment/context descriptors; the classification layer combines a segment-based Support Vector Machine (SVM) method with the Video Annotation Tool from Irvine, California (VATIC) to classify elementary actions; the interpretation layer supports the understanding of goal-directed activity through a fuzzy logic-based decisional engine (developed with the SpirOps AI software). The paper presents the method, the tools, and preliminary results of the framework customized for recognition of the "Making coffee" task.
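To make the three-layer data flow concrete, the following is a minimal sketch in Python of how such a pipeline could be wired together. It is not the authors' implementation: the feature segmentation, the scikit-learn SVM, and the rule-based interpret function are hypothetical stand-ins (the last one crudely approximating the role of the SpirOps fuzzy decisional engine), and all names, dimensions, and labels are illustrative assumptions.

```python
# Hypothetical sketch of the perception -> classification -> interpretation
# pipeline described in the abstract; not the authors' code.
import numpy as np
from sklearn.svm import SVC

# --- Perception layer: fixed-length segments of body/context features -------
def segment(signal: np.ndarray, win: int, step: int) -> np.ndarray:
    """Slice a (T, d) feature stream into overlapping windows, flattened."""
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win].ravel() for s in starts])

# --- Classification layer: segment-based SVM over elementary actions --------
rng = np.random.default_rng(0)
train_x = rng.normal(size=(200, 6))          # toy kinematic features
train_y = rng.integers(0, 3, size=200)       # 3 elementary action labels
clf = SVC(kernel="rbf", probability=True).fit(train_x, train_y)

stream = rng.normal(size=(50, 2))            # toy (T, d) sensor stream
segments = segment(stream, win=3, step=1)    # each window -> 6-dim vector
action_probs = clf.predict_proba(segments)   # per-segment action scores

# --- Interpretation layer: map action sequence to a goal-directed activity --
ACTIONS = ["reach", "grasp", "pour"]         # illustrative action vocabulary

def interpret(probs: np.ndarray) -> str:
    """Crude stand-in for the fuzzy decisional engine: if the dominant
    actions include the expected sub-actions, report the activity."""
    dominant = {ACTIONS[i] for i in probs.argmax(axis=1)}
    return "making coffee" if {"grasp", "pour"} <= dominant else "unknown"

print(interpret(action_probs))
```

In a deployed system the toy arrays above would be replaced by real kinematic/dynamic body features and context descriptors, and the interpretation step would aggregate fuzzy degrees of membership over time rather than a single hard rule.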