On the Benefit of Independent Control of Head and Eye Movements of a Social Robot for Multiparty Human-Robot Interaction
Abstract
Human gaze direction results from the combination of head and eye movements. The coordination of these two segments has been studied, and models of the contribution of head movement to the gaze of virtual agents or robots have been proposed. However, these coordination models are mostly neither trained nor evaluated in an interaction context, and may underestimate the social functions of gaze. Indeed, after analyzing human behavior in a three-party conversation dataset, we show that the contribution of the head to the gaze varies depending on whether the speaker is addressing both interlocutors or only one of them: the conversational regime actually impacts head/eye coordination. We therefore propose an evaluation of different coordination policies in a social interaction context, using a Furhat robot to replay the human multimodal behavior from our dataset. The verbal content and gaze targets are identical, but the robot uses four different head and eye coordination policies: (1) Furhat's default gaze control, in which the eyes move faster and start before the head, but both segments end up aligned; (2) the robot head is fixed and only the eyes move; (3) the eyes are fixed and only the head moves; (4) human-like control, where the robot mimics the head movements from the human dataset, which naturally exploits independent eye and head control. Using an online crowdsourced test, we show that the human-like policy, which uses decoupled head and eye movements, is perceived as significantly more natural than the others.