Conference paper, 2005

Multimodal face-to-face interaction with a talking face: mutual attention and deixis

Abstract

Our long-term goal is to build an embodied conversational agent able to maintain realistic face-to-face communication with a human interlocutor. The agent is embodied by a video-realistic talking head. While most researchers focus on discourse interpretation and generation, the main challenge here is to provide the interlocutor with implicit and explicit signs of mutual interest and attention, as well as with an awareness of the environmental conditions in which the interaction takes place. A hybrid hardware and software platform has been developed to test various interaction scenarios. As an application, the talking agent interacted with users during a simple card game, acting as a guide and offering different levels of guidance (with or without mutual attention, and with or without endogenous eye saccades toward a correct or an incorrect play). We provide a comparative analysis of user performance across these levels of guidance, together with users' perception of the level of guidance and the level of help given by the embodied conversational agent.
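The full paper details the experimental design; purely as an illustration of the guidance conditions mentioned above, one could record trials in the card-game scenario along the lines of the minimal Python sketch below. All names, fields, and the analysis helper are assumptions for illustration, not the authors' implementation.

    from dataclasses import dataclass
    from enum import Enum, auto

    class SaccadeTarget(Enum):
        NONE = auto()       # no endogenous eye saccade from the agent
        CORRECT = auto()    # saccade toward the correct play
        INCORRECT = auto()  # saccade toward an incorrect play

    @dataclass(frozen=True)
    class GuidanceCondition:
        mutual_attention: bool    # agent signals mutual attention or not
        saccade: SaccadeTarget    # deictic gaze cue shown to the user

    @dataclass
    class TrialResult:
        condition: GuidanceCondition
        success: bool             # user chose the correct card
        response_time_s: float    # time to complete the play

    def success_rate(trials, condition):
        """Compare user performance across guidance conditions."""
        matching = [t for t in trials if t.condition == condition]
        return sum(t.success for t in matching) / len(matching) if matching else 0.0

Such a structure would support the kind of comparative analysis of user performance per guidance condition that the abstract describes.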
Main file

HCII2005_GB_revised.pdf (2.58 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-00516324, version 1 (09-09-2010)

Identifiers

  • HAL Id: hal-00516324, version 1

Cite

Gérard Bailly, Frédéric Elisei, Stephan Raidt. Multimodal face-to-face interaction with a talking face: mutual attention and deixis. Human-Computer Interaction, Jul 2005, France. 10 p. ⟨hal-00516324⟩

Collections

UGA CNRS ICP
230 Views
166 Downloads
