Journal article. Journal on Multimodal User Interfaces, 2007

Multimodal signal processing and interaction for a driving simulator: component-based architecture

Abstract

In this paper we focus on the design and development of a multimodal driving simulator based on multimodal detection of the driver's focus of attention as well as detection and prediction of the driver's fatigue state. Capturing and interpreting the driver's focus of attention and fatigue state rely on video data (e.g., facial expressions, head movements, eye tracking). While the input multimodal interface relies on passive modalities only (also called an attentive user interface), the output multimodal user interface includes several active output modalities for presenting alert messages, including graphics and text on a mini-screen and on the windshield, sounds, speech, and vibration (vibrating wheel). Active input modalities are added in the meta-user interface to let the user dynamically select the output modalities. The driving simulator is used as a case study of software architecture for multimodal signal processing and multimodal interaction, using two software component-based platforms, OpenInterface and ICARE.
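The component-based idea sketched in the abstract can be pictured as follows: detection components publish alert events, and a user-configurable output stage routes each alert to the modalities enabled through the meta-user interface. The Python sketch below is a minimal illustration under that reading; the class names, thresholds, and wiring API are invented for this example and do not reflect the actual OpenInterface or ICARE interfaces.

# Hypothetical sketch of the component-based architecture described above:
# components exchange events through explicit connections, a video-based
# detector feeds alerts to a "fission" stage that honors the output
# modalities selected via the meta-user interface. All names are illustrative.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Component:
    """A component with named output ports that other components subscribe to."""
    name: str
    _subscribers: Dict[str, List[Callable]] = field(default_factory=dict)

    def connect(self, port: str, handler: Callable) -> None:
        self._subscribers.setdefault(port, []).append(handler)

    def emit(self, port: str, payload) -> None:
        for handler in self._subscribers.get(port, []):
            handler(payload)


class AttentionDetector(Component):
    """Stands in for the video-based focus-of-attention / fatigue analysis."""

    def process_frame(self, head_yaw_deg: float, eye_closure: float) -> None:
        if abs(head_yaw_deg) > 30:    # driver looking away (threshold is made up)
            self.emit("alert", {"kind": "distraction", "level": "warning"})
        if eye_closure > 0.8:         # prolonged eye closure suggests fatigue
            self.emit("alert", {"kind": "fatigue", "level": "critical"})


class OutputFission(Component):
    """Routes an alert to the output modalities the user enabled via the meta-UI."""

    def __init__(self, name: str):
        super().__init__(name)
        # The meta-user interface would toggle these flags at run time.
        self.enabled = {"screen": True, "speech": True, "vibration": False}

    def on_alert(self, alert: dict) -> None:
        if self.enabled["screen"]:
            print(f"[screen] {alert['kind']} ({alert['level']})")
        if self.enabled["speech"]:
            print("[speech] Please keep your eyes on the road.")
        if self.enabled["vibration"]:
            print("[wheel] vibrate")


# Wiring that a platform such as OpenInterface or ICARE would express declaratively:
detector = AttentionDetector("attention")
fission = OutputFission("outputs")
detector.connect("alert", fission.on_alert)

detector.process_frame(head_yaw_deg=45.0, eye_closure=0.2)  # triggers a distraction alert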
Main file
project6_eNTERFACE2006.pdf (490.79 KB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-00256660, version 1 (22-02-2008)

Identifiers

Cite

Alexandre Benoit, Laurent Bonnaud, Alice Caplier, I. Damousis, F. Jourde, et al. Multimodal signal processing and interaction for a driving simulator: component-based architecture. Journal on Multimodal User Interfaces, 2007, 1 (1), pp. 49-58. ⟨10.1007/BF02884432⟩. ⟨hal-00256660⟩
279 views
318 downloads
