Conference paper, 2004

Multimodal Meaning Representation for Generic Dialogue Systems Architectures

Abstract

A unified language for the communicative acts exchanged between agents is essential for the design of multi-agent architectures. Whatever the type of interaction (linguistic, multimodal, including particular aspects such as force feedback) and whatever the type of application (command dialogue, request dialogue, database querying), the underlying concepts are common and call for a generic meta-model. In order to move towards task-independent systems, we also need to clarify the procedures for parameterizing modules. In this paper, we focus on the characteristics of a meta-model designed to represent meaning in linguistic and multimodal applications. This meta-model is called MMIL, for MultiModal Interface Language, and was first specified in the framework of the IST MIAMM European project. What we want to test here is how relevant MMIL is to a completely different context (a different task, a different interaction type, a different linguistic domain). We detail the use of MMIL in the framework of the IST OZONE European project, and we draw conclusions about the role of MMIL in the parameterization of task-independent dialogue managers.
Main file: 04_LREC.pdf (125.24 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00137088, version 1 (16-03-2007)


Cite

Frédéric Landragin, Alexandre Denis, Annalisa Ricci, Laurent Romary. Multimodal Meaning Representation for Generic Dialogue Systems Architectures. Fourth International Conference on Language Resources and Evaluation (LREC), May 2004, Lisbon, Portugal. pp.521-524. ⟨hal-00137088⟩
202 Views
327 Downloads

