Journal article, Applied Sciences, 2020

Human Annotated Dialogues Dataset for Natural Conversational Agents

Abstract

Conversational agents are gaining huge popularity in industrial applications such as digital assistants, chatbots, and particularly systems for natural language understanding (NLU). However, a major drawback is the unavailability of a common metric for evaluating replies against human judgement. In this paper, we develop a benchmark dataset with human annotations and diverse replies that can be used to develop such a metric for conversational agents. The paper introduces a high-quality human-annotated movie dialogue dataset, HUMOD, built from the Cornell movie dialogues dataset. This new dataset comprises 28,500 human responses collected over 9,500 multi-turn dialogue history-reply pairs. The human responses include: (i) ratings of each dialogue reply for its relevance to the dialogue history; and (ii) unique dialogue replies written by the users for each dialogue history. These unique replies enable researchers to evaluate their models against six distinct human responses for each given history. A detailed analysis of how the dialogues are structured, and of human perception of dialogue scores in comparison with existing models, is also presented.
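The abstract describes the structure of each HUMOD entry: a multi-turn dialogue history, a candidate reply with human relevance ratings, and six alternative human-written replies per history. The following minimal Python sketch models that structure for illustration only; the class name, field names, and toy values are assumptions and do not reflect the released dataset's actual schema or file format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class HumodExample:
    """Hypothetical layout of one HUMOD history-reply pair (illustrative only)."""
    dialogue_history: List[str]   # multi-turn context, one string per turn
    candidate_reply: str          # reply judged against the history
    relevance_ratings: List[int]  # human relevance ratings for this reply
    human_replies: List[str]      # six alternative replies written by annotators

def mean_relevance(example: HumodExample) -> float:
    """Average the human relevance ratings for a single history-reply pair."""
    return sum(example.relevance_ratings) / len(example.relevance_ratings)

# Toy usage with made-up values, only to show the structure.
example = HumodExample(
    dialogue_history=["How was the movie?", "It dragged in the middle."],
    candidate_reply="At least the ending made up for it.",
    relevance_ratings=[4, 5, 4],
    human_replies=[
        "I almost walked out.",
        "The soundtrack saved it.",
        "I liked it more than you did.",
        "Which part bored you?",
        "Maybe we pick a comedy next time.",
        "Agreed, too slow.",
    ],
)
print(mean_relevance(example))
```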

Dates and versions

hal-03081727 , version 1 (18-12-2020)

Identifiers

Cite

Erinc Merdivan, Deepika Singh, Sten Hanke, Johannes Kropf, Andreas Holzinger, et al. Human Annotated Dialogues Dataset for Natural Conversational Agents. Applied Sciences, 2020, 10 (3), pp.762. ⟨10.3390/app10030762⟩. ⟨hal-03081727⟩