Journal article in Proceedings of the AAAI Conference on Artificial Intelligence, 2020

Guiding Attention in Sequence-to-Sequence Models for Dialogue Act Prediction

Abstract

The task of predicting dialogue acts (DAs) from conversational dialogue is a key component in the development of conversational agents. Accurately predicting DAs requires precise modelling of both the conversation and the global tag dependencies. We leverage seq2seq approaches widely adopted in Neural Machine Translation (NMT) to improve the modelling of tag sequentiality. Seq2seq models are known to learn complex global dependencies, whereas currently proposed approaches based on linear conditional random fields (CRF) only model local tag dependencies. In this work, we introduce a seq2seq model tailored for DA classification that uses a hierarchical encoder, a novel guided attention mechanism, and beam search applied to both training and inference. Compared to the state of the art, our model requires no handcrafted features and is trained end-to-end. Furthermore, the proposed approach achieves an unmatched accuracy of 85% on SwDA and a state-of-the-art accuracy of 91.6% on MRDA.
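For readers who want a concrete picture of the architecture outlined in the abstract, the snippet below is a minimal PyTorch sketch of a hierarchical encoder feeding a seq2seq tag decoder with attention. All module names, dimensions, and the plain dot-product attention are illustrative assumptions; this is not the authors' implementation, which additionally relies on the guided attention mechanism and beam search described in the paper.

```python
# Minimal sketch (assumptions, not the paper's code): a hierarchical encoder
# (word-level GRU -> utterance vectors -> conversation-level GRU) and a
# seq2seq decoder that emits one dialogue-act tag per utterance.
import torch
import torch.nn as nn


class HierarchicalDATagger(nn.Module):
    def __init__(self, vocab_size, n_tags, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Word-level encoder: one vector per utterance.
        self.word_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Conversation-level encoder: contextualizes utterance vectors.
        self.conv_rnn = nn.GRU(hid_dim, hid_dim, batch_first=True)
        # Decoder: predicts the tag at step t conditioned on the previous
        # gold tag (teacher forcing) and an attention context.
        self.tag_embed = nn.Embedding(n_tags, emb_dim)
        self.dec_rnn = nn.GRUCell(emb_dim + hid_dim, hid_dim)
        self.out = nn.Linear(hid_dim, n_tags)

    def forward(self, conv_tokens, prev_tags):
        # conv_tokens: (n_utts, max_len) token ids for one conversation
        # prev_tags:   (n_utts,) gold tag at step t-1, shifted right
        _, utt_vec = self.word_rnn(self.embed(conv_tokens))   # (1, n_utts, hid)
        enc_states, _ = self.conv_rnn(utt_vec)                # (1, n_utts, hid)
        enc_states = enc_states.squeeze(0)                    # (n_utts, hid)

        h = torch.zeros(1, enc_states.size(-1))
        logits = []
        for t in range(enc_states.size(0)):
            # Plain dot-product attention over encoder states (a stand-in
            # for the paper's guided attention).
            scores = enc_states @ h.squeeze(0)                # (n_utts,)
            weights = torch.softmax(scores, dim=0).unsqueeze(1)
            ctx = (weights * enc_states).sum(0)               # (hid,)
            inp = torch.cat([self.tag_embed(prev_tags[t]), ctx]).unsqueeze(0)
            h = self.dec_rnn(inp, h)
            logits.append(self.out(h))
        return torch.cat(logits, dim=0)                       # (n_utts, n_tags)
```

In this sketch a conversation is processed as a batch of utterances: token ids of shape (n_utts, max_len) go in, and one tag distribution per utterance comes out; at inference the gold previous tags would be replaced by the decoder's own predictions, e.g. via beam search as in the paper.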
Main file: 2002.08801 (665.3 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03134847, version 1 (04-01-2024)

Identifiers

HAL Id: hal-03134847
DOI: 10.1609/aaai.v34i05.6259

Cite

Pierre Colombo, Emile Chapuis, Matteo Manica, Emmanuel Vignon, Giovanna Varni, et al. Guiding Attention in Sequence-to-Sequence Models for Dialogue Act Prediction. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34 (05), pp. 7594-7601. ⟨10.1609/aaai.v34i05.6259⟩. ⟨hal-03134847⟩
59 Views
24 Downloads
