The Natural Language Generation Pipeline, Neural Text Generation and Explainability
Abstract
End-to-end encoder-decoder approaches to data-to-text generation are often black boxes whose predictions are difficult to explain. Breaking the end-to-end model up into submodules is a natural way to address this problem, and the traditional pre-neural Natural Language Generation (NLG) pipeline provides a framework for such a decomposition. We survey recent papers that integrate traditional NLG submodules into neural approaches and analyse their explainability. Our survey is a first step towards building explainable neural NLG models.
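To make the decomposition the abstract refers to concrete, the sketch below splits data-to-text generation into the classic pre-neural pipeline stages (content selection, text planning, surface realization). This is only an illustrative toy, not the method of any surveyed paper: the record format, stage names as function boundaries, and all implementations are placeholder assumptions.

```python
# Illustrative sketch: a data-to-text pipeline broken into the classic
# pre-neural NLG stages, in contrast to a single end-to-end encoder-decoder.
# Each intermediate output can be inspected, which is what makes the
# modular approach easier to explain. All stage bodies are toy placeholders.

from dataclasses import dataclass
from typing import List


@dataclass
class Record:
    """One input data record, e.g. an (entity, attribute, value) triple."""
    entity: str
    attribute: str
    value: str


def content_selection(records: List[Record]) -> List[Record]:
    """Decide which records to verbalise (placeholder: keep everything)."""
    return records


def text_planning(selected: List[Record]) -> List[List[Record]]:
    """Group and order records into sentence-sized plans (placeholder: one record per sentence)."""
    return [[r] for r in selected]


def surface_realization(plan: List[List[Record]]) -> str:
    """Turn each sentence plan into text (placeholder: a fixed template)."""
    sentences = [
        f"The {group[0].attribute} of {group[0].entity} is {group[0].value}."
        for group in plan
    ]
    return " ".join(sentences)


def generate(records: List[Record]) -> str:
    """Run the modular pipeline end to end, exposing every intermediate step."""
    selected = content_selection(records)
    plan = text_planning(selected)
    return surface_realization(plan)


if __name__ == "__main__":
    data = [
        Record("Paris", "population", "2.1 million"),
        Record("Paris", "country", "France"),
    ]
    print(generate(data))
```

In a neural instantiation, each placeholder stage would be replaced by a learned module, while the inspectable intermediate representations (selected records, text plan) are what the surveyed papers exploit for explainability.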
Domains
Computer Science [cs]
Main file
Submission_Workshop_INLG2020___NL4XAI___8_Dec_2020.pdf (137.21 KB)
Origin: Files produced by the author(s)