Weak supervision for Question Type Detection with large language models
Abstract
Large pre-trained language models (LLMs) have shown remarkable zero-shot learning performance on many natural language processing tasks. However, designing effective prompts remains difficult for some tasks, in particular for dialogue act recognition. We propose an alternative way to leverage pretrained LLMs for such tasks that replaces manual prompts with simple rules, which are more intuitive and easier to design for some tasks. We demonstrate this approach on the question type recognition task and show that our zero-shot model achieves performance competitive both with a supervised LSTM trained on the full training corpus and with a previously published supervised model on the MRDA corpus. We further analyze the limits of the proposed approach, which cannot be applied to every task but may advantageously complement prompt programming for specific classes.
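To give a concrete sense of what "simple rules" for question type recognition might look like, here is a minimal Python sketch of rule-based weak labeling functions. The label set, patterns, and function names are illustrative assumptions, not the paper's actual rules; the abstract does not specify how the rule outputs are combined with the LLM.

```python
# Hypothetical sketch of rule-based weak labels for question types.
# The labels and regex patterns below are assumptions for illustration,
# not the rules used in the paper.
import re

ABSTAIN = None

def rule_wh_question(utterance: str):
    """Label utterances starting with a wh-word as wh-questions."""
    if re.match(r"^(who|what|when|where|why|how|which)\b", utterance.lower()):
        return "wh-question"
    return ABSTAIN

def rule_yes_no_question(utterance: str):
    """Label utterances starting with an auxiliary verb as yes/no questions."""
    if re.match(r"^(do|does|did|is|are|was|were|can|could|will|would|should)\b",
                utterance.lower()):
        return "yes-no-question"
    return ABSTAIN

def weak_label(utterance: str):
    """Return the first non-abstaining rule's label, else ABSTAIN."""
    for rule in (rule_wh_question, rule_yes_no_question):
        label = rule(utterance)
        if label is not ABSTAIN:
            return label
    return ABSTAIN

if __name__ == "__main__":
    for u in ["What time does the meeting start?",
              "Can you hear me?",
              "Let's move on to the next item."]:
        print(u, "->", weak_label(u))
```

Rules like these are arguably easier to write and debug than free-form prompts for a class such as wh-questions, which is the intuition the abstract appeals to.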
Domains
Artificial Intelligence [cs.AI]