Conference paper, 2022

Weak supervision for Question Type Detection with large language models

Abstract

Large pre-trained language models (LLMs) have shown remarkable zero-shot learning performance on many Natural Language Processing tasks. However, designing effective prompts remains difficult for some tasks, in particular for dialogue act recognition. We propose an alternative way to leverage pretrained LLMs for such tasks that replaces manual prompts with simple rules, which are more intuitive and easier to design for some tasks. We demonstrate this approach on the question type recognition task and show that our zero-shot model achieves performance competitive both with a supervised LSTM trained on the full training corpus and with another supervised model from previously published work on the MRDA corpus. We further analyze the limits of the proposed approach, which cannot be applied to every task, but may advantageously complement prompt programming for specific classes.
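The abstract only sketches the idea at a high level. As a purely illustrative example of what simple surface rules for question type labeling could look like, the snippet below assigns weak labels with a few handwritten heuristics; the word lists, label names, and rules are assumptions made here for illustration and are not the rules or the MRDA tag set used in the paper.

    import re

    # Hypothetical question-type labels and cue lists; the paper's actual rules differ.
    WH_WORDS = ("what", "who", "whom", "whose", "which", "where", "when", "why", "how")
    AUX_VERBS = ("do", "does", "did", "is", "are", "was", "were",
                 "can", "could", "will", "would", "should", "have", "has", "had")

    def rule_label(utterance: str) -> str:
        """Assign a weak question-type label to an utterance using surface rules."""
        text = utterance.strip().lower()
        tokens = re.findall(r"[a-z']+", text)
        if not tokens:
            return "statement"
        if tokens[0] in WH_WORDS:
            return "wh-question"
        if tokens[0] in AUX_VERBS:
            return "yes-no-question"
        if text.endswith("?"):
            return "declarative-question"
        return "statement"

    if __name__ == "__main__":
        for u in ["Where did the meeting end up?",
                  "Did you send the slides?",
                  "You already fixed that?",
                  "Let's move on to the next item."]:
            print(f"{rule_label(u):>20}  {u}")

Weak labels produced this way could then be compared with, or used to supervise, predictions from a pretrained LLM; how the paper combines rules and the LLM is not detailed in this record.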
Main file

paper.pdf (156.11 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03786135, version 1 (23-09-2022)

Identifiers

  • HAL Id: hal-03786135, version 1

Cite

Jiří Martínek, Christophe Cerisara, Pavel Král, Ladislav Lenc, Josef Baloun. Weak supervision for Question Type Detection with large language models. INTERSPEECH 2022, Sep 2022, Incheon, South Korea. ⟨hal-03786135⟩
137 Views
168 Downloads
