Scoring Multi-hop Question Decomposition Using Masked Language Models
Abstract
Question answering (QA) is a sub-field of Natural Language Processing (NLP) that focuses on developing systems capable of answering natural language queries. Within this domain, multi-hop question answering is an advanced QA task that requires gathering and reasoning over multiple pieces of information from diverse sources or passages. To handle the complexity of multi-hop questions, question decomposition has proven to be a valuable approach: it breaks complex questions down into simpler sub-questions, reducing the complexity of the problem. However, existing question decomposition methods often rely on training data, which may not be readily available for low-resource languages or specialized domains. To address this issue, we propose a novel approach that uses pre-trained masked language models to score decomposition candidates in a zero-shot manner. The method generates decomposition candidates, scores them with a pseudo-log-likelihood estimate, and ranks them by their scores. To evaluate the efficacy of the decomposition process, we conducted experiments on two datasets annotated with decompositions in two different languages, Arabic and English. We then integrated our approach into a complete QA system and evaluated reading comprehension performance on the HotpotQA dataset. The results show that, while the system exhibited a small drop in performance, it still maintained a substantial improvement over the baseline model. The proposed approach highlights the effectiveness of language model scoring in complex reasoning tasks such as multi-hop question decomposition.
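As a rough illustration of the scoring step described above, the sketch below computes a pseudo-log-likelihood for each decomposition candidate by masking one token at a time with a pre-trained masked language model and summing the log-probabilities of the original tokens. The model name, the candidate sub-question strings, and the absence of any length normalization are assumptions for illustration, not the paper's exact setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative model choice; any masked LM covering the target language
# (e.g. a multilingual BERT for Arabic and English) could be substituted.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Mask each token in turn and sum the log-probability the masked LM
    assigns to the original token at that position (pseudo-log-likelihood)."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    with torch.no_grad():
        for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
            masked = ids.clone()
            masked[i] = tokenizer.mask_token_id
            logits = model(masked.unsqueeze(0)).logits[0, i]
            log_probs = torch.log_softmax(logits, dim=-1)
            total += log_probs[ids[i]].item()
    return total

# Rank hypothetical decomposition candidates for a multi-hop question;
# the higher-scoring (more fluent) decomposition is preferred.
candidates = [
    "Who directed the film? In what year was that director born?",
    "Who directed in film the? What year born director?",
]
best = max(candidates, key=pseudo_log_likelihood)
print(best)
```

In a zero-shot setting of this kind, no task-specific training is needed: the ranking relies entirely on the fluency estimate the pre-trained masked LM assigns to each candidate decomposition.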
Domains
Computer Science [cs]
Main file
Multi_hop_Questions_Decomposition_Based_on_Masked_Language_Model_Scoring.pdf (416.97 KB)
Origin: Files produced by the author(s)