Journal article in Information. Year: 2023

Learned Text Representation for Amharic Information Retrieval and Natural Language Processing

Abstract

Over the past few years, word embeddings and bidirectional encoder representations from transformers (BERT) models have brought better solutions to learning text representations for natural language processing (NLP) and other tasks. Many NLP applications rely on pre-trained text representations, leading to the development of a number of neural network language models for various languages. However, this is not the case for Amharic, which is known to be a morphologically complex and under-resourced language; usable pre-trained models for automatic Amharic text processing are not available. This paper presents an investigation into the use of learned text representations for information retrieval and NLP tasks, using word embedding and BERT language models. We explored the most commonly used word embedding methods, including word2vec, GloVe, and fastText, as well as the BERT model. We investigated the performance of query expansion using word embeddings, and we analyzed the use of a pre-trained Amharic BERT model for masked language modeling, next sentence prediction, and text classification tasks. Amharic ad hoc information retrieval test collections that contain word-based, stem-based, and root-based text representations were used for evaluation. We conducted a detailed empirical analysis of the usability of word embeddings and BERT models on word-based, stem-based, and root-based corpora. Experimental results show that word-based query expansion and language modeling perform better than their stem-based and root-based counterparts, and that fastText outperforms the other word embeddings on the word-based corpus.
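To make the query-expansion idea above more concrete, the following minimal Python sketch expands each query term with its k nearest neighbours in a fastText embedding space before the expanded query is handed to a retrieval engine. This is an illustrative sketch, not the authors' pipeline: the gensim library, the model file name, the neighbourhood size k, and the sample query are all assumptions introduced here.

# Minimal sketch of word-embedding-based query expansion (illustrative only;
# not the authors' exact pipeline). Assumes a pre-trained Amharic fastText
# model in Facebook's .bin format; the file name below is hypothetical.
from gensim.models.fasttext import load_facebook_vectors


def expand_query(query: str, vectors, k: int = 3) -> str:
    """Append the k nearest neighbours (by cosine similarity) of each query term."""
    expanded = []
    for term in query.split():
        expanded.append(term)
        # Only expand in-vocabulary terms here; fastText could also handle
        # out-of-vocabulary terms through its subword n-grams.
        if term in vectors.key_to_index:
            expanded.extend(word for word, _score in vectors.most_similar(term, topn=k))
    return " ".join(expanded)


if __name__ == "__main__":
    vectors = load_facebook_vectors("amharic_fasttext.bin")  # hypothetical model path
    print(expand_query("ምርጫ ቦርድ", vectors, k=3))  # sample Amharic query, assumed here

The paper evaluates such expansion on word-based, stem-based, and root-based versions of the Amharic test collections; the sketch only illustrates the surface-word case.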
Main file
Learned Text Representation for Amharic Information Retrieval.pdf (31.47 MB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-04229854, version 1 (05-10-2023)

Licence

Attribution (CC BY)

Identifiers

Cite

Tilahun Yeshambel, Josiane Mothe, Yaregal Assabie. Learned Text Representation for Amharic Information Retrieval and Natural Language Processing. Information, 2023, 14 (3), Special Issue "Novel Methods and Applications in Natural Language Processing", pp. 1-23, article 195. ⟨10.3390/info14030195⟩. ⟨hal-04229854⟩