How do Transformer-Architecture Models Address Polysemy of Korean Adverbial Postpositions?
Abstract
Postpositions, which are characterized by multiple form-function associations and are thus polysemous, pose a challenge to the automatic identification of their usage. Several studies have used contextualized word-embedding models to reveal the functions of Korean postpositions. Despite the superior classification performance reported in these studies, it remains unclear exactly how such models resolve the polysemy of Korean postpositions. To add interpretability, we devised a classification model employing two transformer-architecture models (BERT and GPT-2) and introduce a computational simulation that interactively demonstrates how these transformer-architecture models simulate human interpretation of word-level polysemy involving the Korean adverbial postpositions -ey, -eyse, and -(u)lo. Results reveal that (i) the BERT model performs better than the GPT-2 model at classifying the intended function of postpositions, (ii) there is an inverse relationship between classification performance and the number of functions that each postposition manifests, (iii) model performance is affected by the corpus size of each function, (iv) the models' performance gradually improves as the epochs proceed, and (v) the models are affected by the scarcity of input and/or semantic closeness between the items.
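The full method is in the archived PDF; as a minimal sketch of the kind of setup the abstract describes, one can fine-tune a pretrained transformer to label the function of a postposition in context using the Hugging Face `transformers` library. The checkpoint (`bert-base-multilingual-cased`), the label set, and the example sentence below are illustrative assumptions, not the authors' actual configuration.

```python
# Sketch (not the authors' code): classifying the function of a Korean
# postposition with a BERT sequence classifier. Labels and checkpoint
# are assumed for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical label set: candidate functions of the postposition -ey
FUNCTIONS = ["LOCATION", "GOAL", "TIME", "CRITERION", "AGENT"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(FUNCTIONS)
)

# Toy input containing -ey; in practice the target occurrence could be
# marked so the classifier attends to the right token.
sentence = "그는 학교에 갔다"  # "He went to school": -ey marks GOAL here
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels)
predicted = FUNCTIONS[logits.argmax(dim=-1).item()]
print(predicted)  # arbitrary before fine-tuning; the head is untrained
```

In a real experiment the classification head would be fine-tuned on sentences annotated with the intended function of each postposition, with one model per postposition or a shared multi-class head, before evaluating predictions like the one above.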
Domains
Computation and Language [cs.CL]; Artificial Intelligence [cs.AI]; Data Structures and Algorithms [cs.DS]; Graphics [cs.GR]; Human-Computer Interaction [cs.HC]; Document and Text Processing; Linguistics; Computation [stat.CO]; Methodology [stat.ME]; Machine Learning [stat.ML]
Main file
How do Transformer-Architecture Models Address Polysemy of Korean Adverbial Postpositions.pdf (4.28 MB)
Origin: Publisher files allowed on an open archive