Generating from AMRs into High and Low-Resource Languages using Phylogenetic Knowledge and Hierarchical QLoRA Training (HQL)
Abstract
Previous work on multilingual generation from Abstract Meaning Representations has mostly focused on High- and Medium-Resource languages, relying on large amounts of training data. In this work, we consider both High- and Low-Resource languages, capping the training data size at the lower bound set by our Low-Resource languages, i.e., 31K training instances. We propose two straightforward techniques to enhance generation results for Low-Resource languages while preserving performance on High- and Medium-Resource languages. First, we iteratively refine a multilingual model into a set of monolingual models using Low-Rank Adaptation; this enables cross-lingual transfer while reducing over-fitting for High-Resource languages, as the monolingual models are trained last. Second, we base our training curriculum on a tree structure, which permits investigating how the languages used at each iteration impact generation performance on High- and Low-Resource languages. We show an improvement over both mono- and multilingual approaches. Comparing different ways of grouping languages at each iteration step, we find two beneficial configurations: grouping related languages, which promotes transfer, or grouping distant languages, which facilitates regularisation.
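
The following is a minimal, illustrative Python sketch of the hierarchical refinement idea summarised above, assuming a Hugging Face causal language model and the peft library. The language tree, the train_adapter helper, the backbone, and all hyperparameters are hypothetical placeholders rather than the configuration used in this work, and the quantisation step of QLoRA is omitted for brevity.

import copy
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical phylogenetic grouping: internal nodes are language groups,
# empty dicts are individual languages (leaves).
LANGUAGE_TREE = {
    "germanic": {"de": {}, "nl": {}},
    "romance": {"es": {}, "it": {}},
}

def list_leaves(node):
    """Collect the individual languages below a tree node."""
    leaves = []
    for name, child in node.items():
        leaves.extend(list_leaves(child) if child else [name])
    return leaves

def train_adapter(model, languages):
    """Placeholder: run a standard (Q)LoRA training loop on the
    AMR-to-text data of the given language group, e.g. with
    transformers.Trainer. Omitted here for brevity."""
    ...

def refine(base_model, node, lora_cfg):
    """Train one adapter per tree node, top-down: multilingual groups
    first, single languages last, merging each adapter into the
    backbone before descending."""
    peft_model = get_peft_model(base_model, lora_cfg)
    train_adapter(peft_model, languages=list_leaves(node))
    merged = peft_model.merge_and_unload()

    monolingual_models = {}
    for name, child in node.items():
        branch_base = copy.deepcopy(merged)  # keep branches independent
        if child:  # internal node: recurse into the language subgroup
            monolingual_models.update(refine(branch_base, child, lora_cfg))
        else:      # leaf: the final refinement step is monolingual
            leaf_model = get_peft_model(branch_base, lora_cfg)
            train_adapter(leaf_model, languages=[name])
            monolingual_models[name] = leaf_model
    return monolingual_models

if __name__ == "__main__":
    # Backbone and LoRA hyperparameters are illustrative only.
    backbone = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
    cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                     target_modules=["query_key_value"],
                     task_type="CAUSAL_LM")
    monolingual_models = refine(backbone, LANGUAGE_TREE, cfg)

In this sketch, the root call corresponds to the fully multilingual iteration over all leaves, and each monolingual model results from the last, language-specific refinement step, so earlier multilingual iterations act as intermediate training stages rather than the final objective.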