Conference paper, Year: 2024

An Empirical Study on Leveraging LLMs for Metamodels and Code Co-evolution

Abstract

Metamodels play an important role in MDE and in specifying a software language. They are the cornerstone for generating other artifacts at a lower abstraction level, such as code. Developers then enrich the generated code to build their language services and tooling, e.g., editors and checkers. When a metamodel evolves, part of the code is regenerated and all of the developers' additional code can be impacted, thus requiring the erroneous code to be co-evolved accordingly. In this paper, we explore a novel approach that mitigates the impact of metamodel evolution on the code by using LLMs. Indeed, LLMs stand as promising tools for tackling increasingly complex problems and for supporting developers in various tasks of writing, correcting, and documenting source code, models, and other artifacts. However, while the capabilities of LLMs in generating models, code, and tests have been extensively assessed empirically, there is a lack of work on their ability to support the maintenance of such artifacts. In this paper, we focus on the particular problem of metamodel and code co-evolution. We first designed a prompt template structure that contains contextual information about the metamodel changes, the abstraction gap between the metamodel and the code, and the erroneous code to co-evolve. To investigate the usefulness of this template, we also generated three variations of the prompts. The generated prompts are then given to the LLM to co-evolve the impacted code. We evaluated our generated prompts and their three variations with ChatGPT version 3.5 on seven Eclipse projects impacted by the evolved OCL and Modisco metamodels. Results show that ChatGPT can correctly co-evolve 88.7% of the errors due to metamodel evolution, with correctness rates varying from 75% to 100%. When varying the prompts, we observed increased correctness for two variants and decreased correctness for another variant. We also observed that varying the temperature hyperparameter yields better results with lower temperatures. Our results are based on a total of 5320 generated prompts. Finally, when compared to the quick fixes of the IDE, the co-evolutions obtained from the generated prompts completely outperform the quick fixes.
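As an illustration only, the sketch below is not the authors' template: the prompt wording, the function names, and the mapping hint are assumptions. It shows how a prompt combining the three kinds of context described above (metamodel change, abstraction gap, erroneous code) could be assembled and sent to ChatGPT 3.5 through the OpenAI Python client, using a low temperature as the results suggest.

# Minimal sketch, assuming the OpenAI Python client (openai >= 1.0) and an
# OPENAI_API_KEY environment variable; the prompt layout is hypothetical.
from openai import OpenAI

client = OpenAI()

def build_prompt(metamodel_change: str, abstraction_gap: str, broken_code: str) -> str:
    """Combine the three kinds of context described in the abstract."""
    return (
        "The following metamodel change broke the code below.\n"
        f"Metamodel change: {metamodel_change}\n"
        f"How metamodel elements map to code: {abstraction_gap}\n"
        "Erroneous code to co-evolve:\n"
        f"{broken_code}\n"
        "Return only the corrected code."
    )

def co_evolve(metamodel_change: str, abstraction_gap: str, broken_code: str) -> str:
    # ChatGPT 3.5, as evaluated in the paper; a low temperature performed better.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0.2,
        messages=[{"role": "user", "content": build_prompt(
            metamodel_change, abstraction_gap, broken_code)}],
    )
    return response.choices[0].message.content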
Main file
article6.pdf (1.73 MB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-04667772, version 1 (07-08-2024)

Identifiers

Cite

Zohra Kaouter Kebaili, Djamel Eddine Khelladi, Mathieu Acher, Olivier Barais. An Empirical Study on Leveraging LLMs for Metamodels and Code Co-evolution. ECMFA 2024 - European Conference on Modelling Foundations and Applications, Jul 2024, Enschede, Netherlands. pp.1-14, ⟨10.5381/jot.2024.23.3.a6⟩. ⟨hal-04667772⟩
52 Views
36 Downloads
