Conference paper, year: 2023

A manual categorization of new quality issues on automatically-generated tests

Abstract

Diverse studies have analyzed the quality of automatically generated test cases using test smells as the main quality attribute. However, recent work reported that generated tests may suffer from a number of quality issues not necessarily considered in previous studies. Little is known about these issues and their frequency within generated tests. In this paper, we report on a manual analysis of an external dataset of 2,340 automatically generated tests. This analysis aimed to detect new quality issues not covered by previously recognized test smells. We use thematic analysis to group and categorize the new quality issues found. As a result, we propose a taxonomy of 13 new quality issues grouped into four categories. We also report on the frequency of these new quality issues within the dataset and present eight recommendations that test generators may consider to improve the quality and usefulness of automatically generated tests.
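Purely for illustration, and not drawn from the paper's dataset, the hypothetical Java snippet below sketches the kind of quality issue an automated test generator can produce: an opaque test name, unrealistic input values, and an assertion that merely mirrors observed behavior. All class and method names here are assumptions; ShoppingCart is a minimal stand-in so the example is self-contained.

    // Hypothetical illustration of a generated test with common quality issues.
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    class ShoppingCart {                          // minimal stand-in for the class under test
        private int count;
        void addItem(String name, int quantity) { count++; }
        int getItemCount() { return count; }
    }

    public class ShoppingCart_ESTest {
        @Test
        public void test05() throws Throwable {   // opaque name: intent is not stated
            ShoppingCart shoppingCart0 = new ShoppingCart();
            shoppingCart0.addItem("", (-1));       // unrealistic inputs: empty name, negative quantity
            int int0 = shoppingCart0.getItemCount();
            assertEquals(1, int0);                 // asserts what the code does, not what it should do
        }
    }

Such a test passes, but it documents current behavior rather than intended behavior, which is one reason generated tests can be hard to read and maintain.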
Main file: conference_101719.pdf (226.53 KB)
Origin: Files produced by the author(s)
License: CC BY - Attribution

Dates and versions

hal-04344531, version 1 (14-12-2023)

License

CC BY - Attribution

Identifiers

Cite

Geraldine Galindo-Gutierrez, Maximiliano Narea, Alison Fernandez Blanco, Nicolas Anquetil, Juan Pablo Sandoval Alcocer. A manual categorization of new quality issues on automatically-generated tests. Proceedings of the 39th IEEE International Conference on Software Maintenance and Evolution (ICSME'23), IEEE Computer Society, Oct 2023, Bogota, Colombia. ⟨10.1109/ICSME58846.2023.00035⟩. ⟨hal-04344531⟩
