A manual categorization of new quality issues on automatically-generated tests
Conference paper, 2023

A manual categorization of new quality issues on automatically-generated tests

Abstract

Several studies have analyzed the quality of automatically generated test cases using test smells as the main quality attribute. However, recent work reported that generated tests may suffer from a number of quality issues not necessarily considered in previous studies. Little is known about these issues and their frequency within generated tests. In this paper, we report on a manual analysis of an external dataset consisting of 2,340 automatically generated tests. This analysis aimed at detecting new quality issues not covered by previously recognized test smells. We use thematic analysis to group and categorize the new quality issues found. As a result, we propose a taxonomy of 13 new quality issues grouped into four categories. We also report on the frequency of these new quality issues within the dataset and present eight recommendations that test generators may consider to improve the quality and usefulness of the automatically generated tests.
Origin: Files produced by the author(s)
License: CC BY - Attribution

Dates and versions

hal-04344531 , version 1 (14-12-2023)

Cite

Geraldine Galindo-Gutierrez, Maximiliano Narea, Alison Fernandez Blanco, Nicolas Anquetil, Juan Pablo Sandoval Alcocer. A manual categorization of new quality issues on automatically-generated tests. Proceedings of the 39th IEEE International Conference on Software Maintenance and Evolution (ICSME'23), IEEE Computer Society, Oct 2023, Bogota, Colombia. ⟨10.1109/ICSME58846.2023.00035⟩. ⟨hal-04344531⟩