Augmenting Equivalent Mutant Dataset Using Symbolic Execution

Abstract
Mutation testing aims to ensure that a test suite is capable of detecting real faults, by checking whether it can reveal (i.e., kill) small and arbitrary lexical changes made to the program (i.e., mutants). Some of these arbitrary changes may result in a mutant that is syntactically different but is semantically equivalent to the original program under test: such mutants are called equivalent mutants. Since program equivalence is undecidable in general, equivalent mutants pose a serious challenge to mutation testing. Given an unkilled mutant, it is not possible to automatically decide whether the cause is the weakness of the test cases or the equivalence of the mutant. Recently, machine learning has been adopted to train binary classification models for mutant equivalence. However, training such classification models requires a pool of equivalent mutants, the labelling of which involves a significant amount of human investigation. In this paper, we introduce two techniques that can be used to augment equivalent mutant benchmarks. First, we propose a symbolic execution-based validation of mutant equivalence, instead of manual classification. Second, we introduce a synthesis technique for equivalent mutants: for a subset of mutation operators, the technique identifies potential mutation locations that are guaranteed to produce equivalent mutants. We compare these two techniques to MutantBench, a manually labelled equivalent mutant benchmark. For the 19 programs studied, MutantBench contains 462 equivalent mutants, whereas our technique is capable of generating 1,725 equivalent mutants automatically, of which 1,349 are new and unique. We further show that the additional equivalent mutants can lead to more accurate equivalent mutant classification models.
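
To make the notion of an equivalent mutant, and its solver-based validation, concrete, the following is a minimal illustrative sketch; it is not the paper's implementation, and both the toy program fragment and the choice of Z3's Python bindings are assumptions made here for illustration. A relational operator '>' is mutated to '>=', and the solver is asked for an input on which the original and the mutant disagree; since no such input exists, the mutant is equivalent.

from z3 import Int, If, Solver, sat

x = Int("x")

# Original fragment: return x if x > 0, otherwise 0.
original = If(x > 0, x, 0)

# Mutant: the relational operator '>' was changed to '>='.
# When x == 0 both versions return 0, so the change is unobservable.
mutant = If(x >= 0, x, 0)

solver = Solver()
solver.add(original != mutant)  # ask for an input that distinguishes them

if solver.check() == sat:
    print("Mutant is killable, e.g. with x =", solver.model()[x])
else:
    print("No distinguishing input exists: the mutant is equivalent.")

A full validation pipeline would need to encode entire program paths rather than a single expression, which is where symbolic execution of the original and mutated programs comes in.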
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2022-04-04
Language
English
Citation

14th IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW 2022), pp. 150-159

DOI
10.1109/ICSTW55395.2022.00038
URI
http://hdl.handle.net/10203/299443
Appears in Collection
CS-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.
