Empirical evaluation of mutation-based test case prioritization techniques

Cited 13 times in Web of Science · Cited 17 times in Scopus
DC Field: Value
dc.contributor.author: Shin, Donghwan (ko)
dc.contributor.author: Yoo, Shin (ko)
dc.contributor.author: Papadakis, Mike (ko)
dc.contributor.author: Bae, Doo-Hwan (ko)
dc.date.accessioned: 2019-03-19T01:25:11Z
dc.date.available: 2019-03-19T01:25:11Z
dc.date.created: 2019-01-29
dc.date.issued: 2019-03
dc.identifier.citation: Software Testing, Verification and Reliability, v.29, no.1-2, pp.e1695
dc.identifier.issn: 0960-0833
dc.identifier.uri: http://hdl.handle.net/10203/251621
dc.description.abstract: In this paper, we propose a new test case prioritization technique that combines both mutation-based and diversity-aware approaches. The diversity-aware mutation-based technique relies on the notion of mutant distinguishment, which aims to distinguish one mutant's behaviour from another, rather than from the original program. The relative cost and effectiveness of the mutation-based prioritization techniques (i.e., using both the traditional mutant kill and the proposed mutant distinguishment) are empirically investigated with 352 real faults and 553,477 developer-written test cases. The empirical evaluation considers both the traditional and the diversity-aware mutation criteria in various settings: single-objective greedy, hybrid, and multi-objective optimization. The results show that there is no single dominant technique across all the studied faults. To this end, the reason why each one of the mutation-based prioritization criteria performs poorly is discussed, using a graphical model called Mutant Distinguishment Graph that demonstrates the distribution of the fault-detecting test cases with respect to mutant kills and distinguishment. © 2018 John Wiley & Sons, Ltd.
dc.publisher: Wiley
dc.title: Empirical evaluation of mutation-based test case prioritization techniques
dc.type: Article
dc.identifier.wosid: 000458911000002
dc.identifier.scopusid: 2-s2.0-85058967207
dc.type.rims: ART
dc.citation.volume: 29
dc.citation.issue: 1-2
dc.citation.beginningpage: e1695
dc.citation.publicationname: Software Testing, Verification and Reliability
dc.identifier.doi: 10.1002/stvr.1695
dc.contributor.localauthor: Yoo, Shin
dc.contributor.localauthor: Bae, Doo-Hwan
dc.contributor.nonIdAuthor: Papadakis, Mike
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: mutation testing
dc.subject.keywordAuthor: test case prioritization
dc.subject.keywordAuthor: regression testing
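The single-objective greedy setting mentioned in the abstract can be sketched as follows. This is a minimal illustration of "additional" greedy prioritization by mutant kills, not the authors' implementation; the `kill_matrix` data, test names, and `greedy_prioritize` function are hypothetical examples.

```python
# "Additional" greedy test case prioritization by mutant kills:
# repeatedly pick the test that kills the most not-yet-killed mutants.
# kill_matrix maps each test to the set of mutants it kills (hypothetical data).

def greedy_prioritize(kill_matrix):
    remaining = set(kill_matrix)                     # tests not yet scheduled
    uncovered = set().union(*kill_matrix.values())   # mutants not yet killed
    order = []
    while remaining:
        # Pick the test killing the most currently unkilled mutants;
        # sorting first makes ties break deterministically by test name.
        best = max(sorted(remaining),
                   key=lambda t: len(kill_matrix[t] & uncovered))
        order.append(best)
        uncovered -= kill_matrix[best]
        remaining.remove(best)
    return order

# Hypothetical kill matrix: tests t1..t3 versus mutants m1..m4.
kills = {
    "t1": {"m1", "m2"},
    "t2": {"m2", "m3", "m4"},
    "t3": {"m1"},
}
print(greedy_prioritize(kills))  # ['t2', 't1', 't3']
```

The same greedy loop applies to the diversity-aware criterion by replacing the kill sets with sets of distinguished mutant pairs.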
Appears in Collection
CS-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS
