An empirical evaluation of six methods to detect faults in software

Cited 21 times in Web of Science; cited 0 times in Scopus
  • Hits: 833
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | S.S.So | ko
dc.contributor.author | Cha, Sungdeok | ko
dc.contributor.author | Timothy J. Shimeall | ko
dc.contributor.author | Kwon, Yong Rae | ko
dc.date.accessioned | 2013-03-06T05:38:32Z | -
dc.date.available | 2013-03-06T05:38:32Z | -
dc.date.created | 2012-02-06 | -
dc.date.issued | 2002-10 | -
dc.identifier.citation | SOFTWARE TESTING VERIFICATION & RELIABILITY, v.12, no.3, pp.155 - 171 | -
dc.identifier.issn | 0960-0833 | -
dc.identifier.uri | http://hdl.handle.net/10203/85984 | -
dc.description.abstract | Although numerous empirical studies have been conducted to measure the fault detection capability of software analysis methods, few studies have been conducted using programs of similar size and characteristics. Therefore, it is difficult to derive meaningful conclusions on the relative detection ability and cost-effectiveness of various fault detection methods. In order to compare fault detection capability objectively, experiments must be conducted using the same set of programs to evaluate all methods and must involve participants who possess comparable levels of technical expertise. One such experiment was 'Conflict1', which compared voting, a testing method, self-checks, code reading by stepwise refinement and data-flow analysis methods on eight versions of a battle simulation program. Since an inspection method was not included in the comparison, the authors conducted a follow-up experiment 'Conflict2', in which five of the eight versions from Conflict1 were subjected to Fagan inspection. Conflict2 examined not only the number and types of faults detected by each method, but also the cost-effectiveness of each method, by comparing the average amount of effort expended in detecting faults. The primary findings of the Conflict2 experiment are the following. First, voting detected the largest number of faults, followed by the testing method, Fagan inspection, self-checks, code reading and data-flow analysis. Second, the voting, testing and inspection methods were largely complementary to each other in the types of faults detected. Third, inspection was far more cost-effective than the testing method studied. Copyright (C) 2002 John Wiley & Sons, Ltd. | -
dc.language | English | -
dc.publisher | Wiley-Blackwell | -
dc.subject | INSPECTIONS | -
dc.title | An empirical evaluation of six methods to detect faults in software | -
dc.type | Article | -
dc.identifier.wosid | 000178446700003 | -
dc.identifier.scopusid | 2-s2.0-0036741472 | -
dc.type.rims | ART | -
dc.citation.volume | 12 | -
dc.citation.issue | 3 | -
dc.citation.beginningpage | 155 | -
dc.citation.endingpage | 171 | -
dc.citation.publicationname | SOFTWARE TESTING VERIFICATION & RELIABILITY | -
dc.contributor.localauthor | Kwon, Yong Rae | -
dc.contributor.nonIdAuthor | S.S.So | -
dc.contributor.nonIdAuthor | Timothy J. Shimeall | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | experiments | -
dc.subject.keywordAuthor | software testing | -
dc.subject.keywordAuthor | code reading | -
dc.subject.keywordAuthor | self-checks | -
dc.subject.keywordAuthor | Fagan inspection | -
dc.subject.keywordAuthor | N-version voting | -
dc.subject.keywordPlus | INSPECTIONS | -
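
As the abstract notes, voting over multiple independently developed versions detected the largest number of faults in the Conflict2 experiment. For readers unfamiliar with the technique, the sketch below illustrates the basic idea of N-version voting: run the versions on the same input, take the majority output as the presumed-correct result, and flag dissenting versions as probable fault sites. This is an illustration only, not code from the paper; the function name, version labels and output values are invented for the example.

```python
from collections import Counter

def majority_vote(outputs):
    """Vote on one test input.

    outputs: dict mapping version name -> output value.
    The most common output is taken as the presumed-correct result;
    versions that disagree are flagged as probable fault sites.
    """
    winner, _ = Counter(outputs.values()).most_common(1)[0]
    dissenters = [name for name, value in outputs.items() if value != winner]
    return winner, dissenters

# Hypothetical example: three versions of the same routine run on one input.
outputs = {"v1": 42, "v2": 42, "v3": 41}
winner, dissenters = majority_vote(outputs)
print(f"voted result: {winner}, versions flagged: {dissenters}")
# voted result: 42, versions flagged: ['v3']
```

In the experiment itself, the voted artifacts were eight versions of a battle simulation program exercised on common test cases; the snippet only shows how a disagreement between versions turns into a fault report.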
Appears in Collection:
CS-Journal Papers (저널논문)
Files in This Item
There are no files associated with this item.