Researchers and engineers in software testing have valued coverage as a testing metric for decades. Various empirical results have shown that as coverage increases, a test suite's ability to detect faults also increases, and numerous coverage criteria have been introduced as a result. Which coverage criteria correlate better with fault detection, and which correlate worse? In other words, to obtain a good fault detection rate, is it more worthwhile to achieve a higher percentage of one kind of coverage (c1) than of another (c2)? Do the popular block and branch coverage criteria perform better, or does path coverage outperform them? Answering these questions will help future engineers and researchers generate more efficient test suites and obtain a better metric of measurement; it also aids test suite minimization. This thesis studies the relationship between coverage and mutant kill rates over large, randomly generated test suites, for statement, branch, predicate, and path coverage of two realistic programs, in order to answer these open questions. The experiments both confirm conventional wisdom about these coverage criteria and contain a few surprises. / Graduation date: 2013
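The abstract does not state what tooling the thesis used for its analysis. As a minimal illustration of the kind of relationship being measured, the following Python sketch computes the Pearson correlation between per-suite coverage and mutant kill rate; the suite data and the helper function `pearson` are hypothetical and purely illustrative.

    # Hypothetical sketch: correlating test-suite coverage with mutant kill rate.
    # The numbers below are invented for illustration, not taken from the thesis.
    from math import sqrt

    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Each entry: (branch coverage %, mutant kill rate %) for one randomly
    # generated test suite.
    suites = [(42.0, 31.5), (58.3, 47.0), (71.9, 60.2), (85.4, 74.8), (93.1, 81.0)]

    coverage = [c for c, _ in suites]
    kill_rate = [k for _, k in suites]

    print(f"correlation(branch coverage, kill rate) = {pearson(coverage, kill_rate):.3f}")

Repeating this computation for each coverage criterion (statement, branch, predicate, path) over the same collection of suites would allow the criteria's correlations with fault detection to be compared, which is the comparison the abstract describes.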
Identifier | oai:union.ndltd.org:ORGSU/oai:ir.library.oregonstate.edu:1957/31111 |
Date | 14 June 2012 |
Creators | Shamasunder, Shalini |
Contributors | Groce, Alex |
Source Sets | Oregon State University |
Language | en_US |
Detected Language | English |
Type | Thesis/Dissertation |