A Software Product Line (SPL) is a family of related programs, each of which is defined by a combination of
features. By developing related programs together, an SPL simultaneously reduces programming effort and satisfies multiple sets of requirements. Testing an SPL efficiently is challenging because a property must be checked for every program in the SPL, and the number of programs can be exponential in the number of features.
In this dissertation, we present a suite of complementary static and dynamic techniques for efficient testing and runtime monitoring of SPLs, which fall into two categories. The first category prunes programs, termed configurations, that are irrelevant to the property being tested. More specifically, for a given test, a static analysis identifies the features that can influence the test outcome, so that the test needs to be run only on configurations that include those features. A dynamic analysis counterpart also eliminates configurations that do not have to be tested, but does so by checking a simpler property, which can make it faster and more scalable. In addition, for runtime monitoring,
a static analysis identifies the configurations that can violate a safety property, so that only those configurations need to be monitored.
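To make the pruning idea concrete, the following is a minimal, hypothetical sketch (not the dissertation's implementation): assuming the analysis reports the relevant features for a test as a list of feature names, only the configurations over those features are enumerated and run, rather than all combinations of the SPL's features.

import java.util.*;

// Hypothetical sketch: enumerate only the configurations over the features
// that a static analysis reports as relevant to a given test, instead of
// all 2^n configurations of the product line.
public class RelevantConfigs {

    // All feature combinations over the relevant features only.
    static List<Set<String>> configurationsToTest(List<String> relevantFeatures) {
        List<Set<String>> configs = new ArrayList<>();
        int n = relevantFeatures.size();
        for (int mask = 0; mask < (1 << n); mask++) {
            Set<String> config = new HashSet<>();
            for (int i = 0; i < n; i++) {
                if ((mask & (1 << i)) != 0) {
                    config.add(relevantFeatures.get(i));
                }
            }
            configs.add(config);
        }
        return configs;
    }

    public static void main(String[] args) {
        // Suppose the SPL has five features, but the analysis reports that only
        // the (hypothetical) features CACHE and LOGGING can influence this test:
        // 2^2 = 4 configurations need to be run instead of 2^5 = 32.
        List<Set<String>> configs = configurationsToTest(List.of("CACHE", "LOGGING"));
        System.out.println(configs.size() + " configurations to run: " + configs);
    }
}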
When no configurations can be pruned, whether because of how the test is designed or because the program analyses are ineffective,
the second category of techniques exploits runtime similarity between configurations, which arises from the design similarity among the configurations of a product line. In particular, shared execution runs all the configurations together, executing bytecode instructions common to the configurations only once. Deferred execution improves on shared execution by
allowing multiple memory locations to be treated as a single memory location, which can increase the amount of sharing for object-oriented programs and for programs
that use arrays.
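The intuition behind shared execution can be illustrated with a small, hypothetical sketch (the names and data structures below are illustrative assumptions, not the dissertation's design): configurations are grouped by the value they currently hold, and an instruction is executed once per group rather than once per configuration.

import java.util.*;
import java.util.function.Function;

// Hypothetical sketch of the intuition behind shared execution: each distinct
// value is paired with the set of configurations that hold it, and an
// operation runs once per group instead of once per configuration.
class SharedValue<T> {
    final Map<T, Set<String>> configsByValue = new HashMap<>();

    static <T> SharedValue<T> of(T value, Set<String> configs) {
        SharedValue<T> v = new SharedValue<>();
        v.configsByValue.put(value, new HashSet<>(configs));
        return v;
    }

    // Execute 'op' once for every distinct operand value; configurations that
    // agree on the operand share a single execution of the instruction.
    <R> SharedValue<R> apply(Function<T, R> op) {
        SharedValue<R> result = new SharedValue<>();
        for (Map.Entry<T, Set<String>> e : configsByValue.entrySet()) {
            R r = op.apply(e.getKey()); // executed once per group
            result.configsByValue
                  .computeIfAbsent(r, k -> new HashSet<>())
                  .addAll(e.getValue());
        }
        return result;
    }
}

public class SharedExecutionDemo {
    public static void main(String[] args) {
        // Configurations c1 and c2 agree on the operand, c3 differs, so the
        // operation runs twice instead of three times.
        SharedValue<Integer> input = SharedValue.of(10, Set.of("c1", "c2"));
        input.configsByValue.put(7, new HashSet<>(Set.of("c3")));

        SharedValue<Integer> output = input.apply(x -> x * x);
        System.out.println(output.configsByValue); // e.g. {100=[c1, c2], 49=[c3]}
    }
}

Deferred execution, as described above, goes further by letting several memory locations be represented as one, so that the same kind of grouping applies even when object-oriented programs or array-manipulating programs place configuration-specific data in different locations.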
The techniques have been evaluated, and the results demonstrate that they can be effective and support the idea that, despite the feature combinatorics of an SPL, automated analyses can exploit its structure to make testing more efficient.
Identifier | oai:union.ndltd.org:UTEXAS/oai:repositories.lib.utexas.edu:2152/23319
Date | 25 February 2014
Creators | Kim, Chang Hwan Peter |
Source Sets | University of Texas |
Detected Language | English |
Type | Thesis |
Format | application/pdf |