
Automated testing of evolving software

Computers and the software they run are pervasive, yet released software is often unreliable, and this has many consequences. Application software (such as word processors) behaving incorrectly or crashing causes loss of time and earnings. Serious disruption can occur, as in the 14th August 2003 blackouts in the north-east USA and Canada [1], and serious injury or death can result, as in the Therac-25 overdose incidents. One way to improve the quality of software is to test it thoroughly. However, software testing is time consuming, the resources, capabilities and skills needed to carry it out are often not available, and the time allowed is often curtailed by pressure to meet delivery deadlines [3]. Automation should allow more thorough testing in the time available and so improve the quality of delivered software, but automation brings problems of its own, which this research addresses.

Firstly, it is difficult to determine whether the system under test (SUT) has passed or failed a test. This is known as the oracle problem [4] and is often ignored in software testing research. Secondly, many software development organisations use an iterative and incremental process, known as evolutionary development, to write software. Following release, software continues to evolve as customers demand new features and improvements to existing ones [5]. This evolution means that automated test suites must be maintained throughout the life of the software.

A contribution of this research is a methodology that addresses the automatic generation of test cases, the execution of those test cases, and the evaluation of the outcomes from running each test. "Predecessor" software is used to solve the oracle problem. This is software that already exists, such as a previous version of the evolving software, or software from a different vendor that solves the same, or similar, problems. However, the resulting oracle is not assumed to be perfect, so rules are defined in an interface and used by the evaluator during test evaluation to handle the expected differences. The interface also specifies the functional inputs and outputs of the SUT. An algorithm has been developed that creates a Markov Chain Transition Matrix (MCTM) model of the SUT from the interface. Tests are then generated automatically by performing a random walk over the MCTM. As a result, instead of maintaining a large suite of tests, or a large model of the SUT, only the interface needs to be maintained.
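The test-generation step described above lends itself to a small illustration. The following sketch is not taken from the thesis; the operation names, transition probabilities and tolerance rule are all hypothetical. It shows the general idea of generating a test case by a random walk over a Markov Chain Transition Matrix, and of an interface rule that treats an expected difference between the SUT and its predecessor oracle as acceptable.

# Minimal sketch (not the thesis implementation): a random walk over an
# MCTM to produce a test sequence, plus one interface rule for handling
# an expected difference between SUT and predecessor. Names are illustrative.

import random

# Hypothetical MCTM: each state (an operation on the SUT) maps to the
# possible next operations with their transition probabilities.
MCTM = {
    "open":  {"write": 0.6, "read": 0.3, "close": 0.1},
    "write": {"write": 0.4, "read": 0.4, "close": 0.2},
    "read":  {"write": 0.3, "read": 0.3, "close": 0.4},
    "close": {"open": 1.0},
}

def random_walk(start, steps, seed=0):
    """Generate one test case: a sequence of operations drawn from the MCTM."""
    rng = random.Random(seed)
    state, walk = start, [start]
    for _ in range(steps):
        nxt, probs = zip(*MCTM[state].items())
        state = rng.choices(nxt, weights=probs, k=1)[0]
        walk.append(state)
    return walk

def outputs_match(sut_out, oracle_out, tolerance=1e-6):
    """Example interface rule: small numeric drift between the SUT and its
    predecessor is an expected difference, not a failure."""
    return abs(sut_out - oracle_out) <= tolerance

if __name__ == "__main__":
    test_case = random_walk("open", steps=10)
    print("generated test case:", " -> ".join(test_case))
    # The evaluator would run the same sequence against both the SUT and the
    # predecessor, then apply rules such as outputs_match to each output pair.
    print(outputs_match(1.0000001, 1.0))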

Identifier: oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:425759
Date: January 2005
Creators: Shaw, Hazel Anne
Publisher: University of Bedfordshire
Source Sets: Ethos UK
Detected Language: English
Type: Electronic Thesis or Dissertation
Source: http://hdl.handle.net/10547/305743
