Based on a case study, this dissertation examines the feasibility of using model programs as a practical solution to the oracle problem in software testing. The case study focuses on testing algorithmically complex software and evaluates the approach proposed in this dissertation against testing based on manual outcome prediction. In essence, the experiment entailed developing a model program for testing a medium-size industrial application that implements a complex scheduling algorithm.

One of the most difficult tasks in software testing is to decide whether a program passed or failed a test. Because this usually requires "predicting" the correct program outcome, the problem of devising a mechanism for correctness checking (i.e., a "test oracle") is commonly referred to as the "oracle problem". In practice, the most direct solution to the oracle problem is to pre-calculate the expected program outcomes manually. However, especially for algorithmically complex software, that is usually time-consuming and error-prone. Although alternatives to the manual approach have been suggested in the testing literature, only a few formal experiments have been conducted to evaluate them.

A potential alternative to manual outcome prediction, which is evaluated in this dissertation, is to write one or more model programs that conform to the same functional specification (or to parts of that specification) as the primary program (i.e., the software to be delivered). Subjected to the same input, the programs should produce identical outputs. Disagreements indicate either the presence of software faults or specification defects. The absence of disagreements does not guarantee the correctness of the results, since the programs may erroneously agree on outputs. However, if the test data is adequate and the implementations are diverse, it is unlikely that the programs will consistently fail and still reach agreement. This testing approach is based on a principle applied primarily in software fault tolerance: "N-version diversity". In this dissertation, the approach is called "testing using M model programs" or, in short, "M-mp testing".

The advantage of M-mp testing is that the programs, together, constitute an approximate but continually improving test oracle. Human assistance is required only to analyse and arbitrate program disagreements. Consequently, the testing process can be automated to a very large degree. The main disadvantage of the approach is the extra effort required to construct and maintain the model programs.

The case study presented in this dissertation provides prima facie evidence that the M-mp approach may be more cost-effective than testing based on manual outcome prediction. Of course, the validity of such a conclusion depends on the specific context in which the experiment was carried out. However, there are good indications that the results of the experiment are generally applicable to testing algorithmically complex software. / Dissertation (MSc (Computer Science))--University of Pretoria, 2007. / Computer Science / unrestricted
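As a rough illustration of the M-mp idea described in the abstract, the following is a minimal sketch, assuming a Python-style setting: the primary program and M independently written model programs are run on the same inputs, and any disagreement is flagged for human arbitration. All names (m_mp_test, the lambda "implementations", the toy scheduling inputs) are hypothetical placeholders, not artefacts from the dissertation's case study.

    from typing import Callable, Iterable, List, Tuple


    def m_mp_test(primary: Callable, models: List[Callable], inputs: Iterable) -> List[Tuple]:
        """Return the inputs on which the primary and model programs disagree."""
        disagreements = []
        for x in inputs:
            outputs = [primary(x)] + [m(x) for m in models]
            # Agreement does not prove correctness (all programs could be wrong
            # in the same way), but a disagreement always signals either a
            # software fault or a specification defect for a human to arbitrate.
            if any(o != outputs[0] for o in outputs[1:]):
                disagreements.append((x, outputs))
        return disagreements


    # Hypothetical usage with two diverse implementations of the same toy
    # specification: order jobs by ascending duration (the actual case study
    # concerned a far more complex industrial scheduling algorithm).
    primary = lambda jobs: sorted(jobs)                                  # program under test
    model = lambda jobs: list(reversed(sorted(jobs, reverse=True)))      # diverse model program

    if __name__ == "__main__":
        for case, outs in m_mp_test(primary, [model], [[3, 1, 2], [5, 4]]):
            print("Disagreement on", case, "->", outs)

In this sketch, human effort is confined to examining the printed disagreements, which mirrors the abstract's claim that the M-mp process can be automated to a very large degree.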
Identifier | oai:union.ndltd.org:netd.ac.za/oai:union.ndltd.org:up/oai:repository.up.ac.za:2263/30583
Date | 23 February 2007
Creators | Manolache, Liviu-Iulian |
Contributors | Prof D G Kourie, upetd@up.ac.za |
Source Sets | South African National ETD Portal |
Detected Language | English |
Type | Dissertation |
Rights | © 2000, University of Pretoria. All rights reserved. The copyright in this work vests in the University of Pretoria. No part of this work may be reproduced or transmitted in any form or by any means, without the prior written permission of the University of Pretoria. |