11. On proportional sampling strategies in software testing / 奚永忻 (Hsi, Yung-shing, Paul). January 2001 (has links)
published_or_final_version / Computer Science and Information Systems / Master / Master of Philosophy
12. Strong mutation testing strategies / Duncan, Ishbel M. M. January 1993 (has links)
Mutation Testing (or Mutation Analysis) is a source code testing technique which analyses code by altering code components. The output from the altered code is compared with output from the original code. If the two are identical, then Mutation Testing has been successful in discerning a weakness in either the test code or the test data. A mutation test therefore helps the tester to develop a program devoid of simple faults, with a well-developed test data set. The confidence in both program and data set is then increased. Mutation Analysis is resource intensive: it requires program copies, each with one altered component, to be created and executed. Consequently, it has been used mainly by academics analysing small programs. This thesis describes an experiment to apply Mutation Analysis to larger, multi-function test programs. Mutations, alterations to the code, are induced using a sequence derived from the code's control flow graph. The detection rate of live mutants, programs whose output matches the original, was plotted and compared against data generated from the standard technique of mutating in statement order. This experiment was repeated for different code components such as relational operators, conditional statements, or pointer references. A test was considered efficient if the majority of live mutants were detected early in the test sequence. The investigations demonstrated that control-flow-driven mutation could improve the efficiency of a test. However, the experiments also indicated that concentrations of live mutants in a few functions or statements could affect the efficiency of a test. This conclusion led to the proposal that mutation testing should be directed towards functions or statements containing groupings of the code component that give rise to the live mutants. This effectively forms a test focused on particular functions or statements.
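As an illustration of the core loop this abstract describes (not the thesis's own tooling), here is a minimal Python sketch that generates a single relational-operator mutant of a toy function and checks whether a test set kills it; the subject function, the mutant choice, and the test data are all invented for the example.

```python
import ast

# Toy subject program: classifies values against a threshold.
SOURCE = """
def classify(x, limit):
    if x > limit:
        return "big"
    return "small"
"""

class RelOpMutator(ast.NodeTransformer):
    """Produce one mutant by replacing the first '>' with '>='."""
    def __init__(self):
        self.done = False

    def visit_Compare(self, node):
        if not self.done and isinstance(node.ops[0], ast.Gt):
            node.ops[0] = ast.GtE()
            self.done = True
        return node

def load(tree):
    ns = {}
    exec(compile(tree, "<subject>", "exec"), ns)
    return ns["classify"]

original = load(ast.parse(SOURCE))
mutant_tree = RelOpMutator().visit(ast.parse(SOURCE))
ast.fix_missing_locations(mutant_tree)
mutant = load(mutant_tree)

# A mutant is "killed" when some test input makes its output differ from
# the original's; a surviving ("live") mutant signals weak test data.
tests = [(5, 3), (1, 3)]  # deliberately misses the x == limit boundary
killed = any(original(x, l) != mutant(x, l) for x, l in tests)
print("killed" if killed else "live")  # "live": add the boundary case (3, 3)
```

Adding the boundary input (3, 3) makes the two versions disagree and kills the mutant, which is exactly the kind of test-data improvement the technique is meant to drive.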
13. A trade-off model between cost and reliability during the design phase of software development / Burnett, Robert Carlisle. January 1995 (has links)
This work proposes a method for estimating the development cost of a software system with modular structure, taking into account the target level of reliability for that system. The required reliability of each individual module is set in order to meet the overall required reliability of the system. Consequently, the individual cost estimates for each module, and the overall cost of the software system, are linked to the overall required reliability. Cost estimation is carried out during the early design phase, that is, well in advance of any detailed development. Where a satisfactory compromise between cost and reliability is feasible, this will enable a project manager to plan the allocation of resources to the implementation and testing phases so that the estimated total system cost does not exceed the project budget and the estimated system reliability matches the required target. The line of argument developed here is that the operational reliability of a software module can be linked to the effort spent during the testing phase: a higher level of desired reliability will require more testing effort and will therefore cost more. A method is developed which enables us to estimate the cost of development based on an estimate of the number of faults to be found and fixed in order to achieve the required reliability, using data obtained from the requirements specification and historical data. Using Markov analysis, a method is proposed for allocating an appropriate reliability requirement to each module of a modular software system. A formula to calculate an estimate of the overall system reliability is established. Using this formula, a procedure to allocate the reliability requirement for each module is derived using a minimization process, which takes into account the stipulated overall required level of reliability. This procedure allows us to construct scenarios for cost against the overall required reliability. The foremost application of the outcome of this work is to establish a basis for a trade-off model between cost and reliability during the design phase of the development of a modular software system. The proposed model is easy to understand and suitable for use by a project manager.
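The abstract does not reproduce the thesis's reliability formula, so the sketch below assumes a common Markov-based approximation in which overall reliability is the product of module reliabilities raised to their expected execution counts. The module names, visit counts, and the equal-per-execution allocation rule are illustrative assumptions, not the thesis's actual procedure.

```python
import math

# Assumed inputs: expected executions of each module per run (from a
# Markov usage model) and the stipulated overall reliability target.
visits = {"parser": 4.0, "scheduler": 1.5, "reporter": 1.0}
target = 0.99

# Assumed serial model: R_system = prod(r_m ** visits[m]). Equalising the
# per-execution reliability across modules gives a closed-form allocation.
per_exec = target ** (1.0 / sum(visits.values()))
allocation = {m: per_exec ** v for m, v in visits.items()}

print({m: round(r, 5) for m, r in allocation.items()})
print(round(math.prod(allocation.values()), 5))  # recovers the 0.99 target
```

Heavily visited modules receive the strictest requirements (the parser here must reach roughly 0.9938 while the reporter needs only about 0.9985), which is the intuition behind tying per-module testing effort, and hence cost, to the system-level target.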
14. Automatic generation of software test cases from formal specifications / Meudec, Christophe. January 1998 (has links)
No description available.
15. Evaluation of Jupiter, a lightweight code review framework / Yamashita, Takuya. January 2006 (has links)
Thesis (M.S.)--University of Hawaii at Manoa, 2006. / Includes bibliographical references (leaves 96-97). / x, 97 leaves, bound ill. 29 cm
16. A Similarity-based Test Case Quality Metric using Historical Failure Data / Noor, Tanzeem Bin. January 2015 (has links)
A test case is a set of input data and expected output, designed to verify whether the system under test satisfies all requirements and works correctly. An effective test case reveals a fault when the actual output differs from the expected output (i.e., the test case fails). The effectiveness of test cases is estimated using quality metrics, such as code coverage, size, and historical fault detection. Prior studies have shown that previously failing test cases are highly likely to fail again in the next releases; therefore, they are ranked higher when tests are prioritized. In practice, however, a failing test case may not be exactly the same as a previously failed test case, but merely similar to one. In this thesis, I have defined a metric that estimates test case quality using its similarity to previously failing test cases. Moreover, I have evaluated the effectiveness of the proposed test quality metric through a detailed empirical study. / February 2016
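The abstract leaves the similarity function unspecified, so the sketch below is only one plausible shape for such a metric: it assumes test cases are represented as token sets and scores each test by its Jaccard similarity to the nearest previously failing test. All names and data are hypothetical.

```python
def jaccard(a: set, b: set) -> float:
    """Set similarity in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def quality_score(test_tokens, past_failing):
    """Assumed metric shape: a test inherits quality from its closest
    previously failing neighbour, rather than requiring an exact match."""
    return max((jaccard(test_tokens, f) for f in past_failing), default=0.0)

past_failures = [{"login", "empty_password"}, {"upload", "zero_bytes"}]
suite = {
    "t1": {"login", "empty_password", "unicode_name"},  # near a past failure
    "t2": {"report", "pdf_export"},                     # unlike any failure
}
ranked = sorted(suite, key=lambda t: quality_score(suite[t], past_failures),
                reverse=True)
print(ranked)  # ['t1', 't2']: t1 resembles a known failure, so it runs first
```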
17. A testbed for embedded systems / Burgess, Peter. January 1994 (has links)
Testing and debugging are often the most difficult phases of software development. This is especially true of embedded systems, which are usually concurrent, have real-time performance and correctness constraints, and execute in the field in an environment which may not permit internal scrutiny of the software's behaviour. Although good software engineering practices help, they will never eliminate the need for testing and debugging, because failings in the specification and design are often only discovered through testing, and understanding these failings, and how to correct them, comes from debugging. These observations suggest that embedded software should be designed in a way which makes testing and debugging easier, and that tools which support these activities are required. Due to the often hostile environment in which the finished embedded system will function, it is necessary to have a platform which allows the software to be developed and tested "in vitro". The Testbed system achieves these goals by providing dynamic modification and process migration facilities for use during development, as well as powerful monitoring and background debugging support. These facilities are built on a basic run-time harness supporting an event-driven programming model with a global communication mechanism. This programming model is well suited to the reactive nature of embedded systems. The main research contributions of this work are in the areas of finding deadlock-free, path-optimal routings for networks and of dynamic modification with automated conversion of data which may include pointers.
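As a rough illustration of the event-driven programming model with global communication that the abstract mentions, here is a toy Python harness. It is an assumption-laden sketch, not the Testbed system itself, which additionally supports dynamic modification, process migration, and background debugging.

```python
from collections import defaultdict, deque

class Harness:
    """Minimal event-driven run-time: processes register handlers for
    named events and communicate only by raising further events."""
    def __init__(self):
        self.handlers = defaultdict(list)
        self.queue = deque()

    def on(self, event, handler):
        self.handlers[event].append(handler)

    def raise_event(self, event, payload=None):
        self.queue.append((event, payload))  # global communication channel

    def run(self):
        while self.queue:
            event, payload = self.queue.popleft()
            for handler in self.handlers[event]:
                handler(payload)

hw = Harness()
hw.on("sensor.read", lambda v: hw.raise_event("alarm", v) if v > 70 else None)
hw.on("alarm", lambda v: print(f"over-temperature: {v}"))
hw.raise_event("sensor.read", 82)
hw.run()  # -> over-temperature: 82
```

Because every interaction passes through the event queue, a monitor can observe or replay traffic without touching the application processes, which is why this style of model suits testing reactive embedded software.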
18. Testing algorithmically complex software using model programs / Manolache, Liviu-Iulian. 23 February 2007 (has links)
This dissertation examines, based on a case study, the feasibility of using model programs as a practical solution to the oracle problem in software testing. The case study pertains especially to testing algorithmically complex software, and it evaluates the approach proposed in this dissertation against testing that is based on manual outcome prediction. In essence, the experiment entailed developing a model program for testing a medium-size industrial application that implements a complex scheduling algorithm. One of the most difficult tasks in software testing is to adjudicate on whether a program passed or failed a test. Because that usually requires "predicting" the correct program outcome, the problem of devising a mechanism for correctness checking (i.e., a "test oracle") is usually referred to as the "oracle problem". In practice, the most direct solution to the oracle problem is to pre-calculate the expected program outcomes manually. However, especially for algorithmically complex software, that is usually time-consuming and error-prone. Although alternatives to the manual approach have been suggested in the testing literature, only a few formal experiments have been conducted to evaluate them. A potential alternative to manual outcome prediction, which is evaluated in this dissertation, is to write one or more model programs that conform to the same functional specification (or parts of that specification) as the primary program (i.e., the software to be delivered). Subjected to the same input, the programs should produce identical outputs. Disagreements indicate either the presence of software faults or specification defects. The absence of disagreements does not guarantee the correctness of the results, since the programs may erroneously agree on outputs. However, if the test data is adequate and the implementations are diverse, it is unlikely that the programs will consistently fail and still reach agreement. This testing approach is based on a principle that is applied primarily in software fault-tolerance: "N-version diversity". In this dissertation, the approach is called "testing using M model programs" or, in short, "M-mp testing". The advantage of M-mp testing is that the programs, together, constitute an approximate, but continuously perfecting, test oracle. Human assistance is required only to analyse and arbitrate program disagreements. Consequently, the testing process can be automated to a very large degree. The main disadvantage of the approach is the extra effort required for constructing and maintaining the model programs. The case study presented in this dissertation provides prima facie evidence to suggest that the M-mp approach may be more cost-effective than testing based on manual outcome prediction. Of course, the validity of such a conclusion depends upon the specific context in which the experiment was carried out. However, there are good indications that the results of the experiment are generally applicable to testing algorithmically complex software. / Dissertation (MSc (Computer Science))--University of Pretoria, 2007. / Computer Science / unrestricted
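A stripped-down sketch of the M-mp idea follows, assuming a toy scheduling specification: two independently written implementations run back-to-back on generated inputs, and any disagreement is set aside for manual arbitration. Both implementations and the input generator are invented for the example; the dissertation's actual subject was an industrial scheduler.

```python
import random

def primary_schedule(jobs):
    """Stand-in for the primary program: shortest-job-first via sorting."""
    return sorted(jobs, key=lambda j: (j["duration"], j["id"]))

def model_schedule(jobs):
    """Model program for the same specification, deliberately written
    differently: repeated minimum extraction instead of a sort."""
    remaining, out = list(jobs), []
    while remaining:
        nxt = min(remaining, key=lambda j: (j["duration"], j["id"]))
        remaining.remove(nxt)
        out.append(nxt)
    return out

def mmp_test(n_cases=1000, seed=7):
    rng = random.Random(seed)
    disagreements = []
    for _ in range(n_cases):
        jobs = [{"id": i, "duration": rng.randint(1, 9)} for i in range(8)]
        if primary_schedule(jobs) != model_schedule(jobs):
            disagreements.append(jobs)  # arbitrated by hand, as in the thesis
    return disagreements

print(len(mmp_test()))  # 0: the two implementations agree on all cases
```

The comparison loop is fully automatic; human effort is spent only on the (here empty) disagreement list, which is the source of the approach's cost advantage over manually predicting each expected outcome.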
19. Message from A-MOST 2020 Chairs / Hierons, Rob; Nunez, Manuel; Pretschner, Alexander; Lefticaru, Raluca. 08 December 2021 (has links)
Welcome to the 16th edition of the Advances in Model-Based Testing Workshop (A-MOST 2020), held on March 23rd, 2020 in Porto as part of ICST 2020, the IEEE International Conference on Software Testing, Verification and Validation.
20. Exploring the impact of test suite granularity and test grouping technique on the cost-effectiveness of regression testing / Qiu, Xuemei. 05 December 2002 (has links)
Regression testing is an expensive testing process used to validate changes made to previously tested software. Different regression testing techniques can have different impacts on the cost-effectiveness of testing. This cost-effectiveness can also vary with different characteristics of test suites. One such characteristic, test suite granularity, reflects the way in which test cases are organized within a test suite; another characteristic, test grouping technique, involves the way in which the test inputs are grouped into test cases. Various cost-benefit tradeoffs have been attributed to choices of test suite granularity and test grouping technique, but little research has formally examined these tradeoffs. In this thesis, we conducted several controlled experiments, examining the effects of test suite granularity and test grouping technique on the costs and benefits of several regression testing methodologies across ten releases of a non-trivial software system, empire. Our results expose essential tradeoffs to consider when designing test suites for use in regression testing evolving systems. / Graduation date: 2003
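To make the two characteristics concrete, the sketch below regroups the same pool of test inputs into test cases of varying granularity and tallies a toy cost model; the cost constants are assumptions for illustration, not figures from the thesis.

```python
def group(inputs, granularity):
    """Regroup the same test inputs into test cases of a given size.
    granularity=1 yields a fine suite; larger values yield coarser suites."""
    return [inputs[i:i + granularity] for i in range(0, len(inputs), granularity)]

inputs = list(range(12))           # stand-in test inputs
SETUP_COST, RUN_COST = 5.0, 1.0    # assumed per-test-case / per-input costs

for g in (1, 4, 12):
    suite = group(inputs, g)
    cost = len(suite) * SETUP_COST + len(inputs) * RUN_COST
    print(f"granularity {g:2d}: {len(suite):2d} test cases, cost {cost}")

# Coarser suites amortise per-test-case setup but blur which input exposed
# a failure; that tension is the tradeoff the thesis measures empirically.
```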