About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Budget-sensitive testing and analysis strategies and their applications to concurrent and service-based systems

Zhai, Ke, 翟可 January 2013 (has links)
Software testing is the most widely practiced approach to assuring the correctness of programs. Despite decades of research progress, software testing is still considered resource-demanding and time-consuming. In the past decade, the wide adoption of multithreaded programs and the service-based architecture has further aggravated the problem. In this thesis, we study issues in software testing where resource constraints (such as time spent and memory allocated) are important considerations, and we seek testing techniques that are significantly more effective and efficient under limited resource quotas, which we refer to as budgets. Our focus is on two types of systems: concurrent systems and service-based systems. A concurrent system is a computing system in which programs are designed as collections of interacting, parallel computational processes. Unfortunately, concurrent programs are notoriously difficult to write and test: various concurrency bugs persist even in heavily tested programs. Worse still, detecting concurrency bugs is expensive, a problem especially pronounced for dynamic detection techniques that target high precision. This thesis proposes a dynamic sampling framework, CARISMA, to reduce this overhead dramatically while largely preserving bug detection capability. To achieve this goal, CARISMA intelligently allocates a limited computation budget through sampling. The core of CARISMA is a budget estimation and allocation framework whose correctness has been proven mathematically. Another source of cost is the nondeterministic nature of concurrent systems. A common practice for testing a concurrent system is stress testing, where the system is executed with a large number of test cases to achieve high coverage of the execution space. Stress testing is inherently costly.
To this end, it is critical that bug detection in each execution be effective, which demands a powerful test oracle. This thesis proposes such a test oracle, OLIN, which reports anomalies in the concurrent executions of programs. OLIN finds concurrency bugs that are consistently missed by previous techniques, incurs low overhead, and achieves higher effectiveness within given time and computation budgets. Service-based systems are composed of loosely coupled, independent functional units and are often highly concurrent and distributed; they have flourished in recent decades. Because service-based systems are highly dynamic, regression testing techniques are applied to ensure that previously established functionality and correctness are not adversely affected by subsequent evolution. However, regression testing is expensive, and this thesis focuses on prioritizing regression test cases to improve the effectiveness of testing within predefined constraints. It proposes a family of prioritization metrics for regression testing of location-based services and presents a case study evaluating their performance. In conclusion, this thesis makes the following contributions to software testing and analysis: (1) a dynamic sampling framework for concurrency bug detection, (2) a test oracle for concurrent testing, and (3) a family of test case prioritization techniques for service-based systems. These contributions significantly improve the effectiveness and efficiency of resource utilization in software testing. / published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
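The abstract does not reproduce CARISMA's budget estimation and allocation framework. As a rough, hypothetical illustration of the general idea of budget-sensitive sampling (the function names, weights, and sampling rule below are invented for this sketch, not taken from the thesis), a fixed checking budget can be split across program sites in proportion to their estimated bug-revealing potential, and events at each site sampled accordingly:

```python
import random

def allocate_budget(site_weights, total_budget):
    """Split a fixed checking budget across program sites in
    proportion to their estimated bug-revealing potential."""
    total_weight = sum(site_weights.values())
    return {site: total_budget * w / total_weight
            for site, w in site_weights.items()}

def should_check(site, allocation, events_seen):
    """Sample an event at `site` with a probability derived from its
    budget share and the number of events observed there so far."""
    rate = allocation[site] / max(events_seen.get(site, 0), 1)
    return random.random() < min(rate, 1.0)
```

For example, with weights {"lockA": 3, "lockB": 1} and a budget of 100 checks, `allocate_budget` assigns 75 checks to lockA and 25 to lockB; sites believed more likely to reveal bugs consume more of the budget.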

A trade-off model between cost and reliability during the design phase of software development

Burnett, Robert Carlisle January 1995 (has links)
This work proposes a method for estimating the development cost of a software system with a modular structure, taking into account the target level of reliability for that system. The required reliability of each individual module is set so as to meet the overall required reliability of the system; consequently, the individual cost estimates for each module, and the overall cost of the software system, are linked to the overall required reliability. Cost estimation is carried out during the early design phase, that is, well in advance of any detailed development. Where a satisfactory compromise between cost and reliability is feasible, this enables a project manager to plan the allocation of resources to the implementation and testing phases so that the estimated total system cost does not exceed the project budget and the estimated system reliability matches the required target. The line of argument developed here is that the operational reliability of a software module can be linked to the effort spent during the testing phase: a higher level of desired reliability requires more testing effort and therefore costs more. A method is developed which enables us to estimate the cost of development based on an estimate of the number of faults to be found and fixed in order to achieve the required reliability, using data obtained from the requirements specification together with historical data. Using Markov analysis, a method is proposed for allocating an appropriate reliability requirement to each module of a modular software system. A formula for estimating the overall system reliability is established, and from it a procedure for allocating the reliability requirement of each module is derived through a minimization process that takes into account the stipulated overall required level of reliability. This procedure allows us to construct scenarios relating cost to the overall required reliability.
The foremost application of this work is to establish a basis for a trade-off model between cost and reliability during the design phase of the development of a modular software system. The proposed model is easy to understand and suitable for use by a project manager.

Strong mutation testing strategies

Duncan, Ishbel M. M. January 1993 (has links)
Mutation Testing (or Mutation Analysis) is a source code testing technique which analyses code by altering code components. The output from the altered code is compared with the output from the original code; if they are identical, Mutation Testing has succeeded in discerning a weakness in either the test code or the test data. A mutation test therefore helps the tester to develop a program free of simple faults together with a well-developed test data set, increasing confidence in both program and data set. Mutation Analysis is resource-intensive: it requires program copies, each with one altered component, to be created and executed. Consequently, it has been used mainly by academics analysing small programs. This thesis describes an experiment applying Mutation Analysis to larger, multi-function test programs. Mutations, i.e. alterations to the code, are induced using a sequence derived from the code's control flow graph. The detection rate of live mutants, programs whose output matches the original's, was plotted and compared against data generated by the standard technique of mutating in statement order. This experiment was repeated for different code components such as relational operators, conditional statements and pointer references. A test was considered efficient if the majority of live mutants were detected early in the test sequence. The investigations demonstrated that control-flow-driven mutation can improve the efficiency of a test. However, the experiments also indicated that concentrations of live mutants in a few functions or statements could affect the efficiency of a test. This conclusion led to the proposal that mutation testing should be directed towards functions or statements containing groupings of the code components that give rise to live mutants, effectively forming a test focused on particular functions or statements.
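As a minimal illustration of the kill-or-live outcome described above (the function and the test inputs are invented for this sketch), the following mutates a single relational operator and checks whether a test set distinguishes the mutant from the original:

```python
def original(x, y):
    return x < y          # component under test: relational operator "<"

def mutant(x, y):
    return x <= y         # mutation: "<" replaced by "<="

def is_killed(tests):
    """A mutant is killed when some test input makes its output differ
    from the original's; otherwise it stays 'live', exposing a weak
    spot in the test data."""
    return any(original(x, y) != mutant(x, y) for x, y in tests)
```

The test set {(1, 2), (3, 1)} leaves this mutant live, since both versions agree on those inputs; adding the boundary case (2, 2) kills it, strengthening the test data.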

An effective approach for testing program branches and linear code sequences and jumps

Malevris, N. January 1988 (has links)
No description available.

On proportional sampling strategies in software testing

奚永忻, Hsi, Yung-shing, Paul. January 2001 (has links)
published_or_final_version / Computer Science and Information Systems / Master / Master of Philosophy

A study on improving adaptive random testing

Liu, Ning, Lareina, 劉寧 January 2006 (has links)
published_or_final_version / abstract / Computer Science / Master / Master of Philosophy

Studies of different variations of Adaptive Random Testing

Towey, David Peter. January 2006 (has links)
published_or_final_version / abstract / Computer Science / Doctoral / Doctor of Philosophy

Towards a satisfaction relation between CCS specifications and their refinements

Baillie, Elizabeth Jean January 1992 (has links)
No description available.

Assessing the adequacy of test data for object-oriented programs using the mutation method

Kim, Sun-Woo January 2001 (has links)
No description available.

Automated structural test data generation

Cousins, Michael Anthony January 1995 (has links)
No description available.
