71

Empirical study - pairwise prediction of fault based on coverage

Shamasunder, Shalini 14 June 2012
Researchers and engineers in software testing have valued coverage as a testing metric for decades, and various empirical results have shown that as coverage increases, the ability of a test suite to detect faults also increases. As a result, numerous coverage criteria have been introduced. Which coverage criteria correlate better with fault detection, and which correlate worse? In other words, is it more useful to achieve a higher percentage of one kind of coverage (say, c1) than of another (c2) to obtain a good fault detection rate? Do the popular block and branch coverage criteria perform better, or does path coverage outperform them? Answering these questions will help future engineers and researchers generate more efficient test suites and obtain a better metric of measurement; it also helps in test suite minimization. This thesis studies the relationship between coverage and mutant kill rates over large, randomly generated test suites for statement, branch, predicate, and path coverage of two realistic programs, in order to answer these open questions. The experiments both confirm conventional wisdom about these coverage criteria and contain a few surprises. / Graduation date: 2013
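The core measurement in such a study can be sketched briefly. Below is a minimal, self-contained toy model (its target counts, suite sizes, and random test behavior are illustrative assumptions, not the thesis's subjects or data) showing how suite coverage and mutant kill rate would be correlated across many random suites:

```python
import random
from statistics import correlation  # Pearson's r; Python 3.10+

# Toy model: each test covers a random subset of coverage targets and
# kills a random subset of mutants; we correlate coverage with kill rate.
N_TARGETS, N_MUTANTS, N_SUITES = 200, 500, 100
random.seed(1)

def random_test():
    covered = frozenset(random.sample(range(N_TARGETS), random.randint(5, 40)))
    killed = frozenset(random.sample(range(N_MUTANTS), random.randint(0, 25)))
    return covered, killed

coverages, kill_rates = [], []
for _ in range(N_SUITES):
    suite = [random_test() for _ in range(random.randint(1, 50))]
    covered = set().union(*(c for c, _ in suite))
    killed = set().union(*(k for _, k in suite))
    coverages.append(len(covered) / N_TARGETS)
    kill_rates.append(len(killed) / N_MUTANTS)

print(f"Pearson r(coverage, kill rate) = {correlation(coverages, kill_rates):.3f}")
```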
72

Applying Resource Usage Analysis to Software Testing

Liu, Wei-Cheng 02 August 2007
As software and network environments develop, software becomes more and more complex, and network attacks that exploit software vulnerabilities pose a severe challenge to traditional software testing. According to CSI/FBI reports, losses caused by Denial-of-Service attacks have ranked among the top five network attacks for the past three years. Besides consuming network bandwidth, the most common attack is to exploit software vulnerabilities. In my research, I found that traditional testing techniques cannot find software vulnerabilities efficiently because they only verify the correctness of the software; this mindset overlooks many vulnerabilities, such as memory leaks, that are not logical errors. Some testing techniques targeting resource usage vulnerabilities have been proposed in recent years, but their results are still primitive. I therefore redefine software testing from the perspective of resource usage analysis and propose three test criteria in this paper. Testers can combine these criteria with existing tools as a guide for testing a program's resource usage, and thereby find unhealthy uses of software resources.
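One plausible criterion of this kind can be sketched in a few lines. The thesis's three criteria are not reproduced here; this analogue is an assumption that captures the flavor: every resource acquired along a tested path must be released by the end of the test.

```python
# Sketch of a resource-usage test criterion (illustrative, not the
# thesis's): no acquired resource may remain unreleased at test end.
class ResourceLog:
    def __init__(self):
        self.open = set()
    def acquire(self, rid):
        self.open.add(rid)
    def release(self, rid):
        self.open.discard(rid)

def run_test(test_fn):
    log = ResourceLog()
    test_fn(log)
    # The criterion under test: the path must leave nothing unreleased.
    assert not log.open, f"unreleased resources: {log.open}"

def good_path(log):
    log.acquire("file:1")
    log.release("file:1")

def bad_path(log):
    log.acquire("sock:9")   # acquired but never released on this path

run_test(good_path)   # passes silently
run_test(bad_path)    # raises AssertionError: unreleased resources
```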
73

Design-time performance testing

Hopkins, Ian Keith 01 April 2009
Software designers choose between alternative approaches early in the development of a software application, and these decisions can be difficult to change later. Designers make them based on estimates of how each alternative affects software qualities. One quality that is difficult to predict is performance, that is, the efficient use of resources in the system. It is particularly challenging to estimate the performance of large, interconnected software systems composed of components. With the proliferation of class libraries, middleware systems, web services, and third-party components, many software projects rely on third-party services to meet their requirements, and choosing between services often involves considering both their functionality and their performance. To help software developers compare their designs and third-party services, I propose using performance prototypes of the alternatives, together with test suites, to estimate performance trade-offs early in the development cycle, a process called Design-Time Performance Testing (DTPT).

Providing software designers with performance evidence based on prototypes allows them to make informed decisions about performance trade-offs. To show how DTPT can inform real design decisions, this thesis presents a process for DTPT, a framework implementation written in Java, and experiments that verify and validate the process and implementation. The framework assists in designing, running, and documenting performance test suites, allowing designers to make accurate comparisons between alternative approaches; performance metrics are captured by instrumenting and running the prototypes.

This thesis describes the process and framework for gathering software performance estimates at design time using prototypes and test suites.
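The comparison a DTPT framework automates can be sketched as follows. This Python stand-in is an assumption (the thesis's framework is written in Java): run one test suite against prototypes of each design alternative and compare measured timings instead of guessing.

```python
import time
import statistics

def benchmark(prototype, test_suite, repeats=5):
    """Sum of per-test median timings; the median damps timer noise."""
    total = 0.0
    for args in test_suite:
        samples = []
        for _ in range(repeats):
            start = time.perf_counter()
            prototype(*args)
            samples.append(time.perf_counter() - start)
        total += statistics.median(samples)
    return total

# Two alternative designs for the same requirement: membership lookup.
data_list = list(range(100_000))
data_set = set(data_list)
def list_design(probe): return probe in data_list   # linear scan
def set_design(probe):  return probe in data_set    # hash lookup

suite = [(99_999,)] * 20   # worst-case probes as the test suite
print("list prototype:", benchmark(list_design, suite))
print("set prototype: ", benchmark(set_design, suite))
```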
75

The Creation and Application of Software Testing Institution: with A Case Study of MES Application in Semiconductor Manufacturing Environment

Fan, Hui-Lin 21 January 2006
Even though software testing consumes more than 40% of the total development cost, along with a massive amount of effort and resources, it is nevertheless the least respected part of many software projects compared with design and development. The challenge grows as software becomes more complicated, and it is further compounded by a lack of appropriate attention and suitable resource allocation. How software testing can more effectively safeguard software quality has therefore become a global concern. Software testing techniques have evolved for decades and have almost reached maturity; when software testing fails, it is mostly because management does not give it enough respect. Creating a software testing institution is therefore necessary to put sufficient control on the process and to establish regulations for implementation. This research employs software testing standards, institutional theory, and control theory to derive an ideal software testing institution, and uses a case study to validate it. Software testing theories define a testing process divided into planning, design, and execution phases; institutional theory creates regulations as a basis for implementation; and control theory empowers control mechanisms on testing to ensure progress complies with the final goals. The ideal software testing institution provided by this research is appropriate for joint-development outsourcing projects. When both customer and vendor are involved in testing, it is recommended to define separate test plans with a consistent schedule to prevent idle resources and inconsistency between software and documentation. Since both parties produce source code and documentation, it is also recommended to define a working model and version-control rules as a basis for cooperation. Finally, employing configuration management can avoid unnecessary conflicts and confusion.
76

A hybrid genetic algorithm for automatic test data generation

Wang, Hsiu-Chi 13 July 2006
Automatic test data generation is a hot topic in recent software testing research, and various techniques have been proposed with different emphases. Most of them are based on Genetic Algorithms (GA), yet whether GA is the best metaheuristic for this problem remains unclear. In this paper, we choose another approach that arms a GA with an intensive local searcher (a Memetic Algorithm (MA), in recent terminology). The idea of incorporating a local searcher is based on observations from many real-world programs. The results turn out to outperform many other known metaheuristic methods so far, and we argue for the need for local search in software testing in the paper's discussion.
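A minimal memetic-algorithm sketch in the abstract's spirit appears below: a GA whose offspring are refined by an intensive local search. The target program (covering the branch `if x == TARGET:`), the branch-distance fitness, and all parameters are illustrative assumptions, not the thesis's experimental subjects.

```python
import random

TARGET = 4242

def fitness(x):
    return abs(x - TARGET)   # branch distance: 0 means the branch is covered

def local_search(x):
    # Hill climbing: the "intensive local searcher" bolted onto the GA.
    for step in (100, 10, 1):
        improved = True
        while improved:
            improved = False
            for cand in (x - step, x + step):
                if fitness(cand) < fitness(x):
                    x, improved = cand, True
    return x

def memetic(pop_size=20, gens=30):
    pop = [random.randint(0, 10_000) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2 + random.randint(-50, 50)  # crossover + mutation
            children.append(local_search(child))            # the memetic step
        pop = parents + children
        if fitness(pop[0]) == 0:
            break
    return min(pop, key=fitness)

print(memetic())   # prints 4242, the branch-covering input
```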
77

An Automated Method for Resource Testing

Chen, Po-Kai 27 July 2006
This thesis introduces a method that combines automated test data generation techniques with high volume testing and resource monitoring. High volume testing repeats test cases many times, simulating extended execution intervals. These testing techniques have been found useful for uncovering errors resulting from component coordination problems, as well as system resource consumption (e.g. memory leaks) or corruption. Coupling automated test data generation with high volume testing and resource monitoring could make this approach more scalable and effective in the field.
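High-volume resource testing of this kind can be sketched as follows: repeat a test case many times while sampling traced memory, so slow leaks that a single run would hide become visible. The function names and the leaky example are assumptions for illustration.

```python
import tracemalloc

def high_volume_run(test_case, repetitions=10_000, sample_every=1_000):
    """Repeat a test while sampling traced memory; flag monotonic growth."""
    tracemalloc.start()
    samples = []
    for i in range(1, repetitions + 1):
        test_case()
        if i % sample_every == 0:
            current, _peak = tracemalloc.get_traced_memory()
            samples.append(current)
    tracemalloc.stop()
    # Strictly monotonic growth across samples suggests a leak, not noise.
    leaking = all(b > a for a, b in zip(samples, samples[1:]))
    return samples, leaking

cache = {}
def leaky_test():
    cache[len(cache)] = "x" * 100   # one retained entry per run: a leak

samples, leaking = high_volume_run(leaky_test)
print("leak suspected:", leaking)   # True for this deliberately leaky test
```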
78

Empirical study on strategy for Regression Testing

Hsu, Pai-Hung 03 August 2006
Software testing plays a necessary role in software development and maintenance and is performed to support quality assurance. It is very common for test engineers to design test suites for their programs manually, but designing test data manually is an expensive and labor-intensive process. For this reason, how to generate software test data automatically has become a hot issue, and most research uses metaheuristic search methods such as genetic algorithms or simulated annealing to obtain test data. In most circumstances, test engineers generate a test suite when they have a new program; when they later debug or change some code, they design yet another new test suite to test it, and almost nobody keeps the original test data and reuses it. In this research, we discuss whether it is useful to store and reuse the original test data.
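The reuse question can be made concrete with a short sketch (the file name, programs, and inputs are illustrative assumptions): store the test data generated for version 1, then replay it against version 2 and diff the outputs.

```python
import json

def program_v1(x): return x * 2
def program_v2(x): return x * 2 if x >= 0 else 0   # changed behavior

def save_suite(inputs, path="suite.json"):
    # Record each input with the output of the original version.
    expected = {str(x): program_v1(x) for x in inputs}
    with open(path, "w") as f:
        json.dump(expected, f)

def replay_suite(program, path="suite.json"):
    # Reuse the stored suite: report inputs whose output changed.
    with open(path) as f:
        expected = json.load(f)
    return [x for x, want in expected.items() if program(int(x)) != want]

save_suite([-3, -1, 0, 2, 5])
print("regressions at inputs:", replay_suite(program_v2))  # ['-3', '-1']
```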
79

A New Combinatorial Strategy to Black-box Testing with Constraints

Tsai, Tsung-Han 23 July 2007
In recent years, many scholars have tried to generate test sets for combinatorial strategies automatically, but these combinatorial algorithms do not consider conflicts in the input parameter model. A conflict exists when the result of combining two or more values of different parameters does not make sense; invalid sub-combinations may then be included in test cases in the test suite, and these are useless. Moreover, these algorithms generate all test cases at once; in other words, they cannot use the test cases generated so far as feedback to revise the algorithm, so useless combinations are easily produced. This paper therefore proposes a new test generation algorithm for combinatorial testing based on constraint satisfaction problems (CSP), which keeps invalid sub-combinations out of the test cases and lets constraints be added flexibly during generation to avoid useless or repeated combinations. The experimental results indicate that our algorithm performs well with respect to the time required for test generation, and that it generates conflict-free test cases directly.
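A greedy sketch of constraint-aware pairwise generation in this spirit follows (the paper's CSP algorithm is not reproduced; the parameters and constraints are illustrative assumptions). Constraints prune invalid combinations while tests are built, not after.

```python
from itertools import combinations, product

parameters = {
    "os": ["linux", "windows", "mac"],
    "browser": ["firefox", "safari", "edge"],
    "arch": ["x86", "arm"],
}
constraints = [  # invalid sub-combinations to keep out of every test case
    lambda t: not (t.get("os") == "linux" and t.get("browser") == "safari"),
    lambda t: not (t.get("os") == "mac" and t.get("browser") == "edge"),
]
names = list(parameters)

def valid(t):
    return all(c(t) for c in constraints)

def pairs(t):
    return {frozenset([(a, t[a]), (b, t[b])]) for a, b in combinations(names, 2)}

# Goal: every pair of parameter values that some *valid* test can contain.
goal = {frozenset([(a, x), (b, y)])
        for a, b in combinations(names, 2)
        for x, y in product(parameters[a], parameters[b])
        if valid({a: x, b: y})}

valid_tests = [t for t in
               (dict(zip(names, vals)) for vals in product(*parameters.values()))
               if valid(t)]

tests = []
while goal:   # greedily pick the conflict-free test covering most new pairs
    best = max(valid_tests, key=lambda t: len(goal & pairs(t)))
    goal -= pairs(best)
    tests.append(best)

print(len(tests), "conflict-free tests cover all achievable pairs:")
for t in tests:
    print(t)
```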
80

A Flexible Combinatorial Strategy based on Constraint Satisfaction Problem

Li, Cheng-Hsuan 23 August 2009
In recent years, in the research field of combinatorial testing, which can roughly be divided into pair-wise coverage and multi-wise coverage, many scholars have tried various strategies to generate test data automatically. To weigh a generated test set, the test data must satisfy certain criteria; however, existing combinatorial strategies neglect flexibility in practice. In practice, software testing is often restricted by cost, so how to obtain the greatest testing benefit under a limited cost must be considered; yet existing combinatorial strategies offer no flexible use, meaning the whole test set must be executed. There is therefore a severe restriction on using test data generated by existing combinatorial strategies in practice. This paper proposes a flexible combinatorial strategy based on CSP, which allows users to do the most valuable testing under a limited cost, to add the constraints they need at any time during the testing process, and to revise the generated test data dynamically. The experimental results indicate that our method performs well: it can avoid including test data that users consider less valuable or unnecessary, achieving the greatest testing benefit while also reducing the quantity of test data.
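The cost-bounded flexibility argued for here can be sketched with a greedy stand-in (not the paper's CSP strategy; candidate tests and execution costs are illustrative): rank candidate tests by new pairs covered per unit cost and stop once the budget is exhausted.

```python
from itertools import combinations

def pairs_of(test):
    return set(combinations(sorted(test.items()), 2))

candidates = [
    ({"os": "linux", "browser": "firefox", "arch": "x86"}, 3.0),
    ({"os": "windows", "browser": "edge", "arch": "arm"}, 2.0),
    ({"os": "mac", "browser": "firefox", "arch": "arm"}, 4.0),
    ({"os": "windows", "browser": "firefox", "arch": "x86"}, 1.0),
]   # (test case, execution cost)

def select(candidates, budget):
    # Greedy: most new pairs per unit cost first, within the budget.
    chosen, covered, remaining = [], set(), budget
    pool = list(candidates)
    while pool:
        test, cost = max(pool,
                         key=lambda tc: len(pairs_of(tc[0]) - covered) / tc[1])
        pool.remove((test, cost))
        if cost <= remaining and pairs_of(test) - covered:
            chosen.append(test)
            covered |= pairs_of(test)
            remaining -= cost
    return chosen, remaining

chosen, left = select(candidates, budget=5.0)
print(f"ran {len(chosen)} tests, budget left: {left}")
```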
