31

Interconnect-Driven Layout-Aware Multiple Scan Tree Synthesis Simultaneously for Test Time, Compression and Routing

Huang, Jr-Yang 29 July 2008 (has links)
An interconnect-driven, layout-aware multiple scan tree synthesis methodology is proposed in this paper. Multiple scan trees, also known as a scan forest, greatly reduce test data volume and test application time in SOC testing. However, previous research on scan tree synthesis rarely considered routing length and therefore produced scan trees with excessively long routing paths. The proposed algorithm considers both test compression rate and routing length, and hence produces better results than previously known methods in both respects. In this method, a density-driven dynamic clustering algorithm determines the scan cells in each scan tree, a compatibility-based clique partition algorithm determines the tree topology, and a Voronoi diagram is then used to establish the physical connections. Compared with previous work on scan tree synthesis, the proposed method reduces test data volume by 1.4X to 2.1X, while the reduction in test application time ranges from 15.9X to 24.6X; this large improvement in test application time is mainly due to the multiple scan tree architecture. The final routing structure is also better, with a 1.3X to 3.2X reduction in routing length.
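As a rough illustration of the compatibility idea behind the clique-partition step (a minimal sketch, not the paper's density-driven clustering or Voronoi routing), scan cells whose care bits never conflict across the test set can share a branch of a scan tree and receive the same broadcast value. The test cubes and the greedy grouping below are invented for illustration:

```python
# Illustrative sketch: group scan cells whose care bits never conflict
# across a set of test cubes, so they can share a branch of a scan tree.
# Test cubes use '0', '1', 'X' (don't-care) per scan cell.

def compatible(col_a, col_b):
    """Two scan cells are compatible if no test cube assigns them
    conflicting care bits (one '0', the other '1')."""
    return all(a == 'X' or b == 'X' or a == b for a, b in zip(col_a, col_b))

def greedy_group(test_cubes, num_cells):
    """Greedy stand-in for the clique-partition step: put each cell into
    the first group whose members it is compatible with."""
    columns = [[cube[i] for cube in test_cubes] for i in range(num_cells)]
    groups = []
    for cell in range(num_cells):
        for group in groups:
            if all(compatible(columns[cell], columns[m]) for m in group):
                group.append(cell)
                break
        else:
            groups.append([cell])
    return groups

if __name__ == "__main__":
    cubes = ["1X0X", "X10X", "0XX1"]   # hypothetical test cubes for 4 scan cells
    print(greedy_group(cubes, 4))      # -> [[0, 1], [2, 3]]
```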
32

Techniques for Automatic Generation of Tests from Programs and Specifications

Edvardsson, Jon January 2006 (has links)
Software testing is complex and time consuming. One way to reduce the effort associated with testing is to generate test data automatically. This thesis is divided into three parts. In the first part a mixed-integer constraint solver developed by Gupta et al. is studied. The solver, referred to as the Unified Numerical Approach (UNA), is an important part of their generator and is responsible for solving the equation systems that correspond to the program path currently under test. In this thesis it is shown that, in contrast to traditional optimization methods, the number of iterations the UNA requires is not bounded by the size of the equation system being solved; instead, it depends on how the system is composed. That is, even for very simple systems consisting of a single variable, more than a thousand iterations may easily be needed. It is also shown that the UNA is not complete, that is, it does not always find a mixed-integer solution when one exists. It is found that a better approach is to use a traditional optimization method, such as the simplex method in combination with branch-and-bound and/or a cutting-plane algorithm, as the constraint solver. The second part explores a specification-based approach for generating tests developed by Meudec. Tests are generated by partitioning the specification input domain into a set of subdomains using a rule-based automatic partitioning strategy. An important step of Meudec's method is to reduce the number of generated subdomains and find a minimal partition. This thesis shows that Meudec's minimal partition algorithm is incorrect. Furthermore, two new efficient alternative algorithms are developed. In addition, an algorithm for finding the upper and lower bounds on the number of subdomains in a partition is also presented. Finally, in the third part, two different designs of automatic testing tools are studied. The first tool uses a specification as an oracle; the second tool uses a reference program. The fault-detection effectiveness of the tools is evaluated using both randomly and systematically generated inputs.
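As a sketch of the alternative the thesis argues for (an LP relaxation solved by a simplex-style solver, combined with branch-and-bound on fractional integer variables), the following is a minimal illustration. It uses scipy's linprog, and the small constraint system at the bottom is a made-up stand-in for the equation system of a program path:

```python
# Minimal branch-and-bound sketch over an LP relaxation (illustrative only).
import math
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds, int_vars):
    """Minimize c.x subject to A_ub.x <= b_ub, branching until the
    variables listed in int_vars take integral values."""
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    if not res.success:
        return None                                   # branch is infeasible
    frac = next((i for i in int_vars
                 if abs(res.x[i] - round(res.x[i])) > 1e-6), None)
    if frac is None:
        return res.x                                  # all integer vars integral
    lo, hi = bounds[frac]
    floor_b = list(bounds); floor_b[frac] = (lo, math.floor(res.x[frac]))
    ceil_b  = list(bounds); ceil_b[frac]  = (math.ceil(res.x[frac]), hi)
    cands = [branch_and_bound(c, A_ub, b_ub, b, int_vars)
             for b in (floor_b, ceil_b)]              # explore both branches
    cands = [x for x in cands if x is not None]
    return min(cands, key=lambda x: float(np.dot(c, x))) if cands else None

# Hypothetical path condition: x + y <= 7, 2x - y <= 4, with x required integral.
print(branch_and_bound(c=[-1, -1], A_ub=[[1, 1], [2, -1]], b_ub=[7, 4],
                       bounds=[(0, 10), (0, 10)], int_vars=[0]))
```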
33

Search-based software testing and complex test data generation in a dynamic programming language

Mairhofer, Stefan January 2008 (has links)
Manually creating test cases is time consuming and error prone. Search-based software testing (SBST) can help automate this process, and thus reduce time and effort and increase quality, by automatically generating relevant test cases. Previous research has mainly focused on static programming languages with simple test data inputs such as numbers. In this work we present an approach to search-based software testing for dynamic programming languages that can generate test scenarios and both simple and more complex test data. The approach is implemented as a tool in and for the dynamic programming language Ruby. It uses an evolutionary algorithm to search for tests that give structural code coverage. We have evaluated the system in an experiment on a number of code examples that differ in complexity and in the type of input data they require, and we compare our system with the results obtained by a random test case generator. The experiment shows that the presented approach can compete with random testing and, in many situations, finds tests and data that give higher structural code coverage more quickly.
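A minimal sketch of such a coverage-driven evolutionary search is shown below. The thesis tool itself is written in Ruby; this Python sketch only illustrates the loop, and the fitness function is an invented stand-in for instrumented branch coverage:

```python
# Illustrative evolutionary search for inputs that maximize branch coverage.
import random

def coverage(candidate):
    """Stand-in fitness: number of branches of a toy method under test that
    the candidate input exercises. A real SBST tool instruments the program."""
    x = candidate
    covered = set()
    covered.add("x>0" if x > 0 else "x<=0")
    if x > 0 and x % 2 == 0:
        covered.add("even-positive")
    return len(covered)

def evolve(generations=50, pop_size=20):
    population = [random.randint(-100, 100) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=coverage, reverse=True)
        parents = population[: pop_size // 2]                  # selection
        children = [p + random.randint(-10, 10) for p in parents]  # mutation
        population = parents + children
    return max(population, key=coverage)

print(evolve())
```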
34

A Systematic Review of Automated Test Data Generation Techniques

Mahmood, Shahid January 2007 (has links)
Automated Test Data Generation (ATDG) is an activity that, in the course of software testing, automatically generates test data for the software under test (SUT). It usually makes testing more efficient and cost-effective. Test data generation is crucial for software testing because test data is one of the key factors determining the quality of any software test during its execution. The multi-phased activity of ATDG involves various techniques for each of its phases. This research field is not new by any means, although new techniques have been devised recently and a gradual increase in maturity has brought some diversified trends into it. Several ATDG techniques are available, but emerging trends in computing have raised the need to summarize and assess the current status of this area, particularly for practitioners, future researchers, and students. Analysis of ATDG techniques becomes even more important given that Miller et al. [4] highlight the difficulty of gaining general acceptance for these techniques. In this scenario only a systematic review can address the issues, because systematic reviews provide evaluation and interpretation of all available research relevant to a particular research question, topic area, or phenomenon of interest. This thesis, using a trustworthy, rigorous, and auditable methodology, provides a systematic review aimed at presenting a fair evaluation of research concerning ATDG techniques published in the period 1997-2006. It also aims at identifying probable gaps in research on ATDG techniques of the defined period, so as to suggest the scope for further research. The systematic review is presented on the pattern of [5 and 8] and follows the techniques suggested by [1]. Articles published in journals and conference proceedings during the defined period are the concern of this review. The motive behind this selection is that techniques discussed in the literature of this period are likely to reflect their suitability for today's software environment and are believed to fulfil the needs of the foreseeable future. Furthermore, only automated and/or semi-automated ATDG techniques have been considered; manual techniques are out of scope. As a result of the preliminary study, the review identifies ATDG techniques and relevant articles of the defined period, whereas the detailed study evaluates and interprets all available research relevant to ATDG techniques. For the interpretation and elaboration of the discovered ATDG techniques, a novel approach called 'Natural Clustering' is introduced. To accomplish the systematic review, a comprehensive research method has been developed, and its practical application has yielded important results. These results are presented in statistical/numeric, diagrammatic, and descriptive forms. Additionally, the thesis introduces various criteria for classifying the discovered ATDG techniques and presents a comprehensive analysis of their results. Some interesting facts are also highlighted in the course of the discussion. Finally, the discussion culminates in inferences and recommendations that emanate from this analysis. As the research work produced in this thesis is based on a rich amount of trustworthy information, it can also serve as an up-to-date guide to ATDG techniques.
35

Automatic Component Metadata Extractor and Consolidator for Continuous Integration

Kulda, Jiří January 2017 (has links)
This master's thesis describes a redesign of continuous integration for the Platform team at Red Hat. The result of the work is the Metamorph tool, which makes it possible to unify the other continuous integration tools used by the Platform team. The theoretical part covers the origins of continuous integration, describes it, and outlines the value it adds. Existing tools on the market are then presented in more detail, followed by a description of how continuous integration is used with Jenkins. The thesis also describes in detail the existing continuous integration solutions at Red Hat, and then presents the design and implementation of the aforementioned tool. Finally, the results of the work are tested by a team at Red Hat and possible extensions are outlined.
36

Test Data Generator for Relational Databases

Kotyz, Jan January 2018 (has links)
This thesis deals with the problem of test data generation for relational databases. The aim of the thesis is to design and implement a tool that meets the defined constraints and allows us to generate test data. The tool uses an SMT solver for constraint solving and test data generation.
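A hedged sketch of the core idea, not the thesis's actual tool: the table's column constraints are encoded for an SMT solver and a satisfying model is read back as a test row. The example uses the Z3 Python bindings and an invented employee schema:

```python
# Illustrative SMT-based test row generation (schema and constraints made up).
from z3 import Int, Solver, And, sat

age, salary = Int("age"), Int("salary")
s = Solver()
s.add(And(age >= 18, age <= 65))          # CHECK (age BETWEEN 18 AND 65)
s.add(salary > 30000)                     # CHECK (salary > 30000)
s.add(salary < age * 10000)               # hypothetical cross-column rule

if s.check() == sat:
    model = s.model()
    row = {"age": model[age].as_long(), "salary": model[salary].as_long()}
    print("INSERT INTO employee (age, salary) VALUES "
          f"({row['age']}, {row['salary']});")
```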
37

Test data generation based on binary search for class-level testing

Beydeda, Sami, Gruhn, Volker 08 November 2018 (has links)
One of the important tasks during software testing is the generation of appropriate test data. Various techniques have been proposed to automate this task. The available techniques, however, often have problems that limit their use. In the case of dynamic test data generation techniques, a frequent problem is that a large number of iterations might be necessary to obtain test data. This article proposes a novel technique for automated test data generation based on binary search. Binary search conducts searching tasks in logarithmic time, as long as its assumptions are fulfilled. This article shows that these assumptions can also be fulfilled in the case of path-oriented test data generation and presents a technique which can be used to generate test data covering certain paths in class methods.
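A minimal sketch of the premise, with a hypothetical method under test: when a branch predicate behaves monotonically over an input interval, the boundary input that covers the target path can be located in logarithmically many probes rather than through many search iterations:

```python
# Illustrative binary search for a path-covering input (predicate invented).
def takes_target_branch(x):
    """Hypothetical predicate of the method under test: the target path is
    taken once the (monotone) condition becomes true."""
    return x * x >= 2000

def binary_search_input(lo, hi):
    """Assumes the predicate is False at lo and True at hi (the binary-search
    assumption); returns the smallest integer input covering the target path."""
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if takes_target_branch(mid):
            hi = mid
        else:
            lo = mid
    return hi

print(binary_search_input(0, 10_000))   # 45, since 45*45 = 2025 >= 2000
```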
38

An automatic test data generation from UML state diagram using genetic algorithm.

Doungsa-ard, Chartchai, Dahal, Keshav P., Hossain, M. Alamgir, Suwannasart, T. January 2007 (has links)
Software testing is a part of the software development process; however, it is the first part that software developers drop when there is limited time to complete the project. Developers often finish construction close to the delivery time and usually do not have enough time to create effective test cases for their programs, and creating test cases manually under such time pressure is a heavy workload. A tool that automatically generates test cases and test data can help software developers create test cases from software designs/models at an early stage of development (before coding). Heuristic techniques can be applied to create quality test data. In this paper, a GA-based test data generation technique is proposed to generate test data from a UML state diagram, so that test data can be generated before coding. The paper details the GA implementation for generating sequences of triggers for the UML state diagram as test cases. The proposed algorithm is demonstrated manually on the example of a vending machine.
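A hedged sketch of the general approach, not the authors' implementation: chromosomes are sequences of triggers, fitness is the number of distinct transitions fired when the sequence is simulated on the state machine, and the vending-machine transition table and GA parameters below are invented for illustration:

```python
# Illustrative GA evolving trigger sequences for a toy state machine.
import random

# (state, trigger) -> next state; a made-up vending-machine state machine.
TRANSITIONS = {
    ("idle", "insert_coin"): "paid",
    ("paid", "select_item"): "dispensing",
    ("paid", "refund"): "idle",
    ("dispensing", "take_item"): "idle",
}
TRIGGERS = ["insert_coin", "select_item", "refund", "take_item"]

def fired_transitions(sequence, start="idle"):
    """Simulate the state machine and return the set of transitions fired."""
    state, fired = start, set()
    for trigger in sequence:
        nxt = TRANSITIONS.get((state, trigger))
        if nxt is not None:
            fired.add((state, trigger))
            state = nxt
    return fired

def fitness(sequence):
    return len(fired_transitions(sequence))

def evolve(pop_size=30, generations=40, length=8):
    population = [[random.choice(TRIGGERS) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]                      # one-point crossover
            if random.random() < 0.3:                      # mutation
                child[random.randrange(length)] = random.choice(TRIGGERS)
            children.append(child)
        population = survivors + children
    best = max(population, key=fitness)
    return best, fitness(best)

print(evolve())
```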
39

Test data generation from UML state machine diagrams using GAs

Doungsa-ard, Chartchai, Dahal, Keshav P., Hossain, M. Alamgir, Suwannasart, T. January 2008 (has links)
Automatic test data generation helps testers validate software against user requirements more easily. Test data can be generated from many sources, for example the experience of testers, the source program, or the software specification. Selecting a proper test data set is a decision-making task: testers have to decide what test data to use, and a heuristic technique is needed to solve this problem automatically. In this paper, we propose a framework for generating test data from software specifications. The selected specification is the Unified Modeling Language (UML) state machine diagram, which describes a system in terms of states that change when actions occur in the system. The generated test data is a sequence of these actions, and such sequences help testers know how they should test the system. The quality of the generated test data is measured by the number of transitions fired by the test data: the more transitions the test data can fire, the better its quality. The number of covered transitions is also used as feedback for a heuristic search for a better test set. Genetic algorithms (GAs) are selected for searching for the best test data. Our experimental results show that the proposed GA-based approach works well for generating test data for some types of UML state machine diagrams.
40

Design for Testability Techniques to Optimize VLSI Test Cost

Donglikar, Swapneel B. 28 July 2009 (has links)
High test data volume and long test application time are two major concerns for testing scan-based circuits. The Illinois Scan (ILS) architecture has been shown to be effective in addressing both these issues. ILS achieves a high degree of test data compression, thereby reducing both the test data volume and the test application time. The degree of test data volume reduction depends on the fault coverage achievable in the broadcast mode. However, the fault coverage achieved in the broadcast mode of the ILS architecture depends on the actual configuration of the individual scan chains, i.e., the number of chains and the mapping of the individual flip-flops of the circuit to the respective scan chain positions. Current methods for constructing scan chains in ILS are either ad hoc or use test pattern information from an a priori automatic test pattern generation (ATPG) run. In this thesis, we present novel low-cost techniques to construct an ILS scan configuration for a given design. These techniques efficiently use circuit topology information and optimize the assignment of flip-flops to scan chain locations without much compromise in broadcast-mode fault coverage, thus eliminating the need for an a priori ATPG run or any test set information. In addition, we propose a new scan architecture which combines the broadcast mode of ILS with the Random Access Scan architecture to enable further test data volume reduction over and above an effectively configured conventional ILS architecture (using the aforementioned heuristics), with reasonable area overhead. Experimental results on the ISCAS'89 benchmark circuits show that the proposed ILS configuration methods achieve on average 5% more fault coverage in the broadcast mode and on average 15% more reduction in test data volume and test application time than existing methods. The proposed new architecture achieves, on average, 9% and 33% additional test data volume and test application time reduction respectively on top of our proposed ILS configuration heuristics. / Master of Science
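As a back-of-the-envelope illustration (not taken from the thesis) of why broadcasting one tester channel into several parallel scan chains reduces both test data volume and application time, the figures below assume a simple shift-plus-capture cost model with invented circuit parameters:

```python
# Rough cost model for serial scan vs. ILS broadcast mode (illustrative only).
import math

def scan_cost(n_flip_flops, n_patterns, n_chains=1):
    """Cycles and tester bits needed when one channel drives n_chains chains."""
    shift_depth = math.ceil(n_flip_flops / n_chains)
    cycles = n_patterns * (shift_depth + 1)      # +1 capture cycle per pattern
    bits_from_tester = n_patterns * shift_depth  # one broadcast bit per shift
    return cycles, bits_from_tester

single = scan_cost(n_flip_flops=1000, n_patterns=500, n_chains=1)
broadcast = scan_cost(n_flip_flops=1000, n_patterns=500, n_chains=8)
print(single, broadcast)   # (500500, 500000) vs (63000, 62500)
```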
