21

Independence fault collapsing and concurrent test generation

Doshi, Alok Shreekant. Agrawal, Vishwani D., January 2006 (has links) (PDF)
Thesis (M.S.)--Auburn University, 2006. / Abstract. Vita. Includes bibliographic references (p. 74-78).
22

BIST-based performance characterization of mixed-signal circuits

Yu, Hak-soo, Abraham, Jacob A. January 2004 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2004. / Supervisor: Jacob A. Abraham. Vita. Includes bibliographical references. Also available from UMI.
23

A Functional-test specification language.

Williams, Dewi L. (Dewi Lloyd), Carleton University. Dissertation. Engineering, Electrical. January 1988 (has links)
Thesis (M. Eng.)--Carleton University, 1988. / Also available in electronic format on the Internet.
24

New test vector compression techniques based on linear expansion

Chakravadhanula, Krishna V. Touba, Nur A., January 2004 (has links) (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2004. / Supervisor: Nur Touba. Vita. Includes bibliographical references.
25

Model-Driven Testing in Umple

Almaghthawi, Sultan Eid A. 08 April 2020 (has links)
In this thesis we present a language and technique to facilitate model-based testing. The core of our approach is an xUnit-like language that allows tests to refer to model entities such as associations. This language can be used by developers to describe tests based on an existing UML model. The tests might even be written before creating a UML model, and be based on requirements. The testing language, including its parser and generators, is written entirely in Umple, an open-source textual modeling tool with semantics closely based on UML, and which generates Java, PHP and several other target languages. Tests in our language can be embedded in Umple or in standalone files. The test language compiler converts our abstract testing language into JUnit, PHPUnit and other domain-language testing environments. In addition to allowing developers to write tests manually, we have created generators that create abstract tests for any Umple model. These generators can be used to verify the Umple compiler and to give Umple users extra confidence in their models. User-defined tests can be standalone or embedded in methods; they can be generic, referring to metamodel elements. Tests can also be located in traits or mixsets to allow testing of separate concerns or product lines. To test our language and the tests written in it, we have created an extensive test suite. We have also implemented mutation testing, which varies features of the models to ensure that the pre-mutation tests fail when run against the mutated models.
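As an editorial illustration (not code from the thesis), the sketch below shows the general flavor of generating abstract tests from model entities such as associations: it walks a tiny hand-rolled metamodel and emits JUnit-style test stubs. The metamodel structure and the emitted accessor names are assumptions made for illustration, not Umple's actual internals or generated API.

```python
# Illustrative sketch only: a toy generator that walks a small, hand-rolled
# metamodel (associations between classes) and emits JUnit-style test stubs,
# loosely in the spirit of deriving abstract tests from a model.
# The metamodel shape and emitted method names are assumptions, not Umple's
# actual internals or generated API.

from dataclasses import dataclass

@dataclass
class Association:
    source: str   # owning class
    target: str   # associated class
    role: str     # role name used for accessors
    lower: int    # lower multiplicity bound
    upper: int    # upper multiplicity bound (-1 means "*")

def emit_association_test(a: Association) -> str:
    """Emit a JUnit-style stub checking the multiplicity bounds of one association."""
    upper = "Integer.MAX_VALUE" if a.upper < 0 else str(a.upper)
    return (
        f"@Test\n"
        f"public void test_{a.source}_{a.role}_multiplicity() {{\n"
        f"  {a.source} s = new {a.source}();\n"
        f"  assertTrue(s.numberOf{a.role.capitalize()}() >= {a.lower});\n"
        f"  assertTrue(s.numberOf{a.role.capitalize()}() <= {upper});\n"
        f"}}\n"
    )

if __name__ == "__main__":
    model = [Association("Course", "Student", "students", 0, -1)]
    for assoc in model:
        print(emit_association_test(assoc))
```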
26

Automatic Generation of Test Cases for Agile using Natural Language Processing

Rane, Prerana Pradeepkumar 24 March 2017 (has links)
Test case design and generation is a tedious manual process that requires 40-70% of the software test life cycle. The test cases written manually by inexperienced testers may not offer a complete coverage of the requirements. Frequent changes in requirements reduce the reusability of the manually written test cases costing more time and effort. Most projects in the industry follow a Behavior-Driven software development approach to capturing requirements from the business stakeholders through user stories written in natural language. Instead of writing test cases manually, this thesis investigates a practical solution for automatically generating test cases within an Agile software development workflow using natural language-based user stories and acceptance criteria. However, the information provided by the user story is insufficient to create test cases using natural language processing (NLP), so we have introduced two new input parameters, Test Scenario Description and Dictionary, to improve the test case generation process. To establish the feasibility, we developed a tool that uses NLP techniques to generate functional test cases from the free-form test scenario description automatically. The tool reduces the effort required to create the test cases while improving the test coverage and quality of the test suite. Results from the feasibility study are presented in this thesis. / Master of Science
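A minimal, rule-based sketch of the general idea (not the thesis's NLP pipeline): turning a Given/When/Then acceptance criterion into a structured test-case skeleton. A real implementation would use proper NLP techniques together with the Test Scenario Description and Dictionary inputs the abstract introduces; the parsing rules and the example user story below are invented.

```python
# Minimal sketch, not the pipeline from the thesis: it illustrates the general
# idea of turning a natural-language acceptance criterion (Given/When/Then
# style) into a structured test-case skeleton using simple rule-based parsing.

import re
from typing import Dict, List

def parse_acceptance_criterion(text: str) -> Dict[str, str]:
    """Split a Given/When/Then sentence into precondition, action, and expected result."""
    parts = {}
    for keyword in ("Given", "When", "Then"):
        match = re.search(rf"\b{keyword}\s+(.*?)(?=\s+(?:Given|When|Then)\b|$)",
                          text, flags=re.IGNORECASE | re.DOTALL)
        if match:
            parts[keyword.lower()] = match.group(1).strip().rstrip(",. ")
    return parts

def to_test_case(criterion: str, case_id: int) -> List[str]:
    """Render the parsed criterion as a human-readable test-case skeleton."""
    p = parse_acceptance_criterion(criterion)
    return [
        f"Test case TC-{case_id}",
        f"  Precondition : {p.get('given', '<unspecified>')}",
        f"  Step         : {p.get('when', '<unspecified>')}",
        f"  Expected     : {p.get('then', '<unspecified>')}",
    ]

if __name__ == "__main__":
    story = ("Given the user is logged in, "
             "When the user adds an item to the cart, "
             "Then the cart total is updated.")
    print("\n".join(to_test_case(story, 1)))
```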
27

On Enhancing Deterministic Sequential ATPG

Duong, Khanh Viet 15 March 2011 (has links)
This thesis presents four different techniques for improving the average-case performance of deterministic sequential circuit Automatic Test Pattern Generators (ATPG). Three techniques make use of information gathered during test generation to help identify more unjustifiable states with a higher percentage of "don't care" values. An approach for reducing the search space of the ATPG is also introduced; the technique can significantly reduce the size of the search space but cannot ensure the completeness of the search. Results on ISCAS-85 benchmark circuits show that all of the proposed techniques allow for better fault detection in shorter amounts of time. These techniques, when used together, produced test vectors with high fault coverage. Also investigated in this thesis is the Decision Inversion Problem, which threatens the completeness of ATPG tools such as HITEC or ATOMS. We propose a technique that eliminates this problem by forcing the ATPG to consider search spaces in which certain flip-flops are left untouched. Results show that our technique eliminated the decision inversion problem, ensuring the soundness of the search algorithm under the 9-valued logic model. / Master of Science
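For orientation only, the toy generator below illustrates the basic ATPG idea of finding an input vector that distinguishes the fault-free circuit from a faulty one, using brute-force enumeration on a tiny combinational example. It is not the deterministic sequential ATPG, 9-valued logic, or state-justification machinery discussed in the thesis; the circuit and fault names are invented.

```python
# Illustrative sketch only: a brute-force test generator for a single
# stuck-at fault in a tiny combinational circuit. Real deterministic ATPG
# uses structural algorithms with multi-valued logic and state justification;
# this toy version just enumerates input vectors and compares fault-free and
# faulty responses.

from itertools import product

def circuit(a: int, b: int, c: int, fault=None) -> int:
    """y = (a AND b) OR (NOT c); `fault` optionally pins one internal net."""
    n1 = a & b
    n2 = 1 - c
    if fault is not None:
        net, value = fault           # e.g. ("n1", 0) means net n1 stuck-at-0
        if net == "n1":
            n1 = value
        elif net == "n2":
            n2 = value
    return n1 | n2

def generate_test(fault):
    """Return the first input vector whose output differs once the fault is injected."""
    for vec in product((0, 1), repeat=3):
        if circuit(*vec) != circuit(*vec, fault=fault):
            return vec
    return None                      # fault is undetectable (redundant)

if __name__ == "__main__":
    print(generate_test(("n1", 0)))  # prints (1, 1, 1), which detects n1 stuck-at-0
```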
28

Techniques for Automatic Generation of Tests from Programs and Specifications

Edvardsson, Jon January 2006 (has links)
Software testing is complex and time consuming. One way to reduce the effort associated with testing is to generate test data automatically. This thesis is divided into three parts. In the first part a mixed-integer constraint solver developed by Gupta et al. is studied. The solver, referred to as the Unified Numerical Approach (UNA), is an important part of their generator and is responsible for solving the equation systems that correspond to the program path currently under test.

In this thesis it is shown that, in contrast to traditional optimization methods, the UNA is not bounded by the size of the solved equation system; instead, it depends on how the system is composed. That is, even for very simple systems consisting of one variable we can easily get more than a thousand iterations. It is also shown that the UNA is not complete, that is, it does not always find a mixed-integer solution when one exists. It is found that a better approach is to use a traditional optimization method, such as the simplex method in combination with branch-and-bound and/or a cutting-plane algorithm, as the constraint solver.

The second part explores a specification-based approach for generating tests developed by Meudec. Tests are generated by partitioning the specification input domain into a set of subdomains using a rule-based automatic partitioning strategy. An important step of Meudec's method is to reduce the number of generated subdomains and find a minimal partition. This thesis shows that Meudec's minimal partition algorithm is incorrect. Furthermore, two new efficient alternative algorithms are developed. In addition, an algorithm for finding the upper and lower bounds on the number of subdomains in a partition is also presented.

Finally, in the third part, two different designs of automatic testing tools are studied. The first tool uses a specification as an oracle; the second tool uses a reference program. The fault-detection effectiveness of the tools is evaluated using both randomly and systematically generated inputs.
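A hedged sketch of the alternative the abstract recommends, an LP relaxation combined with branch-and-bound as a mixed-integer constraint solver for path conditions. It is not code from the thesis; the example constraint system is invented, and SciPy's linprog stands in for the simplex step.

```python
# Hedged sketch: a tiny branch-and-bound mixed-integer solver built on an LP
# relaxation, in the spirit of "simplex plus branch-and-bound" as a constraint
# solver for path conditions. The example system is invented.

import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds):
    """Minimize c.x subject to A_ub x <= b_ub with all variables integer."""
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    if not res.success:
        return None                                  # relaxation infeasible
    # Branch on the first variable whose relaxed value is fractional.
    for i, xi in enumerate(res.x):
        if abs(xi - round(xi)) > 1e-6:
            lo, hi = bounds[i]
            left = list(bounds);  left[i] = (lo, math.floor(xi))
            right = list(bounds); right[i] = (math.ceil(xi), hi)
            candidates = [branch_and_bound(c, A_ub, b_ub, left),
                          branch_and_bound(c, A_ub, b_ub, right)]
            candidates = [s for s in candidates if s is not None]
            return min(candidates, key=lambda s: s[0]) if candidates else None
    return (res.fun, [round(v) for v in res.x])      # all-integer solution found

if __name__ == "__main__":
    # Toy "path constraint": minimize x + y with 3x + 2y >= 12, x - y <= 1, 0 <= x, y <= 10.
    c = [1, 1]
    A_ub = [[-3, -2], [1, -1]]                       # the >= constraint rewritten as <=
    b_ub = [-12, 1]
    print(branch_and_bound(c, A_ub, b_ub, bounds=[(0, 10), (0, 10)]))
```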
29

Optimum Sensor Localization/Selection In A Diagnostic/Prognostic Architecture

Zhang, Guangfan 17 February 2005 (has links)
This research addresses the problem of sensor localization/selection for fault diagnostic purposes in Prognostics and Health Management (PHM)/Condition-Based Maintenance (CBM) systems. The performance of PHM/CBM systems relies not only on the diagnostic/prognostic algorithms used, but also on the types, location, and number of sensors selected. Most of the research reported in the area of sensor localization/selection for fault diagnosis focuses on qualitative analysis and lacks a uniform figure of merit. Moreover, sensor localization/selection is mainly studied as an open-loop problem, without considering performance feedback from the on-line diagnostic/prognostic system. In this research, a novel approach for sensor localization/selection is proposed within an integrated diagnostic/prognostic architecture to achieve maximum diagnostic performance. First, a fault detectability metric is defined quantitatively. A novel graph-based approach, the Quantified-Directed Model, is called upon to model fault propagation in complex systems, and an appropriate figure of merit is defined to maximize fault detectability and minimize the required number of sensors while achieving optimum performance. Secondly, the proposed sensor localization/selection strategy is integrated into a diagnostic/prognostic system architecture while exhibiting attributes of flexibility and scalability. Moreover, the performance is validated and verified in the integrated diagnostic/prognostic architecture, and the performance of that architecture acts as useful feedback for further optimizing the sensors considered. The approach is tested and validated through a five-tank simulation system. This research has led to the following major contributions:
• a generalized methodology for sensor localization/selection for fault diagnostic purposes;
• a quantitative definition of the fault detection ability of a sensor, a novel Quantified-Directed Model (QDG) method for fault propagation modeling purposes, and a generalized figure of merit to maximize fault detectability and minimize the required number of sensors while achieving optimum diagnostic performance at the system level;
• a novel, integrated architecture for a diagnostic/prognostic system;
• validation of the proposed sensor localization/selection approach in the integrated diagnostic/prognostic architecture.
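As a generic illustration of the sensor-selection idea (not the thesis's Quantified-Directed Model approach), the sketch below uses a greedy heuristic over a 0/1 fault-detectability table to pick a small sensor set covering as many faults as possible; all sensor and fault names are invented.

```python
# Hedged sketch: a greedy heuristic for choosing a small sensor set that
# maximizes fault detectability, with detectability modeled as a simple
# fault-vs-sensor coverage table. Names are invented for illustration.

def greedy_sensor_selection(detects, faults):
    """Pick sensors one at a time, each covering the most still-undetected faults."""
    uncovered = set(faults)
    selected = []
    while uncovered:
        best = max(detects, key=lambda s: len(detects[s] & uncovered))
        gain = detects[best] & uncovered
        if not gain:
            break                      # remaining faults are undetectable by any sensor
        selected.append(best)
        uncovered -= gain
    return selected, uncovered

if __name__ == "__main__":
    # Which faults each candidate sensor can detect (toy example).
    detects = {
        "pressure_s1": {"leak_tank1", "leak_tank2"},
        "flow_s2":     {"valve_stuck", "leak_tank2"},
        "level_s3":    {"leak_tank1", "pump_degraded"},
    }
    faults = {"leak_tank1", "leak_tank2", "valve_stuck", "pump_degraded"}
    chosen, undetected = greedy_sensor_selection(detects, faults)
    print(chosen, undetected)   # e.g. ['pressure_s1', 'flow_s2', 'level_s3'] set()
```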
30

Evaluation of Model-Based Testing on a Base Station Controller

Trimmel, Stefan January 2008 (has links)
This master thesis investigates how well suited the model-based testing process is for testing a new feature of a Base Station Controller. In model-based testing the tester designs a behavioral model of the system under test, or some part of the system. This model is then given to a test generation tool that analyzes the model and produces interesting test cases. These test cases can be run on the system either automatically or manually, depending on the setup.

This report suggests that the behavioral model should be produced at as early a stage as possible and that it should be a collaboration between the test team and the design team.

The advantages of the model-based testing process are a better overview of the test cases, test cases that are always up to date, help in finding errors or contradictions in the requirements, and closer collaboration between the test team and the design team. The disadvantage of the model-based testing process is that it introduces more sources where an error can occur: the behavioral model can have errors, the layer between the model and the generated test cases can have errors, and the layer between the test cases and the system under test can have errors. This report also indicates that the time needed for testing will be longer compared with manual testing.

During the pilot of this master thesis, in which part of a new feature was tested, a test generation tool called Qtronic was used. This tool solves a very challenging task, generating test cases from a general behavioral model, and does so with good results. The tool provides many good things, but it also has its shortcomings. One of the biggest shortcomings is debugging the model to find errors. This step is very time consuming because it requires that test case generation be performed on the whole model; when there is a fault in the model, this generation can take a very long time before the tool decides that it is impossible to cover the model.

Provided that the Qtronic tool is improved on the various issues suggested in the thesis, the most important being the long debugging time, the next step can be to use model-based testing in a larger evaluation project at BSC Design, Ericsson.
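For context, the sketch below illustrates the core of model-based test generation in the abstract sense: deriving transition-covering test sequences from a small behavioral model expressed as a finite state machine. It is a generic illustration, not how Qtronic or the Ericsson pilot works; the call-handling states and events are invented.

```python
# Hedged sketch of model-based test generation: given a behavioral model
# (a finite state machine), derive one test sequence per transition so that
# every transition is exercised at least once. The model below is invented.

from collections import deque

def transition_coverage_tests(fsm, start):
    """Return one event sequence per transition, each reaching that transition from `start`."""
    # Shortest event path from `start` to every reachable state (BFS over the model).
    paths = {start: []}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for event, nxt in fsm.get(state, []):
            if nxt not in paths:
                paths[nxt] = paths[state] + [event]
                queue.append(nxt)
    # One test per transition: drive to the source state, then fire the event.
    tests = []
    for state, edges in fsm.items():
        for event, _nxt in edges:
            if state in paths:
                tests.append(paths[state] + [event])
    return tests

if __name__ == "__main__":
    fsm = {
        "Idle":       [("setup_request", "Connecting")],
        "Connecting": [("answer", "Connected"), ("timeout", "Idle")],
        "Connected":  [("hangup", "Idle")],
    }
    for t in transition_coverage_tests(fsm, "Idle"):
        print(" -> ".join(t))
```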
