About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Guiding RTL Test Generation Using Relevant Potential Invariants

Khanna, Tania 02 August 2018 (has links)
In this thesis, we propose to use relevant potential invariants in a simulation-based, swarm-intelligence-based test generation technique to generate relevant test vectors for design validation at the Register Transfer Level (RTL). Providing useful guidance to the test generator for such techniques is critical. In our approach, we provide guidance by exploiting potential invariants in the design. These potential invariants are obtained using random stimuli and hold true under those stimuli. Since the potential invariants are only likely to be true, we try to generate stimuli that can falsify them. Any such vectors would help reach some corners of the design. However, the space of potential invariants can be extremely large. To reduce execution time, we also implement a two-layer filter to remove the irrelevant potential invariants that may not contribute to reaching difficult states. With the filter, the vectors generated help reduce the overall test length while still reaching the same coverage as considering all unfiltered potential invariants. Experimental results show that with only the filtered potential invariants, we were able to reach equal or better branch coverage than that reported by BEACON on the ITC99 benchmarks, with a considerable reduction in vector lengths, at reduced execution time. / Master of Science / In recent years, the size and complexity of hardware designs have been increasing at an enormous rate. As a result, verification of these designs is of utmost importance and demands far more resources and time than the design of the hardware itself. To describe these designs, developers use Hardware Description Languages (HDLs), which capture the important decision points of the system, also called branches of the circuit. Several methodologies have been proposed to check how many branches of the design can be traversed by a set of inputs. This practice is important to confirm correct functionality of the design, as we can catch all the faults in the design at these decision points. Some of these methodologies include checking with random inputs, exhaustively checking every possible input, investing many hours of labor to verify with appropriate inputs, or simply automating the process of generating inputs. In this thesis, we focus on one such automated process called BEACON, or Branch-oriented Evolutionary Ant Colony OptimizatioN. We propose a modification to improve this method by using standard properties of the design. These properties, also known as invariants, help to cover those branches that require extra effort in terms of both inputs and time and are thus hard to cover. When we add these significant invariants to the design, the modified BEACON is able to cover almost all accessible branches in the system in significantly less time and with fewer vectors than the original BEACON, which helps save a lot of resources.
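As an illustration of the idea described above, the sketch below is a minimal, hypothetical Python mock-up, not the thesis's implementation: the signal names, invariant templates, and relevance filter are all assumptions. It mines potential invariants from random-simulation traces and keeps only those that mention signals of interest.

```python
import random

def candidate_invariants(signals):
    """Enumerate simple potential-invariant templates over signal pairs
    (equality and ordering); the templates are illustrative only."""
    cands = []
    for a in signals:
        for b in signals:
            if a < b:
                cands.append((f"{a} == {b}", lambda s, a=a, b=b: s[a] == s[b]))
                cands.append((f"{a} <= {b}", lambda s, a=a, b=b: s[a] <= s[b]))
    return cands

def mine_potential_invariants(traces, signals):
    """Keep only candidates that hold in every state of every random trace."""
    return [(name, pred) for name, pred in candidate_invariants(signals)
            if all(pred(state) for trace in traces for state in trace)]

def filter_relevant(invariants, guard_signals):
    """One illustrative filter layer: keep invariants that mention signals
    appearing in the guards of hard-to-reach branches."""
    return [(n, p) for n, p in invariants if any(s in n for s in guard_signals)]

# Toy random traces for a two-signal design (states are name -> value dicts).
random.seed(0)
traces = [[{"req": random.randint(0, 1), "ack": 0} for _ in range(10)]
          for _ in range(5)]
potential = mine_potential_invariants(traces, ["req", "ack"])
relevant = filter_relevant(potential, ["ack"])
print("potential:", [n for n, _ in potential])
print("relevant :", [n for n, _ in relevant])
```

Stimuli that falsify any surviving relevant invariant would then be promoted as test vectors, since they drive the design into behavior never observed under random simulation.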
2

An Efficient 2-Phase Strategy to Achieve High Branch Coverage

Prabhu, Sarvesh P. 06 March 2012 (has links)
Symbolic execution-based test generation is gaining popularity for software testing. The increasing complexity of software programs poses new challenges for symbolic execution-based test generation because of the path explosion problem. We present a new 2-phase symbolic-execution-driven strategy that quickly achieves high branch coverage in software. Phase 1 follows a greedy approach that quickly covers as many branches as possible by exploring each branch through its corresponding shortest path prefix. Phase 2 covers the remaining branches that are left uncovered when the shortest path to the branch was infeasible. In Phase 1, basic conflict-driven learning is used to skip all paths that contain any of the previously encountered conflicting conditions, while in Phase 2, more intelligent conflict-driven learning is used to skip regions that do not have a feasible path to any unexplored branch. This results in a considerable reduction in unnecessary SMT solver calls. Experimental results show that significant speedup can be achieved, effectively reducing the time to detect a bug and providing higher branch coverage for a fixed timeout period than previous techniques. / Master of Science
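A minimal sketch of the Phase 1 greedy idea, assuming a toy control-flow graph and an abstract `is_feasible` check standing in for the SMT solver; the CFG encoding and conflict representation here are illustrative, not the thesis's actual data structures.

```python
from collections import deque

def shortest_path_prefix(cfg, entry, target):
    """BFS over the control-flow graph: shortest sequence of branch conditions
    leading from the entry to the target branch (CFG as an adjacency dict)."""
    queue, seen = deque([(entry, [])]), {entry}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for cond, succ in cfg.get(node, []):
            if succ not in seen:
                seen.add(succ)
                queue.append((succ, path + [cond]))
    return None

def phase1(cfg, entry, branches, is_feasible):
    """Greedy phase: try each branch via its shortest path prefix, learning
    conflicting condition sets so later paths containing them skip the solver."""
    covered, conflicts = set(), []
    for b in branches:
        path = shortest_path_prefix(cfg, entry, b)
        if path is None:
            continue
        if any(c.issubset(set(path)) for c in conflicts):
            continue                      # learned conflict: skip the solver call
        if is_feasible(path):
            covered.add(b)
        else:
            conflicts.append(set(path))   # remember the infeasible condition set
    return covered, conflicts

# Toy CFG: node -> [(branch condition, successor)], with one contradictory guard.
cfg = {"entry": [("x>0", "A"), ("x<=0", "B")], "A": [("x<0", "C")]}
covered, conflicts = phase1(cfg, "entry", ["A", "B", "C"],
                            is_feasible=lambda p: not {"x>0", "x<0"} <= set(p))
print(covered, conflicts)
```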
3

Automated Navigation Model Extraction For Web Load Testing

Kara, Ismihan Refika 01 December 2011 (has links) (PDF)
Web pages serve a huge number of internet users in nearly every area. Adequate testing is needed to address the problems of web domains and provide more efficient and accurate services. We present an automated tool to test web applications against execution errors and the errors that occur when many users connect to the same server concurrently. Our tool, called NaMoX, obtains the clickables of the web pages and creates a model using a depth-first search algorithm. NaMoX simulates a number of users, parses the developed model, and tests the model using branch coverage analysis. We have performed experiments on five web sites and reported the response times observed when a click operation is performed. We found 188 errors in total. Quality metrics are extracted and applied to the case studies.
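A hypothetical sketch of navigation-model extraction by depth-first search over clickables; `get_clickables` and `click` are placeholders for real browser-driver calls, and nothing here reflects NaMoX's actual internals.

```python
def build_navigation_model(start_page, get_clickables, click, max_depth=3):
    """Depth-first exploration: from each page, follow every clickable and
    record the resulting transitions as a navigation graph. `get_clickables`
    and `click` are placeholders for real browser-driver calls."""
    model = {}                          # page -> list of (clickable, next_page)
    def dfs(page, depth):
        if page in model or depth > max_depth:
            return
        model[page] = []
        for c in get_clickables(page):
            nxt = click(page, c)
            model[page].append((c, nxt))
            dfs(nxt, depth + 1)
    dfs(start_page, 0)
    return model

# Toy site: each page maps its clickables to the page they lead to.
site = {"home": {"login": "login", "about": "about"},
        "login": {"submit": "dashboard"}, "about": {}, "dashboard": {}}
model = build_navigation_model(
    "home",
    get_clickables=lambda p: list(site.get(p, {})),
    click=lambda p, c: site[p][c])
for page, edges in model.items():
    print(page, "->", edges)
```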
4

Branch Guided Metrics for Functional and Gate-level Testing

Acharya, Vineeth Vadiraj 31 March 2015 (has links)
With the increasing complexity of modern-day processors and systems-on-a-chip (SoCs), designers invest a lot of time and resources into testing and validating these designs. To reduce the time-to-market and cost, the techniques used to validate these designs have to constantly improve. Since most of the design activity has moved to the register transfer level (RTL), test methodologies at the RTL have been gaining momentum. We present a novel framework for functional test generation at the RTL. A popular software-based metric for measuring the effectiveness of an RTL test suite is branch coverage, but exercising hard-to-reach branches is still a challenge and requires a good understanding of the design semantics. The proposed framework uses static analysis to extract certain semantics of the circuit and uses several data structures to model these semantics. Using these data structures, we assist the branch-guided search to exercise these hard-to-reach branches. Since the correlation between high branch coverage and detecting defects and bugs is not clear, we present a new metric at the RTL which augments RTL branch coverage with state values. Vectors which score higher on the new metric achieve higher branch and state coverage, and can therefore be applied at different levels of abstraction, such as post-silicon validation. Experimental results show that the use of the new metric in our test generation framework can achieve a high level of branch and fault coverage for several benchmark circuits while reducing the length of the vector sequence. This work was supported in part by NSF grant 1016675. / Master of Science
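The abstract does not define the metric precisely; the sketch below is only a hypothetical scoring function illustrating how branch coverage might be augmented with state-value diversity.

```python
def augmented_coverage_score(covered_branches, total_branches, observed_states,
                             weight=0.5):
    """Hypothetical score that augments branch coverage with state-value
    diversity; the thesis's actual metric may be defined differently."""
    branch_cov = len(covered_branches) / total_branches
    # Fraction of simulation cycles that visited a distinct register-value tuple.
    state_cov = len(set(observed_states)) / max(len(observed_states), 1)
    return weight * branch_cov + (1 - weight) * state_cov

# Two candidate vector sequences for the same toy design: the second covers one
# more branch but keeps revisiting the same state.
score_a = augmented_coverage_score({"b1", "b2"}, 4, [(0, 0), (0, 1), (1, 1)])
score_b = augmented_coverage_score({"b1", "b2", "b3"}, 4, [(0, 0), (0, 0), (0, 0)])
print(round(score_a, 3), round(score_b, 3))
```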
5

RTL Functional Test Generation Using Factored Concolic Execution

Pinto, Sonal 21 July 2017 (has links)
This thesis presents a novel concolic testing methodology and CORT, a test generation framework that uses it for high-level functional test generation. The test generation effort is visualized as the systematic unraveling of the control-flow response of the design over multiple (factored) explorations. We begin by transforming the Register Transfer Level (RTL) source for the design into a high-performance, compiled C++ functional simulator that is instrumented for branch coverage. An exploration begins by simulating the design with concrete stimuli. Then, we perform an interleaved, cycle-by-cycle symbolic evaluation over the concrete execution trace extracted from the Control Flow Graph (CFG) of the design. The purpose of this task is to dynamically discover means to divert the control flow of the system by mutating primary-input-stimulated control statements in this trace. We record the control-flow response as a Test Decision Tree (TDT), a new representation of the test generation effort. Successive explorations begin at system states heuristically selected from a global TDT, onto which each new decision tree resulting from an exploration is stitched. CORT constructs functional tests for the ITC99 and IWLS-2005 benchmarks that achieve high branch coverage with the fewest input vectors, faster than existing methods. Furthermore, we achieve orders-of-magnitude speedup compared to previous hybrid concrete and symbolic simulation-based techniques. / Master of Science
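A rough sketch of the factored exploration loop, with a toy simulator and a random restart heuristic standing in for CORT's actual instrumented simulator and TDT selection heuristic; the TDT structure shown is an assumption, not the thesis's data structure.

```python
import random

class TDTNode:
    """Illustrative Test Decision Tree node: one control decision observed in
    a concrete run, with children for the decisions that followed it."""
    def __init__(self, decision, state):
        self.decision, self.state, self.children = decision, state, []

def count_nodes(node):
    return 1 + sum(count_nodes(c) for c in node.children)

def explore(simulate, select_node, iterations=3):
    """Factored exploration sketch: each iteration simulates concretely from a
    heuristically chosen state, records the branch decisions taken as a path,
    and stitches that path onto the global TDT."""
    root = TDTNode("root", state=0)
    frontier = [root]
    for _ in range(iterations):
        node = select_node(frontier)              # restart-point heuristic
        for decision, state in simulate(node.state):
            child = TDTNode(decision, state)
            node.children.append(child)
            frontier.append(child)
            node = child
    return root

def simulate(state, cycles=3):
    """Toy stand-in for the compiled functional simulator: each cycle takes
    one of two branches depending on the evolving state."""
    for _ in range(cycles):
        state = (state + random.randint(0, 1)) % 4
        yield ("b_then" if state % 2 else "b_else"), state

random.seed(1)
tdt = explore(simulate, select_node=lambda f: random.choice(f))
print("nodes stitched onto the TDT:", count_nodes(tdt))
```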
6

Improving Bio-Inspired Frameworks

Varadarajan, Aravind Krishnan 05 October 2018 (has links)
In this thesis, we provide improvements to two different bio-inspired algorithms. The first is enhancing the performance of bio-inspired test generation for circuits described in RTL Verilog, specifically for branch coverage. We seek to improve upon an existing framework, BEACON, in terms of performance. BEACON is an Ant Colony Optimization (ACO) based test generation framework. Like other ACO frameworks, BEACON has good scope for performance improvement through parallel computing. We exploit the available parallelism using both multi-core Central Processing Units (CPUs) and Graphics Processing Units (GPUs). Using our new multithreaded approach, we can reduce test generation time by a factor of 25 compared to the original implementation for a wide variety of circuits. We also provide a 2-dimensional factoring method for BEACON to improve the available parallelism and yield some additional speedup. The second bio-inspired algorithm we address is for Deep Neural Networks. With the increasing prevalence of neural nets in artificial intelligence and mission-critical applications such as self-driving cars, questions arise about their reliability and robustness. We have developed a test-generation-based technique and metric to evaluate the robustness of a neural net's outputs based on its sensitivity to its inputs. This is done by generating inputs which the neural net finds difficult to classify but which are relatively apparent to human perception. We measure the degree of difficulty of generating such inputs to calculate our metric. / MS / High-level Hardware Description Languages (HDLs) have allowed designers to implement complicated hardware designs with considerably less effort. Unfortunately, design verification for the same circuits has failed to scale gracefully in terms of time and effort. Not only has it become more difficult for formal methods due to exponential complexity from increasing path explosion, but concrete test generation frameworks also face new issues such as an increased requirement in the volume of simulations. The advent of parallel computing using General Purpose Graphics Processing Units (GPGPUs) has led to improved performance for various applications. We propose to leverage both the multi-core CPU and the GPGPU for RTL test generation. This is achieved by implementing a test generation framework that can utilize the SIMD-type parallelism available in GPGPUs and the task-level parallelism available on CPUs. The speedup is extracted both from the test generation framework itself and from refactoring the hardware model for multi-threaded test generation. For this purpose, we translate the RTL Verilog into a C++ and a CUDA compilable program. Experimental results show that considerable speedup can be achieved for test generation without loss of coverage. In recent years, machine learning and artificial intelligence have taken a substantial leap forward with the discovery of Deep Neural Networks (DNNs). Unfortunately, apart from accuracy and F-test numbers, there exist very few metrics to qualify a DNN. This becomes a reliability issue as DNNs are quite frequently used in safety-critical applications. It is difficult to interpret how the parameters of a trained DNN help store the knowledge from the training inputs. Therefore it is also difficult to infer whether a DNN has learned parameters which might cause an output neuron to misfire wrongly, that is, a bug. An exhaustive search of the input space of the DNN is not only infeasible but also misleading. Thus, in our work, we try to apply test generation techniques to generate new test inputs, based on the existing training and testing sets, to qualify the underlying robustness. Attempts to generate these inputs are guided only by the prediction probability values at the final output layer. We observe that, depending on the amount of perturbation and the time needed to generate these inputs, we can differentiate between DNNs of varying quality.
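A simplified sketch of the task-level parallelism described above, assuming a toy per-ant simulation; a real implementation would dispatch compiled RTL simulations (or GPU kernels) rather than pure-Python work, for which a thread pool alone would not yield the reported speedup.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_ant(seed, vector_len=20):
    """One ant: drive a toy state machine with a random input sequence and
    report the branches it covers (a stand-in for an RTL simulation run)."""
    rng = random.Random(seed)
    covered, state = set(), 0
    for _ in range(vector_len):
        state = (state + rng.randint(0, 3)) % 8
        covered.add(f"branch_{state}")
    return covered

def parallel_generation(num_ants=32, workers=8):
    """Task-level parallelism: each ant runs on its own worker and the
    per-ant coverage sets are merged afterwards."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(run_ant, range(num_ants)))
    total = set()
    for cov in results:
        total |= cov
    return total

print("branches covered:", sorted(parallel_generation()))
```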
7

Improving Branch Coverage in RTL Circuits with Signal Domain Analysis and Restrictive Symbolic Execution

Bagri, Sharad 18 March 2015 (has links)
Considerable research has been directed towards efficient test stimuli generation for Register Transfer Level (RTL) circuits. However, stimuli generation frameworks are still not capable of generating effective stimuli for all circuits. Some of the limiting factors are that 1) it is hard to ascertain whether a branch in the RTL code is reachable, and 2) some hard-to-reach branches require intelligent algorithms to reach them. Since unreachable branches cannot be reached by any test sequence, we propose a method to deduce the unreachability of a branch by examining the possible values a signal can take in the RTL code, without explicit unrolling of the design. To the best of our knowledge, this method has identified more unreachable branches than any method published in this domain, while being computationally less expensive. Moreover, some branches require very specific values on input signals in specific cycles to be reached. Conventional symbolic execution can generate those values but is computationally expensive. We propose a cycle-by-cycle restrictive symbolic execution that analyzes only a selected subset of program statements to reduce the computational cost. Our proposed method gathers information from an initial execution trace, generated by any technique, to intelligently decide the specific cycles where applying the method will be helpful. This method can be combined with simulation-based test stimuli generation methods to reduce the cost of formal verification. With this method, we were able to reach some previously unreached branches in the ITC99 benchmark circuits. / Master of Science
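A toy sketch of the unreachability argument: if the set of values a signal can ever take (however that domain is derived) contains no value satisfying a branch guard, the branch is provably unreachable. The domain here is given directly rather than extracted from RTL, so this is only an illustration of the idea.

```python
def possible_values(assignments):
    """The set of values a signal is ever assigned anywhere in the RTL (given
    directly here; the thesis derives this without unrolling the design)."""
    return set(assignments)

def branch_reachable(signal_domain, guard):
    """A branch guarded by `guard(signal_value)` is provably unreachable if no
    value in the controlling signal's domain satisfies the guard."""
    return any(guard(v) for v in signal_domain)

# Toy example: `mode` is only ever assigned 0, 1 or 2 in the design.
mode_domain = possible_values([0, 1, 2, 1, 0])
print(branch_reachable(mode_domain, lambda m: m == 3))  # False -> unreachable
print(branch_reachable(mode_domain, lambda m: m == 2))  # True  -> may be reachable
```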
8

Instrumentation and Coverage Analysis of Cyber Physical System Models

January 2016 (has links)
A Cyber Physical System consists of a computer monitoring and controlling physical processes, usually in a feedback loop. These systems are increasingly becoming part of our daily life, ranging from smart buildings to medical devices to automobiles. The controller comprises discrete software which may be operating in one of many possible operating modes and interacting with a changing physical environment in a feedback loop. Systems with such a mix of discrete and continuous dynamics are usually termed hybrid systems. In general, these systems are safety critical, hence their correct operation must be verified. Model Based Design (MBD) languages like Simulink are being used extensively for the design and analysis of hybrid systems due to the ease of system design and automatic code generation. They also allow testing and verification of these systems before deployment. One of the main challenges in the verification of these systems is to test all the operating modes of the control software while reducing the amount of user intervention. This research aims to provide an automated framework for the structural analysis and instrumentation of hybrid system models developed in Simulink. The behavior of the components that introduce discontinuities in the model is automatically extracted in the form of state transition graphs. The framework is integrated into the S-TaLiRo toolbox to demonstrate the improvement in mode coverage. / Dissertation/Thesis / Masters Thesis Computer Science 2016
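A hypothetical Python rendering of a state transition graph for a switch-like block and a simple mode-coverage check; the actual framework operates on Simulink models inside S-TaLiRo and is not reproduced here.

```python
def switch_stg(threshold):
    """Hypothetical state transition graph for a Simulink-style switch block:
    two operating modes, with transitions guarded by the control input u."""
    return {
        "pass_first":  [(lambda u: u < threshold,  "pass_second")],
        "pass_second": [(lambda u: u >= threshold, "pass_first")],
    }

def mode_coverage(stg, initial_mode, control_trace):
    """Replay a control-signal trace over the graph and report which operating
    modes the trace exercises (a proxy for mode coverage)."""
    mode, visited = initial_mode, {initial_mode}
    for u in control_trace:
        for guard, nxt in stg[mode]:
            if guard(u):
                mode = nxt
                break
        visited.add(mode)
    return visited

stg = switch_stg(threshold=0.5)
print(mode_coverage(stg, "pass_first", [0.9, 0.2, 0.7]))  # both modes exercised
```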
9

Analysis of test coverage metrics in a business critical setup / Analys av mätvärden för test i ett affärskritiskt system

Mishra, Shashank January 2017 (has links)
Test coverage is an important parameter for analyzing how well a product is being tested in any domain within the IT industry. Unit testing is one of the important processes that have gained even more popularity with the rise of the test-driven development (TDD) culture. This degree project, conducted at NASDAQ Technology AB, analyzes the existing unit tests in one of the products and compares various coverage models in terms of quality. Further, the study examines the factors that affect code coverage, presents best practices for unit testing, and describes a proven test process used in a real-world project. To conclude, recommendations are given to NASDAQ based on the findings of this study and industry standards.
