31

Strategies for Scalable Symbolic Execution-based Test Generation

Krishnamoorthy, Saparya 02 August 2010 (has links)
With the advent of advanced program analysis and constraint solving techniques, several test generation tools use variants of symbolic execution. Symbolic techniques have been shown to be very effective in path-based test generation; however, they fail to scale to large programs due to the exponential number of paths to be explored. In this thesis, we focus on tackling this path explosion problem and propose search strategies to achieve quick branch coverage under symbolic execution while exploring only a fraction of the paths in the program. We present a reachability-guided strategy that makes use of the reachability graph of the program to explore unvisited portions of the program, and a conflict-driven backtracking strategy that utilizes conflict analysis to perform non-chronological backtracking. We also propose error-directed search strategies aimed at catching bugs in the program faster by targeting those parts of the program where bugs are likely to be found or those that are hard to reach. We present experimental evidence that these strategies can significantly reduce the search space and improve the speed of test generation for programs. / Master of Science
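A minimal sketch of the reachability-guided idea described in this abstract: prefer the pending symbolic state whose next branch can still reach the most uncovered branches. The state interface (`branch`, `solve`, `successors`) and the precomputed `reachable_from` map are assumptions for illustration, not the thesis's actual framework.

```python
# Hypothetical reachability-guided branch selection for symbolic exploration.
def pick_next_state(frontier, covered, reachable_from):
    """Prefer the state whose pending branch reaches the most uncovered branches."""
    def score(state):
        return len(reachable_from[state.branch] - covered)
    return max(frontier, key=score)

def explore(initial_state, reachable_from, budget=1000):
    covered, frontier, tests = set(), [initial_state], []
    while frontier and budget > 0:
        state = pick_next_state(frontier, covered, reachable_from)
        frontier.remove(state)
        covered.add(state.branch)
        tests.append(state.solve())          # concretize the path condition into a test
        frontier.extend(state.successors())  # fork on the next branch
        budget -= 1
    return tests
```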
32

Techniques for Enhancing Test and Diagnosis of Digital Circuits

Prabhu, Sarvesh P. 10 January 2015 (has links)
Test and diagnosis are critical areas in semiconductor manufacturing. Every chip manufactured using a new or immature technology or process needs to be tested for manufacturing defects to ensure that defective chips are not sold to the customer. Conventionally, testing is done by mounting the chip on Automated Test Equipment (ATE) and applying test patterns to test for different faults. With shrinking feature sizes, the complexity of the circuits on a chip is increasing, which in turn increases the number of test patterns needed to test the chip comprehensively. This increases the test application time, which further increases the cost of test and ultimately the cost per device. Furthermore, chips that fail during test need to be diagnosed to determine the cause of the failure so that the manufacturing process can be improved to increase the yield. With the increase in the size and complexity of circuits, diagnosis is becoming an even more challenging and time-consuming process. Fast diagnosis of failing chips can help reduce the ramp-up to the high-volume manufacturing stage and thus reduce the time to market. To reduce the time needed for diagnosis, efficient diagnostic patterns have to be generated that can distinguish between several faults. However, in order to reduce the test application time, the total number of patterns should be minimized. We propose a technique for generating diagnostic patterns that are inherently compact. Experimental results show up to 73% reduction in the number of diagnostic patterns needed to distinguish all faults. Logic Built-In Self-Test (LBIST) is an alternative testing methodology in which all components needed to test the chip are on the chip itself. This eliminates the need for expensive ATEs and allows at-speed testing of chips. However, hardware overhead is incurred in storing deterministic test patterns on chip, and failing chips are hard to diagnose in this LBIST architecture due to limited observability. We propose a technique to reduce the number of patterns that need to be stored on chip and thus reduce the hardware overhead. We also propose a new LBIST architecture that increases diagnosability in LBIST with minimal hardware overhead. These two techniques overcome the disadvantages of LBIST and can make LBIST a more popular solution for testing chips. Modern designs may contain a large number of small embedded memories. Memory Built-In Self-Test (MBIST) is the conventional technique for testing memories, but it incurs hardware overhead. Using MBIST for small embedded memories is impractical because the hardware overhead would be significantly high. Test generation for such circuits is difficult because the fault effect needs to be propagated through the memory. We propose a new technique for testing circuits with embedded memories. Using an SMT solver, we model the memory at a high level of abstraction with the theory of arrays, while keeping the surrounding logic at the gate level. This effectively converts the test generation problem into a combinational test generation problem and makes test generation easier than with conventional techniques. / Ph. D.
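A sketch of the idea of modeling an embedded memory with the SMT theory of arrays (here via Z3's Python API) while the surrounding logic stays as low-level bit-vector constraints. The tiny circuit and the fault-propagation condition are invented purely for illustration; only the array-modeling idea is taken from the abstract.

```python
from z3 import Array, BitVec, BitVecSort, BitVecVal, Store, Select, Solver, sat

addr  = BitVec('addr', 4)                             # primary inputs driving the memory port
wdata = BitVec('wdata', 8)
mem   = Array('mem', BitVecSort(4), BitVecSort(8))    # 16 x 8-bit memory as an SMT array

# Surrounding logic, abstracted as bit-vector constraints:
# write wdata at addr, read it back, feed it to a toy comparator.
mem1  = Store(mem, addr, wdata)
rdata = Select(mem1, addr)
fault_observed = rdata ^ BitVecVal(0xA5, 8)           # toy condition under which a fault is visible

s = Solver()
s.add(fault_observed != 0)                            # ask for a test pattern that exposes the fault
if s.check() == sat:
    m = s.model()
    print('test pattern: addr =', m[addr], 'wdata =', m[wdata])
```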
33

Improving Branch Coverage in RTL Circuits with Signal Domain Analysis and Restrictive Symbolic Execution

Bagri, Sharad 18 March 2015 (has links)
Considerable research has been directed towards efficient test stimuli generation for Register Transfer Level (RTL) circuits. However, stimuli generation frameworks are still not capable of generating effective stimuli for all circuits. Some of the limiting factors are that 1) it is hard to ascertain whether a branch in the RTL code is reachable, and 2) some hard-to-reach branches require intelligent algorithms to reach them. Since unreachable branches cannot be reached by any test sequence, we propose a method to deduce the unreachability of a branch by examining the possible values which a signal can take in the RTL code, without explicit unrolling of the design. To the best of our knowledge, this method has identified more unreachable branches than any method published in this domain, while being computationally less expensive. Moreover, some branches require very specific values on input signals in specific cycles to reach them. Conventional symbolic execution can generate those values but is computationally expensive. We propose a cycle-by-cycle restrictive symbolic execution that analyzes only a selected subset of program statements to reduce the computational cost. Our proposed method gathers information from an initial execution trace generated by any technique to intelligently decide the specific cycles where the application of this method will be helpful. This method can be combined with simulation-based test stimuli generation methods to reduce the cost of formal verification. With this method, we were able to reach some previously unreached branches in the ITC99 benchmark circuits. / Master of Science
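A minimal sketch of the signal-domain idea: collect the values a signal can ever be assigned, without unrolling the design, and flag a branch as unreachable if its condition requires a value outside that set. The pre-parsed assignment and branch tuples below are a hypothetical stand-in for an RTL front end.

```python
def signal_domains(assignments):
    """assignments: iterable of (signal, constant_value) pairs found in the RTL."""
    domains = {}
    for sig, val in assignments:
        domains.setdefault(sig, set()).add(val)
    return domains

def branch_unreachable(branch, domains):
    """branch: (signal, required_value) for a condition like `if (sig == K)`."""
    sig, required = branch
    return sig in domains and required not in domains[sig]

# Example: `state` is only ever assigned 0, 1 or 2, so `if (state == 5)` is dead.
doms = signal_domains([('state', 0), ('state', 1), ('state', 2)])
print(branch_unreachable(('state', 5), doms))   # True
```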
34

Branch Guided Metrics for Functional and Gate-level Testing

Acharya, Vineeth Vadiraj 31 March 2015 (has links)
With the increasing complexity of modern-day processors and systems-on-a-chip (SoCs), designers invest a lot of time and resources into testing and validating these designs. To reduce the time-to-market and cost, the techniques used to validate these designs have to constantly improve. Since most of the design activity has moved to the register transfer level (RTL), test methodologies at the RTL have been gaining momentum. We present a novel framework for functional test generation at the RTL. A popular software-based metric for measuring the effectiveness of an RTL test suite is branch coverage. But exercising hard-to-reach branches is still a challenge and requires a good understanding of the design semantics. The proposed framework uses static analysis to extract certain semantics of the circuit and uses several data structures to model these semantics. Using these data structures, we assist the branch-guided search to exercise these hard-to-reach branches. Since the correlation between high branch coverage and detecting defects and bugs is not clear, we present a new metric at the RTL that augments RTL branch coverage with state values. Vectors that score higher on the new metric achieve higher branch and state coverage, and can therefore be applied at different levels of abstraction, such as post-silicon validation. Experimental results show that use of the new metric in our test generation framework can achieve a high level of branch and fault coverage for several benchmark circuits, while reducing the length of the vector sequence. This work was supported in part by NSF grant 1016675. / Master of Science
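A sketch of an augmented coverage score in the spirit of the metric described above: a vector sequence is rewarded both for the branches it exercises and for the distinct register-state values it reaches. The trace representation and the weighting are assumptions, not the thesis's actual formulation.

```python
def augmented_score(trace, total_branches, w_branch=0.7, w_state=0.3):
    """trace: list of (branch_id, state_tuple) pairs recorded during simulation."""
    branches = {b for b, _ in trace}
    states   = {s for _, s in trace}
    branch_cov = len(branches) / total_branches
    state_div  = len(states) / max(len(trace), 1)   # normalized state-value diversity
    return w_branch * branch_cov + w_state * state_div

# Two candidate vector sequences; the second reaches more distinct states.
t1 = [(1, (0, 0)), (2, (0, 0)), (2, (0, 0))]
t2 = [(1, (0, 0)), (2, (1, 0)), (3, (1, 1))]
print(augmented_score(t1, total_branches=10), augmented_score(t2, total_branches=10))
```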
35

RTL Functional Test Generation Using Factored Concolic Execution

Pinto, Sonal 21 July 2017 (has links)
This thesis presents a novel concolic testing methodology and CORT, a test generation framework that uses it for high-level functional test generation. The test generation effort is visualized as the systematic unraveling of the control-flow response of the design over multiple (factored) explorations. We begin by transforming the Register Transfer Level (RTL) source for the design into a high-performance C++ compiled functional simulator which is instrumented for branch coverage. An exploration begins by simulating the design with concrete stimuli. Then, we perform an interleaved cycle-by-cycle symbolic evaluation over the concrete execution trace extracted from the Control Flow Graph (CFG) of the design. The purpose of this task is to dynamically discover means to divert the control flow of the system by mutating primary-input-stimulated control statements in this trace. We record the control-flow response as a Test Decision Tree (TDT), a new representation for the test generation effort. Successive explorations begin at system states heuristically selected from a global TDT, onto which each new decision tree resulting from an exploration is stitched. CORT succeeds at constructing functional tests for ITC99 and IWLS-2005 benchmarks that achieve high branch coverage with the fewest input vectors, faster than existing methods. Furthermore, we achieve orders-of-magnitude speedup compared to previous hybrid concrete and symbolic simulation-based techniques. / Master of Science
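A minimal sketch of a Test Decision Tree (TDT) node as the representation described above: each node records the stimulus applied in a cycle and the branches it newly covered, and sub-trees from later explorations are stitched under heuristically chosen nodes. The field names and the restart heuristic are assumptions for illustration, not CORT's actual data structure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class TDTNode:
    stimulus: Dict[str, int]                                 # primary-input values for this cycle
    covered: List[int] = field(default_factory=list)         # branch ids newly covered here
    children: List['TDTNode'] = field(default_factory=list)

    def stitch(self, subtree: 'TDTNode') -> None:
        """Attach the decision tree produced by a new exploration under this state."""
        self.children.append(subtree)

    def best_restart_point(self) -> Tuple[int, 'TDTNode']:
        """Toy heuristic: restart the next exploration from the node with the fewest covered branches."""
        best = (len(self.covered), self)
        for child in self.children:
            best = min(best, child.best_restart_point(), key=lambda t: t[0])
        return best
```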
36

Improving Bio-Inspired Frameworks

Varadarajan, Aravind Krishnan 05 October 2018 (has links)
In this thesis, we provide solutions for two different bio-inspired algorithms. The first is enhancing the performance of bio-inspired test generation for circuits described in RTL Verilog, specifically for branch coverage. We seek to improve upon an existing framework, BEACON, in terms of performance. BEACON is an Ant Colony Optimization (ACO) based test generation framework. Similar to other ACO frameworks, BEACON also has good scope for improving performance using parallel computing. We try to exploit the available parallelism using both multi-core Central Processing Units (CPUs) and Graphics Processing Units (GPUs). Using our new multithreaded approach, we can reduce test generation time by a factor of 25 compared to the original implementation for a wide variety of circuits. We also provide a 2-dimensional factoring method for BEACON to improve the available parallelism and yield some additional speedup. The second bio-inspired algorithm we address is for Deep Neural Networks. With the increasing prevalence of neural nets in artificial intelligence and mission-critical applications such as self-driving cars, questions arise about their reliability and robustness. We have developed a test-generation-based technique and metric to evaluate the robustness of a neural net's outputs based on its sensitivity to its inputs. This is done by generating inputs which the neural net finds difficult to classify but which are relatively apparent to human perception. We measure the degree of difficulty of generating such inputs to calculate our metric. / MS / High-level Hardware Design Languages (HDLs) have allowed designers to implement complicated hardware designs with considerably less effort. Unfortunately, design verification for the same circuits has failed to scale gracefully in terms of time and effort. Not only has it become more difficult for formal methods due to the exponential complexity of increasing path explosion, but concrete test generation frameworks also face new issues such as the increased volume of simulations required. The advent of parallel computing using General Purpose Graphics Processing Units (GPGPUs) has led to improved performance for various applications. We propose to leverage both the multi-core CPU and the GPGPU for RTL test generation. This is achieved by implementing a test generation framework that can utilize the SIMD-type parallelism available in GPGPUs and the task-level parallelism available on CPUs. The speedup achieved is extracted both from the test generation framework itself and from refactoring the hardware model for multi-threaded test generation. For this purpose, we translate the RTL Verilog into a C++ and CUDA compilable program. Experimental results show that considerable speedup can be achieved for test generation without loss of coverage. In recent years, machine learning and artificial intelligence have taken a substantial leap forward with the discovery of Deep Neural Networks (DNNs). Unfortunately, apart from accuracy and FTest numbers, there exist very few metrics to qualify a DNN. This becomes a reliability issue as DNNs are quite frequently used in safety-critical applications. It is difficult to interpret how the parameters of a trained DNN help store the knowledge from the training inputs. Therefore it is also difficult to infer whether a DNN has learned parameters which might cause an output neuron to misfire wrongly, i.e., a bug. An exhaustive search of the input space of the DNN is not only infeasible but also misleading. Thus, in our work, we try to apply test generation techniques to generate new test inputs, based on the existing training and testing sets, to qualify the underlying robustness. Attempts to generate these inputs are guided only by the prediction probability values at the final output layer. We observe that, depending on the amount of perturbation and time needed to generate these inputs, we can differentiate between DNNs of varying quality.
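A sketch of the underlying idea of measuring robustness by how much input perturbation is needed before a classifier's prediction flips, guided only by the output probabilities. The random-search strategy, step size, and toy model below are assumptions for illustration; they are not the thesis's actual generation technique.

```python
import numpy as np

def perturbation_to_flip(model, x, max_steps=200, step=0.01, seed=0):
    """Return the perturbation budget spent before the prediction flips (None if it never does)."""
    rng = np.random.default_rng(seed)
    base = int(np.argmax(model(x)))
    perturbed = x.copy()
    for i in range(1, max_steps + 1):
        # try a random direction; keep it only if it lowers the base class's probability
        candidate = perturbed + step * rng.normal(size=x.shape)
        if model(candidate)[base] < model(perturbed)[base]:
            perturbed = candidate
        if int(np.argmax(model(perturbed))) != base:
            return i * step          # smaller value => less robust around x
    return None                      # no flip within budget => robust around x

# Toy 2-class "model" for demonstration: a linear score pushed through softmax.
def toy_model(x):
    logits = np.array([x.sum(), -x.sum()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

print(perturbation_to_flip(toy_model, np.array([0.2, 0.1])))
```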
37

Classification de menaces d’erreurs par analyse statique, simplification syntaxique et test structurel de programmes / Classification of error threats by static analysis, program slicing and structural testing of programs

Chebaro, Omar 13 December 2011 (has links)
Software validation remains a crucial part of the software development process. Two verification and validation techniques have stood out in recent years: static analysis and dynamic analysis. Their strengths and weaknesses are complementary. In this thesis we present a new, original combination of these techniques that makes the search for runtime errors more accurate and automatic and reduces the number of false alarms; we also prove the correctness of the method. In this combination, static analysis reports the statements that may cause runtime errors as alarms, some of which may be false alarms, and dynamic analysis (test generation) is then used to confirm or reject these alarms. When applied to large programs, test generation may run out of time or memory before confirming certain alarms as real bugs, or before concluding that no execution path can reach the error state of certain alarms and thus rejecting them. To overcome this problem, we propose to reduce the size of the source code by slicing before running test generation. Slicing transforms a program into a simpler program, called a slice, that is equivalent to the original program with respect to a given criterion. Four usages of slicing are studied. The first usage, called all, applies slicing only once, with the set of all alarms detected by the static analysis as the simplification criterion. The disadvantage of this usage is that test generation may run out of time or memory, and the alarms that are easiest to classify are penalized by the analysis of other, more complex alarms. In the second usage, called each, slicing is performed separately with respect to each alarm. However, test generation is then executed for each sliced program, and there is a risk of redundant analysis if some alarms are included in several slices. To overcome these drawbacks, we studied the dependencies between alarms and introduced two advanced usages of slicing, called min and smart, that exploit these dependencies. In the min usage, slicing is performed with respect to a minimal set of subsets of alarms. These subsets are chosen according to the dependencies between alarms, and their union covers the whole set of alarms. With this usage, there are fewer slices than with each, and simpler slices than with all. However, the dynamic analysis of some slices may still run out of time or memory before classifying certain alarms, while the dynamic analysis of a simpler slice would be able to classify them. The smart usage applies the previous usage iteratively, reducing the size of the subsets when necessary: when an alarm cannot be classified by the dynamic analysis of a slice, simpler slices are computed. We prove the correctness of the proposed method. This work is implemented in SANTE, our tool that links the test generation tool PathCrawler and the static analysis platform Frama-C. Experiments have shown, first, that our combination performs better than each technique used independently and, second, that verification becomes faster with the use of slicing. Moreover, simplifying the program by slicing makes the detected errors and the remaining alarms easier to analyze.
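A small sketch of the min idea described above: group mutually dependent alarms so that one slicing criterion covers each group, giving fewer slices than each and simpler slices than all. The dependency relation is modeled here as an undirected graph and grouped by connected components; this is only an illustration, not SANTE's exact construction.

```python
def alarm_groups(alarms, depends):
    """alarms: iterable of alarm ids; depends: set of frozenset({a, b}) dependency pairs."""
    groups, seen = [], set()
    for a in alarms:
        if a in seen:
            continue
        # collect every alarm transitively dependent on `a`
        group, todo = set(), [a]
        while todo:
            cur = todo.pop()
            if cur in group:
                continue
            group.add(cur)
            todo += [b for b in alarms
                     if frozenset({cur, b}) in depends and b not in group]
        groups.append(group)
        seen |= group
    return groups   # one slice is computed per group; their union covers all alarms

deps = {frozenset({1, 2}), frozenset({2, 3})}
print(alarm_groups([1, 2, 3, 4], deps))   # [{1, 2, 3}, {4}]
```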
38

Symbolic string execution

Redelinghuys, Gideon 03 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: Symbolic execution is a well-established technique for automated test generation and for finding errors in complex code. Most of the focus has, however, been on programs that manipulate integers, booleans, and even references in object-oriented programs. Recently researchers have started looking at programs that do a lot of string processing, motivated in part by the popularity of the web and the risk that errors in web servers may lead to security violations. Attempts to extend symbolic execution to the domain of strings mainly fall into one of two camps: automata-based approaches and approaches based on bitvector analysis. Here we investigate these two approaches in a unified setting, namely the symbolic execution framework of Java PathFinder. We describe the implementations of both approaches and then evaluate them to show under what circumstances each approach performs well (or not so well). We also illustrate the usefulness of the symbolic execution of strings by finding errors in real-world examples.
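A sketch of solving the kind of string constraints that symbolic string execution produces, here with Z3's built-in string theory. The thesis compares automata-based and bitvector-based back ends inside Java PathFinder; this snippet only illustrates the constraint-solving step on an invented path condition, not either back end.

```python
from z3 import String, Length, Contains, PrefixOf, Solver, sat

s = String('s')                       # symbolic string input to the program
solver = Solver()
# Path condition collected along one execution path of a hypothetical URL parser:
solver.add(PrefixOf("http://", s))    # branch `s.startsWith("http://")` was taken
solver.add(Contains(s, "?id="))       # branch `s.contains("?id=")` was taken
solver.add(Length(s) <= 20)           # a length check along the same path

if solver.check() == sat:
    print(solver.model()[s])          # a concrete input that drives execution down that path
```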
39

Efficient state space exploration for parallel test generation

Ramasamy Kandasamy, Manimozhian 03 September 2009 (has links)
Automating the generation of test cases for software is an active area of research. Specification-based test generation is an approach in which a formal representation of a method is analyzed to generate valid test cases. Constraint solving and state space exploration are important aspects of specification-based test generation. One problem with specification-based testing is that the size of the state space explodes when the approach is applied to code of practical size. Hence, finding ways to reduce the number of candidates to explore within the state space is important to make this approach practical in industry. Korat is a tool which generates test cases for Java programs based on predicates that validate the inputs to a method. Various ongoing research efforts aim to increase the tool's effectiveness in handling large state spaces; parallelizing Korat and minimizing the exploration of invalid candidates are the active research directions. This report surveys the basic algorithms of Korat, PKorat, and Fast Korat. PKorat is a parallel version of Korat and aims to take advantage of the multi-processor and multi-core systems available. Fast Korat implements four optimizations which reduce the number of candidates explored to generate valid candidates and reduce the amount of time required to explore each candidate. This report also presents execution time results for generating test candidates for binary trees, doubly linked lists, and sorted singly linked lists from their respective predicates.
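A toy sketch of predicate-based (Korat-style) generation for sorted lists: enumerate bounded candidates and keep those that satisfy the validity predicate. Real Korat avoids this brute force by monitoring which fields the predicate reads and pruning the candidate space accordingly; the predicate and domain here are invented for illustration.

```python
from itertools import permutations

def rep_ok(values):
    """Validity predicate: the candidate list must be strictly sorted (no duplicates)."""
    return all(a < b for a, b in zip(values, values[1:]))

def generate(domain, max_size):
    valid = []
    for size in range(max_size + 1):
        for candidate in permutations(domain, size):
            if rep_ok(candidate):
                valid.append(list(candidate))
    return valid

# Bounded-exhaustive test inputs for a method that takes a sorted list:
print(generate(domain=[1, 2, 3], max_size=3))
```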
40

Passive interoperability testing for communication protocols / Le test d'interopérabilité passif pour les protocoles de communication

Chen, Nanxing 24 June 2013 (has links)
In the field of networking, testing of communication protocols is an important activity to validate protocol applications before putting them into service. Generally, the services that a protocol must provide are described in its specification, a standard defined by standards bodies such as ISO (International Organization for Standardization), the IETF (Internet Engineering Task Force), the ITU (International Telecommunication Union), etc. The purpose of testing is to verify that protocol implementations work correctly and deliver the expected services, in order to meet customers' expectations. To achieve this goal, a variety of testing methods can be used. Conformance testing verifies that a product conforms to its specification. Robustness testing checks the behavior of a protocol implementation in the presence of unexpected events or stressful environmental conditions. In this thesis, we focus on interoperability testing, which aims to verify that several network components interact correctly and provide the expected services. The general architecture of interoperability testing involves a system under test (SUT) composed of at least two implementations under test (IUT). The objectives of interoperability testing are both to verify that several implementations (based on protocols designed to work together) are able to interact and that, during their interaction, they provide the services specified in their respective specifications. In general, interoperability testing methods can be classified into two main approaches: active testing and passive testing. Active testing is the most commonly used validation technique; it tests the implementations (IUTs) by applying a series of stimuli and observing the corresponding outputs. However, a fundamental characteristic of active testing is that the tester has the ability to control the IUTs, which means that the tester inevitably disturbs the normal operation of the system under test. For this reason, active testing is not a suitable technique for interoperability testing, which is often carried out in operational networks, where it is difficult to inject arbitrary inputs without affecting the normal services or behavior of the network. In contrast, passive testing is a technique based only on observation: the tester does not need to act on the SUT, for instance by sending stimuli. This allows the test to be carried out without disturbing the normal environment of the system under test. Passive testing has further advantages: for embedded systems to which the tester has no direct access, the test can still be performed by collecting execution traces of the system and then detecting errors or deviations by comparing these traces with the behavior of the system described in its specification. In addition, passive testing makes it possible to monitor a system over a long period and report anomalies at any time.
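A minimal sketch of the passive-testing idea: observed protocol traces are checked against a finite-state model of the specification without injecting any stimulus. The two-party "request/ack" specification below is invented purely for illustration.

```python
SPEC = {                      # state -> {observed message: next state}
    'idle':    {'request': 'waiting'},
    'waiting': {'ack': 'idle', 'retry': 'waiting'},
}

def check_trace(trace, start='idle'):
    """Return (verdict, position): fail at the first observed message the spec forbids."""
    state = start
    for i, msg in enumerate(trace):
        if msg not in SPEC.get(state, {}):
            return 'fail', i          # deviation from the specification observed at step i
        state = SPEC[state][msg]
    return 'pass', len(trace)

print(check_trace(['request', 'retry', 'ack']))   # ('pass', 3)
print(check_trace(['request', 'request']))        # ('fail', 1)
```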
