91

Sufficiency-based Filtering of Invariants for Sequential Equivalence Checking

Hu, Wei 14 February 2011 (has links)
Verification, as opposed to Testing and Post-Silicon Validation, is a critical step in Integrated Circuit (IC) design, answering the question "Are we designing the right function?" before the chips are manufactured. One of the core areas of Verification is Equivalence Checking (EC), which is a special yet independent case of Model Checking (MC). Equivalence Checking aims to prove that two circuits, when fed with the same inputs, produce exactly the same outputs. There are broadly two ways to conduct Equivalence Checking: simulation and Formal Equivalence Checking. Simulation requires one to try out different input combinations and observe whether the two circuits produce the same output. Obviously, since it is not possible to enumerate all combinations of different inputs, completeness cannot be guaranteed. On the other hand, Formal Equivalence Checking can achieve 100% confidence. As the number of gates, and in particular the number of flip-flops, in circuits has grown tremendously in recent years, the problem of Formal Equivalence Checking has become much harder. A recent evaluation of a general-case Formal Equivalence Checking engine [1] shows that about 15% of industrial designs cannot be verified after a typical sequential synthesis flow. As a result, Formal Equivalence Checking has drawn a lot of attention both academically and industrially. For years, Combinational Equivalence Checking (CEC) has been the pervasive framework for Formal Equivalence Checking (FEC) in industry. However, because it can verify circuits only with a 1:1 flip-flop pairing, a pure CEC-based methodology requires a full regression of the verification process, meaning that performing sequential optimizations like retiming or FSM re-encoding becomes somewhat of a bottleneck in the design cycle [2]. Therefore, a more powerful framework, Sequential Equivalence Checking (SEC), has been gradually adopted in industry. In this thesis, we target Sequential Equivalence Checking by finding an efficient yet powerful group of relationships (invariants) among the signals of the two circuits being compared. In order to achieve a high success rate on some extremely hard-to-verify circuits, we are interested in both two-node and multi-node (up to 4 nodes) invariants, and in invariants among both flip-flops and internal signals. For large circuits, there can be too many potential invariants, requiring much time to prove. However, we observed that a large portion of them may not even contribute to equivalence checking. Moreover, equivalence checking can be significantly helped if there exists a method to check whether a subset of the potential invariants would be sufficient (e.g., whether two-node invariants are enough or multi-node invariants are also needed) prior to the verification step. Therefore, we propose two sufficiency-based approaches to identify useful invariants out of the initial potential invariants for SEC. Experimental results show that our approach can either demonstrate insufficiency of the invariants or select a small portion of them to successfully prove the equivalence property. Our approaches are quite case-independent and flexible: they can be applied to circuits produced by different synthesis techniques and combined with other techniques. / Master of Science
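To make the candidate-generation step concrete, here is a minimal sketch of mining two-node implication candidates from simulation traces. The trace format, signal names, and helper function are hypothetical illustrations rather than the thesis's actual tooling; any candidate surviving simulation would still need an inductive proof before it can be used.

```python
# A minimal sketch of mining candidate two-node invariants from simulation
# traces. All names (signals, traces) are hypothetical; real invariant
# generation works on the miter of the two circuits being compared.
def mine_two_node_candidates(traces):
    """traces: list of dicts mapping signal name -> 0/1, one per cycle.
    Returns the implications (a, b), meaning 'a == 1 implies b == 1',
    that were never falsified by simulation."""
    signals = sorted(traces[0])
    candidates = {(a, b) for a in signals for b in signals if a != b}
    for cycle in traces:
        # Drop any implication falsified in this cycle (a = 1 but b = 0).
        candidates = {(a, b) for (a, b) in candidates
                      if not (cycle[a] == 1 and cycle[b] == 0)}
    return candidates

# Tiny example: two cycles of values for three signals.
traces = [{"x": 1, "y": 1, "z": 0}, {"x": 0, "y": 1, "z": 1}]
print(mine_two_node_candidates(traces))  # e.g. ('x', 'y') survives
```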
92

Enhancing SAT-based Formal Verification Methods using Global Learning

Arora, Rajat 25 May 2004 (has links)
With the advances in VLSI and System-On-Chip (SOC) technology, the complexity of hardware systems has increased manifold. Today, 70% of the design cost is spent in verifying these intricate systems. The two most widely used formal methods for design verification are Equivalence Checking and Model Checking. Equivalence Checking requires that the implementation circuit be exactly equivalent to the specification circuit (golden model). In other words, for each possible input pattern, the implementation circuit should yield the same outputs as the specification circuit. Model Checking, on the other hand, checks whether the design holds certain properties, which in turn are indispensable for the proper functionality of the design. The complexity of both Equivalence Checking and Model Checking is exponential in the circuit size. In this thesis, we first propose a novel technique to improve SAT-based Combinational Equivalence Checking (CEC) and Bounded Model Checking (BMC). The idea is to perform a low-cost preprocessing that statically induces global signal relationships into the original CNF formula of the circuit under verification and hence reduces the complexity of the SAT instance. This efficient and effective preprocessing quickly builds up the implication graph for the circuit under verification, yielding a large set of logic implications composed of direct, indirect, and extended backward implications. These two-node implications (spanning time-frame boundaries) are converted into two-literal clauses and added to the original CNF database. The added clauses constrain the search space of the SAT-solver engine and provide correlation among the different variables, which enhances Boolean Constraint Propagation (BCP). Experimental results on large and difficult ISCAS'85, ISCAS'89 (full scan) and ITC'99 (full scan) CEC instances and ISCAS'89 BMC instances show that our approach is independent of the state-of-the-art SAT solver used, and that the added clauses help to achieve more than an order of magnitude speedup over the conventional approach. Also, comparison with Hyper-Resolution [Bacchus 03] suggests that our technique is much more powerful, yielding non-trivial clauses that significantly reduce the complexity of the SAT instance. Secondly, we propose a novel global learning technique that helps to identify highly non-trivial relationships among signals in the circuit netlist, thereby boosting the power of the existing implication engine. We call this new class of implications 'extended forward implications' and show their effectiveness through the additional untestable faults they help to identify. Thirdly, we propose a suite of lemmas and theorems to formalize global learning. We show through implementation that these theorems help to significantly simplify a generic CNF formula (from Formal Verification, Artificial Intelligence, etc.) by identifying the necessary assignments, equivalent signals, complementary signals, and other non-trivial implication relationships among its variables. We further illustrate through experimental results that the CNF formula simplification obtained using our tool surpasses the simplification obtained using other preprocessors. / Master of Science
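A minimal sketch of the clause-addition step described above: each learned implication a → b becomes the two-literal clause (¬a ∨ b) appended to the CNF before the SAT solver runs. The DIMACS-style variable numbering and the example implications are invented for illustration.

```python
# Each learned two-node implication a -> b is recorded as the two-literal
# clause (-a v b) and appended to the CNF database. A real flow derives
# the implications from the circuit's implication graph; these are toys.
def implications_to_clauses(implications):
    """implications: iterable of (a, b) pairs of DIMACS literals,
    meaning 'a implies b'. Returns the two-literal clauses (-a, b)."""
    return [(-a, b) for (a, b) in implications]

cnf = [(1, 2, -3), (-1, 3)]          # original circuit CNF (toy)
learned = [(2, 3), (-1, -2)]         # a -> b pairs, e.g. x2 -> x3
cnf.extend(implications_to_clauses(learned))
print(cnf)  # [(1, 2, -3), (-1, 3), (-2, 3), (1, -2)]
```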
93

Sequential Equivalence Checking with Efficient Filtering Strategies for Inductive Invariants

Nguyen, Huy 24 May 2011 (has links)
Powerful sequential optimization techniques can drastically change the Integrated Circuit (IC) design paradigm. However, aggressive sequential optimization is shunned nowadays because the limited capability of sequential verification tools leaves no efficient way to prove the preservation of equivalence after optimization. Since the number of transistors fitting on a single fixed-size die increases with Moore's law, the problem gets harder over time, and at an exponential rate. It is no surprise that functional verification has become a major bottleneck in the time-to-market of a product; in fact, the literature reports that 70% of design time is spent on making sure the design is bug-free and operating correctly. One of the core verification tasks in achieving high-quality products is equivalence checking. Essentially, equivalence checking ensures that the optimized product preserves the functionality of the unoptimized model. This is important for industry because products are modified constantly to meet different goals, such as low power and high performance. The mainstream approaches to equivalence checking are simulation and formal verification. In the simulation approach, the golden design and the design under verification (DUV) are fed with the same input stimuli, and their outputs are expected to produce identical responses. In case of discrepancy, traces are generated and the DUV undergoes modifications. With the increase in input pins and state elements in designs, exhaustive simulation becomes infeasible; hence the completeness of the approach is not guaranteed, and notions of coverage have to accompany it. Formal verification, on the other hand, incorporates mathematical proofs and guarantees completeness over the search space. However, formal verification has problems of its own: it is usually resource-intensive, and not every design can be verified after the optimization process. That is to say, the golden model and the DUV may be so vastly different in structure that modern checkers give inconclusive results. For these reasons, this thesis focuses on improving the strength and the efficiency of sequential equivalence checking (SEC) using a formal approach. While great strides have been made in verification for combinational circuits, SEC still remains rather rudimentary. Without powerful SEC as a backbone, aggressive sequential synthesis and optimization are often avoided if the optimized design cannot be proved equivalent to the original one. In an attempt to take on the challenges of SEC, we propose two frameworks that successfully determine equivalence for hard-to-verify circuits. The first framework utilizes arbitrary relations between any two nodes within the two sequential circuits in question; the two nodes can reside in the same circuit or across the circuits, and likewise in the same time-frame or across time-frames. The merit of this approach is that it uses the global structure of the circuits to speed up the verification process. The second framework introduces techniques to identify a small subset of powerful multi-node relations (involving more than two nodes), which help to prune a large don't-care search space and result in a successful SEC framework. In contrast with previous approaches, in which an exponential number of multi-node relations are mined and learned, we alleviate the computation cost by selecting far fewer invariants to achieve the desired conclusion.
Although independent, the two frameworks can be used in sequence to complement each other. Experimental results demonstrate that our frameworks can take on many hard-to-verify cases and show a significant speedup over previous approaches. / Master of Science
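A minimal sketch of the inductive check that underlies invariant filtering, written against the Z3 SMT solver's Python API: assume the candidates in one time-frame, unroll one transition, and ask whether any candidate can fail in the next frame. The toy next-state logic and the single candidate are invented for illustration.

```python
# Assume the candidate invariants in frame k, apply one transition, and
# check whether any candidate can be violated in frame k+1. An unsat
# answer means the candidate set is inductive; sat means filtering is
# needed. The transition relation below is a hypothetical toy.
from z3 import Bools, Solver, Implies, And, Or, Not, substitute, unsat

a, b, a1, b1 = Bools("a b a1 b1")
trans = And(a1 == And(a, b), b1 == Or(a, b))   # toy next-state logic
candidates = [Implies(a, b)]                   # candidate invariant: a -> b

s = Solver()
s.add(trans)
s.add(*candidates)                             # assume candidates in frame k
# rewrite each candidate over the next-state variables (frame k+1)
next_cands = [substitute(inv, (a, a1), (b, b1)) for inv in candidates]
s.add(Not(And(next_cands)))                    # can any candidate fail next?
print("inductive" if s.check() == unsat else "filter this candidate out")
```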
94

Strategies for Scalable Symbolic Execution-based Test Generation

Krishnamoorthy, Saparya 02 August 2010 (has links)
With the advent of advanced program analysis and constraint solving techniques, several test generation tools use variants of symbolic execution. Symbolic techniques have been shown to be very effective in path-based test generation; however, they fail to scale to large programs due to the exponential number of paths to be explored. In this thesis, we focus on tackling this path explosion problem and propose search strategies to achieve quick branch coverage under symbolic execution, while exploring only a fraction of the paths in the program. We present a reachability-guided strategy that makes use of the reachability graph of the program to explore unvisited portions of the program, and a conflict-driven backtracking strategy that utilizes conflict analysis to perform non-chronological backtracking. We also propose error-directed search strategies that aim to catch bugs in the program faster by targeting the parts of the program where bugs are likely to be found or that are hard to reach. We present experimental evidence that these strategies can significantly reduce the search space and improve the speed of test generation for programs. / Master of Science
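A minimal, self-contained sketch of a coverage-guided exploration strategy in the spirit of the reachability-guided approach: branches whose targets are not yet covered are expanded first. The toy control-flow graph is invented, and a real engine would fork symbolic solver states rather than walk plain nodes.

```python
# Worklist-based exploration that prioritizes branches leading to
# uncovered blocks, driving up branch coverage while visiting few paths.
import heapq

cfg = {"entry": ["b1", "b2"], "b1": ["b3"], "b2": ["b3", "b4"],
       "b3": [], "b4": []}                      # toy control-flow graph

def explore(cfg, start="entry"):
    covered, order = set(), 0
    frontier = [(0, 0, start)]                  # (priority, tiebreak, node)
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if node in covered:
            continue
        covered.add(node)
        for succ in cfg[node]:
            order += 1
            prio = 0 if succ not in covered else 1  # prefer uncovered targets
            heapq.heappush(frontier, (prio, order, succ))
    return covered

print(explore(cfg))  # all five blocks reached
```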
95

Design Validation of RTL Circuits using Binary Particle Swarm Optimization and Symbolic Execution

Puri, Prateek 05 August 2015 (has links)
Over the last two decades, chip design has been conducted at the register transfer level (RTL) using Hardware Description Languages (HDLs) such as VHDL and Verilog. Modeling at the behavioral level not only allows for better representation and understanding of the design, but also allows for encapsulation of the sub-modules, thus increasing productivity. Despite these benefits, validating an RTL design is not necessarily easier. Today, design validation is considered one of the most time- and resource-consuming aspects of hardware design. The costs associated with late detection of bugs can be enormous. Together with stringent time-to-market factors, the need to guarantee the correct functionality of the design is more critical than ever. The work done in this thesis tackles the problem of RTL design validation and presents new frameworks for functional test generation. We use branch coverage as our metric to evaluate the quality of the generated test stimuli. The initial effort for test generation utilized simulation-based techniques because of their scalability with design size and ease of use. However, simulation-based methods work on input spaces rather than the DUT's state space and often fail to traverse very narrow search paths in large input spaces. To counter this problem and enhance the ability of the test generation framework, in the subsequent work in this thesis certain design semantics are statically extracted and recurrence relationships between different variables are mined. Information such as relations among variables and loops can be extremely valuable from a test generation point of view. The simulation-based method is hybridized with a Z3-based symbolic backward execution engine, with feedback among the different stages. The hybridized method performs loop abstraction and is able to traverse narrow design paths without performing costly circuit analysis or explicit loop unrolling. Structurally and functionally unreachable branches are also identified during the process of test generation. Experimental results show that the proposed techniques are able to achieve high branch coverage on several ITC'99 benchmark circuits and their modified variants, with significant speedup and reduction in sequence length. / Master of Science
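A minimal sketch of a binary particle swarm optimization loop of the kind used for stimulus generation: particles are candidate input vectors, velocities are updated from personal and global bests, and bits flip with probability given by a sigmoid of the velocity. The toy fitness function (matching a hidden pattern) merely stands in for a branch-coverage measure; all constants are illustrative.

```python
# Binary PSO: each particle is a candidate input vector; the sigmoid
# transfer function turns real-valued velocities into bit-flip odds.
import math, random

N_BITS, N_PART, ITERS = 16, 8, 50
target = [random.randint(0, 1) for _ in range(N_BITS)]   # hypothetical
fitness = lambda x: sum(a == b for a, b in zip(x, target))

def bpso():
    X = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(N_PART)]
    V = [[0.0] * N_BITS for _ in range(N_PART)]
    pbest = [x[:] for x in X]
    gbest = max(X, key=fitness)[:]
    for _ in range(ITERS):
        for p in range(N_PART):
            for i in range(N_BITS):
                r1, r2 = random.random(), random.random()
                V[p][i] = (0.7 * V[p][i]
                           + 1.4 * r1 * (pbest[p][i] - X[p][i])
                           + 1.4 * r2 * (gbest[i] - X[p][i]))
                # sigmoid transfer: flip the bit stochastically
                X[p][i] = 1 if random.random() < 1 / (1 + math.exp(-V[p][i])) else 0
            if fitness(X[p]) > fitness(pbest[p]):
                pbest[p] = X[p][:]
            if fitness(X[p]) > fitness(gbest):
                gbest = X[p][:]
    return gbest

print(fitness(bpso()), "of", N_BITS, "bits matched")
```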
96

Exploits in Concurrency for Boolean Satisfiability

Sohanghpurwala, Ali Asgar Ali Akbar 14 December 2018 (has links)
Boolean Satisfiability (SAT) is a problem that holds great theoretical significance along with effective formulations that benefit many real-world applications. While the general problem is NP-complete, advanced solver algorithms and heuristics allow for fast solutions to many large industrial problems. In addition to SAT, many applications rely on generalizations of Satisfiability such as MaxSAT and Satisfiability Modulo Theories (SMT). Much of the advancement in SAT solver performance has been in the realm of improved sequential solvers with advanced conflict resolution, learning mechanisms, and sophisticated heuristics. There have been some successful demonstrations of massively parallel and hardware-accelerated solvers for SAT, but these have failed to find their way into mainstream usage. This document first presents previous work in hardware acceleration of Satisfiability, followed by an analysis of why these attempts failed to gain widespread acceptance. It then demonstrates an alternative, hardware-centric approach, based on distributed Stochastic Local Search (SLS), that is better suited to efficient hardware implementation. Next, a parallel SLS/CDCL hybrid approach is proposed that is suitable for distributed search with minimal communication overhead while maintaining completeness. Finally, the efficacy and flexibility of distributed local search are considered with an adaptation to Weighted Partial MaxSAT (WPMS) and a focused case study on converted Probabilistic Inference instances. / Ph. D. / The Boolean Satisfiability (SAT) problem is an important decision problem that asks whether there exists a solution that satisfies all given constraints over a set of variables that can assume values of either 0 or 1. Many real-world decision problems can be translated into SAT, and there exist efficient sequential solvers that can quickly resolve many such instances. Less progress has been made in efficiently scaling SAT solvers to modern multi-core systems and massively parallel hardware accelerators such as GPUs and Field Programmable Gate Arrays (FPGAs). This thesis explores different approaches to solving SAT-based decision and optimization problems with the goal of increasing concurrency.
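For context, a minimal WalkSAT-style sketch of the stochastic local search family that the hardware-centric approach builds on; the three-clause instance is invented, and a real SLS solver adds restarts and far more engineering.

```python
# WalkSAT-style local search: repeatedly pick an unsatisfied clause and
# flip one of its variables, either at random (noise) or greedily (the
# flip that breaks the fewest other clauses). Incomplete but cheap.
import random

def walksat(clauses, n_vars, flips=10000, noise=0.5):
    assign = {v: random.choice([False, True]) for v in range(1, n_vars + 1)}
    lit_true = lambda l: assign[abs(l)] == (l > 0)
    for _ in range(flips):
        unsat = [c for c in clauses if not any(lit_true(l) for l in c)]
        if not unsat:
            return assign                     # all clauses satisfied
        clause = random.choice(unsat)
        if random.random() < noise:
            var = abs(random.choice(clause))  # random walk step
        else:
            def breaks(v):                    # clauses broken by flipping v
                assign[v] = not assign[v]
                broken = sum(not any(lit_true(l) for l in c) for c in clauses)
                assign[v] = not assign[v]
                return broken
            var = min((abs(l) for l in clause), key=breaks)
        assign[var] = not assign[var]
    return None                               # give up (incomplete search)

print(walksat([(1, 2), (-1, 3), (-2, -3)], 3))
```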
97

An SMT-based framework for the formal analysis of Switched Multi-Domain Kirchhoff Networks

Sessa, Mirko 28 October 2019 (has links)
Many critical systems are based on the combination of components from different physical domains (e.g. mechanical, electrical, hydraulic), and are mathematically modeled as Switched Multi-Domain Kirchhoff Networks (SMDKN). In this thesis, we tackle a major obstacle to the formal verification of SMDKN, namely devising a global model amenable to verification in the form of a Hybrid Automaton. This requires combining the local dynamics of the components, expressed as Differential Algebraic Equations, according to Kirchhoff's laws, depending on the (exponentially many) operation modes of the network. We propose an automated SMT-based method to analyze networks from multiple physical domains, detecting which modes induce invalid (i.e. inconsistent) constraints, and to produce a Hybrid Automaton model that accurately describes, in terms of Ordinary Differential Equations, the system evolution in the valid modes, also capturing the possible non-deterministic behaviors. The experimental evaluation demonstrates that the proposed approach allows several complex multi-domain systems to be formally analyzed and model checked against various system requirements.
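A minimal sketch of the per-mode consistency check, using the Z3 SMT solver's Python API on an invented one-switch circuit: modes whose component equations plus Kirchhoff's laws are unsatisfiable are flagged as invalid.

```python
# For each switch configuration, conjoin the component equations with
# Kirchhoff's laws and ask an SMT solver whether the mode is consistent.
# The toy circuit (one ideal source, one switch to ground) is invented.
from z3 import Real, Solver, And, sat

v, i_src, i_sw = Real("v"), Real("i_src"), Real("i_sw")

def mode_constraints(switch_closed):
    eqs = [v == 5,                 # ideal source pins the node at 5 V
           i_src + i_sw == 0]      # Kirchhoff current law at the node
    # closed switch shorts the node to ground; open switch carries no current
    eqs.append(v == 0 if switch_closed else i_sw == 0)
    return And(eqs)

for closed in (False, True):
    s = Solver()
    s.add(mode_constraints(closed))
    print("switch closed:" if closed else "switch open: ",
          "valid mode" if s.check() == sat else "invalid (inconsistent)")
```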
98

A Framework for Autonomous Generation of Strategies in Satisfiability Modulo Theories / Un cadre pour la génération autonome de stratégies dans la satisfiabilité modulo des théories

Galvez Ramirez, Nicolas 19 December 2018 (has links)
The Strategy Challenge in Satisfiability Modulo Theories (SMT) calls for building theoretical and practical tools that allow users to exert strategic control over the core heuristic aspects of high-performance SMT solvers, while guaranteeing their performance. In this work, we focus on the Z3 theorem prover, one of the most efficient SMT solvers according to the SMT Competition (SMT-COMP). In SMT solvers, the definition of a strategy relies on a set of components and parameters that can be scheduled and configured in order to guide the search for a proof of (un)satisfiability of a given instance. In this thesis, we address the Strategy Challenge in SMT by defining a framework for the autonomous generation of strategies in Z3, i.e., a practical system to generate SMT strategies automatically without the use of expert knowledge. This framework applies an evolutionary approach (genetic programming) built on a rule-based system: strategy modifications are formalised as rewriting rules, with the evolutionary algorithms acting as engines to apply them. This intermediate layer allows any algorithm or operator to be applied without structural modification, in order to introduce new information into strategies. Validation is done through experiments on classic benchmarks of the SMT-COMP competition.
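Concretely, a Z3 strategy is a tree of tactic combinators. The sketch below, using Z3's Python API, composes such a strategy by hand; the particular combination is only illustrative, whereas the framework described above evolves compositions like it automatically.

```python
# A Z3 "strategy" as a composition of tactics: try a cheap rewriting
# pipeline with a time budget first, then fall back to the generic
# solver. The chosen tactics and timeout are illustrative.
from z3 import Int, Then, OrElse, TryFor, Tactic

strategy = OrElse(TryFor(Then('simplify', 'solve-eqs', 'smt'), 1000),
                  Tactic('smt'))

x, y = Int('x'), Int('y')
s = strategy.solver()                 # a solver driven by this strategy
s.add(x + y == 10, x - y == 4)
print(s.check(), s.model())           # sat, with x = 7, y = 3
```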
99

Circumscriptive reasoning

Halland, Kenneth John 08 1900 (has links)
We show how the non-monotonic nature of common-sense reasoning can be formalised by circumscription. Various forms of circumscription are discussed. A new form of circumscription, namely naive circumscription, is introduced in order to facilitate the comparison of the various forms. Finally, some issues connected with the automation of circumscriptive reasoning are examined. / Computing / M. Sc. (Computer Science)
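For reference, the standard second-order definition of the circumscription of a predicate P in a theory T, one of the forms the dissertation compares; the notation below is the usual one from McCarthy rather than the dissertation's own.

```latex
\[
  \mathrm{Circ}(T;P) \;=\; T(P) \,\wedge\,
  \forall P'\,\bigl[\bigl(T(P') \wedge (P' \le P)\bigr) \rightarrow (P \le P')\bigr]
\]
% where P' <= P abbreviates \forall \vec{x}\,(P'(\vec{x}) \rightarrow P(\vec{x})),
% i.e. no predicate P' with a strictly smaller extension than P also
% satisfies the theory: P is extension-minimal among models of T.
```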
100

Satisfiability and Optimization in Periodic Traffic Flow Problems / Aussagenlogische Erfüllbarkeit und Optimierung in periodischen Verkehrsflussproblemen

Großmann, Peter 08 November 2016 (has links) (PDF)
Automatically calculating periodic timetables in public railway transport systems is an NP-complete problem, namely the Periodic Event Scheduling Problem (PESP). The original model is restricted to basic periodic timetabling. Extending the model with decisional transport networks with flows opens new possibilities in the timetabling and planning process. This flexibility results in a generic model extension of PESP that can be applied to subsets of the timetabling process. The successful utilization of this approach is presented for distinct chain paths, duplicated chain paths, and non-connected flow graphs, which represent the integration of routing and timetabling, the planning of periodic rail freight train paths, and track allocation, respectively. Furthermore, the encoding of this generic model into a binary propositional formula is introduced, and the use of techniques such as SAT solving and MaxSAT to solve and optimize the corresponding instances is presented. Computational results for real-world scenarios suggest the practical impact and give promising perspectives for further scientific research.
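As a sketch of what a PESP instance looks like when handed to a solver, the toy example below uses integer arithmetic through the Z3 SMT solver's Python API; the dissertation itself encodes the constraints into a binary propositional formula for SAT/MaxSAT solving, and the three-event instance here is invented.

```python
# A tiny PESP instance: event times t_i in [0, T) with periodic arc
# constraints l <= (t_j - t_i) mod T <= u, solved here via SMT integer
# arithmetic rather than the propositional encoding used in the thesis.
from z3 import Ints, Solver, sat

T = 60                                     # period in minutes
t1, t2, t3 = Ints("t1 t2 t3")
s = Solver()
for t in (t1, t2, t3):
    s.add(0 <= t, t < T)
# periodic constraints: lower <= (t_j - t_i) mod T <= upper
for (ti, tj, lo, hi) in [(t1, t2, 5, 10), (t2, t3, 5, 10), (t3, t1, 45, 55)]:
    gap = (tj - ti) % T
    s.add(lo <= gap, gap <= hi)
if s.check() == sat:
    print(s.model())                       # one feasible periodic timetable
```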
