11

SAT Encodings of Finite CSPs

Nguyen, Van-Hau 30 March 2015 (has links) (PDF)
Boolean satisfiability (SAT) is the problem of determining whether there exists an assignment of truth values to the Boolean variables such that a given Boolean formula evaluates to true. SAT was the first problem shown to be NP-complete. Only two decades ago SAT was regarded mainly as of theoretical interest; nowadays the picture is very different. SAT solving has matured and is a successful approach for tackling a large number of applications, ranging from artificial intelligence to industrial hardware design and verification. SAT solving consists of encodings and solvers. To benefit from the tremendous advances in the development of solvers, one must first encode the original problem as a SAT instance. These encodings should not only be easy to generate, but should also be processed efficiently by SAT solvers. Furthermore, an increasing number of practical applications in computer science can be expressed as constraint satisfaction problems (CSPs). However, encoding a CSP into SAT is currently regarded as more of an art than a science, and choosing an appropriate encoding is considered as important as choosing an algorithm. Moreover, it is much easier and more efficient to exploit highly optimized state-of-the-art SAT solvers than to develop specialized tools from scratch. Hence, finding appropriate SAT encodings of CSPs is one of the most fascinating challenges in solving problems by SAT. This thesis studies SAT encodings of CSPs and aims at: 1) conducting a comprehensive study of SAT encodings of CSPs by separately investigating encodings of CSP domains and of constraints; 2) proposing new SAT encodings of CSP domains; 3) proposing a new SAT encoding of the at-most-one constraint, which is essential for encoding CSP variables; 4) introducing a redundant encoding and a hybrid encoding that aim to combine the benefits of two efficient and common SAT encodings (the sparse and order encodings) by means of channeling constraints (a term borrowed from Constraint Programming); and 5) deriving guidelines on how to choose an appropriate SAT encoding so that the many efficient SAT solvers available can be exploited to solve CSPs efficiently and effectively. Experiments show that the proposed encodings and guidelines improve on the state-of-the-art SAT encodings of CSPs.
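For readers unfamiliar with the encodings named in this abstract, the following sketch (an editor's illustration, not code from the thesis; the variable numbering and helper names are assumptions) shows in Python how a CSP variable with a finite domain is represented under the sparse encoding with the standard pairwise at-most-one constraint, and under the order encoding, emitting DIMACS-style integer clauses:

    from itertools import combinations

    def sparse_encoding(domain_size, fresh):
        # Sparse (direct) encoding: one Boolean x_v per domain value v,
        # true iff the CSP variable takes value v.
        xs = [fresh() for _ in range(domain_size)]
        clauses = [list(xs)]                                   # at-least-one value
        clauses += [[-a, -b] for a, b in combinations(xs, 2)]  # pairwise at-most-one: O(d^2) clauses
        return xs, clauses

    def order_encoding(domain_size, fresh):
        # Order encoding: y_v is true iff the CSP variable is <= v, for
        # v = 1 .. d-1; only the ordering clauses y_v -> y_{v+1} are needed.
        ys = [fresh() for _ in range(domain_size - 1)]
        clauses = [[-ys[v], ys[v + 1]] for v in range(len(ys) - 1)]
        return ys, clauses

    def make_fresh():
        counter = [0]
        def fresh():
            counter[0] += 1
            return counter[0]
        return fresh

    fresh = make_fresh()
    print(sparse_encoding(4, fresh))   # 4 value literals and 1 + 6 clauses
    print(order_encoding(4, fresh))    # 3 order literals and 2 clauses

A channeling constraint in this setting links the two representations, e.g. x_v holds exactly when y_v holds but y_{v-1} does not, which is the mechanism the redundant and hybrid encodings exploit.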
12

SAT Encodings of Finite CSPs

Nguyen, Van-Hau 27 February 2015 (has links)
Boolean satisfiability (SAT) is the problem of determining whether there exists an assignment of truth values to the Boolean variables such that a given Boolean formula evaluates to true. SAT was the first problem shown to be NP-complete. Only two decades ago SAT was regarded mainly as of theoretical interest; nowadays the picture is very different. SAT solving has matured and is a successful approach for tackling a large number of applications, ranging from artificial intelligence to industrial hardware design and verification. SAT solving consists of encodings and solvers. To benefit from the tremendous advances in the development of solvers, one must first encode the original problem as a SAT instance. These encodings should not only be easy to generate, but should also be processed efficiently by SAT solvers. Furthermore, an increasing number of practical applications in computer science can be expressed as constraint satisfaction problems (CSPs). However, encoding a CSP into SAT is currently regarded as more of an art than a science, and choosing an appropriate encoding is considered as important as choosing an algorithm. Moreover, it is much easier and more efficient to exploit highly optimized state-of-the-art SAT solvers than to develop specialized tools from scratch. Hence, finding appropriate SAT encodings of CSPs is one of the most fascinating challenges in solving problems by SAT. This thesis studies SAT encodings of CSPs and aims at: 1) conducting a comprehensive study of SAT encodings of CSPs by separately investigating encodings of CSP domains and of constraints; 2) proposing new SAT encodings of CSP domains; 3) proposing a new SAT encoding of the at-most-one constraint, which is essential for encoding CSP variables; 4) introducing a redundant encoding and a hybrid encoding that aim to combine the benefits of two efficient and common SAT encodings (the sparse and order encodings) by means of channeling constraints (a term borrowed from Constraint Programming); and 5) deriving guidelines on how to choose an appropriate SAT encoding so that the many efficient SAT solvers available can be exploited to solve CSPs efficiently and effectively. Experiments show that the proposed encodings and guidelines improve on the state-of-the-art SAT encodings of CSPs.
13

Improvements to Field-Programmable Gate Array Design Efficiency using Logic Synthesis

Ling, Andrew Chaang 18 February 2010 (has links)
As Field-Programmable Gate Array (FPGA) capacity can now support several processors on a single device, the scalability of FPGA design tools and methods has emerged as a major obstacle to the wider use of FPGAs. For example, logic synthesis, which has traditionally been the fastest step in the FPGA Computer-Aided Design (CAD) flow, now takes several hours to complete in a typical FPGA compile. In this work, we address this problem by focusing on two areas. First, we revisit FPGA logic synthesis and attempt to improve its scalability. Specifically, we look at a binary decision diagram (BDD) based logic synthesis flow, referred to as FBDD, whose runtime we improve several-fold with only a marginal impact on the resulting circuit area. We do so by speeding up the classical cut generation problem by an order of magnitude, which enables its application directly at the logic synthesis level. Following this, we introduce a guided partitioning technique based on a fast global budgeting formulation, which lets us optimize individual “pockets” within the circuit without degrading overall circuit performance. Using partitioning, we can significantly reduce the solution space of the logic synthesis problem and, furthermore, open up the possibility of parallelizing the logic synthesis step. The second area we look at is Engineering Change Orders (ECOs). ECOs are incremental modifications to a design late in the design flow. They are beneficial because they are minimally disruptive to the existing circuit, preserving much of the engineering effort previously invested in the design. In a design flow where most steps are fully automated, ECOs remain largely a manual process. This can tie up a designer for weeks and lead to missed project deadlines, which is very detrimental for products whose life cycle may span only a few months. As a solution, we show how existing logic synthesis techniques can be leveraged to automatically modify a circuit in a minimally disruptive manner, significantly reducing the turnaround time when applying ECOs.
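The "classical cut generation problem" mentioned in this abstract is, in its textbook form, the enumeration of k-feasible cuts over a netlist in topological order. The sketch below is an editor's illustration of that baseline formulation in Python, not the FBDD algorithm itself:

    def enumerate_cuts(fanins, k):
        # k-feasible cut enumeration: each node's cuts are the trivial cut
        # {node} plus every union of one cut per fanin, pruned to at most
        # k leaves. `fanins` maps node -> list of fanin nodes and is given
        # in topological order (primary inputs have an empty list).
        cuts = {}
        for node, ins in fanins.items():
            node_cuts = {frozenset([node])}
            if ins:
                merged = {frozenset()}
                for i in ins:
                    merged = {a | b for a in merged for b in cuts[i]}
                node_cuts |= {c for c in merged if len(c) <= k}
            cuts[node] = node_cuts
        return cuts

    # Tiny example: g = AND(a, b), h = AND(g, c)
    netlist = {"a": [], "b": [], "c": [], "g": ["a", "b"], "h": ["g", "c"]}
    print(enumerate_cuts(netlist, k=3)["h"])
    # three cuts of h: {h}, {g, c}, {a, b, c}

The cross-product merge is the expensive step, which is why speeding up cut generation matters for applying it at the logic synthesis level.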
15

Hardware Acceleration of Electronic Design Automation Algorithms

Gulati, Kanupriya 2009 December 1900 (has links)
With the advances in very large scale integration (VLSI) technology, hardware is going parallel. Software, which was traditionally designed to execute on single-core microprocessors, now faces the tough challenge of taking advantage of this parallelism made available by the scaling of hardware. The work presented in this dissertation studies the acceleration of electronic design automation (EDA) software on several hardware platforms such as custom integrated circuits (ICs), field-programmable gate arrays (FPGAs) and graphics processors. This dissertation concentrates on a subset of EDA algorithms which are heavily used in the VLSI design flow and which also have varying degrees of inherent parallelism. In particular, Boolean satisfiability, Monte Carlo based statistical static timing analysis, circuit simulation, fault simulation and fault table generation are explored. The architectural and performance tradeoffs of implementing the above applications on these alternative platforms (in comparison to their implementation on a single-core microprocessor) are studied. In addition, this dissertation presents an automated approach to accelerate uniprocessor code using a graphics processing unit (GPU). The key idea is to partition the software application into kernels in an automated fashion, such that multiple instances of these kernels, when executed in parallel on the GPU, can maximally benefit from the GPU's hardware resources. The work presented in this dissertation demonstrates that several EDA algorithms can be successfully rearchitected to maximally harness their performance on alternative platforms such as custom-designed ICs, FPGAs and graphics processors, obtaining speedups of up to 800X. The approaches in this dissertation collectively aim to enable the computer-aided design (CAD) community to accelerate EDA algorithms on arbitrary hardware platforms.
16

Implementation of a Web-Based Simulator for SAT Solving Algorithms

Χαρατσάρης, Δημήτριος 08 January 2013 (has links)
This diploma thesis deals with algorithms for solving the Boolean satisfiability (SAT) problem. The work was carried out in the Wire Communications Laboratory of the Department of Electrical and Computer Engineering of the School of Engineering of the University of Patras. Its aim is to build a simulator of these algorithms that can be accessed by anyone over the Internet. The thesis first gives an introduction to Artificial Intelligence and, more specifically, to Propositional Logic, together with the background needed to understand the problem and its solution techniques. The simulator itself was implemented in Java.
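The simulator described above was written in Java; purely as a rough illustration of the kind of algorithm it visualizes (an editor's sketch, not code from the dissertation), a minimal DPLL-style SAT procedure can be written in a few lines of Python:

    def dpll(clauses, assignment=None):
        # Clauses are lists of non-zero integers in DIMACS style: literal v
        # means variable v is true, -v means it is false. Returns a set of
        # satisfying literals, or None if the formula is unsatisfiable.
        assignment = set() if assignment is None else assignment
        simplified = []
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                                  # clause already satisfied
            reduced = [lit for lit in clause if -lit not in assignment]
            if not reduced:
                return None                               # conflict: clause falsified
            simplified.append(reduced)
        if not simplified:
            return assignment                             # every clause satisfied
        for clause in simplified:                         # unit propagation
            if len(clause) == 1:
                return dpll(simplified, assignment | {clause[0]})
        lit = simplified[0][0]                            # branch on a literal
        return (dpll(simplified, assignment | {lit})
                or dpll(simplified, assignment | {-lit}))

    print(dpll([[1, 2], [-1, 2], [-2, 3]]))   # a satisfying assignment, e.g. {1, 2, 3}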
17

FPGA Based Satisfiability Checking

Subramanian, Rishi Bharadwaj 15 June 2020 (has links)
No description available.
18

Synthesis of Neural Networks using SAT Solvers

Warpe, Ludvig, Johnson Palm, August January 2023 (has links)
Artificial neural networks (ANNs) have found extensive use in solving real-world problems in recent years, their exceptional information-processing capability being the main advantage. Facing increasingly complex problems, there is a need to improve this information processing. In this thesis, we explore new ways of synthesizing ANNs by reducing the synthesis problem to the Boolean satisfiability (SAT) problem, that is, the problem of determining whether there exists an assignment of the Boolean variables in a propositional formula for which the formula evaluates to true. We derive a general formula in conjunctive normal form (CNF) representing the synthesis of a neural network. Given randomly generated datasets, we were able to construct CNF formulas whose satisfying assignments encode neural networks consistent with those datasets. These formulas were run through an off-the-shelf SAT solver, and the resulting models simulated the synthesis of neural networks consistent with the datasets. The experiments conducted in this thesis show that our method can produce feed-forward neural networks of varying sizes consistent with randomly generated datasets of binary strings.
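To make the reduction idea concrete, here is a deliberately tiny sketch (the editor's illustration, not the thesis's actual CNF construction): the sign weights of a 2-input binarized neuron become SAT variables, each training sample contributes clauses forcing the neuron's output to match its label, and any satisfying assignment is a set of weights consistent with the dataset:

    from itertools import product

    def neuron_cnf(dataset):
        # Weights w1, w2 are SAT variables 1 and 2 (true = +1, false = -1).
        # The neuron fires iff at least one input agrees with the sign of its
        # weight. For each sample ((x1, x2), y) emit clauses forcing output y.
        clauses = []
        for (x1, x2), y in dataset:
            a1 = 1 if x1 else -1            # literal: "input 1 agrees with w1"
            a2 = 2 if x2 else -2
            if y:
                clauses.append([a1, a2])    # must fire: some agreement
            else:
                clauses += [[-a1], [-a2]]   # must stay silent: no agreement
        return clauses

    def brute_force_sat(clauses, num_vars=2):
        # Stand-in for an off-the-shelf SAT solver on this toy instance.
        for bits in product([False, True], repeat=num_vars):
            if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
                return bits
        return None

    data = [((1, 1), 1), ((0, 0), 0), ((1, 0), 1)]
    print(brute_force_sat(neuron_cnf(data)))   # (True, True): consistent weights exist

Replacing the brute-force check with an industrial SAT solver and generalizing the per-neuron constraints to layered networks is, roughly speaking, the role played by the general CNF formula described in the abstract.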
19

FPGA Based Complete SAT Solver

Kannan, Sai Surya January 2022 (has links)
No description available.
20

Sufficiency-based Filtering of Invariants for Sequential Equivalence Checking

Hu, Wei 14 February 2011 (has links)
Verification, as opposed to testing and post-silicon validation, is a critical step in Integrated Circuit (IC) design, answering the question "Are we designing the right function?" before the chips are manufactured. One of the core areas of verification is Equivalence Checking (EC), which is a special yet independent case of Model Checking (MC). Equivalence checking aims to prove that two circuits, when fed the same inputs, produce exactly the same outputs. There are broadly two ways to conduct equivalence checking: simulation and formal equivalence checking. Simulation requires one to try different input combinations and observe whether the two circuits produce the same output. Since it is not possible to enumerate all input combinations, completeness cannot be guaranteed. Formal equivalence checking, on the other hand, can achieve 100% confidence. As the number of gates and, in particular, the number of flip-flops in circuits has grown tremendously in recent years, the problem of formal equivalence checking has become much harder: a recent evaluation of a general-case formal equivalence checking engine [1] shows that about 15% of industrial designs cannot be verified after a typical sequential synthesis flow. As a result, formal equivalence checking has drawn a lot of attention both academically and industrially. For years, Combinational Equivalence Checking (CEC) has been the pervasive framework for Formal Equivalence Checking (FEC) in industry. However, because it can verify circuits only with a 1:1 flip-flop pairing, a pure CEC-based methodology requires a full regression of the verification process, so sequential optimizations such as retiming or FSM re-encoding become a bottleneck in the design cycle [2]. Therefore, a more powerful framework, Sequential Equivalence Checking (SEC), has gradually been adopted in industry. In this thesis, we target Sequential Equivalence Checking by finding efficient yet powerful groups of relationships (invariants) among the signals of the two circuits being compared. In order to achieve a high success rate on some extremely hard-to-verify circuits, we are interested in both two-node and multi-node (up to 4 nodes) invariants, and in invariants among both flip-flops and internal signals. For large circuits, there can be too many potential invariants, requiring much time to prove. However, we observed that a large portion of them may not even contribute to equivalence checking. Moreover, equivalence checking can be helped significantly if there exists a method to check whether a subset of potential invariants is sufficient (e.g., whether two-node invariants are enough or multi-node invariants are also needed) prior to the verification step. Therefore, we propose two sufficiency-based approaches to identify useful invariants out of the initial potential invariants for SEC. Experimental results show that our approach can either demonstrate the insufficiency of the invariants or select a small portion of them that suffices to prove the equivalence property. Our approaches are largely case-independent and flexible: they can be applied to circuits produced by different synthesis techniques and combined with other techniques. / Master of Science
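As background for the equivalence-checking discussion in this abstract (and not as a representation of the thesis's SEC method), a combinational miter check can be sketched in a few lines of Python; in practice the XOR of the two outputs is handed to a SAT solver rather than enumerated:

    from itertools import product

    def miter_check(circuit_a, circuit_b, num_inputs):
        # Brute-force combinational equivalence check: feed every input
        # pattern to both circuits and compare the outputs. Returns
        # (True, None) if equivalent, else (False, counterexample).
        for inputs in product([0, 1], repeat=num_inputs):
            if circuit_a(inputs) != circuit_b(inputs):
                return False, inputs
        return True, None

    # Two implementations of a AND (b OR c); equivalent by distributivity.
    spec = lambda v: v[0] & (v[1] | v[2])
    impl = lambda v: (v[0] & v[1]) | (v[0] & v[2])
    print(miter_check(spec, impl, 3))   # (True, None)

Sequential equivalence checking additionally has to reason about state, which is where candidate invariants among flip-flops and internal signals, such as the two-node and multi-node invariants filtered in this thesis, come in.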
