21

Enhancing SAT-based Formal Verification Methods using Global Learning

Arora, Rajat 25 May 2004 (has links)
With the advances in VLSI and System-On-Chip (SOC) technology, the complexity of hardware systems has increased manifold. Today, 70% of the design cost is spent in verifying these intricate systems. The two most widely used formal methods for design verification are Equivalence Checking and Model Checking. Equivalence Checking requires that the implementation circuit be exactly equivalent to the specification circuit (golden model); in other words, for each possible input pattern, the implementation circuit should yield the same outputs as the specification circuit. Model Checking, on the other hand, checks whether the design holds certain properties, which in turn are indispensable for the proper functionality of the design. The complexity of both Equivalence Checking and Model Checking is exponential in the circuit size.

In this thesis, we first propose a novel technique to improve SAT-based Combinational Equivalence Checking (CEC) and Bounded Model Checking (BMC). The idea is to perform a low-cost preprocessing that statically induces global signal relationships into the original CNF formula of the circuit under verification, and hence reduces the complexity of the SAT instance. This efficient and effective preprocessing quickly builds up the implication graph for the circuit under verification, yielding a large set of logic implications composed of direct, indirect and extended backward implications. These two-node implications (spanning time-frame boundaries) are converted into two-literal clauses and added to the original CNF database. The added clauses constrain the search space of the SAT-solver engine and provide correlation among the different variables, which enhances Boolean Constraint Propagation (BCP). Experimental results on large and difficult ISCAS'85, ISCAS'89 (full scan) and ITC'99 (full scan) CEC instances and ISCAS'89 BMC instances show that our approach is independent of the state-of-the-art SAT solver used, and that the added clauses help to achieve more than an order of magnitude speedup over the conventional approach. A comparison with Hyper-Resolution [Bacchus 03] also suggests that our technique is much more powerful, yielding non-trivial clauses that significantly reduce the complexity of the SAT instance.

Secondly, we propose a novel global learning technique that helps to identify highly non-trivial relationships among signals in the circuit netlist, thereby boosting the power of the existing implication engine. We call this new class of implications 'extended forward implications' and show their effectiveness through the additional untestable faults they help to identify.

Thirdly, we propose a suite of lemmas and theorems to formalize global learning. We show through implementation that these theorems help to significantly simplify a generic CNF formula (from Formal Verification, Artificial Intelligence, etc.) by identifying the necessary assignments, equivalent signals, complementary signals and other non-trivial implication relationships among its variables. We further illustrate through experimental results that the CNF formula simplification obtained using our tool outshines the simplification obtained using other preprocessors. / Master of Science
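The preprocessing step described in this abstract reduces, in essence, to appending one two-literal clause per learned implication. The Python sketch below illustrates that conversion under assumed data structures (DIMACS-style integer literals; the function names are hypothetical, not from the thesis):

```python
# Sketch: turning two-node logic implications into two-literal CNF clauses.
# An implication (a = va) -> (b = vb) over circuit nodes a, b is equivalent
# to the clause (not[a = va] OR [b = vb]). With positive DIMACS literals for
# "node is 1" and negative literals for "node is 0", the encoding follows.

def implication_to_clause(a, va, b, vb):
    """Encode the implication (a = va) -> (b = vb) as a two-literal clause.

    a, b   : positive DIMACS variable indices for the two nodes
    va, vb : the asserted node values (0 or 1)
    """
    lit_a = a if va else -a          # literal asserting the antecedent
    lit_b = b if vb else -b          # literal asserting the consequent
    return (-lit_a, lit_b)           # not(antecedent) OR consequent

def augment_cnf(clauses, implications):
    """Append the learned two-literal clauses to the original CNF database."""
    return clauses + [implication_to_clause(*imp) for imp in implications]

# Example: from (x3 = 1) -> (x7 = 0) we learn the clause (-x3 OR -x7),
# which strengthens Boolean Constraint Propagation inside the SAT solver.
cnf = [(1, 2, -3)]                              # original clauses
cnf = augment_cnf(cnf, [(3, 1, 7, 0)])          # adds (-3, -7)
print(cnf)                                      # [(1, 2, -3), (-3, -7)]
```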
22

Sequential Equivalence Checking with Efficient Filtering Strategies for Inductive Invariants

Nguyen, Huy 24 May 2011 (has links)
Powerful sequential optimization techniques can drastically change the Integrated Circuit (IC) design paradigm. Yet aggressive sequential optimization is shunned nowadays because, given the limited capability of sequential verification tools, there is no efficient way to prove the preservation of equivalence after optimization. Because the number of transistors fitting on a single fixed-size die increases with Moore's law, the problem gets exponentially harder over time. It is no surprise that functional verification has become a major bottleneck in the time-to-market of a product; in fact, the literature reports that 70% of design time is spent on making sure the design is bug-free and operating correctly.

One of the core verification tasks in achieving high-quality products is equivalence checking. Essentially, equivalence checking ensures that the optimized product preserves the functionality of the unoptimized model. This is important for industry because products are modified constantly to meet different goals such as low power, high performance, etc. The mainstream approaches to equivalence checking are simulation and formal verification. In the simulation approach, the golden design and the design under verification (DUV) are fed the same input stimuli and are expected to produce identical responses; in case of discrepancy, traces are generated and the DUV undergoes modification. With the increase in input pins and state elements in designs, exhaustive simulation becomes infeasible; hence, the completeness of the approach is not guaranteed, and it must be accompanied by notions of coverage. Formal verification, on the other hand, incorporates mathematical proofs and guarantees completeness over the search space. However, formal verification has problems of its own: it is usually resource intensive, and not every design can be verified after optimization. When the golden model and the DUV are vastly different in structure, modern checkers give inconclusive results. This thesis therefore focuses on improving the strength and efficiency of sequential equivalence checking (SEC) using a formal approach.

While great strides have been made in the verification of combinational circuits, SEC remains rather rudimentary. Without powerful SEC as a backbone, aggressive sequential synthesis and optimization are often avoided if the optimized design cannot be proved equivalent to the original one. To take on the challenges of SEC, we propose two frameworks that successfully determine equivalence for hard-to-verify circuits. The first framework utilizes arbitrary relations between any two nodes within the two sequential circuits in question; the two nodes can reside in the same circuit or across the circuits, and likewise in the same time-frame or across time-frames. The merit of this approach is that it uses the global structure of the circuits to speed up the verification process. The second framework introduces techniques to identify a small but powerful subset of multi-node relations (involving more than two nodes), which help to prune the large don't-care search space and result in a successful SEC framework. In contrast with previous approaches, in which an exponential number of multi-node relations are mined and learned, we alleviate the computational cost by selecting far fewer invariants to reach the desired conclusion.

Although independent, the two frameworks can be used in sequence to complement each other. Experimental results demonstrate that our frameworks can take on many hard-to-verify cases and show a significant speedup over previous approaches. / Master of Science
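As an illustration of why candidate selection matters here, relations are typically winnowed cheaply before any formal proof is attempted. The Python sketch below shows a common simulation-based filtering scheme for two-node candidates: random patterns are applied to the unrolled miter of the two designs, and any relation falsified by some pattern is dropped, so only the survivors reach the expensive SAT/induction stage. The miter simulation is stubbed out and all names are hypothetical; the thesis's own mining and filtering strategies are more elaborate.

```python
import random

def simulate_miter(pattern, num_nodes):
    """Stub: return one value per node of the unrolled miter for one input
    pattern (a real implementation would evaluate the two netlists)."""
    rnd = random.Random(hash(tuple(pattern)))
    return [rnd.randint(0, 1) for _ in range(num_nodes)]

def filter_candidate_pairs(num_nodes, num_inputs, num_patterns=256):
    # Optimistically assume every node pair might be equivalent, whether the
    # nodes sit in the same circuit or across circuits and time-frames, then
    # let simulation disprove pairs.
    candidates = {(a, b) for a in range(num_nodes)
                         for b in range(a + 1, num_nodes)}
    for _ in range(num_patterns):
        pattern = [random.randint(0, 1) for _ in range(num_inputs)]
        v = simulate_miter(pattern, num_nodes)
        # Keep only the pairs that still agree on this pattern.
        candidates = {(a, b) for (a, b) in candidates if v[a] == v[b]}
    return candidates   # candidate invariants to hand to the formal prover

survivors = filter_candidate_pairs(num_nodes=16, num_inputs=8)
print(f"{len(survivors)} candidate equivalences survive simulation")
```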
23

The Generalized Splitting Method for Combinatorial Counting and Static Rare-Event Probability Estimation

Botev, Zdravko Unknown Date (has links)
This thesis is divided into two parts. In the first part we describe a new Monte Carlo algorithm for the consistent and unbiased estimation of multidimensional integrals and the efficient sampling from multidimensional densities. The algorithm is inspired by the classical splitting method and can be applied to general static simulation models. We provide examples from rare-event probability estimation, counting, optimization, and sampling, demonstrating that the proposed method can outperform existing Markov chain sampling methods in terms of convergence speed and accuracy. In the second part we present a new adaptive kernel density estimator based on linear diffusion processes. The proposed estimator builds on existing ideas for adaptive smoothing by incorporating information from a pilot density estimate. In addition, we propose a new plug-in bandwidth selection method that is free from the arbitrary normal reference rules used by existing methods. We present simulation examples in which the proposed approach outperforms existing methods in terms of accuracy and reliability.
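To make the splitting idea concrete, here is a minimal fixed-effort multilevel-splitting sketch in Python for estimating a static rare-event probability l = P(S(X) >= gamma). The score function, the level schedule, and the Metropolis move are illustrative assumptions; they stand in for, rather than reproduce, the thesis's Generalized Splitting algorithm.

```python
import math
import random

N = 10_000                      # samples kept at every level (fixed effort)
DIM = 10
LEVELS = [2.0, 4.0, 6.0, 8.0]   # intermediate thresholds up to gamma = 8

def score(x):
    """Illustrative score: S(x) = sum of the coordinates."""
    return sum(x)

def metropolis_move(x, level, step=0.5):
    """One Markov move leaving N(0,1)^DIM conditioned on S(x) >= level
    invariant (Gaussian random-walk Metropolis)."""
    y = [xi + random.gauss(0.0, step) for xi in x]
    if score(y) < level:
        return x                # reject: proposal leaves the level set
    log_ratio = sum(-0.5 * (yi * yi - xi * xi) for xi, yi in zip(x, y))
    return y if random.random() < math.exp(min(0.0, log_ratio)) else x

def splitting_estimate():
    sample = [[random.gauss(0.0, 1.0) for _ in range(DIM)] for _ in range(N)]
    estimate = 1.0
    for level in LEVELS:
        hits = [x for x in sample if score(x) >= level]
        if not hits:
            return 0.0
        estimate *= len(hits) / len(sample)   # conditional level probability
        # Split: resample the hits and diversify them with Markov moves.
        sample = [metropolis_move(random.choice(hits), level) for _ in range(N)]
    return estimate

# Approximates P(sum X_i >= 8) for X_i iid N(0,1), i.e. P(N(0,10) >= 8).
print(splitting_estimate())
```

The product of the estimated conditional level probabilities yields the rare-event estimate; splitting keeps a workable population at every level instead of waiting for direct hits of the final threshold.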
24

Search, propagation, and learning in sequencing and scheduling problems

Siala, Mohamed 13 May 2015 (has links)
Sequencing and scheduling involve the organization in time of operations subject to capacity and resource constraints. In this dissertation we propose several improvements to constraint satisfaction and combinatorial optimization methods for solving these problems. The contributions concern three different aspects: how to choose the next node to explore (search)? how much, and how efficiently, can one reduce the search space (propagation)? and what can be learnt from previous failures (learning)?

Our contributions start with an empirical study of search heuristics for the well-known car-sequencing problem. This evaluation characterizes the key aspects of a good heuristic and shows that the search strategy is as important as propagation in this problem. Second, we carefully investigate the propagation aspect in a class of sequencing problems. In particular, we propose a filtering algorithm for a type of sequence constraint whose worst-case time complexity is lower than the best known upper bound, and indeed optimal.
Third, we investigate the impact of clause learning for solving the car-sequencing problem. In particular, we propose reduced explanations for the new filtering. The experimental evaluation shows compelling evidence supporting the importance of clause learning for solving this problem efficiently. Next, we revisit the general approach of lazy generation for the Boolean variables encoding the domains. Our proposition avoids a classical redundancy issue without computational overhead. Finally, we investigate conflict analysis algorithms for solving disjunctive scheduling problems. In particular, we introduce a novel learning procedure tailored to this family of problems. The new conflict analysis differs from conventional methods by learning clauses whose size is not a function of the scheduling horizon. Our comprehensive experimental study with traditional academic benchmarks demonstrates the impact of the proposed learning scheme. In particular, we find new lower bounds for a well-known scheduling benchmark.
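The lazy-generation idea mentioned above can be sketched as follows: the Boolean literal [x <= v] for an integer variable x is created only when propagation or conflict analysis first demands it, with channeling clauses linking it to its nearest already-created neighbours. Note that this naive version keeps a now-redundant direct clause whenever a value is inserted between two existing ones, which is precisely the kind of classical redundancy issue the dissertation's proposition avoids; all names here are hypothetical.

```python
import bisect

class LazyDomainEncoding:
    """Order-encoding Booleans [x <= v] for an integer variable x in 0..ub,
    created on demand rather than eagerly for the whole domain."""

    def __init__(self, ub, new_bool, add_clause):
        self.ub = ub
        self.new_bool = new_bool        # () -> fresh positive SAT literal
        self.add_clause = add_clause    # clause -> None
        self.le = {}                    # v -> literal for [x <= v]
        self.created = []               # sorted values that have a literal

    def literal_le(self, v):
        """Return [x <= v], generating it and its channeling clauses lazily."""
        if v in self.le:
            return self.le[v]
        lit = self.new_bool()
        # Channel against the closest already-generated neighbours so the
        # chain [x <= a] -> [x <= b] for a < b stays consistent.
        i = bisect.bisect(self.created, v)
        if i > 0:                                 # nearest created value below v
            self.add_clause((-self.le[self.created[i - 1]], lit))
        if i < len(self.created):                 # nearest created value above v
            self.add_clause((-lit, self.le[self.created[i]]))
        bisect.insort(self.created, v)
        self.le[v] = lit
        return lit

# Tiny usage example with a trivial clause database (all names assumed):
clauses, counter = [], [0]
def new_bool():
    counter[0] += 1
    return counter[0]

x = LazyDomainEncoding(ub=100, new_bool=new_bool, add_clause=clauses.append)
x.literal_le(10); x.literal_le(50); x.literal_le(30)
print(clauses)   # channeling clauses only among the three generated literals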
25

Neuro-inspired computing enhanced by scalable algorithms and physics of emerging nanoscale resistive devices

Wijesinghe, Parami 16 August 2019 (has links)
Deep 'Analog Artificial Neural Networks' (AANNs) perform complex classification tasks with high accuracy, but they rely on a humongous amount of power to perform their calculations, veiling the accuracy benefits. The biological brain, on the other hand, is significantly more powerful than such networks and consumes orders of magnitude less power, indicating some conceptual mismatch. Given that biological neurons are locally connected, communicate using energy-efficient trains of spikes, and behave non-deterministically, incorporating these effects in Artificial Neural Networks (ANNs) may drive us a few steps towards more realistic neural networks.

Emerging devices can offer a plethora of benefits, including power efficiency, faster operation, and low area, in a vast array of applications. For example, memristors and Magnetic Tunnel Junctions (MTJs) are suitable for high-density, non-volatile Random Access Memories when compared with CMOS implementations. In this work, we analyze the possibility of harnessing the characteristics of such emerging devices to achieve neuro-inspired solutions to intricate problems.

We propose how the inherent stochasticity of nano-scale resistive devices can be utilized to realize the functionality of spiking neurons and synapses that can be incorporated in deep stochastic Spiking Neural Networks (SNNs) for image classification problems. While ANNs mainly dwell in the aforementioned classification domain, they can be adapted for a variety of other applications. One such neuro-inspired solution is the Cellular Neural Network (CNN) based Boolean satisfiability solver. Boolean satisfiability (k-SAT) is an NP-complete (k≥3) problem that constitutes one of the hardest classes of constraint satisfaction problems. We provide a proof-of-concept hardware-based analog k-SAT solver built using MTJs. The inherent physics of MTJs, enhanced by device-level modifications, is harnessed here to emulate the intricate dynamics of an analog, CNN-based satisfiability (SAT) solver.

Furthermore, in the effort of reaching human-level performance in terms of accuracy, increasing the complexity and size of ANNs is crucial. Efficient algorithms for evaluating neural network performance are of significant importance for improving the scalability of networks, in addition to designing hardware accelerators. We propose a scalable approach for evaluating Liquid State Machines (LSMs): a bio-inspired computing model where the inputs are sparsely connected to a randomly interlinked reservoir (or liquid). It has been shown that biological neurons are more likely to be connected to other neurons in close proximity, and tend to be disconnected from neurons that are spatially far apart. Inspired by this, we propose a group of locally connected neuron reservoirs, or an ensemble-of-liquids approach, for LSMs. We analyze how segmenting a single large liquid into an ensemble of multiple smaller liquids affects the latency and accuracy of an LSM. In our analysis, we quantify the ability of the proposed ensemble approach to provide an improved representation of the input using the Separation Property (SP) and Approximation Property (AP). Our results illustrate that the ensemble approach enhances class discrimination (quantified as the ratio between the SP and AP), leading to improved accuracy in speech and image recognition tasks compared to a single large liquid. Furthermore, we obtain performance benefits in terms of improved inference time and reduced memory requirements, due to the lower number of connections and the freedom to parallelize the liquid evaluation process.
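As one concrete example of the device-level stochasticity the abstract appeals to: in common behavioural abstractions of thermally assisted switching, the probability that an MTJ switches within a fixed write window grows sigmoidally with the input current. The sketch below uses that abstraction as a stochastic spiking neuron; it is a hedged illustration with assumed parameters, not the thesis's device model.

```python
import math
import random

def mtj_spike(input_current, slope=4.0, threshold=0.5):
    """Return 1 if the MTJ switches within the write window (a spike),
    else 0; the switching probability follows a sigmoid of the current."""
    p_switch = 1.0 / (1.0 + math.exp(-slope * (input_current - threshold)))
    return 1 if random.random() < p_switch else 0

def stochastic_layer(inputs, weights):
    """One layer of stochastic neurons: weighted sums drive MTJ switching."""
    return [mtj_spike(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

# A single forward pass of a tiny stochastic layer:
print(stochastic_layer([1.0, 0.0, 1.0], [[0.3, 0.2, 0.4], [0.1, 0.9, 0.0]]))

# Averaging over many trials recovers an analog, sigmoid-like activation.
trials = 10_000
rate = sum(mtj_spike(0.7) for _ in range(trials)) / trials
print(f"firing rate at I=0.7: {rate:.3f}")   # ~ sigmoid(4*(0.7-0.5)) = 0.69
```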
26

Parameterization of stochastic data mining methods, string conversion algorithms and deductive logic programming techniques

Λύρας, Δημήτριος 02 February 2011 (has links)
The present dissertation deals with the problem of learning from two different perspectives: inductive and deductive learning. Initially, we present our parameterizations of stochastic data mining methods in the form of four personalized supportive services for patients suffering from anxiety disorders. Three of these services focus on discovering possible associations between the patients' environmental context and their anxiety levels, whereas the last one is a Bayesian model that predicts the stress level a patient is likely to exhibit in a given environmental context.

Our proposals in the wider area of text mining and string conversion include: i) decision-tree based models for the automatic transcription of Greek text into its equivalent phonetic representation, ii) the stochastic modeling of all the existing transliteration norms for Greek-to-Greeklish conversion in the form of a robust transcriber, and iii) a novel algorithm that combines two string distance metrics, well known for their satisfactory performance, to address the problem of automatic word lemmatization. With regard to the development of systems that facilitate automatic information retrieval, we propose employing the aforementioned lemmatization algorithm to reduce the ambiguity posed by the plethora of morphological variations of the processed language, in parallel with probabilistic Bayesian networks, aiming at a robust and competitive modern information retrieval system.

Finally, our proposals regarding logical deduction and satisfiability checking include: i) a novel mathematical formalism of the analytic tableaux methodology named AnaLog (after the terms Analytic Tableaux Logic), which allows us to efficiently simulate the structure and properties of a complete clausal tableau for a given input CNF formula. Via the AnaLog calculus it is possible to compute all the closed branches of the equivalent complete Smullyan tree without fully constructing it; and ii) a practical application of the AnaLog calculus within an interval-arithmetic framework that decides the satisfiability of propositional formulas in CNF. Apart from constituting an illustrative demonstration of the AnaLog calculus, this framework may also be employed as an alternative to conventional SAT systems.
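To ground the tableau terminology, the sketch below spells out the branch/closure semantics that AnaLog formalizes: for a CNF formula, each branch of the complete Smullyan tree picks one literal per clause, and a branch is closed iff it contains a complementary pair. This brute-force enumeration is only an illustration; AnaLog's contribution is precisely to find the closed branches without building the tree, via its interval-arithmetic formulation.

```python
from itertools import product

def branch_is_closed(branch):
    """A branch (one chosen literal per clause) is closed iff it carries
    some literal together with its complement p, -p."""
    chosen = set(branch)
    return any(-lit in chosen for lit in chosen)

def cnf_satisfiable(cnf):
    """cnf: list of clauses; each clause is a tuple of integer literals
    (negative integers denote negated variables). The formula is
    satisfiable iff some branch of the complete tableau stays open."""
    return any(not branch_is_closed(b) for b in product(*cnf))

print(cnf_satisfiable([(1, 2), (-1, 2)]))   # True: branch (1, 2) is open
print(cnf_satisfiable([(1,), (-1,)]))       # False: the only branch closes
```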
