51

Exploitation of propositional logic for parallel solving of cryptographic problems

Legendre, Florian, 30 June 2014
The democratization of increasingly high-performance digital technologies, and especially of the Internet, has profoundly changed the world of communication. Consequently, the needs in cryptography are ever more numerous, and verifying the security of cipher algorithms is essential. This thesis deals with a new cryptanalysis, called logical cryptanalysis, which is based on the use of logical formalisms to express and solve cryptographic problems. More precisely, the work presented here focuses on a particular category of ciphers, called cryptographic hash functions, used in authentication and data-integrity protocols. Logical cryptanalysis is a specific algebraic cryptanalysis in which the cryptographic problem is expressed through the satisfiability problem, commonly called the SAT problem: a combinatorial decision problem that is central to complexity theory. In recent years, work by the scientific community has produced efficient solvers for industrial and academic problems. The work presented in this thesis is the fruit of an exploration between satisfiability and cryptanalysis, and has yielded new results and innovative methods for weakening cryptographic functions. The first contribution is the modeling of a cryptographic problem as a SAT problem. For this, we present rules that make it easy to describe the basic operations involved in cipher algorithms. A section is then dedicated to logical reasoning, in order to simplify the produced SAT formulas and to show how satisfiability can help enrich knowledge of a studied problem. Furthermore, we present several ways of using this fine-grained modeling to apply probabilistic reasoning to the data associated with the generated SAT formulas. This improves both the modeling and the solving of the problem, and uncovered a weakness in the use of round constants. Second, a section is devoted to practical attacks. Within this framework, we tackled preimages of the most popular cryptographic hash functions. The collision problem is also approached in different ways; in particular, Stevens' one-block collision attack on MD5 was translated into a logical context. It is interesting to note that in both cases, logical cryptanalysis takes a fresh look at the problems considered.
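
Logical cryptanalysis hinges on translating the primitive operations of a cipher into CNF clauses. As a rough, hypothetical illustration of that modeling step (not Legendre's actual encoder), the Python sketch below emits DIMACS-style clauses forcing c = a XOR b, the kind of gadget that recurs throughout hash-function encodings:

```python
from itertools import product

def xor_clauses(a, b, c):
    """CNF clauses (DIMACS-style signed literals) forcing c == a XOR b."""
    clauses = []
    for va, vb in product((0, 1), repeat=2):
        vc = va ^ vb
        # Forbid the one assignment (a=va, b=vb, c=NOT vc) that violates XOR.
        clauses.append((
            a if va == 0 else -a,
            b if vb == 0 else -b,
            c if vc == 1 else -c,
        ))
    return clauses

for clause in xor_clauses(1, 2, 3):
    print(*clause, 0)          # one DIMACS clause per line
```

Each clause rules out one of the four assignments that violate the XOR relation; a full encoder composes thousands of such gadgets for the additions, rotations, and round constants of a hash function.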
52

Certification of static analysis in many-sorted first-order logic

Cornilleau, Pierre-Emmanuel, 25 March 2013
Static program analysis is a core technology for both verifying and finding errors in programs, but most static analysers are complex pieces of software that are not themselves without error. A static analysis formalised as an abstract interpreter can be proved sound; however, such proofs are significantly harder to carry out on the actual implementation of the analyser. To alleviate this problem, we propose to generate Verification Conditions (VCs: formulae valid only if the results of the analyser are correct) and to discharge them using an Automated Theorem Prover (ATP). We generate formulae in Many-Sorted First-Order Logic (MSFOL), a logic that has been used successfully in deductive program verification. MSFOL is expressive enough to describe the results of complex analyses and to formalise the operational semantics of object-oriented languages. Using the same logic for both tasks allows us to prove the soundness of the VC generator using deductive verification tools. To ensure that VCs can be discharged automatically for complex analyses of the heap, we introduce a VC calculus that produces formulae belonging to a decidable fragment of MSFOL. Furthermore, to be able to certify different analyses with the same calculus, we describe a family of analyses with a parametric concretisation function and instrumentation of the semantics. To improve the reliability of ATPs, we also studied the result certification of Satisfiability Modulo Theories (SMT) solvers, a family of ATPs dedicated to MSFOL. We propose a modular proof system and a modular proof verifier, programmed and proved correct in Coq, that rely on exchangeable verifiers for each of the underlying theories.
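
To give a feel for the VC-discharging step, here is a toy VC checked with the Z3 Python bindings (Z3 stands in for whichever ATP a given tool chain uses; the formula and variable names are invented for the example). A VC is valid exactly when its negation is unsatisfiable:

```python
from z3 import Int, Implies, And, Not, Solver, unsat

x, y = Int('x'), Int('y')

# Toy verification condition: if the analyser claims 0 <= x < y,
# then the guard x <= y - 1 inferred for an array access must hold.
vc = Implies(And(x >= 0, x < y), x <= y - 1)

s = Solver()
s.add(Not(vc))                # valid iff the negation is unsatisfiable
assert s.check() == unsat
print("VC discharged")
```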
53

Automated reasoning over string constraints

Liang, Tianyi, 1 December 2014
An increasing number of applications in verification and security rely on, or could benefit from, automatic solvers that can check the satisfiability of constraints over a rich set of data types that includes character strings. Unfortunately, most string solvers today are standalone tools that can reason only about some fragment of the theory of strings and regular expressions, sometimes with strong restrictions on the expressiveness of their input language (such as length bounds on all string variables). These specialized solvers reduce string problems to satisfiability problems over specific data types, such as bit vectors, or to automata decision problems. On the other hand, despite their power and success as back-end reasoning engines, general-purpose Satisfiability Modulo Theories (SMT) solvers have so far provided minimal or no native support for string reasoning. This thesis presents a deductive calculus describing a new algebraic approach that allows solving constraints over the theory of unbounded strings and regular expressions natively, without reduction to other problems. We provide proofs of refutation soundness and solution soundness of our calculus, and of solution completeness under a fair proof strategy. Moreover, we show that our calculus is a decision procedure for the theory of regular language membership with length constraints. We have implemented our calculus as a string solver for the theory of (unbounded) strings with concatenation, length, and membership in regular languages, and incorporated it into the SMT solver CVC4 to expand its already large set of built-in theories. This work makes CVC4 the first SMT solver able to accept and process a rich set of mixed constraints over strings, integers, reals, arrays, and other data types. In addition, our initial experimental results show that, on string problems, CVC4 is highly competitive with specialized string solvers that have a comparable input language. We believe the approach described in this thesis opens a new direction for string-based formal methods.
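
For illustration, a mixed string/length/regular-membership constraint of the kind described can be posed through the Z3 Python bindings (used here only for compactness; the thesis work is implemented in CVC4, whose input language accepts analogous constraints):

```python
from z3 import String, StringVal, Length, Concat, InRe, Re, Star, Solver, sat

x, y = String('x'), String('y')

s = Solver()
s.add(Concat(x, y) == StringVal("abcabc"))   # word equation
s.add(Length(x) > Length(y))                 # mixed string/integer constraint
s.add(InRe(x, Star(Re("abc"))))              # regular membership: x in (abc)*

assert s.check() == sat
m = s.model()
print(m[x], m[y])        # e.g. x = "abcabc", y = ""
```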
54

Improvements to Clause Weighting Local Search for Propositional Satisfiability

Ferreira Junior, Valnir, January 2007
The propositional satisfiability (SAT) problem is of considerable theoretical and practical relevance to the artificial intelligence (AI) community and has been used to model many pervasive AI tasks such as default reasoning, diagnosis, planning, image interpretation, and constraint satisfaction. Computational methods for SAT have historically fallen into two broad categories: complete search and local search. Within the local search category, clause weighting methods are amongst the best alternatives for SAT, becoming particularly attractive on problems where a complete search is impractical or where there is a need to find good candidate solutions within a short time. The thesis is concerned with the study of improvements to clause weighting local search methods for SAT. The main contributions are:
- A component-based framework for the functional analysis of local search methods.
- A clause weighting local search heuristic that exploits longer-term memory arising from clause weight manipulations (a minimal sketch of the underlying weighting scheme follows this entry). The approach first learns which clauses are globally hardest to satisfy and then uses this information to treat these clauses differentially during weight manipulation [Ferreira Jr and Thornton, 2004].
- A study of heuristic tie breaking in the domain of additive clause weighting local search methods, and the introduction of a competitive method that uses heuristic tie breaking instead of the random tie breaking approach used in most existing methods [Ferreira Jr and Thornton, 2005].
- An evaluation of backbone guidance for clause weighting local search, and the introduction of backbone guidance to three state-of-the-art clause weighting local search methods [Ferreira Jr, 2006].
- A new clause weighting local search method for SAT that successfully exploits synergies between the longer-term memory and tie breaking heuristics developed in the thesis to significantly improve on the performance of current state-of-the-art local search methods for SAT-encoded instances containing identifiable CSP structure.
Portions of this thesis have appeared in the following refereed publications:
- Longer-term memory in clause weighting local search for SAT. In Proceedings of the 17th Australian Joint Conference on Artificial Intelligence, volume 3339 of Lecture Notes in Artificial Intelligence, pages 730-741, Cairns, Australia, 2004.
- Tie breaking in clause weighting local search for SAT. In Proceedings of the 18th Australian Joint Conference on Artificial Intelligence, volume 3809 of Lecture Notes in Artificial Intelligence, pages 70–81, Sydney, Australia, 2005.
- Backbone guided dynamic local search for propositional satisfiability. In Proceedings of the Ninth International Symposium on Artificial Intelligence and Mathematics, AI&M, Fort Lauderdale, Florida, 2006.
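
As a minimal sketch of the additive clause weighting idea underlying these methods (greatly simplified, with none of the thesis's tie breaking or longer-term memory refinements):

```python
import random

def weighted_local_search(clauses, n_vars, max_flips=100_000):
    """Minimal additive clause-weighting SLS for CNF.
    clauses: lists of nonzero signed ints; returns a model or None."""
    assign = {v: random.choice([False, True]) for v in range(1, n_vars + 1)}
    weight = [1] * len(clauses)
    sat_lit = lambda lit: assign[abs(lit)] == (lit > 0)

    for _ in range(max_flips):
        unsat = [i for i, c in enumerate(clauses) if not any(map(sat_lit, c))]
        if not unsat:
            return assign

        def cost_after_flip(v):
            # Weighted count of unsatisfied clauses if v were flipped.
            assign[v] = not assign[v]
            cost = sum(weight[i] for i, c in enumerate(clauses)
                       if not any(map(sat_lit, c)))
            assign[v] = not assign[v]
            return cost

        current = sum(weight[i] for i in unsat)
        candidates = {abs(l) for i in unsat for l in clauses[i]}
        best = min(candidates, key=cost_after_flip)
        if cost_after_flip(best) < current:
            assign[best] = not assign[best]      # downhill move
        else:
            for i in unsat:                      # local minimum: bump weights
                weight[i] += 1
    return None

print(weighted_local_search([[1, 2, 3], [-1, 2], [-2, 3], [-3, 1]], 3))
```

Weight increments at local minima gradually make the hardest clauses dominate the cost function, which is exactly the signal the thesis's longer-term memory heuristic learns from.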
55

Modelling and Exploiting Structures in Solving Propositional Satisfiability Problems

Pham, Duc Nghia, January 2006
Recent research has shown that it is often preferable to encode real-world problems as propositional satisfiability (SAT) problems and then solve them using a general-purpose SAT solver. However, much of the valuable information and structure of these realistic problems is flattened out and hidden inside the corresponding Conjunctive Normal Form (CNF) encodings of the SAT domain. Recently, systematic SAT solvers have been progressively improved and are now able to solve many highly structured practical problems containing millions of clauses. In contrast, state-of-the-art Stochastic Local Search (SLS) solvers still have difficulty in solving structured problems, apparently because they are unable to exploit hidden structure as well as the systematic solvers. In this thesis, we study and evaluate different ways to effectively recognise, model and efficiently exploit useful structures hidden in realistic problems. A summary of the main contributions is as follows:
1. We first investigate an off-line processing phase that applies resolution-based pre-processors to input formulas before running SLS solvers on these problems. We report an extensive empirical examination of the impact of SAT pre-processing on the performance of contemporary SLS techniques. It emerges that while all the solvers examined do indeed benefit from pre-processing, the effects of different pre-processors are far from uniform across solvers and across problems. Our results suggest that SLS solvers need to be equipped with multiple pre-processors if they are ever to match the performance of systematic solvers on highly structured problems. [Part of this study was published at the AAAI-05 conference.]
2. We then look at potential approaches to bridging the gap between SAT and constraint satisfaction problem (CSP) formalisms. One approach has been to develop a many-valued SAT formalism (MV-SAT) as an intermediate paradigm between SAT and CSP, and then to translate existing highly efficient SAT solvers to the MV-SAT domain. In this study, we follow a different route, developing SAT solvers that can automatically recognise CSP structure hidden in SAT encodings (a sketch of one standard encoding appears after this list). This allows us to look more closely at how constraint weighting can be implemented in the SAT and CSP domains. Our experimental results show that a SAT-based mechanism to handle weights, together with a CSP-based method to instantiate variables, is superior to other combinations of SAT and CSP-based approaches. In addition, SLS solvers based on this many-valued weighting approach outperform other existing approaches to handling many-valued CSP structures. [Part of this study was published at the AAAI-05 conference.]
3. Finally, we propose and evaluate six different schemes to encode temporal reasoning problems, in particular Interval Algebra (IA) networks, into SAT CNF formulas. We then empirically examine the performance of local search as well as systematic solvers on the new temporal SAT representations, in comparison with solvers that operate on native IA representations. Our empirical results show that zChaff (a state-of-the-art complete SAT solver), together with the best IA-to-SAT encoding scheme, can solve temporal problems significantly faster than existing IA solvers working on the equivalent native IA networks. [Part of this study was published at the CP-05 workshop.]
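
The CSP structure such solvers look for often comes from the standard direct encoding, which maps a finite-domain variable to one Boolean per value, constrained by at-least-one and at-most-one clauses. A sketch (the variable-numbering scheme is invented for the example):

```python
from itertools import combinations

def direct_encode(var_index, domain_size):
    """Direct encoding of one CSP variable with the given domain size.
    Boolean for (variable, value v) is numbered var_index*domain_size + v + 1."""
    lit = lambda v: var_index * domain_size + v + 1
    alo = [[lit(v) for v in range(domain_size)]]          # at least one value
    amo = [[-lit(u), -lit(v)]                             # at most one value
           for u, v in combinations(range(domain_size), 2)]
    return alo + amo

print(direct_encode(0, 3))
# [[1, 2, 3], [-1, -2], [-1, -3], [-2, -3]]
```

Recognising these clause patterns in a CNF formula is what lets a solver recover the many-valued variables hidden in the encoding.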
56

Investigations into Satisfiability Search

Slater, Andrew, January 2003
In this dissertation we investigate theoretical aspects of some practical approaches used in solving and understanding search problems. We concentrate on the satisfiability problem, a strong representative of search problem domains. The work develops general theoretical foundations with which to investigate practical aspects of satisfiability search, resulting in a better understanding of the fundamental mechanics of search algorithm construction and behaviour. A theory of choice, or branching, heuristics is presented, accompanied by results showing a correspondence of both parameterisations and performance when the method is compared to previous empirically motivated branching techniques. The logical foundations of the backtracking mechanism are explored alongside formulations for reasoning in relevant logics, which results in the development of a malleable backtracking mechanism that subsumes other intelligent backtracking proof construction techniques and allows the incorporation of proof rearrangement strategies. Moreover, empirical tests show that relevant backtracking outperforms all other forms of intelligent backtracking search tree construction methods. An investigation into modelling and generating real-world problem instances justifies a modularised problem model, which is used experimentally to highlight the practicability of search algorithms for the proposed model and related domains.
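
A bare-bones sketch of how a branching heuristic plugs into backtracking search (chronological backtracking only; the dissertation's relevant backtracking is considerably more sophisticated):

```python
def dpll(clauses, assignment, choose):
    """Bare-bones DPLL; `choose` is the pluggable branching heuristic."""
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue                        # clause already satisfied
        rest = [lit for lit in clause if -lit not in assignment]
        if not rest:
            return None                     # conflict: backtrack
        simplified.append(rest)
    if not simplified:
        return assignment                   # all clauses satisfied
    lit = choose(simplified)                # heuristic picks the next literal
    for branch in (lit, -lit):
        result = dpll(simplified, assignment | {branch}, choose)
        if result is not None:
            return result
    return None

# Example heuristic: branch on a literal from a shortest clause.
shortest = lambda cls: min(cls, key=len)[0]
print(dpll([[1, 2], [-1, 2], [-2, 3]], frozenset(), shortest))
# frozenset({1, 2, 3})
```

Swapping `choose` is all it takes to compare branching rules, which is the experimental handle the dissertation's theory of choice heuristics formalises.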
57

Improvements to Field-Programmable Gate Array Design Efficiency using Logic Synthesis

Ling, Andrew Chaang, 18 February 2010
As Field-Programmable Gate Array (FPGA) capacity can now support several processors on a single device, the scalability of FPGA design tools and methods has emerged as a major obstacle to the wider use of FPGAs. For example, logic synthesis, which has traditionally been the fastest step in the FPGA Computer-Aided Design (CAD) flow, now takes several hours to complete in a typical FPGA compile. In this work, we address this problem by focusing on two areas. First, we revisit FPGA logic synthesis and attempt to improve its scalability. Specifically, we look at a binary decision diagram (BDD) based logic synthesis flow, referred to as FBDD, whose runtime we improve several-fold with marginal impact on the resulting circuit area. We do so by speeding up the classical cut generation problem by an order of magnitude, which enables its application directly at the logic synthesis level. Following this, we introduce a guided partitioning technique using a fast global budgeting formulation, which enables us to optimize individual “pockets” within the circuit without degrading overall circuit performance. Partitioning significantly reduces the solution space of the logic synthesis problem and, furthermore, opens up the possibility of parallelizing the logic synthesis step. The second area we look at is Engineering Change Orders (ECOs). ECOs are incremental modifications to a design late in the design flow. They are beneficial because they are minimally disruptive to the existing circuit, preserving much of the engineering effort invested previously in the design. In a design flow where most of the steps are fully automated, ECOs still remain largely a manual process. This can often tie up a designer for weeks, leading to missed project deadlines, which is very detrimental to products whose life-cycle may span only a few months. As a solution, we show how existing logic synthesis techniques can be leveraged to modify a circuit automatically in a minimally disruptive manner, significantly reducing the turn-around time of applying ECOs.
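
Cut generation, the step accelerated above, enumerates the k-feasible cuts of each node in the circuit graph. A textbook (unaccelerated) version for two-input gates might look like the following sketch; the netlist format and names are invented for the example:

```python
def enumerate_cuts(graph, k):
    """k-feasible cuts per node of a DAG given in topological order:
    {node: (in1, in2)} for two-input gates, {node: ()} for primary inputs."""
    cuts = {}
    for node, fanins in graph.items():
        if not fanins:                        # primary input
            cuts[node] = [frozenset([node])]
            continue
        a, b = fanins
        merged = {frozenset([node])}          # the trivial cut
        for ca in cuts[a]:
            for cb in cuts[b]:
                cut = ca | cb
                if len(cut) <= k:             # keep only k-feasible merges
                    merged.add(cut)
        cuts[node] = list(merged)
    return cuts

# Tiny netlist: g = AND(a, b), h = AND(g, c)
netlist = {'a': (), 'b': (), 'c': (), 'g': ('a', 'b'), 'h': ('g', 'c')}
print(sorted(map(sorted, enumerate_cuts(netlist, 3)['h'])))
# [['a', 'b', 'c'], ['c', 'g'], ['h']]
```

The cross-product merge at every node is why cut generation dominates runtime on large circuits, and why an order-of-magnitude speedup there matters.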
59

Augmenting Local Search for Satisfiability

Southey, Finnegan, January 2004
This dissertation explores approaches to the satisfiability problem, focusing on local search methods. The research endeavours to better understand how and why some local search methods are effective. At the root of this understanding are a set of metrics that characterize the behaviour of local search methods. Based on this understanding, two new local search methods are proposed and tested, the first, SDF, demonstrating the value of the insights drawn from the metrics, and the second, ESG, achieving state-of-the-art performance and generalizing the approach to arbitrary 0-1 integer linear programming problems. This generality is demonstrated by applying ESG to combinatorial auction winner determination. Further augmentations to local search are proposed and examined, exploring hybrids that incorporate aspects of backtrack search methods.
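
Winner determination itself is a 0-1 optimisation problem; a brute-force reference version (for illustration only; ESG attacks the same problem heuristically as a 0-1 integer program) is:

```python
from itertools import combinations

def winner_determination(bids):
    """Exhaustive winner determination for a combinatorial auction:
    bids are (items, price) pairs; choose a conflict-free subset of
    bids maximizing revenue."""
    best_value, best_set = 0, []
    for r in range(1, len(bids) + 1):
        for subset in combinations(bids, r):
            items = [i for bid_items, _ in subset for i in bid_items]
            if len(items) == len(set(items)):     # no item sold twice
                value = sum(price for _, price in subset)
                if value > best_value:
                    best_value, best_set = value, list(subset)
    return best_value, best_set

bids = [({'a', 'b'}, 5), ({'b', 'c'}, 4), ({'c'}, 3), ({'a'}, 2)]
print(winner_determination(bids))   # (8, [({'a', 'b'}, 5), ({'c'}, 3)])
```

The exponential enumeration here is precisely what makes local search over the 0-1 formulation attractive for realistic auction sizes.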
60

Hardware Acceleration of Electronic Design Automation Algorithms

Gulati, Kanupriya, December 2009
With the advances in very large scale integration (VLSI) technology, hardware is going parallel. Software, which was traditionally designed to execute on single-core microprocessors, now faces the tough challenge of taking advantage of the parallelism made available by the scaling of hardware. The work presented in this dissertation studies the acceleration of electronic design automation (EDA) software on several hardware platforms, such as custom integrated circuits (ICs), field-programmable gate arrays (FPGAs), and graphics processors. This dissertation concentrates on a subset of EDA algorithms that are heavily used in the VLSI design flow and exhibit varying degrees of inherent parallelism. In particular, Boolean satisfiability, Monte Carlo based statistical static timing analysis, circuit simulation, fault simulation, and fault table generation are explored. The architectural and performance tradeoffs of implementing the above applications on these alternative platforms (in comparison to their implementation on a single-core microprocessor) are studied. In addition, this dissertation presents an automated approach to accelerating uniprocessor code using a graphics processing unit (GPU). The key idea is to partition the software application into kernels in an automated fashion, such that multiple instances of these kernels, when executed in parallel on the GPU, can maximally benefit from the GPU's hardware resources. The work presented in this dissertation demonstrates that several EDA algorithms can be successfully rearchitected to maximally harness their performance on alternative platforms such as custom-designed ICs, FPGAs, and graphics processors, obtaining speed-ups of up to 800X. The approaches in this dissertation collectively aim to enable the computer-aided design (CAD) community to accelerate EDA algorithms on arbitrary hardware platforms.
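
The data-parallel flavour of these accelerations can be shown in miniature with bit-parallel fault simulation, where one machine word carries one bit per test pattern (toy circuit and fault invented for the example; a GPU scales the same idea to thousands of lanes):

```python
def simulate(a, b, c):
    """g = (a AND b) XOR c, evaluated over many patterns at once:
    each argument packs one bit per test pattern into a Python int."""
    return (a & b) ^ c

a, b, c = 0b1100, 0b1010, 0b0110     # four patterns, bit i = pattern i
good = simulate(a, b, c)

# Inject a stuck-at-0 fault on the AND output (mask it to zero) and
# XOR against the fault-free response to see which patterns detect it.
faulty = 0 ^ c
detected = good ^ faulty
print(f"patterns detecting the fault: {detected:04b}")   # here: 1000
```

One pass over the circuit evaluates every packed pattern simultaneously, which is why fault simulation and fault table generation map so naturally onto wide parallel hardware.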
