1

On the automated verification of symmetric-key cryptographic algorithms: an approach based on SAT-solvers

Lafitte, Frédéric 19 September 2017 (has links)
A cryptographic protocol is a structured exchange of messages protected by means of cryptographic algorithms. Computer security in general relies heavily on these protocols and algorithms; in turn, these rely absolutely on smaller components called primitives. As technology advances, computers have reached a cost and a degree of miniaturisation conducive to their proliferation throughout society in the form of software-controlled, network-enabled things. As these things find their way into environments where security is critical, their protection ultimately relies on primitives; if a primitive fails, all security solutions (protocols, policies, etc.) that are built on top of it are likely to offer no security at all. Lightweight symmetric-key primitives, in particular, will play a critical role. The security of protocols is frequently verified using formal and automated methods. Concerning algorithms and public-key primitives, formal proofs are often used, although they are somewhat error prone, and current efforts aim to automate them. On the other hand, symmetric-key primitives are still analysed in a rather ad-hoc manner. Since their security is only guaranteed by the test of time, they traditionally have a built-in security margin. Despite being paramount to the security of embedded devices, lightweight primitives appear to have a smaller security margin, and researchers would greatly benefit from automated tools in order to strengthen the test of time. In their seminal work back in 2000, Massacci and Marraro proposed to formulate primitives in propositional logic and to use SAT solvers to automatically verify their properties. At that time, SAT solvers were quite different from what they have become today; the continuous improvement of their performance makes them an even better choice for a verification back-end. The performance of SAT solvers improved so much that, starting around 2006, some cryptanalysts began to use them, but mostly in order to speed up their attacks. This thesis introduces the framework CryptoSAT and shows its advantages for the purpose of verification. / Computer security relies largely on cryptographic mechanisms, which in turn depend on even more fundamental components called primitives; if a primitive fails, all the security that depends on it is doomed to fail. Computers have reached a cost and a degree of miniaturisation conducive to their proliferation in the form of embedded devices, which generally offer few computational resources, including in environments where security is paramount. Their security therefore relies heavily on so-called symmetric-key primitives, since these are the primitives best suited to the limited resources of embedded systems. Unlike all other cryptographic mechanisms, symmetric-key primitives are not mathematically proven to be free of security flaws: whereas the protection offered by cryptography can, in general, be proved formally (within a limited model), and sometimes by means of automated methods that leave little room for error, the protection offered by symmetric-key primitives is guaranteed only by the "test of time", that is, by the (lasting) resistance of these primitives to the attacks devised by the cryptology research community.
To compensate for the absence of formal guarantees, these primitives traditionally come with a "security margin", that is, additional computations, just in case, whose cost is hard to justify when computational resources are scarce. To make up for the limitations of the test of time and for shrinking security margins, this thesis revisits the work of Massacci and Marraro who, in 2000, proposed to formulate primitives in propositional logic so that their properties could be verified automatically by means of SAT solvers. At that time, SAT solvers were very different from what they have become today; the continual improvement of their performance over the years makes them an even better choice as a verification engine. In this thesis, a method was developed that allows a cryptologist to verify the properties of a symmetric-key primitive formally and automatically with SAT solvers, while abstracting away the underlying propositional logic. The usefulness of the method was then demonstrated by obtaining formal answers to questions, raised in the cryptanalysis literature, concerning potential flaws both in the design and in the implementation of certain primitives. / Doctorate in Sciences / info:eu-repo/semantics/nonPublished
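To make the CNF-encoding idea concrete, the Python sketch below encodes a toy one-round "cipher" c = p XOR k in propositional logic and recovers the key from a known plaintext/ciphertext pair. The toy cipher, the variable layout and the exhaustive "solver" are illustrative assumptions, not the CryptoSAT framework itself, which targets real primitives and off-the-shelf SAT solvers.

```python
from itertools import product

def xor_clauses(a, b, c):
    """CNF clauses (DIMACS-style signed integers) enforcing c = a XOR b."""
    return [[-a, -b, -c], [-a, b, c], [a, -b, c], [a, b, -c]]

def brute_force_sat(num_vars, clauses):
    """Toy stand-in for a SAT solver: exhaustive search, fine for a few variables."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses):
            return assignment
    return None

# Assumed variable layout: plaintext bits 1..4, key bits 5..8, ciphertext bits 9..12.
P, K, C = [1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]

clauses = []
for p, k, c in zip(P, K, C):              # toy one-round "cipher": c = p XOR k
    clauses += xor_clauses(p, k, c)

# A known plaintext/ciphertext pair pins p and c via unit clauses; solving the
# resulting formula amounts to recovering the key.
plaintext, ciphertext = [1, 0, 1, 1], [0, 0, 1, 0]
for var, bit in zip(P, plaintext):
    clauses.append([var if bit else -var])
for var, bit in zip(C, ciphertext):
    clauses.append([var if bit else -var])

model = brute_force_sat(12, clauses)
print("recovered key bits:", [int(model[v]) for v in K])   # XOR of the two vectors: [1, 0, 0, 1]
```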
2

Intersection types and higher-order model checking

Ramsay, Steven J. January 2014 (has links)
Higher-order recursion schemes are systems of equations that are used to define finite and infinite labelled trees. Since, as Ong has shown, the trees defined have a decidable monadic second order theory, recursion schemes have drawn the attention of research in program verification, where they sit naturally as a higher-order, functional analogue of Boolean programs. Driven by applications, fragments have been studied, algorithms developed and extensions proposed; the emerging theme is called higher-order model checking. Kobayashi has pioneered an approach to higher-order model checking using intersection types, from which many recent advances have followed. The key is a characterisation of model checking as a problem of intersection type assignment. This dissertation contributes to both the theory and practice of the intersection type approach. A new, fixed-parameter polynomial-time decision procedure is described for the alternating trivial automaton fragment of higher-order model checking. The algorithm uses a novel, type-directed form of abstraction refinement, in which behaviours of the scheme are distinguished according to the intersection types that they inhabit. Furthermore, by using types to reason about acceptance and rejection simultaneously, the algorithm is able to converge on a solution from two sides. An implementation, Preface, and an extensive body of evidence demonstrate empirically that the algorithm scales well to schemes of several thousand rules. A comparison with other tools on benchmarks derived from current practice and the related literature puts it well beyond the state-of-the-art. A generalisation of the intersection type approach is presented in which higher-order model checking is seen as an instance of exact abstract interpretation. Intersection type assignment is used to characterise a general class of safety checking problems, defined independently of any particular representation (such as automata) for a class of recursion schemes built over arbitrary constants. Decidability of any problem in the class is an immediate corollary. Moreover, the work looks beyond whole-program verification, the traditional territory of model checking, by giving a natural treatment of higher-type properties, which are sets of functions.
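The sketch below illustrates the type-based view in the simplest possible setting: an order-0 recursion scheme (a regular tree grammar) checked against a deterministic trivial automaton, where intersection types degenerate to sets of automaton states computed as a greatest fixpoint. The grammar, automaton and encoding are assumptions made for illustration; the thesis treats genuinely higher-order schemes, where the types are proper intersection types.

```python
# Order-0 recursion scheme (a regular tree grammar): each nonterminal rewrites
# to one term; terms are (terminal, subterm, ...) tuples or nonterminal names.
scheme = {
    "S": ("br", "A", "S"),       # S generates an infinite spine of 'br' nodes
    "A": ("a", ("end",)),        # A generates the finite tree a(end)
}

# Deterministic trivial automaton: delta[state][terminal] -> states for the children.
# A tree is accepted from a state iff delta is defined at every node of the tree.
delta = {
    "q0": {"br": ("q1", "q0")},
    "q1": {"a": ("q2",), "end": ()},
    "q2": {"end": ()},
}
states = set(delta)

def accepted(term, q, assign):
    """Is the tree generated by `term` accepted from state q, assuming `assign`
    maps each nonterminal to the states it is (so far) believed to accept from?"""
    if isinstance(term, str):                      # nonterminal
        return q in assign[term]
    symbol, children = term[0], term[1:]
    if symbol not in delta[q]:
        return False
    child_states = delta[q][symbol]
    return len(child_states) == len(children) and all(
        accepted(child, qc, assign) for child, qc in zip(children, child_states))

# Greatest fixpoint: start optimistically and prune states that cannot be justified.
assign = {n: set(states) for n in scheme}
changed = True
while changed:
    changed = False
    for n, rhs in scheme.items():
        keep = {q for q in assign[n] if accepted(rhs, q, assign)}
        if keep != assign[n]:
            assign[n], changed = keep, True

print(assign)    # {'S': {'q0'}, 'A': {'q1'}}: the generated tree is accepted from q0
```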
3

Small model theorems for data independent systems in Alloy

Momtahan, Lee January 2007 (has links)
A system is data independent in a type T if the only operations allowed on variables of type T are input, output, assignment and equality testing. This property can be exploited to give procedures for the automatic verification of such systems independently of the instance of the type T. Alloy is an extension of first-order logic for modelling software systems. Alloy has a fully automatic analyzer which attempts to refute Alloy formulas by searching for counterexamples within a finite scope. However, failure to find a counterexample does not prove the formula correct. A small model theorem is a theorem which shows that if a formula has a model then it has a model within some finite scope. The contribution of this thesis is to give a small model theorem which applies when modelling data-independent systems in Alloy. The theorem allows one to detect automatically whether an Alloy formula is data independent in some type T and then calculate a threshold scope for T, thereby completing the analysis of the automatic analyzer with respect to the type T. We derive the small model theorem using a model-theoretic approach. We build on the standard semantics of the Alloy language and introduce a more abstract interpretation of formulas, by way of a Galois insertion. This more abstract interpretation gives the same truth value as the original interpretation for many formulas. Indeed we show that this property holds for any formula built with a limited set of language constructors which we call data-independent constructors. The more abstract interpretation is designed so that it often lies within a finite scope and we can calculate whether this is the case and exactly how big the finite scope need be from the types of the free variables in the formula. In this way we can show that if a formula has any instance or counterexample at all then it has one within a threshold scope, the size of which we can calculate.
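A minimal sketch of the two steps described above, detecting data independence syntactically and computing a threshold scope, is given below. The formula encoding, the operator set and the specific threshold rule (one atom per free variable or constant of type T, plus one spare) are simplifying assumptions for illustration, not the Alloy-level theorem proved in the thesis.

```python
# Formulas are nested tuples: ("forall"/"exists", var, type, body), ("eq", x, y),
# ("and"/"or"/"implies", f, g), ("not", f), or applications ("app", name, arg, ...).
# This encoding is an assumption of the sketch; real Alloy formulas are far richer.

def data_independent(formula, T, var_types):
    """Check that type T is only touched through quantification and equality tests."""
    op = formula[0]
    if op == "eq":
        return True                                     # equality testing is allowed
    if op in ("forall", "exists"):
        _, var, typ, body = formula
        return data_independent(body, T, {**var_types, var: typ})
    if op in ("and", "or", "implies"):
        return all(data_independent(f, T, var_types) for f in formula[1:])
    if op == "not":
        return data_independent(formula[1], T, var_types)
    # any other operation applied to a T-typed argument breaks data independence
    return all(var_types.get(arg) != T for arg in formula[1:])

def threshold_scope(formula, T, free_var_types):
    """An assumed, conservative threshold: one atom of T per free variable or
    constant of type T, plus one spare value."""
    if not data_independent(formula, T, dict(free_var_types)):
        return None                                     # the theorem does not apply
    return sum(1 for t in free_var_types.values() if t == T) + 1

phi = ("forall", "x", "T", ("or", ("eq", "x", "a"), ("eq", "x", "b")))
print(threshold_scope(phi, "T", {"a": "T", "b": "T"}))  # -> 3: a scope of 3 suffices for T
```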
4

Precise verification of C programs

Lewis, Matt January 2014 (has links)
Most current approaches to software verification are one-sided -- a safety prover will try to prove that a program is safe, while a bug-finding tool will try to find bugs. It is rare to find an analyser that is optimised for both tasks, which is problematic since it is hard to know in advance whether a program you wish to analyse is safe or not. The result of taking a one-sided approach to verification is false alarms: safety provers will often claim that safe programs have errors, while bug-finders will often be unable to find errors in unsafe programs. Orthogonally, many software verifiers are designed for reasoning about idealised programming languages that may not have widespread use. A common assumption made by verification tools is that program variables can take arbitrary integer values, while programs in most common languages use fixed-width bitvectors for their variables. This can have a real impact on the verification, leading to incorrect claims by the verifier. In this thesis we will show that it is possible to analyse C programs without generating false alarms, even if they contain unbounded loops, use non-linear arithmetic and have integer overflows. To do this, we will present two classes of analysis based on underapproximate loop acceleration and second-order satisfiability respectively. Underapproximate loop acceleration addresses the problem of finding deep bugs. By finding closed forms for loops, we show that deep bugs can be detected without unwinding the program and that this can be done without introducing false positives or masking errors. We then show that programs accelerated in this way can be optimised by inlining trace automata to reduce their reachability diameter. This inlining allows acceleration to be used as a viable technique for proving safety, as well as finding bugs. In the second part of the thesis, we focus on using second-order logic for program analysis. We begin by defining second-order SAT: an extension of propositional SAT that allows quantification over functions. We show that this problem is NEXPTIME-complete, and that it is polynomial time reducible to finite-state program synthesis. We then present a fully automatic, sound and complete algorithm for synthesising C programs from a specification written in C. Our approach uses a combination of bounded model checking, explicit-state model checking and genetic programming to achieve surprisingly good performance for a problem with such high complexity. We conclude by using second-order SAT to precisely and directly encode several program analysis problems including superoptimisation, de-obfuscation, safety and termination for programs using bitvector arithmetic and dynamically allocated lists.
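The sketch below illustrates underapproximate acceleration on the simplest possible example: a counting loop over 32-bit unsigned integers, whose closed form reduces reachability of a bad value to solving a linear congruence for the iteration count instead of unwinding the loop. The loop, the bit-width and the target value are assumptions chosen for illustration; the thesis handles a much richer class of loops and properties.

```python
from math import gcd

WIDTH = 32
MOD = 1 << WIDTH                  # C-style unsigned wrap-around

def accelerate_counting_loop(x0, step, bad):
    """Closed form for the loop `while (*) x += step;` over WIDTH-bit unsigned
    integers: after n iterations, x == (x0 + n*step) mod 2^WIDTH.  Instead of
    unwinding, solve the linear congruence step*n == (bad - x0) (mod 2^WIDTH)
    for the iteration count n; a solution means the bad value is reachable."""
    d = (bad - x0) % MOD
    g = gcd(step, MOD)
    if d % g:
        return None               # the bad value is unreachable
    n = (d // g) * pow(step // g, -1, MOD // g) % (MOD // g)
    return n                      # smallest iteration count that hits `bad`

# A bad value that plain loop unwinding would only reach after 700 million iterations:
print(accelerate_counting_loop(x0=0, step=6, bad=4_200_000_000))   # -> 700000000
```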
5

Incremental modelling for verified communication architectures

Boehm, Peter January 2011 (has links)
Modern computer systems are advancing from multi-core to many-core designs, and systems-on-chip (SoCs) are becoming increasingly complex while integrating a great variety of components, thus constituting complex distributed systems. Such architectures rely on extremely complex communication protocols to exchange data with required performance. Arguing formally about the correctness of communication is an acknowledged verification challenge. This thesis presents a generic framework that formalises the idea of incremental modelling and step-wise verification to tackle this challenge: to control the overall complexity, features are added incrementally to a simple initial model and the complexity of each feature is encapsulated into an independent modelling step. Two main strategies reduce the verification effort. First, models are constructed with verification support in mind and the verification process is spread over the modelling process. Second, generic correctness results for framework components allow the verification to be reduced to discharging local assumptions when a component is instantiated. Models in the framework are based on abstract state machines formalised in higher order logic using the Isabelle theorem prover. Two case studies show the utility and breadth of the approach: the ARM AMBA Advanced High-performance Bus protocol, an arbiter-based master-slave bus protocol, represents the family of SoC protocols; the PCI Express protocol, an off-chip point-to-point protocol, illustrates the application of the framework to sophisticated, performance-related features of current and future on-chip protocols. The presented methodology provides an alternative to the traditional monolithic and post-hoc verification approach.
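A toy rendering of the incremental-modelling idea is sketched below: a base bus model is refined by a feature that wraps its step relation, so each feature remains an independent modelling step. The Python encoding, the model and the arbiter are assumptions made purely for illustration; the thesis formalises such models as abstract state machines in Isabelle/HOL.

```python
# Base model: a bus where, in each step, any requesting master may transfer.
# A "feature" takes a step relation and returns a refined one, mirroring the idea
# of adding one independent modelling step at a time.
def base_step(state, requests):
    """Any single requesting master transfers one item to the slave."""
    successors = []
    for master in requests:
        nxt = dict(state)
        nxt["delivered"] = state["delivered"] + [(master, state["clock"])]
        nxt["clock"] += 1
        successors.append(nxt)
    return successors or [dict(state, clock=state["clock"] + 1)]

def add_priority_arbiter(step):
    """Feature: only the lowest-numbered requesting master may proceed."""
    def arbitrated(state, requests):
        granted = {min(requests)} if requests else set()
        return step(state, granted)
    return arbitrated

# Incremental model = base model + arbiter feature.
model = add_priority_arbiter(base_step)
initial = {"clock": 0, "delivered": []}
print(model(initial, {2, 0, 1}))   # only master 0 transfers at clock 0
```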
6

Forward looking logics and automata

Ley, Clemens January 2011 (has links)
This thesis is concerned with extending properties of regular word languages to richer structures. We consider intricate properties like the relationship between one-way and two-way temporal logics, minimization of automata, and the ability to effectively characterize logics. We investigate whether these properties can be extended to tree languages or word languages over an infinite alphabet. It is known that linear temporal logic (LTL) is as expressive as first-order logic over finite words [Kam68, GPSS80]. LTL is a unidirectional logic, that can only navigate forwards in a word, hence it is quite surprising that it can capture all of first-order logic. In fact, one of the main ideas of the proof of [GPSS80] is to show that the expressiveness of LTL is not increased if modalities for navigating backwards are added. It is also known that an extension of bidirectional LTL to ordered trees, called Conditional XPath, is first-order complete [Mar04]. We investigate whether the unidirectional fragment of Conditional XPath is also first-order complete. We show that this is not the case. In fact we show that there is a strict hierarchy of expressiveness consisting of languages that are all weaker than first-order logic. Unidirectional Conditional XPath is contained in the lowest level of this hierarchy. In the second part of the thesis we consider data word languages. That is, word languages over an infinite alphabet. We extend the theorem of Myhill and Nerode to a class of automata for data word languages, called deterministic finite memory automata (DMA). We give a characterization of the languages that are accepted by DMA, and also provide an algorithm for minimizing DMA. Finally we extend theorems of Büchi, Schützenberger, McNaughton, and Papert to data word languages. A theorem of Büchi states that a language is regular iff it can be defined in monadic second-order logic. Schützenberger, McNaughton, and Papert have provided an effective characterization of first-order logic, that is, an algorithm for deciding whether a regular language can be defined in first-order logic. We provide a counterpart of Büchi's theorem for data languages. More precisely we define a new logic and we show that it has the same expressiveness as non-deterministic finite memory automata. We then turn to a smaller class of data languages, those that are recognized by algebraic objects called orbit finite data monoids. We define a second new logic and show that it can define precisely the languages accepted by orbit finite data monoids. We provide an effective characterization of a first-order variant of this second logic, as well as of restrictions of first-order logic, such as its two variable fragment and local variants.
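As a small illustration of automata over infinite alphabets, the sketch below runs a one-register deterministic finite memory automaton for the data-word language "the first data value occurs again later". The language and its encoding are assumptions chosen for brevity rather than examples taken from the thesis.

```python
# A one-register deterministic finite memory automaton for the data-word language
# "some later position carries the same data value as the first one".
def dfma_accepts(word):
    state, register = "start", None
    for datum in word:
        if state == "start":                          # store the first data value
            register, state = datum, "seen_first"
        elif state == "seen_first" and datum == register:
            state = "accept"                          # the first value reappeared
        # otherwise the state is unchanged; the register is never overwritten
    return state == "accept"

print(dfma_accepts(["a", "b", "c", "a"]))   # True:  'a' reappears
print(dfma_accepts(["a", "b", "c", "d"]))   # False: it never does
print(dfma_accepts([5, 7, 5]))              # data values range over an infinite alphabet
```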
7

Techniques and tools for the verification of concurrent systems

Palikareva, Hristina January 2012 (has links)
Model checking is an automatic formal verification technique for establishing correctness of systems. It has been widely used in industry for analysing and verifying complex safety-critical systems in application domains such as avionics, medicine and computer security, where manual testing is infeasible and even minor errors could have dire consequences. In our increasingly parallelised world, concurrency has become pivotal and is seamlessly woven into programming paradigms; it remains, however, extremely challenging when it comes to modelling and establishing correctness of intended behaviour. Tools for model checking concurrent systems face severe limitations due to scalability problems arising from the need to examine all possible interleavings (schedules) of executions of parallel components. Moreover, concurrency poses additional challenges to model checking, giving rise to phenomena such as nondeterminism, deadlock, livelock, etc. In this thesis we focus on adapting and developing novel model-checking techniques for concurrent systems in the setting of the process algebra CSP and its primary model checker FDR. CSP allows for a compact modelling and precise analysis of event-based concurrency, grounded on synchronous message passing as a fundamental mechanism of inter-component communication. In particular, we investigate techniques based on symbolic model checking, static analysis and abstraction, all of them exploiting the compositionality inherent in CSP and aiming to increase the scale of systems that can be tractably analysed. Firstly, we investigate symbolic model-checking techniques based on Boolean satisfiability (SAT), which we adapt for the traces model of CSP. We tailor bounded model checking (BMC), which can be used for bug detection, and temporal k-induction, which aims at establishing inductiveness of properties and is capable of both bug finding and establishing the correctness of systems. Secondly, we propose a static analysis framework for establishing livelock freedom of CSP processes, with lessons for other concurrent formalisms. As opposed to traditional exhaustive state-space exploration, our framework employs a system of rules on the syntax of a process to calculate a sound approximation of its fair/co-fair sets of events. The rules either safely classify a process as livelock-free or report inconclusiveness, thereby trading accuracy for speed. Finally, we develop a series of abstraction/refinement schemes for the traces, stable-failures and failures-divergences models of CSP and embed them into a fully automated and compositional CEGAR framework. For each of those techniques we present an implementation and an experimental evaluation on a set of CSP benchmarks.
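The flavour of the two SAT-based techniques can be seen on a tiny explicit-state system: bounded model checking searches for a counterexample up to a depth bound, while k-induction additionally checks that no run of k safe states can step into an unsafe one. The transition system, property and bounds below are assumptions for illustration; the thesis applies these techniques symbolically to the traces model of CSP inside FDR.

```python
from itertools import product

# A tiny transition system standing in for a process: a counter that steps
# +2 modulo 6 starting from 0 and must never reach the value 5.
STATES = range(6)
def init(s):     return s == 0
def step(s, t):  return t == (s + 2) % 6
def safe(s):     return s != 5

def bmc(k):
    """Bounded model checking: search for a counterexample path of length <= k."""
    for n in range(k + 1):
        for path in product(STATES, repeat=n + 1):
            if init(path[0]) and all(step(a, b) for a, b in zip(path, path[1:])) \
                    and not safe(path[-1]):
                return path                      # a concrete bug trace
    return None

def k_induction(k):
    """Base case: no counterexample within k steps.  Inductive step: no run of
    k safe states can step into an unsafe one.  Returns True (proved), a path
    (bug found), or None (inconclusive at this k)."""
    cex = bmc(k)
    if cex is not None:
        return cex
    for path in product(STATES, repeat=k + 1):
        if all(step(a, b) for a, b in zip(path, path[1:])) \
                and all(safe(s) for s in path[:-1]) and not safe(path[-1]):
            return None
    return True

print(bmc(6))           # None: no counterexample up to depth 6
print(k_induction(1))   # None: 1-induction is blocked by the spurious step 3 -> 5
print(k_induction(3))   # True: 3-induction proves the property
```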
8

Quantitative verification of real-time properties with application to medical devices

Diciolla, Marco January 2014 (has links)
Probabilistic model checking is a powerful technique used to ensure the correct functioning of systems which exhibit real-time and stochastic behaviours. Many such systems are embedded and used in safety-critical situations; implantable medical devices are a prime example. This thesis aims to develop a formal model-based framework that is tailored for the analysis and verification of cardiac pacemakers. The contributions are novel approaches for the automatic verification and validation of real-time properties over continuous-time models, which are applicable to software embedded in medical devices. First, we address the problem of model checking continuous-time Markov chain (CTMC) models against real-time specifications given in the form of temporal logic, namely, metric temporal logic (MTL) and linear duration properties (LDP), or as timed automata (TA). The main question that we address is “given a continuous-time Markov chain, what is the probability of the set of timed paths that satisfy the real-time property under consideration?”. We provide novel algorithms to approximate the probability through generating systems of linear inequalities over variables that represent the waiting times in system states, and then solving multidimensional integrals over this set. Second, we present a model-based framework to support the design and verification of pacemakers against real-time properties. The pacemaker is modelled as a network of timed automata, whereas the human heart is modelled either as a network of timed automata or as a network of hybrid automata. Our framework can be instantiated with personalised heart models whose parameters can be learnt from patient data, and we have done so to validate our approach. We introduce property patterns and the counting metric temporal logic (CMTL) in order to specify the properties of interest. We provide new verification algorithms for networks of timed or hybrid automata against property patterns and CMTL. Finally, we pose and solve the parameter synthesis problem, i.e., given a network of timed automata containing model parameters, an objective function and a CMTL formula, find the set of parameter valuations, if any exist, that satisfy the CMTL formula and maximise the objective function. The framework has been implemented using Simulink, Matlab and Python code. Extensive experimental results on pacemaker models have been carried out and discussed in detail. The techniques developed in this thesis can assist in the design and verification of software embedded in medical devices.
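To give a feel for the kind of question being answered, the sketch below estimates, by simulation, the probability that a small CTMC satisfies a time-bounded reachability property (an instance of an MTL "eventually"). This Monte-Carlo approximation is an assumption of the sketch and is not the thesis's method, which computes such probabilities exactly via systems of linear inequalities over waiting times and multidimensional integrals; the states and rates are likewise invented for illustration.

```python
import random

# A tiny CTMC: rates[s][t] is the transition rate from state s to state t.
rates = {
    "sensing": {"pacing": 0.8, "fault": 0.05},
    "pacing":  {"sensing": 1.2, "fault": 0.02},
    "fault":   {},                                   # absorbing failure state
}

def satisfies(horizon, rng):
    """Simulate one timed path and report whether it reaches 'fault' within
    `horizon` time units, i.e. whether it satisfies F[0,horizon] fault."""
    state, clock = "sensing", 0.0
    while rates[state]:
        exit_rate = sum(rates[state].values())
        clock += rng.expovariate(exit_rate)          # exponential waiting time
        if clock > horizon:
            return False
        r, acc = rng.random() * exit_rate, 0.0
        for successor, rate in rates[state].items(): # successor chosen w.p. rate/exit_rate
            acc += rate
            if r <= acc:
                state = successor
                break
    return True                                      # absorbed in 'fault' before the horizon

rng = random.Random(0)
samples = 100_000
estimate = sum(satisfies(5.0, rng) for _ in range(samples)) / samples
print(f"P(fault within 5 time units) ~ {estimate:.3f}")
```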
9

Abstract satisfaction

Haller, Leopold Carl Robert January 2013 (has links)
This dissertation shows that satisfiability procedures are abstract interpreters. This insight provides a unified view of program analysis and satisfiability solving and enables technology transfer between the two fields. The framework underlying these developments provides systematic recipes that show how intuition from satisfiability solvers can be lifted to program analyzers, how approximation techniques from program analyzers can be integrated into satisfiability procedures and how program analyzers and satisfiability solvers can be combined. Based on this work, we have developed new tools for checking program correctness and for solving satisfiability of quantifier-free first-order formulas. These tools outperform existing approaches. We introduce abstract satisfaction, an algebraic framework for applying abstract interpretation to obtain sound, but potentially incomplete satisfiability procedures. The framework allows the operation of satisfiability procedures to be understood in terms of fixed point computations involving deduction and abduction transformers on lattices. It also enables satisfiability solving and program correctness to be viewed as the same algebraic problem. Using abstract satisfaction, we show that a number of satisfiability procedures can be understood as abstract interpreters, including Boolean constraint propagation, the dpll and cdcl algorithms, Stålmarck's procedure, the dpll(t) framework and solvers based on congruence closure and the Bellman-Ford algorithm. Our work leads to a novel understanding of satisfiability architectures as refinement procedures for abstract analyses and allows us to relate these procedures to independent developments in program analysis. We use this perspective to develop Abstract Conflict-Driven Clause Learning (acdcl), a rigorous, lattice-based generalization of cdcl, the central algorithm of modern satisfiability research. The acdcl framework provides a solution to the open problem of lifting cdcl to new problem domains and can be instantiated over many lattices that occur in practice. We provide soundness and completeness arguments for acdcl that apply to all such instantiations. We evaluate the effectiveness of acdcl by investigating two practical instantiations: fp-acdcl, a satisfiability procedure for the first-order theory of floating point arithmetic, and cdfpl, an interval-based program analyzer that uses cdcl-style learning to improve the precision of a program analysis. fp-acdcl is faster than competing approaches in 80% of our benchmarks and it is faster by more than an order of magnitude in 60% of the benchmarks. Out of 33 safe programs, cdfpl proves 16 more programs correct than a mature interval analysis tool and can conclusively determine the presence of errors in 24 unsafe benchmarks. Compared to bounded model checking, cdfpl is on average at least 260 times faster on our benchmark set.
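The fixed-point reading can be made concrete for the simplest procedure mentioned above, Boolean constraint propagation: unit propagation is the repeated application of a deduction transformer on the lattice of partial assignments until a fixed point (or the conflicting bottom element) is reached. The encoding below is a minimal sketch of that reading, not the tooling developed in the dissertation.

```python
# Clauses over DIMACS-style literals; a partial assignment maps each variable to
# True, False, or None (unknown).  "BOTTOM" is the conflicting lattice element.
def deduce(clauses, assignment):
    """One application of the deduction transformer: unit propagation."""
    new = dict(assignment)
    for clause in clauses:
        if any(new.get(abs(l)) == (l > 0) for l in clause):
            continue                                 # clause already satisfied
        unassigned = [l for l in clause if new.get(abs(l)) is None]
        if not unassigned:
            return "BOTTOM"                          # conflict: clause falsified
        if len(unassigned) == 1:                     # unit clause forces a literal
            lit = unassigned[0]
            new[abs(lit)] = lit > 0
    return new

def propagate(clauses, assignment):
    """Least fixed point of `deduce` above the initial partial assignment."""
    while True:
        nxt = deduce(clauses, assignment)
        if nxt == assignment or nxt == "BOTTOM":
            return nxt
        assignment = nxt

clauses = [[1, 2], [-1, 3], [-3, -2]]
print(propagate(clauses, {1: True, 2: None, 3: None}))           # {1: True, 2: False, 3: True}
print(propagate(clauses + [[-3]], {1: True, 2: None, 3: None}))  # BOTTOM: a conflict is deduced
```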
10

Tractable query answering for description logics via query rewriting

Perez-Urbina, Hector M. January 2010 (has links)
We consider the problem of answering conjunctive queries over description logic knowledge bases via query rewriting. Given a conjunctive query Q and a TBox T, we compute a new query Q′ that incorporates the semantic consequences of T such that, for any ABox A, evaluating Q over T and A can be done by evaluating the new query Q′ over A alone. We present RQR—a novel resolution-based rewriting algorithm for the description logic ELHIO¬ that generalizes and extends existing approaches. RQR not only handles a spectrum of logics ranging from DL-Lite_core up to ELHIO¬, but it is worst-case optimal with respect to data complexity for all of these logics; moreover, given the form of the rewritten queries, their evaluation can be delegated to off-the-shelf (deductive) database systems. We use RQR to derive the novel complexity results that conjunctive query answering for ELHIO¬ and DL-Lite+ are, respectively, PTime and NLogSpace complete with respect to data complexity. In order to show the practicality of our approach, we present the results of an empirical evaluation. Our evaluation suggests that RQR, enhanced with various straightforward optimizations, can be successfully used in conjunction with a (deductive) database system in order to answer queries over knowledge bases in practice. Moreover, in spite of being a more general procedure, RQR will often produce significantly smaller rewritings than the standard query rewriting algorithm for the DL-Lite family of logics.
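A drastically simplified sketch of the rewriting idea is shown below, handling only atomic concept inclusions: the TBox consequences are folded into the query by replacing each atom with every concept subsumed by it, yielding a union of conjunctive queries that can be evaluated over the ABox alone. The vocabulary and the restriction to atomic inclusions are assumptions of the sketch; RQR itself is a resolution-based procedure covering logics up to ELHIO¬.

```python
from itertools import product

# TBox: atomic concept inclusions "Sub is subsumed by Sup" (all names assumed).
tbox = [("Professor", "Teacher"), ("Lecturer", "Teacher"), ("Teacher", "Staff")]

def specialisations(concept, tbox):
    """All concepts whose instances are (transitively) instances of `concept`."""
    out, frontier = {concept}, {concept}
    while frontier:
        frontier = {sub for sub, sup in tbox if sup in frontier} - out
        out |= frontier
    return out

def rewrite(query, tbox):
    """Rewrite a conjunctive query (a list of (concept, variable) atoms) into a
    union of conjunctive queries that folds in the TBox consequences."""
    options = [[(c, var) for c in specialisations(concept, tbox)]
               for concept, var in query]
    return [list(combo) for combo in product(*options)]

def answer(query, tbox, abox):
    """Evaluate the rewritten query over the ABox alone (single query variable)."""
    individuals = {ind for _, ind in abox}
    results = set()
    for conjunct in rewrite(query, tbox):
        for ind in individuals:
            if all((concept, ind) in abox for concept, _ in conjunct):
                results.add(ind)
    return results

abox = {("Professor", "ada"), ("Lecturer", "bob"), ("Visitor", "eve")}
print(answer([("Staff", "x")], tbox, abox))   # {'ada', 'bob'}: inferred via the TBox
```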
