21. Program analysis with interpolants. Weissenbacher, Georg. January 2010.
This dissertation discusses novel techniques for interpolation-based software model checking, an approximate method which uses Craig interpolation to compute invariants of programs. Our work addresses two aspects of program analyses based on model checking: verification (the construction of correctness proofs for programs) and falsification (the detection of counterexamples that violate the specification).

In Hoare's calculus, a proof of correctness comprises assertions which establish that a program adheres to its specification. The principal challenge is to derive appropriate assertions and loop invariants. Contemporary software verification tools use Craig interpolation (as opposed to traditional predicate transformers such as the weakest precondition) to derive approximate assertions. The performance of the model checker is contingent on the Craig interpolants computed.

We present novel interpolation techniques which provide the following advantages over existing methods. Firstly, the resulting interpolants are sound with respect to the bit-level semantics of programs, which is an improvement over interpolation systems that use linear arithmetic over the reals to approximate bit-vector arithmetic and/or do not support bit-level operations. Secondly, our interpolation systems afford us a choice of interpolants and enable us to fine-tune their logical strength and structure. In contrast, existing procedures are limited to a single ad-hoc choice of an interpolant.

Interpolation-based verification tools are typically forced to refine an initial approximation repeatedly in order to achieve the accuracy required to establish or refute the correctness of a program. The detection of a counterexample containing a repetitive construct may necessitate one refinement step (involving the computation of additional interpolants) for each iteration of the loop. We present a heuristic that aims to avoid the repeated and computationally expensive construction of interpolants, thus enabling the detection of deeply buried defects such as buffer overflows. Finally, we present an implementation of our techniques and evaluate them on a set of standardised device driver and buffer overflow benchmarks.
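To fix intuition for the central notion, here is a toy worked example (ours, over the integers; the formulas are not drawn from the thesis). Given an unsatisfiable conjunction A ∧ B, a Craig interpolant I follows from A, contradicts B, and mentions only the symbols shared by A and B:

```latex
% Toy Craig interpolant (illustrative only).
% A and B share only the variable y.
\begin{align*}
A &= (x = 0) \land (y = x + 1), \qquad B = (y \le 0),\\
I &= (y \ge 1): \quad A \Rightarrow I, \quad I \land B \text{ is unsatisfiable}, \quad
\mathrm{vars}(I) \subseteq \mathrm{vars}(A) \cap \mathrm{vars}(B).
\end{align*}
```

Note that I = (y = 1) would also do; the existence of several valid interpolants of differing logical strength is exactly the degree of freedom that the interpolation systems of this thesis expose.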
22. Formal methods for the analysis of wireless network protocols. Fruth, Matthias. January 2011.
In this thesis, we present novel software technology for the analysis of wireless networks, an emerging area of computer science. To address the widely acknowledged lack of formal foundations in this field, probabilistic model checking, a formal method for verification and performance analysis, is used. In contrast to testing and simulation, it systematically explores the full state space and therefore allows reasoning about all possible behaviours of a system. This thesis contributes to the design, modelling, and analysis of ad-hoc networks and randomised distributed coordination protocols.

First, we present a new hybrid approach that effectively combines probabilistic model checking and state-of-the-art models from the simulation community in order to improve the reliability of design and analysis of wireless sensor networks and their protocols. We describe algorithms for the automated generation of models for both analysis methods and their implementation in a tool.

Second, we study spatial properties of wireless sensor networks, mainly with respect to Quality of Service and energy properties.

Third, we investigate the contention resolution protocol of the networking standard ZigBee. We build a generic stochastic model for this protocol and analyse its Quality of Service and energy properties. Furthermore, we assess the applicability of different interference models.

Fourth, we explore slot allocation protocols, which serve as a bandwidth allocation mechanism for ad-hoc networks. We build a generic model for this class of protocols, study real-world protocols, and optimise protocol parameters with respect to Quality of Service and energy constraints. We combine this with the novel formalisms for wireless communication and interference models, and finally we optimise local (node) and global (network) routing policies.

This is the first application of probabilistic model checking both to protocols of the ZigBee standard and to protocols for slot allocation.
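To make the flavour of such analyses concrete, the following sketch (ours, not the thesis's tool; the chain models a hypothetical lossy send-retry protocol) computes a reachability probability in a small discrete-time Markov chain by value iteration, the kind of quantitative query that underlies Quality of Service properties:

```go
package main

import "fmt"

// Value iteration for reachability in a tiny discrete-time Markov
// chain: p[s] converges to the probability of eventually reaching
// the target state from state s.
func main() {
	// trans[s] lists (successor, probability) pairs.
	trans := [][]struct {
		to int
		pr float64
	}{
		0: {{1, 0.9}, {2, 0.1}}, // send: delivered w.p. 0.9, lost w.p. 0.1
		1: {{1, 1.0}},           // delivered (target, absorbing)
		2: {{0, 0.8}, {3, 0.2}}, // lost: retry w.p. 0.8, give up w.p. 0.2
		3: {{3, 1.0}},           // failed (absorbing)
	}
	target, failed := 1, 3

	p := make([]float64, len(trans))
	p[target] = 1.0
	for iter := 0; iter < 1000; iter++ {
		for s := range trans {
			if s == target || s == failed { // absorbing states keep their value
				continue
			}
			v := 0.0
			for _, e := range trans[s] {
				v += e.pr * p[e.to]
			}
			p[s] = v
		}
	}
	fmt.Printf("P(reach delivered from send) = %.4f\n", p[0]) // 0.9/0.92 ~ 0.9783
}
```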
23. Topics in monitoring and planning for embedded real-time systems. Ho, Hsi-Ming. January 2015.
The verification of real-time systems has gained much interest in the formal verification community during the past two decades. In this thesis, we investigate two real-time verification problems that benefit from the techniques normally used in untimed verification.

The first part of this thesis is concerned with the monitoring of real-time specifications. We study the expressiveness of metric temporal logics over timed words, a problem that dates back to the early 1990s. We show that the logic obtained by extending Metric Temporal Logic (MTL) with two families of new modalities is expressively complete for the Monadic First-Order Logic of Order and Metric (FO[<,+1]) in time-bounded settings. Furthermore, by allowing rational constants, expressive completeness also holds in the general (time-unbounded) setting. Finally, we incorporate several notions and techniques from LTL monitoring to obtain the first trace-length independent monitoring procedure for this logic.

The second part of this thesis concerns a decision problem regarding unmanned aerial vehicles (UAVs): given a set of targets (each ascribed with a relative deadline) and flight times between each pair of targets, is there a way to coordinate a flock of k identical UAVs so that all targets are visited infinitely often and no target is ever left unvisited for a time longer than its relative deadline? We show that the problem is PSPACE-complete even in the single-UAV case, thereby correcting an erroneous claim from the literature. We then complement this result by proposing an efficient antichain-based approach in which a delayed simulation is used to prune the state space. Experimental results clearly demonstrate the effectiveness of our approach.
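To illustrate what trace-length independence buys (a toy bounded-response property, ours; the thesis's monitoring procedure handles a far richer logic), a monitor for "every request is granted within d time units" needs to store only the pending request timestamps, never the trace itself:

```go
package main

import "fmt"

// Monitor for the toy MTL property G(request -> F_{<=d} grant) over a
// timed word: memory is bounded by the number of pending requests,
// not by the length of the trace seen so far.
type Monitor struct {
	d        float64   // relative deadline
	pending  []float64 // timestamps of unanswered requests (ascending)
	violated bool
}

func (m *Monitor) Step(t float64, event string) {
	// Expire deadlines first: any request older than d is a violation.
	for len(m.pending) > 0 && t-m.pending[0] > m.d {
		m.violated = true
		m.pending = m.pending[1:]
	}
	switch event {
	case "request":
		m.pending = append(m.pending, t)
	case "grant":
		if len(m.pending) > 0 { // a grant answers the oldest request
			m.pending = m.pending[1:]
		}
	}
}

func main() {
	m := &Monitor{d: 5.0}
	for _, e := range []struct {
		t  float64
		ev string
	}{{0, "request"}, {3, "grant"}, {4, "request"}, {10, "grant"}} {
		m.Step(e.t, e.ev)
	}
	fmt.Println("violation observed:", m.violated) // true: second request answered after 6 > 5
}
```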
24. An algebraic theory of componentised interaction. Chilton, Christopher James. January 2013.
This thesis provides a specification theory with strong algebraic and compositionality properties, allowing for the systematic construction of new components out of existing ones, while ensuring that given properties continue to hold at each stage of system development. The theory shares similarities with the interface automata of de Alfaro and Henzinger, but is linear-time in the style of Dill's trace theory, and is endowed with a richer collection of operators.

Components are assumed to communicate with one another by synchronisation of input and output actions, with the component specifying the allowed sequences of interactions between itself and the environment. When the environment produces an interaction that the component is unwilling to receive, a communication mismatch occurs, which can correspond to a run-time error or underspecification. These are modelled uniformly as inconsistencies. A linear-time refinement preorder corresponding to substitutivity preserves the absence of inconsistency under all environments, allowing for the safe replacement of components at run-time.

To build complex systems, a range of compositional operators is introduced, including parallel composition, logical conjunction and disjunction, hiding, and quotient. These can be used to examine the structural behaviour of a system, combine independently developed requirements, abstract behaviour, and incrementally synthesise missing components, respectively. It is shown that parallel composition is monotonic under refinement, conjunction and disjunction correspond to the meet and join operations on the refinement preorder, and quotient is the adjoint of parallel composition (see the schematic formulation below). Full abstraction results are presented for the equivalence defined as mutual refinement, a consequence of the refinement being the weakest preorder capturing substitutivity.

Extensions of the specification theory with progress-sensitivity (ensuring that refinement cannot introduce quiescence) and real-time constraints on when interactions may and may not occur are also presented. These theories are further complemented by assume-guarantee frameworks for supporting component-based reasoning, where contracts (characterising sets of components) separate the assumptions placed on the environment from the guarantees provided by the components. By defining the compositional operators directly on contracts, sound and complete assume-guarantee rules are formulated that preserve both safety and progress. Examples drawn from distributed systems are used to demonstrate how these rules can be used for mechanically deriving component-based designs.
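One way to write down the adjunction between quotient and parallel composition (schematic notation, ours; refinement written as a preorder):

```latex
% Quotient as the adjoint of parallel composition (schematic).
\[
  P \parallel Q \;\sqsubseteq\; R
  \quad\Longleftrightarrow\quad
  Q \;\sqsubseteq\; R / P
  \qquad \text{(whenever $R / P$ is defined).}
\]
```

Reading: Q correctly completes the partial implementation P with respect to the requirement R exactly when Q refines the quotient R / P, which is what makes the quotient suitable for incrementally synthesising missing components.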
25. Applications of process-oriented design. Whitehead, James Norman. January 2014.
Concurrency is generally considered to be difficult due to a lack of appropriate abstraction, rather than inherent complexity. Lock-based approaches to mutual exclusion are pervasive, despite the presence of models that are easier to understand, such as the message-passing model present in CSP (Communicating Sequential Processes). CSP provides a rich framework for building and reasoning about concurrent systems, but has historically required a change of programming language or paradigm in order to work with it.

The Go programming language is a modern, imperative programming language that includes native support for processes and channels. As the popularity of this language has grown, more and more people have been exposed to the fundamental ideas of CSP. There is a gap in the understanding of how a restrictive formal model can interact with and support the development of concurrent programs in a language such as Go.

Through a series of case studies and analysis, we show how the CSP concurrency model can be used as the basis for the design of a concurrent system architecture without requiring the program to be written entirely as the composition of processes. It is also possible to use the CSP process algebra to build abstract models and use model-checking tools to verify properties of a concurrent system. These models can then be used to guide the decomposition of a system into a more fine-grained concurrent system.

This thesis bridges the gap between the development of CSP-style concurrent software and the formal model of CSP. In particular, it shows that a program or programming language need not conform to a rigid structure in order for CSP to be a useful tool for the development of reliable and easy-to-understand concurrent systems.
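As a small Go illustration of this design style (ours, not drawn from the thesis's case studies): a counter whose state is owned by a single process and accessed only through channel communication, so no mutex is required:

```go
package main

import "fmt"

// counter owns its state and serialises access via channels: clients
// communicate instead of locking, in the CSP message-passing spirit.
func counter(inc <-chan struct{}, read chan<- int, done <-chan struct{}) {
	n := 0
	for {
		select {
		case <-inc:
			n++
		case read <- n: // answer a read request with the current value
		case <-done:
			return
		}
	}
}

func main() {
	inc := make(chan struct{})
	read := make(chan int)
	done := make(chan struct{})
	go counter(inc, read, done)

	for i := 0; i < 5; i++ {
		inc <- struct{}{}
	}
	fmt.Println(<-read) // 5: no data race, no mutex
	close(done)
}
```

The same process structure can be mirrored by an abstract CSP process and checked, for instance for deadlock freedom, with a CSP refinement checker such as FDR, in the spirit of the modelling approach described above.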
26. Monotonicity in shared-memory program verification. Kaiser, Alexander. January 2013.
Predicate abstraction is a key enabling technology for applying model checkers to programs written in mainstream languages. It has been used very successfully for debugging sequential system-level C code. Although model checking was originally designed for analysing concurrent systems, there is little evidence of fruitful applications of predicate abstraction to shared-variable concurrent software. The goal of the present thesis is to close this gap. We propose an algorithmic solution implementing predicate abstraction that targets safety properties in non-recursive programs executed by an unbounded number of threads, which communicate via shared memory or higher-level mechanisms, such as mutexes and broadcasts. As system-level code makes frequent use of such primitives, their correct usage is critical to ensure reliability.

Monotonicity - the property that thread actions remain executable when other threads are added to the current global state - is a natural and common feature of human-written concurrent software. It is also useful: if every thread's memory is finite, monotonicity often guarantees the decidability of safety properties even when the number of running threads is unspecified.

In this thesis, we show that the process of obtaining finite-data thread abstractions for model checking is not always compatible with monotonicity. Predicate-abstracting certain mainstream asynchronous software, such as the ticket busy-wait lock algorithm (sketched below), results in non-monotone multi-threaded Boolean programs, despite the monotonicity of the input program: the monotonicity is lost in the abstraction. As a result, the unbounded-thread Boolean programs do not give rise to well quasi-ordered systems [1], for which sound and complete safety checking algorithms are available. In fact, safety checking turns out to be undecidable for the obtained class of abstract programs, despite the finiteness of the individual threads' state spaces. Our solution is to restore the monotonicity in the abstract program, using an inexpensive closure operator that precisely preserves all safety properties of the (non-monotone) abstract program without the closure.

As a second contribution, we present a novel, sound and complete, yet empirically much improved algorithm for verifying abstractions, applicable to general well quasi-ordered systems. Our approach is to gradually widen the set of safety queries during the search by program states that involve fewer threads and are thus easier to decide, and are likely to finalise the decision on earlier queries. To counter the negative impact of "bad guesses", i.e. program states that turn out feasible, the search is supported by a parallel engine that generates such states; these are never selected for widening.

We present an implementation of our techniques and extensive experiments on multi-threaded C programs, including device driver code from FreeBSD and Solaris. The experiments demonstrate that, by exploiting monotonicity, model checking techniques - enabled by predicate abstraction - scale to realistic programs even of a few thousand lines of multi-threaded C code.
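For reference, a minimal Go rendering of the ticket busy-wait lock mentioned above (ours; the thesis analyses C programs run by unboundedly many threads). The shared counter pair relates the local states of different threads, which is the kind of state the predicate abstraction must track:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
)

// TicketLock: each thread draws a ticket and spins until it is served.
type TicketLock struct {
	next    atomic.Uint64 // next ticket to hand out
	serving atomic.Uint64 // ticket currently being served
}

func (l *TicketLock) Lock() {
	my := l.next.Add(1) - 1 // draw a fresh ticket
	for l.serving.Load() != my {
		runtime.Gosched() // busy-wait, yielding to the scheduler
	}
}

func (l *TicketLock) Unlock() {
	l.serving.Add(1) // serve the next ticket
}

func main() {
	var l TicketLock
	var wg sync.WaitGroup
	counter := 0
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			l.Lock()
			counter++ // critical section
			l.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println(counter) // always 8: mutual exclusion holds
}
```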
27. Towards putting abstract interpretation of Prolog into practice: design, implementation and evaluation of a tool to verify and optimise Prolog programs. Gobert, François. 11 December 2007.
Logic programming is appealing since it allows the programmer to concentrate on the meaning of the problem to be solved. Unfortunately, for efficiency reasons, the declarative and operational natures of Prolog do not coincide: Prolog uses an incomplete depth-first search rule, unifications and negations may be unsound, and there are extralogical features like the cut or dynamic predicates. Methodologies have been proposed to construct operationally correct and efficient Prolog code, and researchers have designed methods to automate the verification of operational properties on which the optimisation of logic programs can be based. A few tools have been implemented, but a unified framework is still lacking.
The goal of this thesis is the design, implementation and evaluation of an abstract interpretation framework for Prolog that integrates state-of-the-art techniques. The analyser is based on an original proposal that defines the notion of an abstract sequence, which allows one to verify many desirable operational properties of a logic procedure. These properties include types, modes, sharing of terms, termination, and linear relations between the sizes of input/output terms and the number of solutions to a call. A single global analysis is performed, and abstract sequences are derived at each program point.
In this thesis, we implement and evaluate the original framework and, more importantly, we overcome its limitations to make it accurate and usable in practice: the improved framework accepts any Prolog code with modules, new abstract domains and operations are added, and the language of specifications is more expressive. We also design and implement an optimiser that generates specialised code. The optimiser uses the abstract information to safely apply source-to-source transformations; these include clause and literal reordering, introduction of cuts, and removal of redundant literals. The optimiser follows a precise strategy to choose the most rewarding transformations in the best order.
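As a toy illustration (ours) of the kind of abstract domain such an analyser manipulates, consider a three-valued mode lattice for Prolog arguments whose join merges the exit modes of alternative clauses; the thesis's domain of abstract sequences is substantially richer than this:

```go
package main

import "fmt"

// A toy mode domain for Prolog arguments: definitely ground,
// definitely a free variable, or unknown (Any). Bottom is omitted
// for brevity; this is an illustration, not the thesis's domain.
type Mode int

const (
	Ground Mode = iota
	Var
	Any
)

func (m Mode) String() string { return [...]string{"ground", "var", "any"}[m] }

// join computes the least upper bound: merging information from two
// clauses of the same predicate can only lose precision.
func join(a, b Mode) Mode {
	if a == b {
		return a
	}
	return Any
}

func main() {
	// Both clauses leave the argument ground: the exit mode stays ground.
	fmt.Println(join(Ground, Ground)) // ground
	// One clause may leave it unbound: widen to any.
	fmt.Println(join(Ground, Var)) // any
}
```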
28. Program synthesis from domain specific object models. Faitelson, David. January 2008.
Automatically generating a program from its specification eliminates a large source of errors that is often unavoidable in a manual approach. While a general-purpose code generator is impossible to build, it is possible to build a practical code generator for a specific domain. This thesis investigates the theory behind Booster — a domain-specific, object-based specification language and automatic code generator. The domain of Booster is information systems — systems that consist of a rich object model in which the objects refer to each other to form a complicated network of associations. The operations of such systems are conceptually simple (changing the attributes of objects, adding or removing objects, and creating or destroying associations) but they are tricky to implement correctly.

The thesis focuses on the theoretical foundation of the Booster approach, in particular on three contributions: semantics, model completion, and code generation.

The semantics of a Booster model is a single abstract data type (ADT) where the invariants and the methods of all the classes in the model are promoted to the level of the ADT. This is different from the traditional view that considers each class as a separate ADT. The thesis argues that the Booster semantics is a better model of object-oriented systems.

The second important contribution is the idea of model completion — a process that augments the postconditions of methods with additional predicates that follow from the system's invariant and the method's original intention.

The third contribution is a simple but effective code generation technique that is based on interpreting postconditions as executable statements and uses weakest preconditions to ensure that the generated code refines its specification.
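To anchor the last point, these are the textbook weakest-precondition rules on which such a check can rest (standard definitions after Dijkstra, not Booster's actual calculus):

```latex
% Weakest preconditions for assignment and sequencing:
\[
  wp(x := e,\; Q) = Q[e/x],
  \qquad
  wp(S_1 ; S_2,\; Q) = wp(S_1,\; wp(S_2,\; Q)).
\]
% Example: wp(x := x + 1,\; x > 0) = (x + 1 > 0).
% Generated code S for a method with precondition P and postcondition Q
% is correct whenever P \Rightarrow wp(S, Q).
```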
29. Saturation methods for global model-checking pushdown systems. Hague, Matthew. January 2009.
Pushdown systems equip a finite state system with an unbounded stack memory, and are thus infinite state. By recording the call history on the stack, these systems provide a natural model for recursive procedure calls. Model-checking for pushdown systems has been well-studied. Tools implementing pushdown model-checking (e.g. Moped) are an essential back-end component of high-profile software model checkers such as SLAM, Blast and Terminator.

Higher-order pushdown systems define a more complex memory structure: a higher-order stack is a stack of lower-order stacks. These systems form a robust hierarchy closely related to the Caucal hierarchy and higher-order recursion schemes. This latter connection demonstrates their importance as models for programs with higher-order functions.

We study the global model-checking problem for (higher-order) pushdown systems. In particular, we present a new algorithm for computing the winning regions of a parity game played over an order-1 pushdown system. We then show how to compute the winning regions of two-player reachability games over order-n pushdown systems. These algorithms extend the saturation methods of Bouajjani, Esparza and Maler for order-1 pushdown systems, and Bouajjani and Meyer for higher-order pushdown systems with a single control state. These techniques begin with an automaton recognising (higher-order) stacks, and iteratively add new transitions until the automaton becomes saturated.

The reachability result, presented at FoSSaCS 2007 and in the LMCS journal, is the main contribution of the thesis. We break the saturation paradigm by adding new states to the automaton during the iteration. We identify the fixed points required for termination by tracking the updates that are applied, rather than by observing the transition structure. We give a number of applications of this result to LTL model-checking, branching-time model-checking, non-emptiness of higher-order pushdown automata and Büchi games.

Our second major contribution is the first application of the saturation technique to parity games. We begin with a mu-calculus characterisation of the winning region. This formula alternates greatest and least fixed point operators over a kind of reachability formula. Hence, we can use a version of our reachability algorithm, and modifications of the Büchi techniques, to compute the required result. The main advantages of this approach compared to existing techniques due to Cachat, Serre and Vardi et al. are that it is direct and that it is not immediately exponential in the number of control states, although the worst-case complexity remains the same.
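The order-1 saturation step that these methods build on can be stated in one line (our paraphrase of the rule of Bouajjani, Esparza and Maler). Start from an automaton accepting the target set of configurations C and repeat until nothing changes:

```latex
% Saturation rule for computing pre*(C) of an order-1 pushdown system:
\[
  \text{if } \langle p, \gamma \rangle \hookrightarrow \langle q, w \rangle
  \text{ is a pushdown rule and } q \xrightarrow{\,w\,}{}^{*} s
  \text{ in the current automaton, add } p \xrightarrow{\,\gamma\,} s .
\]
% At order 1 no new automaton states are needed, so the iteration
% terminates, and the final automaton accepts exactly the
% configurations that can reach C. The thesis breaks this paradigm by
% adding new states during the iteration, for winning regions of games
% and for higher-order stacks.
```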
30. Model Checking Systems with Replicated Components using CSP. Mazur, Tomasz Krzysztof. January 2011.
The Parameterised Model Checking Problem (PMCP) asks whether an implementation Impl(t) satisfies a specification Spec(t) for all instantiations of the parameter t. In general, t can determine numerous entities: the number of processes used in a network, the type of data, the capacities of buffers, etc. The main theme of this thesis is the automation of uniform verification for a subclass of the PMCP with a parameter of the first kind, using techniques based on counter abstraction.

Counter abstraction works by counting how many, rather than which, node processes are in a given state: for nodes with k local states, an abstract state (c(1), ..., c(k)) models a global state where c(i) processes are in the i-th state. We then use a threshold function z to cap the values of each counter. If for some i, counter c(i) reaches its threshold z(i), then this is interpreted as there being z(i) or more nodes in the i-th state. The addition of thresholds makes abstract models independent of the instantiation of the parameter.

We adapt standard counter abstraction techniques to concurrent reactive systems modelled using the CSP process algebra. We demonstrate how to produce abstract models of systems that do not use node identifiers (i.e. where all nodes are indistinguishable). Every such abstraction is, by construction, refined by all instantiations of the implementation. If the abstract model satisfies the specification, then a positive answer to the particular uniform verification problem can be deduced.

We show that by adding node identifiers we make the uniform verification problem undecidable. We demonstrate a sound abstraction method that extends standard counter abstraction techniques to systems that make full use of node identifiers (in specifications and implementations). However, on its own, the method is not enough to give the answer to verification problems for all parameter instantiations. This issue has led us to the development of a type reduction theory, which, for a given verification problem, establishes a function phi that maps all (sufficiently large) instantiations T of the parameter to some fixed type T̂ and allows us to deduce that if Spec(T̂) is refined by phi(Impl(T)), then Spec(T) is refined by Impl(T). We can then combine this with our extended counter abstraction techniques and conclude that if the abstract model satisfies Spec(T̂), then the answer to the uniform verification problem is positive.

We develop a symbolic operational semantics for CSP processes that satisfy certain normality requirements, and we provide a set of translation rules that allow us to concretise symbolic transition graphs. The type reduction theory relies heavily on these results. One of the main advantages of our symbolic operational semantics and the type reduction theory is their generality, which makes them applicable in other settings and allows the theory to be combined with abstraction methods other than those used in this thesis.

Finally, we present TomCAT, a tool that automates the construction of counter abstraction models, and we demonstrate how our results apply in practice.
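A minimal sketch of the counting step (ours; the thresholds and local states are invented for illustration) shows why the abstraction is independent of the population size:

```go
package main

import "fmt"

// Counter abstraction with thresholds: a global state of n identical
// nodes, each in one of k local states, is abstracted to a vector of
// counters capped at the threshold z[i]. Reading c[i] == z[i] as
// "z[i] or more nodes are in local state i" makes the abstract state
// independent of n.
func abstract(localStates []int, k int, z []int) []int {
	c := make([]int, k)
	for _, s := range localStates {
		if c[s] < z[s] {
			c[s]++
		}
	}
	return c
}

func main() {
	z := []int{2, 2, 1} // thresholds for 3 local states
	// 6 nodes: four in state 0, one in state 1, one in state 2.
	fmt.Println(abstract([]int{0, 0, 0, 0, 1, 2}, 3, z)) // [2 1 1]
	// A smaller population with >=2 nodes in state 0 maps to the same
	// abstract state, which is the point of the thresholds.
	fmt.Println(abstract([]int{0, 0, 0, 1, 2}, 3, z)) // [2 1 1]
}
```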