11

Static analyses over weak memory

Nimal, Vincent P. J. January 2014
Writing concurrent programs with shared memory is often not trivial. Correctly synchronising the threads and handling the non-determinism of executions require a good understanding of the interleaving semantics. Yet, interleavings are not sufficient to correctly model the executions of modern, multicore processors. These executions follow rules that are weaker than those observed by the interleavings, often leading to reorderings in the sequence of updates and reads from memory; the executions are subject to a weaker memory consistency. Reorderings can produce executions that would not be observable with interleavings, and these possible executions also depend on the architecture that the processors implement. It is therefore necessary to locate and understand these reorderings in the context of a running program, or to prevent them in an automated way. In this dissertation, we aim to automate the reasoning behind weak memory consistency and perform transformations over the code so that developers need not consider all the specifics of the processors when writing concurrent programs. We claim that we can do automatic static analysis for axiomatically-defined weak memory models. The method that we designed also allows the reuse of automated verification tools such as model checkers or abstract interpreters that were not designed for weak memory consistency, by modifying the input programs. We define in detail an abstraction that allows us to reason statically about weak memory models over programs. We locate the parts of the code where the semantics could be affected by the weak memory consistency. We then provide a method to explicitly reveal the resulting reorderings, so that usual verification techniques can handle the program semantics under a weaker memory consistency. We further provide a technique that synthesises synchronisations so that the program behaves as if only interleavings were allowed. Finally, we test these approaches on artificial and real software. We justify our choice of an axiomatic model with the scalability of the approach and the runtime performance of the programs modified by our method.
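To make the reordering problem concrete, the following sketch (illustrative, not from the thesis) enumerates every sequentially consistent interleaving of the classic store-buffering litmus test and confirms that the outcome r1 = r2 = 0 is unreachable under interleaving semantics, even though TSO-like architectures permit it by buffering stores:

```python
# Store-buffering litmus test:
#   Thread 0: x = 1; r1 = y        Thread 1: y = 1; r2 = x
# Each event is a tuple; program order within a thread is preserved.
T0 = [("w", "x", 1), ("r", "y", "r1")]
T1 = [("w", "y", 1), ("r", "x", "r2")]

def interleavings(a, b):
    """Yield all interleavings of two sequences, preserving program order."""
    if not a: yield list(b); return
    if not b: yield list(a); return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

outcomes = set()
for schedule in interleavings(T0, T1):
    mem, regs = {"x": 0, "y": 0}, {}
    for op, loc, arg in schedule:
        if op == "w":
            mem[loc] = arg          # store becomes visible immediately (SC)
        else:
            regs[arg] = mem[loc]    # load reads the current memory value
    outcomes.add((regs["r1"], regs["r2"]))

print(sorted(outcomes))             # (0, 0) is absent under interleavings
assert (0, 0) not in outcomes       # yet TSO store buffers can produce it
```

Under sequential consistency only three outcomes are possible; a weak model that lets each thread's store sit in a buffer past its own load admits the fourth.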
12

Higher-order model checking with traversals

Neatherway, Robin Philip January 2014
Higher-order recursion schemes are a powerful model of functional computation that grew out of traditional recursive program schemes and generalisations of grammars. It is common to view recursion schemes as generators of possibly-infinite trees, which Ong showed to have a decidable monadic second-order theory, a result that opened the door to applications in verification. Kobayashi later presented an intersection type characterisation of the model checking problem, on which most subsequent applied work is based. In recent work, recursion schemes have come to play a role similar to that of Boolean programs in the verification of first-order imperative programs: a natural target for abstraction of programs with very large or infinite data domains. In this thesis we focus on the development of model checking algorithms for variants of recursion schemes. We begin our contributions with a model checking algorithm inspired by the fully abstract game semantics of recursion schemes, but specified as a goal-directed approach to intersection type inference, offering a unification of the views of Ong and Kobayashi. We build on this largely theoretical contribution with two orthogonal extensions and practical implementations. First, we develop a new extension of recursion schemes: higher-order recursion schemes with cases, which add non-determinism and a case construct operating over a finite data domain. These additions provide a more natural and succinct target for abstraction from functional programs: encoding data using functions inevitably increases the order and arity of the scheme, which have a direct impact on the worst-case complexity of the problem. We characterise the model checking problem using a novel intersection and union type system and give a practical algorithm for type inference in this system. We have carried out an empirical evaluation of the implementation, the tool TravMC, using a variety of problem instances from the literature and a new suite of problem instances derived via an abstraction-refinement procedure from functional programs. Second, we extend our approach from safety properties to all properties expressible in monadic second-order logic, using alternating parity tree automata as our specification language. We again provide an implementation and an empirical evaluation, which shows that, despite the challenges accompanying liveness properties, our tool scales beyond the current state of the art.
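As a hedged illustration of recursion schemes as tree generators (the scheme and all names below are invented, not taken from the thesis), this Python sketch unfolds a small order-1 scheme to a finite approximation of its infinite value tree:

```python
# An order-1 recursion scheme, viewed as a generator of an infinite tree:
#   S   -> F c
#   F x -> br x (F (s x))
# Terminals: br (arity 2), s (arity 1), c (arity 0).

def F(x, depth):
    """Unfold the nonterminal F to a depth-bounded approximation."""
    if depth == 0:
        return "_|_"                       # bottom: unexpanded subtree
    return ("br", x, F(("s", x), depth - 1))

def S(depth):
    return F("c", depth)

print(S(3))
# ('br', 'c', ('br', ('s', 'c'), ('br', ('s', ('s', 'c')), '_|_')))
```

Model checking asks whether the full (infinite) tree generated this way satisfies a given property, without ever materialising more than such finite unfoldings.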
13

Game semantics based equivalence checking of higher-order programs

Hopkins, David G. B. January 2012
This thesis examines the use of game semantics for the automatic equivalence checking of higher-order programs. Game semantics has proved to be a powerful method for constructing fully abstract models of logics and programming languages. Furthermore, the concrete nature of the semantics lends itself to algorithmic analysis. The game-semantic model can be used to identify fragments of languages which have a decidable observational equivalence problem. We investigate decidability results for different languages as well as the efficiency of these algorithms in practice. First we consider the call-by-value higher-order language with state, RML. This can be viewed as a canonical restriction of Standard ML to ground-type references. The O-strict fragment of RML is the largest set of type sequents for which, in the game-semantic denotation, justification pointers from O-moves are always uniquely reconstructible from the underlying move sequence. The O-strict fragment is surprisingly expressive, including higher-order types and difficult examples from the literature. By representing strategies as Visibly Pushdown Automata (VPA), we show that observational equivalence of O-strict terms is decidable (and, in fact, ExpTime-complete). We then consider extensions of the O-strict fragment. Adding general recursion or using most non-O-strict types leads to undecidability. However, a limited form of recursion can be added while still preserving decidability (although the full power of DPDA is required). Next we examine languages with non-local control. This involves adding call/cc to our language and is known to correspond to dropping the game-semantic bracketing condition. In the call-by-name game-semantic model of Idealized Algol (IA), in which answers cannot justify questions, the visibility condition still implies a form of weak bracketing. By making bracketing violations explicit we show that we can still model the entire third-order fragment using VPA. We have also implemented tools based on these algorithms. Our model checkers Homer and Hector perform equivalence checking for third-order IA and O-strict RML respectively. Homer uses a naive explicit-state method, whereas Hector takes advantage of on-the-fly model checking. Our tools perform well on small yet challenging examples. On negative instances, the on-the-fly approach allows Hector to outperform Homer. To improve their performance, we also consider using ideas from symbolic execution. We propose a representation for finite automata using transitions labelled with formulas and guards, which aims to take advantage of the symmetries of the game-semantic model so that strategies can be represented compactly. We refer to this representation as Symbolically Executed Automata (SEA). Using SEA allows much larger data types to be handled, but is not as effective on larger examples with small data types.
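To give a flavour of the automata involved, here is a minimal Python sketch (illustrative, not the thesis' construction) of the stack discipline a visibly pushdown automaton imposes: the input letter alone decides whether the stack is pushed or popped, mirroring how questions and answers justify each other in a game-semantic play:

```python
# A visibly pushdown alphabet: calls push, returns pop, internals leave
# the stack alone. Treating questions as calls and answers as returns is
# a simplification of the strategies-as-VPA representation.
CALLS, RETURNS, INTERNAL = {"q"}, {"a"}, {"i"}

def vpa_accepts(word):
    """Accept words whose questions and answers are well bracketed."""
    stack = []
    for letter in word:
        if letter in CALLS:
            stack.append(letter)      # push on a call (question)
        elif letter in RETURNS:
            if not stack:             # answer with no pending question
                return False
            stack.pop()               # pop on a return (answer)
        elif letter not in INTERNAL:
            return False              # letter outside the alphabet
    return not stack                  # all questions answered

print(vpa_accepts(["q", "q", "a", "i", "a"]))  # True
print(vpa_accepts(["q", "a", "a"]))            # False
```

Because the stack behaviour is input-driven, VPA are closed under intersection and complement, which is what makes equivalence of VPA-representable strategies decidable.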
14

Abstraction discovery and refinement for model checking by symbolic trajectory evaluation

Adams, Sara Elisabeth January 2014
This dissertation documents two contributions to automating the formal verification of hardware – particularly memory-intensive circuits – by Symbolic Trajectory Evaluation (STE), a model checking technique based on symbolic simulation over abstract sets of states. The contributions focus on improvements to the use of BDD-based STE, which uses binary decision diagrams internally. We introduce a solution to one of the major hurdles in using STE: finding suitable abstractions. Our work has produced the first known algorithm that addresses this problem by automatically discovering good, non-trivial abstractions. These abstractions are computed from the specification, and essentially encode partial input combinations sufficient for determining the specification's output value. They can then be used to verify whether the hardware model meets its specification using a technique based on and significantly extending previous work by Melham and Jones [2]. Moreover, we prove that our algorithm delivers correct results by construction. We demonstrate that the abstractions produced by our algorithm can greatly reduce verification costs with three example hardware designs, typical of the kind of problems faced by the semiconductor design industry. We further propose a refinement method for abstraction schemes when over-abstraction occurs, i.e., when the abstraction hides too much information about the original design to determine whether it meets its specification. The refinement algorithm we present is based on previous work by Chockler et al. [3], which selects refinement candidates by approximating which abstracted input is likely the biggest cause of the abstraction being unsuitable. We extend this work substantially, concentrating on three aspects. First, we suggest how the approach can also work for much more general abstraction schemes. This enables refining any abstraction allowed in STE, rather than just a subset. Second, Chockler et al. describe how to refine an abstraction once a refinement candidate has been identified. We present three additional variants of refining the abstraction. Third, the refinement at its core depends on evaluating circuit logic gates. The previous work offered solutions for NOT- and AND-gates. We propose a general approach to evaluating arbitrary logic gates, which improves the selection process of refinement candidates. We show the effectiveness of our work by automatically refining an abstraction for a content-addressable memory that exhibits over-abstraction, and by evaluating some common logic gates. These two contributions can be used independently to help automate hardware verification by STE, but they also complement each other. To show this, we combine both algorithms to create a fully automatic abstraction discovery and refinement loop. The only inputs required are the hardware design and the specification that the design should meet. While only small circuits could be verified completely automatically, this clearly shows that our two contributions allow the construction of a verification framework that does not require any user interaction.
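The role of abstraction in STE can be illustrated with ternary gate evaluation, the idea underlying the gate-level reasoning the abstract mentions. The following Python sketch (illustrative, not the authors' algorithm) uses the value X for unknown inputs; an output that is already determined despite an X input is exactly what makes a partial stimulus sufficient:

```python
# Ternary simulation in the spirit of STE: each node is 0, 1 or X
# (unknown). A gate's output is determined whenever the known inputs
# force it, which lets abstract (partial) stimuli verify a node.
X = "X"

def and3(a, b):
    """Three-valued AND: a controlling 0 decides the output alone."""
    if a == 0 or b == 0:
        return 0            # determined without the other input
    if a == 1 and b == 1:
        return 1
    return X                # not enough information

def not3(a):
    return X if a == X else 1 - a

print(and3(0, X))   # 0 -- the X input is irrelevant here
print(and3(1, X))   # X -- over-abstraction: refinement needed
print(not3(X))      # X
```

When an output of interest comes back X, the abstraction hid too much, and a refinement step must re-introduce one of the abstracted inputs.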
15

Automated quantitative software verification

Kattenbelt, Mark Alex January 2010
Many software systems exhibit probabilistic behaviour, either added explicitly, to improve performance or to break symmetry, or implicitly, through interaction with unreliable networks or faulty hardware. When employed in safety-critical applications, it is important to rigorously analyse the behaviour of these systems. This can be done with a formal verification technique called model checking, which establishes properties of systems by algorithmically considering all execution scenarios. In the presence of probabilistic behaviour, we consider quantitative properties such as "the worst-case probability that the airbag fails to deploy within 10ms", instead of qualitative properties such as "the airbag eventually deploys". Although many model checking techniques exist to verify qualitative properties of software, quantitative model checking techniques typically focus on manually derived models of systems and cannot directly verify software. In this thesis, we present two quantitative model checking techniques for probabilistic software. The first is a quantitative adaptation of a successful model checking technique called counter-example guided abstraction refinement which uses stochastic two-player games as abstractions of probabilistic software. We show how to achieve abstraction and refinement in a probabilistic setting and investigate theoretical extensions of stochastic two-player game abstractions. Our second technique instruments probabilistic software in such a way that existing, non-probabilistic software verification methods can be used to compute bounds on quantitative properties of the original, uninstrumented software. Our techniques are the first to target real, compilable software in a probabilistic setting. We present an experimental evaluation of both approaches on a large range of case studies and evaluate several extensions and heuristics. We demonstrate that, with our methods, we can successfully compute quantitative properties of real network clients comprising approximately 1,000 lines of complex ANSI-C code — the verification of such software is far beyond the capabilities of existing quantitative model checking techniques.
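To illustrate the flavour of a quantitative property, the following sketch (an invented model, not one of the thesis' case studies) runs value iteration on a tiny Markov decision process to compute the worst-case probability of reaching a failure state:

```python
# Value iteration for maximum reachability probability in a small MDP.
# mdp[state][action] = list of (probability, successor); the model and
# numbers are invented for illustration.
mdp = {
    "init":  {"send":  [(0.9, "ok"), (0.1, "retry")],
              "abort": [(0.2, "fail"), (0.8, "ok")]},
    "retry": {"send":  [(0.5, "ok"), (0.5, "fail")]},
    "ok":    {},
    "fail":  {},
}
target = {"fail"}

p = {s: (1.0 if s in target else 0.0) for s in mdp}
for _ in range(100):                      # iterate to (near) fixpoint
    p = {s: (1.0 if s in target else
             max((sum(pr * p[t] for pr, t in succ)
                  for succ in mdp[s].values()), default=0.0))
         for s in mdp}

print(round(p["init"], 4))  # 0.2: the worst-case adversary picks "abort"
```

The max over actions models an adversarial resolution of non-determinism, giving the worst-case bound that a quantitative property such as "the probability that the airbag fails to deploy" refers to.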
16

An Automated Tool For Requirements Verification

Tekin, Yasar 01 September 2004
In today's world, only those software organizations that consistently produce high-quality products can succeed. This situation calls for the effective use of defect prevention and detection techniques. One of the most effective defect detection techniques in the software development life cycle is verification of software requirements, applied at the end of the requirements engineering phase. If the existing verification techniques can be automated to meet today's work environment needs, their effectiveness can be increased. This study focuses on the development and implementation of a tool that automates the verification of software requirements modeled in Aris eEPC and Organizational Chart, targeting automatically detectable defects. The application of reading techniques on a project, and a comparison of the results of manual and automated verification techniques applied to a project, are also discussed.
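As a hedged illustration of an automatically detectable defect in an EPC-style model (the rule encoding, graph, and names below are invented for illustration and do not reproduce the thesis' tool), one classic check is that events and functions must alternate, so two directly connected functions signal a defect:

```python
# A simple automated check over an event-driven process chain (EPC):
# events and functions must alternate, so an edge connecting two
# functions is flagged as a defect. Nodes and edges are invented.
nodes = {"E1": "event", "F1": "function", "F2": "function", "E2": "event"}
edges = [("E1", "F1"), ("F1", "F2"), ("F2", "E2")]  # F1 -> F2 is a defect

defects = [(a, b) for a, b in edges
           if nodes[a] == "function" and nodes[b] == "function"]
print(defects)   # [('F1', 'F2')]
```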
17

Contributions to Formal Specification and Modular Verification of Parallel and Sequential Software

Weide, Alan January 2021
No description available.
18

Improving the Synthesis of Annotations for Partially Automated Deductive Verification

Manjikian, Hovig January 2023
This work investigates possible improvements to an existing annotation inference tool. The tool is part of a toolchain that aims to automate the process of software verification using formal methods. The purpose of the annotations is to facilitate the use of deductive verification, which is the formal method used in this project for proving that a given program meets its specifications. In the project, two categories of annotations are established. The first category is that of functional annotations, which describe the behavior of a function or a module. The other category is what we call auxiliary annotations, which describe properties that are necessary for proving the correctness of the functional annotations. The tool that this work seeks to improve is dedicated to inferring the auxiliary annotations. To our knowledge, this is the first tool of its kind to automatically infer auxiliary annotations for a complete module given the module's source code and its interface specifications. The work contributed in four areas: inferring annotations from the interface specifications of a module and propagating these annotations to all the helper functions used in the module; inferring annotations for floating-point constructs; inferring annotations for pointer constructs; and finally, inferring annotations for array constructs. The improved tool was tested on production embedded code used in the heavy automotive industry. The results demonstrated a considerable improvement and were in line with earlier findings. The work confirms the feasibility and usability of auxiliary annotation inference in this scope.
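As a hedged sketch of what auxiliary-annotation inference can look like (the function, naming convention, and rules below are invented; the annotation syntax is ACSL-like, and this is not the project's actual tool), consider pairing each array parameter with a pointer-validity precondition derived from its companion length parameter:

```python
# Illustrative auxiliary-annotation inference: given a C-like function
# signature, emit the pointer-validity precondition a deductive verifier
# would need before it can prove anything about the array contents.

def infer_validity(params):
    """Pair each (array, length) parameter with a validity requirement."""
    arrays  = [n for n, t in params if t.endswith("*")]
    lengths = {n for n, t in params if t in ("int", "size_t")}
    reqs = []
    for arr in arrays:
        n = arr + "_len"              # naming convention: an assumption
        if n in lengths:
            reqs.append(f"requires \\valid({arr} + (0 .. {n} - 1));")
    return reqs

print(infer_validity([("buf", "float*"), ("buf_len", "size_t")]))
# ['requires \\valid(buf + (0 .. buf_len - 1));']
```

Propagating such requirements from a module's interface down to its helper functions is, per the abstract, one of the four areas the work improved.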
19

Repairing strings and trees

Riveros Jaeger, Cristian January 2013
What do you do if a computational object fails a specification? An obvious approach is to repair it, namely, to modify the object minimally to get something that satisfies the constraints. In this thesis we study foundational problems of repairing regular specifications over strings and trees. Given two regular specifications R and T, we aim to understand how difficult it is to transform an object satisfying R into an object satisfying T. The setting is motivated by considering R to be a restriction, a constraint that the input object is guaranteed to satisfy, while T is a target, a constraint that we want to enforce. We first study which pairs of restriction and target specifications can be repaired with a "small" number of changes. We formalize this as the bounded repair problem: to determine whether one can repair each object satisfying R into T with a uniform number of edits. We provide effective characterizations of the bounded repair problem for regular specifications over strings and trees. These characterizations are based on a good understanding of the cyclic behaviour of finite automata. By exploiting these characterizations, we give optimal algorithms to decide whether two specifications are bounded repairable. We also consider the impact of limitations on the editing process: what happens when we require the repair to be done sequentially over serialized objects. We study the bounded repair problem over strings and trees restricted to this streaming setting and show that this variant can be characterized in terms of finite games. Furthermore, we use this characterization to decide whether one can repair a pair of specifications in a streaming fashion with bounded cost, and how to obtain a streaming repair strategy in this case. The previous notion asks for a uniform bound on the number of edits, but having this property is a strong requirement. To overcome this limitation, we study how to calculate the maximum number of edits per character needed to repair any object in R into T. We formalize this as the asymptotic cost: the limit of the number of edits divided by the length of the input in the worst case. Our contribution is an algorithm to compute the asymptotic cost for any pair of regular specifications over strings. We also consider the streaming variant of this cost and show how to compute it by reducing the problem to mean-payoff games.
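The repair question for a single object can be made concrete with a standard dynamic program (a sketch under invented assumptions: the DFA below, accepting strings over {a, b} ending in "ab", is illustrative and not from the thesis). It computes the fewest insertions, deletions, and substitutions turning a string into a member of the target language by tracking, per DFA state, the cheapest way to reach it:

```python
# Minimum edit distance from a string to the language of a DFA.
delta = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 1, (1, "b"): 2,
         (2, "a"): 1, (2, "b"): 0}
start, final, sigma = 0, {2}, "ab"

def close(cost):
    """Shortest-path closure: an insertion takes a DFA edge, costs 1."""
    changed = True
    while changed:
        changed = False
        for q, c in list(cost.items()):
            for a in sigma:
                q2 = delta[q, a]
                if c + 1 < cost.get(q2, c + 2):
                    cost[q2] = c + 1
                    changed = True
    return cost

def repair_cost(word):
    cost = close({start: 0})                   # fewest edits to reach each state
    for ch in word:
        nxt = {}
        def relax(q2, c2):
            if c2 < nxt.get(q2, c2 + 1):
                nxt[q2] = c2
        for q, c in cost.items():
            relax(q, c + 1)                    # delete ch
            for a in sigma:                    # keep ch (a == ch) or substitute
                relax(delta[q, a], c + (a != ch))
        cost = close(nxt)
    return min(cost[q] for q in final)

print(repair_cost("ba"))   # 1: appending "b" yields "bab", which ends in "ab"
```

The bounded repair and asymptotic cost problems of the thesis quantify this cost uniformly over all objects satisfying R, rather than for one input at a time.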
20

A framework of trust in service workflows

Viriyasitavat, Wattana January 2013
The everything as a service concept enables dynamic resource provisions to be seen and delivered as services. Their proliferation nowadays leads to the creation of new value-added services composed of several sub-services in a pre-specified manner, known as service workflows. The use of service workflows appears in various domains, ranging from the basic interactions found in e-commerce and other online settings to complex ones such as Virtual Organizations, Grids, and Cloud Computing. However, the dynamic nature of open environments means a workflow is constantly changing so as to adapt to new circumstances. Determining suitable services has become a very important challenge. Requirements from both workflow owners and service providers play a significant role in the process of service acquisition, composition, and interoperation. From the workflow owner's viewpoint, requirements can specify properties of services to be acquired for tasks in a workflow. On the other hand, requirements from service providers affect trust-based decisions on workflow participation. The lack of formal languages to specify these requirements poses difficulties for the success of service collaborations in a workflow. It impedes: (1) workflow scalability, which tends to be limited to a certain set of trusted domains; (2) dynamicity, as each service acts in an autonomous and unpredictable manner and any change might affect existing requirements; and (3) consistency, since disparate representations of requirements cause high overhead for compliance checking. This thesis focuses on developing a framework to overcome, or at least alleviate, these problems. It is situated in interdisciplinary areas including logics, workflow modelling, specification languages, trust management, decision support systems, and compliance checking. Two core elements are proposed: (1) a formal logic-based requirement specification language, namely Trust Specification (TS), such that requirements can be formally and uniformly expressed; and (2) compliance checking algorithms to automatically check for the compliance of requirements in service workflows. It is worth noting that this thesis contains proofs concerning the logic extension, workflow modelling, the specification language, and the compliance checking algorithms. These might raise concerns for readers focusing deeply on one particular area, such as logic or workflow modelling, who might overlook the essence of the work: for example, (1) the application of a formal specification language to the distinctive characteristics of service workflows, and (2) bridging the gap between high-level languages, such as trust management languages, and lower-level logic-based ones. The first contribution of the framework is to allow requirements to be independently and consistently expressed by each party, with workflow participation and service acquisition decisions subject to requirement compliance. To increase scalability in large-scale interoperations, the second contribution centres on automatic compliance checking, where the TS language and the compliance checking algorithms are the two key components. The last contribution focuses on dynamicity. The framework allows each party to modify existing requirements, and compliance checking is then automatically re-activated. As a result, it is anticipated that the solution will encourage the proliferation of service provision and consumption over the Internet.
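As a hedged illustration of the compliance-checking idea (the services, properties, and rules below are invented; the thesis uses the formal TS language, not Python predicates), a service may join a workflow task only if every owner requirement holds over its advertised properties:

```python
# Compliance checking sketch: requirements are predicates over advertised
# service properties; a service joins a task only if all of them hold.
services = {
    "payA": {"encryption": True, "uptime": 0.999, "certified": True},
    "payB": {"encryption": True, "uptime": 0.95,  "certified": False},
}

requirements = [
    ("encrypted channel",  lambda s: s["encryption"]),
    ("99.9% availability", lambda s: s["uptime"] >= 0.999),
    ("certified provider", lambda s: s["certified"]),
]

def violations(service):
    """Return the names of all requirements the service fails."""
    return [name for name, holds in requirements if not holds(service)]

for name, props in services.items():
    failed = violations(props)
    print(name, "OK" if not failed else f"violates: {failed}")
# payA OK
# payB violates: ['99.9% availability', 'certified provider']
```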
