1 |
Analysis of a File System Using the Verifying C Compiler. Škorvaga, David. January 2015.
Title: Analysis of a File System Using the Verifying C Compiler
Author: Bc. David Škorvaga
Department: Department of Distributed and Dependable Systems
Supervisor: RNDr. Jan Kofroň, Ph.D.
Abstract: Formal verification is a way to improve the reliability of software systems. One approach to formal verification focuses on proving the correctness of annotated source code written in an established programming language. The Verifying C Compiler (VCC) is a verifier for concurrent C that accepts annotated C code and automatically verifies its correctness with respect to the given annotations. There have been successful attempts to verify critical systems, including an operating system kernel. Another critical part of an operating system is its file system. In this thesis, we choose FatFs, a simple device-independent implementation of the FAT file system. We specify a part of it using VCC annotations and successfully verify its correctness.
Keywords: Formal Verification, File System, VCC
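To illustrate the style of input VCC accepts, the following is a minimal sketch of an annotated C function; the function, its name, and its contract are hypothetical and are not taken from the thesis:

    #include <vcc.h>

    /* Hypothetical example: the contract states that the result is the
       smaller of the two arguments. VCC attempts to prove every
       _(ensures ...) clause for all possible executions of the body. */
    int min_int(int a, int b)
        _(ensures \result <= a && \result <= b)
        _(ensures \result == a || \result == b)
    {
        return (a < b) ? a : b;
    }

VCC translates such annotations into verification conditions and discharges them automatically with an SMT solver; any clause it cannot prove is reported as a verification failure.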
|
2 |
Support for C++ in GMC. Šebetovský, Jan. January 2013.
Software is used in ever more aspects of our lives, so its correctness is increasingly important and its verification is worthwhile. There are currently few tools for verifying programs written in the C++ language, and most of them cannot verify all required properties. Because of this, we decided to extend GMC, which was already able to verify C code, with support for the C++ language. The C++ language is very large, however, so the goal of this work is to implement only the basic language features: inheritance, constructors, destructors, virtual methods, and exceptions. Support for all of these features has been implemented, except for exceptions, which are supported only partially.
|
3 |
Büchi Automata as Specifications for Reactive Systems. Fogarty, Seth. 05 June 2013.
Computation is employed with incredible success in a massive variety of applications, and yet it is difficult to formally state what our computations are. Finding a way to model computations is not only valuable for understanding them, but central to automatic manipulation and formal verification. Often the most interesting computations are not functions with inputs and outputs, but ongoing systems that continuously react to user input. In the automata-theoretic approach, computations are modeled as words: sequences of letters representing traces of a computation. Each automaton accepts a set of words, called its language. To model reactive computation, we use Büchi automata: automata that operate over infinite words. Although the computations we are modeling are not infinite, they are unbounded, and we are interested in their ongoing properties. For thirty years, Büchi automata have been recognized as the right model for reactive computations.
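For readers new to the model, the standard definition of Büchi acceptance (a textbook definition, not a contribution of this thesis) is:

    A nondeterministic Büchi automaton is a tuple \( A = (\Sigma, Q, Q_0, \delta, F) \)
    with finite alphabet \( \Sigma \), finite state set \( Q \), initial states
    \( Q_0 \subseteq Q \), transition relation \( \delta \subseteq Q \times \Sigma \times Q \),
    and accepting states \( F \subseteq Q \). A run \( \rho \) of \( A \) on an infinite word
    \( w \in \Sigma^{\omega} \) is accepting iff \( \mathrm{Inf}(\rho) \cap F \neq \emptyset \),
    that is, some accepting state occurs infinitely often along \( \rho \);
    the word \( w \) belongs to the language \( L(A) \) iff some run of \( A \) on \( w \) is accepting.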
In order to formally verify computations, however, we must also be able to create specifications that embody the properties we want to prove these systems possess. To date, challenging algorithmic problems have prevented Büchi automata from being used as specifications. I address two challenges to the use of Büchi automata as specifications in formal verification. The first, complementation, is required to check program adherence to a specification. The second, determination, is used in domains such as synthesis, probabilistic verification, and module checking. I present both empirical analysis of existing complementation constructions and a new theoretical contribution that provides more deterministic complementation and a full determination construction.
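The role of complementation in checking adherence can be made explicit by the standard reduction of language containment to emptiness (again a textbook fact rather than part of the thesis):

    \( L(A_{\mathit{sys}}) \subseteq L(A_{\mathit{spec}})
       \;\iff\;
       L(A_{\mathit{sys}}) \cap \overline{L(A_{\mathit{spec}})} = \emptyset \)

so a model checker complements the specification automaton, intersects it with the system automaton, and checks the result for emptiness; the cost of Büchi complementation is what makes this step hard in practice.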
|
4 |
Formal Verification of Instruction Dependencies in Microprocessors. Shehata, Hazem. January 2011.
In microprocessors, achieving an efficient utilization of the execution units is a key factor in improving performance. However, maintaining an uninterrupted flow of instructions is a challenge due to the data and control dependencies between instructions of a program. Modern microprocessors employ aggressive optimizations trying to keep their execution units busy without violating inter-instruction dependencies. Such complex optimizations may cause subtle implementation flaws that can be hard to detect using conventional simulation-based verification techniques.
Formal verification is known for its ability to discover design flaws that may go undetected using conventional verification techniques. However, with formal verification come two major challenges. First, the correctness of the implementation needs to be defined formally. Second, formal verification is often hard to apply at the scale of realistic implementations.
In this thesis, we present a formal verification strategy to guarantee that a microprocessor implementation preserves both data and control dependencies among instructions. Throughout our strategy, we address the two major challenges associated with formal verification: correctness and scalability.
We address the correctness challenge by specifying our correctness criteria in the context of generic pipelines. Unlike conventional pipeline hazard rules, we make no distinction between the data and control aspects. Instead, we describe the relationship between a producer instruction and a consumer instruction in a way that allows both instructions to speculatively read their source operands, speculatively write their results, and execute out of program order. In addition to supporting branch and value prediction, our correctness criteria allow the implementation to discard (squash) or replay instructions during execution.
We address the scalability challenge in three ways: abstraction, decomposition, and induction. First, we state our inter-instruction dependency correctness criteria in terms of read and write operations without making reference to data values. Consequently, our correctness criteria can be verified for implementations with abstract datapaths. Second, we decompose our correctness criteria into a set of smaller obligations that are easier to verify. All these obligations can be expressed as properties within the Syntactically-Safe fragment of Linear Temporal Logic (SSLTL). Third, we introduce a technique to verify SSLTL properties by induction, and prove its soundness and completeness.
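As a purely illustrative sketch (our own notation; these are not the thesis's actual proof obligations), an ordering requirement between a producer instruction p and a dependent consumer c can be phrased in the syntactically-safe fragment, where negation is applied only to atomic propositions and only the always (G) and weak-until (W) temporal operators are used:

    \( \varphi_{p,c} \;=\; \mathbf{G}\big(\mathit{issued}_c \rightarrow
        (\neg\,\mathit{read}_c \;\mathbf{W}\; \mathit{write}_p)\big) \)

read as: once the consumer has been issued, it does not read its source operand unless and until the producer has written its result. Obligations of this safety-only shape are the kind that lend themselves to inductive verification.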
To demonstrate our overall strategy, we verified a term-level model of an out-of-order speculative processor. The processor model implements register renaming using a P6-style reorder buffer and branch prediction with a hybrid (discard-replay) recovery mechanism. The verification obligations (expressed in SSLTL) are checked using a tool implementing our inductive technique. Our tool, named Tahrir, is built on top of a generic interface to SMT solvers and can be generally used for verifying SSLTL properties about infinite-state systems.
|
5 |
Formal Verification of Adaptive Real-Time Systems by Extending Task Automata. Hatvani, Leo. January 2014.
Recently, we have seen an increase in the deployment of safety-critical embedded systems in rapidly changing environments, together with a demand for on-site customization and rapid adaptation. To address this extended range of requirements, adaptation mechanisms are added to such systems so that a large number of situations can be handled appropriately. Although necessary, adaptations can lead to inconsistent and unstable configurations, which must be prevented if the embedded system is to remain dependable and safe. Verifying the behavior of adaptive embedded systems during the design phase is therefore highly desirable.
A hard real-time embedded system and its environment can be modeled using timed automata, and such a model can describe the system at various levels of abstraction. In this thesis, we model the adaptive responses of a system as tasks that are executed to handle changes in environmental or internal parameters. Schedulability, the property that all tasks complete execution within their respective deadlines, is a key requirement in designing hard real-time embedded systems: a system that is unschedulable immediately compromises safety and hard real-time requirements and can cause fatal failure. Given the specifications of all tasks in the system, we can model the system, an abstraction of its environment, and the adaptive strategies, and investigate whether the system retains its safety properties, including schedulability, regardless of changes in the environment and the adaptations made in response.
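Using the definition given in the abstract, schedulability can be stated as a simple requirement on every job of every task; the notation below is ours and is introduced only for illustration:

    \( \text{schedulable} \;\iff\; \forall \tau_i \in \mathcal{T}.\;\forall k.\;
       f_{i,k} - r_{i,k} \;\le\; D_i \)

where \( \mathcal{T} \) is the task set, \( r_{i,k} \) and \( f_{i,k} \) are the release and finishing times of the k-th job of task \( \tau_i \), and \( D_i \) is its relative deadline. In the task-automata approach, this property is established by model checking the automaton model together with the environment abstraction, rather than by a closed-form schedulability test.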
|
6 |
The Specification and Refinement of Timed Processes. Mahony, Brendan Patrick. Date unknown.
The use of predicate transformers to model the action of sequential programs has allowed the construction of a refinement calculus for expressing the formal verification of the top-down development of sequential programs. It is shown that predicate transformers may also be used to model real-time processes. The notions of precondition and postcondition in the sequential refinement calculus are replaced by the notions of assumption and effect. In this way, the formal development of real-time processes may also be expressed within the refinement calculus. Notations are developed for the specification and implementation of real-time processes within the framework of the refinement calculus. Several case studies are presented to demonstrate the utility of this approach.
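For context, the classical predicate-transformer reading on which the sequential refinement calculus rests (standard definitions, not specific to this thesis) can be summarised as:

    \( wp(S_1 ; S_2,\, Q) \;=\; wp(S_1,\, wp(S_2,\, Q)) \)
    \( S \sqsubseteq T \;\iff\; \forall Q.\; wp(S, Q) \Rightarrow wp(T, Q) \)

Here \( wp(S, Q) \) is the weakest precondition from which statement S is guaranteed to terminate in a state satisfying postcondition Q, and \( S \sqsubseteq T \) states that T refines S, i.e., T meets every specification that S meets. The thesis replaces preconditions and postconditions with assumptions and effects so that the same refinement ordering can be applied to real-time processes.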
|
7 |
Direct methods for deductive verification of temporal properties in continuous dynamical systems. Sogokon, Andrew. January 2016.
This thesis is concerned with the problem of formal verification of correctness specifications for continuous and hybrid dynamical systems. Our main focus will be on developing and automating general proof principles for temporal properties of systems described by non-linear ordinary differential equations (ODEs) under evolution constraints. The proof methods we consider will work directly with the differential equations and will not rely on the explicit knowledge of solutions, which are in practice rarely available. Our ultimate goal is to increase the scope of formal deductive verification tools for hybrid system designs. We give a comprehensive survey and comparison of available methods for checking set invariance in continuous systems, which provides a foundation for safety verification using inductive invariants. Building on this, we present a technique for constructing discrete abstractions of continuous systems in which spurious transitions between discrete states are entirely eliminated, thereby extending previous work. We develop a method for automatically generating inductive invariants for continuous systems by efficiently extracting reachable sets from their discrete abstractions. To reason about liveness properties in ODEs, we introduce a new proof principle that extends and generalizes methods that have been reported previously and is highly amenable to use as a rule of inference in a deductive verification calculus for hybrid systems. We will conclude with a summary of our contributions and directions for future work.
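A minimal worked example of the kind of direct, solution-free invariance argument involved (the system and invariant are ours, chosen only for illustration):

    For \( \dot{x} = -x^{3} \), the set \( S = \{\, x \in \mathbb{R} : x^{2} \le 1 \,\} \)
    is invariant: along any solution,
    \( \tfrac{d}{dt}\, x^{2} \;=\; 2x\dot{x} \;=\; -2x^{4} \;\le\; 0, \)
    so \( x^{2} \) is non-increasing and no trajectory that starts in \( S \) can leave it.

The argument uses only the right-hand side of the differential equation and never its explicit solution, which is precisely the style of reasoning the thesis seeks to generalise and automate.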
|
8 |
A Certified Core Policy Language. Sistany, Bahman. January 2016.
We present the design and implementation of a Certified Core Policy Language (ACCPL) that can be used to express access-control rules and policies. Although full-blown access-control policy languages such as the eXtensible Access Control Markup Language (XACML) [OAS13] already exist, access rules in such languages are often expressed declaratively using fragments of a natural language like English, so it is not always clear what the intended behaviour of the system encoded in these rules should be. To remedy this ambiguity, a formal specification of how an access-control mechanism should behave is typically given in some sort of logic, often a subset of first-order logic. Showing that an access-control system actually behaves correctly with respect to its specification requires proofs; however, the proofs presented in the literature are often hard or impossible to verify formally. The difficulty is partly due to the fact that the language used in the proofs, while mathematical in nature, relies on intuitive justifications, and intuitive language in proofs means that the proofs may be incomplete and/or contain subtle errors.
ACCPL is small by design. By small we refer to the size of the language: the syntax, auxiliary definitions, and semantics of ACCPL take only a few pages to describe. This compactness allows us to concentrate on the main goal of this thesis, which is the ability to reason about policies written in ACCPL with respect to specific questions. By making the language compact, we have stayed away from completeness and expressive power in several directions. For example, ACCPL uses only a single policy combinator, the conjunction policy combinator. The design of ACCPL is therefore a trade-off between ease of formal proof of correctness and expressive power. We also consider ACCPL a core access-control policy language, since it retains the core features of many access-control policy languages. For instance, ACCPL employs a single condition type, called a “prerequisite”, where other languages may have very expressive and rich sets of conditions.
|