1

Executing behavioural definitions in Higher Order Logic

Camilleri, Albert John, January 1988
Over the past few years, computer scientists have been using formal verification techniques to show the correctness of digital systems. The verification process, however, is complicated and expensive. Even proofs of simple circuits can involve thousands of logical steps. Often it can be extremely difficult to find correct device specifications, and it is desirable to set out to prove a correct specification from the start, rather than to repeatedly backtrack from the verification process to modify the original definitions after discovering they were incorrect or inaccurate. The main idea presented in the thesis is to amalgamate the techniques of simulation and verification, rather than have the latter replace the former. The result is that behavioural definitions can be simulated until one is reasonably sure that the specification is correct. Furthermore, proving correctness with respect to these simulated specifications avoids the inadequacies of simulation, where it may not be computationally feasible to demonstrate correctness by exhaustive testing. Simulation here has a different purpose: to get specifications correct as early as possible in the verification process. Its purpose is not to demonstrate the correctness of the implementation - this is done in the verification stage when the very same specifications that were simulated are proven correct. The thesis discusses the implementation of an executable subset of the HOL logic, the version of Higher Order Logic embedded in the HOL theorem prover. It is shown that hardware can be effectively described using both relations and functions; relations being suitable for abstract specification, and functions being suitable for execution. The differences between relational and functional specifications are discussed and illustrated by the verification of an n-bit adder. Techniques for executing functional specifications are presented and various optimisation strategies are shown which make the execution of the logic efficient. It is further shown that the process of generating optimised functional definitions from relational definitions can be automated. Example simulations of three hardware devices (a factorial machine, a small computer and a communications chip) are presented.
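The relational/functional distinction described in this abstract can be pictured outside HOL with a small sketch. The 1-bit full adder below is purely illustrative (the thesis works in the HOL logic and verifies an n-bit adder, not Python code): the relation accepts or rejects a complete assignment to all ports, while the function computes outputs from inputs and can therefore be simulated.

```python
# Minimal sketch (not the thesis's HOL definitions): a 1-bit full adder
# written two ways, mirroring the relational/functional distinction.
from itertools import product

def adder_rel(a: bool, b: bool, cin: bool, s: bool, cout: bool) -> bool:
    """Relational spec: a predicate over all ports, true exactly when the
    input/output combination is a legal behaviour of the device."""
    return s == (a ^ b ^ cin) and cout == ((a and b) or (cin and (a ^ b)))

def adder_fun(a: bool, b: bool, cin: bool) -> tuple[bool, bool]:
    """Functional spec: computes outputs directly from inputs, so it can
    be executed (simulated) on concrete test vectors."""
    s = a ^ b ^ cin
    cout = (a and b) or (cin and (a ^ b))
    return s, cout

# "Simulation before verification": exhaustively check that every run of
# the executable spec is accepted by the relational one (feasible here
# only because the device has three input bits).
for a, b, cin in product([False, True], repeat=3):
    s, cout = adder_fun(a, b, cin)
    assert adder_rel(a, b, cin, s, cout)
```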
2

Multi-level verification of microprocessor-based systems

Joyce, Jeffrey John, January 1989
No description available.
3

First-order lax logic : a framework for abstraction, constraints and refinement

Walton, Matthew, January 1998
No description available.
4

Approximation and Refinement Techniques for Hard Model-checking Problems

Bobaru, Mihaela, 15 July 2009
Formal verification by model checking verifies whether a system satisfies some given correctness properties, and is intractable in general. We focus on several problems originating from the use of model checking and from the inherent complexity of model checking itself. We propose approximation and iterative refinement techniques and demonstrate that they help in making these problems tractable in practical cases. Vacuity detection is one of the problems, relating to the trivial satisfaction of properties. A similar problem is query solving, useful in model exploration, when properties of a system are not fully known and are to be discovered rather than checked. Both of these problems have solution spaces structured as lattices and can be solved by model checking using those lattices. The lattices, in the most general formulation of these problems, are too complex to be implemented efficiently. We introduce a general approximation framework for model checking with lattices and instantiate this framework for the two problems, leading to algorithms and implementations that can efficiently obtain partial answers to the problems. We also introduce refinement techniques that consider incrementally larger lattices and compute even the partial answers gradually, to further abate the size explosion of the problems. Another problem we consider is the state-space explosion of model checking. The size of system models is exponential in the number of state variables, which renders model checking intractable. We consider systems composed of several components running concurrently. For such systems, compositional verification checks components individually to avoid composing an entire system. Model checking an individual component uses assumptions about the other components. Smaller assumptions lead to smaller verification problems. We introduce iterative refinement techniques that improve the assumptions generated by previous automated approaches. One technique incrementally refines the interfaces between components in order to obtain smaller assumptions that are sufficient to prove a given property. The smaller assumptions are approximations of the assumption that would be obtained without our interface refinement. Another technique computes assumptions as abstractions of components, as an alternative to current approaches that learn assumptions from counterexamples. Our abstraction refinement has the potential to compute smaller nondeterministic assumptions, in contrast to the deterministic assumptions learned by current approaches. We confirm experimentally the benefits of our new approximation and refinement techniques.
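The assume-guarantee shape behind the compositional part of this abstract can be pictured with a very rough sketch. Everything below is an illustration under simplifying assumptions — components and assumptions modelled as plain trace sets, composition as intersection — and is not the thesis's lattice-based or assumption-refinement algorithms.

```python
# Rough illustration only: the assume-guarantee pattern that lets a
# component be checked against an assumption instead of its full environment.

def satisfies(traces: set, prop) -> bool:
    """'Model check' by brute force: every trace must satisfy prop."""
    return all(prop(t) for t in traces)

def compose(traces1: set, traces2: set) -> set:
    """Parallel composition, idealised here as trace intersection."""
    return traces1 & traces2

def assume_guarantee(m1: set, m2: set, assumption: set, prop) -> bool:
    """One common rule: if M1 || A |= P and traces(M2) are covered by A,
    then M1 || M2 |= P.  A smaller A means a smaller check on M1."""
    premise1 = satisfies(compose(m1, assumption), prop)
    premise2 = m2 <= assumption          # M2's behaviours covered by A
    return premise1 and premise2

# Toy instance: a trace is a tuple of (request, grant) pairs, and the
# property says every request is granted in the same step.
m1 = {((1, 1),), ((0, 0),), ((1, 0),)}            # component under scrutiny
m2 = {((1, 1),), ((0, 0),)}                       # its environment
assumption = {((1, 1),), ((0, 0),), ((0, 1),)}    # over-approximates m2
prop = lambda trace: all(grant == req for req, grant in trace)

print(assume_guarantee(m1, m2, assumption, prop))   # True
```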
5

Verification-Aware Processor Design

Lungu, Anita, January 2009
As technological advances enable computers to permeate many of our society's critical application domains (such as medicine, finances, transportation), the requirement for computers to always behave correctly becomes critical as well. Currently, ensuring that processor designs are correct represents a major challenge for the computing industry, consuming the majority (up to 70%) of the resources allocated for the creation of a new processor. Looking towards the future, we see that with each new processor generation, even more transistors fit on the same chip area and more complex designs become possible, which makes it unlikely that the difficulty of the design verification problem will decrease by itself.

We believe that the difficulty of the design verification problem is compounded by the current processor design flow. In most design cycles, a design's verifiability is not explicitly considered at an early stage - when decisions are most influential - because the initial focus is exclusively on improving the design on more traditional metrics like performance, power, and area. It is thus possible for the resulting design to be very difficult to verify in the end, specifically because its verifiability was not ranked high on the priority list in the beginning.

In this thesis we propose to view verifiability as a critical design constraint to be considered, together with other established metrics like performance and power, from the initial stages of design. Our high-level goal is for this approach to make designs more verifiable, which would both decrease the resources invested in the verification step and lead to more robust designs.

More specifically, we make five main contributions in this thesis. The first is our proposal for a change in design perspective towards considering verifiability as a first-class constraint. Second, we use formal verification (through a combination of theorem proving, model checking, and probabilistic model checking) to quantitatively evaluate the impact on verifiability of various design choices, like the organization of caches, TLBs, the pipeline, the operand bypass network, and dynamic power management mechanisms. Our third contribution is to evaluate design trade-offs between verifiability and other established metrics, like performance and power, in the context of multi-core dynamic power management schemes. Fourth, we re-design several components to increase their verifiability. Finally, we propose design guidelines for increasing verifiability. In the context of single-core processors our guidelines refer to the organization of caches and translation lookaside buffers (TLBs), the depth of the core's pipeline, and the type of ALUs used, while for multi-core processors we refer to dynamic power management schemes (DPMs) for power capping.

Our results confirm that making design choices with verifiability as a first-class design constraint has the capacity to decrease the verification effort. Furthermore, making explicit trade-offs between verifiability, performance and power helps identify better design points for given verification, performance, and power goals.
6

Abstraction discovery and refinement for model checking by symbolic trajectory evaluation

Adams, Sara Elisabeth, January 2014
This dissertation documents two contributions to automating the formal verification of hardware – particularly memory-intensive circuits – by Symbolic Trajectory Evaluation (STE), a model checking technique based on symbolic simulation over abstract sets of states. The contributions focus on improvements to the use of BDD-based STE, which uses binary decision diagrams internally. We introduce a solution to one of the major hurdles in using STE: finding suitable abstractions. Our work has produced the first known algorithm that addresses this problem by automatically discovering good, non-trivial abstractions. These abstractions are computed from the specification, and essentially encode partial input combinations sufficient for determining the specification's output value. They can then be used to verify whether the hardware model meets its specification using a technique based on, and significantly extending, previous work by Melham and Jones [2]. Moreover, we prove that our algorithm delivers correct results by construction. We demonstrate that the abstractions produced by our algorithm can greatly reduce verification costs with three example hardware designs, typical of the kind of problems faced by the semiconductor design industry. We further propose a refinement method for abstraction schemes when over-abstraction occurs, i.e., when the abstraction hides too much information of the original design to determine whether it meets its specification. The refinement algorithm we present is based on previous work by Chockler et al. [3], which selects refinement candidates by approximating which abstracted input is likely the biggest cause of the abstraction being unsuitable. We extend this work substantially, concentrating on three aspects. First, we suggest how the approach can also work for much more general abstraction schemes. This enables refining any abstraction allowed in STE, rather than just a subset. Second, Chockler et al. describe how to refine an abstraction once a refinement candidate has been identified. We present three additional variants of refining the abstraction. Third, the refinement at its core depends on evaluating circuit logic gates. The previous work offered solutions for NOT- and AND-gates. We propose a general approach to evaluating arbitrary logic gates, which improves the selection process of refinement candidates. We show the effectiveness of our work by automatically refining an abstraction for a content-addressable memory that exhibits over-abstraction, and by evaluating some common logic gates. These two contributions can be used independently to help automate hardware verification by STE, but they also complement each other. To show this, we combine both algorithms to create a fully automatic abstraction discovery and refinement loop. The only inputs required are the hardware design and the specification, which the design should meet. While only small circuits could be verified completely automatically, it clearly shows that our two contributions allow the construction of a verification framework that does not require any user interaction.
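The kind of abstraction STE manipulates — partial input information that may nevertheless determine an output — can be pictured with a three-valued gate evaluator, where an extra value X stands for "unknown". This is a generic textbook-style sketch, not code from the dissertation, and the gate functions below are illustrative names.

```python
# Minimal sketch: three-valued gate evaluation over {0, 1, X}, where X
# abstracts an unknown signal value.  A partial input combination can
# already fix a gate's output, which is the information an STE
# abstraction needs to encode.

X = "X"   # the "unknown" value

def and3(a, b):
    if a == 0 or b == 0:
        return 0              # a single 0 forces the output, even if the
    if a == 1 and b == 1:     # other input is still unknown
        return 1
    return X

def not3(a):
    return X if a == X else 1 - a

def or3(a, b):
    return not3(and3(not3(a), not3(b)))   # De Morgan over the 3-valued domain

print(and3(0, X))    # 0  -- the second input is irrelevant here
print(or3(X, 1))     # 1
print(and3(1, X))    # X  -- not enough information yet
```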
7

Feature-Oriented Specification of Hardware Bus Protocols

Freitas, Paul Michael, 29 April 2008
Hardware engineers frequently create formal specification documents as part of the verification process. Doing so is a time-consuming and error-prone process, as the primary documents for communications and standards use a mixture of prose, diagrams and tables. We would like this process to be partially automated, in which the engineer's role would be to refine a machine-generated skeleton of a specification's formal model. We have created a preliminary intermediate language which allows specifications to be captured using formal semantics, and allows an engineer to easily find, understand, and modify critical portions of the specification. We have converted most of ARM's AMBA AHB specification to our language; our representation is able to follow the structure of the original document.
8

Efficient Verification of Bit-Level Pipelined Machines Using Refinement

Srinivasan, Sudarshan Kumar, 24 August 2007
Functional verification is a critical problem facing the semiconductor industry: hardware designs are extremely complex and highly optimized, and even a single bug in deployed systems can cost more than $10 billion. We focus on the verification of pipelining, a key optimization that appears extensively in hardware systems such as microprocessors, multicore systems, and cache coherence protocols. Existing techniques for verifying pipelined machines either consume excessive amounts of time, effort, and resources, or are not applicable at the bit-level, the level of abstraction at which commercial systems are designed and functionally verified. We present a highly automated, efficient, compositional, and scalable refinement-based approach for the verification of bit-level pipelined machines. Our contributions include: (1) A complete compositional reasoning framework based on refinement. Our notion of refinement guarantees that pipelined machines satisfy the same safety and liveness properties as their instruction set architectures. In addition, our compositional framework can be used to decompose correctness proofs into smaller, more manageable pieces, leading to drastic reductions in verification times and a high degree of scalability. (2) The development of ACL2-SMT, a verification system that integrates the popular ACL2 theorem prover (winner of the 2005 ACM Software System Award) with decision procedures. ACL2-SMT allows us to seamlessly take advantage of the two main approaches to hardware verification: theorem proving and decision procedures. (3) A proof methodology based on our compositional reasoning framework and ACL2-SMT that allows us to reduce the bit-level verification problem to a sequence of highly automated proof steps. (4) A collection of general-purpose refinement maps, functions that relate pipelined machine states to instruction set architecture states. These refinement maps provide more flexibility and lead to increased verification efficiency. The effectiveness of our approach is demonstrated by verifying various pipelined machine models, including a bit-level, Intel XScale-inspired processor that implements 593 instructions and includes features such as branch prediction, precise exceptions, and predicated instruction execution.
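The idea of a refinement map — a function relating pipelined states to ISA states — can be illustrated with a deliberately tiny example. The two machines below are hypothetical (one ADD-style instruction, a one-deep pipeline latch), not the thesis's bit-level models; the map "flushes" in-flight work and then compares programmer-visible state.

```python
# Minimal sketch of a flushing-style refinement map for a toy pipeline.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ISA:
    regs: dict = field(default_factory=dict)

    def step(self, instr):
        dst, src1, src2 = instr                      # instruction form: dst := src1 + src2
        self.regs[dst] = self.regs.get(src1, 0) + self.regs.get(src2, 0)

@dataclass
class Pipeline:
    regs: dict = field(default_factory=dict)
    latch: Optional[tuple] = None                    # fetched but not yet committed

    def step(self, instr):
        self.commit()                                # retire whatever is in flight
        self.latch = instr                           # then fetch the next instruction

    def commit(self):
        if self.latch is not None:
            dst, src1, src2 = self.latch
            self.regs[dst] = self.regs.get(src1, 0) + self.regs.get(src2, 0)
            self.latch = None

def refinement_map(p: Pipeline) -> dict:
    """Project a pipelined state to an ISA state by flushing in-flight work."""
    flushed = Pipeline(regs=dict(p.regs), latch=p.latch)
    flushed.commit()
    return flushed.regs

# Run the same toy program on both machines and compare through the map.
prog = [("r1", "r0", "r0"), ("r2", "r1", "r1")]
isa, pipe = ISA({"r0": 1}), Pipeline({"r0": 1})
for instr in prog:
    isa.step(instr)
    pipe.step(instr)
print(refinement_map(pipe) == isa.regs)              # True for this run
```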
9

A verified framework for symbolic execution in the ACL2 theorem prover

Swords, Sol Otis, 11 February 2011
Mechanized theorem proving is a promising means of formally establishing facts about complex systems. However, in applying theorem proving methodologies to industrial-scale hardware and software systems, a large amount of user interaction is required in order to prove useful properties. In practice, the human user tasked with such a verification must gain a deep understanding of the system to be verified, and prove numerous lemmas in order to allow the theorem proving program to approach a proof of the desired fact. Furthermore, proofs that fail during this process are a source of confusion: the proof may either fail because the conjecture was false, or because the prover required more help from the user in order to reach the desired conclusion. We have implemented a symbolic execution framework inside the ACL2 theorem prover in order to help address these issues on certain problem domains. Our framework introduces a proof strategy that applies bit-level symbolic execution using BDDs to finite-domain problems. This proof strategy is a fully verified decision procedure for such problems, and on many useful problem domains its capacity vastly exceeds that of exhaustive testing. Our framework also produces counterexamples for conjectures that it determines to be false. Our framework seeks to reduce the amount of necessary user interaction in proving theorems about industrial-scale hardware and software systems. By increasing the automation available in the prover, we allow the user to complete useful proofs while understanding less of the detailed implementation of the system. Furthermore, by producing counterexamples for falsified conjectures, our framework reduces the time spent by the user in trying to determine why a proof failed.
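The role such a decision procedure plays can be sketched very roughly: given a conjecture over a fixed set of Boolean variables, decide it, and return a counterexample when it is false. In the sketch below plain enumeration stands in for the BDD-based symbolic execution; none of this is ACL2 code, and the function names are made up for the illustration.

```python
# Minimal sketch: decide a finite-domain Boolean conjecture and produce a
# counterexample when it fails (the counterexample is what tells the user
# the conjecture itself was false, rather than the prover needing help).
from itertools import product

def decide(conjecture, num_vars: int):
    """Return (True, None) if the conjecture holds for every assignment
    of num_vars Booleans, otherwise (False, counterexample)."""
    for bits in product([False, True], repeat=num_vars):
        if not conjecture(*bits):
            return False, bits
    return True, None

# A true conjecture over four bits: 2-bit addition modulo 4 is commutative.
def add_comm(a0, a1, b0, b1):
    a, b = a0 + 2 * a1, b0 + 2 * b1
    return (a + b) % 4 == (b + a) % 4

print(decide(add_comm, 4))                        # (True, None)

# A false conjecture, to show counterexample generation: "x implies y".
print(decide(lambda x, y: (not x) or y, 2))       # (False, (True, False))
```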
