1.
Deductive mechanical verification of concurrent systems. Sumners, Robert W. 28 August 2008 (has links)
Not available

2.
Concurrent verification for sequential programs. Wickerson, John Peter. January 2013 (has links)
This dissertation makes two contributions to the field of software verification. The first explains how verification techniques originally developed for concurrency can be usefully applied to sequential programs. The second describes how sequential programs can be verified using diagrams that have a parallel nature. The first contribution involves a new treatment of stability in verification methods based on rely-guarantee. When an assertion made in one thread of a concurrent system cannot be invalidated by the actions of other threads, that assertion is said to be 'stable'. Stability is normally enforced through side-conditions on rely-guarantee proof rules. This dissertation proposes instead to encode stability information into the syntactic form of the assertion. This approach, which we call explicit stabilisation, brings several benefits. First, we empower rely-guarantee with the ability to reason about library code for the first time. Second, when the rely-guarantee method is redeployed in a sequential setting, explicit stabilisation allows more details of a module's implementation to be hidden when verifying clients. Third, explicit stabilisation brings a more nuanced understanding of the important issue of stability in concurrent and sequential verification; such an understanding grows ever more important as verification techniques grow ever more complex. The second contribution is a new method of presenting program proofs conducted in separation logic. Building on work by Jules Bean, the ribbon proof is a diagrammatic alternative to the standard 'proof outline'. By emphasising the structure of a proof, ribbon proofs are intelligible and hence useful pedagogically. Because they contain less redundancy than proof outlines, and allow each proof step to be checked locally, they are highly scalable; this we illustrate with a ribbon proof of the Version 7 Unix memory manager. Where proof outlines are cumbersome to modify, ribbon proofs can be visually manoeuvred to yield proofs of variant programs. We describe the ribbon proof system, prove its soundness and completeness, and outline a prototype tool for mechanically checking the diagrams it produces.
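As a rough illustration of explicit stabilisation, the Python sketch below computes the two stabilised forms of an assertion over a finite state space. The encoding is our assumption for illustration (an assertion as the set of states satisfying it, the rely as a set of interference pairs), not the dissertation's formalism, which works syntactically on assertions.

```python
# Hypothetical finite-state illustration of explicit stabilisation.
# An assertion p is a set of states; the rely is a set of (pre, post)
# pairs describing steps the environment may take.

def stabilise_up(p, rely):
    """Strongest stable assertion weaker than p: close p under rely steps."""
    result = set(p)
    changed = True
    while changed:
        changed = False
        for pre, post in rely:
            if pre in result and post not in result:
                result.add(post)
                changed = True
    return result

def stabilise_down(p, rely):
    """Weakest stable assertion stronger than p: largest stable subset of p."""
    result = set(p)
    changed = True
    while changed:
        changed = False
        for pre, post in rely:
            if pre in result and post not in result:
                result.discard(pre)
                changed = True
    return result

# Example: other threads may only increment a counter (up to 3).
rely = {(0, 1), (1, 2), (2, 3)}
p = {0, 1}                      # "counter <= 1" is not stable under rely
print(stabilise_up(p, rely))    # {0, 1, 2, 3}: all we may assume afterwards
print(stabilise_down(p, rely))  # set(): no nonempty subset survives increments
```

The two operators correspond to the weakening and strengthening forms of a stabilised assertion; recording them in the assertion itself, rather than as side-conditions on proof rules, is what lets stability be manipulated explicitly within a proof.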

3.
Sequential redundancy identification using transformation-based verification. Mony, Hari, 1977-. 29 August 2008 (has links)
The design of complex digital hardware is challenging and error-prone. With short design cycles and increasing complexity of designs, functional verification has become the most expensive and time-consuming aspect of the digital design process. Sequential equivalence checking (SEC) has been proposed as a verification framework to perform a true sequential check of input/output equivalence between two designs. SEC provides several benefits that can enable a faster and more efficient way to design and verify large and complex digital hardware. It can be used to prove that micro-architectural optimizations needed for design closure preserve design functionality, and thus avoid the costly and incomplete functional verification regression traditionally used for such purposes. Moreover, SEC can be used to validate sequential synthesis transformations and thereby enable design and verification at a higher level of abstraction. Use of sequential synthesis leads to shorter design cycles and can result in a significant improvement in design quality. In this dissertation, we study the problem of sequential redundancy identification to enable robust sequential equivalence checking solutions. In particular, we focus on the use of a transformation-based verification framework to synergistically leverage various transformations to simplify and decompose large problems which arise during sequential redundancy identification, enabling an efficient and highly scalable SEC solution. We make five main contributions in this dissertation. First, we introduce a novel sequential redundancy identification framework that dramatically increases the scalability of SEC. Second, we propose the use of a flexible and synergistic set of transformation and verification algorithms for sequential redundancy identification. This more general approach enables greater speed and scalability and identifies a significantly greater degree of redundancy than previous approaches. Third, we introduce the theory and practice of transformation-based verification in the presence of constraints. Constraints are pervasively used in verification testbenches to specify environmental assumptions to prevent illegal input scenarios. Fourth, we develop the theoretical framework, with a corresponding efficient implementation, for optimal sequential redundancy identification in the presence of constraints. Fifth, we address the scalability of transformation-based verification by proposing two new structural abstraction techniques. We also study the synergies between various transformation algorithms and propose new strategies for using these transformations to enable scalable sequential redundancy identification.
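The check at the heart of SEC can be sketched naively: explore the product of two machines and report any reachable disagreement on outputs. The Mealy-machine encoding below is an assumption for illustration; the dissertation's contribution is making such checks scale through transformations, not this explicit product search.

```python
from collections import deque

def sec_check(m1, m2, start1, start2, inputs):
    """m1, m2 map (state, input) -> (next_state, output); return True if
    the two machines produce identical outputs on every input sequence."""
    seen = {(start1, start2)}
    frontier = deque([(start1, start2)])
    while frontier:
        s1, s2 = frontier.popleft()
        for i in inputs:
            n1, o1 = m1[(s1, i)]
            n2, o2 = m2[(s2, i)]
            if o1 != o2:
                return False            # reachable disagreement: not equivalent
            if (n1, n2) not in seen:
                seen.add((n1, n2))
                frontier.append((n1, n2))
    return True

# A: a 1-bit toggle; on input i the stored bit becomes s XOR i and is output.
A = {(s, i): (s ^ i, s ^ i) for s in (0, 1) for i in (0, 1)}

# B: the same behaviour over a redundant 4-state encoding (bit = state % 2):
# input 1 steps +1 (flips parity), input 0 steps +2 (keeps parity).
def b_next(t, i):
    return (t + (1 if i else 2)) % 4

B = {(t, i): (b_next(t, i), b_next(t, i) % 2) for t in range(4) for i in (0, 1)}

print(sec_check(A, B, 0, 0, inputs=(0, 1)))  # True: sequentially equivalent
```

Here B contains redundant state: identifying that its four states collapse to two is exactly the kind of sequential redundancy identification that makes the equivalence check tractable on real designs.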

4.
Knowledge-based support for software selection in information centers: Design criteria, development issues, and empirical evaluation. Vinze, Ajay Shreekrishna. January 1988 (has links)
An information center (IC) is described as an organization designed to help end users help themselves. ICs are expected to provide several services to end users. The services can be summarized as: consultation, distribution and trouble-shooting. The research is focused on a specific consultation activity: software selection. Providing support for selection and evaluation of software for users constitutes 91.5 percent of a typical IC's daily workload. In the last decade, ICs have proved successful in managing software resources for organizations. The initial success of ICs has increased user expectations and demand for the services offered but, because ICs are considered cost centers in most organizations, there is growing pressure for them to accomplish more with fewer resources. The research hypothesis is that the knowledge and methodologies of IC consultants, concerning software selection, as well as relevant institutional policies, can be represented in a knowledge base. A knowledge-based system, ICE (Information Center Expert), to assist users with software selection has been developed and evaluated in the study reported here. The development of ICE used two main design criteria: maintainability and transportability. Maintainability was defined as the ability to support frequent updating of the software supported by an IC. This is important because new software tools are introduced in the market at a very rapid rate; to stay competitive, an IC must be able to adapt continually to this dynamic environment. Transportability was considered necessary to make ICE usable in many different ICs, each supporting a different set of software. The transportability feature allows different ICs to individualize the system to meet their own site-specific needs. Validation studies were conducted to test the appropriateness of the recommendations made by ICE, using "blind" validation procedures in which scenarios (in case form) were presented to consultants. The cases were selected to represent problems frequently taken to an IC. Two sets of solutions, those offered by consultants and those provided by ICE, were then presented to experts who were asked to judge the appropriateness of each solution to a case without knowing its source. To test the comparative advantages of using ICE or IC consultants to obtain assistance with software selection, a laboratory experiment was conducted. A hypothetical construct called "Consultation Effectiveness" was used, which included measures for "user satisfaction" with the process, as well as measures for the "task basis" and the "recommendation basis" for evaluating a consultation session.
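The knowledge-base idea can be sketched as condition-action rules matched against a user profile; the rules and package names below are entirely hypothetical, standing in for the consultant knowledge and site policy an IC would actually maintain.

```python
# Hypothetical rule base: each rule pairs required profile features
# with a package to recommend when all of them match.
RULES = [
    ({'task': 'statistics', 'user_level': 'novice'}, 'MenuStat'),
    ({'task': 'statistics', 'user_level': 'expert'}, 'StatLang'),
    ({'task': 'reporting'},                          'ReportGen'),
]

def recommend(profile):
    """Return every package whose rule conditions the profile satisfies."""
    return [pkg for cond, pkg in RULES
            if all(profile.get(k) == v for k, v in cond.items())]

print(recommend({'task': 'statistics', 'user_level': 'novice'}))  # ['MenuStat']
```

Keeping the rules as data, separate from the matching code, is one simple way to serve both design criteria: a site meets the maintainability and transportability goals by editing its own rule base rather than the system.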

5.
Interactive program verification using virtual programs. Topor, Rodney W. January 1975 (has links)
This thesis is concerned with ways of proving the correctness of computer programs. The first part of the thesis presents a new method for doing this. The method, called continuation induction, is based on the ideas of symbolic execution, the description of a given program by a virtual program, and the demonstration that these two programs are equivalent whenever the given program terminates. The main advantage of continuation induction over other methods is that it enables programs using a wide variety of programming constructs such as recursion, iteration, non-determinism, procedures with side-effects and jumps out of blocks to be handled in a natural and uniform way. In the second part of the thesis a program verifier which uses both this method and Floyd's inductive assertion method is described. The significance of this verifier is that it is designed to be extensible, and to this end the user can declare new functions and predicates to be used in giving a natural description of the program's intention. Rules describing these new functions can then be used when verifying the program. To actually prove the verification conditions, the system employs automatic simplification, a relatively clever matcher, a simple natural deduction system and, most importantly, the user's advice. A large number of commands are provided for the user in guiding the system to a proof of the program's correctness. The system has been used to verify various programs, including two sorting programs and a program to invert a permutation 'in place'; the proofs of the sorting programs included a proof of the fact that the final array was a permutation of the original one. Finally, some observations and suggestions are made concerning the continued development of such interactive verification systems.
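The correctness statement behind continuation induction, that the given program and its virtual program agree whenever the given program terminates, can be illustrated on the permutation-inversion case study. The sketch below only tests the equivalence on small inputs; the thesis establishes it symbolically, and the particular pair of programs here is our assumption.

```python
from itertools import permutations

def invert_given(p):
    """The 'given program': an explicit loop computing the inverse."""
    inv = [0] * len(p)
    for i in range(len(p)):
        inv[p[i]] = i
    return inv

def invert_virtual(p):
    """The 'virtual program': the intended result, described directly."""
    return [p.index(j) for j in range(len(p))]

for n in range(5):
    for p in permutations(range(n)):
        assert invert_given(list(p)) == invert_virtual(list(p))
print("given and virtual programs agree on all permutations up to size 4")
```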

6.
Formal symbolic verification using heuristic search and abstraction techniques. Qian, Kairong, Computer Science & Engineering, Faculty of Engineering, UNSW. January 2006 (has links)
Computing devices are pervading our everyday life and imposing challenges for designers that have the responsibility of producing reliable hardware and software systems. As systems grow in size and complexity, it becomes increasingly difficult to verify whether a design works as intended. Conventional verification methods, such as simulation and testing, exercise only parts of the system and, from these parts, draw conclusions about the correctness of the total design. For complex designs, the parts of the system that can be verified are relatively small. Formal verification aims to overcome this problem. Instead of exercising the system, formal verification builds mathematical models of designs and proves whether properties hold in these models. In doing so, it at least aims to cover the complete design. Model checking is a formal verification method that automatically verifies a model of a design, or generates diagnostic information if the model cannot be verified. It is because of this usability and level of automation that model checking has gained a high degree of success in verifying circuit designs. The major disadvantage of model checking is its poor scalability. This is due to its algorithmic nature: namely, every state of the model needs to be enumerated. In practice, properties of interest may not need the exhaustive enumeration of the model state space. Many properties can be verified (or falsified) by examining a small number of states. In such cases, exhaustive algorithms can be replaced with search algorithms that are directed by heuristics. Methods based on heuristics generally scale well. This thesis investigates non-exhaustive model checking algorithms and focuses on error detection in system verification. The approach is based on a novel integration of symbolic model checking, heuristic search and abstraction techniques to produce a framework that we call abstraction-directed model checking. There are three main components in this framework. First, binary decision diagrams (BDDs) and heuristic search are combined to develop a symbolic heuristic search algorithm. This algorithm is used to detect errors. Second, abstraction techniques are applied in an iterative way. In the initial phase, the model is abstracted, and this model is verified using exhaustive algorithms. If a definitive verification result cannot be obtained, the same abstraction is re-used to generate a search heuristic. The heuristic in turn is used to direct a search algorithm that searches for error states in the concrete model. Third, a model transformation mechanism converts an arbitrary branching-time property to a reachability property. Essentially, this component allows this framework to be applied to a more general class of temporal property. By amalgamating these three components, the framework offers a new verification methodology that speeds up error detection in formal verification. The current implementation of this framework indicates that it can outperform existing standard techniques both in run-time and memory consumption, and scales much better than conventional model checking.
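A minimal sketch of the second component, assuming explicit states rather than the BDD representation used in the thesis: distances to the error in the abstracted model serve as a heuristic that directs the search for error states in the concrete model. The toy transition system and the mod-4 abstraction are our inventions for illustration.

```python
import heapq
from collections import deque

def abstract_distances(abs_succ, abs_errors):
    """Backward BFS in the abstract model: distance to the nearest error."""
    preds = {}
    for a, succs in abs_succ.items():
        for b in succs:
            preds.setdefault(b, []).append(a)
    dist = {a: 0 for a in abs_errors}
    frontier = deque(abs_errors)
    while frontier:
        b = frontier.popleft()
        for a in preds.get(b, []):
            if a not in dist:
                dist[a] = dist[b] + 1
                frontier.append(a)
    return dist

def directed_search(init, succ, is_error, alpha, hdist):
    """A*-style search over concrete states, ordered by abstract distance."""
    heap = [(hdist.get(alpha(init), float('inf')), 0, init)]
    seen = {init}
    while heap:
        _, g, s = heapq.heappop(heap)
        if is_error(s):
            return s                    # an error state was reached
        for t in succ(s):
            if t not in seen:
                seen.add(t)
                h = hdist.get(alpha(t), float('inf'))
                heapq.heappush(heap, (g + 1 + h, g + 1, t))
    return None                         # no error state is reachable

# Toy concrete model: a counter mod 16 stepping +1 or +3; the error state
# is 13. The abstraction maps each state to its value mod 4.
N = 16
succ = lambda s: [(s + 1) % N, (s + 3) % N]
alpha = lambda s: s % 4
abs_succ = {a: {(a + 1) % 4, (a + 3) % 4} for a in range(4)}
hdist = abstract_distances(abs_succ, abs_errors={13 % 4})
print(directed_search(0, succ, lambda s: s == 13, alpha, hdist))  # 13
```

Because every concrete step maps to an abstract step, the abstract distance never overestimates the concrete one, so the heuristic only reorders the search; exhaustive coverage remains the fallback when no error exists.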

7.
A flexible framework for leveraging verification tools to enhance the verification technologies available for policy enforcement. Larkin, James. Unknown Date (has links)
Program verification is vital as more and more users are creating, downloading and executing foreign computer programs. Software verification tools provide a means for determining if a program adheres to a user’s security requirements, or security policy. Many verification tools exist for checking different types of policies on different types of programs. Currently, however, there is no verification tool capable of determining if all types of programs satisfy all types of policies. This thesis describes a framework for supporting multiple verification tools to determine whether programs satisfy a user’s security requirements. A user’s security requirements are represented at multiple levels of abstraction as Intermediate Execution Environments. Using a sequence of configurations, a user’s security requirements are transformed from the abstract level to the tool level, possibly for multiple verification tools. Using a number of case studies, the validity of the framework is shown.
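One common low-level representation of such requirements is a security automaton that a back-end tool can check traces against. The policy below ("no send after reading a secret"), its events, and the encoding are our assumptions for illustration, not the thesis's Intermediate Execution Environments.

```python
# Hypothetical security automaton: state -> {event: next_state}.
# An event with no outgoing transition signals a policy violation.
POLICY = {
    'clean':   {'read_secret': 'tainted', 'send': 'clean'},
    'tainted': {'read_secret': 'tainted'},   # no 'send' once tainted
}

def satisfies(trace, start='clean'):
    """Run the automaton over a trace; False as soon as it gets stuck."""
    state = start
    for event in trace:
        if event not in POLICY[state]:
            return False
        state = POLICY[state][event]
    return True

print(satisfies(['send', 'read_secret']))  # True
print(satisfies(['read_secret', 'send']))  # False: the secret would leak
```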

8.
Advances in space and time efficient model checking of finite state systems. Parashkevov, Atanas. January 2002 (has links) (PDF)
Bibliography: leaves 211-220 / This thesis examines automated formal verification techniques and their associated space and time implementation complexity when applied to finite state concurrent systems. The focus is on concurrent systems expressed in the Communicating Sequential Processes (CSP) framework. An approach to the compilation of CSP system descriptions into boolean formulae in the form of Ordered Binary Decision Diagrams (OBDDs) is presented, further utilised by a basic algorithm that checks a refinement or equivalence relation between a pair of processes in any of the three CSP semantic models. The performance bottlenecks of the basic refinement checking algorithms are identified and addressed with the introduction of a number of novel techniques and algorithms. Algorithms described in this thesis are implemented in the Adelaide Refinement Checking Tool.
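The refinement check itself can be sketched in the traces model: every event the implementation can perform must also be allowed by the specification. The explicit-state search below assumes a deterministic specification; a real checker first normalises the specification and, as in this thesis, can represent the explored state pairs symbolically with OBDDs.

```python
from collections import deque

def trace_refines(spec, impl, s0, i0):
    """spec, impl: dicts state -> {event: next_state} (deterministic).
    True iff every trace of impl is also a trace of spec."""
    seen = {(s0, i0)}
    frontier = deque([(s0, i0)])
    while frontier:
        s, i = frontier.popleft()
        for event, i_next in impl[i].items():
            if event not in spec[s]:
                return False            # impl performs an event spec forbids
            pair = (spec[s][event], i_next)
            if pair not in seen:
                seen.add(pair)
                frontier.append(pair)
    return True

# Spec: alternate 'coin' and 'coffee'. Impl: the same machine, except it
# stops after the coin; trace refinement permits fewer behaviours.
SPEC = {0: {'coin': 1}, 1: {'coffee': 0}}
IMPL = {0: {'coin': 1}, 1: {}}
print(trace_refines(SPEC, IMPL, 0, 0))  # True
print(trace_refines(IMPL, SPEC, 0, 0))  # False: 'coffee' is not allowed
```

Refinement in the failures and failures-divergences models strengthens this traces check with refusal and divergence information, which is why the basic algorithm is parameterised by the semantic model.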

9.
Advances in space and time efficient model checking of finite state systems / Atanas Nikolaev Parashkevov. Parashkevov, Atanas. January 2002 (has links)
Bibliography: leaves 211-220 / xviii, 220 leaves : charts ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / This thesis examines automated formal verification techniques and their associated space and time implementation complexity when applied to finite state concurrent systems. The focus is on concurrent systems expressed in the Communicating Sequential Processes (CSP) framework. An approach to the compilation of CSP system descriptions into boolean formulae in the form of Ordered Binary Decision Diagrams (OBDDs) is presented, further utilised by a basic algorithm that checks a refinement or equivalence relation between a pair of processes in any of the three CSP semantic models. The performance bottlenecks of the basic refinement checking algorithms are identified and addressed with the introduction of a number of novel techniques and algorithms. Algorithms described in this thesis are implemented in the Adelaide Refinement Checking Tool. / Thesis (Ph.D.)--University of Adelaide, Dept. of Computer Science, 2002

10.
Integration of model checking into software development processes. Xie, Fei. 28 August 2008 (has links)
Not available