1 |
A structured approach to the verification of low level microcode. Curzon, Paul. January 1990 (has links)
No description available.
|
2 |
Deductive mechanical verification of concurrent systems. Sumners, Robert W. 28 August 2008 (has links)
Not available / text
|
3 |
Concurrent verification for sequential programs. Wickerson, John Peter. January 2013 (has links)
This dissertation makes two contributions to the field of software verification. The first explains how verification techniques originally developed for concurrency can be usefully applied to sequential programs. The second describes how sequential programs can be verified using diagrams that have a parallel nature. The first contribution involves a new treatment of stability in verification methods based on rely-guarantee. When an assertion made in one thread of a concurrent system cannot be invalidated by the actions of other threads, that assertion is said to be 'stable'. Stability is normally enforced through side-conditions on rely-guarantee proof rules. This dissertation proposes instead to encode stability information into the syntactic form of the assertion. This approach, which we call explicit stabilisation, brings several benefits. First, we empower rely-guarantee with the ability to reason about library code for the first time. Second, when the rely-guarantee method is redeployed in a sequential setting, explicit stabilisation allows more details of a module's implementation to be hidden when verifying clients. Third, explicit stabilisation brings a more nuanced understanding of the important issue of stability in concurrent and sequential verification; such an understanding grows ever more important as verification techniques grow ever more complex. The second contribution is a new method of presenting program proofs conducted in separation logic. Building on work by Jules Bean, the ribbon proof is a diagrammatic alternative to the standard 'proof outline'. By emphasising the structure of a proof, ribbon proofs are intelligible and hence useful pedagogically. Because they contain less redundancy than proof outlines, and allow each proof step to be checked locally, they are highly scalable; this we illustrate with a ribbon proof of the Version 7 Unix memory manager. Where proof outlines are cumbersome to modify, ribbon proofs can be visually manoeuvred to yield proofs of variant programs. We describe the ribbon proof system, prove its soundness and completeness, and outline a prototype tool for mechanically checking the diagrams it produces.
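As a rough illustration of the stability notion at the heart of explicit stabilisation, the following sketch models assertions as sets of states and a rely relation as a set of state pairs; the operator names and the toy counter example are invented for illustration and are not the thesis's formalism.

```python
# A minimal sketch: an assertion is a set of states, a rely relation is a
# set of (pre, post) state pairs describing possible environment steps.

def is_stable(assertion, rely):
    """An assertion is stable if no rely step leads from a state inside it
    to a state outside it."""
    return all(post in assertion for (pre, post) in rely if pre in assertion)

def stabilise(assertion, rely):
    """Weaken an assertion to its closure under the rely relation, i.e. the
    strongest stable assertion implied by it (a stand-in for making
    stabilisation explicit in the assertion itself)."""
    closed = set(assertion)
    changed = True
    while changed:
        changed = False
        for (pre, post) in rely:
            if pre in closed and post not in closed:
                closed.add(post)
                changed = True
    return closed

# Toy example: states are values of a shared counter that the environment
# may only increment (a hypothetical rely relation).
rely = {(n, n + 1) for n in range(4)}
p = {2}                          # "counter == 2"
print(is_stable(p, rely))        # False: an environment increment invalidates it
print(stabilise(p, rely))        # {2, 3, 4}: "counter >= 2" is stable
```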
|
4 |
Machine-Learning-Assisted Test Generation to Characterize Failures for Cyber-Physical Systems. Chandar, Abhishek. 17 July 2023 (has links)
With advancements in the Internet of Things (IoT) and innovations in the networking domain, Cyber-Physical Systems (CPS) are rapidly being adopted in domains ranging from autonomous vehicles to manufacturing systems to improve the efficiency of the overall development of complex physical systems. CPS models allow an easy and cost-effective approach to altering the architecture of the system to yield optimal performance, which is especially crucial in the early stages of development of a physical system. Developing effective testing strategies for CPS models is necessary to ensure that there are no defects during the execution of the system. Typically, a set of requirements is defined from domain expertise to assert the system's behavior on different possible inputs. Effective testing of a CPS requires observing its behavior on a large number and wide variety of test inputs. However, real-world CPS models are compute-intensive (i.e., executing the CPS for a given test input takes a significant amount of time), so it is almost impossible to execute CPS models over a large number of test inputs. This leads to sub-optimal fixes based on the identified defects, which may cause costly issues at later stages of development.
In this thesis, we aim to improve the efficiency of existing search-based software testing approaches for compute-intensive CPS by combining them with ML; we call these ML-assisted test generation. In this work, we investigate two alternative ML-assisted test generation techniques, (1) surrogate-assisted and (2) ML-guided test generation, to efficiently test a given CPS model. Both the surrogate-assisted and ML-guided techniques can generate many test inputs, so we propose to build failure models that derive explainable rules from the failure-inducing test inputs of the CPS model. Surrogate-assisted test generation uses ML as a replacement for the CPS under test, so that the fitness values of some test inputs are predicted rather than obtained by executing the CPS. A large number of test inputs are generated by combining cheap surrogate predictions with compute-intensive executions of the CPS model that establish the labels of the test inputs. Specifically, we propose a new surrogate-assisted test generation technique that leverages multiple surrogate models simultaneously and dynamically selects the prediction of the most accurate surrogate. Alternatively, ML-guided test generation aims to estimate the boundary regions that separate test inputs that pass the requirements from test inputs that fail them, and subsequently guides the sampling of test inputs from these boundary regions. Further, the test data generated by the ML-assisted test generation techniques are used to infer two alternative failure models, namely the Decision Rule Model (DRM) and the Decision Tree Model (DTM), that characterize the failure circumstances of the CPS model. We conduct an empirical evaluation of the accuracy of failure models inferred from test data generated by both ML-assisted test generation techniques. Using a total of 15 different functional requirements from 5 Simulink-based benchmark CPS, we observed that the proposed dynamic surrogate-assisted test generation technique generates failure models with an average accuracy of 83% for DRM and 90% for DTM. This represents a 16.9% improvement in the average accuracy of DRM and a 7.1% improvement in the average accuracy of DTM compared to the random search baseline.
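The following sketch gives a rough, simplified picture of the dynamic surrogate-assisted idea; the CPS simulator, requirement, thresholds, and parameters are invented stand-ins, and the scikit-learn models are only examples of possible surrogates, not the thesis's tooling. Several surrogates are fitted on already-executed tests, the currently most accurate one screens a large pool of candidates, and only inputs predicted to lie near the pass/fail boundary are passed to the expensive simulation; the labelled results then feed a decision-tree-style failure model.

```python
# A minimal sketch of dynamic surrogate-assisted test generation.
# run_cps() stands in for the compute-intensive CPS execution; a negative
# fitness value is treated as a requirement violation (a failure).
import random
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeClassifier

def run_cps(x):
    return x[0] * x[0] - x[1]        # hypothetical robustness of a requirement

random.seed(0)
X = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(40)]
y = [run_cps(x) for x in X]

# Fit several cheap surrogates on executed tests ...
surrogates = [Ridge(), RandomForestRegressor(n_estimators=50, random_state=0)]
for s in surrogates:
    s.fit(X[:30], y[:30])

# ... and dynamically pick whichever is most accurate on held-out tests.
def held_out_error(s):
    return sum(abs(p - fx) for p, fx in zip(s.predict(X[30:]), y[30:]))
best = min(surrogates, key=held_out_error)

# Screen a large candidate pool cheaply; simulate only inputs predicted to
# lie near the pass/fail boundary (fitness close to zero).
candidates = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(2000)]
predictions = best.predict(candidates)
suspicious = [x for x, p in zip(candidates, predictions) if abs(p) < 0.1]
labels = [run_cps(x) < 0 for x in suspicious]          # True = failure-inducing

# A decision tree over the labelled inputs plays the role of a DTM-style
# failure model characterising the failure region.
failure_model = DecisionTreeClassifier(max_depth=3).fit(suspicious, labels)
print(len(suspicious), "of", len(candidates), "candidates were simulated")
```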
|
5 |
Sequential redundancy identification using transformation-based verification. Mony, Hari, 1977-. 29 August 2008 (has links)
The design of complex digital hardware is challenging and error-prone. With short design cycles and increasing complexity of designs, functional verification has become the most expensive and time-consuming aspect of the digital design process. Sequential equivalence checking (SEC) has been proposed as a verification framework to perform a true sequential check of input/output equivalence between two designs. SEC provides several benefits that can enable a faster and more efficient way to design and verify large and complex digital hardware. It can be used to prove that micro-architectural optimizations needed for design closure preserve design functionality, and thus avoid the costly and incomplete functional verification regression traditionally used for such purposes. Moreover, SEC can be used to validate sequential synthesis transformations and thereby enable design and verification at a higher level of abstraction. Use of sequential synthesis leads to shorter design cycles and can result in a significant improvement in design quality. In this dissertation, we study the problem of sequential redundancy identification to enable robust sequential equivalence checking solutions. In particular, we focus on the use of a transformation-based verification framework to synergistically leverage various transformations to simplify and decompose large problems that arise during sequential redundancy identification to enable an efficient and highly scalable SEC solution. We make five main contributions in this dissertation. First, we introduce a novel sequential redundancy identification framework that dramatically increases the scalability of SEC. Second, we propose the use of a flexible and synergistic set of transformation and verification algorithms for sequential redundancy identification. This more general approach enables greater speed and scalability and identifies a significantly greater degree of redundancy than previous approaches. Third, we introduce the theory and practice of transformation-based verification in the presence of constraints. Constraints are pervasively used in verification testbenches to specify environmental assumptions to prevent illegal input scenarios. Fourth, we develop a theoretical framework, with a corresponding efficient implementation, for optimal sequential redundancy identification in the presence of constraints. Fifth, we address the scalability of transformation-based verification by proposing two new structural abstraction techniques. We also study the synergies between various transformation algorithms and propose new strategies for using these transformations to enable scalable sequential redundancy identification. / text
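As a loose illustration of the "guess candidates by simulation, then prove and merge" flavour of sequential redundancy identification, the sketch below uses toy stand-in signals and an exhaustive check in place of a real proof engine; it is not the dissertation's transformation-based framework.

```python
# A minimal sketch: candidate equivalence classes are guessed by comparing
# signal signatures under random simulation, and each candidate is then
# handed to a proof step before the signals are merged.  The 'signals'
# here are stand-in functions of the primary inputs.
import random
from collections import defaultdict

signals = {
    "a":      lambda v: v[0] & v[1],
    "b":      lambda v: v[1] & v[0],   # structurally different, functionally equal to "a"
    "c":      lambda v: v[0] | v[1],
    "const0": lambda v: v[0] & 0,      # always 0, redundant with a constant
}

def candidate_classes(trials=256):
    """Group signals whose simulated signatures agree on random stimuli."""
    vectors = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(trials)]
    buckets = defaultdict(list)
    for name, f in signals.items():
        buckets[tuple(f(v) for v in vectors)].append(name)
    return [cls for cls in buckets.values() if len(cls) > 1]

def prove_equivalent(x, y):
    """Placeholder for a real proof engine (e.g. induction over all candidate
    equivalences); for these toy combinational signals an exhaustive check suffices."""
    return all(signals[x](v) == signals[y](v)
               for v in [(0, 0), (0, 1), (1, 0), (1, 1)])

for cls in candidate_classes():
    representative = cls[0]
    proved = [s for s in cls[1:] if prove_equivalent(representative, s)]
    if proved:
        print(proved, "can be merged into", representative)
```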
|
6 |
Secure and Trusted Partial White-box Verification Based on Garbled Circuits. Zhong, Hongsheng. January 2016 (has links)
Verification is a process that checks whether a program G, implemented by a developer, correctly complies with the corresponding requirement specifications. A verifier, whose interests may differ from the developer's, conducts such verification on G. However, since the developer and the verifier may well distrust each other, either of them may exhibit harmful behavior and take advantage of the verification. Generally, the developer hopes to protect the content privacy of the program, while the verifier wants to conduct effective verification to detect possible errors. Therefore, a question inevitably arises: how can an effective and efficient verification be conducted without breaking the security requirements of the two parties?
We treat verification as a process akin to testing, i.e. exercising the design with test cases and checking the results. To make the verification more effective, we move beyond the limitations of traditional testing approaches, such as black-box and white-box testing, and propose “partial white-box verification”.
Taking circuits as the means of description, we regard the program as a circuit graph. By making the structure of the graph public, we make the verification process over such a graph partially white-box. Via garbled circuits, commitment schemes and other techniques, the security requirements of such verification are guaranteed. / Thesis / Master of Science (MSc)
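A very rough sketch of the "structure public, content hidden" idea follows; it is illustrative only, uses plain salted-hash commitments rather than the thesis's garbled-circuit construction, and the gate names and tables are invented.

```python
# A minimal sketch: the circuit's wiring graph is published, while each
# gate's truth table is hidden behind a salted hash commitment that the
# developer can later open so the verifier can check it.
import hashlib, json, os

def commit(table):
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + json.dumps(table).encode()).hexdigest()
    return digest, salt

def check_opening(digest, salt, table):
    return digest == hashlib.sha256(salt + json.dumps(table).encode()).hexdigest()

# Developer side: a toy two-gate circuit.
structure = {"g1": ["in0", "in1"], "g2": ["g1", "in2"]}    # public wiring graph
tables = {"g1": [0, 0, 0, 1], "g2": [0, 1, 1, 1]}          # private behaviour (AND, OR)
commitments = {g: commit(t) for g, t in tables.items()}
public_view = {"structure": structure,
               "gate_commitments": {g: d for g, (d, _) in commitments.items()}}

# Verifier side: reasons over the public graph (the partial white-box part)
# and checks any gate table the developer chooses to reveal.
digest, salt = commitments["g1"]
assert check_opening(public_view["gate_commitments"]["g1"], salt, tables["g1"])
print("wiring is public:", public_view["structure"], "- opening of g1 verified")
```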
|
7 |
Knowledge-based support for software selection in information centers: Design criteria, development issues, and empirical evaluation. Vinze, Ajay Shreekrishna. January 1988 (has links)
An information center (IC) is described as an organization designed to help end users help themselves. ICs are expected to provide several services to end users. The services can be summarized as consultation, distribution, and trouble-shooting. The research is focused on a specific consultation activity: software selection. Providing support for selection and evaluation of software for users constitutes 91.5 percent of a typical IC's daily workload. In the last decade, ICs have proved successful in managing software resources for organizations. The initial success of ICs has increased user expectations and demand for the services offered, but because ICs are considered cost centers in most organizations, there is growing pressure for them to accomplish more with fewer resources. The research hypothesis is that the knowledge and methodologies of IC consultants, concerning software selection, as well as relevant institutional policies, can be represented in a knowledge base. A knowledge-based system, ICE (Information Center Expert), to assist users with software selection has been developed and evaluated in the study reported here. The development of ICE used two main design criteria: maintainability and transportability. Maintainability was defined as the ability to support frequent updating of the software supported by an IC. This is important because new software tools are introduced in the market at a very rapid rate; to stay competitive, an IC must be able to adapt continually to this dynamic environment. Transportability was considered necessary to make ICE usable in many different ICs, each supporting a different set of software. The transportability feature allows different ICs to individualize the system to meet their own site-specific needs. Validation studies were conducted to test the appropriateness of the recommendations made by ICE, using "blind" validation procedures in which scenarios (in case form) were presented to consultants. The cases were selected to represent problems frequently taken to an IC. Two sets of solutions, those offered by consultants and those provided by ICE, were then presented to experts who were asked to judge the appropriateness of each solution to a case without knowing its source. To test the comparative advantages of using ICE or IC consultants to obtain assistance with software selection, a laboratory experiment was conducted. A hypothetical construct called "Consultation Effectiveness" was used, which included measures for "user satisfaction" with the process, as well as measures for the "task basis" and the "recommendation basis" for evaluating a consultation session.
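To give a flavour of how such consultation knowledge might be encoded, the sketch below keeps a hypothetical rule base as plain data, so an individual IC could update it as new tools appear (maintainability) or replace it wholesale with its own catalogue (transportability); the rules and tool names are invented and are not ICE's actual knowledge base.

```python
# A minimal sketch of rule-based software selection: each rule pairs a
# condition over the user's stated needs with a recommendation, and the
# rule base is ordinary data that an information center can edit or swap out.

rules = [
    (lambda n: n["task"] == "statistics" and n["expertise"] == "novice", "StatPackage-Lite"),
    (lambda n: n["task"] == "statistics",                                "StatPackage-Pro"),
    (lambda n: n["task"] == "word processing",                           "OfficeWriter"),
]

def recommend(needs):
    """Return the first recommendation whose condition matches the needs."""
    for condition, tool in rules:
        if condition(needs):
            return tool
    return "refer to a human consultant"

print(recommend({"task": "statistics", "expertise": "novice"}))   # StatPackage-Lite
print(recommend({"task": "graphics", "expertise": "expert"}))     # refer to a human consultant
```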
|
8 |
Interactive program verification using virtual programs. Topor, Rodney W. January 1975 (has links)
This thesis is concerned with ways of proving the correctness of computer programs. The first part of the thesis presents a new method for doing this. The method, called continuation induction, is based on the ideas of symbolic execution, the description of a given program by a virtual program, and the demonstration that these two programs are equivalent whenever the given program terminates. The main advantage of continuation induction over other methods is that it enables programs using a wide variety of programming constructs, such as recursion, iteration, non-determinism, procedures with side-effects and jumps out of blocks, to be handled in a natural and uniform way. In the second part of the thesis, a program verifier that uses both this method and Floyd's inductive assertion method is described. The significance of this verifier is that it is designed to be extensible, and to this end the user can declare new functions and predicates to be used in giving a natural description of the program's intention. Rules describing these new functions can then be used when verifying the program. To actually prove the verification conditions, the system employs automatic simplification, a relatively clever matcher, a simple natural deduction system and, most importantly, the user's advice. A large number of commands are provided for the user in guiding the system to a proof of the program's correctness. The system has been used to verify various programs, including two sorting programs and a program to invert a permutation 'in place'; the proofs of the sorting programs included a proof that the final array was a permutation of the original one. Finally, some observations and suggestions are made concerning the continued development of such interactive verification systems.
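As a rough illustration of the given-program/virtual-program relationship, the toy example below pairs an imperative sorting routine with a specification-level program; the exhaustive check over a small domain merely stands in for the equivalence proof the thesis constructs by continuation induction.

```python
# A minimal sketch: the given program is an imperative sorting routine,
# the virtual program states the intended result directly, and the two
# are checked to agree whenever the given program terminates.
from itertools import product

def given_program(xs):
    """The implementation: an iterative, in-place selection sort."""
    a = list(xs)
    for i in range(len(a)):
        m = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[m] = a[m], a[i]
    return a

def virtual_program(xs):
    """The virtual program: a direct description of the intended behaviour."""
    return sorted(xs)

# Stand-in for the proof: check equivalence on every input over a small domain.
assert all(given_program(xs) == virtual_program(xs)
           for n in range(5) for xs in product(range(3), repeat=n))
print("the given program agrees with its virtual program on the whole test domain")
```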
|
9 |
Formal symbolic verification using heuristic search and abstraction techniques. Qian, Kairong, Computer Science & Engineering, Faculty of Engineering, UNSW. January 2006 (has links)
Computing devices are pervading our everyday life and imposing challenges for designers who have the responsibility of producing reliable hardware and software systems. As systems grow in size and complexity, it becomes increasingly difficult to verify whether a design works as intended. Conventional verification methods, such as simulation and testing, exercise only parts of the system and, from these parts, draw conclusions about the correctness of the total design. For complex designs, the parts of the system that can be verified are relatively small. Formal verification aims to overcome this problem. Instead of exercising the system, formal verification builds mathematical models of designs and proves whether properties hold in these models. In doing so, it at least aims to cover the complete design. Model checking is a formal verification method that automatically verifies a model of a design, or generates diagnostic information if the model cannot be verified. It is because of this usability and level of automation that model checking has gained a high degree of success in verifying circuit designs. The major disadvantage of model checking is its poor scalability. This is due to its algorithmic nature: namely, every state of the model needs to be enumerated. In practice, properties of interest may not need the exhaustive enumeration of the model state space. Many properties can be verified (or falsified) by examining a small number of states. In such cases, exhaustive algorithms can be replaced with search algorithms that are directed by heuristics. Methods based on heuristics generally scale well. This thesis investigates non-exhaustive model checking algorithms and focuses on error detection in system verification. The approach is based on a novel integration of symbolic model checking, heuristic search and abstraction techniques to produce a framework that we call abstraction-directed model checking. There are three main components in this framework. First, binary decision diagrams (BDDs) and heuristic search are combined to develop a symbolic heuristic search algorithm. This algorithm is used to detect errors. Second, abstraction techniques are applied in an iterative way. In the initial phase, the model is abstracted, and this model is verified using exhaustive algorithms. If a definitive verification result cannot be obtained, the same abstraction is re-used to generate a search heuristic. The heuristic in turn is used to direct a search algorithm that searches for error states in the concrete model. Third, a model transformation mechanism converts an arbitrary branching-time property to a reachability property. Essentially, this component allows this framework to be applied to a more general class of temporal property. By amalgamating these three components, the framework offers a new verification methodology that speeds up error detection in formal verification. The current implementation of this framework indicates that it can outperform existing standard techniques both in run-time and memory consumption, and scales much better than conventional model checking.
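The sketch below gives a toy, dependency-free illustration of the abstraction-directed idea; the model, abstraction, and error condition are invented, and the thesis's engine is symbolic and BDD-based rather than explicit-state. Distances to error computed in a small abstract model serve as a heuristic that directs best-first search for a concrete error state.

```python
# A minimal sketch of abstraction-directed error search on a toy model.
import heapq
from collections import deque

# Concrete model: states are (x, y) counters, each of which may be
# incremented up to 4; the error is reaching x == y == 3.
def successors(state):
    x, y = state
    nexts = []
    if x < 4:
        nexts.append((x + 1, y))
    if y < 4:
        nexts.append((x, y + 1))
    return nexts

def is_error(state):
    return state == (3, 3)

# Abstraction: keep only x.  Abstract transitions either keep x or bump it.
def alpha(state):
    return state[0]

abstract_succ = {a: ({a, a + 1} if a < 4 else {a}) for a in range(5)}
abstract_error = {3}

# Backwards BFS in the abstract model gives each abstract state a lower
# bound on the number of steps needed to reach an abstract error state.
dist = {a: 0 for a in abstract_error}
frontier = deque(abstract_error)
while frontier:
    a = frontier.popleft()
    for b, succ in abstract_succ.items():
        if a in succ and b not in dist:
            dist[b] = dist[a] + 1
            frontier.append(b)

def h(state):
    return dist.get(alpha(state), float("inf"))    # abstraction-derived heuristic

# Greedy best-first search in the concrete model, directed by h.
start = (0, 0)
queue, seen = [(h(start), start)], {start}
while queue:
    _, state = heapq.heappop(queue)
    if is_error(state):
        print("error state found:", state, "after exploring", len(seen), "states")
        break
    for nxt in successors(state):
        if nxt not in seen:
            seen.add(nxt)
            heapq.heappush(queue, (h(nxt), nxt))
```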
|
10 |
A flexible framework for leveraging verification tools to enhance the verification technologies available for policy enforcement. Larkin, James. Unknown Date (has links)
Program verification is vital as more and more users are creating, downloading and executing foreign computer programs. Software verification tools provide a means for determining if a program adheres to a user’s security requirements, or security policy. Many verification tools exist for checking different types of policies on different types of programs. Currently, however, there is no verification tool capable of determining if all types of programs satisfy all types of policies. This thesis describes a framework for supporting multiple verification tools to determine program satisfaction. A user’s security requirements are represented at multiple levels of abstraction as Intermediate Execution Environments. Using a sequence of configurations, a user’s security requirements are transformed from the abstract level to the tool level, possibly for multiple verification tools. Using a number of case studies, the validity of the framework is shown.
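A small, hypothetical sketch of the framework's overall shape follows; the policy encodings, configuration steps, and toy checker are all invented for illustration. A policy is pushed through a sequence of configurations, one intermediate representation to the next, until it reaches a form that some registered verification tool accepts.

```python
# A minimal sketch: security requirements flow through configurations that
# lower them from an abstract statement towards tool-level input, and a
# registry maps the final representation to a verification tool.

def to_automaton(policy):
    """Abstract policy -> an automaton-style intermediate representation (stand-in)."""
    return {"kind": "automaton", "forbidden_calls": set(policy["forbid"])}

def to_tool_input(iee):
    """Automaton-style representation -> the input format of a concrete checker (stand-in)."""
    return {"kind": "call-list", "deny": sorted(iee["forbidden_calls"])}

configurations = [to_automaton, to_tool_input]

def simple_call_checker(program, spec):
    """A toy 'verification tool': the program satisfies the policy if it
    never calls anything on the deny list."""
    return all(call not in spec["deny"] for call in program["calls"])

tools = {"call-list": simple_call_checker}

def verify(program, abstract_policy):
    spec = abstract_policy
    for configure in configurations:           # refine the policy level by level
        spec = configure(spec)
    return tools[spec["kind"]](program, spec)  # dispatch to a tool for that level

program = {"calls": ["read_file", "open_socket"]}
print(verify(program, {"forbid": ["format_disk"]}))   # True: policy satisfied
print(verify(program, {"forbid": ["open_socket"]}))   # False: violation detected
```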
|