31 |
Clausal resolution for branching-time temporal logic. Bolotov, Alexander. January 2000.
No description available.
|
32 |
Early Verification of the Power Delivery Network in Integrated Circuits. Abdul Ghani, Nahi. 05 January 2012.
The verification of power grids in modern integrated circuits must start early in the design process, when adjustments can be most easily incorporated. We adopt an existing early verification framework. The framework is vectorless, i.e., it does not require input test patterns and does not rely on simulating the power grid subject to these patterns. In this framework, circuit uncertainty is captured via a set of current constraints expressing what may be known or specified about circuit behavior. Grid verification becomes a question of finding the worst-case grid behavior, which, in turn, entails the solution of linear programs (LPs) whose size and number are proportional to the size of the grid. This thesis builds on this systematic framework for dealing with circuit uncertainty, with the aim of improving its efficiency and expanding its capabilities.
One contribution introduces an efficient method, based on a sparse approximate inverse technique, to greatly reduce the size of the required linear programs while ensuring a user-specified over-estimation margin on the exact solution. The application of the method is demonstrated under both R and RC grid models. Another contribution first extends grid verification under RC grid models to also check for the worst-case branch currents; this would require as many LPs as there are branches. It then shows how to adapt the approximate inverse technique to speed up the branch current verification process. A third contribution proposes a novel approach to reduce the number of LPs in the voltage drop and branch current verification problems. This is achieved by examining dominance relations among node voltage drops and among branch currents, which allows us to replace a group of LPs by one conservative and tight LP. A fourth contribution proposes an efficient verification technique under RLC models. The proposed approach provides tight conservative bounds on the maximum and minimum worst-case voltage drops at every node on the grid.
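As a toy illustration of this formulation (the grid values, current budgets, and use of an exact dense inverse below are illustrative assumptions; the thesis's sparse approximate inverse exists precisely to avoid that inverse at scale), each node's worst-case voltage drop is the optimum of one small LP over the constrained current space:

```python
import numpy as np
from scipy.optimize import linprog

# Toy 3-node resistive grid: G v = i, where v holds the node voltage drops.
G = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  2.0]])
A = np.linalg.inv(G)   # exact inverse; the thesis substitutes a sparse
                       # approximate inverse with a bounded over-estimation

i_max = np.array([1.0, 0.5, 0.8])  # hypothetical local current constraints
global_budget = 1.5                # hypothetical global current constraint

worst_drops = []
for k in range(3):                 # one LP per node, as the abstract notes
    # Maximize row k of A times i  <=>  minimize its negation.
    res = linprog(c=-A[k],
                  A_ub=np.ones((1, 3)), b_ub=[global_budget],
                  bounds=list(zip(np.zeros(3), i_max)))
    worst_drops.append(-res.fun)

print(worst_drops)
```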
|
34 |
A methodology for modeling the verification, validation, and testing process for launch vehicles. Sudol, Alicia. 07 January 2016.
Completing the development process and getting to first flight have become difficult hurdles for launch vehicles. Program cancellations in the last 30 years were largely due to cost overruns and schedule slips during the design, development, testing and evaluation (DDT&E) process. Unplanned rework cycles that occur during the verification, validation, and testing (VVT) phases of development contribute significantly to these overruns, accounting for up to 75% of development cost. Current industry-standard VVT planning is largely subjective, with no method for evaluating the impact of rework. The goal of this research is to formulate and implement a method that quantitatively captures the impact of unplanned rework by assessing the reliability, cost, schedule, and risk of VVT activities. First, the fidelity level of each test is defined and the probability of rework between activities is modeled using a dependency structure matrix. Then, a discrete event simulation projects the occurrence of rework cycles and evaluates the impact on reliability, cost, and schedule for a set of VVT activities. Finally, a quadratic risk impact function is used to calculate the risk level of the VVT strategy based on the resulting output distributions.
This method is applied to alternative VVT strategies for the Space Shuttle Main Engine to demonstrate how the impact of rework can be mitigated, using the actual test history as a baseline. Results indicate rework cost to be the primary driver in overall project risk, and yield interesting observations regarding the trade-off between the upfront cost of testing and the associated cost of rework. Ultimately, this final application problem demonstrates the merits of this methodology in evaluating VVT strategies and providing a risk-informed decision making framework for the verification, validation, and testing process of launch vehicle systems.
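As a hedged sketch of this flavour of analysis (the two activities, rework probabilities, budget, and weight below are invented for illustration, not taken from the thesis), rework cycles can be sampled from a DSM-style probability table and the resulting cost distribution scored with a quadratic risk impact function:

```python
import random

# Hypothetical DSM-style table: rework_prob[i][j] is the probability that
# performing activity j triggers rework of activity i.
rework_prob = [[0.0, 0.3],
               [0.1, 0.0]]
base_cost = [10.0, 25.0]   # planned cost of each VVT activity

def simulate_vvt(trials=10_000):
    """Monte Carlo stand-in for the discrete event simulation."""
    costs = []
    for _ in range(trials):
        total = 0.0
        for j, cost_j in enumerate(base_cost):
            total += cost_j
            for i in range(len(base_cost)):
                if random.random() < rework_prob[i][j]:
                    total += base_cost[i]   # one unplanned rework cycle
        costs.append(total)
    return costs

def quadratic_risk(cost, budget, weight=1.0):
    # Quadratic impact: overruns are penalised super-linearly.
    overrun = max(0.0, cost - budget)
    return weight * overrun ** 2

costs = simulate_vvt()
expected_risk = sum(quadratic_risk(c, budget=40.0) for c in costs) / len(costs)
print(f"expected risk of this VVT strategy: {expected_risk:.2f}")
```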
|
35 |
On verification and controller synthesis for probabilistic systems at runtime. Ujma, Mateusz. January 2015.
Probabilistic model checking is a technique employed for verifying the correctness of computer systems that exhibit probabilistic behaviour. A related technique is controller synthesis, which generates controllers that guarantee the correct behaviour of the system. Not all controllers can be generated offline, as the relevant information may only be available when the system is running; for example, the reliability of services may vary over time.

In this thesis, we propose a framework based on controller synthesis for stochastic games at runtime. We model systems using stochastic two-player games parameterised with data obtained from monitoring of the running system. One player represents the controllable actions of the system, while the other player represents the hostile, uncontrollable environment. The goal is to synthesise, for a given property specification, a controller for the first player that wins against all possible actions of the environment player. Initially, controller synthesis is invoked for the parameterised model and the resulting controller is applied to the running system. The process is repeated at runtime when changes in the monitored parameters are detected, whereby a new controller is generated and applied.

To ensure the practicality of the framework, we focus on three important aspects: performance, robustness, and scalability. We propose an incremental model construction technique to improve the performance of runtime synthesis. In many cases, changes in monitored parameters are small, and models built for consecutive parameter values are similar. We exploit this and incrementally build a model for the updated parameters by reusing the previous model, effectively saving time.

To address robustness, we develop a technique called permissive controller synthesis. Permissive controllers generalise classical controllers by allowing the system to choose from a set of actions instead of just one. By using a permissive controller, a computer system can quickly adapt to a situation where an action becomes temporarily unavailable while still satisfying the property of interest.

We tackle the scalability of controller synthesis with a learning-based approach. We develop a technique based on real-time dynamic programming which, by generating random trajectories through a model, synthesises an approximately optimal controller. We guide the generation using heuristics and can guarantee that, even in cases where we only explore a small part of the model, we still obtain a correct controller.

We develop a full implementation of these techniques and evaluate it on a large set of case studies from the PRISM benchmark suite, demonstrating significant performance gains in most cases. We also illustrate the working of the framework on a new case study of an open-source stock monitoring application.
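A minimal sketch of the core synthesis step, on an invented toy reachability game (the states, actions, and probabilities below are illustrative assumptions, not the thesis's models): value iteration computes the probability each state's controller can guarantee against the worst environment response, and the maximising actions constitute a controller.

```python
# States 0..3; state 3 is the target. transitions[s][a] lists the
# environment's possible responses to controller action a, each a
# probability distribution over successor states.
transitions = {
    0: {"a": [[(0.9, 1), (0.1, 0)], [(0.5, 1), (0.5, 2)]],
        "b": [[(1.0, 2)]]},
    1: {"a": [[(0.8, 3), (0.2, 0)]]},
    2: {"a": [[(0.4, 3), (0.6, 2)]]},
}

V = {s: 0.0 for s in range(4)}
V[3] = 1.0   # the target is reached with probability 1 from itself

for _ in range(100):   # value iteration to approximate convergence
    for s, acts in transitions.items():
        V[s] = max(                                 # controller maximises ...
            min(sum(p * V[t] for p, t in dist)      # ... against a hostile
                for dist in env_choices)            # environment's choice
            for env_choices in acts.values())

controller = {
    s: max(acts, key=lambda a: min(sum(p * V[t] for p, t in dist)
                                   for dist in acts[a]))
    for s, acts in transitions.items()
}
print(V, controller)
```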
|
36 |
Mechanization of program construction in Martin-Löf's theory of types. Ireland, Andrew. January 1989.
No description available.
|
37 |
Concurrent verification for sequential programs. Wickerson, John Peter. January 2013.
This dissertation makes two contributions to the field of software verification. The first explains how verification techniques originally developed for concurrency can be usefully applied to sequential programs. The second describes how sequential programs can be verified using diagrams that have a parallel nature.

The first contribution involves a new treatment of stability in verification methods based on rely-guarantee. When an assertion made in one thread of a concurrent system cannot be invalidated by the actions of other threads, that assertion is said to be 'stable'. Stability is normally enforced through side-conditions on rely-guarantee proof rules. This dissertation proposes instead to encode stability information into the syntactic form of the assertion. This approach, which we call explicit stabilisation, brings several benefits. First, we empower rely-guarantee with the ability to reason about library code for the first time. Second, when the rely-guarantee method is redeployed in a sequential setting, explicit stabilisation allows more details of a module's implementation to be hidden when verifying clients. Third, explicit stabilisation brings a more nuanced understanding of the important issue of stability in concurrent and sequential verification; such an understanding grows ever more important as verification techniques grow ever more complex.

The second contribution is a new method of presenting program proofs conducted in separation logic. Building on work by Jules Bean, the ribbon proof is a diagrammatic alternative to the standard 'proof outline'. By emphasising the structure of a proof, ribbon proofs are intelligible and hence useful pedagogically. Because they contain less redundancy than proof outlines, and allow each proof step to be checked locally, they are highly scalable; this we illustrate with a ribbon proof of the Version 7 Unix memory manager. Where proof outlines are cumbersome to modify, ribbon proofs can be visually manoeuvred to yield proofs of variant programs. We describe the ribbon proof system, prove its soundness and completeness, and outline a prototype tool for mechanically checking the diagrams it produces.
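To make the idea of explicit stabilisation concrete, here is a small finite-state sketch (the set-based encoding, relation, and function names are illustrative assumptions, not the dissertation's formalism): given a rely relation R, an assertion P can be strengthened to the states the environment can never drive out of P, or weakened to P's closure under R.

```python
def post(states, R):
    """One-step image of a state set under the rely relation R."""
    return {t for (s, t) in R if s in states}

def floor_stab(P, R):
    """Weakest stable assertion stronger than P: drop every state from
    which some rely step escapes the remaining set."""
    Q = set(P)
    while True:
        bad = {s for s in Q if any(a == s and b not in Q for (a, b) in R)}
        if not bad:
            return Q
        Q -= bad

def ceil_stab(P, R):
    """Strongest stable assertion weaker than P: close P under R."""
    Q = set(P)
    while True:
        new = post(Q, R) - Q
        if not new:
            return Q
        Q |= new

P = {1, 2}            # assertion: the state is 1 or 2
R = {(2, 3), (3, 3)}  # the environment may move state 2 to state 3
print(floor_stab(P, R))  # {1}: only state 1 cannot be invalidated
print(ceil_stab(P, R))   # {1, 2, 3}: P together with what R can reach
```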
|
38 |
Timed Refinement for Verification of Real-Time Object Code Programs. Dubasi, Mohana Asha Latha. January 2018.
Real-time systems such as medical devices, surgical robots, and microprocessors are safety-critical applications that have hard timing constraints. The correctness of real-time systems is important, as failure may result in severe consequences such as loss of money, time, or human life. These real-time systems contain software that controls their behavior. Typically, this software has source code which is converted to object code and then executed on safety-critical embedded devices. Therefore, it is important to ensure that both source code and object code are error-free. When dealing with safety-critical systems, formal verification techniques have laid the foundation for ensuring software correctness. Refinement-based techniques in formal verification can be used for the verification of real-time interrupt-driven object code.

This dissertation presents an automated tool that verifies the functional and timing correctness of real-time interrupt-driven object code programs. The tool has been developed in three stages. In the first stage, a novel timed refinement procedure that checks timing properties was developed and applied to six case studies. The required model and an abstraction technique were generated manually. The results indicate that the proposed abstraction technique reduces the size of the implementation model by at least four orders of magnitude. In the second stage, the proposed abstraction technique was automated and applied to thirty different case studies. The results indicate that the automated abstraction technique can easily reduce the model size, which in turn significantly reduces the verification time. In the final stage, two new automated algorithms were proposed to check functional properties through safety and liveness. These algorithms were applied to the same thirty case studies. The results indicate that functional verification can be performed in less than a second on the reduced model.

The benefits of automating the verification process for real-time interrupt-driven object code include: 1) the overall size of the implementation model is reduced significantly; 2) verification completes within a reasonable time; 3) the process can be applied multiple times during system development. / Several parts of this dissertation were funded by a grant from the United States Government and the generous support of the American people through the United States Department of State and the United States Agency for International Development (USAID) under the Pakistan – U.S. Science & Technology Cooperation Program. The contents do not necessarily reflect the views of the United States Government.
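As a loose, illustrative sketch of what a timed refinement check asserts (the trace format, window representation, and function below are assumptions for exposition, not the dissertation's procedure): every event the object code emits must be the event the specification expects, inside its admissible time window.

```python
# Specification: (event, earliest cycle, latest cycle), hypothetical values.
spec = [("init",  0,  5),
        ("irq",   5, 20),
        ("ack",  20, 40)]

def refines(trace, spec):
    """trace: list of (event, timestamp) pairs observed from the
    implementation; returns True iff it matches spec event-for-event
    within the allowed timing windows."""
    if len(trace) != len(spec):
        return False
    for (ev, t), (spec_ev, lo, hi) in zip(trace, spec):
        if ev != spec_ev or not (lo <= t <= hi):
            return False
    return True

print(refines([("init", 2), ("irq", 12), ("ack", 33)], spec))  # True
print(refines([("init", 2), ("irq", 25), ("ack", 33)], spec))  # False: late irq
```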
|
39 |
PARALLELIZED ROBUSTNESS COMPUTATION FOR CYBER PHYSICAL SYSTEMS VERIFICATION. Cralley, Joseph. 01 May 2020.
Failures in cyber physical systems can be costly in terms of money and lives. The Mars Climate Orbiter alone had a mission cost of 327.6 million USD, which was almost completely wasted due to an uncaught design flaw. This shows the importance of being able to define formal requirements, as well as being able to test the design against these requirements. One way to define requirements is in Metric Temporal Logic (MTL), which allows for constraints that also have a time component. MTL can also have a distance metric defined that allows for the calculation of how close the MTL constraint is to being falsified. This is termed robustness.

Being able to calculate MTL robustness quickly can help reduce development time and costs for a cyber physical system. In this thesis, improvements to the current method of computing MTL robustness are proposed. These improvements lower the time complexity, allow parallel processing to be used, and lower the memory footprint of the MTL robustness calculation. These improvements will hopefully increase the likelihood of MTL robustness being used in systems that were previously inaccessible due to time constraints, data resolution, or real-time systems that need results quickly. These improvements will also open the possibility of using MTL in systems that operate for a large amount of time and produce a large amount of signal data.
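To give a flavour of the kind of speed-up involved (the monotonic-deque sliding-window minimum below is a standard technique used here for illustration, not necessarily the thesis's exact algorithm), the robustness of a bounded 'always (x < c)' over a sampled signal can be computed in O(n) total rather than rescanning every window:

```python
from collections import deque

def robustness_always_lt(signal, c, window):
    """Robustness of 'always over the next `window` samples, x < c' at
    each position, via a monotonic deque of candidate-minimum indices."""
    rho = [c - x for x in signal]   # pointwise robustness of (x < c)
    out, dq = [], deque()
    for i, r in enumerate(rho):
        while dq and rho[dq[-1]] >= r:   # drop dominated candidates
            dq.pop()
        dq.append(i)
        if dq[0] <= i - window:          # expire indices outside the window
            dq.popleft()
        if i >= window - 1:
            out.append(rho[dq[0]])       # window minimum = robustness
    return out

print(robustness_always_lt([1, 2, 5, 3, 1, 4], c=4, window=3))
# [-1, -1, -1, 0]: negative where the constraint is falsified
```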
|
40 |
Secure and Trusted Verification. Cai, Yixian. 06 1900.
In our setting, verification is a process that checks whether a device's program (implementation) has been produced according to its corresponding requirements specification. Ideally, a client builds the requirements specification of a program and asks a developer to produce the actual program according to the requirements specification it provides. After the program is built, a verifier is asked to verify the program. However, verification methods nowadays usually require good knowledge of the program to be verified, and thus sensitive information about the program itself can easily be leaked during the process.
In this thesis, we introduce the notion of secure and trusted verification, which allows the developer to hide non-disclosed information about the program from the verifier during the verification process, and a third party to check the correctness of the verification result. Moreover, we formally study the mutual trust between the verifier and the developer and define the above notion in the context of both an honest and a malicious developer.
We implement the notion above, in the settings of both an honest and a malicious developer, using cryptographic primitives and tabular expressions. Our construction allows the developer to hide the modules of a program and the verifier to perform somewhat white-box verification. The security properties of the implementation are also formally discussed, and strong security results are proved. / Thesis / Master of Science (MSc)
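A heavily simplified sketch of the hiding idea (hash-based commitments stand in here for the thesis's actual cryptographic construction; the module names and contents are invented): the developer commits to every module, opens only the disclosed ones, and anyone can re-check an opening against the published commitment.

```python
import hashlib, os

def commit(module_src: bytes):
    """Commit to a module's source; publish the digest, keep the nonce."""
    nonce = os.urandom(16)   # randomness makes the commitment hiding
    digest = hashlib.sha256(nonce + module_src).hexdigest()
    return digest, nonce

def check_opening(digest, nonce, module_src: bytes) -> bool:
    """Third-party check that a disclosed module matches its commitment."""
    return hashlib.sha256(nonce + module_src).hexdigest() == digest

modules = {"parser": b"def parse(x): ...",
           "secret_core": b"...proprietary logic..."}
commitments = {name: commit(src) for name, src in modules.items()}

# The developer discloses only 'parser'; 'secret_core' stays hidden.
digest, nonce = commitments["parser"]
print(check_opening(digest, nonce, modules["parser"]))   # True
```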
|