61 |
Formally certified satisfiability solving. Oe, Duck Ki, 01 July 2012
Satisfiability (SAT) and satisfiability modulo theories (SMT) solvers are high-performance automated propositional and first-order theorem provers, used as underlying tools in many formal verification and artificial intelligence systems. Theoretical and engineering advances in solver technology have improved the performance of modern solvers; however, the increased complexity of those solvers calls for formal verification of the tools themselves. This thesis discusses two methods to formally certify SAT/SMT solvers. The first method is to generate proofs from solvers and certify those proofs. Because new theories are constantly added to SMT solvers, a flexible framework for safely adding new inference rules is necessary. The proposal is to use a meta-language called LFSC, which is based on the Edinburgh Logical Framework. SAT/SMT logics have been encoded in LFSC, and the encoding can be easily and modularly extended for new logics. It is shown that an optimized LFSC checker can certify SMT proofs efficiently. The second method is to implement a SAT solver in a verified programming language and verify the code statically. Guru is a pure functional programming language with support for dependent types and theorem proving; Guru also allows for efficient code generation by means of resource typing. A modern SAT solver, called versat, has been implemented and verified correct in Guru. The performance of versat is shown to be comparable with that of current proof-checking technology.
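A minimal sketch of the proof-certification idea (propositional resolution only; this is not the LFSC encoding, and the clause/step format below is invented for illustration): the solver's untrusted UNSAT answer is accepted only if a small, trusted checker can replay the proof down to the empty clause.

```python
# Sketch: certifying a SAT solver's output by replaying its resolution proof
# with a small, trusted checker, instead of trusting the solver itself.
# Clauses are frozensets of integer literals (negative = negated variable).

def resolve(c1, c2, pivot):
    """Resolve c1 and c2 on `pivot`; returns None if the rule does not apply."""
    if pivot not in c1 or -pivot not in c2:
        return None
    return (c1 - {pivot}) | (c2 - {-pivot})

def check_refutation(clauses, proof):
    """`proof` is a list of (index1, index2, pivot) steps over the growing
    clause list. The proof certifies UNSAT iff the last step yields {}."""
    derived = [frozenset(c) for c in clauses]
    for i, j, pivot in proof:
        resolvent = resolve(derived[i], derived[j], pivot)
        if resolvent is None:
            return False          # ill-formed step: reject the proof
        derived.append(frozenset(resolvent))
    return len(derived[-1]) == 0  # empty clause derived => UNSAT certified

# Example: {x}, {-x, y}, {-y} is unsatisfiable.
cnf = [{1}, {-1, 2}, {-2}]
proof = [(0, 1, 1),   # derive {y}
         (3, 2, 2)]   # derive {} from {y} and {-y}
assert check_refutation(cnf, proof)
```

The point of the design is that only the checker needs to be trusted (or, in the thesis, formally specified in LFSC), no matter how complex the solver that produced the proof is.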
|
62 |
Sound Modular Extraction of Control Flow Graphs from Java Bytecode. de Carvalho Gomes, Pedro, January 2012
Control flow graphs (CFGs) are abstract program models that preserve control flow information, and they have been widely used in static analysis over the past decades. Unfortunately, previous work on CFG construction for modern languages such as Java has either neglected advanced features that influence the control flow, or has not provided a correctness argument. This is tolerable for some program analyses, but not for formal methods, where the soundness of CFGs is a mandatory condition for the verification of safety-critical properties. Moreover, when developing open systems, i.e., systems in which at least one component is missing, one may want to extract CFGs to verify the available components. Soundness is even harder to achieve in this scenario, because of the unknown inter-dependencies involving missing components. In this work we present two variants of a CFG extraction algorithm from Java bytecode that consider precise exceptional flow and are sound w.r.t. the JVM behavior. The first algorithm extracts CFGs from fully-provided (closed) programs only. It proceeds in two phases. Initially the Java bytecode is translated into a stack-less intermediate representation named BIR, which provides an explicit representation of exceptions and is more compact than the original bytecode. Next, we define the transformation from BIR to CFGs, which, among other features, considers the propagation of uncaught exceptions within method calls. We then establish its correctness: the behavior of the extracted CFGs is shown to be a sound over-approximation of the behavior of the original programs. Thus, temporal safety properties that hold for the CFGs also hold for the program. We prove this by suitably combining the properties of the two transformations with those of a previous idealized CFG extraction algorithm, whose correctness has been proven directly. The second variant of the algorithm is defined for open systems. We generalize the extraction algorithm for closed systems to a modular set-up, and resolve inter-dependencies involving missing components by using user-provided interfaces. We establish its correctness by defining a refinement relation between open systems, which constrains the instantiation of missing components. We prove that if the relation holds, then the CFGs extracted from the components of the original open system are sound over-approximations of the CFGs for the same components in the refined system. Thus, temporal safety properties that hold for an open system also hold for closed systems that refine it. We have implemented both algorithms as the ConFlEx tool. It uses Sawja, an external library for the static analysis of Java bytecode, to transform bytecode into BIR and to resolve virtual method calls. We have extended Sawja to support open systems and improved its exception type analysis. Experimental results show that the algorithm for closed systems generates more precise CFGs than the modular algorithm, which is expected given the heavy over-approximations the latter has to perform to be sound. Both algorithms are linear in the number of bytecode instructions, so ConFlEx is efficient for extracting CFGs from either open or closed Java bytecode programs.
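As a rough sketch of the kind of structure such an extraction produces (BIR, Sawja, and ConFlEx are not modeled; the instruction and handler encodings below are invented): each instruction that may throw receives, besides its normal successor edges, an exceptional edge to the enclosing handler, or to an exceptional-exit node when the exception would propagate out of the method.

```python
# Sketch: building a per-method CFG with explicit exceptional flow.
# Instructions carry a flag saying whether they may throw; handlers map a
# protected label range to a handler label, mimicking an exception table.

def extract_cfg(instrs, handlers):
    """instrs: list of (label, may_throw, normal_successor_labels)
       handlers: list of (start, end, handler_label) protecting [start, end)
       Returns a set of edges (src, dst, tag) with tag in {'norm', 'exc'}."""
    edges = set()
    for label, may_throw, succs in instrs:
        for s in succs:
            edges.add((label, s, 'norm'))
        if may_throw:
            # Find an enclosing handler; otherwise the exception escapes the
            # method, which we model with a distinguished exceptional exit.
            target = next((h for (lo, hi, h) in handlers if lo <= label < hi),
                          'exc_exit')
            edges.add((label, target, 'exc'))
    return edges

# Toy method: instruction 1 may throw and is protected by a handler at 10.
instrs = [(0, False, [1]), (1, True, [2]), (2, False, []), (10, False, [])]
handlers = [(1, 2, 10)]
print(extract_cfg(instrs, handlers))
```

Over-approximating exceptional edges this way is what keeps the extracted graph a sound abstraction: the CFG may allow behaviors the program never exhibits, but never the reverse.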
|
63 |
Efficient Neural Network Verification Using Branch and Bound. Wang, Shiqi, January 2022
Neural networks have demonstrated great success in modern machine learning systems. However, they remain susceptible to incorrect corner-case behaviors, often behaving unpredictably and producing surprisingly wrong results. It is therefore desirable to formally guarantee their trustworthiness with respect to certain robustness properties when they are applied to safety- or security-sensitive systems such as autonomous vehicles and aircraft. Unfortunately, the task is extremely challenging due to the complexity of neural networks, and traditional formal methods have not been efficient enough to verify practical properties. Recently, the Branch and Bound (BaB) framework has been extended to neural network verification and has shown great success in accelerating verification.
This dissertation focuses on state-of-the-art neural network verifiers using BaB. We first introduce two efficient neural network verifiers, ReluVal and Neurify, which use basic BaB approaches involving two main steps: (1) they recursively split the original verification problem into easier, independent subproblems by splitting input or hidden neurons; (2) for each subproblem, we propose an efficient and tight bound propagation method called symbolic interval analysis, which produces sound estimated output bounds using convex linear relaxations. Both ReluVal and Neurify are three orders of magnitude faster than previous state-of-the-art formal analysis systems on standard verification benchmarks.
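A minimal sketch of the bound-propagation idea, using plain interval arithmetic rather than the symbolic interval analysis with linear relaxations that ReluVal and Neurify actually implement (the toy network below is invented): an input box is pushed through affine layers and ReLUs so that the resulting output interval is a sound over-approximation on which a property can be checked.

```python
import numpy as np

# Sketch: sound interval bound propagation through a small ReLU network.
# For an affine layer y = Wx + b, split W into its positive and negative
# parts so each output bound uses the correct end of the input interval.

def affine_bounds(W, b, lo, hi):
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def relu_bounds(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

def propagate(layers, lo, hi):
    """layers: list of (W, b); ReLU is applied between layers, not after the last."""
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(W, b, lo, hi)
        if i < len(layers) - 1:
            lo, hi = relu_bounds(lo, hi)
    return lo, hi

# Toy 2-2-1 network and the input box [0,1] x [0,1].
layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)),
          (np.array([[1.0, 1.0]]), np.array([0.0]))]
lo, hi = propagate(layers, np.zeros(2), np.ones(2))
print(lo, hi)   # a property like "output > c" can be checked against [lo, hi]
```

BaB then refines this: whenever the interval is too loose to decide the property, the input box (or an unstable neuron) is split and each half is bounded again.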
However, basic BaB approaches like Neurify have to encode each subproblem as a Linear Programming (LP) problem and solve it with expensive LP solvers, significantly limiting the overall efficiency. This is because each BaB step introduces neuron split constraints (e.g., a ReLU neuron being larger or smaller than 0), which are hard to handle with existing efficient bound propagation methods. We propose novel bound propagation methods, 𝛼-CROWN and its improved variant 𝛽-CROWN, which solve the verification problem by optimizing Lagrangian multipliers 𝛼 and 𝛽 with gradient ascent, without calling any expensive LP solvers. They build on previous work CROWN, a generalized efficient bound propagation method using linear relaxation. BaB verification using 𝛼-CROWN and 𝛽-CROWN not only provides tighter output estimations than most bound propagation methods but also fully leverages GPU acceleration with massive parallelization.
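The following is a heavily simplified sketch of the underlying optimization idea, not the actual 𝛽-CROWN derivation (the toy objective, box, and split constraint are invented): a split constraint is moved into the objective with a multiplier 𝛽 ≥ 0, every value of 𝛽 yields a sound lower bound by weak duality, and the bound is tightened by projected gradient ascent instead of calling an LP solver.

```python
import numpy as np

# Sketch of the beta-CROWN idea on a toy problem: lower-bound
#   min c.x   s.t.  x in [l, u]  and a split constraint  a.x <= 0
# by moving the constraint into the objective with a multiplier beta >= 0
# and maximizing the resulting (always-sound) bound by gradient ascent.

c = np.array([1.0, 3.0])
l, u = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
a = np.array([-1.0, -1.0])        # split constraint: x1 + x2 >= 0, written as a.x <= 0

def lagrangian_bound(beta):
    """g(beta) = min_{x in [l,u]} (c + beta*a).x — a valid lower bound on the
    constrained optimum for every beta >= 0 (weak duality)."""
    w = c + beta * a
    x_star = np.where(w >= 0, l, u)   # box minimizer picks the cheap corner
    return w @ x_star, x_star

beta = 0.0
for _ in range(200):                   # projected gradient ascent on beta
    g, x_star = lagrangian_bound(beta)
    grad = a @ x_star                  # subgradient of g at the current minimizer
    beta = max(0.0, beta + 0.05 * grad)

# The plain box bound here is -4; optimizing beta tightens it to -2,
# which is the true constrained optimum for this toy instance.
print("optimized lower bound:", lagrangian_bound(beta)[0])
```

The bound stays sound for every 𝛽, so the optimization can be stopped at any time, batched over many subproblems, and run entirely on GPUs.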
Combining our methods with BaB yields the state-of-the-art verifier 𝛼,𝛽-CROWN (alpha-beta-CROWN), the winning tool of the second International Verification of Neural Networks Competition (VNN-COMP 2021) with the highest total score. Our 𝛼,𝛽-CROWN can be three orders of magnitude faster than LP-solver-based BaB verifiers and is notably faster than all existing approaches on GPUs. Recently, we further generalized 𝛽-CROWN and proposed an efficient iterative approach that tightens all intermediate-layer bounds under neuron split constraints and strengthens bound tightness without LP solvers. This new approach greatly improves the efficiency of 𝛼,𝛽-CROWN in BaB, especially on several challenging benchmarks.
Lastly, we study verifiable training, which incorporates verification properties into the training procedure to enhance the verifiable robustness of trained models and to scale verification to larger models and datasets. We propose two general verifiable training frameworks: (1) MixTrain, which significantly improves verifiable training efficiency and scalability, and (2) adaptive verifiable training, which improves verifiable robustness by accounting for label similarity. The combination of verifiable training and BaB-based verifiers opens promising directions for more efficient and scalable neural network verification.
|
64 |
Formalized Generalization Bounds for Perceptron-Like Algorithms. Kelby, Robin J., 22 September 2020
No description available.
|
65 |
Verifying Value Iteration and Policy Iteration in Coq. Masters, David M., 01 June 2021
No description available.
|
66 |
A Formal Method to Analyze Framework-Based Software. Larson, Trent N., 01 August 2002
Software systems are frequently designed using abstractions that make software verification tractable. Specifically, by choosing meaningful, formal abstractions for interfaces and then designing according to those interfaces, one can verify entire systems against behavioral predicates. While such analysis is impractical for systems in general, framework-based software architectures are a class of system for which formal analysis can be both beneficial and practical over the life of the system. We present a method to formally analyze behavioral properties of framework-based software with higher-order logic and then demonstrate its utility on a significant, modern system.
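As a bounded, executable analogue of this idea (not the higher-order-logic development of the thesis; the lifecycle contract and property below are invented), one can model the framework's calling contract as a small transition system over callback names and check a behavioral predicate on every call sequence the contract allows, so the property holds for any component driven by a conforming framework.

```python
# Sketch: checking a behavioral predicate against a framework interface
# abstraction rather than against concrete plugin code.

CONTRACT = {                      # framework lifecycle: init (run)* shutdown
    'start': {'init': 'ready'},
    'ready': {'run': 'ready', 'shutdown': 'done'},
    'done':  {},
}

def traces(max_len):
    """Enumerate all callback sequences the contract permits, up to max_len."""
    out = []
    def walk(state, trace):
        if state == 'done':
            out.append(tuple(trace))
            return
        if len(trace) >= max_len:
            return
        for event, nxt in CONTRACT[state].items():
            walk(nxt, trace + [event])
    walk('start', [])
    return out

def predicate(trace):
    """Behavioral property: 'init' happens exactly once and before any 'run'."""
    return trace.count('init') == 1 and (
        'run' not in trace or trace.index('init') < trace.index('run'))

assert all(predicate(t) for t in traces(max_len=6))
print(f"property holds on all {len(traces(6))} contract-conforming traces")
```

The thesis's approach replaces this bounded enumeration with deductive reasoning in higher-order logic, so the guarantee covers all behaviors, not just those up to a fixed length.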
|
67 |
Verification of Digital Controller Verifications. Wang, Xuan, 10 November 2005
This thesis presents an analysis framework to verify the stability of a closed-loop control system with a software controller implementation. The usual approach to verifying stability for software uses experiments, which are costly and can be dangerous. More recently, mathematical models of software have been proposed which can be used to reason about the correctness of controllers; however, these models ignore computational details that may be important in verification. We propose a simulation-based method to detect instability of a closed-loop system with a software controller implementation under l^2 inputs. This method avoids the cost of experimentation and the loss of precision inherent in mathematical modeling. The method uses the small gain theorem to compute a lower bound on the 2-induced norm of the uncertainty in the software implementation; if this lower bound is greater than 1/(2-induced norm of G), where G is the feedback system consisting of the mathematical model of the plant and the mathematical model of the controller, the closed-loop system is unsafe in a certain sense. The resulting method cannot determine that the closed-loop system is stable; it can only suggest instability.
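A small sketch of the norm test just described (the controller models, candidate inputs, and the assumed norm of G are all invented): simulating the implementation and its mathematical model on the same finite-energy inputs yields a lower bound on the 2-induced norm of the uncertainty, which is then compared against 1/(2-induced norm of G).

```python
import numpy as np

# The uncertainty Delta is the gap between the implemented controller and its
# mathematical model; any test input u gives a lower bound
#     ||Delta||_{2-ind} >= ||Delta(u)||_2 / ||u||_2,
# and the small gain condition is violated (loop flagged unsafe) once that
# lower bound exceeds 1 / ||G||_{2-ind}.

def l2_norm(signal):
    return np.sqrt(np.sum(np.asarray(signal, dtype=float) ** 2))

def induced_norm_lower_bound(controller_impl, controller_model, test_inputs):
    """Both controllers map an input sequence to an output sequence."""
    best = 0.0
    for u in test_inputs:
        delta_out = np.asarray(controller_impl(u)) - np.asarray(controller_model(u))
        if l2_norm(u) > 0:
            best = max(best, l2_norm(delta_out) / l2_norm(u))
    return best

# Toy example: the "implementation" quantizes its output to 3 fractional bits.
model = lambda u: [0.5 * x for x in u]
impl  = lambda u: [round(0.5 * x * 8) / 8 for x in u]
tests = [np.random.uniform(-1, 1, 50) for _ in range(20)]

gain_G = 4.0                      # assumed 2-induced norm of the nominal loop G
lb = induced_norm_lower_bound(impl, model, tests)
print("lower bound on ||Delta||:", lb, "unsafe?", lb > 1 / gain_G)
```

Because the test only produces a lower bound on the uncertainty's gain, it can flag potential instability but, as the abstract notes, can never certify stability.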
|
68 |
Formalization Of Input And Output In Modern Operating Systems: The Hadley Model. Gerber, Matthew, 01 January 2005
We present the Hadley model, a formal descriptive model of input and output for modern computer operating systems. The model is intentionally inspired by the Open Systems Interconnection model of networking: I/O as a process is defined as a set of translations between a set of computer-sensible forms, or layers, of information. To illustrate an initial application domain, we discuss the utility of the Hadley model and a potential associated I/O system as a tool for digital forensic investigators. To illustrate practical uses of the Hadley model we present the Hadley Specification Language, an essentially functional language designed to let the translations that comprise I/O be written concisely and in a form that is relatively easy to verify. To further illustrate the utility of the language we present a read/write Microsoft DOS FAT12 and a read-only Linux ext2 file system specification written in the new format, and we prove the correctness of the read-only side of these descriptions. We present test results from operating our HSL-driven system both in user mode on stored disk images and as part of a Linux kernel module that allows file systems to be read. We conclude by discussing future directions for the research.
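A toy sketch of the layered-translation view of I/O (the layer names and encodings are invented and far simpler than HSL or the FAT12/ext2 specifications): each layer is a pure function from one representation to the next, and reading is the composition of those translations.

```python
from functools import reduce

# Sketch: I/O as a stack of pure translations between layers of information.

def bytes_to_sectors(raw, sector_size=4):
    """Lowest translation: a flat byte string into fixed-size sectors."""
    return [raw[i:i + sector_size] for i in range(0, len(raw), sector_size)]

def sectors_to_records(sectors):
    """Next translation: each sector holds one (name, size) record,
    encoded here as 3 name bytes + 1 size byte for the sake of the sketch."""
    return [(s[:3].decode('ascii'), s[3]) for s in sectors if len(s) == 4]

def records_to_listing(records):
    """Topmost translation: records into a user-visible directory listing."""
    return {name.strip(): size for name, size in records}

LAYERS = [bytes_to_sectors, sectors_to_records, records_to_listing]

def read(raw):
    # Reading is the composition of the per-layer translations, OSI-style.
    return reduce(lambda data, layer: layer(data), LAYERS, raw)

disk_image = b"a  \x05bb \x10"
print(read(disk_image))   # {'a': 5, 'bb': 16}
```

Keeping each layer a pure, self-contained translation is what makes the read-only direction amenable to the kind of correctness proof the thesis carries out for its file system specifications.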
|
69 |
RAUK: Automatic Schedulability Analysis of RTIC Applications Using Symbolic Execution. Håkansson, Mark, January 2022
In this thesis, the proof-of-concept tool RAUK for automatically analyzing RTIC applications for schedulability using symbolic execution is presented. The RTIC framework provides a declarative executable model for building embedded applications, whose behavior is based on established formal methods and policies. Because of this, RTIC applications are amenable to both worst-case execution time (WCET) estimation and scheduling analysis techniques. Internally, RAUK utilizes the symbolic execution tool KLEE to generate test vectors covering all feasible execution paths in all user tasks of the RTIC application. Since KLEE also checks for possible program errors, e.g., arithmetic or array-indexing errors, it can be used via RAUK to verify the robustness of the application with respect to such errors. The test vectors are replayed on the target hardware to record a WCET estimate for every user task. These WCET measurements are used to derive a worst-case response time (WCRT) for each user task, which in turn is used to determine whether the system is schedulable using formal scheduling analysis techniques. The evaluation of the tool shows a good correlation between the results from RAUK and manual measurements of the same tasks, which demonstrates the viability of the approach. However, the current implementation can add substantial overhead to the measurements, and certain types of paths in the application can sometimes be absent from the analysis. The work in this thesis builds on previous research on WCET estimation using KLEE for an older iteration of the RTIC framework. Our contributions include a focus on an RTIC 1.0 pre-release, seamless integration with the Rust ecosystem, minimal required changes to the application itself, and an automatic schedulability analyzer. Currently, RAUK can verify simple RTIC applications for both program errors and schedulability with minimal changes to the application source code. The groundwork is laid for the further improvements needed to handle larger and more complex applications. Solutions to known problems and future work are discussed in Chapters 6 and 7, respectively.
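A sketch of the schedulability check at the end of this pipeline, assuming classic response-time analysis for fixed-priority preemptive scheduling (the task set and parameters are invented, and the thesis's exact analysis may differ): each task's worst-case response time is the least fixed point of R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j, and the system is schedulable if every R_i is within its deadline.

```python
import math

# Sketch: response-time analysis over measured WCETs.
# Each task has a WCET C, a period/minimum inter-arrival T, a deadline D,
# and a static priority; higher-priority tasks preempt lower-priority ones.

def response_time(task, higher_prio):
    R = task['C']
    while True:
        interference = sum(math.ceil(R / t['T']) * t['C'] for t in higher_prio)
        R_next = task['C'] + interference
        if R_next == R:
            return R                    # fixed point reached
        if R_next > task['D']:          # iteration passed the deadline: unschedulable
            return R_next
        R = R_next

def schedulable(tasks):
    ordered = sorted(tasks, key=lambda t: t['prio'], reverse=True)
    return all(response_time(t, ordered[:i]) <= t['D']
               for i, t in enumerate(ordered))

tasks = [
    {'name': 'uart_rx', 'prio': 3, 'C': 1.0, 'T': 5.0,  'D': 5.0},
    {'name': 'control', 'prio': 2, 'C': 2.0, 'T': 10.0, 'D': 10.0},
    {'name': 'logging', 'prio': 1, 'C': 3.0, 'T': 20.0, 'D': 20.0},
]
print("schedulable:", schedulable(tasks))
```

In RAUK the C values would come from replaying the KLEE-generated test vectors on the target hardware, so the quality of the verdict depends directly on how tight those measured WCETs are.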
|
70 |
Working Towards the Verified Software Process. Adcock, Bruce M., January 2010
No description available.
|