271 |
A Domain Specific Language Based Approach for Generating Deadlock-Free Parallel Load Scheduling Protocols for Distributed Systems. Adhikari, Pooja. 11 May 2013.
In this dissertation, the concept of using a domain-specific language to develop error-free parallel asynchronous load scheduling protocols for distributed systems is studied. The motivation for this study is rooted in addressing the high cost of verifying parallel asynchronous load scheduling protocols. Asynchronous parallel applications are prone to subtle bugs such as deadlocks and race conditions due to the possibility of non-determinism. Because of this non-deterministic behavior, traditional testing methods are less effective at finding software faults. One approach that can eliminate these software bugs is to employ model checking techniques that can verify that non-determinism will not cause software faults in parallel programs. Unfortunately, model checking requires the development of a verification model of a program in a separate verification language, which can be an error-prone procedure and may not properly represent the semantics of the original system. Model checking can provide trustworthy results only if the semantics of the implementation code and the verification model are captured in a single framework, so that the verification model closely represents the implementation and the verification process can be automated naturally. In this dissertation, a domain-specific-language-based verification framework is developed to design parallel load scheduling protocols and automatically verify their behavioral properties through model checking. A specification language, LBDSL, is introduced that facilitates the development of parallel load scheduling protocols. The LBDSL verification framework uses model checking techniques to verify the asynchronous behavior of the protocol. It allows the same protocol specification to be used for both verification and code generation. Support for automatic verification during protocol development reduces post-development verification costs. The applicability of the LBDSL verification framework is illustrated by performing case studies on three different types of load scheduling protocols. The study shows that the LBDSL-based verification approach removes the need to debug deadlocks and race conditions, which has the potential to significantly lower software development costs.
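As a rough illustration of the kind of check such a framework automates, the sketch below (not from the dissertation; the toy protocol model and function names are invented for this listing) enumerates the reachable states of a two-node load-transfer protocol and reports any reachable state in which load remains but no transition is enabled, i.e. a deadlock.

```python
# Minimal sketch of explicit-state deadlock detection for a toy
# load-transfer protocol; the model itself is hypothetical.

def enabled_moves(state):
    """Return successor states: a node may send one unit of load to the
    other node only when its local queue is non-empty and the (1-slot)
    channel to the peer is free."""
    (q0, q1), (c01, c10) = state
    succs = []
    if q0 > 0 and c01 == 0:                 # node 0 sends to node 1
        succs.append(((q0 - 1, q1), (1, c10)))
    if q1 > 0 and c10 == 0:                 # node 1 sends to node 0
        succs.append(((q0, q1 - 1), (c01, 1)))
    if c01 == 1:                            # node 1 receives
        succs.append(((q0, q1 + 1), (0, c10)))
    if c10 == 1:                            # node 0 receives
        succs.append(((q0 + 1, q1), (c01, 0)))
    return succs

def find_deadlocks(initial):
    """Worklist exploration of the reachable state space."""
    seen, frontier, deadlocks = {initial}, [initial], []
    while frontier:
        state = frontier.pop()
        succs = enabled_moves(state)
        if not succs and sum(state[0]) > 0:  # load left but nothing enabled
            deadlocks.append(state)
        for s in succs:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return deadlocks

print(find_deadlocks(((2, 1), (0, 0))))      # [] -> no deadlock in this toy model
```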
|
272 |
Synthesis of Specifications and Refinement Maps for Real-Time Object Code Verification. Al-Qtiemat, Eman Mohammad. January 2020.
Formal verification methods have been shown to be very effective in finding corner-case bugs and ensuring the safety of embedded software systems. The use of formal verification requires a specification, which is typically a high-level mathematical model that defines the correct behavior of the system to be verified. However, embedded software requirements are typically described in natural language, and transforming these requirements into formal specifications remains a significant gap. While there is some work in this area, we propose solutions to address this gap in the context of refinement-based verification, a class of formal methods that has been shown to be effective for embedded object code verification. The proposed approach addresses both functional and timing requirements and has been demonstrated in the context of safety requirements for software control of infusion pumps. The next step in the verification process is to develop the refinement map, a mapping function that relates an implementation state (in this context, the state of the object code program to be verified) to a specification state. Constructing refinement maps often requires deep understanding of, and intuition about, the specification and the implementation, and it has proven very difficult to construct them manually. To overcome this obstacle, the construction of refinement maps should be automated. As a first step toward automation, we manually developed refinement maps for various safety properties concerning the software control operation of infusion pumps, and we identified possible generic templates for the construction of refinement maps. We then propose procedures for synthesizing refinement maps for functional and timing specifications. This work develops a process that significantly increases the automation in the generation of these refinement maps, which can then be used for refinement-based verification. The automated procedure has been successfully applied to the transformed safety requirements from the first part of our work. The approach is based on the identified generic refinement map templates, which can be extended in the future as applications require.
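To make the role of a refinement map concrete, the following minimal sketch (purely illustrative; the state layouts, field names, and `refmap` function are assumptions, not the dissertation's templates) projects a low-level implementation state onto the abstract specification state it is supposed to implement, so that implementation transitions can be checked against specification transitions.

```python
# Illustrative refinement map: project an object-code-level state onto
# the abstract specification state it is supposed to implement.
from dataclasses import dataclass

@dataclass(frozen=True)
class ImplState:          # hypothetical low-level state
    pc: int               # program counter
    rate_reg: int         # infusion rate stored in a device register (mL/h * 10)
    alarm_flag: int       # raw alarm bit

@dataclass(frozen=True)
class SpecState:          # hypothetical abstract state
    infusion_rate: float  # mL/h
    alarm: bool

def refmap(s: ImplState) -> SpecState:
    """Map implementation state to specification state by dropping
    irrelevant detail (pc) and decoding register encodings."""
    return SpecState(infusion_rate=s.rate_reg / 10.0,
                     alarm=bool(s.alarm_flag))

def step_matches(spec_step, impl_pre: ImplState, impl_post: ImplState) -> bool:
    """Check that an implementation transition refines a specification
    transition: the mapped pre/post states must be related by the spec."""
    return spec_step(refmap(impl_pre)) == refmap(impl_post)
```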
|
273 |
Dynamic Dead Variable Analysis. Lewis, Micah S. 18 August 2005.
Dynamic dead variable analysis (DDVA) extends traditional static dead variable analysis (SDVA) in the context of model checking through the use of run-time information. The analysis is run multiple times during the course of model checking to create a more precise set of dead variables. DDVA is evaluated based on the amount of memory used to complete model checking relative to SDVA, while considering the extra overhead required to implement DDVA. On several models with a complex control flow graph, DDVA reduces the amount of memory needed by 38-88MB compared to SDVA, at a cost of 36 bytes of memory overhead. On several models with loops, DDVA achieved no additional reduction compared to SDVA while requiring no more memory than SDVA.
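A minimal sketch of the underlying idea, assuming a simple explicit-state checker in which a state is a control location plus a variable environment and a per-location liveness table is available: dead variables are canonicalized before a state is stored, so states differing only in dead variables collapse into one entry (the table and state layout here are invented for illustration).

```python
# Sketch: canonicalize dead variables before storing a state, so that
# states that differ only in dead variables are treated as identical.
# The liveness table and state layout are illustrative assumptions.
LIVE_VARS = {                 # per control location: which variables are live
    "L0": {"x", "y"},
    "L1": {"y"},
    "L2": set(),
}

def canonical(state):
    """state = (location, {var: value}); zero out dead variables."""
    loc, env = state
    live = LIVE_VARS[loc]
    masked = {v: (val if v in live else 0) for v, val in env.items()}
    return (loc, tuple(sorted(masked.items())))

visited = set()

def visit(state):
    """Return True if the (canonicalized) state has not been seen before."""
    key = canonical(state)
    if key in visited:
        return False
    visited.add(key)
    return True

# Two states differing only in the dead variable x at L1 collapse into one:
assert visit(("L1", {"x": 7, "y": 3})) is True
assert visit(("L1", {"x": 42, "y": 3})) is False
```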
|
274 |
Efficient Logic Encryption Techniques for Sequential Circuits. Kasarabada, Yasaswy V. 15 July 2021.
No description available.
|
275 |
Assumption-Based Runtime Verification of Finite- and Infinite-State Systems. Tian, Chun. 23 November 2022.
Runtime Verification (RV) is usually considered a lightweight automatic verification technique for the dynamic analysis of systems, in which a monitor observes executions produced by a system and analyzes them against a formal specification. If the monitor is synthesized not only from the monitoring specification but also from extra assumptions on the system behavior (typically described by a model such as a transition system), then it may output more precise verdicts or even be predictive; on the other hand, it may no longer be lightweight, since monitoring under assumptions has the same computational complexity as model checking. When suitable assumptions come into play, the monitor may also support partial observability, where non-observable variables in the specification are inferred from observable ones, either present or past. Furthermore, the monitors are resettable, i.e., able to evaluate the specification at non-initial times of the execution while keeping memory of the input history. This helps break the monotonicity of monitors, which, after reaching conclusive verdicts, can still change their future outputs by resetting their reference time. The combination of these three characteristics (assumptions, partial observability, and resets) in monitor synthesis is called Assumption-Based Runtime Verification, or ABRV. In this thesis, we give the formalism of the ABRV approach and a group of monitoring algorithms based on specifications expressed in Linear Temporal Logic with both future and past operators, involving Boolean and possibly other types of variables. When all involved variables have finite domains, the monitors can be synthesized as finite-state machines implemented by Binary Decision Diagrams. With infinite-domain variables, the infinite-state monitors are based on satisfiability modulo theories, first-order quantifier elimination, and various model checking techniques. In particular, Bounded Model Checking is modified to do its work incrementally in order to obtain inconclusive verdicts efficiently before IC3-based model checkers get involved. All the monitoring algorithms in this thesis are implemented in a tool called NuRV. NuRV supports online and offline monitoring, and can also generate standalone monitor code in various programming languages. In particular, monitors can be synthesized as SMV models, whose behavioral correctness and other properties can be further verified by model checking.
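The toy monitor below is only a sketch of the resettable, three-valued monitoring interface described above, for the single safety property G(!error); it is not NuRV's synthesis algorithm and uses no assumptions or partial observability.

```python
# Toy resettable monitor for the safety property G(!error):
# verdict becomes "false" as soon as error is observed, otherwise inconclusive.
# Illustrative sketch only, not NuRV's synthesis algorithm.
from enum import Enum

class Verdict(Enum):
    TRUE = "true"            # holds on every extension (not reachable for G(!error))
    FALSE = "false"          # property is violated
    UNKNOWN = "?"            # inconclusive so far

class GNotErrorMonitor:
    def __init__(self):
        self.verdict = Verdict.UNKNOWN

    def step(self, observation: dict) -> Verdict:
        """Consume one observation (a map of observable variables)."""
        if self.verdict is Verdict.UNKNOWN and observation.get("error", False):
            self.verdict = Verdict.FALSE
        return self.verdict

    def reset(self) -> None:
        """Re-evaluate the property from the current time point onward,
        discarding the conclusive verdict but not the input stream."""
        self.verdict = Verdict.UNKNOWN

m = GNotErrorMonitor()
print(m.step({"error": False}))   # Verdict.UNKNOWN
print(m.step({"error": True}))    # Verdict.FALSE
m.reset()                         # restart evaluation at the next time point
print(m.step({"error": False}))   # Verdict.UNKNOWN again
```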
|
276 |
Towards Formal Verification in a Component-based Reuse Methodology. Karlsson, Daniel. January 2003.
Embedded systems are becoming increasingly common in our everyday lives. As technology progresses, these systems become more and more complex. Designers handle this increasing complexity by reusing existing components (Intellectual Property blocks). At the same time, the systems must still fulfill strict requirements on reliability and correctness. This thesis proposes a formal verification methodology which smoothly integrates with component-based system-level design using a divide-and-conquer approach. The methodology assumes that the system consists of several reusable components. Each of these components has already been formally verified by its designers and is considered correct provided that the environment satisfies certain properties imposed by the component. What remains to be verified is the glue logic inserted between the components. Each such glue logic is verified one at a time using model checking techniques. The verification methodology as well as the underlying theoretical framework and algorithms are presented in the thesis. Experimental results have shown the efficiency of the proposed methodology and demonstrated that it is feasible to apply it to real-life examples. / Report code: LiU-Tek-Lic-2003:57.
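A loose sketch of the divide-and-conquer check, under the assumption that each pre-verified component exports a property its environment must satisfy: the glue logic's reachable states are enumerated and checked against those exported assumptions (the FIFO interface, its assumption, and the glue model below are hypothetical).

```python
# Illustrative sketch: each pre-verified component exports an assumption on
# its environment, and only the glue logic between components is checked
# against those assumptions by exhaustive state enumeration.

def reachable_states(initial, step):
    """Enumerate the glue logic's reachable states with a simple worklist."""
    seen, work = {initial}, [initial]
    while work:
        s = work.pop()
        for t in step(s):
            if t not in seen:
                seen.add(t)
                work.append(t)
    return seen

# Assumption exported by a (hypothetical) FIFO component: the glue never
# pushes when the FIFO it drives is already full.
def fifo_assumption(state):
    pushing, fifo_count, fifo_capacity = state
    return not (pushing and fifo_count >= fifo_capacity)

def glue_step(state):
    """Toy glue logic: toggles 'pushing', fills the FIFO while pushing,
    and lets the FIFO drain by one."""
    pushing, count, cap = state
    succs = {(not pushing, count, cap)}
    if pushing and count < cap:
        succs.add((pushing, count + 1, cap))
    if count > 0:
        succs.add((pushing, count - 1, cap))
    return succs

violations = [s for s in reachable_states((False, 0, 2), glue_step)
              if not fifo_assumption(s)]
# A non-empty list means this toy glue can push into a full FIFO and must be revised.
print("assumption violated in states:", violations)
```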
|
277 |
A review of modelling and verification approaches for computational biology. Konur, Savas. January 2020.
This paper reviews the most frequently used computational modelling approaches and formal verification techniques in computational biology. The paper also compares a number of model checking tools and software suites used in analysing biological systems and biochemical networks and in verifying a wide range of biological properties.
|
278 |
Bayesian Model Checking Strategies for Dichotomous Item Response Theory Models. Toribio, Sherwin G. 16 June 2006.
No description available.
|
279 |
Symbolic Analysis of Weak Concurrency Semantics in Modern Database Programs. Rahmani, Kiarash (13171128). 28 July 2022.
<p>The goal of this dissertation is to design a collection of techniques and tools that enable<br>
the ease of programming under the traditional strong concurrency guarantees, without sacrificing the performance offered by modern distributed database systems. Our main thesis<br>
is that language-centric reasoning can help developers efficiently identify and eliminate con-<br>
currency anomalies in modern database programs, and we have demonstrated that it results<br>
in faster and safer database programs</p>
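For readers unfamiliar with such anomalies, the short sketch below simulates the classic lost-update interleaving at the language level; the schedule and account example are generic illustrations, not taken from the dissertation.

```python
# A language-level view of a classic weak-isolation anomaly (lost update).
# The schedule is simulated explicitly; no particular database system or
# isolation level from the dissertation is being modeled.

def lost_update_schedule():
    balance = 100
    t1_read = balance            # transaction T1 reads the row
    t2_read = balance            # transaction T2 reads the same (soon stale) value
    balance = t1_read + 50       # T1 commits its deposit
    balance = t2_read + 50       # T2 commits, silently overwriting T1's update
    return balance               # 150, although the serializable outcome is 200

print(lost_update_schedule())
```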
|
280 |
Search State Extensibility based Learning Framework for Model Checking and Test Generation. Chandrasekar, Maheshwar. 20 September 2010.
The increasing design complexity and shrinking feature sizes of hardware designs have created resource-intensive design verification and manufacturing test phases in the product life-cycle of a digital system. At the same time, time-to-market constraints require faster verification and test phases; otherwise, the result may be a buggy design or a defective product. This trend in the semiconductor industry has considerably increased the complexity and importance of the Design Verification, Manufacturing Test, and Silicon Diagnosis phases of a digital system's production life-cycle. In this dissertation, we present a generalized learning framework, which can be customized to the common solving technique underlying problems in these three phases.
During Design Verification, the conformance of the final design to its specifications is verified. Simulation-based and formal verification are the two widely known techniques for design verification. Although the former can increase confidence in the design, only the latter can ensure the correctness of a design with respect to a given specification. Design verification techniques were originally based on Binary Decision Diagrams (BDDs), but such techniques are now based on branch-and-bound procedures to avoid space explosion. However, branch-and-bound procedures may explode in time; thus, efficient heuristics and intelligent learning techniques are essential. In this dissertation, we propose a novel extensibility relation between search states and a learning framework that aids in identifying non-trivial redundant search states during the branch-and-bound search procedure. We also propose a probability-based heuristic to guide our learning technique. First, we utilize this framework in a branch-and-bound based preimage computation engine. Next, we show that it can be used to perform an upper-approximation-based state space traversal, which is essential to handle industrial-scale hardware designs. Finally, we propose a simple but elegant image extraction technique that utilizes our learning framework to compute an over-approximate image space. This image computation is later leveraged to create an abstraction-refinement-based model checking framework.
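The following generic sketch conveys the flavor of learning during branch-and-bound, using a subset-sum search purely as a stand-in problem: failed search states are recorded and later used to prune subproblems already known to be dead ends. The pruning test is only loosely analogous to the extensibility relation proposed in the dissertation.

```python
# Sketch: branch-and-bound search that records failed search states and
# reuses them to prune redundant subproblems. The subset-sum formulation
# is only an illustration, not a verification or test-generation problem.

def subset_sum(values, target):
    failed = set()                      # learned: (index, remaining) that cannot succeed

    def search(index, remaining, chosen):
        if remaining == 0:
            return list(chosen)
        if index == len(values) or remaining < 0 or (index, remaining) in failed:
            return None
        # Branch: take values[index], or skip it.
        for take in (True, False):
            if take:
                chosen.append(values[index])
                result = search(index + 1, remaining - values[index], chosen)
                chosen.pop()
            else:
                result = search(index + 1, remaining, chosen)
            if result is not None:
                return result
        failed.add((index, remaining))  # learn: this search state is a dead end
        return None

    return search(0, target, [])

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # -> [3, 4, 2]
```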
During Manufacturing Test, test patterns are applied to the fabricated system, in a test environment, to check for the existence of fabrication defects. Such patterns are usually generated by Automatic Test Pattern Generation (ATPG) techniques, which assume certain fault types to model arbitrary defects. The sizes of the fault list and the test set have a major impact on the economics of manufacturing test. To this end, we propose a fault collapsing approach to compact the target fault list for ATPG techniques. Further, from the very beginning, ATPG techniques have been based on branch-and-bound procedures that model the problem in the Boolean domain. However, ATPG is a problem in the multi-valued domain; thus, we propose a multi-valued ATPG framework that exploits this underlying nature. We also employ our learning technique for branch-and-bound procedures in this multi-valued framework.
To improve the yield for high-volume manufacturing, silicon diagnosis identifies a set of candidate defect locations in a faulty chip. Subsequently, physical failure analysis, an extremely time-consuming step, utilizes these candidates as an aid to locate the defects. To reduce the number of candidates returned to the physical failure analysis step, efficient diagnostic patterns are essential. Towards this objective, we propose an incremental framework that utilizes our learning technique for a branch-and-bound procedure. Further, it learns from the ATPG phase, where detection patterns are generated, and utilizes this information during diagnostic-pattern generation. Finally, we present a probability-based heuristic for X-filling of detection patterns with the objective of enhancing the diagnostic resolution of such patterns. We unify these techniques into a framework for test pattern generation with good detection and diagnostic ability. Overall, we propose a learning framework that can speed up the design verification, test, and diagnosis steps in the life cycle of a hardware system. / Ph. D.
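The sketch below shows only the mechanics of X-filling, assuming a test cube over {0, 1, X} and a per-position probability estimate; the fill rule and probabilities are placeholders, not the dissertation's heuristic.

```python
# Mechanics of X-filling: unspecified (X) bits of a test cube are assigned
# concrete values according to a per-position probability estimate.
# The probabilities and the fill rule are illustrative placeholders.

def x_fill(test_cube, prob_one, threshold=0.5):
    """test_cube: string over {'0', '1', 'X'};
    prob_one[i]: estimated probability that filling bit i with 1 helps
    (e.g. improves diagnostic resolution under some scoring model)."""
    filled = []
    for i, bit in enumerate(test_cube):
        if bit == 'X':
            filled.append('1' if prob_one[i] >= threshold else '0')
        else:
            filled.append(bit)          # specified bits are left untouched
    return ''.join(filled)

cube = "1XX0X1"
probs = [0.9, 0.2, 0.7, 0.5, 0.6, 0.1]
print(x_fill(cube, probs))              # -> "101011"
```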
|