161

Formal Verification of Instruction Dependencies in Microprocessors

Shehata, Hazem January 2011
In microprocessors, achieving efficient utilization of the execution units is a key factor in improving performance. However, maintaining an uninterrupted flow of instructions is a challenge due to the data and control dependencies between instructions of a program. Modern microprocessors employ aggressive optimizations to keep their execution units busy without violating inter-instruction dependencies. Such complex optimizations may cause subtle implementation flaws that can be hard to detect using conventional simulation-based verification techniques. Formal verification is known for its ability to discover design flaws that may go undetected using conventional verification techniques. However, with formal verification come two major challenges. First, the correctness of the implementation needs to be defined formally. Second, formal verification is often hard to apply at the scale of realistic implementations. In this thesis, we present a formal verification strategy to guarantee that a microprocessor implementation preserves both data and control dependencies among instructions. Throughout our strategy, we address the two major challenges associated with formal verification: correctness and scalability. We address the correctness challenge by specifying our correctness criteria in the context of generic pipelines. Unlike conventional pipeline hazard rules, we make no distinction between the data and control aspects. Instead, we describe the relationship between a producer instruction and a consumer instruction such that both instructions can speculatively read their source operands, speculatively write their results, and go out of program order during execution. In addition to supporting branch and value prediction, our correctness criteria allow the implementation to discard (squash) or replay instructions while they are being executed. We address the scalability challenge in three ways: abstraction, decomposition, and induction. First, we state our inter-instruction dependency correctness criteria in terms of read and write operations without making reference to data values. Consequently, our correctness criteria can be verified for implementations with abstract datapaths. Second, we decompose our correctness criteria into a set of smaller obligations that are easier to verify. All these obligations can be expressed as properties within the Syntactically-Safe fragment of Linear Temporal Logic (SSLTL). Third, we introduce a technique to verify SSLTL properties by induction, and prove its soundness and completeness. To demonstrate our overall strategy, we verified a term-level model of an out-of-order speculative processor. The processor model implements register renaming using a P6-style reorder buffer and branch prediction with a hybrid (discard-replay) recovery mechanism. The verification obligations (expressed in SSLTL) are checked using a tool implementing our inductive technique. Our tool, named Tahrir, is built on top of a generic interface to SMT solvers and can be used generally for verifying SSLTL properties of infinite-state systems.
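As a sketch of the inductive, SMT-backed style of checking described here (not Tahrir's actual interface), the following checks that a simple safety property is inductive using the Z3 solver; the transition system and property are illustrative assumptions, not the thesis's processor model:

```python
# A sketch of SMT-based induction for a safety property, in the spirit of
# the inductive SSLTL checking described above; not Tahrir's actual code.
# The transition system (a wrapping counter) and the property are
# illustrative assumptions.
from z3 import Int, Not, Solver, unsat

def init(s):            # initial states: counter starts at 0
    return s == 0

def trans(s, s_next):   # transition relation: increment, wrapping at 10
    return s_next == (s + 1) % 10

def safe(s):            # safety property: counter stays below 10
    return s < 10

def is_inductive():
    s, s_next = Int('s'), Int('s_next')
    base = Solver()     # base case: no initial state violates the property
    base.add(init(s), Not(safe(s)))
    step = Solver()     # inductive step: safe states only reach safe states
    step.add(safe(s), trans(s, s_next), Not(safe(s_next)))
    return base.check() == unsat and step.check() == unsat

print("property is inductive:", is_inductive())
```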
162

Parallel Run-Time Verification

Berkovich, Shay January 2013
Run-time verification is a technique for reasoning about program correctness. Given a set of desirable properties and a trace from the inspected program as input, the monitor module verifies that the properties hold on this trace. As this process takes place at run time, one of the major drawbacks of run-time verification is the execution overhead caused by the monitoring activity. In this thesis, we intend to minimize this overhead by presenting a collection of parallel verification algorithms. The algorithms verify property correctness in parallel, decreasing the verification time by dispersing computationally intensive calculations over multiple cores (first level of parallelism). We designed the algorithms to exploit data-level parallelism, making them specifically suitable for Graphics Processing Units (GPUs), although they can be utilized on multi-core platforms as well. Running the inspected program and the monitor module on separate platforms (second level of parallelism) results in several advantages: minimization of interference between the monitor and the program, faster processing of non-trivial computations, and even a significant reduction in power consumption (when the monitor is running on a GPU). This work also aims to provide a solution for automated run-time verification of C programs by implementing the aforementioned set of algorithms in a monitoring tool called the GPU-based online and offline Monitoring Framework (GooMF). The ultimate goal of GooMF is to supply developers with an easy-to-use and flexible verification API that requires minimal knowledge of formal languages and techniques.
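As an illustration of the data-level parallelism described above (not GooMF's actual API), the following sketch checks a bounded-response property over a recorded trace with vectorized NumPy operations, so each trace position is verified independently, the way a GPU kernel would:

```python
# Illustrative sketch of data-parallel trace monitoring: checking the
# bounded-response property "every request is acknowledged within K steps"
# over a recorded trace. The trace and property are assumptions made up
# for the example.
import numpy as np

K = 3
req = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0], dtype=bool)  # p: request
ack = np.array([0, 0, 1, 0, 0, 0, 1, 0, 0, 1], dtype=bool)  # q: acknowledge

# ack_within_k[i] is True iff some ack occurs at positions i+1 .. i+K
padded = np.concatenate([ack, np.zeros(K, dtype=bool)])
windows = np.stack([padded[i + 1 : i + 1 + len(ack)] for i in range(K)])
ack_within_k = windows.any(axis=0)

violations = np.flatnonzero(req & ~ack_within_k)
print("violating positions:", violations)   # empty: the property holds
```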
163

Advances in space and time efficient model checking of finite state systems

Parashkevov, Atanas January 2002
Bibliography: leaves 211-220 / xviii, 220 leaves : charts ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / This thesis examines automated formal verification techniques and their associated space and time implementation complexity when applied to finite state concurrent systems. The focus is on concurrent systems expressed in the Communicating Sequential Processes (CSP) framework. An approach to the compilation of CSP system descriptions into boolean formulae in the form of Ordered Binary Decision Diagrams (OBDDs) is presented, further utilised by a basic algorithm that checks a refinement or equivalence relation between a pair of processes in any of the three CSP semantic models. The performance bottlenecks of the basic refinement-checking algorithm are identified and addressed with the introduction of a number of novel techniques and algorithms. The algorithms described in this thesis are implemented in the Adelaide Refinement Checking Tool. / Thesis (Ph.D.)--University of Adelaide, Dept. of Computer Science, 2002
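To make the refinement-checking idea concrete, here is a toy explicit-state sketch of traces refinement between two processes; a tool like the one described would represent the state space symbolically with OBDDs, and both processes below are illustrative assumptions:

```python
# Toy sketch of CSP-style traces refinement (Spec is refined by Impl in
# the traces model iff every trace of Impl is a trace of Spec). This
# explicit-state version only illustrates the check itself; real tools
# work on symbolic (OBDD) representations.
def traces_refines(spec, impl, spec_init, impl_init):
    """spec/impl map state -> {event: next_state} (deterministic for brevity)."""
    seen, stack = set(), [(spec_init, impl_init)]
    while stack:
        s, i = stack.pop()
        if (s, i) in seen:
            continue
        seen.add((s, i))
        for event, i_next in impl[i].items():
            if event not in spec[s]:   # Impl performs an event Spec forbids
                return False, (s, i, event)
            stack.append((spec[s][event], i_next))
    return True, None

# Spec: alternate 'a' and 'b' forever; Impl: one 'a', one 'b', then stop.
spec = {0: {'a': 1}, 1: {'b': 0}}
impl = {0: {'a': 1}, 1: {'b': 2}, 2: {}}
print(traces_refines(spec, impl, 0, 0))   # (True, None)
```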
164

A flexible framework for leveraging verification tools to enhance the verification technologies available for policy enforcement

Larkin, James Unknown Date
Program verification is vital as more and more users are creating, downloading and executing foreign computer programs. Software verification tools provide a means for determining whether a program adheres to a user's security requirements, or security policy. Many verification tools exist for checking different types of policies on different types of programs. Currently, however, no verification tool is capable of determining whether all types of programs satisfy all types of policies. This thesis describes a framework for supporting multiple verification tools to determine whether a program satisfies a user's security policy. A user's security requirements are represented at multiple levels of abstraction as Intermediate Execution Environments. Using a sequence of configurations, the security requirements are transformed from the abstract level to the tool level, possibly for multiple verification tools. The validity of the framework is demonstrated through a number of case studies.
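A minimal sketch of the layered-transformation idea, under the assumption that a configuration is simply a function lowering a policy one abstraction level; all names here are hypothetical, not the thesis's actual API:

```python
# Hypothetical sketch: a security policy is lowered through abstraction
# levels by a sequence of configurations until it reaches one concrete
# verification tool's input format. All names are illustrative assumptions.
from typing import Callable, List

Policy = dict            # abstract policy representation
Transform = Callable[[Policy], Policy]

def lower_policy(policy: Policy, configurations: List[Transform]) -> Policy:
    """Apply each configuration in turn, moving from abstract to tool level."""
    for configure in configurations:
        policy = configure(policy)
    return policy

# Example: an abstract "no network send after file read" policy, lowered
# first to an automaton view, then to a tool-specific input.
abstract = {"forbid": ["file_read ; net_send"]}
to_automaton = lambda p: {"automaton": {"bad_sequence": p["forbid"]}}
to_tool_input = lambda p: {"tool": "model-checker", "spec": p["automaton"]}

print(lower_policy(abstract, [to_automaton, to_tool_input]))
```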
165

The Specification and Refinement of Timed Processes

Mahony, Brendan Patrick Unknown Date
The use of predicate transformers to model the action of sequential programs has allowed the construction of a refinement calculus for expressing the formal verification of the top-down development of sequential programs. It is shown that predicate transformers may also be used to model real-time processes. The notions of precondition and postcondition in the sequential refinement calculus are replaced by the notions of assumption and effect. In this way the formal development of real-time processes may also be expressed within the refinement calculus. Notations are developed for the specification and implementation of real-time processes within the framework of the refinement calculus. Several case studies are presented to demonstrate the utility of this approach.
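For orientation, the standard predicate-transformer rules that such a calculus builds on can be stated as follows (common textbook notation, not necessarily the thesis's own):

```latex
% Weakest-precondition rules of the sequential refinement calculus and
% the induced refinement order; the thesis replaces pre/postconditions
% with assumptions and effects to cover real-time processes.
\[
  wp(x := E,\; Q) \;=\; Q[E/x]
  \qquad
  wp(S_1 ; S_2,\; Q) \;=\; wp(S_1,\; wp(S_2,\; Q))
\]
\[
  S \sqsubseteq T
  \;\iff\;
  \forall Q.\; wp(S, Q) \Rightarrow wp(T, Q)
\]
```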
166

Verificationism reconsidered

Forster, Ann Owens. January 1992
Thesis (Ph. D.)--University of Washington, 1992. / Vita. Includes bibliographical references (leaves [261]-267).
167

Groundwork for a concept-based theory of confirmation

Thompson, Adam R. January 2007
Thesis (M.A.)--University of Wyoming, 2007. / Title from PDF title page (viewed on Dec. 3, 2008). Includes bibliographical references (p. 76-79).
168

Verificare: a platform for composable verification with application to SDN-Enabled systems

Skowyra, Richard William 22 January 2016
Software-Defined Networking (SDN) has become increasingly prevalent in both the academic and industrial communities. A new class of systems built on SDNs, which we refer to as SDN-Enabled, provides programmatic interfaces between the SDN controller and the larger distributed system. Existing tools for SDN verification and analysis are insufficiently expressive to capture this composition of a network and a larger distributed system. Generic verification systems are an infeasible solution, due to their monolithic approach to modeling and the resulting rapid state-space explosion. In this thesis we present a new compositional approach to system modeling and verification that is particularly appropriate for SDN-Enabled systems. Compositional models may have sub-components (such as switches and end-hosts) modified, added, or removed with only minimal, isolated changes. Furthermore, invariants may be defined over the composed system that restrict its behavior, allowing assumptions to be added or removed and components to be abstracted away into the service guarantee that they provide (such as guaranteed packet arrival). Finally, compositional modeling can minimize the size of the state space to be verified by taking advantage of known model structure. We also present the Verificare platform, a tool chain for building compositional models in our modeling language and automatically compiling them to multiple off-the-shelf verification tools. The compiler outputs a minimal, calculus-oblivious formalism, which is accessed by plugins via a translation API. This enables a wide variety of requirements to be verified. As new tools become available, the translator can easily be extended with plugins to support them.
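A minimal sketch of the compositional idea, assuming components that pair concrete behaviour with the service guarantee they provide; names and structure are illustrative, not Verificare's modeling language:

```python
# Sketch of compositional modeling with service guarantees: a component
# can later be abstracted away by keeping only its guarantee, shrinking
# the state space that needs to be verified. All names are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List

State = Dict[str, int]

@dataclass
class Component:
    name: str
    step: Callable[[State], State]        # concrete behaviour
    guarantee: Callable[[State], bool]    # service guarantee it provides

def compose(components: List[Component]) -> Callable[[State], State]:
    def system_step(state: State) -> State:
        for c in components:
            state = c.step(state)
            assert c.guarantee(state), f"{c.name} violated its guarantee"
        return state
    return system_step

# A switch modeled concretely, with "guaranteed packet arrival" as its guarantee.
switch = Component(
    name="switch",
    step=lambda s: {"in_flight": 0,
                    "delivered": s["delivered"] + s["in_flight"]},
    guarantee=lambda s: s["in_flight"] == 0,
)
print(compose([switch])({"in_flight": 2, "delivered": 0}))
# -> {'in_flight': 0, 'delivered': 2}
```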
169

Semantic refactorings

Kesseli, Pascal January 2017
Refactorings are structured changes to existing software that leave its externally observable behaviour unchanged. The intent is to improve readability, performance, or other non-behavioural properties of a program. Agile software engineering processes stress the importance of refactoring to keep program code extensible and maintainable. Despite their apparent benefits, manual refactorings are time-consuming and prone to introducing unintended side effects. Research efforts seek to support and automate refactoring tasks to overcome these limitations. Current research in automatic refactoring, as well as state-of-the-art automated refactoring tools, frequently relies on syntax-driven approaches. These focus on transformations which can be safely performed using only syntactic information about a program, or act overly conservatively when knowledge about program semantics is required. In this thesis we explore semantics-driven refactoring, which enables much more sophisticated refactoring schemata. Our semantics-driven refactorings rely on formal verification algorithms to reason over a program's behaviour, and we conjecture they are more precise and can handle more complex code scenarios than syntax-driven ones. For this purpose, we present and implement a program synthesis algorithm based on the CEGIS paradigm and demonstrate that it can be applied to a diverse set of applications. Our synthesiser relies on the bounded model checker CBMC as an oracle and is based on an earlier research prototype called Kalashnikov. We further define our Java Stream Theory (JST), which allows us to reason over a set of interesting semantic refactorings. Both solutions are combined into an automated semantic refactoring decision procedure, reasoning over program behaviours, and searching the space of possible refactorings using program synthesis. We provide experimental evidence to support our conjecture that semantics-driven refactorings exceed syntax-driven approaches in precision and scope.
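A minimal sketch of the CEGIS loop mentioned above, with both the synthesizer and the verification oracle stubbed over a tiny finite search space; a real implementation would use a verifier such as CBMC as the oracle:

```python
# Sketch of counterexample-guided inductive synthesis (CEGIS): propose a
# candidate consistent with the counterexamples seen so far, verify it,
# and refine with any new counterexample. The candidate space, inputs,
# and specification are toy assumptions made up for the example.
def cegis(candidates, inputs, spec):
    examples = []                       # counterexamples gathered so far
    for prog in candidates:             # "synthesize": propose a candidate
        if all(spec(prog, x) for x in examples):
            cex = next((x for x in inputs if not spec(prog, x)), None)
            if cex is None:             # "verify": no counterexample found
                return prog
            examples.append(cex)        # refine with the new counterexample
    return None

# Toy task: synthesize f(x) equal to abs(x) on a small input domain.
candidates = [lambda x: x, lambda x: -x, lambda x: x if x >= 0 else -x]
found = cegis(candidates, range(-3, 4), lambda f, x: f(x) == abs(x))
print(found(-2), found(3))              # 2 3
```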
170

Syntactic complexity in the modal μ calculus

Lehtinen, Maria Karoliina January 2017
This thesis studies how to eliminate syntactic complexity in Lμ, the modal μ-calculus. Lμ is a verification logic in which a least fixpoint operator μ, and its dual ν, add recursion to a simple modal logic. The number of alternations between μ and ν is a measure of complexity called the formula's index: the lower the index, the easier a formula is to model-check. The central question of this thesis is a long-standing one, the Lμ index problem: given a formula, what is the least index of any equivalent formula, that is to say, its semantic index? I take a syntactic approach, focused on simplifying formulas. The core decidability results are (i) alternative, syntax-focused decidability proofs for ML and Π^μ_1, the low-complexity classes of Lμ; and (ii) a proof that Σ^μ_2, the fragment of Lμ with one alternation, is decidable for formulas in the dual class Π^μ_2. Beyond its algorithmic contributions, this thesis aims to deepen our understanding of the index problem and the tools at our disposal. I study disjunctive form and related syntactic restrictions, and how they affect the index problem. The main technical results are that the transformation into disjunctive form preserves Π^μ_2-indices but not Σ^μ_2-indices, and that some properties of binary trees are expressible with a lower index using disjunctive formulas than using non-deterministic automata. The latter is part of a thorough account of how the Lμ index problem and the Rabin–Mostowski index problem for parity automata are related. In the final part of the thesis, I revisit the relationship between the index problem and parity games. The syntactic index of a formula is an upper bound on the descriptive complexity of its model-checking parity games. I show that the semantic index of a formula Ψ is bounded above by the descriptive complexity of the model-checking games for Ψ. I then study whether this bound is strict: if a formula Ψ is equivalent to a formula in an alternation class C, does a formula of C suffice to describe the winning regions of the model-checking games of Ψ? I prove that this is the case for ML, Π^μ_1, Σ^μ_2, and the disjunctive fragment of any alternation class. I discuss the practical implications of these results and propose a uniform approach to the index problem, which subsumes the previously described decision procedures for low alternation classes. In brief, this thesis can be read as a guide on how to approach a seemingly complex Lμ formula. Along the way it studies what makes this such a difficult problem and proposes novel approaches to both simplifying individual formulas and deciding further fragments of the alternation hierarchy.
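For orientation, a standard example of fixpoint alternation in common notation (an illustration, not an excerpt from the thesis): the property "on some path, p holds infinitely often" is commonly cited as requiring one genuine alternation, placing it in Π^μ_2 but in no alternation-free class:

```latex
% "On some path, p holds infinitely often": the inner mu searches along
% a path for the next occurrence of p, the outer nu demands this repeats
% forever, giving a nu/mu alternation of index Pi^mu_2.
\[
  \nu X.\, \mu Y.\, \Diamond\bigl( (p \wedge X) \vee Y \bigr)
\]
```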
