51

Approximation and Refinement Techniques for Hard Model-checking Problems

Bobaru, Mihaela 15 July 2009
Formal verification by model checking verifies whether a system satisfies some given correctness properties, and is intractable in general. We focus on several problems originating from the usage of model checking and from the inherent complexity of model checking itself. We propose approximation and iterative refinement techniques and demonstrate that they help in making these problems tractable in practical cases. Vacuity detection is one of the problems, relating to the trivial satisfaction of properties. A similar problem is query solving, useful in model exploration, when properties of a system are not fully known and are to be discovered rather than checked. Both of these problems have solution spaces structured as lattices and can be solved by model checking using those lattices. The lattices, in the most general formulation of these problems, are too complex to be implemented efficiently. We introduce a general approximation framework for model checking with lattices and instantiate this framework for the two problems, leading to algorithms and implementations that can efficiently obtain partial answers to the problems. We also introduce refinement techniques that consider incrementally larger lattices and compute even the partial answers gradually, to further abate the size explosion of the problems. Another problem we consider is the state-space explosion of model checking. The size of a system model is exponential in the number of state variables, which renders model checking intractable. We consider systems composed of several components running concurrently. For such systems, compositional verification checks components individually to avoid composing an entire system. Model checking an individual component uses assumptions about the other components. Smaller assumptions lead to smaller verification problems. We introduce iterative refinement techniques that improve the assumptions generated by previous automated approaches. One technique incrementally refines the interfaces between components in order to obtain smaller assumptions that are sufficient to prove a given property. The smaller assumptions are approximations of the assumption that would be obtained without our interface refinement. Another technique computes assumptions as abstractions of components, as an alternative to current approaches that learn assumptions from counterexamples. Our abstraction refinement has the potential to compute smaller nondeterministic assumptions, in contrast to the deterministic assumptions learned by current approaches. We confirm experimentally the benefits of our new approximation and refinement techniques.
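A minimal sketch of the assume-guarantee idea behind the compositional checks described above, using finite trace sets as a toy stand-in for components and properties; the trace-set model, the helper names satisfies/compose, and the example sets are assumptions of this illustration, not the thesis's formalization.

```python
# Toy model: a component or property is a set of finite traces over a shared
# alphabet; satisfaction is trace inclusion and composition is intersection.

def satisfies(model, prop):
    """M |= P iff every behaviour of M is allowed by P."""
    return model <= prop

def compose(m1, m2):
    """Synchronous composition over a shared alphabet: shared behaviours only."""
    return m1 & m2

def assume_guarantee(m1, m2, prop, assumption):
    """Classic rule: from (A || M1 |= P) and (M2 |= A) conclude (M1 || M2 |= P).
    A small assumption A that over-approximates M2 keeps both checks small."""
    premise1 = satisfies(compose(assumption, m1), prop)
    premise2 = satisfies(m2, assumption)
    return premise1 and premise2

# Hypothetical example: strings stand in for finite traces.
M1 = {"ab", "ac"}
M2 = {"ab", "ad"}
P = {"ab", "ad", "ae"}
A = {"ab", "ad", "af"}                      # an over-approximation of M2
assert assume_guarantee(M1, M2, P, A)       # both premises hold
assert satisfies(compose(M1, M2), P)        # and indeed M1 || M2 |= P
```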
52

Predicative Quantum Programming

Tafliovich, Anya 01 September 2010
This work presents Quantum Predicative Programming, a theory of quantum programming that encompasses many aspects of quantum computation and quantum communication. The theory provides a methodology to specify, implement, and analyse quantum algorithms, the paradigm of quantum non-locality, quantum pseudotelepathy games, computing with mixed states, and quantum communication protocols that use both quantum and classical communication channels.
53

Temporal Logic Motion Planning in Partially Unknown Environments

Maly, Matthew 16 September 2013
This thesis considers the problem of a robot with complex dynamics navigating a partially discovered environment to satisfy a temporal logic formula consisting of both a co-safety formula component and a safety formula component. We employ a multi-layered synergistic framework for planning motions to satisfy a temporal logic formula, and we combine with it an iterative replanning strategy to locally patch the robot's discretized internal representation of the workspace whenever a new obstacle is discovered. Furthermore, we introduce a notion of "closeness" of satisfaction of a linear temporal logic formula, defined by a metric over the states of the corresponding automaton. We employ this measure to maximize partial satisfaction of the co-safety component of the temporal logic formula when obstacles render it unsatisfiable. For the safety component of the specification, we do not allow partial satisfaction. This introduces a general division between "soft" and "hard" constraints in the temporal logic specification, a concept we illustrate in our discussion of future work. The novel contributions of this thesis include (1) the iterative replanning strategy, (2) the support for safety formulas in the temporal logic specification, (3) the method to locally patch the discretized workspace representation, and (4) support for partial satisfaction of unsatisfiable co-safety formulas. As our experimental results show, these methods allow us to quickly compute motion plans for robots with complex dynamics to satisfy rich temporal logic formulas in partially unknown environments.
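One plausible reading of such a closeness measure, sketched under the assumption that the closeness of an automaton state is its shortest-path distance to an accepting state; the encoding and function names are illustrative, not the thesis's exact construction.

```python
from collections import deque

def closeness(transitions, accepting):
    """transitions: dict mapping each automaton state to its successor states;
    accepting: set of accepting states. Returns, for every state that can still
    accept, the minimal number of transitions to reach acceptance (a backward
    BFS from the accepting states)."""
    reverse = {}
    for s, succs in transitions.items():
        for t in succs:
            reverse.setdefault(t, set()).add(s)
    dist = {s: 0 for s in accepting}
    queue = deque(accepting)
    while queue:
        s = queue.popleft()
        for p in reverse.get(s, ()):
            if p not in dist:
                dist[p] = dist[s] + 1
                queue.append(p)
    return dist

# Toy automaton: state 2 is accepting; state 3 can never accept.
auto = {0: [1], 1: [2], 2: [2], 3: [3]}
print(closeness(auto, {2}))   # {2: 0, 1: 1, 0: 2}; state 3 is absent
```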
54

Machine Assisted Reasoning for Multi-Threaded Java Bytecode

Lagerkvist, Mikael Zayenz January 2005
In this thesis an operational semantics for a subset of the Java Virtual Machine (JVM) is developed and presented. The subset contains standard operations such as control flow, computation, and memory management. In addition, the subset contains a treatment of parallel threads of execution. The operational semantics is embedded into a µ-calculus-based proof assistant called the VeriCode Proof Tool (VCPT). VCPT has been developed at the Swedish Institute of Computer Science (SICS) and has powerful features for proving inductive assertions. Some examples of proving properties of programs using the embedding are presented.
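For flavour only, a toy single-threaded step function over configurations (program counter, operand stack, local variables), showing what an executable small-step semantics for a tiny bytecode subset looks like; the instruction set and encoding are assumptions of this sketch and far simpler than the JVM subset and thread treatment formalized in the thesis.

```python
def step(program, config):
    """One small-step transition: configuration -> next configuration."""
    pc, stack, locals_ = config
    op, *args = program[pc]
    if op == "push":                      # push a constant onto the stack
        return pc + 1, stack + [args[0]], locals_
    if op == "add":                       # pop two operands, push their sum
        return pc + 1, stack[:-2] + [stack[-2] + stack[-1]], locals_
    if op == "store":                     # pop the top of stack into a local
        return pc + 1, stack[:-1], {**locals_, args[0]: stack[-1]}
    if op == "load":                      # push a local onto the stack
        return pc + 1, stack + [locals_[args[0]]], locals_
    if op == "goto":                      # unconditional jump
        return args[0], stack, locals_
    raise ValueError(f"unknown opcode {op}")

prog = [("push", 2), ("push", 3), ("add",), ("store", "x")]
cfg = (0, [], {})
while cfg[0] < len(prog):
    cfg = step(prog, cfg)
print(cfg)   # (4, [], {'x': 5})
```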
55

Theory and techniques for synthesizing efficient breadth-first search algorithms

Nedunuri, Srinivas 05 October 2012
The development of efficient algorithms to solve a wide variety of combinatorial and planning problems is a significant achievement in computer science. Traditionally each algorithm is developed individually, based on flashes of insight or experience, and then (optionally) verified for correctness. While computer science has formalized the analysis and verification of algorithms, the process of algorithm development remains largely ad-hoc. The ad-hoc nature of algorithm development is especially limiting when developing algorithms for a family of related problems. Guided program synthesis is an existing methodology for systematic development of algorithms. Specific algorithms are viewed as instances of very general algorithm schemas. For example, the Global Search schema generalizes traditional branch-and-bound search, and includes both depth-first and breadth-first strategies. Algorithm development involves systematic specialization of the algorithm schema based on problem-specific constraints to create efficient algorithms that are correct by construction, obviating the need for a separate verification step. Guided program synthesis has been applied to a wide range of algorithms, but there is still no systematic process for the synthesis of large search programs such as AI planners. Our first contribution is the specialization of Global Search to a class we call Efficient Breadth-First Search (EBFS), by incorporating dominance relations to constrain the size of the frontier of the search to be polynomially bounded. Dominance relations allow two search spaces to be compared to determine whether one dominates the other, thus allowing the dominated space to be eliminated from the search. We further show that EBFS is an effective characterization of greedy algorithms, when the breadth bound is set to one. Surprisingly, the resulting characterization is more general than the well-known characterization of greedy algorithms, namely the Greedy Algorithm parametrized over algebraic structures called greedoids. Our second contribution is a methodology for systematically deriving dominance relations, not just for individual problems but for families of related problems. The techniques are illustrated on numerous well-known problems. Combining this with the program schema for EBFS results in efficient greedy algorithms. Our third contribution is the application of the theory and methodology to the practical problem of synthesizing fast planners. Nearly all the state-of-the-art planners in the planning literature are heuristic domain-independent planners. They generally do not scale well and their space requirements also become quite prohibitive. Planners such as TLPlan that incorporate domain-specific information in the form of control rules are orders of magnitude faster. However, devising the control rules is a labor-intensive task and requires domain expertise and insight. The correctness of the rules is also not guaranteed. We introduce a method by which domain-specific dominance relations can be systematically derived, which can then be turned into control rules, and demonstrate the method on a planning problem (Logistics).
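A rough sketch, with assumed helper names, of breadth-first search constrained by a dominance relation and a breadth bound, in the spirit of the EBFS class described above; setting the bound to one keeps a single undominated node per level, which is the greedy case.

```python
def ebfs(start, successors, is_goal, dominates, bound):
    """Level-by-level search whose frontier is pruned by `dominates` and then
    truncated to `bound` nodes, so its size stays fixed per level."""
    if is_goal(start):
        return start
    frontier = [start]
    while frontier:
        level = [s for node in frontier for s in successors(node)]
        for node in level:
            if is_goal(node):
                return node
        # Keep only undominated nodes (assumes dominance is transitive).
        undominated = []
        for n in level:
            if any(dominates(m, n) for m in undominated):
                continue
            undominated = [m for m in undominated if not dominates(n, m)] + [n]
        frontier = undominated[:bound]    # bound = 1 gives a greedy algorithm
    return None

# Toy use: grow a number to at least 10 by +1 or *2; larger values dominate.
print(ebfs(1,
           successors=lambda n: [n + 1, n * 2],
           is_goal=lambda n: n >= 10,
           dominates=lambda a, b: a > b,
           bound=1))   # 16
```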
56

Efficient, mechanically-verified validation of satisfiability solvers

Wetzler, Nathan David 04 September 2015
Satisfiability (SAT) solvers are commonly used for a variety of applications, including hardware verification, software verification, theorem proving, debugging, and hard combinatorial problems. These applications rely on the efficiency and correctness of SAT solvers. When a problem is determined to be unsatisfiable, how can one be confident that a SAT solver has fully exhausted the search space? Traditionally, unsatisfiability results have been expressed using resolution or clausal proof systems. Resolution-based proofs contain perfect reconstruction information, but these proofs are extremely large and difficult to emit from a solver. Clausal proofs rely on rediscovery of inferences using a limited number of techniques, which typically takes several orders of magnitude longer than the solving time. Moreover, neither of these proof systems has been able to express contemporary solving techniques such as bounded variable addition. This combination of issues has left SAT solver authors unmotivated to produce proofs of unsatisfiability. The work from this dissertation focuses on validating satisfiability solver output in the unsatisfiability case. We developed a new clausal proof format called DRAT that facilitates compact proofs that are easier to emit and capable of expressing all contemporary solving and preprocessing techniques. Furthermore, we implemented a validation utility called DRAT-trim that is able to validate proofs in a time similar to that of the discovery time. The DRAT format has seen widespread adoption in the SAT community and the DRAT-trim utility was used to validate the results of the 2014 SAT Competition. DRAT-trim uses many advanced techniques to realize its performance gains, so why should the results of DRAT-trim be trusted? Mechanical verification enables users to model programs and algorithms and then prove their correctness with a proof assistant, such as ACL2. We designed a new modeling technique for ACL2 that combines efficient model execution with an agile and convenient theory. Finally, we used this new technique to construct a fast, mechanically-verified validation tool for proofs of unsatisfiability. This research allows SAT solver authors and users to have greater confidence in their results and applications by ensuring the validity of unsatisfiability results.
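An illustration of the core inference check behind clausal proof validation: a lemma has the RUP (reverse unit propagation) property if assuming all of its literals false lets unit propagation derive a conflict. DRAT additionally permits the RAT check and clause deletion, and DRAT-trim uses far more efficient data structures; this naive sketch covers only the RUP case.

```python
def unit_propagate(clauses, assignment):
    """Exhaustively apply the unit rule. `assignment` is a set of literals
    (signed ints) assumed true. Returns True if some clause is falsified."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                       # clause already satisfied
            unassigned = [lit for lit in clause
                          if lit not in assignment and -lit not in assignment]
            if not unassigned:
                return True                    # every literal false: conflict
            if len(unassigned) == 1:
                assignment.add(unassigned[0])  # unit clause forces this literal
                changed = True
    return False

def has_rup(clauses, lemma):
    """The lemma is a valid inference if negating it and propagating conflicts."""
    return unit_propagate(clauses, {-lit for lit in lemma})

# Toy formula {x1} and {-x1 or x2}: the lemma {x2} has the RUP property.
cnf = [[1], [-1, 2]]
print(has_rup(cnf, [2]))   # True
```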
57

Efficient and effective symbolic model checking

Iyer, Subramanian Krishnan 28 August 2008
Not available
58

Combining advanced formal hardware verification techniques

Reeber, Erik Henry, 1978- 29 August 2008
This dissertation combines formal verification techniques in an attempt to reduce the human effort required to verify large systems formally. One method to reduce the human effort required by formal verification is to modify general-purpose theorem proving techniques to increase the number of lemma instances considered automatically. Such a modification to the forward chaining proof technique within the ACL2 theorem prover is described. This dissertation identifies a decidable subclass of the ACL2 logic, the Subclass of Unrollable List Formulas in ACL2 (SUFLA). SUFLA is shown to be decidable, i.e., there exists an algorithm that decides whether any SUFLA formula is valid. Theorems from first-order logic can be proven through a methodology that combines interactive theorem proving with a fully-automated solver for SUFLA formulas. This methodology has been applied to the verification of components of the TRIPS processor, a prototype processor designed and fabricated by the University of Texas and IBM. Also, a fully-automated procedure for the Satisfiability Modulo Theories (SMT) theory of bit vectors is implemented by combining a solver for SUFLA formulas with the ACL2 theorem prover's general-purpose rewriting proof technique. A new methodology for combining theorem proving and model checking is presented, which uses a unique "black-box" formalization of hardware designs. This methodology has been used to combine the ACL2 theorem prover with IBM's SixthSense model checker and applied to the verification of a high-performance industrial multiplier design. A general-purpose mechanism has been created for adding external tools to a general-purpose theorem prover. This mechanism, implemented in the ACL2 theorem prover, is capable of supporting the combination of ACL2 with both SixthSense and the SAT-based SUFLA solver. A new hardware description language, DE2, is described. DE2 has a number of unique features geared towards simplifying formal verification, including a relatively simple formal semantics, support for the description of circuit generators, and support for embedding non-functional constructs within a hardware design. The composition of these techniques extends our knowledge of the languages and logics needed for formal verification and should reduce the human effort required to verify large hardware circuit models.
59

Generalization, lemma generation, and induction in ACL2

Erickson, John D., Ph. D. 29 August 2008
Formal verification is becoming a critical tool for designing software and hardware today. Rising complexity, along with software's pervasiveness in the global economy, has meant that errors are becoming more difficult to find and more costly to fix. Among the formal verification tools available today, theorem provers offer the ability to do the most complete verification of the most complex systems. However, theorem proving requires expert guidance and typically is too costly to be economical for all but the most mission-critical systems. Three major challenges to using a theorem prover are: finding generalizations, choosing the right induction scheme, and generating lemmas. In this dissertation we study all three of these in the context of the ACL2 theorem prover.
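A small illustration, in Python rather than ACL2, of why finding generalizations is hard yet essential: proving that a tail-recursive sum equals the simple recursive sum by induction only goes through after generalizing the goal over the accumulator, since the a = 0 instance alone gives too weak an induction hypothesis. The function names are invented for this example.

```python
def sum_simple(xs):
    """Straightforward recursion; the natural induction target."""
    return 0 if not xs else xs[0] + sum_simple(xs[1:])

def sum_tail(xs, acc):
    """Accumulator-passing version; the original goal is sum_tail(xs, 0)."""
    return acc if not xs else sum_tail(xs[1:], acc + xs[0])

# Generalized lemma that makes the induction work:
#     sum_tail(xs, a) == a + sum_simple(xs)   for every accumulator a.
# Sanity check on a few cases.
for xs in ([], [1], [1, 2, 3]):
    for a in (0, 5):
        assert sum_tail(xs, a) == a + sum_simple(xs)
```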
60

Behavioral Verification of Small Networks of State-Machines Built with Arduino-like Processors

Delfani, Parisa Unknown Date
No description available.
