41

Multi-Decision Policy and Policy Combinator Specifications

Giannakopoulos, Theophilos John, 21 February 2012
Margrave is a specification language and analysis tool for access control policies with semantics based in order-sorted logic. The clear logical roots of Margrave's semantics make policies specified in the Margrave language both machine-analyzable and relatively easy for users to reason about. However, the decision conflict resolution declaration and policy set features of Margrave do not have semantics that are as cleanly rooted in order-sorted logic as Margrave policies and queries are. Additionally, the current semantics of decision conflict resolution declarations and of policy sets do not permit users to take full advantage of the multi-decision capabilities of Margrave policies. The purposes of this thesis are (i) to provide a unified extension to the semantics for policies and policy combination, (ii) to cleanly support decision conflict resolution mechanisms in a general way within those semantics, and (iii) to provide insight into the properties of policy combination and decision conflict resolution for multi-decision policies in general. These goals are achieved via the realization that policy combinators may be treated as policies operating within environments extended with the results of the policies to be combined, allowing policy combinators to be defined as if they were policies. The ability to treat policy combinators as policies means that users' current understanding of policies can be easily extended to policy combinators. Additionally, the tools that Margrave has for supporting policies can be leveraged as the Margrave language and analysis tool grow to provide fuller support for policy combination and rule conflict resolution declarations.
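
To make the central realization concrete, here is a minimal Python sketch, assuming a toy representation in which a multi-decision policy is a function from a request environment to a set of decisions and a combinator is evaluated exactly like a policy over an environment extended with the sub-policies' results. The names below (permit_admins, deny_overrides, the sub_results key) are illustrative inventions, not Margrave syntax or semantics.

```python
# Illustrative model only, not Margrave: a multi-decision policy maps a
# request environment to a set of decisions.

def permit_admins(env):
    return {"permit"} if env.get("role") == "admin" else {"deny"}

def log_writes(env):
    return {"log"} if env.get("action") == "write" else set()

def deny_overrides(env):
    # A combinator treated as a policy: it reads the decisions of the
    # combined policies from its (extended) environment.
    merged = set().union(*env["sub_results"])
    return {"deny"} if "deny" in merged else merged

def combine(combinator, policies, request):
    # Extend the request environment with the sub-policies' results,
    # then evaluate the combinator as if it were an ordinary policy.
    env = dict(request, sub_results=[p(request) for p in policies])
    return combinator(env)

print(combine(deny_overrides, [permit_admins, log_writes],
              {"role": "user", "action": "write"}))   # -> {'deny'}
```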
42

Automated reasoning in quantified modal and temporal logics

Castellini, Claudio, January 2005
This thesis is about automated reasoning in quantified modal and temporal logics, with an application to formal methods. Quantified modal and temporal logics are extensions of classical first-order logic in which the notion of truth is extended to take into account its necessity or, equivalently, in the temporal setting, its persistence through time. Due to their high complexity, these logics are less widely known and studied than their propositional counterparts. Moreover, little is known so far about their mechanisability and usefulness for formal methods. The relevant contributions of this thesis are threefold: firstly, we devise a sound and complete set of sequent calculi for quantified modal logics; secondly, we extend the approach to the quantified temporal logic of linear, discrete time and develop a framework for doing automated reasoning via Proof Planning in it; thirdly, we show a set of experimental results obtained by applying the framework to the problem of Feature Interactions in telecommunication systems. These results indicate that (a) the problem can be concisely and effectively modelled in the aforementioned logic, (b) proof planning actually captures common structures in the related proofs, and (c) the approach is also viable from the point of view of efficiency.
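
For orientation, a representative labelled sequent rule for the box modality is sketched below in its standard textbook form; this illustrates the genre of calculus involved, and is not necessarily one of the rules devised in the thesis.

```latex
% Standard labelled right rule for \Box (illustrative only): to prove
% that A holds at all worlds accessible from x, prove A at a fresh
% world y assumed accessible from x.
\[
\frac{\Gamma,\; x\,R\,y \;\Longrightarrow\; y{:}A,\; \Delta}
     {\Gamma \;\Longrightarrow\; x{:}\Box A,\; \Delta}
\;(\Box R),
\qquad y \text{ not occurring in the conclusion}
\]
```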
43

Formal Models and Refinement for Graphical User Interface Design

Bowen, Judith Alyson, January 2008
Formal approaches to software development require that we correctly describe (or specify) systems in order to prove properties about our proposed solution prior to building it. We must then follow a rigorous process to transform our specification into an implementation to ensure that the properties we have proved are retained. When we design and build the user interfaces of our systems we are similarly keen to ensure that they have certain properties before we build them. For example, do they satisfy the requirements of the user? Are they designed with known good design principles and usability considerations in mind? User-centred design approaches, which incorporate many different techniques that we may consider informal, seek to address these issues so that the UIs we build are designed around the needs and capabilities of real users. Both formal methods and user-centred design are important and beneficial in the development of underlying system functionality and user interfaces respectively. Given this, we would like to be able to use both approaches in one integrated software development process. Their differences, however, make this a challenging objective. In this thesis we present a solution to this problem by describing models and techniques which provide a bridge between the existing work of user-centred design practitioners and formal methods practitioners, enabling us to incorporate (representations of) informal design artefacts into a formal software development process. We then use these models as the basis for a refinement theory for user interfaces which allows interface designers to retain their informal design methods whilst providing an underlying theory grounded in the trace refinement theory of the Microcharts language.
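
A small Python sketch suggests the flavour of such bridging models, assuming a simplified presentation-model-like structure in which each widget carries a category and the behaviours it exposes; the names, categories, and adequacy check are illustrative assumptions, not the thesis's actual models or refinement conditions.

```python
# Illustrative only: a UI description that a formal check can consume.
from dataclasses import dataclass

@dataclass(frozen=True)
class Widget:
    name: str
    category: str          # e.g. an action control or an entry field
    behaviours: frozenset  # operations the widget makes available

login_dialog = [
    Widget("userField", "Entry", frozenset({"S_enterUser"})),
    Widget("passField", "Entry", frozenset({"S_enterPass"})),
    Widget("okButton", "ActionControl", frozenset({"S_login"})),
]

def offered(model):
    """All behaviours this interface makes available to the user."""
    return frozenset().union(*(w.behaviours for w in model))

# A toy adequacy condition: the interface must offer every behaviour
# the specification requires at this point of the interaction.
required = frozenset({"S_enterUser", "S_enterPass", "S_login"})
assert required <= offered(login_dialog)
```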
44

Analysing Fault Tolerance for Erlang Applications

Nyström, Jan Henry, January 2009
ERLANG is a concurrent functional language, well suited for distributed, highly concurrent and fault-tolerant software. An important part of ERLANG is its support for failure recovery. Fault tolerance is provided by organising the processes of an ERLANG application into tree structures. In these structures, parent processes monitor failures of their children and are responsible for their restart. Libraries support the creation of such structures during system initialisation. A technique is presented to automatically analyse the process structure of an ERLANG application from its source code. The analysis exposes shortcomings in the fault tolerance properties of the application. First, the process structure is extracted through static analysis of the initialisation code of the application. Thereafter, analysis of the process structure checks two important properties of the fault handling mechanism: 1) that it will recover from any process failure, and 2) that it will not hide persistent errors. The technique has been implemented in a tool and applied to several OTP library applications and to a subsystem of a commercial system, the AXD 301 ATM switch. The static analysis of the ERLANG source code is achieved through symbolic evaluation. The evaluation is performed according to an abstraction of ERLANG's actual semantics. The actual semantics is formalised for a nontrivial part of the language, and it is proven that the abstraction of the semantics simulates the actual semantics. / ASTEC
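
The kind of property checked can be illustrated with a small Python sketch (an invented example, not the thesis's tool): given an extracted process structure, report any worker whose parent is not a supervisor, since a failure of such a process has no restart mechanism above it.

```python
# Illustrative check on an extracted process structure: every worker
# must sit directly under a supervisor so that its failure is handled.

tree = {  # supervisor -> children; names without entries are workers
    "app_sup":  ["conn_sup", "logger"],
    "conn_sup": ["conn_worker_1", "conn_worker_2"],
}
supervisors = set(tree)

def unsupervised(tree, root):
    bad, stack = [], [(root, None)]
    while stack:
        proc, parent = stack.pop()
        if proc not in supervisors and parent not in supervisors:
            bad.append(proc)  # worker with no supervising parent
        stack.extend((child, proc) for child in tree.get(proc, []))
    return bad

print(unsupervised(tree, "app_sup"))  # [] -> every worker is supervised
```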
45

Approximation and Refinement Techniques for Hard Model-checking Problems

Bobaru, Mihaela, 15 July 2009
Formal verification by model checking verifies whether a system satisfies some given correctness properties, and is intractable in general. We focus on several problems originating from the usage of model checking and from the inherent complexity of model checking itself. We propose approximation and iterative refinement techniques and demonstrate that they help in making these problems tractable in practical cases. Vacuity detection is one of the problems, relating to the trivial satisfaction of properties. A similar problem is query solving, useful in model exploration, when properties of a system are not fully known and are to be discovered rather than checked. Both of these problems have solution spaces structured as lattices and can be solved by model checking using those lattices. The lattices, in the most general formulation of these problems, are too complex to be implemented efficiently. We introduce a general approximation framework for model checking with lattices and instantiate this framework for the two problems, leading to algorithms and implementations that can efficiently obtain partial answers to the problems. We also introduce refinement techniques that consider incrementally larger lattices and compute even the partial answers gradually, to further abate the size explosion of the problems. Another problem we consider is the state-space explosion of model checking. The size of system models is exponential in the number of state variables, which renders model checking intractable. We consider systems composed of several components running concurrently. For such systems, compositional verification checks components individually to avoid composing an entire system. Model checking an individual component uses assumptions about the other components. Smaller assumptions lead to smaller verification problems. We introduce iterative refinement techniques that improve the assumptions generated by previous automated approaches. One technique incrementally refines the interfaces between components in order to obtain smaller assumptions that are sufficient to prove a given property. The smaller assumptions are approximations of the assumption that would be obtained without our interface refinement. Another technique computes assumptions as abstractions of components, as an alternative to current approaches that learn assumptions from counterexamples. Our abstraction refinement has the potential to compute smaller nondeterministic assumptions, in contrast to the deterministic assumptions learned by current approaches. We confirm experimentally the benefits of our new approximation and refinement techniques.
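
The assumption-based compositional setting rests on rules of the classical (non-circular) assume-guarantee form sketched below; the precise formulations used in the thesis may differ.

```latex
% Classical assume-guarantee rule: if M1 satisfies P under assumption A,
% and M2 unconditionally satisfies A, then the composition satisfies P.
\[
\frac{\;\langle A \rangle\, M_1\, \langle P \rangle
      \qquad
      \langle \mathit{true} \rangle\, M_2\, \langle A \rangle\;}
     {\langle \mathit{true} \rangle\, M_1 \parallel M_2\, \langle P \rangle}
\]
% A smaller assumption A that still discharges both premises yields
% smaller model-checking problems than composing M1 and M2 outright.
```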
46

Predicative Quantum Programming

Tafliovich, Anya, 01 September 2010
This work presents Quantum Predicative Programming, a theory of quantum programming that encompasses many aspects of quantum computation and quantum communication. The theory provides a methodology to specify, implement, and analyse quantum algorithms, the paradigm of quantum non-locality, quantum pseudotelepathy games, computing with mixed states, and quantum communication protocols that use both quantum and classical communication channels.
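
For context, predicative programming (the classical theory this work extends to the quantum setting) treats a specification as a predicate over initial and final states, with refinement as implication; schematically:

```latex
% Predicative refinement, stated schematically: a program P implements
% a specification S exactly when P implies S over all state pairs.
\[
S \;\sqsubseteq\; P
\quad\iff\quad
\forall \sigma, \sigma' \,\cdot\, (P \Rightarrow S)
\]
% Programs are then simply those specifications that are executable.
```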
47

Formal Methods in Computer-aided Design

Mangassarian, Hratch, 30 August 2012
The VLSI CAD flow encompasses an abundance of critical NP-complete and PSPACE-complete problems. Instead of developing a dedicated algorithm for each, the trend during the last decade has been to encode them in formal languages, such as Boolean satisfiability (SAT) and quantified Boolean formulas (QBFs), and focus academic resources on improving SAT and QBF solvers. The significant progress of these solvers has validated this strategy. This dissertation contributes to the further advancement of formal techniques in CAD. Today, the verification and debugging of increasingly complex RTL designs can consume up to 70% of the VLSI design cycle. In particular, RTL debug is a manual, resource-intensive task in the industry. The first contribution of this thesis is an in-depth examination of the factors affecting the theoretical computational complexity of debugging. It is established that most variations of the debugging problem are NP-complete. Automated debugging tools return all potential error sources in the RTL, called solutions, that can explain a given failing error trace. Finding each solution requires a separate call to a formal engine, which is computationally expensive. The second contribution of this dissertation comprises techniques for reducing the number of such iterations, by leveraging dominance relationships between RTL blocks to imply solutions. Extensive experiments on industrial designs show a three-fold reduction in the number of formal engine calls due to solution implications, resulting in a 1.64x overall speed-up. The third contribution aims to advance the state-of-the-art of QBF solvers, whose progress has not been as impressive as that of SAT solvers. We present a framework for using complete dominators to preprocess and reduce QBFs with an inherent circuit structure, which is common in encodings of PSPACE-complete CAD problems. Experiments show that three modern QBF solvers together solve 55% of preprocessed QBF instances, compared to none without preprocessing. The final contribution consists of a series of QBF encodings for evaluating the reconfigurability of partially programmable circuits (PPCs). The metrics of fault tolerance, design error tolerance and engineering change coverage are defined for PPCs and encoded using QBFs. These formulations along with experimental results demonstrate the theoretical and practical appropriateness of QBFs for dealing with reconfigurability.
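
Because complete dominators drive the QBF preprocessing described above, a small Python sketch of the underlying notion may help: in a single-output circuit, gate d dominates gate g when every path from g to the primary output passes through d. This illustrates the concept only; it is not the thesis's preprocessor.

```python
# Dominators of each gate in a single-output circuit, computed by the
# simple dataflow identity: dom(g) = {g} ∪ ⋂ dom(s) over g's fanouts.

def dominators(fanouts, output):
    order, seen = [], set()

    def visit(g):                       # order gates so that every gate
        if g in seen:                   # appears after all its fanouts
            return
        seen.add(g)
        for s in fanouts.get(g, []):
            visit(s)
        order.append(g)

    for g in fanouts:
        visit(g)

    dom = {}
    for g in order:                     # output first, inputs last
        if g == output:
            dom[g] = {g}
        else:
            dom[g] = {g} | set.intersection(*(dom[s] for s in fanouts[g]))
    return dom

# Tiny example: a and b feed g1; g1 and c feed the output gate.
fanouts = {"a": ["g1"], "b": ["g1"], "c": ["out"], "g1": ["out"], "out": []}
print(dominators(fanouts, "out")["a"])  # {'a', 'g1', 'out'}
```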
48

Scaling SAT-based Automated Design Debugging with Formal Methods

Keng, Brian, 12 February 2010
The size and complexity of modern VLSI computer chips are growing at a rapid pace. Functional debugging is increasingly becoming a bottleneck in the design flow, where it can take up to 60% of the total verification time. Scaling existing automated debugging tools is necessary in order to continue along this path of rapid growth and innovation in the semiconductor industry. This thesis aims to scale automated debugging techniques with two contributions. The first contribution introduces a succinct memory model for automated design debugging that dramatically lowers the memory requirements of the debugging problem. The second contribution presents a scalable SAT-based design debugging algorithm that uses a mathematical technique called interpolation to divide the debugging problem into multiple parts across time, which greatly reduces the peak memory requirements of the debugging problem. Extensive experiments on real designs demonstrate the benefit of this work.
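
The technique named here is presumably Craig interpolation, the standard notion in SAT-based verification; schematically, for an unsatisfiable conjunction:

```latex
% Craig interpolation: if A ∧ B is unsatisfiable, there exists an
% interpolant I over the variables common to A and B such that
\[
A \Rightarrow I,
\qquad
I \wedge B \ \text{is unsatisfiable},
\qquad
\mathrm{vars}(I) \subseteq \mathrm{vars}(A) \cap \mathrm{vars}(B).
\]
% Over a time-partitioned unrolling of the design, I summarizes the
% earlier time frames so later windows can be analysed separately.
```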
