531. Abstract satisfaction. Haller, Leopold Carl Robert, January 2013.
This dissertation shows that satisfiability procedures are abstract interpreters. This insight provides a unified view of program analysis and satisfiability solving and enables technology transfer between the two fields. The framework underlying these developments provides systematic recipes that show how intuition from satisfiability solvers can be lifted to program analyzers, how approximation techniques from program analyzers can be integrated into satisfiability procedures, and how program analyzers and satisfiability solvers can be combined. Based on this work, we have developed new tools for checking program correctness and for solving satisfiability of quantifier-free first-order formulas. These tools outperform existing approaches.

We introduce abstract satisfaction, an algebraic framework for applying abstract interpretation to obtain sound, but potentially incomplete, satisfiability procedures. The framework allows the operation of satisfiability procedures to be understood in terms of fixed point computations involving deduction and abduction transformers on lattices. It also enables satisfiability solving and program correctness to be viewed as the same algebraic problem. Using abstract satisfaction, we show that a number of satisfiability procedures can be understood as abstract interpreters, including Boolean constraint propagation, the DPLL and CDCL algorithms, Stålmarck's procedure, the DPLL(T) framework, and solvers based on congruence closure and the Bellman-Ford algorithm. Our work leads to a novel understanding of satisfiability architectures as refinement procedures for abstract analyses and allows us to relate these procedures to independent developments in program analysis.

We use this perspective to develop Abstract Conflict-Driven Clause Learning (ACDCL), a rigorous, lattice-based generalization of CDCL, the central algorithm of modern satisfiability research. The ACDCL framework provides a solution to the open problem of lifting CDCL to new problem domains and can be instantiated over many lattices that occur in practice. We provide soundness and completeness arguments for ACDCL that apply to all such instantiations.

We evaluate the effectiveness of ACDCL by investigating two practical instantiations: FP-ACDCL, a satisfiability procedure for the first-order theory of floating-point arithmetic, and CDFPL, an interval-based program analyzer that uses CDCL-style learning to improve the precision of a program analysis. FP-ACDCL is faster than competing approaches in 80% of our benchmarks and is faster by more than an order of magnitude in 60% of the benchmarks. Out of 33 safe programs, CDFPL proves 16 more programs correct than a mature interval analysis tool and can conclusively determine the presence of errors in 24 unsafe benchmarks. Compared to bounded model checking, CDFPL is on average at least 260 times faster on our benchmark set.
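
As a hedged illustration of the fixed-point view described above, the sketch below writes Boolean constraint propagation as iteration of a deduction transformer over partial assignments until a fixed point or a conflict is reached. It is not code from the dissertation; the clause encoding and function names are our own.

```python
# Hedged sketch (not from the thesis): Boolean constraint propagation viewed as a
# fixed-point computation over the lattice of partial assignments, the simplest
# instance of the "satisfiability procedure as abstract interpreter" view.
# Clauses are lists of literals; a literal is (var, polarity).

def unit_propagate(clauses, assignment):
    """Apply the deduction transformer until a fixed point is reached.

    `assignment` maps variables to True/False; unassigned variables are absent.
    Returns the strengthened assignment, or None if a conflict (bottom) is found.
    """
    assignment = dict(assignment)
    changed = True
    while changed:                      # iterate the transformer to a fixed point
        changed = False
        for clause in clauses:
            unassigned = []
            satisfied = False
            for var, polarity in clause:
                if var in assignment:
                    if assignment[var] == polarity:
                        satisfied = True
                        break
                else:
                    unassigned.append((var, polarity))
            if satisfied:
                continue
            if not unassigned:          # all literals false: conflict (bottom element)
                return None
            if len(unassigned) == 1:    # unit clause: deduction strengthens the element
                var, polarity = unassigned[0]
                assignment[var] = polarity
                changed = True
    return assignment

# Example: (x1 or x2) and (not x1) forces x2 = True.
clauses = [[(1, True), (2, True)], [(1, False)]]
print(unit_propagate(clauses, {}))      # {1: False, 2: True}
```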

532. Inclusion problems for one-counter systems. Totzke, Patrick, January 2014.
We study the decidability and complexity of verification problems for infinite-state systems. A fundamental question in formal verification is whether the behaviour of one process is reproducible by another. This inclusion problem can be studied for various models of computation and behavioural preorders, and it is generally intractable, or even undecidable, even for very limited computational models. The aim of this work is to clarify the status of the decidability and complexity of some well-known inclusion problems for suitably restricted computational models. In particular, we address the problems of checking strong and weak simulation and trace inclusion for processes definable by one-counter automata (OCA), which consist of a finite control and a single counter ranging over the non-negative integers. We take special interest in the subclass of one-counter nets (OCNs), which cannot fully test the counter for zero and are subsumed by both pushdown automata and Petri nets / vector addition systems. Our new results include the PSPACE-completeness of strong and weak simulation, and the undecidability of trace inclusion for OCNs. Moreover, we consider semantic preorders between OCA/OCN and finite systems and close some gaps regarding their complexity. Finally, we study deterministic processes, for which simulation and trace inclusion coincide.
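
To make the model concrete, here is a small, hedged sketch of a one-counter automaton step relation; the transition encoding is our own and not taken from the thesis. Disabling the zero test yields the one-counter-net restriction discussed above.

```python
# Hedged illustration (not from the thesis): a minimal one-counter automaton.
# A configuration is (control state, counter value); transitions may increment,
# decrement, or (for full OCA only) test the counter for zero. One-counter nets
# drop the zero test, which is what makes them a subclass of Petri nets.

from collections import namedtuple

Transition = namedtuple("Transition", "src label effect dst")  # effect in {+1, 0, -1}

def successors(transitions, config, allow_zero_test=True):
    """Enumerate successor configurations of (state, counter)."""
    state, counter = config
    for t in transitions:
        if t.src != state:
            continue
        if t.label == "zero?" and not allow_zero_test:
            continue                       # an OCN cannot branch on counter == 0
        if t.label == "zero?" and counter != 0:
            continue
        new_counter = counter + t.effect
        if new_counter >= 0:               # the counter ranges over non-negative integers
            yield (t.dst, new_counter)

# Example: a loop that pumps the counter up, then empties it.
ts = [Transition("p", "a", +1, "p"),
      Transition("p", "b", -1, "q"),
      Transition("q", "b", -1, "q")]
print(list(successors(ts, ("p", 2))))      # [('p', 3), ('q', 1)]
```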

533. Mapping parallel graph algorithms to throughput-oriented architectures. McLaughlin, Adam, 07 January 2016.
The stagnant performance of single-core processors, the increasing size of data sets, and the variety of structure in information have made the domain of parallel and high-performance computing especially crucial. Graphics Processing Units (GPUs) have recently become an exciting alternative to traditional CPU architectures for applications in this domain. Although GPUs are designed for rendering graphics, research has found that the GPU architecture is well suited to algorithms that search and analyze unstructured, graph-based data, offering up to an order of magnitude more memory bandwidth than their CPU counterparts.
This thesis focuses on GPU graph analysis from the perspective that algorithms should be efficient on as many classes of graphs as possible, rather than being specialized to a specific class, such as social networks or road networks. Using betweenness centrality, a popular analytic used to find prominent entities of a network, as a motivating example, we show how parallelism, distributed computing, hybrid and on-line algorithms, and dynamic algorithms can all contribute to substantial improvements in the performance and energy-efficiency of these computations. We further generalize this approach and provide an abstraction that can be applied to a whole class of graph algorithms that require many simultaneous breadth-first searches. Finally, to show that our findings can be applied in real-world scenarios, we apply these techniques to the problem of verifying that a multiprocessor complies with its memory consistency model.
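
As a point of reference for the "many simultaneous breadth-first searches" abstraction, the sketch below is a plain sequential Brandes-style betweenness centrality computation; it is illustrative only and is not the GPU implementation developed in the thesis.

```python
# Hedged sketch (not the thesis' GPU code): Brandes-style betweenness centrality
# for an unweighted graph. Each source vertex contributes one breadth-first search;
# the GPU work described above parallelises many such searches at once.

from collections import deque

def betweenness_centrality(adj):
    """adj: dict mapping each vertex to a list of neighbours (undirected graph)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, recording shortest-path counts and predecessors.
        dist = {v: -1 for v in adj}
        sigma = {v: 0 for v in adj}
        preds = {v: [] for v in adj}
        dist[s], sigma[s] = 0, 1
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Dependency accumulation in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Example: a path a-b-c; only b lies on a shortest path between distinct endpoints.
# Output: {'a': 0.0, 'b': 2.0, 'c': 0.0} (each unordered pair is counted in both directions).
print(betweenness_centrality({"a": ["b"], "b": ["a", "c"], "c": ["b"]}))
```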

534. Reliable computation of invariant dynamics for conservative discrete dynamical systems. James, Jason Desmond, 25 August 2010.
Computing reliable numerical approximations of invariant sets for nonlinear systems is the core problem for computer-assisted study of dynamical systems. In the case of conservative systems, the problem is complicated by the fact that there is no phase-space dissipation to drive orbits onto attractors. In this dissertation we discuss several contributions to the field of computer-assisted study of invariant dynamics in conservative systems.
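
For readers unfamiliar with the setting, the following hedged example iterates the Chirikov standard map, a classic area-preserving (conservative) discrete dynamical system; it is not taken from the dissertation, but it illustrates why orbits of conservative maps do not settle onto attractors.

```python
# Hedged illustration (not from the dissertation): the Chirikov standard map, an
# area-preserving (conservative) discrete dynamical system. Because the map
# preserves area, orbits never collapse onto an attractor, which is exactly why
# invariant sets of conservative systems must be computed directly.

import math

def standard_map(theta, p, k=0.9):
    """One iterate of the standard map on the torus [0, 2*pi) x [0, 2*pi)."""
    p_next = (p + k * math.sin(theta)) % (2 * math.pi)
    theta_next = (theta + p_next) % (2 * math.pi)
    return theta_next, p_next

# Iterate one orbit; successive points wander along an invariant curve or a
# chaotic region instead of converging to a fixed point.
theta, p = 0.5, 0.2
orbit = []
for _ in range(5):
    theta, p = standard_map(theta, p)
    orbit.append((round(theta, 3), round(p, 3)))
print(orbit)
```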

535. The dilemma of the gift registry: how social closeness intensifies it. Ward, Morgan Kraft, 01 October 2010.
When choosing a gift, the gift-giver has three distinct but interdependent goals: the item must 1) satisfy the recipient, 2) be self-reflective for the gift-giver, and 3) indicate the nature of the relationship between the giver and the recipient. However, these goals are often mutually exclusive, making it infeasible for the giver to meet both his/her own and the recipient's needs with his/her gift choices. In both essays we examine how social closeness between the giver and the recipient, an important moderator, influences givers' prioritization of these goals.
In Essay 1, we constrain givers to choose from a gift registry and posit that purchasing an identity-incongruent product can threaten an individual’s identity, particularly when purchasing for a close (vs. distant) friend who is an integral part of the self. Five experiments in the context of gift registry show that givers choosing identity-incongruent gifts for a close (vs. distant) friend experience an identity threat and seek to re-establish their shaken identities by endorsing the threatened identity and choosing identity-expressive products in subsequent decisions.
In Essay 2 we loosen the constraints of the gift choice and allow givers to choose or reject the gift registry. Our main hypothesis is that when choosing for a close (vs. distant) friend, the giver will discount the recipient's explicit preferences in favor of a gift that signals the giver's identity or the relationship between them. However, prior research indicates that close friends choose inaccurately for one another because they conflate their own preferences with those of the recipient. Thus we suggest that since givers are more likely to make a free choice (vs. registry choice) for a close (vs. distant) friend, they face an increased likelihood of choosing a less desirable gift for their close friends.

536. Verifying Absence of ∞ Loops in Parameterized Protocols. Saksena, Mayank, January 2008.
The complex behavior of computer systems offers many challenges for formal verification. The analysis quickly becomes difficult as the number of participating processes increases.

A parameterized system is a family of systems parameterized on a number n, typically representing the number of participating processes. The uniform verification problem, checking whether a property holds for each instance, is an infinite-state problem. The automated analysis of parameterized and infinite-state systems has been the subject of research over the last 15–20 years. Much of the work has focused on safety properties. Progress in the verification of liveness properties has been slow, as it is more difficult in general.

In this thesis, we consider verification of parameterized and infinite-state systems, with an emphasis on liveness, in the verification framework called regular model checking (RMC). In RMC, states are represented as words, sets of states as regular expressions, and the transition relation as a regular relation.

We extend the automata-theoretic approach to RMC. We define a specification logic sufficiently strong to specify systems representable using RMC, as well as linear temporal logic properties of such systems, and provide an automatic translation from a specification into an analyzable model.

We develop acceleration techniques for RMC which allow more uniform and automatic verification than before, with greater power. Using these techniques, we succeed in verifying safety and liveness properties of parameterized protocols from the literature.

We present a novel reachability-based verification method for liveness, in a general setting. We implement the method for RMC, with promising results.

Finally, we develop a framework for the verification of dynamic networks based on graph transformation, which generalizes the systems representable in RMC. In this framework we verify the latest version of the DYMO routing protocol, currently being considered for standardization by the IETF.
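
As an informal illustration of the RMC encoding (states as words, transitions as a rewriting relation), the sketch below enumerates the reachable configurations of a token-passing ring. It is not from the thesis: a real RMC implementation represents state sets as automata and the relation as a transducer, which this toy example deliberately elides.

```python
# Hedged sketch (not from the thesis): the regular model checking view of a
# token-passing protocol. A state of the n-process instance is a word over
# {"T", "N"} with one token "T"; the transition relation rewrites "TN" -> "NT",
# i.e. passes the token to the right neighbour.

def successors(word):
    """All one-step successors of a configuration word under TN -> NT."""
    out = set()
    for i in range(len(word) - 1):
        if word[i] == "T" and word[i + 1] == "N":
            out.add(word[:i] + "NT" + word[i + 2:])
    return out

def reachable(initial):
    """Fixed point of the post-image, starting from a finite set of words."""
    seen, frontier = set(initial), set(initial)
    while frontier:
        frontier = {s for w in frontier for s in successors(w)} - seen
        seen |= frontier
    return seen

# For the 4-process instance, the token can reach any position to its right.
print(sorted(reachable({"TNNN"})))   # ['NNNT', 'NNTN', 'NTNN', 'TNNN']
```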

537. Evaluating the DBH Verification Method to Complex Buildings Designed According to New Zealand Compliance Documents C/AS1. Han, Yuzhuo, January 2011.
Performance-based fire engineering design is becoming a more common practice for the fire safety design of large, complex buildings and the modification of existing buildings. However, differing engineering assumptions and ambiguous acceptance criteria not only lead to inconsistent levels of safety, but also make the Building Consent process inefficient and can result in expensive appeals.
In August 2006 the New Zealand Department of Building and Housing (DBH) began developing a Verification Method (C/VM2) for demonstrating compliance with the Fire Safety requirements of the New Zealand Building Code (the C Clauses).
This research evaluated the proposed C/VM2 on four complex buildings: a multi-level night club, a hospital, a shopping mall, and a retail warehouse. It showed that C/VM2 successfully provides systematic and less ambiguous guidance for future performance-based fire safety designs. However, continued analysis and development are necessary, as a solely deterministic method may not be the best solution. It is suggested that a risk-based concept be incorporated into the next generation of C/VM2.

538. Conquering Variability for Robust and Low Power Designs. Sun, Jin, January 2011.
As device feature sizes shrink to the nanometer scale, continuous technology scaling has led to a large increase in parameter variability during the semiconductor manufacturing process. According to the source of uncertainty, parameter variations can be classified into three categories: process variations, environmental variations, and temporal variations. All of these variation sources exert a significant influence on circuit performance and make it more challenging to characterize parameter variability and achieve robust, low-power designs. The scope of this dissertation is conquering parameter variability and successfully designing efficient yet robust integrated circuit (IC) systems. Previous experience indicates that this issue needs to be tackled at every design stage of IC chips. In this dissertation, we propose several robust techniques for accurate variability characterization and efficient performance prediction under parameter variations. At the pre-silicon verification stage, we develop a robust yield prediction scheme under limited descriptions of parameter uncertainties, a robust circuit performance prediction methodology based on the importance of uncertainties, and a robust gate sizing framework using an ElasticR estimation model. These techniques provide possible solutions for achieving both prediction accuracy and computational efficiency in the early design stages. At the on-line validation stage, we propose a dynamic workload balancing framework and an on-line self-tuning design methodology for application-specific multi-core systems under variability-induced aging effects. These on-line validation techniques help alleviate device performance degradation due to parameter variations and extend device lifetime.
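
As a generic, hedged illustration of yield prediction under parameter variations (not the dissertation's scheme), the sketch below estimates parametric timing yield by Monte Carlo sampling of per-gate delay variations; all numerical values are invented for the example.

```python
# Hedged illustration (not the dissertation's method): plain Monte Carlo yield
# estimation under process variations. Gate delays are drawn from a normal
# distribution; the parametric yield is the fraction of sampled circuits whose
# critical-path delay meets the timing constraint.

import random

def estimate_yield(nominal_delays, sigma, t_max, n_samples=10_000, seed=0):
    """nominal_delays: per-gate nominal delays along the critical path (ns)."""
    rng = random.Random(seed)
    passing = 0
    for _ in range(n_samples):
        # Each gate's delay varies independently around its nominal value.
        path_delay = sum(max(0.0, rng.gauss(d, sigma)) for d in nominal_delays)
        if path_delay <= t_max:
            passing += 1
    return passing / n_samples

# Example: a 5-gate path, 0.2 ns nominal per gate, 0.02 ns sigma, 1.05 ns budget.
print(estimate_yield([0.2] * 5, sigma=0.02, t_max=1.05))
```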

539. BSP-Why, a tool for the deductive verification of BSP programs: machine-checked semantics and application to distributed state-space algorithms. Fortin, Jean, 14 October 2013.
This thesis contributes to the formal verification of parallel programs. The aim of formal verification is to ensure that a program will run as it should, without errors, deadlocks, or abnormal termination. This is all the more important in parallel computing, where the cost of calculations can be very high. The BSP model (Bulk Synchronous Parallelism) is a model of parallelism well suited to the use of formal methods. It guarantees a structure in the parallel program by organising it into super-steps, each consisting of a phase of computation followed by communications between the processes.

In this thesis, we chose to extend an existing tool and adapt it to the proof of BSP programs. We built on Why, a verification condition generator (VCG) that has the advantage of interfacing with several automatic provers and proof assistants to discharge the proof obligations.

This thesis makes several contributions. First, we present a comparison of existing BSP libraries in order to identify the most commonly used BSP primitives, which are the most interesting to formalise. We then present BSP-Why, our tool for the proof of BSP programs. The tool generates a sequential program that simulates the input parallel program, thus allowing the use of Why and the numerous associated provers to discharge the proof obligations. We then show how BSP-Why can be used to prove the correctness of some basic BSP algorithms, as well as a more complex example: the generation of the state space (model checking) of systems, especially security protocols. Finally, in order to ensure the greatest confidence in the BSP-Why tool, we formalise the language semantics in the Coq proof assistant and prove the correctness of the transformation used to go from a parallel program to a sequential one.
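
To illustrate the super-step structure that the BSP model imposes, here is a hedged, purely sequential simulation of BSP super-steps; it is not the BSP-Why tool or its Why encoding, and the function names are our own.

```python
# Hedged sketch (not BSP-Why itself): the super-step structure of the BSP model.
# Each super-step is a local computation phase on every process, followed by a
# communication phase and a barrier. The "processes" are simulated sequentially,
# which mirrors the sequential-simulation idea mentioned in the abstract.

def bsp_run(n_procs, n_steps, local_step):
    """local_step(pid, step, state, inbox) -> (new_state, list of (dest, msg))."""
    states = [0] * n_procs
    inboxes = [[] for _ in range(n_procs)]
    for step in range(n_steps):
        outboxes = [[] for _ in range(n_procs)]
        # Computation phase: every process works on its own state and inbox.
        for pid in range(n_procs):
            states[pid], messages = local_step(pid, step, states[pid], inboxes[pid])
            for dest, msg in messages:
                outboxes[dest].append(msg)
        # Communication phase + barrier: messages become visible next super-step.
        inboxes = outboxes
    return states

# Example: each process adds its pid plus received values, then sends its total right.
def step(pid, step_no, state, inbox):
    total = state + pid + sum(inbox)
    return total, [((pid + 1) % 4, total)]

print(bsp_run(4, 3, step))   # [11, 6, 9, 16]
```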

540. A Formalization of an Extended Object Model Using Views. Nova, Luis, January 2000.
Reuse of software designs, experience, and components is essential to making substantial improvements in software productivity, development cost, and quality. However, the many facets of reuse are still rarely exploited in the various phases of the software development lifecycle, because of a lack of adequate theories, processes, and tools to support consistent application of reuse concepts. There is a need for approaches, including definitions, models, and properties of reuse, that would provide explicit guidance to a software development team in applying reuse. In particular, there is a need for abstractions that clearly separate the various functional concerns addressed in a software system. Separating concerns simplifies the identification of the software components that can benefit from reuse and can provide guidance on how reuse may be applied.

In this thesis we present an extended model related to the separation of concerns in object-oriented design. The model, called views, indicates how an object-oriented design can be clearly separated into objects and their corresponding interfaces. In this model, objects can be designed so that they are independent of their environment, because adaptation to the environment is the responsibility of the interface, or view. The view can be seen as expressing the semantics of the 'glue' that joins components or objects together to create a software system. Informal versions of the views model have already been successfully applied to operational and commercial software systems.

The objective of this thesis is to provide the views notion with a theoretical foundation that addresses reuse and separation of concerns. After clearly defining the views model, we show the formal approach to combining the objects, interfaces (views), and their interconnections into a complete software system. The objects and interfaces are defined using an object calculus based on temporal logic, while the interconnections among objects and views are specified using category theory. This formal framework provides the mathematical foundation to support the verification of the properties of both the components and the composite software system. We then show how verification can be mechanized by converting the formal version of the views model into higher-order logic and using PVS to support mechanical proofs.
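
As an informal, hedged illustration of the intuition behind views (not the thesis' temporal-logic or category-theoretic formalisation), the sketch below shows a core object that is independent of its environment and a view that adapts it to one client's interface; all class and method names are invented for the example.

```python
# Hedged illustration (not the thesis' formal calculus): views as the "glue"
# between objects. The core object knows nothing about its environment; a view
# adapts its interface to what a particular client expects.

class Counter:
    """A core object, designed independently of any environment."""
    def __init__(self):
        self._count = 0
    def increment(self):
        self._count += 1
    def value(self):
        return self._count

class ProgressBarView:
    """A view: adapts Counter to the interface a progress-reporting client wants."""
    def __init__(self, counter, total):
        self._counter = counter
        self._total = total
    def advance(self):
        self._counter.increment()
    def percent_done(self):
        return 100.0 * self._counter.value() / self._total

# The client is composed with the view, not with the object itself.
bar = ProgressBarView(Counter(), total=4)
bar.advance()
bar.advance()
print(bar.percent_done())   # 50.0
```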