1

Alchemy: Transmuting Base Specifications into Implementations

Yoo, Daniel 26 February 2009 (has links)
Alloy specifications are used to define lightweight models of systems. We present Alchemy, which compiles Alloy specifications into implementations that execute against persistent databases. Alchemy translates a subset of Alloy predicates into imperative update operations, and it converts facts into database integrity constraints that it maintains automatically in the face of these imperative actions. In addition to presenting the semantics and an algorithm for this compilation, we present the tool and outline its application to a non-trivial specification. We also discuss lessons learned about the relationship between Alloy specifications and imperative implementations.
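To make the compilation pattern concrete, here is a minimal Python sketch of our own (not Alchemy's actual output; Alchemy compiles Alloy, and the `follows` relation and the no-self-follow fact are invented for illustration): a declarative fact becomes an integrity check enforced around an imperative update derived from a predicate.

```python
import sqlite3

# In-memory database for brevity; Alchemy targets persistent stores.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE follows (src TEXT, dst TEXT)")

def fact_no_self_follow():
    """Hypothetical Alloy-style fact: no tuple may relate a node to itself."""
    bad = conn.execute("SELECT COUNT(*) FROM follows WHERE src = dst")
    return bad.fetchone()[0] == 0

def add_follow(src, dst):
    """Update operation compiled from a (hypothetical) Alloy predicate.

    The insert runs in a transaction; if the fact would be violated,
    the transaction rolls back, so the constraint is maintained
    automatically in the face of the imperative action.
    """
    with conn:  # commits on success, rolls back on exception
        conn.execute("INSERT INTO follows VALUES (?, ?)", (src, dst))
        if not fact_no_self_follow():
            raise ValueError("fact violated: no self-follows allowed")

add_follow("alice", "bob")        # succeeds and commits
try:
    add_follow("carol", "carol")  # violates the fact; rolled back
except ValueError as err:
    print(err)
```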
2

Precise verification of C programs

Lewis, Matt January 2014 (has links)
Most current approaches to software verification are one-sided -- a safety prover will try to prove that a program is safe, while a bug-finding tool will try to find bugs. It is rare to find an analyser that is optimised for both tasks, which is problematic since it is hard to know in advance whether a program you wish to analyse is safe or not. The result of taking a one-sided approach to verification is false alarms: safety provers will often claim that safe programs have errors, while bug-finders will often be unable to find errors in unsafe programs. Orthogonally, many software verifiers are designed for reasoning about idealised programming languages that may not have widespread use. A common assumption made by verification tools is that program variables can take arbitrary integer values, while programs in most common languages use fixed-width bitvectors for their variables. This can have a real impact on the verification, leading to incorrect claims by the verifier. In this thesis we will show that it is possible to analyse C programs without generating false alarms, even if they contain unbounded loops, use non-linear arithmetic and have integer overflows. To do this, we will present two classes of analysis based on underapproximate loop acceleration and second-order satisfiability respectively. Underapproximate loop acceleration addresses the problem of finding deep bugs. By finding closed forms for loops, we show that deep bugs can be detected without unwinding the program and that this can be done without introducing false positives or masking errors. We then show that programs accelerated in this way can be optimised by inlining trace automata to reduce their reachability diameter. This inlining allows acceleration to be used as a viable technique for proving safety, as well as finding bugs. In the second part of the thesis, we focus on using second-order logic for program analysis. We begin by defining second-order SAT: an extension of propositional SAT that allows quantification over functions. We show that this problem is NEXPTIME-complete, and that it is polynomial time reducible to finite-state program synthesis. We then present a fully automatic, sound and complete algorithm for synthesising C programs from a specification written in C. Our approach uses a combination of bounded model checking, explicit-state model checking and genetic programming to achieve surprisingly good performance for a problem with such high complexity. We conclude by using second-order SAT to precisely and directly encode several program analysis problems including superoptimisation, de-obfuscation, safety and termination for programs using bitvector arithmetic and dynamically allocated lists.
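To illustrate the acceleration idea in isolation (a toy example of ours, not the thesis's algorithm), the sketch below replaces a constant-increment loop with its closed form over 32-bit modular arithmetic, so an overflow bug billions of iterations deep is found by solving an equation rather than unwinding the loop.

```python
# Toy underapproximate acceleration: the loop  for (i=0; i<n; i++) x += k;
# over 32-bit unsigned arithmetic has the closed form x0 + n*k (mod 2**32).
M = 2 ** 32

def loop_naive(x0, k, n):
    x = x0
    for _ in range(n):          # concrete execution: O(n) steps
        x = (x + k) % M
    return x

def loop_accelerated(x0, k, n):
    return (x0 + n * k) % M     # closed form: O(1), no unwinding

# Deep bug: find an iteration count n at which x wraps around to 0,
# by solving x0 + n*k = 0 (mod M) instead of unwinding ~10**9 times.
x0, k = 4, 4
n = (M - x0) // k               # here k divides M - x0 exactly
assert loop_accelerated(x0, k, n) == 0

# Spot-check that the closed form agrees with concrete execution.
assert loop_naive(x0, k, 823) == loop_accelerated(x0, k, 823)
print("wraparound reached at n =", n)
```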
3

IRE: A Framework For Inductive Reverse Engineering

January 2019 (has links)
Reverse engineering is critical to reasoning about how a system behaves. While complete access to a system allows for perfect analysis, partial access is inherently uncertain. This is the case for an individual agent in a distributed system. Inductive Reverse Engineering (IRE) enables analysis under such circumstances. IRE does this by producing program spaces consistent with individual input-output examples for a given domain-specific language. Then, IRE intersects those program spaces to produce a generalized program consistent with all examples. IRE, an easy-to-use framework, allows this domain-specific language to be specified in the form of Theorist objects, which produce Theory objects, a succinct way of representing the program space. Programs are often much more complex than simple string transformations. One way in which they are more complex is that they follow conversation-like behavior, potentially governed by some underlying protocol. As a result, IRE represents program interactions as Conversations in order to model a distributed system more faithfully. This, for instance, enables IRE to model dynamically captured inputs received from other agents in the distributed system. While domain-specific knowledge provided by a user is extremely valuable, such information is not always available. IRE mitigates this by automatically inferring program grammars, allowing it to still perform efficient searches of the program space. It does this by intersecting conversations prior to synthesis in order to determine which portions of conversations are constant. IRE exists to be a tool that can aid in automatic reverse engineering across numerous domains. Further, IRE aspires to be a centralized location and interface for implementing program synthesis and automatic black-box analysis techniques. / Dissertation/Thesis / Masters Thesis Computer Science 2019
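As a concrete, much-simplified rendering of the intersect-the-program-spaces idea (the DSL of affine functions below is our own invention, not part of IRE's API):

```python
from itertools import product

# Hypothetical DSL: affine functions f(x) = a*x + b with small coefficients.
DSL = list(product(range(-5, 6), repeat=2))   # all (a, b) pairs

def program_space(example):
    """The set of DSL programs consistent with one input-output example."""
    x, y = example
    return {(a, b) for (a, b) in DSL if a * x + b == y}

# Each example induces a program space; intersecting the spaces yields
# the generalized program(s) consistent with all examples.
examples = [(1, 5), (2, 7), (3, 9)]
generalized = set.intersection(*(program_space(e) for e in examples))
print(generalized)   # {(2, 3)}, i.e. f(x) = 2*x + 3
```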
4

Theory and techniques for synthesizing efficient breadth-first search algorithms

Nedunuri, Srinivas 05 October 2012 (has links)
The development of efficient algorithms to solve a wide variety of combinatorial and planning problems is a significant achievement in computer science. Traditionally each algorithm is developed individually, based on flashes of insight or experience, and then (optionally) verified for correctness. While computer science has formalized the analysis and verification of algorithms, the process of algorithm development remains largely ad hoc. The ad hoc nature of algorithm development is especially limiting when developing algorithms for a family of related problems. Guided program synthesis is an existing methodology for systematic development of algorithms. Specific algorithms are viewed as instances of very general algorithm schemas. For example, the Global Search schema generalizes traditional branch-and-bound search, and includes both depth-first and breadth-first strategies. Algorithm development involves systematic specialization of the algorithm schema based on problem-specific constraints to create efficient algorithms that are correct by construction, obviating the need for a separate verification step. Guided program synthesis has been applied to a wide range of algorithms, but there is still no systematic process for the synthesis of large search programs such as AI planners. Our first contribution is the specialization of Global Search to a class we call Efficient Breadth-First Search (EBFS), by incorporating dominance relations to constrain the size of the frontier of the search to be polynomially bounded. Dominance relations allow two search spaces to be compared to determine whether one dominates the other, thus allowing the dominated space to be eliminated from the search. We further show that EBFS is an effective characterization of greedy algorithms when the breadth bound is set to one. Surprisingly, the resulting characterization is more general than the well-known characterization of greedy algorithms, namely the Greedy Algorithm parametrized over algebraic structures called greedoids. Our second contribution is a methodology for systematically deriving dominance relations, not just for individual problems but for families of related problems. The techniques are illustrated on numerous well-known problems. Combining this with the program schema for EBFS results in efficient greedy algorithms. Our third contribution is the application of the theory and methodology to the practical problem of synthesizing fast planners. Nearly all the state-of-the-art planners in the planning literature are heuristic domain-independent planners. They generally do not scale well, and their space requirements can become prohibitive. Planners such as TLPlan that incorporate domain-specific information in the form of control rules are orders of magnitude faster. However, devising the control rules is a labor-intensive task that requires domain expertise and insight, and the correctness of the rules is not guaranteed. We introduce a method by which domain-specific dominance relations can be systematically derived and then turned into control rules, and demonstrate the method on a planning problem (Logistics). / text
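The following toy sketch (our own knapsack example, not the thesis's schema formalism) illustrates the EBFS shape: a level-by-level search that discards dominated states and caps the frontier, degenerating to a greedy algorithm when the breadth bound is one.

```python
items = [(2, 3), (3, 4), (4, 5), (5, 8)]   # (weight, value) pairs
CAPACITY = 9

def dominates(s, t):
    """State s dominates t if it weighs no more and is worth no less."""
    (ws, vs), (wt, vt) = s, t
    return ws <= wt and vs >= vt

def ebfs(breadth_bound):
    frontier = [(0, 0)]   # (weight, value) states after deciding a prefix
    for w, v in items:
        # Expand the level: for each state, either skip or take the item.
        children = list(frontier) + [
            (cw + w, cv + v) for cw, cv in frontier if cw + w <= CAPACITY]
        # Prune any state dominated by another state on this level.
        children = [s for s in children
                    if not any(dominates(t, s) and t != s for t in children)]
        # Enforce the frontier bound (polynomially bounded in general).
        frontier = sorted(set(children), key=lambda s: -s[1])[:breadth_bound]
    return max(v for _, v in frontier)

print(ebfs(breadth_bound=8))   # 13: search over all non-dominated states
print(ebfs(breadth_bound=1))   # 12: greedy, keeping only the best state
```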
5

Automated synthesis for program inversion

Hou, Cong 20 September 2013 (has links)
We consider the problem of synthesizing program inverses for imperative languages. Our primary motivation comes from optimistic parallel discrete event simulation (OPDES). There, a simulator must process events while respecting logical temporal event-ordering constraints; to extract parallelism, an OPDES simulator may speculatively execute events and only roll back execution when event-ordering violations occur. In this context, the ability to perform rollback by running time- and space-efficient reverse programs, rather than saving and restoring large amounts of state, can make OPDES more practical. Synthesizing inverses also appears in numerous other software engineering contexts, such as debugging, synthesizing “undo” code, or even generating decompressors automatically given only lossless compression code. This thesis comprises three main chapters. In the first chapter, we focus on handling programs with only scalar data and arbitrary control flow. By building a value search graph (VSG) that represents recoverability relationships between variable values, we turn the problem of recovering previous values into a graph search problem. Forward and reverse programs are generated according to the search results. For any loop that produces an output state given a particular input state, our method can synthesize an inverse loop that reconstructs the input state given the original loop's output state. The synthesis process consists of two major components: (a) building the inverse loop's body, and (b) building the inverse loop's predicate. Our method works for all natural loops, including those that take early exits (e.g., via breaks, gotos, or returns). In the second chapter, we extend our method to handle programs containing arrays. Based on Array SSA, we develop a modified Array SSA form from which we can easily build equalities between arrays and array elements. Specifically, to represent the equality between two arrays, we employ the array subregion as the constraint. During the search, those subregions are calculated to guarantee that all array elements will be retrieved. We also develop a demand-driven method to retrieve array elements from a loop, in which we only try to retrieve an array element from an iteration if that element has not been modified in previous iterations. To ensure the correctness of each retrieval, boundary conditions are created and checked at the entry and exit of the loop. In the last chapter, we introduce several techniques for handling high-level constructs of C++ programs, including virtual functions, copying of C++ objects, C++ STL containers, C++ source code normalization, and inter-procedural function calls. Since C++ is an object-oriented (OO) language, our discussion in this chapter can also be extended to other OO languages such as Java.
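As a minimal illustration of why synthesized reverse programs can replace state saving (a toy of ours, not the thesis's value-search-graph algorithm): a straight-line event built from invertible updates can be rolled back by running the inverted statements in reverse order.

```python
# Forward event as (variable, operation, constant) statements.
FORWARD = [("x", "+=", 5), ("y", "*=", 3), ("x", "+=", 2)]

INVERSE_OP = {"+=": "-=", "-=": "+=", "*=": "/=", "/=": "*="}

def run(program, state):
    for var, op, const in program:
        if op == "+=":   state[var] += const
        elif op == "-=": state[var] -= const
        elif op == "*=": state[var] *= const
        elif op == "/=": state[var] //= const   # assumes exact division
    return state

def invert(program):
    """Reverse the statement order and invert each operation."""
    return [(var, INVERSE_OP[op], const) for var, op, const in reversed(program)]

state = {"x": 1, "y": 2}
after = run(FORWARD, dict(state))
restored = run(invert(FORWARD), dict(after))
assert restored == state   # rollback recovers the input state exactly
print(after, "->", restored)
```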
6

The diagrammatic specification and automatic generation of geometry subroutines

Li, Yulin, Ph. D. 20 October 2010 (has links)
Programming has advanced a great deal since the appearance of the stored-program architecture. Through the successive generations of machine codes, assembly languages, high-level languages, and object-oriented languages, the drive has been toward program descriptions that express more meaning in a shorter space. This trend continues today with domain-specific languages. However, conventional languages rely on a textual formalism (commands, statements, lines of code) to capture the programmer's intent, which, regardless of its level of abstraction, imposes inevitable overheads. Before successful programming activities can take place, the syntax has to be mastered, names and keywords memorized, library routines learned, and so on. Existing visual programming languages avoid some of these overheads, but do not release the programmer from the task of specifying the program logic, which consumes the main portion of programming time and is also the major source of difficult bugs. Our work aims to minimize the demands a formalism imposes on the programmer of geometric subroutines beyond what is inherent in the problem itself. Our approach frees the programmer from syntactic constraints and generates logically correct programs automatically from program descriptions in the form of diagrams. To write a program, the programmer simply draws a few diagrams to depict the problem context and specifies all the necessary parameters through menu operations. Diagrams are succinct, easy to learn, and intuitive to use. They are much easier to modify than code, and they help the user visualize and analyze the problem, in addition to providing information to the computer. Furthermore, diagrams describe a situation rather than a task and thus are reusable for different tasks—in general, a single diagram can generate many programs. For these reasons, we have chosen diagrams as the main specification mechanism. In addition, we leverage the power of automatic inference to reason about diagrams and generic components—the building blocks of our programs—and discover the logic for assembling these components into correct programs. To facilitate inference, symbolic facts encode the entities present in the diagrams, their spatial relationships, and the preconditions and effects of reusable components. We have developed a reference implementation and tested it on a number of real-world examples to demonstrate the feasibility and efficacy of our approach. / text
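The component-assembly step can be pictured with a small sketch (hypothetical facts and components of our own, not the dissertation's encoding): symbolic facts extracted from a diagram, components declaring preconditions and effects, and a simple forward-chaining search for a sequence that achieves the goal.

```python
# Hypothetical components: name -> (precondition facts, effect facts).
COMPONENTS = {
    "project_points_to_plane": ({"points3d", "plane"}, {"points2d"}),
    "fit_circle":              ({"points2d"}, {"circle"}),
    "circle_center":           ({"circle"}, {"center"}),
}

def assemble(facts, goal):
    """Forward chaining: apply any component whose preconditions hold."""
    program, facts = [], set(facts)
    while goal not in facts:
        for name, (pre, post) in COMPONENTS.items():
            if pre <= facts and not post <= facts:
                program.append(name)   # the component becomes a program step
                facts |= post
                break
        else:
            return None                # goal unreachable from these facts
    return program

# Facts a diagram might contribute: some 3D points and a plane.
print(assemble({"points3d", "plane"}, "center"))
# ['project_points_to_plane', 'fit_circle', 'circle_center']
```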
7

Bridging the specification protocol gap in argumentation

Maghraby, Ashwag Omar January 2013 (has links)
As multi-agent systems (MAS) have become more mature and systems in general have become more distributed, it is necessary for those who want to build large-scale systems to consider, in some computational depth, how agents can communicate in complex, distributed systems. Currently, some MAS have been developed to use an abstract specification language for argumentation as a basis for agent communication, to provide effective decision support for agents and yield better agreements. However, as we build complete MAS that involve argumentation, there is a need to produce concrete implementations in which these abstract specifications are realised via protocols coordinating agent behaviour. This creates a gap between standard argument specification and the deployment of protocols. This thesis attempts to close this gap by using a combination of automated synthesis and verification methods. More precisely, this thesis proposes a means of moving rapidly from argument specification to protocol implementation, using an extension of the Argument Interchange Format (AIF, a generic specification language for argument structure) called a Dialogue Interaction Diagram (DID) as the dialogue game specification language, and the Lightweight Coordination Calculus (LCC, an executable specification language used for coordinating agents in open systems) as the implementation language. The main contribution of this research is to provide approaches that enable developers of dialogue game argumentation systems to use specification languages (in our case AIF/DID) to generate agent protocol systems capable of direct implementation on open infrastructures (in our case LCC).
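Schematically, the translation step might look like the sketch below; the move table and the emitted clause syntax are invented for illustration (only loosely LCC-flavoured) and are not the thesis's actual translation rules.

```python
# Hypothetical DID fragment: move name -> (speaker role, legal reply).
DID_MOVES = {
    "claim":  ("proponent", "why"),
    "why":    ("opponent",  "argue"),
    "argue":  ("proponent", "accept"),
    "accept": ("opponent",  None),
}

def to_protocol_clauses(moves):
    """Mechanically emit one (invented, LCC-flavoured) clause per move."""
    clauses = []
    for move, (role, reply) in moves.items():
        cont = f"then {reply}" if reply else "then close"
        clauses.append(f"a({role}) :: {move}(Content) => a(peer) {cont}.")
    return clauses

print("\n".join(to_protocol_clauses(DID_MOVES)))
```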
8

Runtime Service Composition via Logic-Based Program Synthesis

Lämmermann, Sven January 2002 (has links)
No description available.
9

Program reliability through algorithmic design and analysis

Samanta, Roopsha 10 February 2014 (has links)
Software systems are ubiquitous in today's world and yet remain vulnerable to the fallibility of human programmers as well as the unpredictability of their operating environments. The overarching goal of this dissertation is to develop algorithms that enable automated and efficient design and analysis of reliable programs. In the first and second parts of this dissertation, we focus on the development of programs that are free from programming errors. The intent is not to eliminate the human programmer, but instead to complement his or her expertise with sound and efficient computational techniques, when possible. To this end, we make contributions in two specific domains. Program debugging --- the process of fault localization and error elimination from a program found to be incorrect --- typically relies on expert human intuition and experience, and is often a lengthy, expensive part of the program development cycle. In the first part of the dissertation, we target automated debugging of sequential programs. A broad and informal statement of the (automated) program debugging problem is to suitably modify an erroneous program, say P, to obtain a correct program, say P'. This problem is undecidable in general; it is hard to formalize; moreover, it is particularly challenging to assimilate and mechanize the customized, expert programmer intuition involved in the choices made in manual program debugging. Our first contribution in this domain is a methodical formalization of the program debugging problem that enables automation while incorporating expert programmer intuition and intent. Our second contribution is a solution framework that can debug infinite-state, imperative, sequential programs written in higher-level programming languages such as C. Boolean programs, which are smaller, finite-state abstractions of infinite-state or large, finite-state programs, have been found to be tractable for program verification. In this dissertation, we utilize Boolean programs for program debugging. Our solution framework involves two main steps: (a) automated debugging of a Boolean program corresponding to an erroneous program P, and (b) translation of the corrected Boolean program into a correct program P'. Shared-memory concurrent programs are notoriously difficult to write, verify and debug; this makes them excellent targets for automated program completion, in particular for synthesis of synchronization code. Extant work in this domain has focused on either propositional temporal logic specifications with simplistic models of concurrent programs, or more refined program models with the specifications limited to just safety properties. Moreover, there has been limited effort in developing adaptable and fully automatic synthesis frameworks that are capable of generating synchronization at different levels of abstraction and granularity. In the second part of this dissertation, we present a framework for synthesis of synchronization for shared-memory concurrent programs with respect to temporal logic specifications. In particular, given a concurrent program composed of synchronization-free processes, and a temporal logic specification describing their expected concurrent behavior, we generate synchronized processes such that the resulting concurrent program satisfies the specification. We provide the ability to synthesize readily implementable synchronization code based on lower-level primitives such as locks and condition variables.
We enable synchronization synthesis of finite-state concurrent programs composed of processes that may have local and shared variables, may be straight-line or branching programs, may be ongoing or terminating, and may have program-initialized or user-initialized variables. We also facilitate expression of safety and liveness properties over both control and data variables by proposing an extension of propositional computation tree logic. Most program analysis, verification, debugging and synthesis methodologies target traditional correctness properties such as safety and liveness. These techniques typically do not provide a quantitative measure of the sensitivity of a computational system's behavior to unpredictability in the operating environment. We propose that the core property of interest in reasoning in the presence of such uncertainty is robustness --- small perturbations to the operating environment do not change the system's observable behavior substantially. In well-established areas such as control theory, robustness has always been a fundamental concern; however, the techniques and results therein are not directly applicable to computational systems with large amounts of discretized, discontinuous behavior. Hence, robustness analysis of software programs used in heterogeneous settings necessitates the development of new theoretical frameworks and algorithms. In the third part of this dissertation, we target robustness analysis of two important classes of discrete systems --- string transducers and networked systems of Mealy machines. For each system, we formally define robustness with respect to a specific source of uncertainty. In particular, we analyze the behavior of transducers in the presence of input perturbations, and the behavior of networked systems in the presence of channel perturbations. Our overall approach is automata-theoretic, and necessitates the use of specialized distance-tracking automata for tracking various distance metrics between two strings. We present constructions for such automata and use them to develop decision procedures based on reducing the problem of robustness verification of our systems to the problem of checking the emptiness of certain automata. Thus, the system under consideration is robust if and only if the languages of particular automata are empty. / text
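A toy version of the robustness question from the third part (our own two-state Mealy machine, checked by brute force rather than the dissertation's distance-tracking automata): flip one input symbol and measure how far the outputs drift apart in Hamming distance. Here the drift grows with input length, witnessing non-robustness.

```python
from itertools import product

# A two-state Mealy machine: DELTA[state][input] = (next state, output).
# It outputs the running parity of 1s, so one flipped input symbol
# changes every subsequent output -- a textbook non-robust transducer.
DELTA = {
    "even": {"0": ("even", "0"), "1": ("odd", "1")},
    "odd":  {"0": ("odd", "1"), "1": ("even", "0")},
}

def run(word):
    state, out = "even", []
    for c in word:
        state, o = DELTA[state][c]
        out.append(o)
    return "".join(out)

def max_output_drift(length):
    """Worst-case Hamming distance between the outputs of a word and any
    single-symbol perturbation of it, over all words of this length."""
    worst = 0
    for bits in product("01", repeat=length):
        w = "".join(bits)
        for i in range(length):
            w2 = w[:i] + ("1" if w[i] == "0" else "0") + w[i + 1:]
            drift = sum(a != b for a, b in zip(run(w), run(w2)))
            worst = max(worst, drift)
    return worst

for n in range(1, 8):
    print(n, max_output_drift(n))   # drift equals n: unbounded, not robust
```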
