11

Efficient specification-based testing using incremental techniques

Uzuncaova, Engin 10 October 2012
As software systems grow in complexity, the need for efficient automated techniques for design, testing and verification becomes increasingly critical. Specification-based testing provides an effective approach for checking the correctness of software in general. Constraint-based analysis using specifications enables checking various rich properties by automating the generation of test inputs. However, as specifications get more complex, existing analyses face a scalability problem due to state explosion. This dissertation introduces a novel approach to analyze declarative specifications incrementally; presents a constraint prioritization and partitioning methodology to enable efficient incremental analyses; defines a suite of optimizations to improve the analyses further; introduces a novel approach for testing software product lines; and provides an experimental evaluation that shows the feasibility and scalability of the approach. The key insight behind the incremental technique is declarative slicing, a new class of optimizations. The optimizations are inspired by traditional program slicing for imperative languages but are applicable to analyzable declarative languages in general, and Alloy in particular. We introduce a novel algorithm for slicing declarative models. Given an Alloy model, our fully automatic tool, Kato, partitions the model into a base slice and a derived slice using constraint prioritization. As opposed to the conventional use of the Alloy Analyzer, where models are analyzed as a whole, we perform analysis incrementally, i.e., in several steps. A satisfying solution to the base slice is systematically extended to generate a solution for the entire model, while unsatisfiability of the base implies unsatisfiability of the entire model. We show how our incremental technique enables different analysis tools and solvers to be used in synergy to optimize our approach further. Compared to the conventional use of the Alloy Analyzer, this yields even greater overall performance gains for solving declarative models. Incremental analyses have a natural application in the software product line domain. A product line is a family of programs built from features that are increments in program functionality. Given properties of features as first-order logic formulas, we automatically generate test inputs for each product in a product line. We show how to map a formula that specifies a feature into a transformation that defines incremental refinement of test suites. Our experiments using different data structure product lines show that our approach can provide an order of magnitude speed-up over conventional techniques.
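As a rough illustration of the incremental scheme the abstract describes, the sketch below solves a toy finite-domain model in two steps: a base slice is solved first and each base solution is extended over the remaining constraints. The base/derived split, the variables, and the constraints are invented for illustration and are not taken from Kato or Alloy.

```python
# Illustrative sketch (not Kato): incremental, two-step solving of a toy model.
from itertools import product

DOMAIN = range(4)

def base_ok(s):
    # Base slice: the prioritized, cheaper constraints solved first.
    return s["x"] < s["y"]

def derived_ok(s):
    # Derived slice: remaining constraints, checked only on extensions of a base solution.
    return s["x"] + s["y"] == s["z"]

def solve_base():
    """Enumerate assignments over the base variables only."""
    for x, y in product(DOMAIN, repeat=2):
        cand = {"x": x, "y": y}
        if base_ok(cand):
            yield cand

def extend(base_solution):
    """Extend a base solution over the remaining variable; None if no extension exists."""
    for z in DOMAIN:
        full = dict(base_solution, z=z)
        if derived_ok(full):
            return full
    return None

def solve_incrementally():
    # If the base slice is unsatisfiable, the whole model is unsatisfiable.
    for base_solution in solve_base():
        full = extend(base_solution)
        if full is not None:
            return full
    return None

print(solve_incrementally())  # e.g. {'x': 0, 'y': 1, 'z': 1}
```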
12

Expressiveness of answer set languages

Ferraris, Paolo, 1972- 28 August 2008
Answer set programming (ASP) is a form of declarative programming oriented towards difficult combinatorial search problems. It has been applied, for instance, to plan generation and product configuration problems in artificial intelligence and to graph-theoretic problems arising in VLSI design and in historical linguistics. Syntactically, ASP programs look like Prolog programs, but the computational mechanisms used in ASP are different: they are based on the ideas that have led to the development of fast satisfiability solvers for propositional logic. ASP is based on the answer set/stable model semantics for logic programs, originally intended as a specification for query answering in Prolog. From its original definition in 1988, the semantics was independently extended by different research groups to more expressive kinds of programs, with syntax and semantics that are incompatible with each other. In this thesis we study how the various extensions are related to each other. In order to do that, we propose another definition of an answer set. This definition has three main characteristics: (i) it is very simple, (ii) its syntax is more general than the usual concept of a logic program, and (iii) strong theoretical tools can be used to reason about it. Regarding (ii), we show that our syntax allows constructs defined in many other extensions of the answer set semantics. This fact, together with (iii), allows us to study the expressiveness of those constructs. We also compare the answer set semantics with another important formalism developed by Norm McCain and Hudson Turner, called causal logic.
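The 1988 stable model (answer set) semantics the abstract refers to can be illustrated with a small brute-force checker; the three-rule propositional program below is an invented example, not material from the thesis.

```python
# Illustrative sketch: brute-force answer sets of a tiny propositional program.
# Rules are (head, positive_body, negative_body).
from itertools import chain, combinations

ATOMS = {"p", "q", "r"}
# p :- not q.    q :- not p.    r :- p.
PROGRAM = [
    ("p", set(), {"q"}),
    ("q", set(), {"p"}),
    ("r", {"p"}, set()),
]

def reduct(program, candidate):
    """Gelfond-Lifschitz reduct: drop rules whose negative body intersects the
    candidate set, and strip negation from the remaining rules."""
    return [(h, pos) for (h, pos, neg) in program if not (neg & candidate)]

def minimal_model(positive_program):
    """Least model of a negation-free program via the immediate-consequence operator."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in positive_program:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def answer_sets(program, atoms):
    # A candidate is an answer set iff it equals the least model of its reduct.
    subsets = chain.from_iterable(combinations(sorted(atoms), k) for k in range(len(atoms) + 1))
    return [set(s) for s in subsets if minimal_model(reduct(program, set(s))) == set(s)]

print(answer_sets(PROGRAM, ATOMS))  # [{'q'}, {'p', 'r'}]
```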
13

Efficient specification-based testing using incremental techniques

Uzuncaova, Engin. January 1900
Thesis (Ph. D.)--University of Texas at Austin, 2008. / Vita. Includes bibliographical references.
14

Expressiveness of answer set languages

Ferraris, Paolo, January 1900
Thesis (Ph. D.)--University of Texas at Austin, 2007. / Vita. Includes bibliographical references.
15

Graph Search as a Feature in Imperative/Procedural Programming Languages

January 2018
Graph theory is a critical component of computer science and software engineering, with algorithms concerning graph traversal and comprehension powering many of the largest problems in both industry and research. Engineers and researchers often have an accurate view of their target graph, but they struggle to implement a correct and efficient search over that graph. To facilitate rapid, correct, efficient, and intuitive development of graph-based solutions we propose a new programming language construct: the search statement. Given a supra-root node, a procedure which determines the children of a given parent node, and optional definitions of the fail-fast acceptance or rejection of a solution, the search statement can conduct a search over any graph or network. Structurally, this statement is modelled after the common switch statement and is put into a largely imperative/procedural context to allow for immediate and intuitive development by most programmers. The Go programming language has been used as a foundation and proof-of-concept of the search statement. A Go compiler is provided which implements this construct. / Masters Thesis, Software Engineering, 2018
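A loose approximation of the ingredients the search statement takes, written as an ordinary Python function rather than the thesis's extended Go; the breadth-first strategy and the toy integer graph are assumptions made for illustration.

```python
# Illustrative sketch: a search driven by a supra-root node, a children procedure,
# and optional fail-fast accept/reject predicates.
from collections import deque

def search(supra_root, children, accept, reject=lambda node: False):
    """Breadth-first traversal over the implicit graph defined by `children`."""
    frontier = deque([supra_root])
    seen = {supra_root}
    while frontier:
        node = frontier.popleft()
        if reject(node):          # fail-fast rejection: prune this branch
            continue
        if accept(node):          # fail-fast acceptance: stop with a solution
            return node
        for child in children(node):
            if child not in seen:
                seen.add(child)
                frontier.append(child)
    return None

# Toy usage: search the integers reachable from 1 by +3 or *2 for a target value.
result = search(
    supra_root=1,
    children=lambda n: [n + 3, n * 2],
    accept=lambda n: n == 22,
    reject=lambda n: n > 100,
)
print(result)  # 22
```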
16

Visual Compositional-Relational Programming

Zetterström, Andreas January 2010
In an ever-faster-changing environment, software developers not only need agile methods, but also agile programming paradigms and tools. A paradigm shift towards declarative programming has begun; a clear indication of this is Microsoft's substantial investment in functional programming. Moreover, several attempts have been made to enable visual programming. We believe that software development is ready for a new paradigm which goes beyond any existing declarative paradigm: visual compositional-relational programming. Compositional-relational programming (CRP) is a purely declarative paradigm -- making it suitable for a visual representation. All procedural aspects -- including the increasingly important issue of parallelization -- are removed from the programmer's consideration and handled in the underlying implementation. The foundation for CRP is a theory of higher-order combinatory logic programming developed by Hamfelt and Nilsson in the 1990s. This thesis proposes a model for visualizing compositional-relational programming. We show that the diagrams are isomorphic with the programs represented in textual form. Furthermore, we show that the model can be used to automatically generate code from diagrams, thus paving the way for a visual integrated development environment for CRP, where programming is performed by combining visual objects in a drag-and-drop fashion. At present, we implement CRP using Prolog. However, in the future we foresee an implementation directly on one of the major object-oriented frameworks, e.g. the .NET platform, with the aim to finally launch relational programming into large-scale systems development.
17

Towards Simpler Argument Binding : Knowledge Gathering by Mining Logic Program Repositories

Luo, Tong January 2016
Compositional relational programming (CRP) is a purely declarative and naturally compositional programming paradigm, but its low readability and some argument-binding issues limit its use. The main purpose of this thesis is to use common binding patterns identified in Prolog programs to improve the current argument-binding mechanism in CRP. To collect relevant Prolog rules and convert them into a measurable form, a data mining tool is built and applied to extract data from a Prolog code repository. After the analysis, two kinds of patterns are identified, based on bindings outside and inside logical combinations. Correspondingly, the projection operator make is optimized to highlight the dummy argument; three extended "and" combinators are proposed to handle common binary combinations; and the join operator is modified to combine multiple predicates efficiently and flexibly. In the future, the usability of these improved operators should be carefully evaluated.
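As a very rough sketch of the compositional-relational flavour involved, the following treats finite relations as Python sets of tuples; the make/conj/join definitions below are simplified stand-ins invented for illustration and are not the operators defined in the thesis.

```python
# Illustrative sketch: relations as sets of tuples, combined point-free.

def make(relation, positions):
    """Projection/permutation of arguments: rearrange each tuple according to
    the given argument positions (a simplified 'make'-like operator)."""
    return {tuple(t[i] for i in positions) for t in relation}

def conj(r1, r2):
    """A binary 'and' combinator: keep tuples that belong to both relations."""
    return r1 & r2

def join(r1, r2):
    """Combine two binary relations on a shared middle argument (relational composition)."""
    return {(a, c) for (a, b) in r1 for (b2, c) in r2 if b == b2}

parent = {("tom", "ann"), ("ann", "joe")}
print(join(parent, parent))                         # {('tom', 'joe')}  -- grandparent
print(make(parent, (1, 0)))                         # child relation: arguments swapped
print(conj(parent, {("tom", "ann"), ("sue", "bob")}))  # {('tom', 'ann')}
```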
18

An Extensible Framework for Annotation-based Parameter Passing in Distributed Object Systems

Gopal, Sriram 28 July 2008
Modern distributed object systems pass remote parameters based on their runtime type. This design choice limits the expressiveness, readability, and maintainability of distributed applications. While a rich body of research is concerned with middleware extensibility, modern distributed object systems do not offer programming facilities to extend their remote parameter passing semantics. Thus, extending these semantics requires understanding and modifying the underlying middleware implementation. This thesis addresses these design shortcomings by presenting (i) a declarative and extensible approach to remote parameter passing that decouples parameter passing from parameter types, and (ii) a plugin-based framework, DeXteR, that enables the programmer to extend the native set of remote parameter passing semantics, without having to understand or modify the underlying middleware implementation. DeXteR treats remote parameter passing as a distributed cross-cutting concern. It uses generative and aspect-oriented techniques, enabling the implementation of different parameter passing semantics as reusable application-level plugins that work with application, system, and third-party library classes. The flexibility and expressiveness of the framework is validated by implementing several non-trivial parameter passing semantics as DeXteR plugins. The material presented in this thesis has been accepted for publication at the ACM/USENIX Middleware 2008 conference. / Master of Science
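A hypothetical sketch of the general idea of declaring parameter-passing semantics per parameter rather than per runtime type; this is not DeXteR's API (which targets Java middleware), and every name below is invented for illustration.

```python
# Hypothetical sketch: per-parameter passing strategies declared declaratively.
import copy
import functools

def remote_params(**strategies):
    """Decorator: apply a named passing strategy ('copy' or 'reference') to each
    annotated parameter before the 'remote' call runs."""
    def wrap(fn):
        @functools.wraps(fn)
        def call(**kwargs):
            prepared = {}
            for name, value in kwargs.items():
                if strategies.get(name) == "copy":
                    prepared[name] = copy.deepcopy(value)   # callee mutations stay local
                else:
                    prepared[name] = value                   # reference-like semantics
            return fn(**prepared)
        return call
    return wrap

@remote_params(settings="copy", cache="reference")
def remote_update(settings, cache):
    settings["touched"] = True      # only visible on the copied argument
    cache["touched"] = True         # visible to the caller
    return settings, cache

s, c = {}, {}
remote_update(settings=s, cache=c)
print(s, c)  # {} {'touched': True}
```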
19

The programming language TransLucid

Ditu, Gabriel Cristian, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007
This thesis presents TransLucid, a low-level, purely declarative, intensional programming language. Built on a simple algebra and with just a small number of primitives, TransLucid programs define arbitrary dimensional infinite data structures, which are then queried to produce results. The formal foundations of TransLucid come from the work in intensional logic by Montague and Scott. The background chapters give a history of intensional logic and its predecessors in the Western world, as well as a history of intensional programming and Lucid, the first intensional programming language. The semantics of TransLucid are fully specified in the form of operational semantics. Three levels of semantics are given, in increasing order of efficiency, with the sequential warehouse semantics, the most efficient, being presented together with a proof that any expression will be evaluated by only examining relevant dimensions in the current context. The language is then extended in three important ways, by adding versioned identifiers, (declarative) side-effects and timestamped equations and demands. Adding versioned identifiers to TransLucid enriches the expressiveness of the language and allows the encoding of a variety of programming paradigms, ranging from manipulating large data-cubes to pattern-matching. Adding side-effects supports one of the main reasons for TransLucid: namely, to provide a target language, together with a methodology, for translating the main programming paradigms, thus creating a uniform end platform that can be the focus for optimisation and program verification. A translation of imperative programs into TransLucid is given. Timestamped equations and demands enable TransLucid to become a language for synchronous programming in real-time systems, as well as allowing runtime updates to a program's equations. The language TransLucid represents a decisive advance in declarative programming. It has applications in many fields of computer science and opens up exciting new avenues of research.
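A loose sketch of two ideas mentioned above: identifiers denoting values that vary over a multidimensional context, and a warehouse that caches each demanded (identifier, context) pair so it is evaluated only once. The equations and the dimension name "t" are assumptions; this is not TransLucid syntax or semantics.

```python
# Illustrative sketch: demand-driven, context-indexed evaluation with a warehouse cache.

warehouse = {}

def demand(ident, context):
    """Evaluate an identifier in a context, caching the result in the warehouse."""
    key = (ident, tuple(sorted(context.items())))
    if key not in warehouse:
        warehouse[key] = EQUATIONS[ident](context)
    return warehouse[key]

def fib(ctx):
    t = ctx["t"]
    if t < 2:
        return t
    # Query the same identifier in contexts shifted along dimension "t".
    return demand("fib", {**ctx, "t": t - 1}) + demand("fib", {**ctx, "t": t - 2})

EQUATIONS = {"fib": fib}

print(demand("fib", {"t": 10}))   # 55
print(len(warehouse))             # 11 distinct (identifier, context) entries evaluated
```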