The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Augmenting trace-based functional debugging

Penney, Alastair William. January 2000
No description available.
2

Strictness analysis of lazy functional programs

Benton, Peter Nicholas. January 1992
No description available.
3

Realizability and partiality in constructive set theories

Ḥamīyah, ʿAlī. January 1992
No description available.
4

The parallel reduction of lambda calculus expressions

Watson, Paul. January 1986
No description available.
5

Names and higher-order functions

Stark, Ian David Bede. January 1994
Many functional programming languages rely on the elimination of 'impure' features: assignment to variables, exceptions and even input/output. But some of these are genuinely useful, and it is of real interest to establish how they can be reintroduced in a controlled way. This dissertation looks in detail at one example of this: the addition to a functional language of dynamically generated names. Names are created fresh, they can be compared with each other and passed around, but that is all. As a very basic example of state, they capture the graduation between private and public, local and global, by their interaction with higher-order functions.

The vehicle for this study is the nu-calculus, an extension of the simply-typed lambda-calculus. The nu-calculus is equivalent to a certain fragment of Standard ML, omitting side-effects, exceptions, datatypes and recursion. Even without all these features, the interaction of name creation with higher-order functions can be complex and subtle.

Various operational and denotational methods for reasoning about the nu-calculus are developed. These include a computational metalanguage in the style of Moggi, which distinguishes in the type system between values and computations. This leads to categorical models that use a strong monad, and examples are devised based on functor categories. The idea of logical relations is used to derive powerful reasoning methods that capture some of the distinction between private and public names. These techniques are shown to be complete for establishing contextual equivalence between first-order expressions; they are also used to construct a correspondingly abstract categorical model.

All the work with the nu-calculus extends cleanly to Reduced ML, a larger language that introduces integer references: mutable storage cells that are dynamically allocated. It turns out that the step up is quite simple, and both the computational metalanguage and the sample categorical models can be reused.
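As a rough illustration of the idea (not the nu-calculus itself, and not Stark's formulation), the following Haskell sketch models dynamically generated names with a counter threaded through a State monad; the names `Name`, `fresh`, `runNames` and `privateTest` are invented for this sketch.

```haskell
import Control.Monad.State

-- A name is an abstract token: it supports equality and nothing else.
newtype Name = Name Int deriving (Eq)

-- Name creation is stateful: each call to 'fresh' yields a new name.
type Names a = State Int a

fresh :: Names Name
fresh = do
  n <- get
  put (n + 1)
  return (Name n)

runNames :: Names a -> a
runNames m = evalState m 0

-- A private name escapes only through the returned test function,
-- illustrating the interaction of name generation with higher-order
-- functions that the dissertation studies.
privateTest :: Names (Name -> Bool)
privateTest = do
  p <- fresh
  return (\x -> x == p)

example :: Bool
example = runNames $ do
  f <- privateTest
  q <- fresh
  return (f q)   -- False: q is distinct from the name hidden inside f
```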
6

Linearity and laziness

Wakeling, David. January 1990
No description available.
7

A declarative debugger for Haskell

Pope, Bernard James. Unknown Date (PDF)
This thesis considers the design and implementation of a Declarative Debugger for Haskell. At its core is a tree which captures the logical dependencies between function calls in a given execution of the program being debugged (the debuggee). The debuggee is transformed into a new Haskell program which produces the tree in addition to its normal value. A bug is identified in the tree when a call returns the wrong result but all the calls it depends upon are correct.
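The bug-location principle described in the abstract admits a compact sketch: a node is reported when its own result is judged wrong but all the calls it depends on are judged correct. The Haskell below is an invented illustration of that search, not Pope's implementation (which obtains the tree by transforming the debuggee); `CallTree`, `Oracle` and `findBug` are names made up for the sketch.

```haskell
-- An evaluation dependency tree: each node records a function call,
-- the result it produced, and the calls it depended on.
data CallTree = Call
  { callLabel :: String      -- e.g. "f 3 => 7"
  , children  :: [CallTree]
  }

-- The oracle answers whether a recorded call returned the right result
-- (in a real debugger this question is put to the user).
type Oracle = CallTree -> Bool

-- A node is reported as buggy when the oracle judges it wrong but
-- judges all of its children correct; otherwise descend into a wrong child.
findBug :: Oracle -> CallTree -> Maybe CallTree
findBug correct node
  | correct node = Nothing                     -- this call is fine
  | otherwise =
      case filter (not . correct) (children node) of
        []      -> Just node                   -- wrong result, correct children: bug here
        (c : _) -> findBug correct c           -- follow a wrong dependency downwards
```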
8

Semantics and type checking of dependently-typed lazy functional programs

Hünke, Yorck. January 2004
Based on the author's D.Phil. thesis (University of Oxford). Includes bibliographical references. Available online.
9

Proof-theoretic investigations into integrated logical and functional programming

Pinto, Luis Filipe Ribeiro. January 1997
This thesis is a proof-theoretic investigation of logic programming based on hereditary Harrop logic (as in lambdaProlog). After studying various proof systems for the first-order hereditary Harrop logic, we define the proof-theoretic semantics of a logic LFPL, intended as the basis of logic programming with functions, which extends higher-order hereditary Harrop logic by providing definition mechanisms for functions in such a way that the logical specification of the function, rather than the function itself, may be used in proof search.

In Chap. 3, we define, for the first-order hereditary Harrop fragment of LJ, the class of uniform linear focused (ULF) proofs (suitable for goal-directed search with backchaining and unification) and show that the ULF-proofs are in 1-1 correspondence with the expanded normal deductions, in Prawitz's sense. We give a system of proof-term annotations for LJ-proofs (where proof-terms uniquely represent proofs). We define a rewriting system on proof-terms (where rules represent a subset of Kleene's permutations in LJ) and show that its irreducible proof-terms are those representing ULF-proofs, and that it is weakly normalising. We also show that the composition of Prawitz's mappings between LJ and NJ, restricted to ULF-proofs, is the identity.

We take the view of logic programming where a program P is a set of formulae, a goal G is a formula, and the different means of achieving G w.r.t. P correspond to the expanded normal deductions of G from the assumptions in P (rather than the traditional view, whereby the different means of goal-achievement correspond to the different answer substitutions).

LFPL is defined in Chap. 4, by means of a sequent calculus. As in LeFun, it extends logic programming with functions and provides mechanisms for defining names for functions, maintaining proof search as the computation mechanism (contrary to languages such as ALF, Babel, Curry and Escher, based on equational logic, where the computation mechanism is some form of rewriting). LFPL also allows definitions for declaring logical properties of functions, called definitions of dependent type. Such definitions are of the form (f, x) =def (A, w) : ∃x:ρ.F, where f is a name for A and x is a name for w, a proof-term witnessing that the formula [A/x]F holds (i.e. A meets the specification ∃x:ρ.F). When searching for proofs, it may suffice to use the formula [A/x]F rather than A itself.

We present an interpretation of LFPL into NNlambdanorm, a natural deduction system for hereditary Harrop logic with lambda-terms. The means of goal-achievement in LFPL are interpreted in NNlambdanorm essentially by cut-elimination, followed by an interpretation of cut-free sequent calculus proofs as normal deductions. We show that the use of definitions of dependent type may speed up proof search, because the equivalent proofs using no such definitions may be much longer and because normalisation may be done lazily, since not all parts of the proof need to be exhibited. We sketch two methods for implementing LFPL, based on goal-directed proof search, differing in the mechanism for selecting definitions of dependent type on which to backchain. We discuss techniques for handling the redundancy arising from the equivalence of each proof using such a definition to one using no such definitions.
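For readers unfamiliar with the style of search the thesis builds on, here is a toy Haskell sketch of goal-directed, backchaining proof search over propositional Horn clauses; it omits unification, quantifiers, and everything specific to hereditary Harrop logic and LFPL, and all identifiers are invented for the sketch.

```haskell
-- Propositional Horn clauses: a head atom together with body atoms.
data Clause = Clause { hd :: String, body :: [String] }

type Program = [Clause]

-- Goal-directed search with backchaining: to achieve an atomic goal,
-- pick a program clause whose head matches and achieve its body.
-- The Int argument bounds the proof depth so the search terminates.
prove :: Program -> Int -> String -> Bool
prove _    0     _    = False
prove prog depth goal =
  any tryClause [ c | c <- prog, hd c == goal ]
  where
    tryClause c = all (prove prog (depth - 1)) (body c)

-- A tiny example program and query.
exampleProgram :: Program
exampleProgram =
  [ Clause "wet"     ["raining"]
  , Clause "raining" []
  ]

exampleQuery :: Bool
exampleQuery = prove exampleProgram 5 "wet"   -- True
```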
10

Automatic complexity analysis of logic programs.

Lin, Nai-Wei. January 1993
This dissertation describes research toward automatic complexity analysis of logic programs and its applications. Automatic complexity analysis of programs concerns the inference of the amount of computational resources consumed during program execution, and has been studied primarily in the context of imperative and functional languages. This dissertation extends these techniques to logic programs so that they can handle nondeterminism, namely, the generation of multiple solutions via backtracking. We describe the design and implementation of a (semi-)automatic worst-case complexity analysis system for logic programs. This system can conduct the worst-case analysis for several complexity measures, such as argument size, number of solutions, and execution time.

This dissertation also describes an application of such analyses, namely, a runtime mechanism for controlling task granularity in parallel logic programming systems. The performance of parallel systems often starts to degrade when the concurrent tasks in the systems become too fine-grained. Our approach to granularity control is based on time complexity information. With this information, we can compare the execution cost of a procedure with the average process creation overhead of the underlying system to determine at runtime if we should spawn a procedure call as a new concurrent task or just execute it sequentially. Through experimental measurements, we show that this mechanism can substantially improve the performance of parallel systems in many cases.

This dissertation also presents several source-level program transformation techniques for optimizing the evaluation of logic programs containing finite-domain constraints. These techniques are based on number-of-solutions complexity information. The techniques include planning the evaluation order of subgoals, reducing the domain of variables, and planning the instantiation order of variable values. This application allows us to solve a problem by starting with a more declarative but less efficient program, and then automatically transforming it into a more efficient program. Through experimental measurements we show that these program transformation techniques can significantly improve the efficiency of the class of programs containing finite-domain constraints in most cases.
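The granularity-control rule sketched in the abstract, spawning a call as a concurrent task only when its estimated cost exceeds the task-creation overhead, can be illustrated in a few lines. The Haskell below is an invented analogue (the dissertation targets parallel logic programming systems, not Haskell); the constant `spawnOverhead` and the cost function `estimatedCost` are placeholders for figures the analysis and the runtime system would supply.

```haskell
import Control.Parallel (par, pseq)

-- Assumed constant: average overhead of creating a parallel task,
-- in the same abstract units as the cost estimate below.
spawnOverhead :: Integer
spawnOverhead = 1000

-- An invented cost estimate for a call as a function of input size;
-- an automatic complexity analysis would derive such a function.
estimatedCost :: Integer -> Integer
estimatedCost n = 2 ^ min n 40

-- Parallel Fibonacci with granularity control: spark the recursive
-- call as a task only when its estimated cost exceeds the overhead.
fib :: Integer -> Integer
fib n
  | n < 2                                 = n
  | estimatedCost (n - 1) > spawnOverhead = l `par` (r `pseq` (l + r))
  | otherwise                             = l + r
  where
    l = fib (n - 1)
    r = fib (n - 2)

main :: IO ()
main = print (fib 30)
```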
