21

Practical implementation of a dependently typed functional programming language

Brady, Edwin January 2005 (has links)
Types express a program's meaning, and checking types ensures that a program has the intended meaning. In a dependently typed programming language types are predicated on values, leading to the possibility of expressing invariants of a program's behaviour in its type. Dependent types allow us to give more detailed meanings to programs, and hence be more confident of their correctness. This thesis considers the practical implementation of a dependently typed programming language, using the Epigram notation defined by McBride and McKinna. Epigram is a high level notation for dependently typed functional programming, elaborating to a core type theory based on Luo's UTT, using Dybjer's inductive families and elimination rules to implement pattern matching. This gives us a rich framework for reasoning about programs. However, a naive implementation introduces several run-time overheads since the type system blurs the distinction between types and values; these overheads include the duplication of values, and the storage of redundant information and explicit proofs. A practical implementation of any programming language should be as efficient as possible; in this thesis we see how the apparent efficiency problems of dependently typed programming can be overcome and that in many cases the richer type information allows us to apply optimisations which are not directly available in traditional languages. I introduce three storage optimisations on inductive families: forcing, detagging and collapsing. I further introduce a compilation scheme from the core type theory to G-machine code, including a pattern matching compiler for elimination rules and a compilation scheme for efficient run-time implementation of Peano's natural numbers. We also see some low level optimisations for removal of identity functions, unused arguments and impossible case branches. As a result, we see that a dependent type theory is an effective base on which to build a feasible programming language.
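
To make "types predicated on values" concrete, here is a minimal sketch in Haskell (an illustration only; Epigram's notation is not reproduced in the abstract): a vector type indexed by its length, where the type system itself rules out taking the head of an empty vector, exactly the kind of impossible case branch the thesis's optimisations erase at run time.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Natural numbers, promoted to the type level with DataKinds.
data Nat = Z | S Nat

-- A vector whose type records its length: a type predicated on a value.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- 'vhead' is total: the empty-vector case cannot even be written,
-- so no run-time check (and no impossible branch) is needed.
vhead :: Vec ('S n) a -> a
vhead (Cons x _) = x

main :: IO ()
main = print (vhead (Cons (1 :: Int) (Cons 2 Nil)))  -- 1
```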
22

Studies in the design and implementation of programming languages for symbol manipulation

Rees, D. J. January 1970 (has links)
Compared with the development of computing hardware, the development of programming languages has followed a different course. Hardware innovations such as the use of transistors and integrated circuitry have resulted in machines with very substantially improved capabilities, making older machines and even comparatively modern machines obsolescent. The programming languages currently in most widespread use, however, remain those which were already in use as many as ten years ago, namely FORTRAN, ALGOL 60, and COBOL. Nevertheless, considerable improvements can be made to these languages. The reasons why no such improvements have been made appear to be primarily twofold. Firstly, they are regarded as 'standard' languages, which, in order to facilitate transferability of programs, has made them virtually immutable. Secondly, they can be employed in almost all programming situations without the need for change. Instead, very many other languages have been designed and implemented with particular objectives in view, but which almost invariably limit their application to a narrow field. Only recently have attempts been made to unify some of the developments under the cloak of a single language (PL/1 and ALGOL 68). Data structures are a particular example of the features that have been incorporated. There are still considerable omissions, however. For instance, neither language has incorporated list processing or symbol manipulation facilities within its basic framework. The latter seems most surprising. With the increased capabilities of modern computers and the consequent broadening of their range of application, techniques involving symbol manipulation are becoming increasingly important. Natural language processing, such as the analysis of texts for authorship and mechanical translation, and formal manipulations, such as those involved in mechanical theorem-proving and algebraic formula manipulation, are some obvious applications. The last mentioned, that of algebraic manipulation of formulae, is one of the most important applications. Several systems, notably FORMAC, have been developed for this purpose. With the advent of multi-access computing systems a much greater interaction between man and machine is becoming possible, where the advantages of algebraic manipulation and mathematical assistance packages are felt the greatest. This further demonstrates the need for symbol manipulation facilities to be available together with normal arithmetic facilities in a programming language, for not only must the formulae be manipulated but also they must be evaluated in normal arithmetic terms. This combination has not been completely satisfactorily achieved in any languages developed in the past. The present investigation is an attempt to overcome this deficiency. A language called ASTRA has been the result. Before discussing the design and implementation of ASTRA, several existing languages are examined in order to discern the desirable properties of a language for symbol manipulation. It is the belief of the present author that the features of ASTRA described herein represent an advance on previous languages. The methods used in the ASTRA compiler are also described.
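
The combination the abstract calls for, formal manipulation of formulae alongside their ordinary arithmetic evaluation, can be sketched in a few lines of Haskell (an illustration of the idea only; ASTRA's own syntax is not shown in the abstract):

```haskell
-- A symbolic expression type supporting both formal manipulation and
-- ordinary arithmetic evaluation in one language.
data Expr = Const Double | X | Add Expr Expr | Mul Expr Expr
  deriving Show

-- Formal manipulation: symbolic differentiation with respect to X.
diff :: Expr -> Expr
diff (Const _) = Const 0
diff X         = Const 1
diff (Add u v) = Add (diff u) (diff v)
diff (Mul u v) = Add (Mul (diff u) v) (Mul u (diff v))

-- Ordinary arithmetic: evaluate the same formula at a point.
eval :: Double -> Expr -> Double
eval _ (Const c) = c
eval x X         = x
eval x (Add u v) = eval x u + eval x v
eval x (Mul u v) = eval x u * eval x v

main :: IO ()
main = print (eval 3 (diff (Mul X X)))  -- d/dx of x*x at x = 3, giving 6.0
```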
23

Bifibrational parametricity : from zero to two dimensions

Orsanigo, Federico January 2017 (has links)
In this thesis we use bifibrations in order to study relational parametricity. There are three main contributions in this thesis. First, through the lenses of bifibrations, we give a new framework for models of parametricity. This allows us to make some of the underlying categorical structure in Reynolds' original work clearer. Using the same approach we then give a universal property for the interpretation of forall types: they are characterized as terminal objects in a certain category. The universal property permits us to prove both Reynolds' Identity Extension Lemma and Abstraction Theorem. The third contribution consists in defining two-dimensional parametricity. The insight derived from the bifibrational approach leads to a generalization of parametricity to proof relevant relations, incorporating higher-dimensional relations between relations. We call the resulting theory two-dimensional parametricity.
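
A standard consequence of Reynolds' Abstraction Theorem, which the thesis re-proves in its bifibrational setting, is that parametric polymorphism constrains behaviour. Here is a minimal Haskell sketch of the folklore example (not the thesis's proof-relevant machinery):

```haskell
-- By parametricity, the only total function of this type is the identity:
-- knowing nothing about 'a', it can only return its input.
poly :: a -> a
poly x = x

-- An instance of the free theorem: poly commutes with every function f,
-- i.e. poly . f == f . poly.
main :: IO ()
main = print (poly (succ (3 :: Int)) == succ (poly 3))  -- True
```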
24

A generic high-level specification language for non-functional properties of component-based systems

Alreshidi, Abdulrahman Nassar January 2016 (has links)
Component-based software development helps in providing reuse of components and reducing the complexity of software systems. Different components work together to produce a complete system, which requires a good understanding of the way the components interact with each other. Component reuse requires a high-level specification, among other things of non-functional properties (NFPs), as these properties control the way the components coordinate with each other. The complexity of modern software systems demands a generic and flexible language for formal specification of the functional properties and NFPs of the system, so that the different components in a system can have a well-defined behaviour expectation. The non-functional properties of a component-based system are an important part of its specification because they highlight the non-functional perspective of the system. They also help in implementing functional elements with the constraints on NFPs taken into consideration. The absence of a specification of NFPs can render the system unusable, because the functional implementation may not have considered the constraints of the system's working environment: the component developer will have had no clearly defined non-functional objectives for the system. The formal specification of NFPs for components and their interaction with each other can help in implementing reliable systems. Incorporating these design concepts in the language specification describes the usage context of language features in a clear and precise manner. In this thesis, we develop a novel generic specification language (QML/CS) for NFPs of component-based systems. Defining such a high-level specification language using a standard meta-modelling approach is challenging because its definition requires multi-level modelling. We employ a deep meta-modelling technique to address this complex problem. We begin by discussing the key concepts used, then show how our meta-model is defined. In addition, we show how our meta-model for QML/CS overcomes the issues of standard meta-modelling languages like UML and the mapping of a measurement to a concrete application. Finally, we show a prototype for QML/CS and discuss how the mapping of QML/CS expressions into TLA+ specifications can define the QML/CS semantics.
25

Modular verification of equivalence for memory allocating procedures

Wood, Timothy January 2016 (has links)
Verifying the equivalence of programs has been applied in many situations: for example, proving the correctness of bug-fixes, refactorings, compilation, and optimisation; proving program continuity; proving non-interference in secure information flow; proving abstraction and refinement relationships between programs; and proving that programs conform to differential privacy policies. Verifying the equivalence of heap-manipulating procedures where the order and amount of memory allocation differ is challenging for state-of-the-art equivalence verifiers. We describe a fully automatic program equivalence tool, and propose a verification methodology, for such dynamically allocating programs. Recent years have seen significant progress toward fully automatic program equivalence verification, with the release of several tools taking a variety of approaches. Two main approaches are to use a weakest-precondition-based program verifier or a bounded model checker. One such tool has built-in support for programs that differ in the order of memory allocation; it uses a bounded model checker to discharge some proof obligations and restricts the allowable shapes of heap data structures to trees. We describe a fully automatic program equivalence verification tool for a simple object-oriented language. It has a notion of procedure equivalence that is powerful enough to allow procedures with different orders and amounts of memory allocation or garbage creation to be considered equivalent, with no restrictions on heap shapes. Our tool establishes equivalence by verifying that procedures result in isomorphic heaps. The tool is built on top of an off-the-shelf weakest-precondition-based verifier which itself uses an SMT solver to discharge proof obligations. A naive encoding of procedure equivalence would require the verification tool to produce a witness to the heap isomorphism before and after procedure calls, which SMT-based tools are not very good at. Instead we propose a modular verification methodology, called RIE, that allows us to soundly establish heap isomorphism by checking that an approximation preserves heap equality. RIE then allows us to assume that whenever we can establish an isomorphism between parts of stores, those stores are in fact equal, and that whenever equivalent procedures are called in an isomorphic manner, their effects are equal. RIE also allows our tool to handle some cases where there is no simulation between the recursive procedure calls of the programs being compared. We prove that RIE is sound for a simple programming language that includes non-deterministic allocation, unbounded recursion, and unbounded heap updates, and provide intuitions for the proof.
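
Equivalence up to heap isomorphism can be pictured with a small model (a Haskell sketch assuming a toy heap of addresses mapped to pointer lists; the thesis's object language is not reproduced here): two procedures allocate the same structure in opposite orders, so their heaps differ as maps yet agree under an address renaming.

```haskell
import qualified Data.Map as Map

type Addr = Int
type Heap = Map.Map Addr [Addr]  -- each cell holds pointers to other cells

-- Procedure A allocates the child at address 0, then a parent at 1.
procA :: (Heap, Addr)
procA = (Map.fromList [(0, []), (1, [0])], 1)

-- Procedure B allocates the parent first, so the addresses are swapped.
procB :: (Heap, Addr)
procB = (Map.fromList [(0, [1]), (1, [])], 0)

-- The witnessing isomorphism: swap addresses 0 and 1.
rename :: Addr -> Addr
rename 0 = 1
rename 1 = 0
rename a = a

applyIso :: Heap -> Heap
applyIso h = Map.fromList [ (rename a, map rename ps) | (a, ps) <- Map.toList h ]

-- Unequal as maps, but equal after renaming: the results are isomorphic.
main :: IO ()
main = do
  print (fst procA == fst procB)                   -- False
  print (applyIso (fst procB) == fst procA
           && rename (snd procB) == snd procA)     -- True
```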
26

Inferring visual contracts from Java applications

Alshanqiti, Abdullah Mahfodh M. January 2017 (has links)
Visual contracts model the operations of components or services by pre- and post-conditions formalised as graph transformation rules. They provide a precise, intuitive notation to support testing, understanding and analysis of software. However, creating a detailed model of a system in any language is error-prone. Visual contracts are no exception, and their specification of object states and transformations requires a deeper understanding of a system than models of externally visible behaviour do. This limits their applicability in testing, verification and program understanding; an effective technique for extracting visual contracts automatically would therefore enable their wider use. In this thesis we study a reverse engineering approach to these problems, extracting visual contracts dynamically from existing systems. We propose an inference solution and implement a prototype tool in Java, with empirical evaluations of its performance, completeness, correctness and utilisation. The resulting contracts give an accurate description of the observed object transformations, their effects and preconditions in terms of object structures, parameter and attribute values, and allow generalisation by universally quantified (multi) objects. They support program understanding in general, and the analysis of tests, based on a concise, visual and comprehensive representation of operations' behaviour, in particular.
27

Verifying information flow and metaprogramming in dynamically typed languages

Lester, Martin Mariusz January 2015 (has links)
The ubiquity of JavaScript in Web applications means that its analysis has become an important security problem. This thesis develops techniques for analysing information flow in JavaScript programs and verifying the absence of undesirable flows (for example, of sensitive data to untrusted third parties). JavaScript presents a unique combination of challenges not usually addressed by information flow analyses: its semantics are quaint and poorly specified, making formal reasoning largely infeasible; it is dynamically typed, which precludes the use of the most advanced existing analyses; and its eval construct, which executes a string as program code, allows arbitrary behaviour to result from data manipulated within the program. The thesis focuses on the last of these problems. It considers an idealised subset of JavaScript and augments it with staged metaprogramming, a formalism that captures the construction, execution and manipulation of code templates. The resulting language is called SLamJS. An information flow analysis for SLamJS is developed and its correctness is proved. This builds on the existing analysis 0CFA, but adds support for metaprogramming (paying particular attention to the difficult behaviour of variable capture) and information flow. In order to demonstrate the applicability of this analysis to JavaScript, where eval-using programs operate on code strings formed through concatenation, rather than splicing of templates, an algorithm to transform an eval-using program into one that uses staged metaprogramming is also developed. The transformation repurposes an existing lexer/parser (for example, one produced by the popular tools lex and yacc) to parse fragments of program code out-of-order. The transformed program is then a suitable target for the analysis. This multi-step approach is vital to allow subdivision of the complex overall problem into smaller, more manageable parts.
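
The kind of property such an analysis certifies can be pictured with a tiny label-propagation model (a Haskell sketch of a two-point security lattice, standard in information-flow work and not SLamJS's actual analysis): whenever secret data contributes to a result, the result's label is at least Secret.

```haskell
-- Security labels ordered Public < Secret.
data Label = Public | Secret deriving (Eq, Ord, Show)

-- Join (least upper bound): a result is as secret as its most secret input.
lub :: Label -> Label -> Label
lub = max

-- A value tagged with the label of the data it was derived from.
data Labelled a = Labelled Label a deriving Show

-- Combining labelled values joins their labels, so taint propagates.
combine :: (a -> b -> c) -> Labelled a -> Labelled b -> Labelled c
combine f (Labelled l1 x) (Labelled l2 y) = Labelled (lub l1 l2) (f x y)

main :: IO ()
main = print (combine (+) (Labelled Public 1) (Labelled Secret (41 :: Int)))
-- Labelled Secret 42
```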
28

A domain-specific aspect language approach to distributed systems development

Soule, Paul January 2008 (has links)
Distributed applications are difficult to write. Programmers need to adhere to specific distributed systems programming conventions and frameworks, which makes distributed systems development complex and error-prone and ties the resultant application to the distributed system, because the application's code is tangled with the crosscutting concern of distribution. Existing mainstream programming languages, such as Java, do not provide language support for distribution. Rather, programmers must rely on object-oriented distribution frameworks to provide distribution support. Although highly successful, the cost of using these frameworks is that the resultant code is tied to the framework. Object-oriented frameworks in general, and distribution frameworks in particular, can therefore be considered crosscutting in nature because the framework's code, either via inheritance or containment, is scattered throughout the application's code, thereby tying the application to the framework. This is a particular concern in distributed systems development because distribution frameworks impose a large code overhead due to the requirements distributed systems impose, such as the need to catch distribution-specific exceptions, locate and bind to distributed objects, locate another server in the event that the current server becomes unavailable, and adhere to programming conventions dictated by the framework, such as implementing framework-specific interfaces. Consequently, developing distributed applications is complex and error-prone and results in application components that are tied to the distribution framework and cannot easily be reused outside the application. In this thesis we address the above issues and present four contributions to distributed systems development. Firstly, we introduce the concept of a Distribution Definition Language, a high-level domain-specific aspect language that generalises the distribution concern by describing the classes and methods of an existing application to be made remote, the distributed system to use to make them remote, and the recovery mechanism to use in the event of a remote error. Secondly, we provide the ability for multiple distribution protocols to be applied to the same code base, thereby generalising the distribution concern. Thirdly, we allow distribution awareness to be applied to applications in such a way that the application is oblivious of the distribution implementation and recovery mechanism yet is able to fully participate in both. Finally, we provide a simplified approach to the development of distributed systems that allows an application to be either distributed or non-distributed, thereby improving software reuse and simplifying the testability of distributed applications, as applications may be functionally tested before having the distribution and recovery concerns applied. We introduce a software tool in the form of the RemoteJ compiler/generator that uses information contained in the Distribution Definition Language to generate the distributed-system-specific code and apply it to the application using bytecode manipulation and generation techniques. Finally, we evaluate our contributions and show that the concept of a Distribution Definition Language simplifies the development of distributed applications whilst allowing for greater reuse of application components.
29

The geometry of implementation

Mackie, Ian Craig January 1994 (has links)
This thesis aims to develop efficient implementation techniques for functional programming languages using novel technology arising from recent work on Linear Logic, the Geometry of Interaction, and Optimal Reductions. The philosophy behind the implementations that we consider is that they should arise naturally from some underlying semantics of the language. We are then in a setting where correctness becomes a triviality, and semantic tools are at hand to assist directly with any optimisations, transformations, etc. that we may apply. In other words, our interest is to implement the theory directly, rather than to find a theory that fits the practice; putting an engineering perspective on the theory. We begin in Chapter 1 with general motivations for the use of the semantic tools that we will utilise, in particular the Geometry of Interaction. We go on in Chapter 2 to give basic definitions and background for the ideas used in this thesis. In Chapter 3 we look at translating the λ-calculus into Linear Logic proof structures, which will be the starting point of all our implementations. We then go on to extract the result from these proof structures. Chapter 4 takes a detailed look at the notions related to the Geometry of Interaction, namely paths, labels and types. Lafont's interaction nets provide the first implementation of Linear Logic that we consider, in Chapter 5. We give one very simple implementation of the λ-calculus as an interaction net, and work through the proofs of correctness using ideas from Girard's Geometry of Interaction. We address the problem of Lévy's theory of optimality too; however, we stress that this is not one of our goals. We are interested in efficient implementations of functional languages from a very practical point of view. An alternative implementation is given in Chapter 6 which is based on data-flow. We show that we can set up a computational interpretation of the Geometry of Interaction that mimics cut-elimination in the λ-calculus. This chapter concludes by giving several very simple concrete implementations of this data-flow on a standard von Neumann architecture. More specifically, we give a coding of the Geometry of Interaction in assembly language. We conclude in Chapter 7 and suggest some further extensions to this work. In particular, we look at hybrid systems, derived from combinations of the ideas presented in the two previous chapters.
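
For orientation, translations of the λ-calculus into Linear Logic build on Girard's decomposition of the intuitionistic function space into linear connectives; in its standard call-by-name form (the textbook formulation, which may differ in detail from the translation used in the thesis):

```latex
(A \to B)^{\circ} \;=\; {!}A^{\circ} \multimap B^{\circ}
```

That is, a function may use its argument many times only under the exponential modality $!$, which is what makes the duplication and sharing of arguments explicit in the proof structures.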
30

Logic and the development of programming languages, 1930-1975

Priestley, P. M. January 2008 (has links)
Compared with the history of computing hardware, the history of software is in a relatively undeveloped state. In particular, the history of programming languages still consists for the most part of technical accounts presenting a rather Whiggish perspective on developments. Given the importance of software in the contemporary world, however, it is important to develop a more sophisticated understanding of the medium in which it is expressed. This thesis considers some aspects of this history with the aim of examining the influence of formal logic on the evolution of notations for expressing computer programs. It is argued that this was not a natural or inevitable application of theory to practice, as is sometimes suggested, but a complex and contingent process with a rich history of its own. Two introductory chapters discuss the work on computability carried out by logicians in the mid-1930s, and the controversial topic of the role of logic in the invention of the computer. The body of the thesis proceeds chronologically, considering machine codes, the introduction of higher-level notations, structured programming and software engineering, and the early object-oriented languages. The picture that emerges is that formal logic was deliberately employed by programming language designers to provide a model for a theoretical understanding of programming languages and the process of program development. This led to a flourishing research programme in the 1960s and 1970s, many of whose results are of lasting significance and value. The thesis concludes by examining the early history of object-oriented languages, arguing that this episode shows the emergence of limits to the applicability of the logical research programme.
