About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11. High-level synthesis using structural input
Finlay, Iain William (January 1992)
The task of a high-level behavioural synthesis system is to create a structure to implement a given abstract specification of behaviour. The behaviour is specified at the algorithmic level, typically in the form of a high-level programming language. The synthesized structure is described at the register-transfer level. In such systems the synthesis task is guided only by the behaviour and some physical design constraints such as speed and area. This approach frequently leads to difficulties in synthesizing a suitable architecture. The synthesis system reported in this thesis tackles this problem by enabling the designer to specify structural input alongside the behaviour. The structural input is described at the register-transfer level and need not define a complete structure. The synthesis tool makes use of this input structure by incorporating it into the design where appropriate or as instructed. This structurally directed approach is shown to give the designer greater control over structural aspects of the design in addition to enabling greater exploration of possible structural solutions.
12. The limits of a decoupled out-of-order superscalar architecture
Jones, Graham P. (January 1999)
This thesis presents a study of a technique for improving performance in out-of-order superscalar architectures. It identifies three technological trends limiting superscalar performance: the increasing cost of a main memory access, control dependencies, and the greater hardware complexity of out-of-order execution. Decoupling is a technique that can provide higher performance by partitioning the machine into dynamically executing, asynchronous instruction streams. It offers the capability to improve ILP, through effective latency hiding and dynamic scheduling, and to reduce hardware complexity, through decentralised logic. This thesis evaluates this capability by investigating the effectiveness of decoupled out-of-order superscalar architectures. It identifies the degree to which operations can reorder (the degree of reordering) as the critical dimension of an out-of-order superscalar architecture, and investigates the effectiveness of decoupling by focusing on those design issues that determine the degree of reordering while relaxing all other architectural constraints. This approach allows the thesis to establish the limitations of decoupled out-of-order superscalar architectures. The results show that a decoupled architecture, by dynamically reordering small instruction windows, provides a possible solution to the problems of latency hiding and issue logic complexity.
13. Annotated transition systems for verifying concurrent programs
Paczkowski, Pawel (January 1990)
We propose what we view as a generalization of an assertional approach to the verification of concurrent programs. In doing so we put the emphasis on reflecting the semantic content of programs, rather than their syntax, in the adopted pattern of reasoning. Assertions therefore annotate not the text of a program but a transition system representing an object derived from the operational semantics: the control flow of a program. Unlike in the case of sequential programs, where annotating a program text and annotating its control flow amount to the same thing, these two patterns of attaching assertions differ in the presence of concurrency. The annotated transition systems (annotations, for short) that we introduce, and the satisfaction relation between behaviours and annotations, are intended to capture the basic idea of assertional reasoning, i.e. of characterizing the reachable states of computations by assertions and deriving program properties from such characterizations. We emphasize the role of control flow as, on the one hand, a separable ingredient of the operational semantics and, on the other, a major concern in formulating and verifying properties of concurrent programs. The rigorous definition of control flow proves important for analysing deadlock freedom and mutual exclusion. We develop proof techniques for showing partial correctness, mutual exclusion, deadlock freedom, and termination of concurrent programs. The relative ease in establishing soundness and completeness of the proposed proof methods is due to the fact that the semantics is given priority in suggesting the pattern of reasoning and the abstractions of program behaviours. Moreover, as annotations can faithfully represent the control flows of programs, no need for auxiliary variables arises.
We also consider a method, exploiting a partial commutativity relation on actions, which allows us to isolate some inessential interleavings of concurrent actions and ignore them in correctness proofs. The concepts of trace theory provide a convenient framework for this study. Investigating this particular issue in an assertional framework was in fact an important objective from the outset of this work.
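The core idea, characterizing the reachable states of a transition system by assertions and deriving properties such as mutual exclusion from that characterization, can be sketched concretely. The toy model below (an illustration only, not the thesis's formalism) gives two processes guarded by a lock, enumerates the reachable states, and checks the mutual-exclusion assertion at every one of them.

```python
# Toy transition system for two processes synchronising on a lock.
# A state is (pc0, pc1, lock): each pc is "idle" or "crit"; lock holds the
# index of the owner, or -1 when free.
def successors(state):
    pc0, pc1, lock = state
    succs = []
    for i, pc in enumerate((pc0, pc1)):
        if pc == "idle" and lock == -1:      # acquire the lock, enter critical section
            pcs = ["crit" if j == i else p for j, p in enumerate((pc0, pc1))]
            succs.append((pcs[0], pcs[1], i))
        elif pc == "crit" and lock == i:     # release the lock, leave critical section
            pcs = ["idle" if j == i else p for j, p in enumerate((pc0, pc1))]
            succs.append((pcs[0], pcs[1], -1))
    return succs

def reachable(init):
    seen, stack = {init}, [init]
    while stack:
        s = stack.pop()
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

states = reachable(("idle", "idle", -1))
# The assertion annotating every state: both processes are never in "crit".
mutex = all(not (pc0 == "crit" and pc1 == "crit") for pc0, pc1, _ in states)
print(len(states), mutex)  # 3 True
```

Because the assertion is attached to the transition system rather than to program text, the same check applies regardless of how the two processes interleave.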
14. Modelling, analysing and model checking commit protocols
Kempster, Tim (January 2000)
Distributed transactions are playing, and will continue to play, an increasingly important role in all forms of electronic business. A key ingredient of a distributed transaction is a commit protocol. We present a novel modelling technique for commit protocols and the environments in which they execute, and devise a new commit protocol, X3PC, using this technique. We demonstrate that our technique is flexible and formal enough to support automatic verification of behavioural properties of commit protocols, using techniques such as model checking as well as more traditional proof techniques. Many different properties of commit protocols can be verified by expressing them in temporal logics and then model checking them. In order to carry out model checking, a labelled transition system must first be generated from our models. We describe different techniques that allow us to generate transition systems automatically. One such technique is an abstraction scheme that produces small finite transition systems for models with arbitrary numbers of processors; using this abstraction makes model checking commit protocols with arbitrary numbers of participant processes efficient. The role of commit protocols in providing transaction isolation for distributed transactions is also studied. We present novel definitions of the four levels of transaction isolation first proposed by the ANSI community. By first modelling a system of multiple concurrent distributed transactions using our new technique, we show how to verify that a particular level of isolation is attained within the system, once again demonstrating the applicability of our modelling technique.
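The flavour of this kind of verification can be sketched with a far simpler protocol than X3PC. The toy model below (a hypothetical illustration, not the thesis's notation) describes a classic two-phase commit between a coordinator and two participants as a transition system, explores every interleaving exhaustively, and checks the safety property that no reachable state mixes a committed participant with an aborted one.

```python
# Toy two-phase commit: a coordinator plus two participants.
# Participant states: "init" -> "yes"/"no" (vote) -> "commit"/"abort".
# Coordinator states: "init" -> "commit"/"abort".
def successors(state):
    coord, parts = state
    succs = []
    for i, p in enumerate(parts):
        if p == "init":                              # participant i casts its vote
            for vote in ("yes", "no"):
                new = list(parts); new[i] = vote
                succs.append((coord, tuple(new)))
        elif coord in ("commit", "abort") and p in ("yes", "no"):
            new = list(parts); new[i] = coord        # participant learns the decision
            succs.append((coord, tuple(new)))
    if coord == "init" and all(p in ("yes", "no") for p in parts):
        decision = "commit" if all(p == "yes" for p in parts) else "abort"
        succs.append((decision, parts))              # coordinator decides
    return succs

def reachable(init):
    seen, stack = {init}, [init]
    while stack:
        s = stack.pop()
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

states = reachable(("init", ("init", "init")))
# Safety (atomic commitment): no reachable state mixes commit and abort.
safe = all(not ("commit" in parts and "abort" in parts) for _, parts in states)
print(safe)  # True
```

A model checker does essentially this state-space exploration, except that the property is given in a temporal logic and the state space is generated from the protocol model automatically.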
15. Verification in ASL and related specification languages
Farres-Casals, Jorge (January 1992)
In recent years a new framework for specification has been defined around ASL [SW 83, Wir 86, ST 88a, ST 88b]. Stress has been put on defining a specification language consisting of a few powerful specification building operations (SBO's for short) with simple semantics and an elegant implementation notion. Some important features of this work are the generalization to an arbitrary institution [GB 84] of much previous work on algebraic specification, and the study of behavioural abstraction in the context of a model-oriented specification language. The basic research on formal specifications is generally regarded as the basis for a new generation of specification languages. These languages will instantiate ASL with their institution of interest, and will define their own specification constructs and implementation notion in terms of ASL's primitive SBO's and implementation notion. However, any useful formal framework for program development needs an inference system for the implementation relation, <i>i.e.</i> proofs that one specification implements another must be produced by a fixed family of rules without model-theoretic considerations. This poses a new and difficult problem for ASL, due to its model-oriented nature and the great generality of both the implementation relation and the SBO's. In this thesis we study this problem, starting from a simple specification language with only three SBO's and progressively adding other common SBO's. In the course of this analysis we encounter four main problems for the verification of implementations: hiding of auxiliary functions, behavioural abstraction, reachability constraints and parameterization. These can be considered classic problems of algebraic specification, and the study of their verification aspects at an institution-independent level provides valuable results for many other specification languages. The main contribution of the thesis is a set of new results, at an institution-independent level, for the verification of implementations w.r.t. specifications with hidden parts and abstracted specifications. Verification of reachability constraints is shown to lie below the institutional level; here a common institution for constraints is formally presented, exposing some previously ignored verification aspects. Finally, an original presentation of parameterization and structured implementations concludes the thesis. In conclusion, this thesis presents a collection of sublanguages, inference systems and side conditions which add a new dimension to the fascinating job started by ASL in [SW 83].
16. Models of sharing graphs : a categorical semantics of let and letrec
Hasegawa, Masahito (January 1997)
A general abstract theory for computation involving shared resources is presented. We develop models of <I>sharing graphs</I>, also known as term graphs, in terms of both syntax and semantics. According to the complexity of the permitted form of sharing, we consider four settings of sharing graphs. The simplest is first-order acyclic sharing graphs, represented by let-syntax; the others are extensions with higher-order constructs (lambda calculi) and/or cyclic sharing (recursive letrec binding). For each of the four settings, we provide an equational theory for representing the sharing graphs, and identify a class of categorical models which are shown to be sound and complete for the theory. The emphasis is put on the algebraic nature of sharing graphs, which leads us to a semantic account of them. We describe the models in terms of symmetric monoidal categories and functors, together with symmetric monoidal adjunctions and traced monoidal categories for interpreting higher-order and cyclic features. The models studied here are closely related to structures known as notions of computation, as well as to models of intuitionistic linear type theory. As an interesting implication of the latter observation, for the acyclic settings we show that our calculi conservatively embed into linear type theory. The models for higher-order cyclic sharing are of particular interest, as they support a generalized form of recursive computation; we look at this case in detail, together with the connection with cyclic lambda calculi. We demonstrate that our framework can accommodate Milner's <I>action calculi</I>, a proposed framework for general interactive computation, by showing that our calculi, enriched with suitable constructs for interpreting parameterized constants called controls, are equivalent to the closed fragments of action calculi and their higher-order/reflexive extensions. The dynamics, the computational counterpart of action calculi, is then understood as rewriting systems on our calculi, and interpreted as local preorders on our models.
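The operational payoff of the sharing that let-syntax denotes can be illustrated outside the categorical setting: in a term graph, a let-bound node is evaluated once, whereas naive tree substitution duplicates the redex. A minimal Python sketch (an illustrative assumption, not the thesis's calculus):

```python
calls = 0  # counts how many times the shared redex is evaluated

def expensive(n):
    global calls
    calls += 1
    return n * n

# Tree semantics: substituting the bound term duplicates the redex.
tree_result = expensive(7) + expensive(7)
tree_calls = calls

# Graph semantics: "let x = expensive(7) in x + x" shares a single node.
calls = 0
x = expensive(7)
graph_result = x + x
graph_calls = calls

print(tree_result, graph_result, tree_calls, graph_calls)  # 98 98 2 1
```

Both semantics agree on the result; they differ in the work performed, which is exactly the distinction the sharing-graph models are built to capture.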
17. Simulated annealing based datapath synthesis
Neil, John Paul (January 1994)
The behavioural synthesis procedure aims to produce optimised register-transfer level datapath descriptions from an algorithmic problem definition, normally expressed in a high-level programming language. The procedure can be partitioned into a number of subtasks linked by a serial synthesis flow. Graph-theoretic algorithms can be used to provide solutions to these subtasks. Many of these techniques, however, belong to a class of algorithms for which there is no exact solution computable in polynomial time. To overcome this problem, heuristics are used to constrain the solution space; their introduction, however, can cause the algorithm to terminate in a local cost minimum. This thesis supports a global formulation of the behavioural synthesis problem. Simulated annealing, an algorithm which can avoid local minima, forms the basis of the synthesis system reported here. A modular software system is presented in support of this approach. A novel data structure enables multiple degrees of optimisation freedom within the datapath solution space. Synthesis primitives, tightly coupled to a solution-costing mechanism directed towards the prevalent datapath implementation technologies, form the core of the system. The software is exercised over small and large-scale synthesis benchmarks. The synthesis paradigm is extended by the provision of optimisation routines capable of supporting the generation of functional pipelines.
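The annealing core of such a system can be sketched generically. In the sketch below the cost function and move set are toy stand-ins (assumptions for illustration), not the thesis's synthesis primitives or datapath costing mechanism; the key line is the acceptance test, which occasionally takes uphill moves and so lets the search escape local cost minima.

```python
import math
import random

def anneal(initial, neighbour, cost, t0=10.0, cooling=0.95, steps=2000, seed=0):
    """Generic simulated-annealing loop with geometric cooling."""
    rng = random.Random(seed)
    current, best, t = initial, initial, t0
    for _ in range(steps):
        candidate = neighbour(current, rng)
        delta = cost(candidate) - cost(current)
        # Accept uphill moves with probability exp(-delta / t): this is what
        # distinguishes annealing from greedy local search.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = candidate
        if cost(current) < cost(best):
            best = current
        t *= cooling  # lower the temperature, becoming greedier over time
    return best

# Toy stand-in problem: a bumpy 1-D cost with many local minima.
cost = lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x)
neighbour = lambda x, rng: x + rng.uniform(-0.5, 0.5)
solution = anneal(0.0, neighbour, cost)
print(cost(solution) <= cost(0.0))  # True: never worse than the start
```

In a datapath-synthesis setting, `neighbour` would perturb a candidate register-transfer structure (e.g. re-bind an operation to a different functional unit) and `cost` would estimate area and delay.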
18. Logic programming : operational semantics and proof theory
Andrews, James H. (January 1991)
Logic programming systems which use parallel strategies for computing 'and' and 'or' are theoretically elegant, but systems which use sequential strategies are far more widely used and do not fit well into the traditional theory of logic programming. This thesis presents operational and proof-theoretic characterisations for systems having each of the possible combinations of parallel or sequential 'and' and parallel or sequential 'or'. The operational semantics are in the form of an abstract machine. The four control strategies emerge as simple variants of this machine with varying degrees of determinism; some of these variants have equivalent, compositional operational semantics, which are given. The proof-theoretic characterisations consist of a single central sequent calculus, LKE (similar to Gentzen's sequent calculus for classical first-order logic), and sets of axioms which capture the success or failure of queries under the four control strategies in a highly compositional, logical way. These proof-theoretic characterisations can be seen as logical semantics of the logic programming languages. The proof systems can also be used in practice to prove more general properties of logic programs, although it is shown that they are unavoidably incomplete for this purpose. One aspect of this incompleteness is that it is not possible to derive all valid sequents having free variables; however, induction rules are given which can help to prove many useful sequents of this kind.
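The sequential strategies can be made concrete with a tiny propositional interpreter (a simplified illustration, far removed from the thesis's abstract machine or the LKE calculus): sequential 'and' tries subgoals left to right, and sequential 'or' tries the clauses for a goal in program order, backtracking when a clause body fails.

```python
# A propositional "program": each atom maps to a list of clause bodies.
program = {
    "a": [["b", "c"], ["d"]],   # a :- b, c.   a :- d.
    "b": [[]],                  # b.  (a fact: empty body)
    "c": [],                    # no clause for c, so c fails
    "d": [[]],                  # d.
}

def solve(goals):
    """Succeeds iff the conjunction of goals is derivable."""
    if not goals:
        return True
    first, rest = goals[0], goals[1:]          # sequential 'and': leftmost goal first
    for body in program.get(first, []):        # sequential 'or': clauses in order
        if solve(body + rest):
            return True
    return False                               # all clauses exhausted: backtrack

print(solve(["a"]))  # True: the first clause fails on c, the second succeeds via d
```

The parallel strategies differ precisely in not fixing these two iteration orders, which is why the sequential systems need their own semantic treatment.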
19. Correctness proofs of compilers and debuggers : an approach based on structural operational semantics
Da Silva, Fabio (January 1992)
In this thesis we study the use of semantics-based formal methods in the specification and proof of correctness of compilers and debuggers. We use a Structural Operational Semantics as the basis for the specification of compilers and propose a notion of correctness based on an observational equivalence relation. We define program evaluation and a notion of evaluation step based on a Structural Operational Semantics and use these definitions as the basis for the specification of debuggers. Debugger correctness is then defined by an equivalence relation between a specification and an implementation of the debugger based on the bisimulation concept. The main results of this thesis are: a definition of a variant of Structural Operational Semantics, called Relational Semantics, which is the underlying formalism of this thesis; the definition of a notion of Observational Equivalence between Relational Semantics Specifications; a formulation of the problem of compiler correctness using Observational Equivalence; an evaluation model for programming languages and a definition of an evaluation step; an abstract definition of Interpreter-debuggers; a specification notation for the formal specification of debuggers, called DSL; a notion of equivalence between debuggers using bisimulation; a study on Compiler-debuggers and the problems involved in their definition. These results form a theory for the formal specification and proofs of correctness of compilers and debuggers. Our starting point is that the use of this theory helps in building better compilers and debuggers. It is our goal to provide theoretical foundations and tools to show that our methods are achievable.
20. Dynamic programming using local optimality conditions for action elimination
Steiner, Erich Wolfgang (January 1999)
In the theory of dynamic programming (DP) the elimination of non-optimal actions is an important topic. For many DP problems the calculation is slow, and action elimination helps to speed it up. A large part of this thesis is dedicated to the development of action elimination procedures for various classes of DP problems; common to all of them is that they are based on local optimality conditions. Among the classes of DP problems considered are deterministic allocation problems and stochastic problems with either continuous or discrete state and action spaces. For DP problems with continuous state and action spaces the action elimination procedures are based on the Fritz John first-order optimality conditions; for problems with discrete state and action spaces they are based on local optimality conditions for discrete problems. It is shown that action elimination based on local optimality conditions usually leads to a speed-up of one order of magnitude. Chapters 7 and 8 discuss a constrained non-linear oil production optimization problem. In this problem most of the functions involved are continuous, but a few contain discontinuities, which seriously undermines the scope of local optimization. A hybrid algorithm combining a dual method, DP and local optimization is proposed, and computational results are presented. These results are then compared to those of another hybrid algorithm, which combines Tabu Search and local optimization.
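The shape of such a procedure can be sketched on a toy allocation problem. The pruning rule below is a simple optimistic bound, standing in for the thesis's local optimality conditions, and all numbers are illustrative assumptions: we allocate four units of a resource over three stages and skip any action whose bound shows it cannot beat the best action already found at that state.

```python
from functools import lru_cache

REWARD = [0, 3, 5, 6]          # concave reward for allocating 0..3 units at a stage
STAGES, UNITS = 3, 4
BEST_SINGLE = max(REWARD)      # optimistic per-stage bound for the remaining stages
eliminated = 0                 # how many actions the pruning rule skipped

@lru_cache(maxsize=None)
def value(stage, units):
    global eliminated
    if stage == STAGES:
        return 0
    best = float("-inf")
    # Trying larger allocations first makes the bound effective sooner.
    for a in range(min(units, len(REWARD) - 1), -1, -1):
        bound = REWARD[a] + BEST_SINGLE * (STAGES - stage - 1)
        if bound <= best:      # action elimination: a provably cannot improve
            eliminated += 1
            continue
        best = max(best, REWARD[a] + value(stage + 1, units - a))
    return best

result = value(0, UNITS)
print(result, eliminated > 0)  # 11 True
```

Because the bound is a true upper bound on what an action can achieve, the pruned search returns the same optimal value as exhaustive DP while examining fewer continuations.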
