
The effective application of syntactic macros to language extensibility

Campbell, William R. January 1978
Starting from B. M. Leavenworth's proposal for syntactic macros, we describe an extension language LE with which one may extend a base language LB for defining a new programming language LP. The syntactic macro processor is designed to minimise the overheads required for implementing the extensions and for carrying the syntax and data type error diagnostics of LB through to the extended language LP. Wherever possible, programming errors are flagged where they are introduced in the source text, whether in a macro definition or in a macro call. LE provides a notation, similar to popular extended forms of BNF, for specifying alternative syntaxes for new linguistic forms in the macro template, a separate assertion clause for imposing context-sensitive restrictions on macro calls which cannot be imposed by the template, and a non-procedural language which reflects the nested structure of the template for prescribing conditional text replacement in the macro body. A super user may use LE for introducing new linguistic forms to LB and redefining, replacing or deleting existing forms. The end user is given the syntactic macro in terms of an LP macro declaration with which he may define new forms which are local to the lexical environments in which they are declared in his LP program. Because the macro process is embedded in and directed by a deterministic top-down parse, the user can be sure that his extensions are unambiguous. Examples of macro definitions are given using a base language LB which has been designed to be rich enough in syntax and data types for illustrating the problems encountered in extending high-level languages. An implementation of a compiler/processor for LB and LE is also described. A survey of previous work in this area, summaries of LE and LB, and a description of the abstract target machine are contained in appendices.
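To make the template-and-replacement mechanism concrete, here is a minimal Python sketch of macro expansion driven by a single left-to-right scan of a token stream. The 'incr' keyword, the '$n' placeholder notation and the flat token representation are illustrative assumptions, not the LE notation of the thesis.

    # Minimal sketch (assumed notation, not LE): a macro maps a trigger keyword
    # plus a fixed number of argument tokens to a replacement token sequence,
    # and calls are rewritten during a single left-to-right scan.

    class Macro:
        def __init__(self, n_args, body):
            self.n_args = n_args          # number of argument tokens captured
            self.body = body              # replacement tokens; '$0', '$1', ... are slots

        def expand(self, args):
            return [args[int(t[1:])] if t.startswith('$') else t for t in self.body]

    def expand_all(tokens, macros):
        """Rewrite every macro call in a flat token list, allowing nested expansion."""
        out, i = [], 0
        while i < len(tokens):
            macro = macros.get(tokens[i])
            if macro and i + macro.n_args < len(tokens):
                args = tokens[i + 1:i + 1 + macro.n_args]
                out.extend(expand_all(macro.expand(args), macros))
                i += 1 + macro.n_args
            else:
                out.append(tokens[i])
                i += 1
        return out

    # 'incr x' expands to 'x := x + 1'
    macros = {'incr': Macro(1, ['$0', ':=', '$0', '+', '1'])}
    print(expand_all(['begin', 'incr', 'count', 'end'], macros))
    # ['begin', 'count', ':=', 'count', '+', '1', 'end']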

Effective termination techniques

Cropper, Nick I. January 1997
An important property of term rewriting systems is termination: the guarantee that every rewrite sequence is finite. This thesis is concerned with orderings used for proving termination, in particular the Knuth-Bendix and polynomial orderings. First, two methods for generating termination orderings are enhanced. The Knuth-Bendix ordering algorithm incrementally generates numeric and symbolic constraints that are sufficient for the termination of the rewrite system being constructed. The KB ordering algorithm requires an efficient linear constraint solver that detects the nature of degeneracy in the solution space, and for this a revised method of complete description is presented that eliminates the space redundancy that crippled previous implementations. Polynomial orderings are more powerful than Knuth-Bendix orderings, but are usually much harder to generate. Rewrite systems consisting of only a handful of rules can overwhelm existing search techniques due to the combinatorial complexity. A genetic algorithm is applied with some success. Second, a subset of the family of polynomial orderings is analysed. The polynomial orderings on terms in two unary function symbols are fully resolved into simpler orderings. Thus it is shown that most of the complexity of polynomial orderings is redundant. The order type (logical invariant), either r or A (numeric invariant), and precedence are calculated for each polynomial ordering. The invariants correspond in a natural way to the parameters of the orderings, and so the tabulated results can be used to convert easily between polynomial orderings and more tangible orderings. The orderings of order type are two of the recursive path orderings. All of the other polynomial orderings are of order type ω or ω², and each can be expressed as a lexicographic combination of r (weight), A (matrix), and lexicographic (dictionary) orderings. The thesis concludes by showing how the analysis extends to arbitrary monadic terms, and discussing possible developments for the future.
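For readers unfamiliar with the ordering, the following Python sketch shows the classical Knuth-Bendix comparison on ground terms: total symbol weight first, then a precedence on the head symbols, then the arguments lexicographically. The weights and precedence here are fixed, illustrative choices; the thesis instead generates them as constraints during completion.

    # Classical Knuth-Bendix ordering on ground terms; a term is (symbol, [args]).
    WEIGHT = {'0': 1, 's': 1, '+': 0}      # constants must have positive weight
    PREC   = {'0': 0, 's': 1, '+': 2}      # larger value = greater in the precedence

    def weight(term):
        head, args = term
        return WEIGHT[head] + sum(weight(a) for a in args)

    def kbo_greater(s, t):
        """True if s > t in the Knuth-Bendix ordering induced by WEIGHT and PREC."""
        ws, wt = weight(s), weight(t)
        if ws != wt:
            return ws > wt
        if PREC[s[0]] != PREC[t[0]]:
            return PREC[s[0]] > PREC[t[0]]
        for a, b in zip(s[1], t[1]):       # same head: first differing argument decides
            if a != b:
                return kbo_greater(a, b)
        return False

    zero = ('0', [])
    one  = ('s', [zero])
    lhs  = ('+', [one, zero])              # s(0) + 0
    rhs  = ('s', [('+', [zero, zero])])    # s(0 + 0)
    print(kbo_greater(lhs, rhs))           # True: equal weight, '+' above 's' in precedence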

On the development of Algol

Morrison, Ronald January 1979
The thesis outlines the major problems in the design of high-level programming languages. The complexity of these languages has caused users problems of intellectual manageability. Part of this complexity is caused by a lack of generality, which also causes a loss of power. The maxim of power through simplicity, simplicity through generality is established. To achieve this simplicity a number of ground rules (the principle of abstraction, the principle of correspondence and the principle of data type completeness) are discussed and used to form a methodology for programming language design. The methodology is then put into practice and the language S-algol is designed as the first member of a family of languages. The second part of the thesis describes the implementation of the S-algol language. In particular, a simple and effective method of compiler construction based on the technique of recursive descent is developed. The method uses a hierarchy of abstractions which are implemented as layers to define the compiler. The simplicity and success of the technique depend on the structuring of the layers and the choice of abstractions. The compiler is itself written in S-algol. An abstract machine to support the S-algol language is then proposed and implemented. This machine, the S-code machine, has two stacks and a heap with a garbage collector, and a unique method of procedure entry and exit. A detailed description of the S-code machine for the PDP-11 computer is given in the appendices. The thesis then describes the measurement tools used to aid the implementer and the user. The results of improvements in efficiency when these tools are used on the compiler itself are discussed. Finally, the research is evaluated and a discussion of how it may be extended is given.
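As an illustration of the recursive-descent style on which the compiler-construction method rests, here is a minimal Python parser in which each grammar rule is one routine calling the layer below it. The expression grammar is an assumption made for this example; it is not taken from S-algol.

    # Minimal recursive-descent sketch: one routine per grammar rule, each layer
    # calling the one below it.  The grammar here is illustrative only.
    import re

    def tokenize(src):
        return re.findall(r'\d+|[()+*]', src)

    class Parser:
        def __init__(self, tokens):
            self.tokens, self.pos = tokens, 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else None

        def eat(self, tok=None):
            t = self.peek()
            if t is None or (tok is not None and t != tok):
                raise SyntaxError(f'expected {tok!r}, found {t!r}')
            self.pos += 1
            return t

        def expression(self):              # expression ::= term { '+' term }
            value = self.term()
            while self.peek() == '+':
                self.eat('+')
                value += self.term()
            return value

        def term(self):                    # term ::= factor { '*' factor }
            value = self.factor()
            while self.peek() == '*':
                self.eat('*')
                value *= self.factor()
            return value

        def factor(self):                  # factor ::= number | '(' expression ')'
            if self.peek() == '(':
                self.eat('(')
                value = self.expression()
                self.eat(')')
                return value
            return int(self.eat())

    print(Parser(tokenize('2 * (3 + 4)')).expression())   # 14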

The imperative implementation of algebraic data types

Thomas, Muffy January 1988
The synthesis of imperative programs for hierarchical, algebraically specified abstract data types is investigated. Two aspects of the synthesis are considered: the choice of data structures for efficient implementation, and the synthesis of linked implementations for the class of ADTs which insert and access data without explicit key. The methodology is based on an analysis of the algebraic semantics of the ADT. Operators are partitioned according to the behaviour of their corresponding operations in the initial algebra. A family of relations, the storage relations of an ADT, is defined. They depend only on the operator partition and reflect an observational view of the ADT. The storage relations are extended to storage graphs: directed graphs with a subset of nodes designated for efficient access. The data structures in our imperative language are chosen according to properties of the storage relations and storage graphs. Linked implementations are synthesised in a stepwise manner by implementing the given ADT first by its storage graphs, and then by linked data structures in the imperative language. Some circumstances under which the resulting programs have constant time complexity are discussed.
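A hedged illustration of the general idea, with an ADT and names invented for this example rather than drawn from the thesis: a queue specified by equations such as front(add(empty, x)) = x belongs to the "insert and access without explicit key" class, and a linked implementation keeps the two access ends as designated nodes so the operations stay constant-time.

    # Linked realisation of an algebraically specified queue: empty, add, front, remove.
    class Node:
        def __init__(self, value):
            self.value, self.next = value, None

    class Queue:
        def __init__(self):
            self.head = self.tail = None   # designated nodes: constant-time access

        def add(self, x):                  # add(q, x): append at the tail
            node = Node(x)
            if self.tail is None:
                self.head = self.tail = node
            else:
                self.tail.next = node
                self.tail = node
            return self

        def front(self):                   # front(add(empty, x)) = x
            if self.head is None:
                raise ValueError('front(empty) is undefined')
            return self.head.value

        def remove(self):                  # remove(add(empty, x)) = empty
            if self.head is None:
                raise ValueError('remove(empty) is undefined')
            self.head = self.head.next
            if self.head is None:
                self.tail = None
            return self

    q = Queue().add(1).add(2).add(3)
    print(q.front())                       # 1
    print(q.remove().front())              # 2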

An extensible system for the automatic translation of a class of programming languages

Perwaiz, Najam January 1975
This thesis deals with the topic of programming linguistics. A survey of the current techniques in the fields of syntax analysis and semantic synthesis is given. An extensible automatic translator is described which can be used for the automatic translation of a class of programming languages. The automatic translator consists of two major parts: the syntax analyser and the semantic synthesizer. The syntax analyser is a generalised version of LL(k) parsers, the theoretical study of which has already been published by Lewis and Stearns and also by Rosenkrantz and Stearns. It accepts the grammar of a given language in a modified version of the Backus Normal Form (MBNF) and parses the source language statements in a top-down, left-to-right process without ever backing up. The semantic synthesizer is a table-driven system which is called by the parser and performs semantic synthesis as the parsing proceeds. The semantics of a programming language is specified in the form of semantic productions. These are used by the translator to construct semantic tables. The system is implemented in SNOBOL4 (SPITBOL version 2.0) on an IBM 360/44 and its description is supported by various examples. The automatic translator is an extensible system and SNOBOL4, the implementation language, appears as its subset. It can be used to introduce lookahead in the parser, so that backup can be avoided. It can also be used to introduce new facilities in the semantic synthesizer.
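To illustrate table-driven parsing of the LL family without backup, here is a minimal LL(1) sketch in Python: the parse table maps (nonterminal, lookahead) to a production, and a stack-driven loop consumes the input left to right. The toy grammar and its table are assumptions for illustration and have nothing to do with the thesis's MBNF input.

    # S -> a S b | c
    TABLE = {
        ('S', 'a'): ['a', 'S', 'b'],
        ('S', 'c'): ['c'],
    }
    NONTERMINALS = {'S'}

    def parse(tokens, start='S'):
        stack = ['$', start]
        tokens = list(tokens) + ['$']
        i = 0
        while stack:
            top = stack.pop()
            look = tokens[i]
            if top in NONTERMINALS:
                prod = TABLE.get((top, look))
                if prod is None:
                    raise SyntaxError(f'no rule for ({top}, {look})')
                stack.extend(reversed(prod))   # expand the nonterminal
            else:
                if top != look:
                    raise SyntaxError(f'expected {top!r}, found {look!r}')
                i += 1                         # match a terminal (or the end marker)
        return i == len(tokens)

    print(parse('aacbb'))                      # True: a a c b b is in the language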

Translation of APL to other high-level languages

Jacobs, Margaret M. January 1975
The thesis describes a method of translating the computer language APL to other high-level languages. Particular reference is made to FORTRAN, a language widely available to computer users. Although gaining in popularity, APL is not at present so readily available, and the main aim of the translation process is to enable the more desirable features of APL to be at the disposal of a far greater number of users. The translation process should also speed up the running of routines, since compilation in general leads to greater efficiency than interpretive techniques. Some inefficiencies of the APL language have been removed by the translation process. The above reasons for translating APL to other high-level languages are discussed in the introduction to the thesis. A description of the method of translation forms the main part of the thesis. The APL input code is first lexically scanned, a process whereby the subsequent phases are greatly simplified. An intermediate code form is produced in which bracketing is used to group operators and operands together, and to assign priorities to operators such that sub-expressions will be handled in the correct order. By scanning the intermediate code form, information is stacked until required later. The information is used to make possible a process of macro expansion. Each of the above processes is discussed in the main text of the thesis. The format of all information which can or must be supplied at translation time is clearly outlined in the text.
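As a hedged illustration of the macro-expansion step only, the sketch below rewrites a simple APL reduction assignment into a FORTRAN-style loop. The pattern accepted, the output template and the fixed vector length are assumptions made for this example; the thesis handles general expressions and array shapes.

    # Rewrite 'TARGET <- op/VECTOR' as a FORTRAN-66-style accumulation loop.
    import re

    def expand_reduction(apl_line, length):
        m = re.fullmatch(r'\s*(\w+)\s*<-\s*([+*])/(\w+)\s*', apl_line)
        if m is None:
            raise ValueError('not a simple reduction assignment')
        target, op, vec = m.groups()
        identity = '0' if op == '+' else '1'
        return '\n'.join([
            f'      {target} = {identity}',
            f'      DO 10 I = 1, {length}',
            f'   10 {target} = {target} {op} {vec}(I)',
        ])

    print(expand_reduction('S <- +/V', length=100))
    #       S = 0
    #       DO 10 I = 1, 100
    #    10 S = S + V(I)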

A television camera to computer interface

Yusuf, Tundey January 1970
This is an instrumentation thesis. The interface system discussed is a link between an ordinary TV camera and a computer for storage of visual data. The same system can also be used as a link between the computer and a display monitor. Because of its wide bandwidth, a video signal cannot be sampled at the Nyquist rate and presented to a computer. Previous interface systems overcame the problem by scanning slowly on an element-by-element basis using a special scanner and then presenting the samples to the computer. After processing, the data would be read out at the same slow rate and displayed on a special display monitor. The interface described in this thesis will accept material obtained from an ordinary TV camera scanning at standard rate. By using a "stroboscope" sampling technique, the samples are presented to the computer slowly enough for it to process. After processing, the data is displayed in a similar manner on a normally scanned monitor for evaluation. Basically the interface operates as follows: a TV camera video signal is sampled at a rate slow enough for computer acceptance. The camera scans the same picture several hundred times until all the points representing the picture have been sampled and stored; the sampling is controlled such that all the points are each sampled only once. Because of the sampling method, consecutive samples in the computer do not correspond to adjacent points on the picture being stored. It may therefore be necessary to program the computer to arrange the samples such that adjacent data in the computer represent consecutive picture points before processing. After processing, the samples may be rearranged and read out for display in the same order they were stored. The horizontal resolution of the picture being stored can be varied quite easily in steps. For example, a system designed to have a maximum of 480 points/line will also have the ability to provide such lower resolutions as 60, 120 and 240 points/line. This variation is made possible by the design of the hardware. By software, the vertical resolution can be varied between an upper limit of 525 lines per picture and such near submultiples of this as 263 and 131 lines/picture. The thesis is discussed in relation to the PDP-9 computer on which most of the work described was done. However, the system interfaces readily with other computers. (Faculty of Applied Science, Department of Electrical and Computer Engineering; graduate thesis.)
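A hedged numerical sketch of the reordering problem the abstract describes: assume, purely for illustration, that on scan k the hardware samples horizontal position (k * STRIDE) mod POINTS on every line, so stored samples arrive out of picture order and a software permutation restores it. The stride mechanism is an assumption; the thesis's hardware determines the actual sampling sequence.

    POINTS = 480        # points per line (the maximum resolution quoted)
    STRIDE = 7          # assumed sampling stride, chosen coprime with POINTS

    def stored_order():
        """Horizontal position sampled on each successive scan of the picture."""
        return [(k * STRIDE) % POINTS for k in range(POINTS)]

    def picture_order(samples):
        """Rearrange stored samples so that index i holds picture point i."""
        rearranged = [None] * POINTS
        for k, value in enumerate(samples):
            rearranged[(k * STRIDE) % POINTS] = value
        return rearranged

    positions = stored_order()
    print(positions[:5])                        # [0, 7, 14, 21, 28] - not adjacent points
    samples = [f'point{p}' for p in positions]  # what the computer stores, in arrival order
    print(picture_order(samples)[:3])           # ['point0', 'point1', 'point2']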

Towards computer-based analysis of clinical electroencephalograms

Doyle, Daniel John January 1974
Two approaches to the automatic analysis of clinical electroencephalograms (EEGs) are considered with a view towards classifying clinical EEGs as normal or abnormal. The first approach examines the variability of various EEG features in a population of astronaut candidates known to be free of neurological disorders by constructing histograms of these features; unclassified EEGs of subjects in the same age group are examined by comparison of their feature values to the histograms of this neurologically normal group. The second approach employs the techniques of automatic pattern recognition for classification of clinical EEGs. A set of 57 EEG records designated normal or abnormal by clinical electroencephalographers is used to evaluate pattern recognition systems based on stepwise discriminant analysis. In particular, the efficacy of using various feature sets in such pattern recognition systems is evaluated in terms of estimated classification error probabilities (Pe). The results of the study suggest a potential for the development of satisfactory automatic systems for the classification of clinical EEGs. (Faculty of Applied Science, Department of Electrical and Computer Engineering; graduate thesis.)
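A hedged sketch of the first (histogram, reference-group) approach only: collect a feature over the neurologically normal group, then flag an unclassified record whose value falls outside a central percentile band of that reference distribution. The feature, the synthetic data and the cut-offs below are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    normal_group = rng.normal(loc=50.0, scale=8.0, size=200)   # reference feature values

    def classify(value, reference, lower_pct=2.5, upper_pct=97.5):
        """'normal' if value lies inside the central band of the reference group."""
        lo, hi = np.percentile(reference, [lower_pct, upper_pct])
        return 'normal' if lo <= value <= hi else 'abnormal'

    print(classify(52.0, normal_group))   # normal: close to the reference centre
    print(classify(90.0, normal_group))   # abnormal: far outside the reference band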

Analysis of concurrency in data base systems

Silva Lopez, Fernando Isaac January 2010
Photocopy of typescript. Digitized by Kansas Correctional Industries.

Computer technology integration: handbook for primary and elementary teachers

Foley, Marina Bishop January 2005
Thesis (M.Ed.), Memorial University of Newfoundland, 2005. Bibliography: leaves 23-26.
