About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Semantics-Based Change-Merging of Abstract Data Types

Chadha, Vineet 11 May 2002 (has links)
Maintaining software is difficult. Whenever an evolutionary change is made to the base version of a program and a new version is created, changes made to the base version must also be carried over to the new version. The answer is to build the software from the outset with the knowledge that it will change and that the base version will evolve; in other words, change-merging of software is a possible solution. Previous work in this area has addressed program integration and the change-merging of PSDL programs and software prototypes. The present work explores the possibility of combining the results of two independent updates of an abstract data type into a merged version that is both correct and safe. This report describes a developing theory for semantics-based change-merging of abstract data types.
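To make the merge problem concrete, the sketch below shows the simplest possible, purely syntactic three-way merge of an ADT's operation set for a base version and two independent updates. It is only an illustration: the thesis is concerned with a semantics-based merge that also guarantees correctness and safety, which a set-level merge like this cannot capture. The operation names are hypothetical.

```python
def merge_operation_sets(base, version_a, version_b):
    """Three-way merge of an ADT's operation names (syntactic sketch only;
    the thesis develops a semantics-based merge, which this does not capture)."""
    base, version_a, version_b = set(base), set(version_a), set(version_b)
    kept = version_a & version_b                      # operations that survived both updates
    added = (version_a - base) | (version_b - base)   # operations added by either update
    return kept | added

base = {"new", "push", "pop", "top"}
a    = {"new", "push", "pop", "top", "size"}     # update A adds size
b    = {"new", "push", "pop", "is_empty"}        # update B removes top, adds is_empty
print(sorted(merge_operation_sets(base, a, b)))  # ['is_empty', 'new', 'pop', 'push', 'size']
```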
2

The combinatorics of abstract container data types

Tulley, Dominic H. January 1997 (has links)
The study of abstract machines such as Turing machines, pushdown automata and finite state machines has played an important role in the advancement of computer science. It has led to developments in the theory of general purpose computers, compilers and string manipulation as well as many other areas. The language associated with an abstract machine characterises an important aspect of the behaviour of that machine. It is therefore the principal object of interest when studying such a machine. In this thesis we consider abstract container data types to be abstract machines. We define the concept of a language associated with an abstract container data type and investigate this in the same spirit as for other abstract machines. We also consider a model which allows us to describe various abstract container data types. This model is studied in a similar manner. There is a rich selection of problems to investigate. For instance, the data items which the abstract container data types operate on can take many forms. The input stream could consist of distinct data items, say 1, 2, ..., n, or it could be a word over the binary alphabet. Alternatively, it could be a sequence formed from the data items in some arbitrary multiset. Another consideration is whether or not an abstract data type has a finite storage capacity. It is shown how to construct a regular grammar which generates (an encoded form of) the set of permutations which can be realised by moving tokens through a network. A one-to-one correspondence is given between ordered forests of bounded height and members of the language associated with a bounded-capacity priority queue operating on binary data. A number of related results are also proved, in particular for networks operating on binary data and for priority queues of capacity 2.
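As a concrete illustration of "the language associated with an abstract container data type", the sketch below enumerates every output word obtainable by feeding a binary input word through a priority queue of bounded capacity, one of the containers studied in the thesis. The encoding and the recursive enumeration are choices made here for illustration, not the thesis's construction.

```python
def outputs_via_priority_queue(word, capacity):
    """All output words obtainable by passing `word` through a priority queue
    that can hold at most `capacity` symbols at once.  At each step we may
    either insert the next input symbol (if there is room) or remove a
    minimal symbol currently stored (if the queue is non-empty)."""
    results = set()

    def step(i, queue, out):
        if i == len(word) and not queue:
            results.add("".join(out))
            return
        if i < len(word) and len(queue) < capacity:
            step(i + 1, sorted(queue + [word[i]]), out)   # insert next input symbol
        if queue:
            step(i, queue[1:], out + [queue[0]])          # remove a minimal symbol

    step(0, [], [])
    return results

# The "language" contribution of a capacity-2 priority queue on the binary input 0110:
print(sorted(outputs_via_priority_queue("0110", 2)))
```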
3

The intelligent data object and its data base interface

Busack, Nancy Long January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
4

The Semantics, Formal Correctness and Implementation of History Variables in an Imperative Programming Language.

Mallon, Ryan Peter Kingsley January 2006 (has links)
Storing the history of objects in a program is a common task. Web browsers remember which websites we have visited, drawing programs maintain a list of the images we have modified recently and the undo button in a word processor allows us to go back to a previous state of a document. Maintaining the history of an object in a program has traditionally required programmers either to write specific code for handling the historical data, or to use a library which supports history logging. We propose that maintaining the history of objects in a program could be simplified by providing support at the language level for storing and manipulating the past versions of objects. History variables are variables in a programming language which store not only their current value, but also the values they have contained in the past. Some existing languages do provide support for history variables. However, these languages typically place many limits and restrictions on the use of history variables. In this thesis we discuss a complete implementation of history variables in an imperative programming language. We discuss the semantics of history variables for scalar types, arrays, pointers, strings, and user defined types. We also introduce an additional construct called an 'atomic block' which allows us to temporarily suspend the logging of a history variable. Using the mathematical system of Hoare logic, we formally prove the correctness of our informal semantics for atomic blocks and each of the history variable types we introduce. Finally, we develop an experimental language and compiler with support for history variables. The language and compiler allow us to investigate the practical aspects of implementing history variables and to compare the performance of history variables with their non-history counterparts.
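A minimal sketch of the idea in library form (the thesis adds history variables at the language level; the class, method names and atomic-block semantics below are illustrative assumptions):

```python
from contextlib import contextmanager

class HistoryVar:
    """A variable that remembers the values it has held (illustrative sketch only)."""

    def __init__(self, value):
        self._current = value
        self._past = []          # earlier values, oldest first
        self._logging = True

    @property
    def value(self):
        return self._current

    @value.setter
    def value(self, new_value):
        if self._logging:
            self._past.append(self._current)   # log the value being replaced
        self._current = new_value

    def past(self, steps_back=1):
        """The value held `steps_back` logged assignments ago."""
        return self._past[-steps_back]

    @contextmanager
    def atomic(self):
        """Suspend history logging for the duration of the block,
        loosely modelling the thesis's 'atomic block' construct."""
        self._logging = False
        try:
            yield self
        finally:
            self._logging = True

x = HistoryVar(1)
x.value = 2
with x.atomic():
    x.value = 3          # inside the atomic block the replaced value is not logged
x.value = 4
print(x.value, x.past(1), x.past(2))   # 4 3 1
```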
5

Classified models for software engineering

Stuart, Gordon F. 30 September 2005 (has links)
In this dissertation it is shown that abstract data types (ADTs) can be specified by the Classified Model (CM) specification language - a first-order Horn language with equality and sort "classification" assertions. It is shown how these sort assertions generalize the traditional syntactic signatures of ADT specifications, resulting in all of the specification capability of traditional equational specifications, but with the improved expressibility of the Horn-with-equality language and additional theorem proving applications such as program synthesis. This work extends corresponding results from Many Sorted Algebra (MSA), Order Sorted Algebra (OSA) and Order Sorted Model (OSM) specification techniques by promoting their syntactic signatures to assertions in the Classified Model Specification language, yet retaining sorted quantification. It is shown how this solves MSA problems such as error values, polymorphism and subtypes in a way different from the OSA and OSM solutions. However, the CM technique retains the MSA and order sorted approach to parameterization. The CM generalization also suggests the use of CM specifications to axiomatize modules as a generalization of variables within Hoare Logic, with application to a restricted, but safe, use of procedures as state changing operations and functions as value returning operations of a module. CM proof theory and semantics are developed, including theorems for soundness, completeness and the existence of a free model.
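As a flavour of what such a specification might look like (a hedged sketch in generic Horn-with-equality notation, not the thesis's actual CM syntax), a stack of natural numbers could be described with sort membership expressed by ordinary predicates rather than by a syntactic signature:

```latex
% Hypothetical stack-of-naturals specification in a Horn-with-equality style,
% with sort membership given by "classification" predicates:
\begin{align*}
  & nat(0) & & nat(s(x)) \leftarrow nat(x) \\
  & stack(empty) & & stack(push(x,s)) \leftarrow nat(x) \land stack(s) \\
  & top(push(x,s)) = x \leftarrow nat(x) \land stack(s) \\
  & pop(push(x,s)) = s \leftarrow nat(x) \land stack(s)
\end{align*}
```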
6

Generalized algebraic datatypes: a different approach

Le Normand, Jacques. January 1900 (has links)
Thesis (M.Sc.). / Written for the Dept. of Computer Science. Title from title page of PDF (viewed 2007/08/30). Includes bibliographical references.
7

Conceptual object-oriented programming

Hines, Timothy R. January 1986 (has links)
Call number: LD2668 .T4 1986 H56 / Master of Science / Computing and Information Sciences
8

Simula prettyprinter using Pascal

Chen, Jung-Juin January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries / Department: Computer Science.
9

DataLab, a graphical system for specifying and synthesizing abstract data types

Al-Mulhem, Muhammed Saleh 14 December 1989 (has links)
Formal methods using text to specify abstract data types (ADTs) are powerful, but they require great effort and a high level of expertise. Visual programming languages present an alternative way of programming but are limited to building small programs. This research presents an approach for specifying ADTs using a combination of text and visual objects. Furthermore, it presents two algorithms to map those specifications into imperative code. DataLab, a computer program for the Macintosh™ computer, is an implementation model for this approach. DataLab consists of two major components: a graphical editor and a source code generator. The graphical editor allows the user to build a specification consisting of an interface part and an implementation part for each ADT. The interface of the ADT is specified textually in a window that is part of the graphical editor. The implementation part of the ADT includes the operations, which are specified in DataLab as a set of "Condition/Action" transformations. These transformations describe the behavior of the operations and are built by selecting graphical objects from a palette and placing them on the screen. The source code generator takes the specification of the ADT as an input and generates encapsulated Pascal code. It consists of two algorithms: the first maps the specification into its semantics, and the second maps the semantics into Pascal modules. / Graduation date: 1990
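A rough sketch of the "Condition/Action" idea (the data structures, names and target language below are assumptions made for illustration; DataLab itself is graphical and generates Pascal):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Transformation:
    """One 'Condition/Action' rule: apply `action` to the state only when `condition` holds."""
    condition: Callable[[Any], bool]
    action: Callable[[Any], Any]

def generate_operation(rules):
    """Turn a list of Condition/Action rules into a single guarded operation
    (a stand-in for a code generator that would emit the equivalent module)."""
    def operation(state):
        for rule in rules:
            if rule.condition(state):
                return rule.action(state)
        raise ValueError("no transformation applicable to this state")
    return operation

# Illustrative stack 'pop' specified as two transformations
pop = generate_operation([
    Transformation(condition=lambda s: len(s) > 0, action=lambda s: s[:-1]),
    Transformation(condition=lambda s: len(s) == 0, action=lambda s: s),  # pop on an empty stack leaves it unchanged
])

print(pop([1, 2, 3]))   # [1, 2]
print(pop([]))          # []
```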
10

Ähnlichkeitsmessung von ausgewählten Datentypen in Datenbanksystemen zur Berechnung des Grades der Anonymisierung [Similarity measurement of selected data types in database systems for computing the degree of anonymization]

Heinrich, Jan-Philipp, Neise, Carsten, Müller, Andreas 21 February 2018 (has links) (PDF)
A mathematical model for computing deviations between different data types in relational database systems is introduced and tested. The basis of this model is similarity measures for different data types. We first survey the data types relevant to this work. We then define an algebra for these data types, which forms the foundation for computing the degree of anonymization θ. The model is intended to measure the degree of anonymization, especially of personal data, between test and production data. This measurement is useful in the course of the introduction of the EU-DSGVO (EU GDPR) in May 2018 and is meant to help identify personal data with a high degree of similarity.
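A minimal sketch of how per-type similarity measures could be combined into an anonymization degree θ between a production record and its test counterpart (the specific similarity formulas and the aggregation below are assumptions for illustration; the thesis defines its own algebra):

```python
from difflib import SequenceMatcher

def numeric_similarity(a, b):
    """Similarity of two numbers in [0, 1]: 1 means identical, 0 means far apart."""
    if a == b:
        return 1.0
    return max(0.0, 1.0 - abs(a - b) / max(abs(a), abs(b), 1.0))

def string_similarity(a, b):
    """Similarity of two strings in [0, 1], here via a simple sequence matcher."""
    return SequenceMatcher(None, a, b).ratio()

def anonymization_degree(production_row, test_row, kinds):
    """theta = 1 - mean column similarity: 0 if the test data equals the
    production data, approaching 1 as the values become dissimilar."""
    sims = []
    for prod, test, kind in zip(production_row, test_row, kinds):
        sims.append(numeric_similarity(prod, test) if kind == "number"
                    else string_similarity(prod, test))
    return 1.0 - sum(sims) / len(sims)

# Usage sketch: one anonymized record compared with its production counterpart
theta = anonymization_degree(("Alice Example", 54321), ("A. E.", 50000), ("string", "number"))
print(round(theta, 3))
```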
