1

The combinatorics of abstract container data types

Tulley, Dominic H. January 1997
The study of abstract machines such as Turing machines, pushdown automata and finite state machines has played an important role in the advancement of computer science. It has led to developments in the theory of general purpose computers, compilers and string manipulation, as well as many other areas. The language associated with an abstract machine characterises an important aspect of the behaviour of that machine. It is therefore the principal object of interest when studying such a machine.

In this thesis we consider abstract container data types to be abstract machines. We define the concept of a language associated with an abstract container data type and investigate it in the same spirit as for other abstract machines. We also consider a model which allows us to describe various abstract container data types, and study this model in a similar manner. There is a rich selection of problems to investigate. For instance, the data items which the abstract container data types operate on can take many forms: the input stream could consist of distinct data items, say 1, 2, ..., n; it could be a word over the binary alphabet; or it could be a sequence formed from the data items in some arbitrary multiset. Another consideration is whether or not an abstract data type has a finite storage capacity.

It is shown how to construct a regular grammar which generates (an encoded form of) the set of permutations which can be realised by moving tokens through a network. A one-to-one correspondence is given between ordered forests of bounded height and members of the language associated with a bounded-capacity priority queue operating on binary data. A number of related results are also proved, in particular for networks operating on binary data and priority queues of capacity 2.
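To make the notion of "the language of a container data type" concrete, here is a minimal Python sketch (not from the thesis) that enumerates the output sequences a single stack can realise from the input stream 1, 2, ..., n; the networks and bounded-capacity priority queues studied in the thesis generalise this idea.

```python
def stack_outputs(n):
    """Enumerate every output sequence obtainable by passing the input
    stream 1, 2, ..., n through a single stack: at each step, either
    push the next input item or pop the stack top to the output."""
    results = set()

    def step(next_in, stack, out):
        if len(out) == n:
            results.add(tuple(out))
            return
        if next_in <= n:                       # push the next input item
            step(next_in + 1, stack + [next_in], out)
        if stack:                              # pop the top to the output
            step(next_in, stack[:-1], out + [stack[-1]])

    step(1, [], [])
    return sorted(results)

# Catalan(3) = 5 realisable sequences; (3, 1, 2) is the one that is not.
print(stack_outputs(3))
```

The count grows as the Catalan numbers, which is why encoding the realisable permutations as a grammar, rather than listing them, becomes attractive.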
2

The intelligent data object and its data base interface

Busack, Nancy Long January 2010
Typescript (photocopy). / Digitized by Kansas Correctional Industries
3

Generalized algebraic datatypes: a different approach

Le Normand, Jacques. January 1900
Thesis (M.Sc.). / Written for the Dept. of Computer Science. Title from title page of PDF (viewed 2007/08/30). Includes bibliographical references.
4

Conceptual object-oriented programming

Hines, Timothy R. January 1986
Call number: LD2668 .T4 1986 H56 / Master of Science / Computing and Information Sciences
5

Simula prettyprinter using Pascal

Chen, Jung-Juin January 2010
Typescript (photocopy). / Digitized by Kansas Correctional Industries / Department: Computer Science.
6

DataLab, a graphical system for specifying and synthesizing abstract data types

Al-Mulhem, Muhammed Saleh 14 December 1989
Formal methods using text to specify abstract data types (ADTs) are powerful, but they require great effort and a high level of expertise. Visual programming languages present an alternative way of programming but are limited to building small programs. This research presents an approach for specifying ADTs using a combination of text and visual objects, together with two algorithms to map those specifications into imperative code. DataLab, a computer program for the Macintosh™, is an implementation model for this approach.

DataLab consists of two major components: a graphical editor and a source code generator. The graphical editor allows the user to build a specification consisting of an interface part and an implementation part for each ADT. The interface of the ADT is specified textually in a window that is part of the graphical editor. The implementation part of the ADT includes the operations, which are specified in DataLab as a set of "Condition/Action" transformations. These transformations describe the behavior of the operations and are built by selecting graphical objects from a palette and placing them on the screen. The source code generator takes the specification of the ADT as input and generates encapsulated Pascal code. It consists of two algorithms: the first maps the specification into its semantics, and the second maps the semantics into Pascal modules. / Graduation date: 1990
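As a rough illustration of what a "Condition/Action" operation specification might look like once made executable, here is a hypothetical Python sketch. The rule format and all names are illustrative assumptions, not DataLab's actual notation, and the sketch interprets the rules directly rather than generating Pascal as DataLab does.

```python
# Hypothetical, much-simplified rendering of a condition/action-style
# operation specification (illustrative only; not DataLab's format).

def make_bounded_stack_push(capacity=8):
    """Specify a bounded stack's 'push' as ordered condition/action rules."""
    rules = [
        # (condition on the current state, action producing the next state)
        (lambda st, x: len(st) < capacity,
         lambda st, x: st + [x]),              # room left: push succeeds
        (lambda st, x: len(st) >= capacity,
         lambda st, x: st),                    # full: state is unchanged
    ]

    def push(state, item):
        for cond, act in rules:
            if cond(state, item):              # first matching rule fires
                return act(state, item)
        raise ValueError("incomplete specification: no rule matched")

    return push

push = make_bounded_stack_push()
print(push([1, 2], 3))   # [1, 2, 3]
```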
7

Die ondersteuning van abstrakte datatipes en toestelle in 'n programmeertaal (The support of abstract data types and devices in a programming language)

Olivier, Martin Stephanus 27 March 2014
M.Sc. (Computer Science) / Please refer to full text to view abstract
8

Faithfulness in Abstractive Summarization: Progress and Challenges

Ladhak, Faisal January 2023
The exponential increase in online text has created a pressing need for automatic summarization systems that can distill key information from lengthy documents. While neural abstractive summarizers have achieved gains in fluency and coherence, a critical challenge that has emerged is ensuring faithfulness, i.e., accurately preserving the meaning of the original text. Modern neural abstractive summarizers can distort or fabricate facts, undermining their reliability in real-world applications. This thesis therefore tackles the critical issue of improving faithfulness in abstractive summarization, in four parts.

The first part examines challenges in evaluating summarization faithfulness, including issues with reference-free metrics and human evaluation. We propose a novel approach for building automated evaluation metrics that are less reliant on spurious correlations and demonstrate significantly improved performance over existing faithfulness evaluation metrics. We further introduce a novel evaluation framework that enables a more holistic assessment of faithfulness by accounting for the abstractiveness of summarization systems. This framework enables more rigorous faithfulness evaluation, differentiating between gains from increased extraction versus improved abstraction.

The second part focuses on explaining the root causes of faithfulness issues in modern summarization systems. We introduce a novel contrastive approach for attributing errors that vastly outperforms prior work at tracing hallucinations in generated summaries back to training data deficiencies. Moreover, incorporating our method's ideas into an existing technique substantially boosts its performance. Through a case study, we also analyze pre-training biases and demonstrate their propagation to summarization models, yielding biased hallucinations. We show that while mitigation strategies during fine-tuning can reduce overall hallucination rates, the remaining hallucinations still closely reflect intrinsic pre-training biases.

The third part applies insights from the previous sections to develop practical techniques for improving faithfulness. We propose a novel approach for adaptively determining the appropriate level of abstractiveness for a given input to improve overall faithfulness. Our method yields systems that are both more faithful and more abstractive than baseline systems. We further leverage our error attribution approach to clean noisy training data, significantly reducing faithfulness errors in generated outputs. Models trained on datasets cleaned with our approach generate markedly fewer hallucinations than both baseline systems and models trained using other data cleaning techniques.

Finally, the fourth part examines the summarization capabilities of LLMs and assesses their faithfulness. We demonstrate that instruction-tuning and RLHF are key to enabling LLMs to achieve high-quality zero-shot summarization in the news domain, with state-of-the-art LLMs generating summaries comparable to human-written ones. However, this ability does not extend to narrative summarization, where even advanced LLMs struggle to produce consistently faithful summaries. We also highlight the difficulty of evaluating high-performing LLMs, showing that crowdsourced evaluation of LLM outputs may no longer be reliable as fluency and coherence improve: we observe a substantial gap between crowd workers and experts in identifying deficiencies in LLM-generated narrative summaries.
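For readers unfamiliar with the task, the sketch below shows the general shape of sentence-level faithfulness scoring via textual entailment. It is a generic illustration, not the thesis's metric, and `nli_entailment_prob` is a hypothetical stand-in for any NLI model returning P(source entails sentence).

```python
def faithfulness_score(source: str, summary_sentences: list[str],
                       nli_entailment_prob) -> float:
    """Fraction of summary sentences judged to be entailed by the source.

    `nli_entailment_prob(premise, hypothesis)` is assumed to return a
    probability in [0, 1]; the 0.5 decision threshold is also an assumption.
    """
    threshold = 0.5
    supported = sum(
        1 for sent in summary_sentences
        if nli_entailment_prob(source, sent) > threshold
    )
    return supported / max(len(summary_sentences), 1)
```

A sentence the model fabricates ("hallucinates") gets low entailment probability against the source and drags the score down, which is the intuition behind reference-free faithfulness metrics.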
9

Performance characteristics of semantics-based concurrency control protocols.

January 1995
by Keith, Hang-kwong Mak. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 122-127).

Contents:
Abstract --- p.i
Acknowledgement --- p.iii
Chapter 1: Introduction --- p.1
Chapter 2: Background --- p.4
  2.1 Read/Write Model --- p.4
  2.2 Abstract Data Type Model --- p.5
  2.3 Overview of Semantics-Based Concurrency Control Protocols --- p.7
  2.4 Concurrency Hierarchy --- p.9
  2.5 Control Flow of the Strict Two Phase Locking Protocol --- p.11
    2.5.1 Flow of an Operation --- p.12
    2.5.2 Response Time of a Transaction --- p.13
    2.5.3 Factors Affecting the Response Time of a Transaction --- p.14
Chapter 3: Semantics-Based Concurrency Control Protocols --- p.16
  3.1 Strict Two Phase Locking --- p.16
  3.2 Conflict Relations --- p.17
    3.2.1 Commutativity (COMM) --- p.17
    3.2.2 Forward and Right Backward Commutativity --- p.19
    3.2.3 Exploiting Context-Specific Information --- p.21
    3.2.4 Relaxing Correctness Criterion by Allowing Bounded Inconsistency --- p.26
Chapter 4: Related Work --- p.32
  4.1 Exploiting Transaction Semantics --- p.32
  4.2 Exploiting Object Semantics --- p.34
  4.3 Sacrificing Consistency --- p.35
  4.4 Other Approaches --- p.37
Chapter 5: Performance Study (Testbed Approach) --- p.39
  5.1 System Model --- p.39
    5.1.1 Main Memory Database --- p.39
    5.1.2 System Configuration --- p.40
    5.1.3 Execution of Operations --- p.41
    5.1.4 Recovery --- p.42
  5.2 Parameter Settings and Performance Metrics --- p.43
Chapter 6: Performance Results and Analysis (Testbed Approach) --- p.46
  6.1 Read/Write Model vs. Abstract Data Type Model --- p.46
  6.2 Using Context-Specific Information --- p.52
  6.3 Role of Conflict Ratio --- p.55
  6.4 Relaxing the Correctness Criterion --- p.58
    6.4.1 Overhead and Performance Gain --- p.58
    6.4.2 Range Queries using Bounded Inconsistency --- p.63
Chapter 7: Performance Study (Simulation Approach) --- p.69
  7.1 Simulation Model --- p.70
    7.1.1 Logical Queueing Model --- p.70
    7.1.2 Physical Queueing Model --- p.71
  7.2 Experiment Information --- p.74
    7.2.1 Parameter Settings --- p.74
    7.2.2 Performance Metrics --- p.75
Chapter 8: Performance Results and Analysis (Simulation Approach) --- p.76
  8.1 Relaxing Correctness Criterion of Serial Executions --- p.77
    8.1.1 Impact of Resource Contention --- p.77
    8.1.2 Impact of Infinite Resources --- p.80
    8.1.3 Impact of Limited Resources --- p.87
    8.1.4 Impact of Multiple Resources --- p.89
    8.1.5 Impact of Transaction Type --- p.95
    8.1.6 Impact of Concurrency Control Overhead --- p.96
  8.2 Exploiting Context-Specific Information --- p.98
    8.2.1 Impact of Limited Resource --- p.98
    8.2.2 Impact of Infinite and Multiple Resources --- p.101
    8.2.3 Impact of Transaction Length --- p.106
    8.2.4 Impact of Buffer Size --- p.108
    8.2.5 Impact of Concurrency Control Overhead --- p.110
  8.3 Summary and Discussion --- p.113
    8.3.1 Summary of Results --- p.113
    8.3.2 Relaxing Correctness Criterion vs. Exploiting Context-Specific Information --- p.114
Chapter 9: Conclusions --- p.116
Bibliography --- p.122
Appendix A: Commutativity Tables for Queue Objects --- p.128
Appendix B: Specification of a Queue Object --- p.129
Appendix C: Commutativity Tables with Bounded Inconsistency for Queue Objects --- p.132
Appendix D: Some Implementation Issues --- p.134
  D.1 Important Data Structures --- p.134
  D.2 Conflict Checking --- p.136
  D.3 Deadlock Detection --- p.137
Appendix E: Simulation Results --- p.139
  E.1 Impact of Infinite Resources (Bounded Inconsistency) --- p.140
  E.2 Impact of Multiple Resource (Bounded Inconsistency) --- p.141
  E.3 Impact of Transaction Type (Bounded Inconsistency) --- p.142
  E.4 Impact of Concurrency Control Overhead (Bounded Inconsistency) --- p.144
    E.4.1 Infinite Resources --- p.144
    E.4.2 Limited Resource --- p.146
  E.5 Impact of Resource Levels (Exploiting Context-Specific Information) --- p.149
  E.6 Impact of Buffer Size (Exploiting Context-Specific Information) --- p.150
  E.7 Impact of Concurrency Control Overhead (Exploiting Context-Specific Information) --- p.155
    E.7.1 Impact of Infinite Resources --- p.155
    E.7.2 Impact of Limited Resources --- p.157
    E.7.3 Impact of Transaction Length --- p.160
    E.7.4 Role of Conflict Ratio --- p.162
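As background for the commutativity-based protocols this thesis measures, the sketch below illustrates state-based conflict checking for a FIFO queue object in Python. The table is a simplified textbook-style example assumed for illustration, not the tables given in the appendices listed above.

```python
# State-based commutativity check for a FIFO queue object: operations may
# run concurrently only if reordering them cannot change any result.

def commutes(op1: str, op2: str, queue_nonempty: bool) -> bool:
    """Enq/Enq conflict (insertion order is observable) and Deq/Deq
    conflict (their results would swap); Enq vs Deq commute exactly
    when the queue is non-empty, since Deq then returns the existing
    head regardless of the concurrent Enq."""
    pair = frozenset((op1, op2))
    if pair == frozenset(("Enq",)):
        return False
    if pair == frozenset(("Deq",)):
        return False
    return queue_nonempty

def may_run_concurrently(pending_op, held_ops, queue_nonempty) -> bool:
    """Grant an operation only if it commutes with every uncommitted
    operation already holding access to the object."""
    return all(commutes(pending_op, h, queue_nonempty) for h in held_ops)

print(may_run_concurrently("Enq", ["Deq"], queue_nonempty=True))   # True
print(may_run_concurrently("Enq", ["Enq"], queue_nonempty=True))   # False
```

Under the read/write model both calls above would conflict as writes; exploiting operation semantics is what buys the extra concurrency the thesis quantifies.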
10

Environment Analysis of Higher-Order Languages

Might, Matthew Brendon 29 June 2007
Any analysis of higher-order languages must grapple with the tri-faceted nature of lambda: in one construct, the fundamental control, environment and data structures of a language meet and intertwine. With the control facet tamed nearly two decades ago, this work brings the environment facet to heel, defining the environment problem and developing its solution: environment analysis. Environment analysis allows a compiler to reason about the equivalence of environments, i.e., name-to-value mappings, that arise during a program's execution. In this dissertation, two different techniques, abstract counting and abstract frame strings, make this possible. A third technique, abstract garbage collection, makes both of these techniques more precise and, counter to intuition, often faster as well. An array of optimizations, and even deeper analyses which depend upon environment analysis, provide motivation for this work.

In an abstract interpretation, a single abstract entity represents a set of concrete entities. When the entities under scrutiny are bindings (single name-to-value mappings, the atoms of environments), determining when the equality of two abstract bindings implies the equality of their concrete counterparts is the crux of environment analysis. Abstract counting does this by tracking the size of the represented sets, looking for singletons, in order to apply the following principle: if {x} = {y}, then x = y. Abstract frame strings enable environmental reasoning by statically tracking the possible stack change between the births of two environments; when this change is effectively empty, the environments are equivalent. Abstract garbage collection improves precision by intermittently removing unreachable environment structure during abstract interpretation.
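A tiny Python model of the abstract counting idea follows. It is a sketch of the singleton principle only, assuming a saturating 0/1/many count per abstract binding; it is not Might's implementation.

```python
from collections import Counter

MANY = float("inf")   # saturation value: "more than one concrete binding"

class AbstractCounts:
    """Track how many concrete bindings each abstract binding represents."""

    def __init__(self):
        self.count = Counter()   # abstract binding -> 0, 1, or MANY

    def allocate(self, binding):
        """A fresh concrete binding maps onto this abstract binding;
        counts saturate at MANY rather than growing without bound."""
        self.count[binding] = MANY if self.count[binding] >= 1 else 1

    def collect(self, reachable):
        """Abstract GC: discard counts for unreachable bindings. This is
        what restores precision, since a count can drop back to zero."""
        for b in list(self.count):
            if b not in reachable:
                del self.count[b]

    def must_equal(self, b1, b2) -> bool:
        """The singleton principle: if two references resolve to the same
        abstract binding and that binding represents a singleton set
        ({x} = {y} implies x = y), their concrete bindings are equal."""
        return b1 == b2 and self.count[b1] == 1
```

A `must_equal` verdict of True licenses optimizations such as environment sharing; a False verdict is merely "unknown", as befits a sound static analysis.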
