1 |
Semantics-Based Change-Merging of Abstract Data Types / Chadha, Vineet, 11 May 2002 (has links)
Maintaining software is difficult. Whenever an evolutionary change is made to the base version of a program and a new version is created, changes made to the base version must also be propagated to the new version. One answer is to build the software from the start with the knowledge that it will change and that the base version will evolve; in other words, change-merging of software is a possible solution. Prior work in this area has addressed program integration and the change-merging of PSDL programs and software prototypes. The present work explores the possibility of combining the results of two independent updates of an abstract data type into a merged version that is both correct and safe. This report describes a developing theory for semantics-based change-merging of abstract data types.
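The core idea of combining two independent updates against a common base can be sketched as a three-way merge over flat sets of operation names. This is only a loose illustration of the problem setting, not the thesis's method (which is semantics-based and reasons about behaviors, not names); the function and the operation names below are hypothetical:

```python
def merge_operations(base, variant_a, variant_b):
    """Naive three-way merge of flat operation sets: keep what both
    variants preserved, and add what either variant introduced."""
    preserved = variant_a & variant_b                     # survived both updates
    additions = (variant_a - base) | (variant_b - base)   # new in either variant
    return preserved | additions

# Base ADT exports {push, pop, top}; one update adds `size`,
# the other removes `top` and adds `is_empty`.
base = {"push", "pop", "top"}
a = {"push", "pop", "top", "size"}
b = {"push", "pop", "is_empty"}
merged = merge_operations(base, a, b)
# merged == {"push", "pop", "size", "is_empty"}
```

A set-based rule like this captures which updates survive, but not whether the merged operations still interact correctly; that gap is exactly what a semantics-based theory has to address.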
|
2 |
The combinatorics of abstract container data types / Tulley, Dominic H. January 1997 (has links)
The study of abstract machines such as Turing machines, pushdown automata and finite state machines has played an important role in the advancement of computer science. It has led to developments in the theory of general-purpose computers, compilers and string manipulation, as well as many other areas. The language associated with an abstract machine characterises an important aspect of that machine's behaviour, and is therefore the principal object of interest when studying such a machine. In this thesis we consider abstract container data types to be abstract machines. We define the concept of a language associated with an abstract container data type and investigate it in the same spirit as for other abstract machines. We also consider a model which allows us to describe various abstract container data types, and study this model in a similar manner. There is a rich selection of problems to investigate. For instance, the data items which the abstract container data types operate on can take many forms: the input stream could consist of distinct data items, say 1, 2, ..., n; it could be a word over the binary alphabet; or it could be a sequence formed from the data items in some arbitrary multiset. Another consideration is whether or not an abstract data type has a finite storage capacity. It is shown how to construct a regular grammar which generates (an encoded form of) the set of permutations which can be realised by moving tokens through a network. A one-to-one correspondence is given between ordered forests of bounded height and members of the language associated with a bounded-capacity priority queue operating on binary data. A number of related results are also proved, in particular for networks operating on binary data and for priority queues of capacity 2.
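The notion of the language of a container data type can be made concrete by brute-force enumeration: feed an input sequence through a bounded-capacity priority queue, branching at every step on "insert the next item" versus "remove the minimum". A minimal sketch (the function name and interface are illustrative, not from the thesis):

```python
def pq_outputs(inp, capacity):
    """Enumerate all output sequences obtainable by passing `inp` through
    a priority queue that holds at most `capacity` items at a time."""
    results = set()

    def step(i, buf, out):
        # i: index of next input item; buf: queue contents, kept sorted
        if i == len(inp) and not buf:
            results.add(tuple(out))          # input consumed, queue drained
            return
        if i < len(inp) and len(buf) < capacity:
            step(i + 1, sorted(buf + [inp[i]]), out)   # insert next item
        if buf:
            step(i, buf[1:], out + [buf[0]])           # remove the minimum

    step(0, [], [])
    return results

# A capacity-2 priority queue can reorder (2, 1) but a capacity-1 one cannot:
print(pq_outputs((2, 1), 2))   # {(2, 1), (1, 2)}
print(pq_outputs((2, 1), 1))   # {(2, 1)}
```

Counting the outputs such an enumerator produces for binary inputs is exactly the kind of question the thesis answers in closed form, e.g. via the correspondence with ordered forests of bounded height.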
|
3 |
The intelligent data object and its data base interface / Busack, Nancy Long, January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
|
4 |
Classified models for software engineering / Stuart, Gordon F., 30 September 2005 (has links)
In this dissertation it is shown that abstract data types (ADTs) can be specified by the Classified Model (CM) specification language, a first-order Horn language with equality and sort "classification" assertions. It is shown how these sort assertions generalize the traditional syntactic signatures of ADT specifications, resulting in all of the specification capability of traditional equational specifications, but with the improved expressibility of the Horn-with-equality language and additional theorem-proving applications such as program synthesis.
This work extends corresponding results from Many Sorted Algebra (MSA), Order Sorted Algebra (OSA) and Order Sorted Model (OSM) specification techniques by promoting their syntactic signatures to assertions in the Classified Model specification language, yet retaining sorted quantification. It is shown how this solves MSA problems such as error values, polymorphism and subtypes in a way different from the OSA and OSM solutions. However, the CM technique retains the MSA and order-sorted approach to parameterization. The CM generalization also suggests the use of CM specifications to axiomatize modules as a generalization of variables within Hoare Logic, with application to a restricted, but safe, use of procedures as state-changing operations and functions as value-returning operations of a module. CM proof theory and semantics are developed, including theorems for soundness, completeness and the existence of a free model.
|
5 |
Generalized algebraic datatypes: a different approach / Le Normand, Jacques. January 1900 (has links)
Thesis (M.Sc.). / Written for the Dept. of Computer Science. Title from title page of PDF (viewed 2007/08/30). Includes bibliographical references.
|
6 |
Conceptual object-oriented programming / Hines, Timothy R. January 1986 (has links)
Call number: LD2668 .T4 1986 H56 / Master of Science / Computing and Information Sciences
|
7 |
Simula prettyprinter using Pascal / Chen, Jung-Juin, January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries / Department: Computer Science.
|
8 |
DataLab, a graphical system for specifying and synthesizing abstract data types / Al-Mulhem, Muhammed Saleh, 14 December 1989 (has links)
Formal methods using text to specify abstract data types (ADTs) are powerful, but they require great effort and a high level of expertise. Visual programming languages present an alternative way of programming but are limited to building small programs. This research presents an approach for specifying ADTs using a combination of text and visual objects. Furthermore, it presents two algorithms to map those specifications into imperative code. DataLab, a computer program for the Macintosh™ computer, is an implementation model for this approach.

DataLab consists of two major components: a graphical editor and a source code generator. The graphical editor allows the user to build a specification consisting of an interface part and an implementation part for each ADT. The interface of the ADT is specified textually in a window that is part of the graphical editor. The implementation part of the ADT includes the operations, which are specified in DataLab as a set of "Condition/Action" transformations. These transformations describe the behavior of the operations and are built by selecting graphical objects from a palette and placing them on the screen. The source code generator takes the specification of the ADT as an input and generates encapsulated Pascal code. It consists of two algorithms: the first maps the specification into its semantics, and the second maps the semantics into Pascal modules. / Graduation date: 1990
|
9 |
Die ondersteuning van abstrakte datatipes en toestelle in 'n programmeertaal [Support for abstract data types and devices in a programming language] / Olivier, Martin Stephanus, 27 March 2014 (has links)
M.Sc. (Computer Science) / Please refer to full text to view abstract
|
10 |
Faithfulness in Abstractive Summarization: Progress and Challenges / Ladhak, Faisal, January 2023 (has links)
The exponential increase in online text has created a pressing need for automatic summarization systems that can distill key information from lengthy documents. While neural abstractive summarizers have achieved gains in fluency and coherence, a critical challenge that has emerged is ensuring faithfulness, i.e., accurately preserving the meaning of the original text. Modern neural abstractive summarizers can distort or fabricate facts, undermining their reliability in real-world applications. Thus, this thesis tackles the critical issue of improving faithfulness in abstractive summarization. The thesis comprises four parts.
The first part examines challenges in evaluating summarization faithfulness, including issues with reference-free metrics and human evaluation. We propose a novel approach for building automated evaluation metrics that are less reliant on spurious correlations and demonstrate significantly improved performance over existing faithfulness evaluation metrics. We further introduce a novel evaluation framework that enables a more holistic assessment of faithfulness by accounting for the abstractiveness of summarization systems. This framework enables more rigorous faithfulness evaluation, differentiating between gains from increased extraction versus improved abstraction.
The second part focuses on explaining the root causes of faithfulness issues in modern summarization systems. We introduce a novel contrastive approach for attributing errors that vastly outperforms prior work at tracing hallucinations in generated summaries back to training data deficiencies. Moreover, incorporating our method's ideas into an existing technique substantially boosts its performance. Through a case study, we also analyze pre-training biases and demonstrate their propagation to summarization models, yielding biased hallucinations. We show that while mitigation strategies during fine-tuning can reduce overall hallucination rates, the remaining hallucinations still closely reflect intrinsic pre-training biases.
The third part applies insights from previous sections to develop impactful techniques for improving faithfulness in practice. We propose a novel approach for adaptively determining the appropriate level of abstractiveness for a given input to improve overall faithfulness. Our method yields systems that are both more faithful and more abstractive compared to baseline systems. We further leverage our error attribution approach to clean noisy training data, significantly reducing faithfulness errors in generated outputs. Models trained on datasets cleaned with our approach generate markedly fewer hallucinations than both baseline systems and models trained using other data cleaning techniques.
Finally, the fourth part examines the summarization capabilities of LLMs and assesses their faithfulness. We demonstrate that instruction-tuning and RLHF are key to enabling LLMs to achieve high-quality zero-shot summarization in the news domain, with state-of-the-art LLMs generating summaries comparable to human-written ones. However, this ability does not extend to narrative summarization, where even advanced LLMs struggle to produce consistently faithful summaries. We also highlight the difficulty of evaluating high-performing LLMs, showing that crowdsourced evaluations of LLM outputs may no longer be reliable as fluency and coherence improve. We observe a substantial gap between crowd workers and experts in identifying deficiencies in LLM-generated narrative summaries.
|