181

The use of proofs-as-programs to build an analogy-based functional program editor

Whittle, J. N. D. January 1999 (has links)
This thesis presents a novel application of the technique known as proofs-as-programs. Proofs-as-programs defines a correspondence between proofs in a constructive logic and functional programs. By using this correspondence, a functional program may be represented directly as the proof of a specification, and so the program may be analysed within this proof framework. CYNTHIA is a programming editor for the functional language ML which uses proofs-as-programs to analyse users' programs as they are written. The underlying proof representation is completely hidden, so that the user requires no knowledge of proof theory. The proof framework allows programs written in CYNTHIA to be checked to be syntactically correct, well-typed, well-defined and terminating. Users of CYNTHIA make fewer programming errors, and the feedback facilities of CYNTHIA make it easier to track down the source of errors when they do occur. CYNTHIA also embodies the idea of programming by analogy: rather than starting from scratch, users always begin with an existing function definition. They then apply a sequence of high-level editing commands which transform this starting definition into the one required. These commands preserve correctness and also increase programming efficiency by automating commonly occurring steps. The thesis describes the design and implementation of CYNTHIA and investigates its role as a novice programming environment. Use by experts is possible, but only a subset of ML is currently supported. Two major trials of CYNTHIA have shown that it is well suited as a teaching tool.
182

A Model Driven Approach for the Automated Analysis of UML Class Diagrams

Anastasakis, Kyriakos January 2009 (has links)
The Unified Modeling Language (UML) is widely considered the de facto standard for the design of Object Oriented systems. UML class diagrams are used to depict the static structure of a system with its entities and the relationships between them. The Object Constraint Language (OCL) is a textual language based on first-order logic and can be used to define constraints on the elements of class diagrams. The lack of strong formal semantics for the UML makes it difficult to analyse UML models. This work utilises Alloy to analyse UML models. More specifically, this work employs the Model Driven Architecture (MDA) technology to achieve an automated transformation of UML class diagrams enriched with OCL constraints to Alloy. This is accomplished by defining a number of transformation rules from UML and OCL concepts to Alloy concepts. However, due to the different philosophies of the UML and Alloy, the languages have a number of fundamental differences. These differences and their effect on the definition of the transformation rules are discussed. To bridge the differences and to achieve fully automated analysis of UML class diagrams through Alloy, a UML profile for Alloy is developed. Details of our implementation of the model transformation in the SiTra transformation engine and a number of case studies are also presented.
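As a purely illustrative sketch of what such a transformation rule might look like (the class representation, the names and the simplified multiplicity mapping below are hypothetical, not the thesis's actual rules, which operate over the full UML/OCL metamodel within SiTra), a single UML class with association ends could be mapped to an Alloy signature roughly as follows:

    # Toy transformation rule: a UML class (as a plain dict) -> an Alloy signature.
    # UML multiplicities mapped to Alloy field multiplicity keywords.
    MULTIPLICITY = {"0..1": "lone", "1": "one", "*": "set", "1..*": "some"}

    def uml_class_to_alloy(uml_class):
        """Emit an Alloy signature declaration for a single UML class."""
        fields = ",\n    ".join(
            f"{name}: {MULTIPLICITY[mult]} {target}"
            for name, (target, mult) in uml_class["attributes"].items()
        )
        return f"sig {uml_class['name']} {{\n    {fields}\n}}"

    # Hypothetical example: a Customer class with two association ends.
    customer = {
        "name": "Customer",
        "attributes": {"account": ("Account", "0..1"), "orders": ("Order", "*")},
    }
    print(uml_class_to_alloy(customer))

The example prints the Alloy declaration sig Customer { account: lone Account, orders: set Order }; the real transformation must additionally handle OCL constraints, inheritance and the semantic differences between the two languages discussed above.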
183

A novel clustering algorithm with a new similarity measure and ensemble methods for mixed data clustering

Al Shaqsi, Jamil Darwish January 2010 (has links)
This thesis addresses some specific issues in clustering: (1) clustering algorithms, (2) similarity measures, (3) the number of clusters, K, and (4) clustering ensemble methods. Following an in-depth review of clustering methods, a new three-staged (3-Staged) clustering algorithm is proposed, with three new key aspects: (1) a new method for automatically estimating the value of K, (2) a new similarity measure and (3) initiation of the clustering process with a promising BASE. A BASE is a real sample that acts like a centroid or a medoid in common clustering methods, but it is determined differently in our approach. The new similarity measure is defined specifically to reflect the degree of relative change between data samples and, more importantly, to accommodate both numerical and categorical variables. We prove mathematically that the proposed similarity measure satisfies the three properties of a metric. This research also investigated the problem of determining the appropriate number of clusters in a dataset and devised a novel function, integrated into our 3-Staged clustering algorithm, to automatically estimate the most appropriate number of clusters, K. Based on our new 3-Staged clustering algorithm, we developed two new ensemble algorithms. For all experiments, we used publicly available real-world benchmark datasets, as these datasets have been commonly used by other researchers. Experimental results showed that the 3-Staged clustering algorithm performed better than the compared individual methods, including K-means and TwoStep, and also some ensemble-based methods such as K-ANMI and ccdByEnsemble. They also showed that the proposed similarity measure is very effective in improving clustering quality, and that our proposed method for estimating the K value identified the correct number of clusters for most of the tested datasets.
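For reference, the metric properties mentioned above are usually stated for the dissimilarity d induced by a similarity measure (the precise formulation proved in the thesis may differ):

    d(x, y) \ge 0, \quad d(x, y) = 0 \iff x = y
    d(x, y) = d(y, x)
    d(x, z) \le d(x, y) + d(y, z)

that is, non-negativity with identity of indiscernibles, symmetry, and the triangle inequality.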
184

Formalisation and evaluation of focus theories for requirements elicitation dialogues in natural language

Renaud, L. January 1999 (has links)
Requirements elicitation is a difficult part of software engineering in which the specifications for a new system are discussed with potential users. Because verifying that requirements are correct is a complex task, computer support is often beneficial. This support requires formal specifications. However, people are not usually trained to use formal specification languages. Task- or domain-specific languages smooth the learning curve for writing formal specifications, but the elicitation process often remains error prone. Users need more support while writing specifications. In particular, a tool which interacts with them and helps them express their requirements in a domain-specific way could lower the number of requirements elicitation errors. However, although numerous frameworks have been developed to support the expression and analysis of requirements, much less attention has been paid to the control of the dialogue taking place between the users and the system whilst using such frameworks. Focus theories are theories which explain how participants in a dialogue pay attention to certain things at certain points of a dialogue, and how this attention may shift to other topics. In this thesis, we propose to use focus theories to improve the quality of the interaction between users and requirements elicitation tools. We show that, by using the constraints on dialogue evolution provided by these theories and the constraints provided by the requirements elicitation task, we can guide the elicitation process in a natural and easily understandable manner. This interactive way of using focus theories is new; it could be used in other applications where reasoning and communication play important roles and need to interact. We also carry out a comparative study of the use of focus theories for requirements elicitation, which requires us to be precise about our interpretation of our chosen focus theories and to develop an innovative means of empirical testing for them. This gives us a formalisation of focus theories as well as a method for developing and testing experimental dialogue managers.
185

The use of proof plans for transformation of functional programs by changes of data type

Richardson, J. D. C. January 1996 (has links)
Program transformation concerns the derivation of an efficient program by applying correctness-preserving manipulations to a source program. Transformation is a lengthy process, and it is important to keep user interaction to a manageable level by automating the transformation steps. In this thesis I present an automated technique for transforming a program by changing the data types in that program to ones which are more appropriate for the task. Programs are constructed by proving synthesis theorems in the proofs-as-programs paradigm. Programs are transformed by modifying their synthesis theorems and relating the modified theorem to the original. Proof transformation allows more powerful transformations than program transformation because the proof of the modified theorem yields a program which meets the original specification, but may compute a different function to the original program. Synthesis proofs contain information which is not present in the corresponding program and can be used to aid the transformation process. Type changes are motivated by the presence of certain subexpressions in the synthesised program. A library of possible type changes is maintained, indexed by the motivating subexpressions. I present a new method which extends these type changes from the motivating expressions to cover as much of the program as possible. Once a type change has been chosen, a revised synthesis theorem is constructed automatically. Search is a major problem for program transformation systems. The synthesis theorems which arise after a type change have a particular form. I present an automated proof strategy for synthesis theorems of this form which requires very little search.
186

Guidance during program composition in a Prolog techniques editor

Vargas-Vera, M. de S. January 1994 (has links)
It is possible to build complex programs by repeated combination of pairs of simpler programs. However, naive combination often produces programs that are far too inefficient. We would like to have a system that would produce the optimal combination of two programs, and also work with minimal supervision by the user. In this thesis we make a significant step towards such an ideal, with the presentation of an interactive system based on program transformation complemented with knowledge of the program development. The first contribution of this thesis is to recognise that knowledge contained in the program history can be used in program transformation, reducing the need for user interaction. The interactive composition system presented can automatically take major decisions, such as the selection of which subgoal should be unfolded or the laws to be applied in order to get a more efficient combined program. Furthermore, a component of our system called the selection procedure can decide automatically which is the most suitable combination method by analysing the characteristics of the initial pair of programs as given by their program histories. Approaches that do not use the program history suffer from the problem that it is not always practical to extract the required information about the structure of the program. Our second contribution is to provide a range of new methods which exploit the program history in order to produce more efficient programs, and to deal with a wider range of combination problems. The new methods not only combine programs with the same control flow, but can also deal with some cases in which the control flows are different. All of these methods are completely automatic with the exception of our "mutant" method, in which the combined clause needs to be approved by the user. The third contribution is to present relevant properties of our composition system. These properties fall into the following three groups: (i) properties which hold after applying each combination method; (ii) properties about the type of program which is obtained after the combination; and (iii) properties of the join specification which defines the characteristics of the combined program.
187

Analyzing and evaluating the use of visemes in an interpolative synthesizer for visual speech

Martinez Lazalde, Oscar Manuel January 2010 (has links)
No description available.
188

Improving the performance of recommender algorithms

Redpath, Jennifer Louise January 2010 (has links)
Recommender systems were designed as a software solution to the problem of information overload. Recommendations can be generated based on the content descriptions of past purchases (Content-based), the personal ratings an individual has assigned to a set of items (Collaborative) or from a combination of both (Hybrid). There are issues that affect the performance of recommender systems, in terms of accuracy and coverage, such as data sparsity and dealing with new users and items. This thesis presents a comprehensive set of offline experiments and empirical results with the goal of improving recommendation accuracy and coverage for the poorest performers in the dataset. This research suggests approaches for dealing with four specific research challenges: the standardisation of evaluation methods and metrics, the definition and identification of sparse users and items, improving the accuracy of hybrid systems targeted specifically at the poor performers, and addressing the cold-start problem for new users. A selection of recommendation algorithms were implemented and/or extended, namely user-based collaborative filtering, item-based collaborative filtering, collaboration-via-content and two hybrid prediction algorithms. The first two methods were developed with the express intention of providing a baseline for improvement, facilitating the identification of poor performers and analysing the factors which influenced the performance of recommendation algorithms. The latter algorithms were targeted at the poor performers and were also examined with respect to user and item sparsity. The collaboration-via-content algorithm, when extended with a new content attribute, resulted in an improvement for new users. The hybrid prediction algorithms, which combined user-based and item-based approaches in such a way as to include information about transitive relationships, were able to improve upon the baseline accuracy and coverage results. In particular, the final hybrid algorithm saw a 3.5% improvement in accuracy for the poor performers compared to item-based collaborative filtering.
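To make the kind of hybrid described above concrete, the following minimal sketch blends a user-based and an item-based collaborative-filtering estimate with a fixed weight. The toy ratings matrix, the cosine-similarity neighbourhoods and the blending weight are hypothetical illustrations only; the thesis's hybrid algorithms additionally exploit transitive relationships and target the poorly served users and items identified in the analysis.

    import numpy as np

    def cosine_sim(a, b):
        """Cosine similarity over co-rated positions (0 denotes an unrated item)."""
        mask = (a > 0) & (b > 0)
        if not mask.any():
            return 0.0
        return float(np.dot(a[mask], b[mask]) /
                     (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]) + 1e-9))

    def predict(R, user, item, alpha=0.5):
        """Blend a user-based and an item-based estimate of R[user, item]."""
        # User-based: weight other users' ratings of `item` by user similarity.
        u_sims = np.array([cosine_sim(R[user], R[v]) if v != user else 0.0
                           for v in range(R.shape[0])])
        u_mask = R[:, item] > 0
        user_est = (np.dot(u_sims[u_mask], R[u_mask, item]) /
                    (u_sims[u_mask].sum() + 1e-9))
        # Item-based: weight the user's own ratings by item-item similarity.
        i_sims = np.array([cosine_sim(R[:, item], R[:, j]) if j != item else 0.0
                           for j in range(R.shape[1])])
        i_mask = R[user] > 0
        item_est = (np.dot(i_sims[i_mask], R[user, i_mask]) /
                    (i_sims[i_mask].sum() + 1e-9))
        return alpha * user_est + (1 - alpha) * item_est

    # Toy ratings matrix (rows = users, columns = items, 0 = unrated).
    R = np.array([[5, 3, 0, 1],
                  [4, 0, 0, 1],
                  [1, 1, 5, 4],
                  [0, 1, 4, 4]], dtype=float)
    print(round(predict(R, user=1, item=2), 2))

In practice the blending weight would itself be tuned, or varied per user according to how sparse that user's profile is, which is the kind of targeting the thesis investigates.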
189

Extending the domain theory to support application generation

Papamargaritis, George January 2008 (has links)
This thesis investigates reuse in two main paradigms of software development: component-based and generative software design. The main objective is to propose Domain Theory-based methods for developing (a) libraries of reusable components and (b) application generators. One research question is whether these two approaches can be combined to the benefit of application generation. The research proceeds through two case studies, each consisting of domain analysis, development of proof-of-concept tools, and validation studies. First, a telemedicine case study shows that applying the Domain Theory as an analysis method identifies abstract domain models in the given domain and specifies a reuse library at the conceptual level; mapping these models to design patterns and defining variability points for further extension of the application framework is also shown to be feasible. Second, the Resource Allocation case study extends the Domain Theory with a generative reuse method. An application generator is developed as a proof of concept, which receives abstract resource allocation requirements as input and generates allocation applications. Validation tests show that both expert and less expert users interact successfully with the application generator to generate application frameworks, and that end-users use the allocation applications efficiently to execute their tasks. Future redesign of the tool needs to address usability, terminology and domain understanding issues pinpointed during the tests.
190

Parallel rule induction

Stahl, Frederic Theodor January 2009 (has links)
Classification rule induction on large datasets is a major challenge in the field of data mining in a world where massive amounts of data are recorded on a large scale. There are two main approaches to classification rule induction: the 'divide and conquer' approach and the 'separate and conquer' approach. Even though both approaches deliver a comparable classification accuracy, they differ when it comes to rule representation and the quality of rules in certain circumstances. The 'divide and conquer' approach gives an intuitive representation of classification rules in the form of a tree, which is easy for humans to assimilate. However, modular rules induced by the 'separate and conquer' approach generally perform better in environments where the training data of the classifier is noisy or contains clashes. The term 'modular rules' is used to mean any set of rules describing some domain of interest; such rules will generally not fit together naturally in a decision tree. Both approaches are challenged by increasingly large volumes of data. There have been several attempts to scale up the 'divide and conquer' approach; however, there is very little work on scaling up the 'separate and conquer' approach. One general approach is to use supercomputers with faster hardware to process these huge amounts of data, yet modest-sized organisations may not be able to afford such hardware. However, most organisations have local computer workstations that they use for many applications such as word processing or spreadsheets. These workstations are usually connected in a local network, are mainly used during normal working hours, and are usually idle overnight and at weekends. During these idle times the networked workstations could be used for data mining applications on large datasets. This research focuses on a cheap solution for modest-sized organisations that cannot afford fast supercomputers. For this reason this work aims to utilise the computational power and memory of a network of workstations. In this research a novel framework for scaling up modular classification rule induction is presented, based on a distributed blackboard architecture. The framework is called PMCRI (Parallel Modular Classification Rule Inducer). It provides an underlying communication infrastructure for parallelising a whole family of modular classification rule induction algorithms: the Prism family. Experimental results show a good scale-up behaviour on various datasets and thus confirm the success of PMCRI.
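For readers unfamiliar with the 'separate and conquer' strategy that the Prism family follows, the sketch below shows the sequential idea in miniature: repeatedly specialise a rule for one class until the covered subset is pure, then remove ('separate') the covered instances and induce ('conquer') the next rule. The toy dataset and the simplifications are hypothetical; PMCRI's contribution, not shown here, is to parallelise this kind of induction across networked workstations via a distributed blackboard.

    def induce_rules_for_class(rows, attributes, target_class):
        """Induce modular rules (attribute-value conjunctions) covering all rows of target_class."""
        rules, remaining = [], list(rows)
        while any(r["class"] == target_class for r in remaining):
            rule, covered, free = {}, list(remaining), set(attributes)
            # Specialise the rule term by term until the covered subset is pure.
            while free and any(r["class"] != target_class for r in covered):
                best = max(
                    ((a, v) for a in free for v in {r[a] for r in covered}),
                    key=lambda av: sum(1 for r in covered
                                       if r[av[0]] == av[1] and r["class"] == target_class)
                                   / sum(1 for r in covered if r[av[0]] == av[1]),
                )
                rule[best[0]] = best[1]
                covered = [r for r in covered if r[best[0]] == best[1]]
                free.discard(best[0])
            rules.append(rule)
            # 'Separate': drop the instances this rule covers, then induce the next rule.
            remaining = [r for r in remaining
                         if not all(r[a] == v for a, v in rule.items())]
        return rules

    # Toy weather-style dataset (hypothetical).
    data = [
        {"outlook": "sunny", "windy": "no",  "class": "play"},
        {"outlook": "sunny", "windy": "yes", "class": "stay"},
        {"outlook": "rain",  "windy": "no",  "class": "play"},
        {"outlook": "rain",  "windy": "yes", "class": "stay"},
    ]
    print(induce_rules_for_class(data, ["outlook", "windy"], "play"))

On this toy data the inducer finds the single modular rule {'windy': 'no'} for the class 'play'; unlike a decision tree, each rule stands on its own rather than being forced into a shared branching structure.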
