  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Extracting and exploiting interaction information in constraint-based local search

Andrew, Alastair Neil January 2014
Local Search is a simple and effective approach for solving complex constrained combinatorial problems. To maximise performance, Local Search can utilise problem-specific information and be hybridised with other algorithms in an often intricate fashion. This results in algorithms that are tightly coupled to a single problem and difficult to characterise; experience gained whilst solving one problem may not be applicable to another. Even if it is, the translation can be a non-trivial task offering little opportunity for code reuse. Constraint Programming (CP) and Linear Programming (LP) can be applied to many of the same combinatorial problems as Local Search but do not exhibit these restrictions. They use a different paradigm: one where a problem is captured as a general model and then solved by an independent solver. Improvements to the underlying solver can be harnessed by any model. The CP community shows signs of moving Local Search in this direction; Constraint-Based Local Search (CBLS) strives to achieve the CP ideal of "Model + Search". CBLS provides access to the performance benefits of Local Search without paying the price of being specific to a single problem. This thesis explores whether information to improve the performance of CBLS can be automatically extracted and exploited without compromising the independence of the search and model. To achieve these goals, we have created a framework built upon the CBLS language COMET. This framework primarily focusses on the interface between two core components: the constraint model and the search neighbourhoods. Neighbourhoods define the behaviour of a Local Search and how it can traverse the search space. By separating the neighbourhoods from the model, we are able to create an independent analysis component. The first aspect of our work is to uncover information about the interactions between the constraint model and the search neighbourhoods.
The second goal is to look at how information about the behaviour of neighbourhoods - with respect to a set of constraints - can be used within the search process. In particular, we concentrate on enhancing a form of Local Search called Variable Neighbourhood Search (VNS), allowing it to make dynamic decisions based upon the current search state. The resulting system retains the domain independence of model-based solution technologies whilst being able to configure itself automatically to a given problem. This reduces the level of expertise required to adopt CBLS and provides users with another potential tool for tackling their constraint problems.
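The VNS scheme enhanced in this thesis can be sketched generically: shake the current solution in the k-th neighbourhood, return to the first neighbourhood whenever a move improves, and widen the neighbourhood otherwise. A minimal reduced-VNS sketch in Python, where the toy neighbourhoods and cost function are illustrations, not the thesis's COMET components:

```python
import random

def vns(initial, neighbourhoods, cost, max_iters=200, seed=0):
    """Reduced Variable Neighbourhood Search: shake in neighbourhood k,
    accept improving moves, and widen the neighbourhood on failure."""
    rng = random.Random(seed)
    best, k = initial, 0
    for _ in range(max_iters):
        candidate = rng.choice(neighbourhoods[k](best))  # shake in N_k
        if cost(candidate) < cost(best):
            best, k = candidate, 0              # improvement: restart from N_1
        else:
            k = (k + 1) % len(neighbourhoods)   # no improvement: widen
    return best

# Toy problem: minimise the sum of squares of an integer vector.
def n1(x):  # perturb one coordinate by +/- 1
    return [x[:i] + [x[i] + d] + x[i+1:] for i in range(len(x)) for d in (-1, 1)]

def n2(x):  # perturb two coordinates at once (a larger neighbourhood)
    return [y2 for y1 in n1(x) for y2 in n1(y1)]

sol = vns([5, -3, 7], [n1, n2], cost=lambda x: sum(v * v for v in x))
```

A dynamic VNS of the kind described would additionally reorder or select among the neighbourhoods based on the current search state rather than cycling through them in a fixed order.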

Studying the lives of software bugs

Davies, Steven January 2014
For as long as people have made software, they have made mistakes in that software. Software bugs are widespread, and the maintenance required to fix them has a major impact on the cost of software and on how developers' time is spent. Reducing this maintenance time would lower the cost of software and allow developers to spend more time on new features, improving the software for end-users. Bugs are hugely diverse and have a complex life cycle. This makes them difficult to study, and research is often carried out on synthetic bugs or toy programs. However, a better understanding of the bug life cycle would greatly aid in developing tools to reduce the time spent on maintenance. This thesis studies the life cycle of bugs in order to develop such an understanding. Overall, it examines over 3000 real bugs from real projects, concentrating on three of the most important points in the life cycle: origin, reporting and fix. Firstly, two existing techniques are compared for discovering the origin of a bug. A number of improvements are evaluated, and the most effective approach is found to be combining the techniques. Furthermore, the behaviour of developers is found to have a major impact on the accuracy of the techniques. Secondly, a large number of bugs are analysed to determine what information is provided when users report bugs. For most bugs, much important information is missing or inaccurate. Most importantly, there appears to be a considerable gap between what users provide and what developers actually want. Finally, an evaluation is carried out on a number of novel alterations to techniques used to determine the location of bug fixes. Compared to existing techniques, these alterations successfully increase the number of bugs which can be usefully localised, aiding developers in removing the bugs.
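Origin-discovery techniques of the kind compared in such work typically trace the lines a fixing change touched back through version history to the change that last wrote them. A minimal sketch over a toy snapshot history; the data format and function names here are hypothetical illustrations, not the thesis's tooling:

```python
def last_touched(history, line_no):
    """history: ordered list of (commit, {line_no: text}) snapshots.
    Walk newest-to-oldest; a line's origin is the first commit (going back
    in time) whose predecessor holds different text for that line."""
    for i in range(len(history) - 1, -1, -1):
        commit, snapshot = history[i]
        if i == 0 or history[i - 1][1].get(line_no) != snapshot.get(line_no):
            return commit
    return None

def origin_candidates(history, fix_touched_lines):
    # Candidate bug-introducing commits are those that last wrote the lines
    # the fixing commit (the final snapshot) had to change.
    return {line: last_touched(history[:-1], line) for line in fix_touched_lines}

history = [
    ("c1", {1: "def f(x):", 2: "    return x + 1"}),
    ("c2", {1: "def f(x):", 2: "    return x - 1"}),   # bug introduced here
    ("c3", {1: "def f(x):", 2: "    return x + 1"}),   # the fix
]
print(origin_candidates(history, fix_touched_lines=[2]))  # → {2: 'c2'}
```

Real techniques work against a version-control system and must also handle the developer behaviours (refactorings, whitespace changes, file moves) that the thesis found to affect accuracy.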

Automated software development and model generation by means of syntactic and semantic analysis

Meiklejohn, Mark January 2014
Software development is a global activity, and the development of a software system starts from some requirement that describes the problem domain. These requirements need to be communicated so that the software system can be fully engineered, and in the majority of cases the communication of software requirements takes the form of written text, which is difficult to transform into a model of the software system and consumes an inordinate amount of project effort. This thesis proposes and evaluates a fully automated analysis and model creation technique that exploits the syntactic and semantic information contained within an English natural language requirements specification to construct a Unified Modelling Language (UML) model of the software requirements. The thesis provides a detailed description of the related literature, a thorough description of the Common Semantic Model (CSM) and Syntactic Analysis Model (SAM), and the results of a qualitative and comparative evaluation given realistic requirement specifications and ideal models. The research findings confirm that the CSM and SAM models can identify classes, relationships, multiplicities, operations, parameters and attributes, all from the written natural language requirements specification, which is subsequently transformed into a UML model. Furthermore, this transformation is undertaken without the need for manual intervention or manipulation of the requirements specification.
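The abstract does not specify the internals of the CSM and SAM, but the syntactic side of such a pipeline can be illustrated with a deliberately crude heuristic: nouns introduced by articles become class candidates, and the words between two candidates become the association. A hypothetical sketch (a real system would use a proper parser rather than string patterns):

```python
import re

ARTICLES = {"a", "an", "the", "each", "every"}

def extract(sentence):
    """Toy syntactic pass over one requirements sentence: words following
    an article become class candidates; the verb between the two noun
    phrases becomes an association between them."""
    words = re.findall(r"[a-z]+", sentence.lower())
    classes = [words[i + 1].capitalize()
               for i, w in enumerate(words[:-1]) if w in ARTICLES]
    assoc = None
    if len(classes) == 2:
        i1 = words.index(classes[0].lower())
        i2 = words.index(classes[1].lower())
        # words strictly between the two noun phrases, minus articles
        assoc = " ".join(w for w in words[i1 + 1:i2] if w not in ARTICLES)
    return classes, assoc

print(extract("Each customer places an order"))  # → (['Customer', 'Order'], 'places')
```

The extracted pairs map naturally onto UML: the candidates become classes and the association becomes a labelled relationship between them.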

Investigations into inductive-recursive definitions

Malatesta, Lorenzo January 2015
The theory of recursive functions whose domains are inductively defined at the same time as the functions themselves is called induction-recursion. This theory was introduced in Martin-Löf type theory by Dybjer and further explored in a series of papers by Dybjer and Setzer. Important data types, like universes closed under dependent type operators, are instances of this theory. In this thesis we study the class of data types arising from inductive-recursive definitions, taking the seminal work of Dybjer and Setzer as our starting point. We show how the theories of inductive and indexed inductive types arise as sub-theories of induction-recursion, by revealing the role played by a notion of size within the theory of induction-recursion. We then expand the expressive power of induction-recursion, showing how to extend the theory in two different ways: in one direction we investigate the changes needed to obtain a more flexible semantics which gives rise to a more comprehensive elimination principle for inductive-recursive types. In another direction we generalize the theory of induction-recursion to a fibrational setting. In both extensions we provide a finite axiomatization of the theories introduced, we show applications and examples of these theories not previously covered by induction-recursion, and we justify the existence of data types built within these theories.
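The canonical instance mentioned above, a universe closed under dependent function spaces, makes the simultaneous definition explicit: the decoding function T appears in the types of the constructors of U itself, which is the hallmark of an inductive-recursive definition.

```latex
% A universe U : Set and its decoding T : U -> Set, defined simultaneously;
% note that T occurs in the type of the constructor pi.
\begin{align*}
  \mathsf{nat} &: U
    & T(\mathsf{nat}) &= \mathbb{N} \\
  \mathsf{pi} &: (a : U) \to (T(a) \to U) \to U
    & T(\mathsf{pi}\,a\,b) &= (x : T(a)) \to T(b\,x)
\end{align*}
```

Neither U nor T can be defined before the other: the constructor pi quantifies over T(a), so the data type and the function must be built together.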

Problem models for rule based planning

Lindsay, Alan January 2015
The effectiveness of rule-based policies as a search control mechanism in planning has been demonstrated in several planners. A key benefit is that a single policy captures the solution to a set of related planning problems. However, it has been observed that a small number of weak rules, common in learned control knowledge, can make a rule system ineffective. As a result, research has focussed on approaches that improve the robustness of exploiting (potentially weak) rules in search. In this work we examine two aspects that can lead to weak rules: the language that the rules are drawn from and the approach used to learn the rules. The rules are often captured using the predicates and actions of the problem models that the knowledge applies to. However, this language is appropriate for expressing the constraints of the planning world, and will not necessarily include the appropriate words required to express a general solution. We present an approach to automatically invoke language enhancements that are appropriate for the particular aspects of the target problems. These enhancements support rules in problems that include structure interactions, such as graph traversal and block stacking; and optimisation tasks, such as resource management. Several approaches to learning policies have been explored in the literature. Learning policies requires a fitness function, which measures the quality of a policy. Previous approaches have relied on a collection of examples generated by a remote planner. However, we have observed that this leads to weak guidance in domains where global optimisation is required for an optimal solution (such as transportation domains). In these domains we expect good, but not optimal, action choices, and this conflicts with the assumption that example states can be accurately explained, ultimately leading to weak rules.
Instead of measuring performance on a set of remotely drawn example situations, we propose measuring progress towards the goal. Our approach is evaluated using rule-based policies to control search in problems from the benchmark planning domains. We demonstrate that domain models can be automatically enhanced and that this enhanced language can be exploited by both hand-written and learned policies, allowing them to effectively control search. The learning approach is evaluated by learning policies for several of the enhanced domains, and the analysis provides guidance for future work. A key contribution of this work is demonstrating that both hand-written and learned rule-based policies can be used to generate plans of better quality than those of domain-independent planners. We also learn effective policies for several domains currently untreated in the literature.
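A rule-based policy of the kind discussed can be pictured as an ordered list of condition-action rules evaluated against the current state, with the first matching rule firing. An illustrative sketch; the toy logistics-style state and rules are hypothetical, not the thesis's learned policies:

```python
def make_policy(rules):
    """A policy is an ordered list of (condition, action) pairs; the first
    rule whose condition holds in the current state selects the action."""
    def policy(state):
        for condition, action in rules:
            if condition(state):
                return action(state)
        return None  # no rule applies: fall back to ordinary search
    return policy

# Toy state: where a package is, where it must go, where the truck is.
rules = [
    (lambda s: s["at"] == s["goal"],  lambda s: "noop"),
    (lambda s: s["at"] == s["truck"], lambda s: "load"),
    (lambda s: True,                  lambda s: "drive"),
]
act = make_policy(rules)
print(act({"at": "depot", "goal": "home", "truck": "depot"}))  # → 'load'
```

A weak rule in this picture is one whose condition fires in states where its action is poor; the language enhancements described above aim to give conditions the vocabulary (e.g. graph-traversal or resource concepts) needed to avoid that.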

A new heuristic-based model of goal recognition without libraries

Pattison, David Thomas January 2015
Goal Recognition concerns the problem of determining an agent's final goal, deduced from the plan they are currently executing (and which is being observed). For over twenty years, the de facto standard in plan and goal recognition has been to map an agent's observations to a set of known, valid and sound plans held within a plan library. In this time many novel techniques have been applied to the recognition problem, but almost all have relied on the presence of a library in some form or another. The work presented in this thesis advances the state of the art in goal recognition by removing the need for any plan or goal library. Such libraries are tedious to construct, incomplete if done by hand, and possibly contain erroneous or irrelevant entries when done by machine. This work presents a new formulation of the recognition problem based on planning, which removes the need for such a structure to be present. This greatly widens the scenarios in which goal recognition can be realistically performed. While this new formalism overcomes many of the problems associated with traditional recognition research, it remains compatible with many of the concepts found in previous recognition work. This new definition is first defined in the context of a rational agent and observer, before several relaxations are introduced which enable tractable goal recognition. This relaxed implementation is then extensively evaluated with regard to multiple aspects of the recognition problem.
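One well-known library-free formulation from the planning literature (related in spirit to, though not necessarily identical with, the heuristic approach here) ranks candidate goals by how much extra plan cost the observations impose: a goal is likely when the cheapest plan consistent with the observations costs little more than the cheapest plan overall. A sketch over given plan costs; the function name and weighting are illustrative assumptions:

```python
import math

def goal_posterior(costs, beta=1.0):
    """costs: {goal: (cost_with_obs, cost_without_obs)} from a planner.
    A goal is likelier the smaller the extra cost its observations impose;
    a Boltzmann weighting turns cost differences into probabilities."""
    weights = {g: math.exp(-beta * (with_o - without_o))
               for g, (with_o, without_o) in costs.items()}
    z = sum(weights.values())
    return {g: w / z for g, w in weights.items()}

# Observations fit goal A perfectly (no extra cost) but force a detour for B.
post = goal_posterior({"A": (10, 10), "B": (14, 9)})
```

The cost estimates themselves come from calls to a planner or a heuristic, which is exactly where relaxations of the kind the thesis introduces make the computation tractable.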

Novel text entry and mobile interaction techniques for Arabic language users

El Batran, Karim Mohsen Mahmoud January 2015
Inspired by an observational study of Egyptian Agricultural Census counters, this research aims to improve mobile data entry through better form navigation and improved Arabic text entry. Four improvements were taken into consideration in sequence: (1) minimizing large forms to fit small mobile device screens and easing the form navigation process, (2) optimizing the Arabic keyboard layout to suit Arabic-language users, (3) introducing Gesture-based Arabic Writing Pads (GBAWPs) that fit small mobile device screens and smart watch surfaces, and (4) enhancing a quantitative prediction model to overcome the defect in modeling interactions on mobile devices. This research shows an improvement of form navigation on mobile devices. The approach is based on computerizing forms and using Panning and Zooming as a navigation technique. In order to do so, an observational study was conducted on the Egyptian Agricultural Census (EAC). However, there were considerable challenges in reducing the size of the paper forms to fit mobile devices and introducing a fast navigation technique. After computerizing the forms, it was concluded that the Panning and Zooming technique gave lower task completion times and workload than the tabbed navigation technique. Moreover, this research presents a new design of an Arabic keyboard layout for effective text entry on touch screen mobile phones. The approach is based on Pareto front optimization using three metrics: minimizing finger travel distance in order to maximize speed, minimizing neighboring-key error ambiguities in order to maximize the quality of spell correction, and maximizing familiarity for Arabic-language users through approximate alphabetic sorting. In user studies, the new layout showed an observed improvement in typing speed in comparison to a common Arabic layout.
Currently, there is an opportunity to research new optimized keyboard designs for user populations with less entrenched QWERTY experience than in mainstream Western European languages. Pareto optimization can produce high-quality keyboards for alphabet-based languages, which could be beneficial where there is less reluctance to change from QWERTY. Furthermore, this research illustrates gesture-based text entry as a method for mobile devices. Its success and acceptance are critically dependent on the reliability of gesture recognition. The gesture recognition of the GBAWP is accomplished through a sequence of touched points or swipes on the screen. In order to maximize the text area field and minimize the number of keys displayed on the screen, a 12-key GBAWP interface was introduced, resembling a 12-key physical phone keypad. Considering the characteristics and structure of Arabic letters, and to maximize speed, a 6-key GBAWP layout based on dot recognition was introduced. After conducting usability tests on both the 12-key and 6-key GBAWP, it was found that users could perform text entry on mobile devices using the 12-key GBAWP at an estimated average of 2.9 words per minute. They also executed text entry tasks on a Sony SmartWatch 2 at an average of 3.2 words per minute. This could increase to an estimated average of 4.5 words per minute in the long term. While entry speeds were slow, users found the technique easy to use, and it supports largely eyes-free interaction. The gesture-based technique enables users to perform Arabic text entry on small-display mobile devices and watches using both the 12-key and 6-key GBAWP. Finally, this research introduces an enhancement to KLM (Keystroke-Level Model), a quantitative prediction model of user behaviour in low-level tasks. This was accomplished by extending it with three new operators describing interactions on mobile touchscreen devices and tablets.
The approach is based on Fitts' Law to identify a performance-estimate equation for each of the introduced interactions. Three prototypes were developed to serve as a test environment for validating the Fitts equations and estimating the parameters for these interactions. Three thousand and ninety observations took place with a total of 51 users. The studies confirmed that most interactions fitted Fitts' Law well. On the other hand, it was noticed that Fitts' Law does not fit well on small mobile device screens when the Index of Difficulty exceeds 4 bits. These results enable developers of mobile device and tablet applications to describe tasks as a sequence of operators and predict user interaction times prior to creating prototypes.
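The Fitts'-Law-based estimates work by mapping a movement's distance and target width to an Index of Difficulty in bits, then summing per-operator times KLM-style. A sketch using the common Shannon formulation; the regression constants a and b below are illustrative placeholders, not the fitted values from these studies:

```python
import math

def fitts_time(distance, width, a=0.2, b=0.1):
    """Shannon form of Fitts' Law: MT = a + b * log2(D/W + 1).
    a and b are device-specific regression constants (illustrative here);
    the log term is the Index of Difficulty in bits."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

def klm_estimate(operator_times):
    """KLM-style prediction: a task's time is the sum of its operator times."""
    return sum(operator_times)

# e.g. two taps on a phone screen: targets 80 px and 40 px away, 10 px wide.
t = klm_estimate([fitts_time(80, 10), fitts_time(40, 10)])
```

Note that both example movements stay below the 4-bit Index of Difficulty beyond which the studies found the law fits poorly on small screens.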

Algebraic methods for incremental maintenance and updates of views within XML databases

Goodfellow, Martin Hugh January 2014
Within XML data management, the performance of queries has been improved by using materialised views. However, modifications to XML documents must be reflected in these views. This is known as the view maintenance problem. Conversely, updates to a view must be reflected in the XML source documents. This is the view update problem. Fully recalculating these views or documents to reflect these changes is inefficient. To address this, a number of distinct methods are reported in the literature that address either incremental view maintenance or incremental view update. This thesis develops a consistent incremental algebraic approach to view maintenance and view update using generic operators. This approach further differs from related work in that it supports views with multiple returned nodes. Generally the data sets to be incrementally maintained are smaller for the view update case. Therefore, it was necessary to investigate the circumstances in which converting view maintenance into view update gave better performance. Finally, dynamic reasoning on updates was considered to determine whether it improved the performance of the proposed view maintenance and view update methods. The system was implemented using features of XML stores and XML query evaluation engines, including structural identifiers for XML and structural join algorithms. Methods for incrementally handling the view maintenance and view update problems are presented, and the benefits of these methods over existing algorithms are established by means of experiments. These experiments also show the benefit of translating view maintenance updates into view updates, where applicable, and the benefits of dynamic reasoning. The main contribution of this thesis is the development of similar incremental algebraic methods which provide a consistent solution to the view maintenance and view update problems.
The originality of these methods is their ability to handle statement-level updates using generic operators and views returning data from multiple nodes.

Rod-cone convergence in the retina

Muchungi, Kendi January 2015
Vision enables visual perception of one's environs, as well as self-navigation within space. Objects within our environs are visible by virtue of the fact that they reflect light. To see, or have visual perception, this light needs to be converted into an electrical signal. This process is referred to as visual transduction and takes place in the retina. Recently, it has become apparent that the convergence of rod and cone systems in transduction is crucial to retinal functionality, specifically for local adaptation and contrast gain control in response to changes in illumination. However, because research until recently suggested that rod and cone pathways operate autonomously of each other, existing retinal models and designs of retinal prostheses have covered only one of these pathways and have not incorporated their convergence. In this thesis we introduce a new retina model, which is biologically plausible, computationally simple and effective, and which captures the convergence of rod and cone pathways in both the Outer and Inner Plexiform Layers (O/I PL) of the retina. In the OPL, we introduce rod-cone convergence via electrical gap junctions to simulate rod-cone coupling. We demonstrate that introducing convergence in the OPL improves the perception of input stimuli and extends the range of adaptation to light levels. In the IPL, we introduce the convergence by developing a simulated rod ON Bipolar Cell (ONBC) and introducing it via a rod pathway into the cone system via an Amacrine model. At this layer, we were able to show improved visual acuity as well as an increase in the dynamic range by improving contrast enhancement at very high luminance levels. Our results are compared with biology to determine whether rod and cone convergence gives rise to a better model of biology, as measured through the threshold-versus-intensity (tvi) function.
We also assess the signal-to-noise ratio results of the model when compared with an image processing technique to determine if the model has computational benefits. The results obtained from our retinal model show that if incorporated in the design of retinal prosthesis and visual systems used in robotics, there should be marked improvement during visual processing.

On the Möbius function and topology of the permutation poset

Smith, Jason P. January 2015
A permutation is an ordering of the letters 1, …, n. A permutation σ occurs as a pattern in a permutation π if there is a subsequence of π whose letters appear in the same relative order of size as the letters of σ; such a subsequence is called an occurrence. The set of all permutations, ordered by pattern containment, is a poset. In this thesis we study the behaviour of the Möbius function and the topology of the permutation poset. The first major result in this thesis is on the Möbius function of intervals [1, π] such that π = π₁π₂…πₙ has exactly one descent, where a descent occurs at position i if πᵢ > πᵢ₊₁. We show that the Möbius function of these intervals can be computed as a function of the positions and number of adjacencies, where an adjacency is a pair of letters in consecutive positions with consecutive increasing values. We then alter the definition of an adjacency to be a maximal sequence of letters in consecutive positions with consecutive increasing values. An occurrence is normal if it includes all letters except (possibly) the first one of each of the adjacencies. We show that the absolute value of the Möbius function of an interval [σ, π] of permutations with a fixed number of descents equals the number of normal occurrences of σ in π. Furthermore, we show that these intervals are shellable, which implies many useful topological properties. Finally, we allow adjacencies to be increasing or decreasing and apply the same definition of normal occurrence. We present a result showing that the Möbius function of any interval of permutations equals the number of normal occurrences plus an extra term. Furthermore, we conjecture that this extra term vanishes for a significant proportion of intervals.
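The definitions above (occurrence, descent, adjacency) can be made concrete with a small brute-force sketch; two sequences of distinct letters have the same relative order exactly when sorting their positions by value gives the same permutation of indices:

```python
from itertools import combinations

def occurrences(sigma, pi):
    """All subsequences of pi whose letters appear in the same relative
    order of size as the letters of sigma (checked by comparing argsorts)."""
    k = len(sigma)
    order = sorted(range(k), key=lambda i: sigma[i])
    occs = []
    for idx in combinations(range(len(pi)), k):
        vals = [pi[i] for i in idx]
        if sorted(range(k), key=lambda i: vals[i]) == order:
            occs.append(idx)
    return occs

def descents(pi):
    """Positions i where pi[i] > pi[i+1]."""
    return [i for i in range(len(pi) - 1) if pi[i] > pi[i + 1]]

def adjacencies(pi):
    """Positions starting a pair of consecutive letters with consecutive
    increasing values (the first definition of adjacency above)."""
    return [i for i in range(len(pi) - 1) if pi[i + 1] == pi[i] + 1]

print(len(occurrences((1, 2), (2, 3, 1))))  # → 1 : the subsequence 2,3
```

For example, the permutation 231 has its single descent at position 2 in one-based terms (here index 1) and one adjacency, the pair 2,3.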
