
Intelligent agents for mobile location services

McInerney, James January 2014 (has links)
Understanding human mobility patterns is a significant research endeavour that has recently received considerable attention. Developing the science to describe and predict how people move from one place to another during their daily lives promises to address a wide range of societal challenges: from predicting the spread of infectious diseases and improving urban planning to devising effective emergency response strategies. Individuals are also set to benefit from this area of research, as mobile devices will be able to analyse their mobility patterns and offer context-aware assistance and information. For example, a service could warn about travel disruptions before the user is likely to encounter them, or provide recommendations and mobile vouchers for local services that promise to be of high value to the user, based on their predicted future plans. More ambitiously, control systems for home heating and electric vehicle charging could be enhanced with knowledge of when the user will be home. In this thesis, we focus on such anticipatory computing. Some aspects of the vision of context-awareness have been pursued for many years, resulting in mature research in the area of ubiquitous systems. However, the combination of surprisingly rapid adoption of advanced mobile devices by consumers and the broad acceptance of location-based apps has surfaced not only new opportunities, but also a number of pressing challenges. In more detail, these challenges are the (i) prediction of future mobility, (ii) inference of features of human location behaviour, and (iii) use of prediction and inference to make decisions about timely information or control actions. Our research brings together, for the first time, the entire workflow that a mobile location service needs to follow, in order to achieve an understanding of mobile user needs and to act on such understanding effectively. This framing of the problem highlights the shortcomings of existing approaches which we seek to address. In the current literature, prediction is only considered for established users, which implicitly assumes that new users will continue to use an initially inaccurate prediction system long enough for it to improve and increase in accuracy over time. Additionally, inference of user behaviour is mostly concerned with interruptibility, which does not take into account the constructive role of intelligent location services that goes beyond simply avoiding interrupting the user at inopportune times (e.g., in a meeting, or while driving). Finally, no principled decision framework for intelligent location services has been provided that takes into account the results of prediction and inference. To address these shortcomings, we make three main contributions to the state of the art. Firstly, we provide a novel Bayesian model that relates the location behaviour of new and established users, allowing the reuse of structure learnt from rich mobility data. This model shows a factor of 2.4 improvement over the state-of-the-art baseline in held-out data likelihood in experiments using the Nokia Lausanne dataset. Secondly, we give new tools for the analysis and prediction of routine in mobility, a latent feature of human behaviour that informs the service about the user's availability to follow up on any information provided.
And thirdly, we provide a fully worked example of an intelligent mobile location service (a crowdsourced package delivery service) that performs decision-making using predictive densities of current and future user mobility. Simulations using real mobility data from the Orange Ivory Coast dataset indicate an 81.3% improvement in service efficiency when compared with the next best (non-anticipatory) approach.
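As a rough illustration of the first contribution, the sketch below pools transition counts from established users into a Dirichlet-style prior for predicting a new user's next place. The location vocabulary, data and smoothing are invented for illustration; this is not the model developed in the thesis.

```python
import numpy as np

# Minimal sketch: predict a new user's next location by smoothing their sparse
# transition counts with a population-level prior pooled from established users.
# This only illustrates the general idea of reusing structure learnt from rich
# mobility data; it is not the specific Bayesian model developed in the thesis.

N_PLACES = 4  # toy location vocabulary: 0=home, 1=work, 2=shop, 3=gym

def transition_counts(trajectories, n=N_PLACES):
    """Count observed transitions between consecutive locations."""
    counts = np.zeros((n, n))
    for traj in trajectories:
        for a, b in zip(traj[:-1], traj[1:]):
            counts[a, b] += 1
    return counts

# Rich data from established users defines a Dirichlet prior over transitions.
established = [[0, 1, 1, 2, 0], [0, 1, 2, 0, 3, 0], [0, 1, 1, 0]]
prior = transition_counts(established) + 1.0      # add-one smoothing

# A new user has only a short history; pool it with the prior.
new_user = [[0, 1, 0]]
posterior = prior + transition_counts(new_user)

# Posterior predictive distribution over the next place, given the current place.
current = 1  # user is currently at "work"
pred = posterior[current] / posterior[current].sum()
print("P(next place | at work):", np.round(pred, 3))
```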

Large-scale reordering models for statistical machine translation

Alrajeh, Abdullah January 2015 (has links)
In state-of-the-art phrase-based statistical machine translation (SMT) systems, modelling phrase reorderings is important for enhancing the naturalness of translated outputs, particularly when the grammatical structures of the language pairs differ significantly. The challenge in developing machine learning methods for machine translation can be summarised in two points. First is the ability to characterise language features such as morphology, syntax and semantics. Second is adapting complex learning algorithms to process large corpora. Posing phrase movements as a classification problem, we exploit recent developments in solving large-scale SVMs, multiclass SVMs and multinomial logistic regression. Using dual coordinate descent methods for learning, we provide a mechanism to shrink the amount of training data required for each iteration. Hence, we produce significant savings in time and memory while preserving the accuracy of the models. These efficient classifiers allow us to build large-scale discriminative reordering models. We also explore a generative learning approach, namely naive Bayes. Our Bayesian model is shown to be superior to the widely-used lexicalised reordering model. It is fast to train and its storage requirement is many times smaller than that of the lexicalised model. Although discriminative models might achieve higher accuracy than naive Bayes, the absence of iterative learning is a critical advantage for very large corpora. Our reordering models are fully integrated with the Moses machine translation system, which is widely used in the community. Evaluated in large-scale translation tasks, our models have proved successful for two very different language pairs: Arabic-English and German-English.
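To make the classification framing concrete, here is a minimal sketch that poses phrase movement as a three-way orientation classification problem and trains it with liblinear's dual coordinate descent solver via scikit-learn's LinearSVC. The features, labels and data are toy placeholders, not the thesis's feature set or corpora.

```python
# Minimal sketch: phrase movement posed as a three-way classification problem
# (monotone / swap / discontinuous), in the spirit of the discriminative
# reordering models described above. Features and examples here are invented.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

# Each training case: sparse features of a phrase pair, and its orientation.
cases = [
    ({"src_first": "kitab", "tgt_first": "book", "src_len": 1}, "monotone"),
    ({"src_first": "al",    "tgt_first": "the",  "src_len": 2}, "swap"),
    ({"src_first": "fi",    "tgt_first": "in",   "src_len": 1}, "monotone"),
    ({"src_first": "qala",  "tgt_first": "said", "src_len": 3}, "discontinuous"),
]
X_dicts, y = zip(*cases)

vec = DictVectorizer()
X = vec.fit_transform(X_dicts)

# LinearSVC uses liblinear's dual coordinate descent solver, which scales to
# large, sparse training sets of the kind produced from parallel corpora.
clf = LinearSVC(dual=True).fit(X, y)
test = {"src_first": "kitab", "tgt_first": "book", "src_len": 1}
print(clf.predict(vec.transform([test])))
```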

Bayesian learning for multi-agent coordination

Allen-Williams, Mair January 2009 (has links)
Multi-agent systems draw together a number of significant trends in modern technology: ubiquity, decentralisation, openness, dynamism and uncertainty. As work in these fields develops, such systems face increasing challenges. Two particular challenges are decision making in uncertain and partially-observable environments, and coordination with other agents in such environments. Although uncertainty and coordination have been tackled as separate problems, formal models for an integrated approach are typically restricted to simple classes of problems and are not scalable to problems with tens of agents and millions of states. We improve on these approaches by extending a principled Bayesian model into more challenging domains, using Bayesian networks to visualise specific cases of the model and thus as an aid in deriving the update equations for the system. One approach which has been shown to scale well for networked offline problems uses finite state machines to model other agents. We use this insight to develop an approximate scalable algorithm applicable to our general model, in combination with adapting a number of existing approximation techniques, including state clustering. We examine the performance of this approximate algorithm on several cases of an urban rescue problem with respect to differing problem parameters. Specifically, we first consider scenarios where agents are aware of the complete situation, but are not certain about the behaviour of others; that is, our model with all elements but the actions observable. Secondly, we examine the more complex case where agents can see the actions of others, but cannot see the full state and thus are not sure about the beliefs of others. Finally, we look at the performance of the partially observable state model when the system is dynamic or open. We find that our best response algorithm consistently outperforms a handwritten strategy for the problem, more noticeably as the number of agents and the number of states involved in the problem increase.
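A minimal sketch of the underlying idea, with invented strategies and payoffs: maintain a Bayesian belief over a small set of candidate models of another agent, update it from observed actions, and choose a best response. The thesis's model uses finite state machines and approximate inference at a much larger scale.

```python
import numpy as np

# Minimal sketch: Bayesian belief over candidate models of another agent
# (here simple fixed strategies rather than finite state machines), updated
# from observed actions, followed by a best response. Domain and payoffs are
# invented for illustration.

ACTIONS = ["search", "rescue"]

# Candidate models of the other agent: probability it plays each action.
models = {
    "mostly_search": np.array([0.8, 0.2]),
    "mostly_rescue": np.array([0.2, 0.8]),
}
belief = {name: 0.5 for name in models}          # uniform prior over models

def update(belief, observed_action):
    """Bayes rule: P(model | action) is proportional to P(action | model) P(model)."""
    idx = ACTIONS.index(observed_action)
    post = {m: belief[m] * p[idx] for m, p in models.items()}
    z = sum(post.values())
    return {m: v / z for m, v in post.items()}

# Payoff to us for (our action, their action): coordination is rewarded.
payoff = {("search", "search"): 1, ("search", "rescue"): 0,
          ("rescue", "search"): 0, ("rescue", "rescue"): 2}

def best_response(belief):
    """Expected-payoff-maximising action under the current belief."""
    pred = sum(belief[m] * models[m] for m in models)   # P(their next action)
    values = {a: sum(pred[j] * payoff[(a, b)] for j, b in enumerate(ACTIONS))
              for a in ACTIONS}
    return max(values, key=values.get)

for obs in ["rescue", "rescue", "search"]:
    belief = update(belief, obs)
    print(obs, {m: round(v, 2) for m, v in belief.items()}, "->", best_response(belief))
```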

Person detection using wide angle overhead cameras

Ahmed, Imran January 2014 (has links)
In cluttered environments, the overhead view is often preferred because looking down can afford better visibility and coverage. However, detecting people in this or any other extreme view can be challenging, as there is significant variation in a person's appearance depending only on their position in the image. The Histogram of Oriented Gradients (HOG) algorithm, a standard algorithm for pedestrian detection, does not perform well here, especially where the image quality is poor. We show that with the SCOVIS dataset, on average, 9 false detections occur per image. We propose a new algorithm in which transforming the image patch containing a person to remove positional dependency, and then applying the HOG algorithm, eliminates 98% of the spurious detections in the noisy images from our industrial assembly line and detects people with 95% efficiency. The algorithm is demonstrated as part of a simple but effective person tracking-by-detection system. This incorporates simple motion detection to highlight regions of the image that might contain people, which are then searched with our algorithm. This has been evaluated on a number of SCOVIS sequences and correctly tracks people approximately 99% of the time. By comparison, the example algorithms in OpenCV are less than approximately 50% efficient. Finally, we show our algorithm's potential for generalisation across different scenes. We show that a classifier trained on the SCOVIS dataset achieves a detection rate of 96% when applied to new overhead data recorded at Southampton. Using the output from this stage to generate labelled 'true positive' data, we train a new model which achieves a detection rate of 98%. Both these results compare favourably with the performance of a model trained with manually labelled images, which achieves a detection rate of greater than 99%.
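The sketch below illustrates the core step with assumed details: normalise an image patch for its position under the overhead camera (here, a simple rotation relative to the image centre) before computing a standard HOG descriptor. The actual transformation, training data and classifier used in the thesis differ.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import rotate

# Minimal sketch of the idea described above: transform an image patch so that
# its appearance no longer depends on where it sits in the wide-angle overhead
# view, then compute a HOG descriptor. The rotation used here is a placeholder
# for the actual transformation in the thesis.

def normalise_patch(image, cx, cy, centre, size=64):
    """Crop a patch at (cx, cy) and rotate it so 'up' points away from the
    image centre, removing the positional dependency of an overhead view."""
    half = size // 2
    patch = image[cy - half:cy + half, cx - half:cx + half]
    angle = np.degrees(np.arctan2(cy - centre[1], cx - centre[0])) + 90.0
    return rotate(patch, angle, preserve_range=True)

def patch_features(patch):
    """Standard HOG descriptor of the normalised patch."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Toy usage on a random "frame"; a real system would feed these features to a
# linear SVM trained on labelled overhead patches.
frame = np.random.rand(480, 640)
feats = patch_features(normalise_patch(frame, cx=500, cy=120, centre=(320, 240)))
print(feats.shape)
```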

Design and experimental evaluation of iterative learning controllers on a multivariable test facility

Dinh Van, Thanh January 2013 (has links)
Iterative learning control (ILC) algorithms are employed in many applications, especially those involving single-input, single-output plants undertaking repeated tasks over a finite time interval. ILC is applicable to systems executing a repeated trajectory tracking task, and uses data recorded over previous trials in the construction of the next control input. The objective is to sequentially improve tracking accuracy as the trial number increases. This method has been shown to operate well in the presence of significant modelling uncertainty and exogenous disturbances. However, for MIMO (multiple-input, multiple-output) systems, far fewer applications are reported in the literature, and minimal benchmarking and evaluation studies have been undertaken. To tackle this shortcoming, this thesis focuses on designing an electromechanical test-bed which can evaluate the weaknesses and advantages of various ILC methods on a purpose-built platform. The system has two inputs and two outputs and enables variation of the interaction between inputs and outputs through simple and rapid parameter modification. This interaction variation permits the control problem to be modified, allowing stipulation over the challenge presented to the ILC controller. The system is made up of two back-to-back differential gearboxes with mass-spring-damper components to increase the system order and control difficulty. In its standard configuration, two motors provide torque to the two input ports and the two outputs are measured using encoders. This work enables a comparative summary of ILC approaches for MIMO systems, together with modifications for improved performance and robustness, and the development of new control schemes incorporating input and output constraints and point-to-point tracking capability. The system can also be configured in a variety of other arrangements, varying the number of inputs and outputs, and allowing noise to be injected using a DC motor. Models of the system are derived using a lumped-parameter system representation, as well as purely from experimental input and output data. Simple-structure controllers such as proportional-type ILC, derivative-type ILC and phase-lead ILC are then applied to test the combined performance of the controller and the MIMO system, and to establish its efficacy as a benchmarking platform. Advanced controllers are then derived and applied, and experimental data are used to confirm theoretical findings concerning the link between interaction and convergence rate, input norm and robustness.
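As an illustration of the simple-structure controllers mentioned above, the following sketch runs proportional-type ILC on an invented two-input, two-output discrete-time plant; the gains, reference and plant are placeholders rather than a model of the gearbox test facility.

```python
import numpy as np

# Minimal sketch of proportional-type ILC on a toy two-input, two-output
# discrete-time plant. The update law u_{k+1}(t) = u_k(t) + L e_k(t+1) is the
# standard P-type law; plant, gains and reference here are illustrative only.

A = np.array([[0.8, 0.1], [0.0, 0.7]])
B = np.eye(2)
C = np.eye(2)
T = 50                                        # trial length (samples)
ref = np.vstack([np.sin(np.linspace(0, np.pi, T)),
                 np.linspace(0, 1, T)]).T     # reference trajectory, T x 2
L_gain = 0.5 * np.eye(2)                      # learning gain matrix

def run_trial(u):
    """Simulate one trial of the plant and return the output sequence."""
    x = np.zeros(2)
    y = np.zeros((T, 2))
    for t in range(T):
        y[t] = C @ x
        x = A @ x + B @ u[t]
    return y

u = np.zeros((T, 2))
for k in range(20):                           # repeated trials
    y = run_trial(u)
    e = ref - y
    # P-type ILC: shift the error forward by one sample (relative degree 1).
    u[:-1] += (L_gain @ e[1:].T).T
    print(f"trial {k:2d}  tracking error norm = {np.linalg.norm(e):.4f}")
```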

An artificial experimenter for automated response characterisation

Lovell, Christopher James January 2011 (has links)
Biology exhibits information processing capabilities, such as parallel processing and context sensitivity, which go far beyond the capabilities of modern conventional electronic computation. In particular, the interactions of proteins such as enzymes are interesting, as they appear to act as efficient biomolecular computers. Harnessing proteins as biomolecular computers is currently not possible, as little is understood about their interactions outside of a physiological context. Understanding these interactions can only occur through experimentation. However, the size and dimensionality of the available experiment parameter spaces far outsize the resources typically available to investigate them, restricting the knowledge acquisition that is possible. To address this restriction, new tools are required to enable the development of biomolecular computation. One such tool is autonomous experimentation, a union of machine learning and computer-controlled laboratory equipment within a closed-loop machine. Both the machine learning and experiment platforms can be designed to address the resource problem: the machine learning element attempts to provide techniques for intelligent experiment selection and effective data analysis that reduce the number of experiments required to learn from, whilst resource-efficient automated experiment platforms, such as lab-on-chip technology, can minimise the volumes of reactants per experiment. Here the machine learning aspect of autonomous experimentation is considered. These machine learning techniques must act as an artificial experimenter, mimicking the processes of successful human experimenters by developing hypotheses and selecting the experiments to perform. Using this biological domain as motivation, an investigation of learning from a small set of noisy and sometimes erroneous observations is presented. A principled multiple-hypotheses technique, motivated by the philosophy of science and machine learning, is presented for producing potential response characteristics, combined with active learning techniques that provide a robust method for hypothesis separation, and a Bayesian surprise method for managing the exploration-exploitation trade-off between new feature discovery and hypothesis disproving. The techniques are validated through a laboratory trial in which successful biological characterisation has been shown.
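A minimal sketch of the multiple-hypotheses, active-experiment-selection loop, with an assumed response function and polynomial hypotheses standing in for the richer hypothesis space and Bayesian surprise measure used in the thesis:

```python
import numpy as np

# Minimal sketch of the artificial-experimenter loop: fit several candidate
# response characteristics to a handful of noisy observations, then choose the
# next experiment where the competing hypotheses disagree most. The true
# response, hypothesis space and selection criterion are placeholders.

rng = np.random.default_rng(0)

def true_response(x):                       # unknown to the experimenter
    return np.tanh(3 * (x - 0.5))

# A few noisy observations collected so far.
xs = np.array([0.05, 0.35, 0.65, 0.95])
ys = true_response(xs) + rng.normal(0, 0.05, size=xs.size)

candidates = np.linspace(0, 1, 101)         # experiments we could run next

for step in range(5):
    # Competing hypotheses: polynomial response characteristics of low degree.
    hypotheses = [np.poly1d(np.polyfit(xs, ys, deg)) for deg in (1, 2, 3)]
    preds = np.array([h(candidates) for h in hypotheses])
    disagreement = preds.std(axis=0)        # where hypotheses separate most
    x_next = candidates[disagreement.argmax()]
    y_next = true_response(x_next) + rng.normal(0, 0.05)
    xs, ys = np.append(xs, x_next), np.append(ys, y_next)
    print(f"experiment {step + 1}: x = {x_next:.2f}, observed y = {y_next:.3f}")
```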

Towards a framework and model for acceptable user experiences in e-government physical and virtual identity access management systems

Alotaibi, Sara Jeza January 2013 (has links)
The widespread availability of services on the internet has aggravated the issue of maintaining multiple identities, such as virtual identities that are based on specific login credentials like usernames, passwords and PINs. On the other hand, multiple physical identities also prove to be difficult to maintain, since different sources require the presence of different smart cards, mobile devices or other proofs of identity. Therefore, the modern world is populated with so many virtual and physical Identity Access Management Systems (IAMS) that individuals are required to maintain multiple passwords and login credentials. The tedious task of remembering these can be minimised through the use of single sign-in mechanisms. During recent times, several systems have been developed to provide physical and virtual IAMS; however, most have not been very successful according to specific criteria. Furthermore, alongside increasing the level of awareness of the need to deploy interoperable physical and virtual IAMS, there exists an immediate need for the establishment of clear guidelines for the successful integration of the two media. The importance of and motivation for the integration of the two media are discussed in this thesis with respect to three perspectives: security, which includes identity; user experience, comprising usability; and acceptability, containing accessibility. Not many frameworks and models abide by all guidelines for all of these perspectives; thus, the thesis addresses the immediate need to establish a framework and a model for acceptable user experience for successful integration of the two media for public services within the e-government domain. The IAMS framework is based on the attributes from the researched theories of the three perspectives and expert evaluations of nine unique themes. Regarding the user evaluation to test the proposed Unified Theory of Acceptance and Use of Technology (UTAUT) model, there is an indirect effect on behavioural intentions to use a new prototype system (Ubiquitous Identity Access Management System "UbIAMS") through performance expectancy, effort expectancy, social influence, and through items pertaining to acceptability and user experience.

Expressive and efficient bounded model checking of concurrent software

Morse, Jeremy January 2015 (has links)
To improve automated verification techniques for ANSI-C software, I examine temporal logics for describing program properties, and techniques for increasing the speed of program verification, for both single-threaded and concurrent programs, based on the model checker ESBMC. A technique for evaluating LTL formulae over finite program traces is proposed and evaluated on a piece of industrial software and a suite of benchmarks, with favourable results. Efficient formulations of the model checking problem for SMT solvers are evaluated, and the performance of different solvers compared. Finally, a number of optimisations for concurrent program verification not previously applied to symbolic software model checking are evaluated, resulting in an order-of-magnitude performance improvement over ESBMC's prior, already internationally competitive performance.
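To illustrate the kind of property involved, the sketch below evaluates simple LTL-style formulae (G, F, U) over a finite program trace under finite-trace semantics. It is only an illustration of the semantics; ESBMC's actual LTL support and SMT encodings are far more involved.

```python
# Minimal sketch of evaluating simple LTL-style formulae over a finite program
# trace. Only finite-trace semantics for G, F and U are shown, applied to
# invented program states.

def G(p):            # "globally p"
    return lambda trace: all(p(s) for s in trace)

def F(p):            # "finally (eventually) p"
    return lambda trace: any(p(s) for s in trace)

def U(p, q):         # "p until q"
    def holds(trace):
        for s in trace:
            if q(s):
                return True
            if not p(s):
                return False
        return False
    return holds

# A finite trace of program states (e.g. recorded variable valuations).
trace = [{"lock": 0, "req": 1}, {"lock": 1, "req": 1}, {"lock": 0, "req": 0}]

props = [
    ("G lock in {0,1}", G(lambda s: s["lock"] in (0, 1))),
    ("F released",      F(lambda s: s["lock"] == 0 and s["req"] == 0)),
    ("req U lock",      U(lambda s: s["req"] == 1, lambda s: s["lock"] == 1)),
]
for name, prop in props:
    print(name, "->", prop(trace))
```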

Converting polynomial and rational expressions to normal form

Nickoll, Philip Richard Spencer January 1999 (has links)
The aim of this research is to design and implement a program that will be able to manipulate multiple algebraic expressions and transform them to a 'normalised form'. The expression to be manipulated is user defined (the user will be able to input an expression or read it from a file). This expression will be read in by a scanner and checked with a parser using a set of grammar rules which are to be defined later on. The expression will be able to contain constants (both real and imaginary), identifiers (strings or individual characters) and operators, which include +, -, *, /, ( ), ^ (power sign) and E (exponential sign for constants). The algebraic manipulation process simplifies and sorts the expression into order. The simplification involves removing all bracketed expressions by various methods, including multiplying them out. Like terms are then collected together and sorted into a user-defined order. The order can be alphabetical, terms in ascending or descending power, etc. For rational expressions, the whole expression is converted into one fraction. The program will also be able to inform the user of the total degree of the expression or, if, for example, the expression considered is a polynomial in x, the program will inform the user of the degree of the polynomial with respect to x. The program is built on a PC platform using an object-oriented language. The language initially used was Borland Turbo Pascal version 7, which uses Turbo Vision (object-oriented data structures and user interface), but with the advent of 32-bit programming, Borland C++ 5 was considered a better choice. The STL (Standard Template Library) is used for the data structures and OWL (Object Windows Library) is used for the user interface.
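The normalisation steps described above can be illustrated with SymPy (rather than the Borland C++ implementation built in the project): expanding brackets, collecting like terms, combining a rational expression into a single fraction, and reporting degrees.

```python
# Minimal sketch of the normalisation steps described above, using SymPy as a
# stand-in for the program developed in the thesis.
import sympy as sp

x, y = sp.symbols("x y")

expr = (x + 2) * (x - 3) + y * (x + 1)
expanded = sp.expand(expr)                       # multiply out brackets
collected = sp.collect(expanded, x)              # gather like terms in x
print("normal form:", collected)
print("degree in x:", sp.degree(expanded, x))
print("total degree:", sp.Poly(expanded, x, y).total_degree())

rational = 1 / (x + 1) + x / (x - 1)
print("single fraction:", sp.together(rational)) # combine into one fraction
```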

Error resilient techniques for storage elements of low power design

Yang, Sheng January 2013 (has links)
Over two decades of research has led to numerous low-power design techniques being reported. Two popular techniques are supply voltage scaling and power gating. This thesis studies the impact of these two design techniques on the reliability of embedded processor registers and memory systems in the presence of transient faults, with the aim of developing and validating efficient mitigation techniques that improve reliability at a small cost in energy consumption, performance and area overhead. This thesis presents three original contributions. The first contribution presents a technique for improving the reliability of embedded processors. A key feature of the technique is low cost, which is achieved through reuse of the scan chain for state monitoring, and it is effective because it can correct single and multiple bit errors through hardware and software respectively. To validate the technique, an ARM Cortex-M0 embedded microprocessor is implemented on an FPGA and further synthesised using 65-nm technology to quantify the cost in terms of area, latency and energy. It is shown that the presented technique has a small area overhead (8.6%) with less than 4% worst-case increase in critical path. The second contribution demonstrates that the state integrity of flip-flops is sensitive to process, voltage and temperature (PVT) variation, through measurements from 82 test chips. A PVT-aware state protection technique is presented to ensure the state integrity of flip-flops while achieving maximum leakage savings. The technique consists of a characterisation algorithm and employs horizontal and vertical parity for error detection and correction. Silicon results show that flip-flop state integrity is preserved while achieving up to 17.6% reduction in retention voltage across the 82 dies. Embedded processor memory systems are susceptible to transient errors, and blanket protection of every part of the memory system through ECC is not cost effective. The final contribution addresses the reliability of embedded processor memory systems and describes an architectural simulation-based framework for joint optimisation of reliability, energy consumption and performance. Accurate estimation of memory reliability with targeted protection is proposed to identify and protect the most vulnerable parts of the memory system and minimise protection cost. Furthermore, L1-cache resizing together with voltage and frequency scaling is proposed for further energy savings while maintaining performance and reliability. The contributions presented are supported by detailed analyses using state-of-the-art design automation tools and in-house software tools, and validated using FPGA and silicon implementations of commercial low-power embedded processors.
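As an illustration of the horizontal/vertical parity scheme mentioned in the second contribution, the sketch below locates and corrects a single bit flip in a small bit array by intersecting the failing row and column parities. The values and array size are invented, and the thesis implements this in hardware rather than software.

```python
import numpy as np

# Minimal sketch of horizontal/vertical parity protection: store a parity bit
# per row and per column of a bank of stored bits, so that a single bit flip
# is located by the intersection of the failing row and column parities and
# can then be corrected.

def parities(bits):
    """Row (horizontal) and column (vertical) parity of a 2-D bit array."""
    return bits.sum(axis=1) % 2, bits.sum(axis=0) % 2

state = np.array([[1, 0, 1, 1],
                  [0, 1, 1, 0],
                  [1, 1, 0, 0]])
row_p, col_p = parities(state)                 # stored alongside the state

corrupted = state.copy()
corrupted[1, 2] ^= 1                           # a single-event upset

new_row_p, new_col_p = parities(corrupted)
bad_rows = np.flatnonzero(new_row_p != row_p)
bad_cols = np.flatnonzero(new_col_p != col_p)

if bad_rows.size == 1 and bad_cols.size == 1:  # single-bit error: correctable
    r, c = bad_rows[0], bad_cols[0]
    corrupted[r, c] ^= 1
    print(f"corrected bit ({r}, {c}); state restored:",
          np.array_equal(corrupted, state))
```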
