Optimising runtime reconfigurable designs for high performance applications

Niu, Xinyu January 2015 (has links)
This thesis proposes novel optimisations for high performance runtime reconfigurable designs. For a reconfigurable design, the proposed approach identifies idle resources introduced by static design approaches and exploits runtime reconfiguration to eliminate them. The approach covers the circuit level, the function level, and the system level. At the circuit level, a method is proposed for tuning reconfigurable designs with two analytical models: a resource model covering computational resources, memory resources, and memory bandwidth, and a performance model for estimating execution time. This method is applied to tuning implementations of finite-difference algorithms, optimising arithmetic operators and memory bandwidth based on algorithmic parameters, and eliminating idle resources through runtime reconfiguration. At the function level, a method is proposed to automatically identify and exploit runtime reconfiguration opportunities while optimising resource utilisation. The method is based on the Reconfiguration Data Flow Graph, a new hierarchical graph structure that enables runtime reconfigurable designs to be synthesised in three steps: function analysis, configuration organisation, and runtime solution generation. At the system level, a method is proposed for optimising reconfigurable designs by dynamically adapting them to the resources available at runtime in a reconfigurable system. This method comprises two steps, compile-time optimisation and runtime scaling, which together enable efficient workload distribution, asynchronous communication scheduling, and domain-specific optimisations. It can be used in developing effective servers for high performance applications.
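The circuit-level pairing of a resource model with a performance model can be illustrated by a minimal, hypothetical sketch: parallelism is increased while the replicated operators still fit the device, and execution time is bounded by whichever of compute throughput or memory bandwidth is the bottleneck. All function names and figures below are illustrative assumptions, not the thesis's actual models.

```python
# Hypothetical sketch of circuit-level design tuning: a resource model checks
# that replicated operators fit the device, and a performance model estimates
# runtime as the slower of compute time and memory-transfer time.
def fits(parallelism, luts_per_op, total_luts):
    """Resource model: do `parallelism` replicated operators fit on the device?"""
    return parallelism * luts_per_op <= total_luts

def exec_time(ops, bytes_moved, parallelism, freq_hz, bandwidth_bps):
    """Performance model: runtime is bounded by compute or memory bandwidth."""
    compute_t = ops / (parallelism * freq_hz)
    memory_t = bytes_moved / bandwidth_bps
    return max(compute_t, memory_t)

def best_parallelism(ops, bytes_moved, luts_per_op, total_luts, freq_hz, bandwidth_bps):
    """Design-space tuning: pick the fitting parallelism with the lowest runtime."""
    best = None
    p = 1
    while fits(p, luts_per_op, total_luts):
        t = exec_time(ops, bytes_moved, p, freq_hz, bandwidth_bps)
        if best is None or t < best[1]:
            best = (p, t)
        p *= 2
    return best
```

In this toy search, once the design becomes memory-bound, adding further operators no longer reduces runtime, which is exactly the kind of idle resource that runtime reconfiguration could then reclaim.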

Socio-economic aware data forwarding in mobile sensing networks and systems

Adeel, Usman January 2014 (has links)
The vision for smart sustainable cities is one in which urban sensing is core to optimising city operation, which in turn improves citizen contentment. Wireless Sensor Networks are envisioned to become a pervasive form of data collection and analysis for smart cities, but deploying millions of inter-connected sensors in a city can be cost-prohibitive. Given the ubiquity and ever-increasing capabilities of sensor-rich mobile devices, Wireless Sensor Networks with Mobile Phones (WSN-MP) provide a highly flexible and ready-made wireless infrastructure for future smart cities. In a WSN-MP, mobile phones not only generate the sensing data but also relay the data using cellular communication or short-range opportunistic communication. The largest challenge here is the efficient transmission of potentially huge volumes of sensor data over sometimes meagre or faulty communications networks in a cost-effective way. This thesis investigates distributed data forwarding schemes in three types of WSN-MP: WSN with mobile sinks (WSN-MS), WSN with mobile relays (WSN-HR) and Mobile Phone Sensing Systems (MPSS). For these dynamic WSN-MP, realistic models are established and distributed algorithms are developed for efficient network performance, including data routing and forwarding, sensing rate control and pricing. This thesis also considers realistic urban sensing issues such as economic incentivisation and demonstrates how social network and mobility awareness improves data transmission. Through simulations and real testbed experiments, it is shown that the proposed algorithms perform better than state-of-the-art schemes.

Fast and robust methods for non-rigid registration of medical images

Pszczolkowski Parraguez, Stefan January 2014 (has links)
The automated analysis of medical images plays an increasingly significant part in many clinical applications. Image registration is an important and widely used technique in this context. Examples of its use include, but are not limited to: longitudinal studies, atlas construction, statistical analysis of populations and automatic or semi-automatic parcellation of structures. Although image registration has been a subject of active research since the 1990s, it remains a challenging topic with many open issues. This thesis seeks to address some of these open challenges by proposing fast and robust methods based on the widely utilised and well-established registration framework of B-spline Free-Form Deformations (FFD). First, a statistical method is incorporated into the FFD model in order to obtain a fast learning-based method that produces results in accordance with the underlying variability of the population under study; several statistical analysis methods that can be used in this context are compared. Secondly, a method is proposed to improve the convergence of the B-spline FFD method by learning a gradient projection using principal component analysis and linear regression. Furthermore, a robust similarity measure is proposed that enables the registration of images affected by intensity inhomogeneities and images with pathologies, e.g. lesions and/or tumours. All the methods presented in this thesis have been extensively evaluated using both synthetic data and large datasets of real clinical data, such as Magnetic Resonance (MR) images of the brain and heart.

Multi-sensor fusion for human-robot interaction in crowded environments

McKeague, Stephen John January 2014 (has links)
Robot assistants are becoming a promising solution to challenges associated with the ageing population. Human-Robot Interaction (HRI) allows a robot to understand the intentions of humans in an environment and react accordingly. This thesis proposes HRI techniques to facilitate the transition of robots from lab-based research to real-world environments. The HRI aspects addressed in this thesis are illustrated in the following scenario: an elderly person, engaged in conversation with friends, wishes to attract a robot's attention. This composite task consists of many problems. The robot must detect and track the subject in a crowded environment. To engage with the user, it must track their hand movement. Knowledge of the subject's gaze would ensure that the robot does not react to the wrong person. Understanding the subject's group participation would enable the robot to respect existing human-human interaction. Many existing solutions to these problems are too constrained for natural HRI in crowded environments. Some require initial calibration or static backgrounds. Others deal poorly with occlusions, illumination changes, or real-time operation requirements. This work proposes algorithms that fuse multiple sensors to remove these restrictions and increase the accuracy over the state-of-the-art.
The main contributions of this thesis are: A hand and body detection method, with a probabilistic algorithm for their real-time association when multiple users and hands are detected in crowded environments; An RGB-D sensor-fusion hand tracker, which increases position and velocity accuracy by combining a depth-image based hand detector with Monte-Carlo updates using colour images; A sensor-fusion gaze estimation system, combining IR and depth cameras on a mobile robot to give better accuracy than traditional visual methods, without the constraints of traditional IR techniques; A group detection method, based on sociological concepts of static and dynamic interactions, which incorporates real-time gaze estimates to enhance detection accuracy.

Argumentation accelerated reinforcement learning

Gao, Yang January 2014 (has links)
Reinforcement Learning (RL) is a popular statistical Artificial Intelligence (AI) technique for building autonomous agents, but it suffers from the curse of dimensionality: the computational requirement for obtaining the optimal policies grows exponentially with the size of the state space. Integrating heuristics into RL has proven to be an effective approach to combat this curse, but deriving high-quality heuristics from people's (typically conflicting) domain knowledge is challenging, yet it has received little research attention. Argumentation theory is a logic-based AI technique well known for its conflict resolution capability and intuitive appeal. In this thesis, we investigate the integration of argumentation frameworks into RL algorithms, so as to improve the convergence speed of RL algorithms. In particular, we propose a variant of the Value-based Argumentation Framework (VAF) to represent domain knowledge and to derive heuristics from this knowledge. We prove that the heuristics derived from this framework can effectively instruct individual learning agents as well as multiple cooperative learning agents. In addition, we propose the Argumentation Accelerated RL (AARL) framework to integrate these heuristics into different RL algorithms via Potential Based Reward Shaping (PBRS) techniques: we use classical PBRS techniques for flat RL (e.g. SARSA(λ)) based AARL, and propose a novel PBRS technique for MAXQ-0, a hierarchical RL (HRL) algorithm, so as to implement HRL based AARL. We empirically test two AARL implementations - SARSA(λ)-based AARL and MAXQ-based AARL - in multiple application domains, including single-agent and multi-agent learning problems. Empirical results indicate that AARL can improve the convergence speed of RL, and can also be easily used by people who have little background in Argumentation and RL.
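The classical PBRS mechanism the abstract refers to can be sketched in a few lines: a potential function Phi over states (here standing in for the heuristic derived from the argumentation framework) contributes a bonus F(s, s') = γΦ(s') - Φ(s) on every transition, which is known to leave the optimal policy unchanged. The potential function and state names below are illustrative, not taken from the thesis.

```python
def shaped_reward(r, s, s_next, phi, gamma=0.99):
    """PBRS: add the bonus F(s, s') = gamma*phi(s') - phi(s) to the raw reward.
    Potential-based shaping of this form preserves the optimal policy."""
    return r + gamma * phi(s_next) - phi(s)

def sarsa_update(Q, s, a, r, s_next, a_next, phi, alpha=0.1, gamma=0.99):
    """One tabular SARSA step with the shaping bonus folded into the reward."""
    target = shaped_reward(r, s, s_next, phi, gamma) + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q
```

A heuristic that assigns higher potential to states the argumentation framework deems favourable steers exploration toward them early on, which is the convergence-speed effect the thesis evaluates.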

Compositional behaviour and reliability models for adaptive component-based architectures

Fonseca Rodrigues, Pedro January 2015 (has links)
The increasing scale and distribution of modern pervasive computing and service-based platforms makes manual maintenance and evolution difficult and too slow. Systems should therefore be designed to self-adapt in response to environment changes, which requires the use of on-line models and analysis. Although there has been a considerable amount of work on architectural modelling and behavioural analysis of component-based systems, there is a need for approaches that integrate the architectural, behavioural and management aspects of a system. In particular, the lack of support for composability in probabilistic behavioural models prevents their systematic use for adapting systems based on changes in their non-functional properties. Of these non-functional properties, this thesis focuses on reliability. We introduce Probabilistic Component Automata (PCA) for describing the probabilistic behaviour of those systems. Our formalism simultaneously overcomes three of the main limitations of existing work: it preserves a close correspondence between the behavioural and architectural views of a system in both abstractions and semantics; it is composable, as behavioural models of composite components are automatically obtained by combining the models of their constituent parts; and lastly it is probabilistic, thereby enabling analysis of non-functional properties. PCA also provides constructs for representing failure, failure propagation and failure handling in component-based systems in a manner that closely corresponds to the use of exceptions in programming languages. Although PCA is used throughout this thesis for reliability analysis, the model can also be seen as an abstract process algebra that may be applicable for analysis of other system properties. We further show how reliability analysis based on PCA models can be used to perform architectural adaptation on distributed component-based systems and evaluate the computational cost of decentralised adaptation decisions.
To mitigate the state-explosion problem associated with composite models, we further introduce an algorithm to reduce a component's PCA model to one that only represents its interface behaviour. We formally show that such a model preserves the properties of the original representation. By experiment, we show that the reduced models are significantly smaller than the original, achieving a reduction of more than 80% in both the number of states and transitions. A further benefit of the approach is that it allows component profiling and probabilistic interface behaviour to be extracted independently for each component, thereby enabling its exchange between different organisations without revealing commercially sensitive aspects of the components' implementations. The contributions and results of this work are evaluated both through a series of small scale examples and through a larger case study of an e-Banking application derived from Java EE training materials. Our work shows how probabilistic non-functional properties can be integrated with the architectural and behavioural models of a system in an intuitive and scalable way that enables automated architecture reconfiguration based on reliability properties using composable models.

Fusion of wearable and visual sensors for human motion analysis

Wong, Charence Cheuk Lun January 2015 (has links)
Human motion analysis is concerned with the study of human activity recognition, human motion tracking, and the analysis of human biomechanics. Human motion analysis has applications within areas of entertainment, sports, and healthcare. For example, activity recognition, which aims to understand and identify different tasks from motion can be applied to create records of staff activity in the operating theatre at a hospital; motion tracking is already employed in some games to provide an improved user interaction experience and can be used to study how medical staff interact in the operating theatre; and human biomechanics, which is the study of the structure and function of the human body, can be used to better understand athlete performance, pathologies in certain patients, and assess the surgical skill of medical staff. As health services strive to improve the quality of patient care and meet the growing demands required to care for expanding populations around the world, solutions that can improve patient care, diagnosis of pathology, and the monitoring and training of medical staff are necessary. Surgical workflow analysis, for example, aims to assess and optimise surgical protocols in the operating theatre by evaluating the tasks that staff perform and measurable outcomes. Human motion analysis methods can be used to quantify the activities and performance of staff for surgical workflow analysis; however, a number of challenges must be overcome before routine motion capture of staff in an operating theatre becomes feasible. Current commercial human motion capture technologies have demonstrated that they are capable of acquiring human movement with sub-centimetre accuracy; however, the complicated setup procedures, size, and embodiment of current systems make them cumbersome and unsuited for routine deployment within an operating theatre. 
Recent advances in pervasive sensing have resulted in camera systems that can detect and analyse human motion, and small wearable sensors that can measure a variety of parameters from the human body, such as heart rate, fatigue, balance, and motion. The work in this thesis investigates different methods that enable human motion to be more easily, reliably, and accurately captured through ambient and wearable sensor technologies, to address some of the main challenges that have limited the use of motion capture technologies in certain areas of study. Sensor embodiment and accuracy of activity recognition is one of the challenges that affect the adoption of wearable devices for monitoring human activity. Using a single inertial sensor, which captures the movement of the subject, a variety of motion characteristics can be measured. For patients, wearable inertial sensors can be used in long-term activity monitoring to better understand the condition of the patient and potentially identify deviations from normal activity. For medical staff, inertial sensors can be used to capture tasks being performed for automated workflow analysis, which is useful for staff training, optimisation of existing processes, and early indications of complications within clinical procedures. Feature extraction and classification methods are introduced in this thesis that demonstrate motion classification accuracies of over 90% for five different classes of walking motion using a single ear-worn sensor. To capture human body posture, current capture systems generally require a large number of sensors or reflective reference markers to be worn on the body, which presents a challenge for many applications, such as monitoring human motion in the operating theatre, as they may restrict natural movements and make setup complex and time consuming. To address this, a method is proposed which uses regression to estimate motion using a subset of fewer wearable inertial sensors.
This method is demonstrated using three sensors on the upper body and is shown to achieve mean estimation accuracies as low as 1.6cm, 1.1cm, and 1.4cm for the hand, elbow, and shoulders, respectively, when compared with the gold standard optical motion capture system. Using a subset of three sensors, mean errors for hand position reach 15.5cm. Unlike human motion capture systems that rely on vision and reflective reference point markers, commonly known as marker-based optical motion capture, wearable inertial sensors are prone to inaccuracies resulting from an accumulation of inaccurate measurements, which becomes increasingly prevalent over time. Two methods are introduced in this thesis which aim to solve this challenge using visual rectification of the assumed state of the subject. Using a ceiling-mounted camera, a human detection and human motion tracking method is introduced to improve the average mean accuracy of tracking to within 5.8cm in a 3m x 5m laboratory. To improve the accuracy of capturing the position of body parts and posture for human biomechanics, a camera is also utilised to track the body part movements and provide visual rectification of human pose estimates from inertial sensing. For most subjects, deviations of less than 10% from the ground truth are achieved for hand positions, which exhibit the greatest error, and the occurrence of other common sources of visual and inertial estimation error, such as measurement noise, visual occlusion, and sensor calibration errors, is shown to be reduced.
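The subset-sensor idea above, learning a map from a few wearable readings to full joint positions against a motion-capture ground truth, can be sketched as a plain least-squares regression. The thesis's actual regressor and features may differ; this is a minimal stand-in.

```python
import numpy as np

def fit_pose_regressor(sensor_feats, joint_positions):
    """Least-squares map from sensor features to joint coordinates.
    sensor_feats: (n_frames, n_feats); joint_positions: (n_frames, n_coords).
    Training labels would come from a gold-standard optical capture session."""
    X = np.hstack([sensor_feats, np.ones((sensor_feats.shape[0], 1))])  # bias column
    W, *_ = np.linalg.lstsq(X, joint_positions, rcond=None)
    return W

def predict_pose(W, sensor_feats):
    """Estimate joint positions for new frames from the reduced sensor set."""
    X = np.hstack([sensor_feats, np.ones((sensor_feats.shape[0], 1))])
    return X @ W
```

Once fitted, only the small sensor subset needs to be worn; the regressor fills in the joints the removed sensors would have measured.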

Fault localization in service-based systems hosted in mobile ad hoc networks

Novotny, Petr January 2014 (has links)
Fault localization in general refers to a technique for identifying the likely root causes of failures observed in systems formed from components. Fault localization in systems deployed on mobile ad hoc networks (MANETs) is a particularly challenging task because those systems are subject to a wider variety and higher incidence of faults than those deployed in fixed networks, the resources available to track fault symptoms are severely limited, and many of the sources of faults in MANETs are by their nature transient. We present a suite of three methods, each responsible for part of the overall task of localizing the faults occurring in service-based systems hosted on MANETs. First, we describe a dependence discovery method, designed specifically for this environment, yielding dynamic snapshots of dependence relationships discovered through decentralized observations of service interactions. Next, we present a method for localizing the faults occurring in service-based systems hosted on MANETs. We employ both Bayesian and timing-based reasoning techniques to analyze the dependence data produced by the dependence discovery method in the context of a specific fault propagation model, deriving a ranked list of candidate fault locations. In the third method, we present an epidemic protocol designed for transferring the dependence and symptom data between nodes of MANETs with low connectivity. The protocol creates a network-wide synchronization overlay and transfers the data over intermediate nodes in periodic synchronization cycles. We introduce a new tool for simulation of service-based systems hosted on MANETs and use the tool to evaluate several operational aspects of the methods. Next, we present an implementation of the methods in Java EE and use an emulation environment to evaluate them. We present the results of an extensive set of experiments exploring a wide range of operational conditions to evaluate the accuracy and performance of our methods.
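The "ranked list of candidate fault locations" step can be illustrated with a toy scoring scheme over a dependence snapshot: a service is scored by how many failed interactions it could explain, and penalised when it also appears in successful ones. This is an assumption-laden stand-in, far simpler than the Bayesian and timing-based reasoning the thesis describes.

```python
# Toy fault-candidate ranking over a dependence snapshot.
# deps maps each observed interaction to the set of services it traversed
# (as a dependence discovery method might report); failed/succeeded list
# interaction ids with observed symptoms or clean completions.
def rank_candidates(deps, failed, succeeded):
    scores = {}
    for i in failed:
        for svc in deps[i]:
            scores[svc] = scores.get(svc, 0.0) + 1.0  # could explain this failure
    for i in succeeded:
        for svc in deps[i]:
            if svc in scores:
                scores[svc] *= 0.5  # also seen working, so a weaker suspect
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

The output is a ranked list of (service, score) pairs, mirroring the shape of the method's result even though the real analysis weighs evidence probabilistically and uses timing information.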

Spatial and temporal analysis of facial actions

Jiang, Bihan January 2014 (has links)
Facial expression recognition has been an active topic in computer vision since the 1990s due to its wide applications in human-computer interaction, entertainment, security, and health care. Previous works on automatic analysis of facial expressions have focused mostly on detecting prototypic expressions of basic emotions like happiness and anger. In contrast, the Facial Action Coding System (FACS) is one of the most comprehensive and objective ways to describe facial expressions. It associates facial expressions with the actions of the muscles that produce them by defining a set of atomic movements called Action Units (AUs). The system allows any facial expression to be uniquely described by a combination of AUs. Over the past decades, extensive research has been conducted by psychologists and neuroscientists on various applications of facial expression analysis using FACS. Automating FACS coding would make this research faster and more widely applicable, opening up new avenues to understanding how we communicate through facial expressions. Morphology and dynamics are the two aspects of facial actions that are crucial for the interpretation of human facial behaviour. The focus of this thesis is how to represent and learn the rich facial texture changes in both the spatial and temporal domain. The effectiveness of spatial and spatio-temporal facial representations and their roles in detecting the activation and temporal dynamics of facial actions are explored. In the spatial domain, a novel feature extraction strategy is proposed based on heuristically defined regions, from each of which a separate classifier is trained, with the results fused at the decision level. In the temporal domain, a novel dynamic appearance descriptor is presented by extending the static appearance descriptor Local Phase Quantisation (LPQ) to the temporal domain using Three Orthogonal Planes (TOP).
The resulting dynamic appearance descriptor LPQ-TOP is applied to detect the latent temporal information representing facial appearance changes and to explicitly model the facial dynamics of AUs in terms of their temporal segments. Finally, a parametric temporal alignment method is proposed. Such a strategy can accommodate very flexible time warp functions and is able to deal with both sequence-to-sequence and sub-sequence alignment. This method also opens up a new approach to the problem of AU temporal segment detection. This thesis contributes to facial action recognition by modelling the spatial and temporal texture changes for AU activation detection and AU temporal segmentation. We advance the performance of state-of-the-art facial action recognition systems, as demonstrated on a number of commonly used databases.

Uncovering disease associations via integration of biological networks

Sun, Kai January 2014 (has links)
Current understanding of how diseases are associated with each other is mainly based on the similarity of clinical phenotypes. However, without considering the underlying biological mechanisms of diseases, such knowledge is limited and can even be misleading. With a growing body of transcriptomic, proteomic, metabolomic and genomic data describing diseases, we proposed to gain insights into diseases and their relationships in the light of large-scale biological data. We modelled these data as networks of inter-connected elements, and developed computational methods for their analysis. We exploited systematic measures based on graphlets to uncover biological knowledge from network topology. Since some doubt has recently arisen concerning the applicability of graphlet-based measures to networks of low edge density, we first evaluated the use of graphlet-based measures and demonstrated their suitability for biological network comparison. We also validated the use of graphlet-based measures for finding well-fitting random models for protein-protein interaction (PPI) networks, and demonstrated that five viral PPI networks are well fit by several theoretical models not previously tested. To gain novel insights into diseases and their relationships, we integrated different types of biological data and developed computational approaches to compare diseases based on their underlying mechanisms. We applied several similarity measures, including standard methods and two novel network-based measures, to estimate disease association scores. We showed that disease associations predicted by our measures are correlated with associations derived from standard disease classification systems, comorbidity data, genome-wide association studies and literature co-occurrence data significantly more strongly than expected at random, demonstrating the ability of our measures to recover known disease associations.
Furthermore, we presented case studies to validate the use of our measures in identifying previously undiscovered disease associations. We believe novel associations uncovered in our studies can enhance our knowledge of disease relationships, and may further lead to improvements in disease diagnosis and treatment.
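The graphlet idea underlying the abstract's network comparisons can be sketched minimally: count the small connected subgraphs (here just 2- and 3-node graphlets: edges, paths, triangles) in each network and compare networks by the distance between their normalised count vectors. Real graphlet-based measures enumerate up to 5-node graphlets and use orbit-aware statistics; this is only an illustrative reduction.

```python
from itertools import combinations

def graphlet_counts(edges):
    """Count 2- and 3-node graphlets: [edges, 3-node paths, triangles]."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    paths = triangles = 0
    for a, b, c in combinations(adj, 3):
        k = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
        if k == 2:      # connected on two of three pairs: a path graphlet
            paths += 1
        elif k == 3:    # all three pairs connected: a triangle
            triangles += 1
    return [len(edges), paths, triangles]

def graphlet_distance(edges1, edges2):
    """Compare two networks via their normalised graphlet count vectors."""
    c1, c2 = graphlet_counts(edges1), graphlet_counts(edges2)
    s1, s2 = sum(c1) or 1, sum(c2) or 1  # normalise for network size
    return sum(abs(a / s1 - b / s2) for a, b in zip(c1, c2))
```

A distance of zero means identical graphlet profiles; larger values indicate topologically dissimilar networks, which is the signal such measures use when matching PPI networks against candidate random models.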
