31 
Approximate feedback solutions for differential games: theory and applications. Mylvaganam, Thulasi. January 2014.
Differential games deal with problems involving multiple players, possibly competing, that influence common dynamics via their actions, commonly referred to as strategies. Thus, differential games introduce the notion of strategic decision making and have a wide range of applications. The work presented in this thesis has two aims. First, constructive approximate solutions to differential games are provided. Different areas of application for the theory are then suggested through a series of examples. Notably, multi-agent systems are identified as a possible application domain for differential game theory. Problems involving multi-agent systems may be formulated as nonlinear differential games for which closed-form solutions do not exist in general, and in these cases the constructive approximate solutions may be useful. The thesis commences with an introduction to differential games, focusing on feedback Nash equilibrium solutions. Obtaining such solutions involves solving coupled partial differential equations. Since closed-form solutions for these cannot, in general, be found, two methods of constructing approximate solutions for a class of nonlinear, non-zero-sum differential games are developed and applied to some illustrative examples, including the multi-agent collision avoidance problem. The results are extended to a class of nonlinear Stackelberg differential games. The problem of monitoring a region using a team of agents is then formulated as a differential game for which ad-hoc solutions, using ideas introduced previously, are found. Finally, mean-field games, which extend differential games to infinitely many players, are considered. It is shown that for a class of mean-field games, solutions rely on a set of ordinary differential equations in place of the two coupled partial differential equations which normally characterise the problem.
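The flavour of such feedback Nash solutions can be seen in the simplest setting, a scalar two-player linear-quadratic game, where the coupled partial differential equations reduce to coupled algebraic Riccati equations. The sketch below is illustrative only; the parameter values and the damped fixed-point iteration are my own choices, not taken from the thesis:

```python
import math

def solve_coupled_riccati(a, b, q, r, damping=0.5, iters=200):
    """Fixed-point iteration for the coupled scalar algebraic Riccati
    equations of a two-player LQ differential game (feedback Nash):
    0 = q_i + 2*a*p_i - s_i*p_i**2 - 2*s_j*p_i*p_j, with s_i = b_i**2/r_i,
    giving the strategies u_i = -(b_i/r_i)*p_i*x."""
    s = [b[i] ** 2 / r[i] for i in range(2)]
    p = [0.0, 0.0]
    for _ in range(iters):
        new_p = []
        for i in range(2):
            j = 1 - i
            # For fixed p_j, player i's equation is a quadratic in p_i;
            # take the positive (stabilizing) root and damp the update.
            coef_b = 2.0 * s[j] * p[j] - 2.0 * a
            root = (-coef_b + math.sqrt(coef_b ** 2 + 4.0 * s[i] * q[i])) / (2.0 * s[i])
            new_p.append((1 - damping) * p[i] + damping * root)
        p = new_p
    return p

# Symmetric example: the exact Nash solution is p1 = p2 = 1, which
# gives stable closed-loop dynamics a - s1*p1 - s2*p2 = -1.
p1, p2 = solve_coupled_riccati(a=1.0, b=[1.0, 1.0], q=[1.0, 1.0], r=[1.0, 1.0])
print(p1, p2)  # both converge to 1.0
```

In the general nonlinear, non-zero-sum case no such finite-dimensional reduction exists, which is what motivates the approximate constructions developed in the thesis.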

32 
Kernel-based adaptive estimation: multidimensional and state-space approaches. Tobar Henriquez, Felipe. January 2014.
Several disciplines, from engineering to social sciences, critically depend on adaptive signal estimation either to remove observation noise (filtering) or to approximate quantities before they become available (prediction). When an optimal estimator cannot be expressed in closed form, e.g. due to model uncertainty or complexity, machine learning algorithms have proven able to learn models which capture rich relationships from large datasets. This thesis proposes two novel approaches to signal estimation based on support vector regression (SVR): high-dimensional kernel learning (HDKL) and kernel-based state-space modelling (KSSM). In real-world applications, signal dynamics usually depend on both time and the value of the signal itself. The HDKL concept extends the standard, single-kernel, SVR estimation approach by considering a feature space constructed as an ensemble of real-valued feature spaces; the resulting feature space provides highly-localised estimation by averaging the sub-kernel estimates and is well-suited for multichannel signals, as it captures inter-channel data dependency. The thesis then provides a rigorous account of the existence of such higher-dimensional reproducing kernel Hilbert spaces (RKHS) and their corresponding kernels by considering the complex-, quaternion- and vector-valued cases. Current kernel adaptive filters employ nonlinear autoregressive models and express the current value of the signal as a function of past values with added noise. The motivation for the second main contribution of this thesis is to depart from this class of models and propose a state-space model designed using kernels (KSSM), whereby the signal of interest is a latent state and the observations are noisy measurements of the hidden process. This formulation allows for jointly estimating the signal (state) and the parameters, and is robust to observation noise.
The posterior density of the kernel mixing parameters is then found in an unsupervised fashion using Markov chain Monte Carlo and particle filters, and both the offline and online cases are addressed. The capabilities of the proposed algorithms are initially illustrated by simulation examples using synthetic data in a controlled environment. Finally, both the HDKL and the KSSM approaches are validated in the estimation of real-world signals including body-motion trajectories, bivariate wind speed, point-of-gaze location, and national grid frequency.
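As a point of reference for kernel adaptive filtering of this kind, a minimal kernel least-mean-square (KLMS) filter can be sketched as follows. This is a standard baseline, not the thesis's HDKL or KSSM algorithms, and the step size, kernel width and sine-prediction task are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(x, centers, gamma=1.0):
    """Gaussian kernel between one input and an array of stored centres."""
    return np.exp(-gamma * np.sum((x - centers) ** 2, axis=-1))

class KLMS:
    """Minimal kernel least-mean-square filter: each new sample becomes a
    kernel centre weighted by the step size times the prediction error."""
    def __init__(self, eta=0.5, gamma=1.0):
        self.eta, self.gamma = eta, gamma
        self.centers, self.alphas = [], []

    def predict(self, x):
        if not self.centers:
            return 0.0
        k = gaussian_kernel(x, np.array(self.centers), self.gamma)
        return float(np.dot(self.alphas, k))

    def update(self, x, y):
        e = y - self.predict(x)           # instantaneous prediction error
        self.centers.append(np.asarray(x, dtype=float))
        self.alphas.append(self.eta * e)  # new centre weighted by the error
        return e

# One-step-ahead prediction of a noiseless sine from 3 past samples.
t = np.arange(400) * 0.1
sig = np.sin(t)
f = KLMS(eta=0.5, gamma=5.0)
errors = [abs(f.update(sig[i - 3:i], sig[i])) for i in range(3, len(sig))]
print(np.mean(errors[:50]), np.mean(errors[-50:]))  # error shrinks as it learns
```

The nonlinear autoregressive structure criticised in the abstract is visible here: the filter only ever sees a window of past signal values, which is precisely what the KSSM formulation replaces with a latent state.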

33 
Adaptive protection and control for wide-area blackout prevention. bin Mohd Ariff, Mohd. January 2014.
Technical analyses of several recent power blackouts revealed that a group of generators going out-of-step with the rest of the power system is often a precursor of a complete system collapse. Out-of-step protection is designed to assess the stability of the evolving swing after a disturbance and take control action accordingly. However, the settings of out-of-step relays are often found to be unsatisfactory because the electromechanical swings assumed during relay commissioning differ from those occurring in practice. These concerns motivated the development of a novel approach to recalculate the out-of-step protection settings to suit the prevalent operating condition. With phasor measurement unit (PMU) technology, it is possible to adjust the settings of out-of-step relays in real-time. The setting of an out-of-step relay is primarily determined by three dynamic parameters: direct-axis transient reactance, quadrature-axis speed voltage and generator inertia. In a complex power network, these parameters are the dynamic parameters of an equivalent model of a coherent group of generators. Hence, it is essential first to identify the coherent groups of generators and estimate the dynamic model parameters of each generator in the system, in order to form the dynamic equivalent model of the system. The work presented in this thesis develops a measurement-based technique to identify the coherent areas of a power system network by analysing the measured data obtained from the system. The method is based on multivariate analysis of the signals, using independent component analysis (ICA). Also, a technique for estimating the dynamic model parameters of the generators in the system has been developed: the dynamic model parameters of synchronous generators are estimated by processing the PMU measurements using an unscented Kalman filter (UKF).
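The electromechanical swings such a relay must assess are commonly modelled by the classical swing equation. The sketch below, with illustrative per-unit parameters that are my own choices rather than values from the thesis, shows how the fault-clearing time decides whether a generator stays in step:

```python
import math

def simulate_swing(t_clear, M=0.2, D=0.05, Pm=0.8, Pmax=1.2,
                   t_end=5.0, dt=1e-3):
    """Classical swing equation M*d2(delta)/dt2 = Pm - Pe - D*d(delta)/dt.
    A bolted fault removes the electrical output (Pe = 0) until it is
    cleared at t_clear; afterwards Pe = Pmax*sin(delta). Returns the
    maximum rotor angle reached. Parameters are illustrative only."""
    delta = math.asin(Pm / Pmax)   # pre-fault equilibrium angle
    omega = 0.0
    max_delta = delta
    t = 0.0
    while t < t_end:
        pe = 0.0 if t < t_clear else Pmax * math.sin(delta)
        acc = (Pm - pe - D * omega) / M
        omega += acc * dt           # semi-implicit Euler step
        delta += omega * dt
        max_delta = max(max_delta, delta)
        t += dt
    return max_delta

stable = simulate_swing(t_clear=0.1)   # fast clearing: bounded swing
unstable = simulate_swing(t_clear=0.8) # slow clearing: pole slip (delta > pi)
print(stable, unstable)
```

An out-of-step relay, in effect, has to make this stable/unstable call online from measurements, which is why its settings depend on the equivalent inertia and reactances that the thesis estimates from PMU data.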

34 
Assisting search and rescue through visual attention. Mardell, James. January 2014.
With the ubiquity of visual data being recorded, we now have the ability to view vast amounts of visual imagery. However, searching through imagery for an indeterminate target in tasks such as security baggage inspection, medical scan screening and Wilderness Search and Rescue (WiSAR) remains problematic for most people and cannot be automated. If the imagery were presented to account for the way in which humans cognitively process such visuals, then the success of these tasks might be improved. This thesis proposes and evaluates a series of presentation methods that manipulate imagery to seek this improvement. A series of user experience studies was conducted. Given the task of searching for inconspicuous 'lost' human beings in a WiSAR scenario, subjects observed multiple sequences of aerial photography embodied in six specially designed presentations. These presentations were designed following an analysis of the existing visual attention literature. The first study compared the standard live (i.e. scrolling) view of the terrain to a static representation. This static portrayal of aerial search yielded an improved success rate for target location. The second method adapted the static representation by segmenting the image into smaller tiles that were displayed for correspondingly shorter durations, while the third method enlarged the segmented tiles to fill the display. With increased segmentation, the ability of subjects to locate targets was broadly unaffected. The fourth study investigated two methods that use eye-tracking equipment to dynamically enhance the display. Contained within this thesis are the findings from these four studies, which include the analysis of each subject's performance, opinions and eye-movement behaviour. Each presentation method was inspired by a proposed model of visual search developed in this thesis. Ultimately, the static method is revealed as the most effective for the chosen scenario of WiSAR.

35 
Online timing slack measurement and its application in field-programmable gate arrays. Levine, Joshua M. January 2014.
Reliability, power consumption and timing performance are key concerns for today's integrated circuits. Measurement techniques capable of quantifying the timing characteristics of a circuit while it is operating facilitate a range of benefits. Delay variation due to environmental and operational conditions, as well as degradation, can be monitored by tracking changes in timing performance. Using the measurements in a closed loop to control power supply voltage or clock frequency allows for the reduction of timing safety margins, leading to improvements in power consumption or throughput performance through the exploitation of better-than-worst-case operation. This thesis describes a novel online timing slack measurement method which can directly measure the timing performance of a circuit, accurately and with minimal overhead. Enhancements allow for the improvement of absolute accuracy and resolution. A compilation flow is reported that can automatically instrument arbitrary circuits on FPGAs with the measurement circuitry. On its own, this measurement method is able to track the "health" of an integrated circuit, from commissioning through its lifetime, warning of impending failure or instigating pre-emptive degradation mitigation techniques. The use of the measurement method in a closed-loop dynamic voltage and frequency scaling scheme has been demonstrated, achieving significant improvements in power consumption and throughput performance.
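The closed-loop idea can be caricatured in a few lines: if the slack between the clock period and the measured critical-path delay exceeds a safety margin, the supply voltage can be lowered. The toy delay model, margin and step size below are invented for illustration and bear no relation to the thesis's FPGA measurements:

```python
def critical_path_delay(v, k=1.0, v_th=0.4):
    """Toy supply-voltage-to-delay model (illustrative only): delay
    grows as the supply approaches a threshold voltage."""
    return k / (v - v_th)

def scale_voltage(period, v=1.2, margin=0.1, step=0.01, iters=200):
    """Closed-loop voltage scaling driven by an online slack measurement:
    lower the supply while measured slack exceeds the safety margin,
    raise it when slack falls below the margin."""
    for _ in range(iters):
        slack = period - critical_path_delay(v)
        if slack > margin:
            v -= step   # slack to spare: save power
        elif slack < margin:
            v += step   # too close to timing failure: back off
    return v

v = scale_voltage(period=2.5)
print(v, 2.5 - critical_path_delay(v))  # settles with slack near the margin
```

The point of a direct online slack measurement is exactly that it makes such a loop safe: the margin tracks the circuit's actual, current timing rather than a pessimistic worst-case estimate.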

36 
Probabilistic learning by demonstration from complete and incomplete data. Korkinof, Dimitrios. January 2015.
In recent years we have observed a convergence of the fields of robotics and machine learning, initiated by technological advances bringing AI closer to the physical world. A prerequisite for successful applications, however, is to formulate reliable and precise offline algorithms requiring minimal tuning, fast and adaptive online algorithms, and effective ways of rectifying corrupt demonstrations. In this work we aim to address some of those challenges. We begin by employing two offline algorithms for the purpose of learning by demonstration (LbD): a Bayesian nonparametric approach, able to infer the optimal model size without compromising the model's descriptive power, and a quantum-statistical extension to the mixture model, able to achieve high precision for a given model size. We explore the efficacy of those algorithms in several one-shot and multi-shot LbD applications, achieving very promising results in terms of speed and accuracy. Acknowledging that more realistic robotic applications also require more adaptive algorithmic approaches, we then introduce an online learning algorithm for quantum mixtures based on the online EM. The method exhibits high stability and precision, outperforming well-established online algorithms, as demonstrated on several regression benchmark datasets and a multi-shot trajectory LbD case study. Finally, aiming to account for data corruption due to sensor failures or occlusions, we propose a model for automatically rectifying damaged sequences in an unsupervised manner. In our approach we take into account the sequential nature of the data, the redundancy manifesting itself among repetitions of the same task, and the potential for knowledge transfer across different tasks. We have devised a temporal factor model, with each factor modelling a single basic pattern in time and the factors collectively forming a dictionary of fundamental trajectories shared across sequences. We have evaluated our method on a number of real-life datasets.
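For orientation, the mixture-model fitting that underlies such LbD pipelines can be illustrated with plain EM on a two-component Gaussian mixture. This is the textbook baseline, not the Bayesian nonparametric or quantum-statistical variants developed in the thesis, and the synthetic data are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic one-dimensional "demonstrations" drawn from two modes.
data = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(3.0, 0.8, 300)])

# Standard EM for a two-component Gaussian mixture.
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibility of each component for each point
    # (the shared 1/sqrt(2*pi) factor cancels in the normalisation).
    dens = pi * np.exp(-0.5 * ((data[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and standard deviations.
    n_k = resp.sum(axis=0)
    pi = n_k / len(data)
    mu = (resp * data[:, None]).sum(axis=0) / n_k
    sigma = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / n_k)

print(np.sort(mu))  # means recovered near the true -2.0 and 3.0
```

The thesis's point of departure is that this classical EM requires the model size (here, two components) to be fixed in advance, which the Bayesian nonparametric approach infers instead.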

37 
Wireless sensor networks using network coding for structural health monitoring. Skulic, Jelena. January 2014.
Wireless Sensor Networks (WSNs) have been deployed for the purpose of structural health monitoring (SHM) of civil engineering structures, e.g. bridges. SHM applications can potentially produce a high volume of sensing data, which consumes much transmission power and thus decreases the lifetime of the battery-run networks. We employ the network coding technique to improve the network efficiency and prolong its lifetime. By increasing the transmission power, we change the node connectivity and control the number of nodes that can overhear transmitted messages, so as to realize the capacity gain offered by network coding. In Chapter 1, we present the background, to enable the reader to understand the need for SHM, the advantages and drawbacks of WSNs, and the potential of network coding techniques. In Chapter 2, we provide a review of related research, explaining how it relates to our work and why it is not fully applicable in our case. In Chapter 3, we propose to control transmission power as a means to adjust the number of nodes that can overhear a message transmission by a neighbouring node. However, excessive overhearing caused by high-power transmission aggressively consumes limited battery energy. We investigate the interplay between transmission power and network coding operations in Chapter 4. We show that our solution reduces the overall volume of data transfer, thus leading to significant energy savings and prolonged network lifetime. We present the mathematical analysis of our proposed algorithm. By simulation, we also study the trade-offs between overhearing and power consumption for the network coding scheme. In Chapter 5, we propose a methodology for the optimal placement of sensor nodes in linear network topologies (e.g., along the length of a bridge) that aims to minimise link connectivity problems and maximise the lifetime of the network.
Both simple packet relay and network coding are considered for the routing of the collected data packets towards two sink nodes positioned at the two ends of the bridge. Our mathematical analysis, verified by simulation results, shows that the proposed methodology can lead to significant energy savings and prolong the lifetime of the underlying wireless sensor network. Chapter 6 is dedicated to the delay analysis. We analytically calculate the gains in terms of packet delay obtained by the use of network coding in linear multihop wireless sensor network topologies. Moreover, we calculate the exact packet delay (from the packet generation time to the time it is delivered to the sink nodes) as a function of the location of the source sensor node within the linear network. The derived packet delay distribution formulas have been verified by simulations and can provide a benchmark for the delay performance of linear sensor networks. In Chapter 7, we propose an adaptive version of the network-coding-based algorithm. In the case of packet loss, nodes do not necessarily retransmit messages, as they are able to internally decide how to cope with the situation. The goal of this algorithm is to reduce power consumption and decrease delays whenever possible. The algorithm achieves a delay similar to that of the three-hop direct-connectivity version of the deterministic algorithm, while consuming power almost like the one-hop direct-connectivity version. In very poor channel conditions, this protocol outperforms the deterministic algorithm both in terms of delay and power consumption. In Chapter 8, we outline the directions of our future work; in particular, we are interested in the application of a combined TDMA/FDMA technique to our algorithm.
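The basic saving that network coding offers in such linear, two-sink topologies can be illustrated with the classic two-way relaying example, where one XOR-coded broadcast replaces two plain relay transmissions (the packet contents below are made up):

```python
def xor_bytes(a, b):
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two-way relaying with network coding: end nodes A and C exchange
# packets through relay B. Instead of forwarding each packet separately
# (two relay transmissions), B broadcasts a single coded packet A^C,
# and each end node decodes it using the packet it already holds --
# the basic overhearing/decoding step behind the energy savings above.
pkt_a = b"strain=0.0021"
pkt_c = b"accel=-0.4132"
coded = xor_bytes(pkt_a, pkt_c)          # one broadcast from the relay
assert xor_bytes(coded, pkt_a) == pkt_c  # A recovers C's packet
assert xor_bytes(coded, pkt_c) == pkt_a  # C recovers A's packet
print("one coded broadcast replaces two plain relay transmissions")
```

Overhearing matters because decoding requires side information: a node can only remove the packets it has already heard, which is why the transmission-power/overhearing trade-off of Chapters 3 and 4 governs the achievable coding gain.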

38 
Construction of lattices for communications and security. Yan, Yanfei. January 2014.
In this thesis, we propose a new class of lattices based on polar codes, namely polar lattices. Polar lattices enjoy explicit construction and provable goodness for the additive white Gaussian noise (AWGN) channel, i.e., they are AWGN-good lattices, in the sense that the error probability (for infinite lattice coding) vanishes for any fixed volume-to-noise ratio (VNR) greater than 2πe. Our construction is based on the multilevel approach of Forney et al., where on each level we construct a capacity-achieving polar code. We show that the component polar codes are naturally nested, thereby fulfilling the requirement of the multilevel lattice construction. We present a more precise analysis of the VNR of the resultant lattice, which is upper-bounded in terms of the flatness factor and the capacity losses of the component codes. The proposed polar lattices are efficiently decodable using multistage decoding. Design examples are presented to demonstrate the superior performance of polar lattices. Infinite lattice coding is, however, not possible in practical applications: applying a power constraint to polar lattices yields polar lattice codes. We prove that polar lattice codes can achieve the capacity (1/2)log(1+SNR) of the power-constrained AWGN channel with a novel shaping scheme. The main idea is that by implementing the lattice Gaussian distribution over the AWGN-good polar lattices, the maximum error-free transmission rate of the resultant coding scheme can be made arbitrarily close to the capacity (1/2)log(1+SNR). The shaping technique is based on the discrete lattice Gaussian distribution, which leads to a binary asymmetric channel at each level of the multilevel lattice codes. It is then straightforward to employ multilevel asymmetric polar codes, which combine polar lossless source coding and polar channel coding.
The construction of polar codes for an asymmetric channel can be converted to that for a related symmetric channel, and it turns out that this symmetric channel is equivalent, in terms of polarization, to a minimum mean-square error (MMSE) scaled Λ/Λ' channel in lattice coding, which eventually simplifies our code design. Finally, we investigate the application of polar lattices in physical-layer security. Polar lattice codes are proved to be able to achieve the strong secrecy capacity of the mod-Λ AWGN wiretap channel. The mod-Λ assumption was due to the fact that a practical shaping scheme aiming to achieve the optimum shaping gain was missing. In this thesis, we use our shaping scheme and extend polar lattice coding to the Gaussian wiretap channel. By employing the polar coding technique for asymmetric channels, we manage to construct an AWGN-good lattice and a secrecy-good lattice with optimal shaping simultaneously. We then prove that the resultant wiretap coding scheme can achieve the strong secrecy capacity of the Gaussian wiretap channel.
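The channel polarization on which the component codes rely is easiest to see for the binary erasure channel (BEC), where the Bhattacharyya parameter equals the erasure probability and the polar transform has a closed form. The sketch below is a generic illustration of polarization, not the thesis's construction for the Λ/Λ' channels:

```python
def polarize_bec(eps, levels):
    """Erasure probabilities of the bit-channels synthesized from a
    BEC(eps) by the polar transform: a channel with erasure probability
    z splits into a worse channel (2z - z^2) and a better one (z^2)."""
    zs = [eps]
    for _ in range(levels):
        zs = [w for z in zs for w in (2 * z - z * z, z * z)]
    return zs

zs = polarize_bec(eps=0.5, levels=14)        # 16384 synthesized channels
good = sum(z < 0.01 for z in zs) / len(zs)   # nearly noiseless channels
bad = sum(z > 0.99 for z in zs) / len(zs)    # nearly useless channels
print(good, bad)  # both fractions tend to the BEC capacity 1 - eps = 0.5
```

A capacity-achieving polar code sends information only on the nearly noiseless channels and freezes the rest; the multilevel lattice construction repeats this selection on each level of the Λ/Λ' chain.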

39 
Multivariate time-frequency analysis. Ahrabian, Alireza. January 2014.
Recent advances in time-frequency theory have led to the development of high-resolution time-frequency algorithms, such as the empirical mode decomposition (EMD) and the synchrosqueezing transform (SST). These algorithms provide enhanced localization in representing time-varying oscillatory components over conventional linear and quadratic time-frequency algorithms. However, with the emergence of low-cost multichannel sensor technology, multivariate extensions of time-frequency algorithms are needed in order to exploit the inter-channel dependencies that may arise in multivariate data. Applications of this framework range from filtering to the analysis of oscillatory components. To this end, this thesis first introduces a multivariate extension of the synchrosqueezing transform, so as to identify a set of oscillations common to the channels of multivariate data. Furthermore, a new framework for multivariate time-frequency representations is developed using the proposed multivariate extension of the SST. The performance of the proposed algorithms is demonstrated on a wide variety of both simulated and real-world datasets, such as phase synchrony spectrograms and multivariate signal denoising. Finally, multivariate extensions of the EMD are developed that capture the inter-channel dependencies in multivariate data. This is achieved by processing such data directly in the higher-dimensional spaces where they reside, and by accounting for the power imbalance across multivariate data channels recorded from real-world sensors, thereby preserving the multivariate structure of the data. This optimises the performance of such data-driven algorithms when processing multivariate data with power imbalances and inter-channel correlations, as demonstrated on the real-world example of Doppler radar processing.
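The notion of instantaneous frequency that synchrosqueezing sharpens can be illustrated on a single-channel tone via the analytic signal, computed here with the FFT. This is a standard construction, not the thesis's multivariate SST, and the signal parameters are invented:

```python
import numpy as np

def analytic_signal(x):
    """Discrete analytic signal via the FFT (equivalent to adding a
    Hilbert-transform imaginary part): zero the negative frequencies
    and double the positive ones."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0   # Nyquist bin kept once for even lengths
    return np.fft.ifft(spec * h)

fs = 1000.0                        # sampling rate (Hz)
t = np.arange(2000) / fs
x = np.cos(2 * np.pi * 50.0 * t)   # single 50 Hz oscillatory component
phase = np.unwrap(np.angle(analytic_signal(x)))
inst_freq = np.diff(phase) * fs / (2 * np.pi)
print(inst_freq[100:-100].mean())  # close to 50 Hz away from the edges
```

The SST reassigns time-frequency energy according to exactly this kind of local frequency estimate; the multivariate extension in the thesis must additionally reconcile the estimates across channels to find oscillations common to all of them.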

40 
Dynamic service placement in mobile micro-clouds. Wang, Shiqiang. January 2015.
Cloud computing is an important enabling technique for running complicated applications on resource-limited handheld devices, personal computers, or small enterprise servers, by offloading part of the computation and storage to the cloud. However, traditional centralized cloud architectures are incapable of coping with many emerging applications that are delay-sensitive and require large amounts of data exchange between the front-end and back-end components of the application. To tackle these issues, the concept of the mobile micro-cloud (MMC) has recently emerged. An MMC is typically connected directly to a network component, such as a wireless base station, at the edge of the network and provides services to a small group of users. In this way, the communication distances between users and the cloud(s) hosting their services are reduced, and thus users can have more instantaneous access to cloud services. Several new challenges arise in the MMC context, mainly caused by the limited coverage area of base stations and the dynamic nature of mobile users, network background traffic, etc. Among these challenges, one important problem is where (on which cloud) to place the services (or, equivalently, where to execute the service applications) to cope with user demands and network dynamics. We focus on this problem in this thesis, and consider both the initial placement and the subsequent migration of services, where migration may occur when the user location or network conditions change. The problem is investigated from a theoretical angle with practical considerations. We first abstract the service application and the physical cloud system as graphs, and propose online approximation algorithms for finding the placement of an incoming stream of application graphs onto the physical graph. Then, we consider the dynamic service migration problem, which we model as a Markov decision process (MDP). The state space of the MDP is large, making it difficult to solve in real-time.
Therefore, we propose simplified solution approaches as well as approximation methods to make the problem tractable. Afterwards, we consider more general non-Markovian scenarios, assuming that we can predict future costs with a known accuracy. We propose a method of dynamically placing each service instance upon its arrival and a way of finding the optimal look-ahead window size for cost prediction. The results are verified using simulations driven by both synthetic and real-world data traces. Finally, a framework for emulating MMCs in a more practical setting is proposed. In our view, the proposed solutions can enrich the fundamental understanding of the service placement problem and pave the way for the practical deployment of MMCs. Furthermore, the various solution approaches proposed in this thesis can be applied or generalized to solve a larger set of problems beyond the context of MMCs.
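The MDP formulation of service migration can be made concrete with a toy one-dimensional example: the user performs a random walk over a line of edge clouds, and at each step we choose where the service should sit, trading a one-off migration cost against the recurring user-to-service distance cost. All numbers below are invented for illustration, and value iteration is used only because this toy state space is tiny (the thesis's point is precisely that realistic state spaces are not):

```python
import itertools

L = 5            # number of cells (edge clouds) the user can occupy
GAMMA = 0.9      # discount factor
C_MIG = 2.0      # one-off cost of migrating the service
C_DIST = 1.0     # per-step cost per hop between user and service

def step_cost(u, s, a):
    """Cost of moving the service from cell s to cell a with user at u."""
    return C_MIG * (a != s) + C_DIST * abs(u - a)

def user_moves(u):
    """User performs a reflecting random walk over the cells."""
    return [(max(u - 1, 0), 0.5), (min(u + 1, L - 1), 0.5)]

# Value iteration over the joint (user position, service position) state.
V = {(u, s): 0.0 for u, s in itertools.product(range(L), range(L))}
for _ in range(300):
    V = {(u, s): min(step_cost(u, s, a)
                     + GAMMA * sum(p * V[(v, a)] for v, p in user_moves(u))
                     for a in range(L))
         for u, s in V}

policy = {(u, s): min(range(L), key=lambda a: step_cost(u, s, a)
                      + GAMMA * sum(p * V[(v, a)] for v, p in user_moves(u)))
          for u, s in V}
print(policy[(0, 4)], policy[(2, 2)])  # migrate toward a distant user; stay put otherwise
```

Even this toy joint state space has L² states; with realistic numbers of users, clouds and network conditions the state space explodes, which motivates the simplified and approximate solution approaches the thesis develops.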
