About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Control Design for Long Endurance Unmanned Underwater Vehicle Systems

Kleiber, Justin Tanner 24 May 2022 (has links)
In this thesis we demonstrate a technique for robust controller design for an autonomous underwater vehicle (AUV) that explicitly handles the trade-off between reference tracking, agility, and energy-efficient performance. AUVs have many sources of modeling uncertainty, which translate into uncertainty in maneuvering performance. A robust control design process is proposed to handle these uncertainties while meeting control system performance objectives. We investigate the relationships between linear system design parameters and the control performance of our vehicle in order to inform an H∞ controller synthesis problem with the objective of balancing these trade-offs. We evaluate the controller on its reference tracking performance, agility, and energy efficiency, and show the efficacy of our control design strategy. / Master of Science / In this thesis we demonstrate a technique for autopilot design for an autonomous underwater vehicle (AUV) that explicitly handles the trade-off between three performance metrics. Mathematical models of AUVs are often unable to fully describe their many physical properties. The discrepancies between the mathematical model and reality affect how certain we can be about an AUV's behavior. Robust controllers are a class of controllers designed to handle uncertainty. A robust control design process is proposed to handle these uncertainties while meeting vehicle performance objectives. We investigate the relationships between design parameters and the performance of our vehicle, and use this relationship to inform the design of a controller. We evaluate this controller on its energy efficiency, agility, and ability to stay on course, and thus show the effectiveness of our control design strategy.
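As a rough illustration of the kind of synthesis problem described above, the sketch below sets up a mixed-sensitivity H∞ design with the python-control package, whose mixsyn routine (which requires the slycot backend) is assumed available. The second-order plant and the weighting filters are hypothetical stand-ins, not the thesis's identified AUV model: W1 shapes reference tracking, W2 penalizes actuator effort (a proxy for energy use), and the achieved norm gamma summarizes how tightly the weighted trade-off is met.

```python
import numpy as np
import control

# Hypothetical linearized pitch-axis plant (fin deflection -> pitch angle);
# NOT the thesis's identified AUV model.
G = control.tf([2.0], [1.0, 1.4, 1.0])

# W1 penalizes tracking error at low frequency (reference tracking);
# W2 penalizes actuator effort at high frequency (energy use / agility proxy).
W1 = control.tf([0.5, 1.0], [1.0, 0.01])
W2 = control.tf([1.0, 0.1], [0.01, 1.0])

# Mixed-sensitivity synthesis; info[0] is the achieved H-infinity norm gamma.
K, CL, info = control.mixsyn(G, W1, W2, None)
print(f"achieved gamma = {info[0]:.3f}")

# Inspect the resulting closed loop: raising the low-frequency gain of W1
# tightens tracking at the cost of control effort.
Tcl = control.feedback(G * K, 1)
t, y = control.step_response(Tcl, np.linspace(0, 20, 500))
print(f"step response settles near {y[-1]:.3f}")
```

Sweeping the weight parameters and re-running the synthesis is one way to trace out the tracking/agility/energy trade-off surface that the thesis studies systematically.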
42

Essays on Attention Allocation and Factor Models

Scanlan, Susannah January 2024 (has links)
In the first chapter of this dissertation, I explore how forecaster attention, or the degree to which new information is incorporated into forecasts, is reflected in the lower-dimensional factor representation of multivariate forecast data. When information is costly to acquire, forecasters may pay more attention to some sources of information and ignore others. How much attention they pay determines the strength of the forecast correlation (factor) structure. Using a factor model representation, I show that a forecast made by a rationally inattentive agent will include an extra shrinkage and thresholding "attention matrix" relative to a full-information benchmark, and propose an econometric procedure to estimate it. Differences in the degree of forecaster attentiveness can explain observed differences in empirical shrinkage in professional macroeconomic forecasts relative to a consensus benchmark. Forecasters share the same reduced-form model, but differ in their measured attention. Better-performing forecasters have higher measured attention (lower shrinkage) than their poorly-performing peers. Measured forecaster attention to multiple dimensions of the information space can largely be captured by a single scalar cost parameter.

In the second chapter, I propose a new class of information cost functions for the classic multivariate linear-quadratic Gaussian tracking problem, called separable spectral cost functions. The measure of attention proposed in the first chapter, and the mapping from the theoretical model of attention allocation to the factor structure, are valid for this set of cost functions. These functions are defined over the eigenvalues of the prior and posterior variance matrices. Separable spectral cost functions both nest known cost functions and are consistent with the definition of Uniformly Posterior Separable cost functions, which have desirable theoretical properties.

The third chapter is coauthored work with Professor Serena Ng. We estimate higher-frequency values of monthly macroeconomic data using different factor-based imputation methods. Monthly and weekly economic indicators are often taken to be the largest common factor estimated from high- and low-frequency data, either separately or jointly. To incorporate mixed-frequency information without directly modeling it, we target a low-frequency diffusion index that is already available, and treat high-frequency values as missing. We impute these values using multiple factors estimated from the high-frequency data. In the empirical examples considered, static matrix completion that does not account for serial correlation in the idiosyncratic errors yields imprecise estimates of the missing values, irrespective of how the factors are estimated. Single-equation and systems-based dynamic procedures that account for serial correlation yield imputed values that are closer to the observed low-frequency ones. This is the case in the counterfactual exercise that imputes the monthly values of the consumer sentiment series before 1978, when the data were released only quarterly. It is also the case for a weekly version of the CFNAI index of economic activity that is imputed using seasonally unadjusted data. The imputed series reveals episodes of increased variability in weekly economic information that are masked by the monthly data, notably around the 2014-15 collapse in oil prices.
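As a hedged illustration of the static matrix-completion baseline discussed in the third chapter, the sketch below imputes missing panel entries by iterating between filling missing cells and re-estimating a low-rank factor structure with a truncated SVD. The data, rank, and missingness pattern are synthetic, and the chapter's dynamic, serial-correlation-aware procedures are not reproduced here.

```python
# EM-style iterative PCA imputation of a factor panel with missing entries.
import numpy as np

rng = np.random.default_rng(0)
T_, N, r = 200, 30, 2
F = rng.standard_normal((T_, r))            # latent factors
L_ = rng.standard_normal((N, r))            # loadings
X = F @ L_.T + 0.3 * rng.standard_normal((T_, N))

mask = rng.random(X.shape) < 0.2            # 20% of entries treated as missing
X_obs = np.where(mask, np.nan, X)

# Initialize missing cells at column means, then alternate:
# (1) truncated-SVD fit of the common component, (2) refill missing cells.
X_hat = np.where(mask, np.nanmean(X_obs, axis=0), X_obs)
for _ in range(50):
    X_c = X_hat - X_hat.mean(axis=0)
    U, s, Vt = np.linalg.svd(X_c, full_matrices=False)
    common = U[:, :r] * s[:r] @ Vt[:r] + X_hat.mean(axis=0)
    X_hat = np.where(mask, common, X_obs)   # overwrite only the missing cells

rmse = np.sqrt(np.mean((X_hat[mask] - X[mask]) ** 2))
print(f"imputation RMSE on held-out cells: {rmse:.3f}")
```

Because each refill ignores serial correlation in the idiosyncratic errors, this static scheme illustrates exactly the imprecision the chapter documents relative to dynamic procedures.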
43

Terrain Aided Navigation for Autonomous Underwater Vehicles with Local Gaussian Processes

Chowdhary, Abhilash 28 June 2017 (has links)
Navigation of autonomous underwater vehicles (AUVs) in the subsea environment is particularly challenging because GPS is unavailable, owing to the rapid attenuation of electromagnetic waves in water. As a result, the AUV requires alternative methods for position estimation. This thesis describes a terrain-aided navigation approach for an AUV where, with the help of a prior depth map, the AUV localizes itself using altitude measurements from a multibeam DVL. The AUV simultaneously builds a probabilistic depth map of the seafloor as it moves to unmapped locations. The main contribution of this thesis is a new, scalable, and on-line terrain-aided navigation solution for AUVs which does not require the assistance of a support surface vessel. Simulation results on synthetic data and experimental results from AUV field trials in Panama City, Florida are also presented. / Master of Science
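A minimal sketch of the measurement side of terrain-aided navigation follows, assuming a simple gridded prior bathymetry in place of the thesis's local Gaussian process map: particles carrying position hypotheses are weighted by how well the map-predicted DVL altitude matches the measured one, then resampled. All quantities below are synthetic.

```python
# Particle-filter terrain-aided localization against a prior depth map.
import numpy as np

rng = np.random.default_rng(1)

def seafloor_depth(x, y):
    # Synthetic prior bathymetry, standing in for the surveyed depth map.
    return 50.0 + 5.0 * np.sin(0.1 * x) + 3.0 * np.cos(0.07 * y)

n_p = 500
true_pos = np.array([30.0, 40.0])
particles = true_pos + rng.normal(0.0, 5.0, size=(n_p, 2))  # rough prior
weights = np.full(n_p, 1.0 / n_p)
vehicle_depth = 10.0           # assumed known from a pressure sensor
meas_std = 0.5                 # assumed DVL altitude noise

for step in range(30):
    u = np.array([1.0, 0.5])                              # commanded displacement
    true_pos = true_pos + u
    particles += u + rng.normal(0, 0.3, particles.shape)  # motion + process noise

    # DVL altitude = seafloor depth minus vehicle depth, plus noise.
    z = seafloor_depth(*true_pos) - vehicle_depth + rng.normal(0, meas_std)
    pred = seafloor_depth(particles[:, 0], particles[:, 1]) - vehicle_depth
    weights *= np.exp(-0.5 * ((z - pred) / meas_std) ** 2)
    weights /= weights.sum()

    # Multinomial resampling when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n_p / 2:
        idx = rng.choice(n_p, size=n_p, p=weights)
        particles, weights = particles[idx], np.full(n_p, 1.0 / n_p)

print(f"true position {true_pos}, estimate {weights @ particles}")
```

The thesis additionally extends the map on-line as the vehicle enters unmapped terrain; that mapping step is omitted here.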
44

Sparse Gaussian process approximations and applications

van der Wilk, Mark January 2019 (has links)
Many tasks in machine learning require learning some kind of input-output relation (function), for example, recognising handwritten digits (from image to number) or learning the motion behaviour of a dynamical system like a pendulum (from positions and velocities now to future positions and velocities). We consider this problem using the Bayesian framework, where we use probability distributions to represent the state of uncertainty that a learning agent is in. In particular, we investigate methods which use Gaussian processes to represent distributions over functions. Gaussian process models require approximations in order to be practically useful. This thesis focuses on understanding existing approximations and investigating new ones tailored to specific applications. We advance the understanding of existing techniques first through a thorough review. We propose desiderata for non-parametric basis function model approximations, which we use to assess the existing approximations. Following this, we perform an in-depth empirical investigation of two popular approximations (VFE and FITC). Based on the insights gained, we propose a new inter-domain Gaussian process approximation, which can be used to increase the sparsity of the approximation in comparison to regular inducing point approximations. This allows GP models to be stored and communicated more compactly. Next, we show that inter-domain approximations can also allow the use of models which would otherwise be impractical, as opposed to merely improving existing approximations. We introduce an inter-domain approximation for the Convolutional Gaussian process, a model that makes Gaussian processes suitable for image inputs and has strong relations to convolutional neural networks. This same technique is valuable for approximating Gaussian processes with more general invariance properties. Finally, we revisit the derivation of the Gaussian process State Space Model and discuss some subtleties relating to its approximation. We hope that this thesis illustrates some benefits of non-parametric models and their approximation in a non-parametric fashion, and that it provides models and approximations that prove useful for the development of more complex and performant models in the future.
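For readers unfamiliar with inducing-point sparsity, the sketch below computes the standard subset-of-regressors/DTC predictive mean, in which m inducing points summarize n >> m training points; the VFE approximation discussed above shares this predictive mean. This is the textbook construction, not the thesis's new inter-domain scheme.

```python
# Inducing-point (SoR/DTC) sparse GP predictive mean on synthetic 1-D data.
import numpy as np

def rbf(A, B, ell=1.0, var=1.0):
    # Squared-exponential kernel between row-wise point sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(2)
n, m, sigma2 = 500, 20, 0.05
X = rng.uniform(-3, 3, (n, 1))
y = np.sin(2 * X[:, 0]) + np.sqrt(sigma2) * rng.standard_normal(n)
Z = np.linspace(-3, 3, m)[:, None]          # inducing inputs

Kuu = rbf(Z, Z) + 1e-8 * np.eye(m)
Kuf = rbf(Z, X)
Xs = np.linspace(-3, 3, 200)[:, None]
Ksu = rbf(Xs, Z)

# SoR/DTC predictive mean: K*u (sigma^2 Kuu + Kuf Kfu)^-1 Kuf y,
# costing O(n m^2) instead of the O(n^3) of exact GP regression.
A = sigma2 * Kuu + Kuf @ Kuf.T
mu = Ksu @ np.linalg.solve(A, Kuf @ y)
print(f"max abs error vs truth: {np.abs(mu - np.sin(2 * Xs[:, 0])).max():.3f}")
```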
45

Airborne mapping using LIDAR / Luftburen kartering med LIDAR

Almqvist, Erik January 2010 (has links)
Mapping is a central and common task in robotics research. Building an accurate map without human assistance has several applications, such as space missions, search and rescue, and surveillance, and can be used in dangerous areas. One application of robotic mapping is to measure changes in terrain volume. In Sweden there are over a hundred landfills that are regulated by laws that say the growth of the landfill has to be measured at least once a year. In this thesis, a preliminary study of methods for measuring terrain volume by the use of an Unmanned Aerial Vehicle (UAV) and a Light Detection And Ranging (LIDAR) sensor is carried out. Different techniques are tested, including data merging strategies and regression techniques using Gaussian Processes. In the absence of real flight scenario data, an industrial robot has been used for data acquisition. The experiment was successful in measuring the volume difference between scenarios in relation to the resolution of the LIDAR. However, for more accurate volume measurements and better evaluation of the algorithms, a better LIDAR is needed.
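A hedged sketch of the regression idea follows: fit a Gaussian process to scattered LIDAR ground points with scikit-learn, predict elevation on a regular grid, and integrate the difference between two survey epochs to estimate volume growth. The surface, noise levels, and kernel settings are invented for illustration.

```python
# GP terrain regression and volume-change estimate from simulated LIDAR hits.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

def surface(pts, bump):
    # Synthetic landfill mound; 'bump' grows between survey epochs.
    return bump * np.exp(-((pts[:, 0] - 5) ** 2 + (pts[:, 1] - 5) ** 2) / 8.0)

pts = rng.uniform(0, 10, (400, 2))                    # scattered LIDAR hits
kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=0.01)

grid = np.stack(np.meshgrid(np.linspace(0, 10, 50),
                            np.linspace(0, 10, 50)), -1).reshape(-1, 2)
cell_area = (10 / 49) ** 2

volumes = []
for bump in (2.0, 2.5):                               # two survey epochs
    z = surface(pts, bump) + 0.1 * rng.standard_normal(len(pts))
    gp = GaussianProcessRegressor(kernel=kernel).fit(pts, z)
    volumes.append(gp.predict(grid).sum() * cell_area)

print(f"estimated volume change: {volumes[1] - volumes[0]:.2f} m^3")
```

The GP's predictive variance (not shown) also indicates where additional flight lines would most reduce volume uncertainty, which matters at the coarse LIDAR resolutions the thesis reports.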
47

Utilisation de simulateurs multi-fidélité pour les études d'incertitudes dans les codes de calcul / Assessment of uncertainty in computer experiments when working with multifidelity simulators.

Zertuche, Federico 08 October 2015 (has links)
Computer simulations are a very important tool used by applied mathematicians and engineers to model the behavior of a system. They have become increasingly precise but also more complicated, so much so that they are very slow to produce an output and thus difficult to sample, and many aspects of these simulations are therefore not well understood. For example, in many cases they depend on parameters whose value is unknown. A metamodel is a reconstruction of the simulation. It requires much less time to produce an output that is close to what the simulation would produce. By using it, some aspects of the original simulation can be studied. It is built from very few samples, and its purpose is to replace the simulation. This thesis is concerned with the construction of a metamodel in a particular context called multi-fidelity. In multi-fidelity, the metamodel is constructed using data from the target simulation along with other, related samples. These approximate samples can come from a degraded version of the simulation, from an old version that has been studied extensively, or from another simulation in which part of the description is simplified. By learning the difference between the samples it is possible to incorporate the information in the approximate data, and this may lead to an enhanced metamodel.
In this manuscript two approaches that do this are studied: one based on Gaussian process modeling and another based on a coarse-to-fine wavelet decomposition. The first method shows how, by estimating the relationship between two data sets, it is possible to incorporate data that would otherwise be useless. In the second method, an adaptive procedure for systematically adding data to enhance the metamodel is proposed. The object of this work is to improve our understanding of how to incorporate approximate data to enhance a metamodel. Working with a multi-fidelity metamodel helps us to understand in detail the data that nourish it. At the end, a global picture of the elements that compose it emerges: the relationships and the differences between all the data sets become clearer.
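A minimal sketch of the first approach, under Kennedy-O'Hagan-style assumptions, follows: a Gaussian process fit to many cheap low-fidelity runs is corrected by a second process fit to the residuals at a few expensive high-fidelity runs. The simulators, the fixed scale factor rho, and the kernels are illustrative assumptions, not the thesis's estimators.

```python
# Two-fidelity GP: low-fidelity surrogate plus a GP on the discrepancy.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_low(x):            # cheap, biased simulator (assumed)
    return 0.5 * np.sin(8 * x) + 0.2 * x

def f_high(x):           # expensive target simulator (assumed)
    return 0.7 * np.sin(8 * x) + 0.3 * x + 0.1

rng = np.random.default_rng(4)
X_lo = np.linspace(0, 1, 40)[:, None]     # many low-fidelity samples
X_hi = rng.uniform(0, 1, (6, 1))          # few high-fidelity samples

gp_lo = GaussianProcessRegressor(kernel=RBF(0.1)).fit(X_lo, f_low(X_lo[:, 0]))

# Model the discrepancy delta(x) = f_high(x) - rho * f_low(x) at hi-fi inputs.
rho = 1.0                                  # scale factor; could be estimated
resid = f_high(X_hi[:, 0]) - rho * gp_lo.predict(X_hi)
gp_delta = GaussianProcessRegressor(kernel=RBF(0.2)).fit(X_hi, resid)

Xs = np.linspace(0, 1, 200)[:, None]
pred = rho * gp_lo.predict(Xs) + gp_delta.predict(Xs)
print(f"max abs error vs target: {np.abs(pred - f_high(Xs[:, 0])).max():.3f}")
```

Six high-fidelity runs alone could not resolve the oscillation; the correction structure is what lets the cheap data carry most of the burden.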
48

On testing for the Cox model using resampling methods

Fang, Jing, 方婧 January 2007 (has links)
Statistics and Actuarial Science / Master of Philosophy
49

Planning and exploring under uncertainty

Murphy, Elizabeth M. January 2010 (has links)
Scalable autonomy requires a robot to be able to recognize and contend with the uncertainty in its knowledge of the world stemming from its noisy sensors and actuators. The regions it chooses to explore, and the paths it takes to get there, must take this uncertainty into account. In this thesis we outline probabilistic approaches to represent that world; to construct plans over it; and to determine which part of it to explore next.

We present a new technique to create probabilistic cost maps from overhead imagery, taking into account the uncertainty in terrain classification and allowing for spatial variation in terrain cost. A probabilistic cost function combines the output of a multi-class classifier and a spatial probabilistic regressor to produce a probability density function over terrain for each grid cell in the map. The resultant cost map facilitates the discovery of not only the shortest path between points on the map, but also a distribution of likely paths between the points.

These cost maps are used in a path planning technique which allows the user to trade off the risk of returning a suboptimal path for substantial increases in search speed. We precompute a probability distribution which precisely approximates the true distance between any grid cell in the map and the goal cell. This distribution underpins a number of A* search heuristics we present, which can characterize and bound the risk we are prepared to take in gaining search efficiency while sacrificing optimal path length. Empirically, we report efficiency increases in excess of 70% over standard heuristic search methods.

Finally, we present a global approach to the problem of robotic exploration, utilizing a hybrid of a topological data structure and an underlying metric mapping process. A 'Gap Navigation Tree' is used to motivate global target selection, and occluded regions of the environment ('gaps') are tracked probabilistically using the metric map. In pursuing these gaps we are provided with goals to feed to the path planning process en route to a complete exploration of the environment. The combination of these three techniques represents a framework to facilitate robust exploration in a priori unknown environments.
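The sketch below illustrates one idea from the cost-map work in a simplified form: drawing per-cell terrain costs from their distributions and running a deterministic shortest-path search on each draw yields a distribution over path costs (and paths) rather than a single answer. The grid, the Gaussian cell-cost model, and plain Dijkstra search are stand-ins for the probabilistic cost function and risk-bounded A* heuristics described above.

```python
# Monte Carlo distribution of shortest-path costs over sampled cost maps.
import heapq
import numpy as np

rng = np.random.default_rng(5)
H, W = 20, 20
mean_cost = rng.uniform(1.0, 3.0, (H, W))   # per-cell mean traversal cost
std_cost = 0.5 * np.ones((H, W))            # per-cell cost uncertainty

def dijkstra(cost, start=(0, 0), goal=(H - 1, W - 1)):
    dist, parent = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), np.inf):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W:
                nd = d + cost[nr, nc]       # cost of entering the neighbor
                if nd < dist.get((nr, nc), np.inf):
                    dist[(nr, nc)] = nd
                    parent[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist[goal]

lengths = []
for _ in range(100):
    sample = np.maximum(mean_cost + std_cost * rng.standard_normal((H, W)), 0.1)
    lengths.append(dijkstra(sample))
print(f"path cost: mean {np.mean(lengths):.1f}, std {np.std(lengths):.1f}")
```

The spread of sampled costs is exactly the quantity the thesis's precomputed distance distribution summarizes, allowing its heuristics to bound the risk of a suboptimal path without the Monte Carlo expense.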
50

Gaussian processes for temporal and spatial pattern analysis in the MISR satellite land-surface data

Cuthbertson, Adrian John 31 July 2014 (has links)
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 30th May 2014. / The Multi-Angle Imaging SpectroRadiometer (MISR) is an Earth observation instrument operated by NASA on its Terra satellite. The instrument is unique in imaging the Earth's surface from nine cameras at different angles. An extended system, MISR-HR, has been developed by the Joint Research Centre of the European Commission (JRC) and NASA, which derives many values describing the interaction between solar energy, the atmosphere, and different surface characteristics. It also generates estimates of data at the native resolution of the instrument for the 24 of the 36 camera bands for which on-board averaging takes place prior to downloading of the data. MISR-HR data potentially yield high-value information in agriculture, forestry, environmental studies, land management, and other fields. The MISR-HR system and the data for the African continent have also been provided by NASA and the JRC to the South African National Space Agency (SANSA). Generally, satellite remote sensing of the Earth's surface is characterised by irregularity in the time series of data due to atmospheric, environmental, and other effects. Time-series methods, in particular for vegetation phenology applications, exist for estimating missing data values, filling gaps, and discerning periodic structure in the data. Recent evaluations of the methods established a sound set of requirements that such methods should satisfy. Existing methods mostly meet the requirements, but the choice of method largely depends on the analysis goals and on the nature of the underlying processes. An alternative method for time-series analysis exists in Gaussian Processes, a long-established statistical method, but not previously a common choice for satellite remote-sensing time series. This dissertation asserts that Gaussian Process regression can also meet the aforementioned set of time-series requirements, and further provides the benefits of a consistent framework rooted in Bayesian statistical methods. To assess this assertion, a case study has been conducted on data provided by SANSA for the Kruger National Park in South Africa. The requirements have been posed as research questions and answered in the affirmative by analysing twelve years of historical data for seven sites, differing in vegetation type, in and bordering the Park. A further contribution is that the data study was conducted using Gaussian Process software developed specifically for this project in the modern open language Julia. This software will be released in due course as open source.
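As a small illustration of why Gaussian process regression suits such irregular series, the sketch below fits the sum of a periodic kernel (seasonal cycle), an RBF kernel (slow drift), and a white-noise kernel to an irregularly sampled synthetic series with scikit-learn, then predicts on a regular grid with uncertainty. The dissertation's Julia software and the MISR-HR data are not used here; all values are invented.

```python
# GP gap-filling of an irregular seasonal time series with uncertainty bands.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0, 6, 120))[:, None]          # irregular sampling, years
y = (0.3 * np.sin(2 * np.pi * t[:, 0])                # seasonal vegetation cycle
     + 0.05 * t[:, 0]                                 # slow trend
     + 0.05 * rng.standard_normal(len(t)))            # observation noise

kernel = (ExpSineSquared(length_scale=1.0, periodicity=1.0)   # annual cycle
          + RBF(length_scale=3.0)                             # slow drift
          + WhiteKernel(noise_level=0.01))                    # noise floor
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)

# Predict on a regular grid to fill observation gaps, with predictive std.
ts = np.linspace(0, 6, 300)[:, None]
mu, sd = gp.predict(ts, return_std=True)
print(f"mean predictive std across the grid: {sd.mean():.3f}")
```

The predictive standard deviation widens inside data gaps, which is the property that lets a single Bayesian model satisfy the gap-filling and uncertainty-reporting requirements discussed above.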
