101

Deterministic annealing EM algorithm for robust learning of Gaussian mixture models

Wang, Bo Yu January 2011 (has links)
University of Macau / Faculty of Science and Technology / Department of Electrical and Electronics Engineering
102

Large Scale Terrain Modelling for Autonomous Mining

Norberg, Johan January 2010 (has links)
This thesis is concerned with the development of a terrain model using Gaussian Processes to support the automation of open-pit mines. Information can be provided from a variety of sources including GPS, laser scans and manual surveys. The information is then fused together into a single representation of the terrain together with a measure of uncertainty of the estimated model. The model is also used to detect and label specific features in the terrain. In the context of mining, these features are edges known as toes and crests. A combination of clustering and classification using supervised learning detects and labels these regions. Data gathered from production iron ore mines in Western Australia and a farm in Marulan outside Sydney are used to demonstrate and verify the ability of Gaussian Processes to estimate a model of the terrain. The estimated terrain model is then used for detecting features of interest. Results show that the Gaussian Process correctly estimates the terrain and uncertainties, and provides a good representation of the area. Toes and crests are also successfully identified and labelled.
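As a minimal sketch of the underlying idea (not the thesis implementation; the kernel hyperparameters and data below are hypothetical), Gaussian process regression over (easting, northing) coordinates fuses scattered elevation samples into a posterior mean surface together with a per-point uncertainty:

import numpy as np

def rbf_kernel(A, B, length_scale=10.0, signal_var=4.0):
    """Squared-exponential covariance between two sets of 2-D (x, y) locations."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / length_scale**2)

def gp_terrain(X_train, z_train, X_query, noise_var=0.05):
    """Posterior mean and standard deviation of elevation at the query locations."""
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    Ks = rbf_kernel(X_query, X_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, z_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(rbf_kernel(X_query, X_query)) - (v * v).sum(axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

# Toy data standing in for fused GPS / laser / survey points of a sloping bench.
rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(50, 2))            # easting / northing in metres
z = 0.1 * X[:, 0] + rng.normal(0, 0.2, 50)       # noisy elevations
grid = np.stack(np.meshgrid(np.linspace(0, 100, 5),
                            np.linspace(0, 100, 5)), -1).reshape(-1, 2)
mu, sd = gp_terrain(X, z, grid)                  # estimated surface and uncertainty

The large posterior standard deviation away from the sampled points is what flags poorly surveyed regions; feature labelling (toes and crests) then operates on the estimated surface.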
103

Low-Density Parity-Check Codes with Erasures and Puncturing

Ha, Jeongseok Ha 01 December 2003 (has links)
In this thesis, we extend applications of Low-Density Parity-Check (LDPC) codes to a combination of constituent sub-channels, which is a mixture of Gaussian channels with erasures. This model, for example, represents a common channel in magnetic recording, where thermal asperities in the system are detected and represented at the decoder as erasures. Although this channel is practically useful, we cannot find any previous work that evaluates the performance of LDPC codes over this channel. We are also interested in practical issues such as designing robust LDPC codes for the mixture channel and predicting performance variations due to erasure patterns (random and burst) and finite block lengths. On time-varying channels, a common error control strategy is to adapt the coding rate according to available channel state information (CSI). An effective way to realize this coding strategy is to use a single code and puncture it in a rate-compatible fashion, a so-called rate-compatible punctured code (RCPC). We are interested in the existence of good puncturing patterns for rate changes that minimize performance loss. We show the existence of good puncturing patterns with analysis and verify the results with simulations. Universality of a channel code across a broad range of coding rates is a theoretically interesting topic. We are interested in the possibility of using the puncturing technique proposed in this thesis for designing universal LDPC codes. We also consider how to design high-rate LDPC codes by puncturing low-rate LDPC codes. The new design method can take advantage of the longer effective block lengths, sparser parity-check matrices, and larger minimum distances of low-rate LDPC codes.
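As an illustrative sketch of how such a mixture channel and puncturing are usually presented to an iterative decoder (the SNR, erasure rate and puncturing pattern below are hypothetical, not taken from the thesis), erased or punctured bits simply carry a zero log-likelihood ratio, while surviving bits carry the Gaussian-channel LLR 2y/sigma^2 for BPSK:

import numpy as np

def channel_llrs(codeword, snr_db, erasure_prob=0.1, puncture_mask=None, rng=None):
    """BPSK + AWGN with random erasures; punctured positions get zero LLR."""
    rng = rng or np.random.default_rng()
    sigma2 = 10 ** (-snr_db / 10)                 # noise variance for unit-energy symbols
    x = 1.0 - 2.0 * codeword                      # BPSK mapping: 0 -> +1, 1 -> -1
    y = x + rng.normal(0.0, np.sqrt(sigma2), x.shape)
    llr = 2.0 * y / sigma2                        # Gaussian-channel LLR
    erased = rng.random(x.shape) < erasure_prob   # e.g. thermal-asperity erasures
    llr[erased] = 0.0                             # an erasure carries no information
    if puncture_mask is not None:
        llr[puncture_mask] = 0.0                  # punctured bits were never transmitted
    return llr

# Example: puncture every fourth bit of a length-16 all-zero codeword (raising the rate).
cw = np.zeros(16, dtype=int)
mask = np.arange(16) % 4 == 0
llrs = channel_llrs(cw, snr_db=2.0, puncture_mask=mask)

These LLRs would then be passed to a standard belief-propagation LDPC decoder; choosing which positions to puncture with minimal performance loss is exactly the design question studied in the thesis.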
104

Statistical validation and calibration of computer models

Liu, Xuyuan 21 January 2011 (has links)
This thesis deals with modeling, validation and calibration problems in experiments of computer models. Computer models are mathematical representations of real systems developed for understanding and investigating the systems. Before a computer model is used, it often needs to be validated by comparing the computer outputs with physical observations and calibrated by adjusting internal model parameters in order to improve the agreement between the computer outputs and physical observations. As computer models become more powerful and popular, the complexity of input and output data raises new computational challenges and stimulates the development of novel statistical modeling methods. One challenge is to deal with computer models with random inputs (random effects). This kind of computer model is very common in engineering applications. For example, in a thermal experiment at Sandia National Laboratories (Dowding et al. 2008), the volumetric heat capacity and thermal conductivity are random input variables. If input variables are randomly sampled from particular distributions with unknown parameters, the existing methods in the literature are not directly applicable. The reason is that integration over the random variable distribution is needed for the joint likelihood and the integration cannot always be expressed in a closed form. In this research, we propose a new approach which combines the nonlinear mixed effects model and the Gaussian process model (Kriging model). Different model formulations are also studied to gain a better understanding of validation and calibration activities, using the thermal problem. Another challenge comes from computer models with functional outputs. While many methods have been developed for modeling computer experiments with a single response, the literature on modeling computer experiments with functional response is sparse. Dimension reduction techniques can be used to overcome the complexity problem of functional response; however, they generally involve two steps. Models are first fit at each individual setting of the input to reduce the dimensionality of the functional data. Then the estimated parameters of the models are treated as new responses, which are further modeled for prediction. Alternatively, pointwise models are first constructed at each time point and then functional curves are fit to the parameter estimates obtained from the fitted models. In this research, we first propose a functional regression model to relate functional responses to both design and time variables in a single step. Secondly, we propose a functional kriging model which uses variable selection methods by imposing a penalty function. We show that the proposed model performs better than dimension reduction based approaches and the kriging model without regularization. In addition, non-asymptotic theoretical bounds on the estimation error are presented.
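As a much-simplified sketch of the calibration idea this work builds on (a toy simulator and a known noise level are assumed; the thesis's Gaussian process discrepancy term and random-effects integration are omitted), an unknown model parameter is adjusted so that the computer model best matches the physical observations:

import numpy as np
from scipy.optimize import minimize_scalar

def simulator(x, theta):
    """Stand-in computer model: a heat-up curve governed by an unknown parameter theta."""
    return 1.0 - np.exp(-theta * x)

def neg_log_likelihood(theta, x_obs, y_obs, noise_sd=0.05):
    """Gaussian negative log-likelihood of the observations given theta (up to a constant)."""
    resid = y_obs - simulator(x_obs, theta)
    return 0.5 * np.sum(resid**2) / noise_sd**2

# Synthetic "physical" observations generated with theta = 0.8.
rng = np.random.default_rng(1)
x_obs = np.linspace(0.1, 3.0, 20)
y_obs = simulator(x_obs, theta=0.8) + rng.normal(0, 0.05, x_obs.shape)

result = minimize_scalar(neg_log_likelihood, bounds=(0.01, 5.0),
                         args=(x_obs, y_obs), method="bounded")
theta_hat = result.x   # calibrated parameter, close to the true value 0.8

Replacing the direct residual term with a Gaussian process emulator plus a discrepancy function, and integrating over random inputs, gives the kind of formulation studied in the thesis.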
105

A metamodeling approach for approximation of multivariate, stochastic and dynamic simulations

Hernandez Moreno, Andres Felipe 04 April 2012 (has links)
This thesis describes the implementation of metamodeling approaches as a solution to approximate multivariate, stochastic and dynamic simulations. In the area of statistics, metamodeling (or "model of a model") refers to the scenario where an empirical model is built based on simulated data. In this thesis, this idea is exploited by using pre-recorded dynamic simulations as a source of simulated dynamic data. Based on this simulated dynamic data, an empirical model is trained to map the dynamic evolution of the system from the current discrete time step to the next discrete time step. Therefore, it is possible to approximate the dynamics of the complex dynamic simulation by iteratively applying the trained empirical model. The rationale for creating such an approximate dynamic representation is that the empirical models / metamodels are much more affordable to compute than the original dynamic simulation, while having an acceptable prediction error. The successful implementation of metamodeling approaches, as approximations of complex dynamic simulations, requires understanding of the propagation of error during the iterative process. Prediction errors made by the empirical model at earlier times of the iterative process propagate into future predictions of the model. The propagation of error means that the trained empirical model will deviate from the expensive dynamic simulation because of its own errors. Based on this idea, the Gaussian process model is chosen as the metamodeling approach for the approximation of expensive dynamic simulations in this thesis. This empirical model was selected not only for its flexibility and error estimation properties, but also because it can illustrate relevant issues to be considered if other metamodeling approaches were used for this purpose.
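The rollout-and-error-propagation idea reads naturally as a few lines of code. The sketch below (a linear autoregressive stand-in for the Gaussian process metamodel, with an invented damped-oscillator "simulation") fits a one-step-ahead predictor and applies it iteratively so that each prediction feeds the next:

import numpy as np

# Pre-recorded "expensive" dynamic simulation: a damped oscillation sampled in time.
t = np.linspace(0, 20, 400)
traj = np.exp(-0.1 * t) * np.cos(t)

# Training pairs: predict the next sample from a short window of past samples.
lag = 3
X = np.column_stack([traj[i:len(traj) - lag + i] for i in range(lag)])
y = traj[lag:]
w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(lag), X.T @ y)   # ridge-regularised fit

# Iterative rollout: the metamodel's own outputs become its future inputs.
window = list(traj[:lag])
rollout = list(window)
for _ in range(len(traj) - lag):
    nxt = float(np.dot(w, window))
    rollout.append(nxt)
    window = window[1:] + [nxt]

error = np.abs(np.array(rollout) - traj)   # grows with the horizon: error propagation

A Gaussian process metamodel, as chosen in the thesis, additionally returns a predictive variance at each step, which is what makes the propagated error quantifiable.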
106

Computational solutions of linear systems and models of the human tear film

Maki, Kara Lee. January 2009 (has links)
Thesis (Ph.D.)--University of Delaware, 2009. / Principal faculty advisor: Richard J. Braun, Dept. of Mathematical Sciences. Includes bibliographical references.
107

Bayesian learning methods for neural coding

Park, Mi Jung 27 January 2014 (has links)
A primary goal in systems neuroscience is to understand how neural spike responses encode information about the external world. A popular approach to this problem is to build an explicit probabilistic model that characterizes the encoding relationship in terms of a cascade of stages: (1) linear dimensionality reduction of a high-dimensional stimulus space using a bank of filters or receptive fields (RFs); (2) a nonlinear function from filter outputs to spike rate; and (3) a stochastic spiking process with recurrent feedback. These models have described single- and multi-neuron spike responses in a wide variety of brain areas. This dissertation addresses Bayesian methods to efficiently estimate the linear and non-linear stages of the cascade encoding model. In the first part, the dissertation describes a novel Bayesian receptive field estimator based on a hierarchical prior that flexibly incorporates knowledge about the shapes of neural receptive fields. This estimator achieves error rates several times lower than existing methods, and can be applied to a variety of other neural inference problems such as extracting structure in fMRI data. The dissertation also presents active learning frameworks developed for receptive field estimation incorporating a hierarchical prior in real-time neurophysiology experiments. In addition, the dissertation describes a novel low-rank model for the high-dimensional receptive field, combined with a hierarchical prior for more efficient receptive field estimation. In the second part, the dissertation describes new models for neural nonlinearities using Gaussian processes (GPs) and Bayesian active learning algorithms in closed-loop neurophysiology experiments to rapidly estimate neural nonlinearities. The dissertation also presents several stimulus selection criteria and compares their performance in neural nonlinearity estimation. Furthermore, the dissertation presents a variation of the new models by including an additional latent Gaussian noise source, to infer the degree of over-dispersion in neural spike responses. The proposed model successfully captures various mean-variance relationships in neural spike responses and achieves higher prediction accuracy than previous models. / text
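The forward direction of the cascade model described above fits in a few lines. The sketch below (hypothetical filter shape, softplus nonlinearity and white-noise stimuli, not the dissertation's code) generates spike counts from the three stages; the dissertation's methods address the inverse problem of recovering the filter and the nonlinearity from such data:

import numpy as np

rng = np.random.default_rng(2)
dim, n_trials = 50, 1000

rf = np.exp(-0.5 * ((np.arange(dim) - 25) / 5.0) ** 2)   # Gaussian-bump receptive field
stimuli = rng.normal(size=(n_trials, dim))               # white-noise stimuli

drive = stimuli @ rf                                     # (1) linear filtering stage
rate = np.log1p(np.exp(drive - 2.0))                     # (2) softplus nonlinearity -> firing rate
spikes = rng.poisson(rate)                               # (3) stochastic (Poisson) spiking

Bayesian receptive field estimation then places a hierarchical prior over rf and infers it from (stimuli, spikes), while the GP-based nonlinearity models replace the fixed softplus with a function learned from data.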
108

Algorithms and data structures for cache-efficient computation: theory and experimental evaluation

Chowdhury, Rezaul Alam 28 August 2008 (has links)
Not available / text
109

Modeling of Magnetic Fields and Extended Objects for Localization Applications

Wahlström, Niklas January 2015 (has links)
The level of automation in our society is ever increasing. Technologies like self-driving cars, virtual reality, and fully autonomous robots, which all were unimaginable a few decades ago, are realizable today, and will become standard consumer products in the future. These technologies depend upon autonomous localization and situation awareness where careful processing of sensory data is required. To increase efficiency, robustness and reliability, appropriate models for these data are needed. In this thesis, such models are analyzed within three different application areas, namely (1) magnetic localization, (2) extended target tracking, and (3) autonomous learning from raw pixel information. Magnetic localization is based on one or more magnetometers measuring the induced magnetic field from magnetic objects. In this thesis we present a model for determining the position and the orientation of small magnets with an accuracy of a few millimeters. This enables three-dimensional interaction with computer programs that cannot be handled with other localization techniques. Further, an additional model is proposed for detecting wrong-way drivers on highways based on sensor data from magnetometers deployed in the vicinity of traffic lanes. Models for mapping complex magnetic environments are also analyzed. Such magnetic maps can be used for indoor localization where other systems, such as GPS, do not work. In the second application area, models for tracking objects from laser range sensor data are analyzed. The target shape is modeled with a Gaussian process and is estimated jointly with target position and orientation. The resulting algorithm is capable of tracking various objects with different shapes within the same surveillance region. In the third application area, autonomous learning based on high-dimensional sensor data is considered. In this thesis, we consider one instance of this challenge, the so-called pixels to torques problem, where an agent must learn a closed-loop control policy from pixel information only. To solve this problem, high-dimensional time series are described using a low-dimensional dynamical model. Techniques from machine learning together with standard tools from control theory are used to autonomously design a controller for the system without any prior knowledge. System models used in the applications above are often provided in continuous time. However, a major part of the applied theory is developed for discrete-time systems. Discretization of continuous-time models is hence fundamental. Therefore, this thesis ends with a method for performing such discretization using Lyapunov equations together with analytical solutions, enabling efficient implementation in software. / How can a computer be made to follow the puck in table hockey to compile match statistics, a brush to paint virtual watercolours, a scalpel to digitize pathology, or a multi-tool to sculpt in 3D? These are four applications built on the patent-pending algorithm developed in the thesis. The method hides a small magnet in the tool and deploys a number of three-axis magnetometers - of the same kind found in our smartphones - in a network around the work surface. The magnet's magnetic field gives rise to a unique signature in the sensors, from which the magnet's position in three degrees of freedom, as well as two of its angles, can be computed. The thesis develops a complete framework for these computations and the associated analysis.
Another application studied on the basis of this principle is the detection and classification of vehicles. In a collaboration with Luleå University of Technology and project partners, an algorithm has been developed that classifies the direction in which vehicles pass using only measurements from a two-axis magnetometer. Tests outside Luleå show essentially 100% correct classification. Viewing a vehicle as a structure of magnetic dipoles rather than as a single large one is an example of a so-called extended target. In classical theory for tracking aircraft, ships and the like, targets are described as points, but many of today's increasingly accurate sensors generate several measurements from the same target. By giving targets a geometric extent or other attributes (such as dipole structures), one can not only improve the tracking algorithms and use sensor data more efficiently, but also classify the targets more effectively. The thesis proposes a model that describes the geometric shape in a more flexible way and with a higher level of detail than previous models in the literature. A completely different application studied is the use of machine learning to teach a computer to steer a planar pendulum to a desired position solely by analyzing the pixels of video images. The approach lets the computer study large numbers of images of a pendulum, in this case thousands, in order to learn how a known control signal affects the pendulum's dynamics, so that it can act autonomously once the learning phase is complete. In the long run, the technique could be used to develop autonomous robots. / In the electronic version figure 2.2a is corrected. / COOPLOC
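As a sketch of the measurement model behind the magnetic localization results above (the standard point-dipole field equation is assumed; the sensor layout, magnet moment and least-squares formulation below are illustrative, not the patented algorithm), each three-axis magnetometer observes the field of a point dipole, and the magnet's position and moment are recovered by nonlinear least squares over all sensors:

import numpy as np
from scipy.optimize import least_squares

MU0_4PI = 1e-7   # mu_0 / (4 pi) in SI units

def dipole_field(sensor_pos, dipole_pos, moment):
    """Magnetic flux density of a point dipole at each sensor location."""
    r = sensor_pos - dipole_pos                       # (n_sensors, 3)
    d = np.linalg.norm(r, axis=1, keepdims=True)
    r_hat = r / d
    return MU0_4PI * (3 * r_hat * (r_hat @ moment)[:, None] - moment) / d**3

def residuals(params, sensor_pos, measured):
    pos, moment = params[:3], params[3:]
    return (dipole_field(sensor_pos, pos, moment) - measured).ravel()

# Four magnetometers around a 20 cm work surface, one small magnet above it.
sensors = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.0, 0.2, 0.0], [0.2, 0.2, 0.0]])
true_pos, true_mom = np.array([0.08, 0.12, 0.05]), np.array([0.0, 0.0, 0.02])
meas = dipole_field(sensors, true_pos, true_mom)

fit = least_squares(residuals, x0=np.array([0.1, 0.1, 0.1, 0.0, 0.0, 0.01]),
                    args=(sensors, meas))
est_pos = fit.x[:3]    # estimated magnet position (metres)

Because the dipole field is rotationally symmetric about the moment axis, only two of the magnet's three orientation angles are observable, which matches the "position plus two angles" formulation quoted above.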
110

Transfer learning for classification of spatially varying data

Jun, Goo 13 December 2010 (has links)
Many real-world datasets have spatial components that provide valuable information about characteristics of the data. In this dissertation, a novel framework for adaptive models that exploit spatial information in data is proposed. The proposed framework is mainly based on development and applications of Gaussian processes. First, a supervised learning method is proposed for the classification of hyperspectral data with spatially adaptive model parameters. The proposed algorithm models spatially varying means of each spectral band of a given class using a Gaussian process regression model. For a given location, the predictive distribution of a given class is modeled by a multivariate Gaussian distribution with spatially adjusted parameters obtained from the proposed algorithm. The Gaussian process model is generally regarded as a good tool for interpolation, but not for extrapolation. Moreover, the uncertainty of the predictive distribution increases as the distance from the training instances increases. To overcome this problem, a semi-supervised learning algorithm is presented for the classification of hyperspectral data with spatially adaptive model parameters. This algorithm fits the test data with a spatially adaptive mixture-of-Gaussians model, where the spatially varying parameters of each component are obtained by Gaussian process regressions with soft memberships using the mixture-of-Gaussian-processes model. The proposed semi-supervised algorithm assumes a transductive setting, where the unlabeled data is considered to be similar to the training data. This is not true in general, however, since one may not know how many classes may exist in the unexplored regions. A spatially adaptive nonparametric Bayesian framework is therefore proposed by applying spatially adaptive mechanisms to the mixture model with infinitely many components. In this method, each component in the mixture has spatially adapted parameters estimated by Gaussian process regressions, and spatial correlations between indicator variables are also considered. In addition to land cover and land use classification applications based on hyperspectral imagery, the Gaussian process-based spatio-temporal model is also applied to predict ground-based aerosol optical depth measurements from satellite multispectral images, and to select the most informative ground-based sites by active learning. In this application, heterogeneous features with spatial and temporal information are incorporated together by employing a set of covariance functions, and it is shown that the spatio-temporal information exploited in this manner substantially improves the regression model. The conventional meaning of spatial information usually refers to actual spatio-temporal locations in the physical world. In the final chapter of this dissertation, the meaning of spatial information is generalized to the parametrized low-dimensional representation of data in feature space, and a corresponding spatial modeling technique is exploited to develop a nearest-manifold classification algorithm. / text
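As a simplified sketch of the first, supervised idea above (reduced to one spectral band and two classes, with invented drift functions and a fixed noise level; scikit-learn's GP regressor stands in for the dissertation's models), a Gaussian process predicts each class's band mean at a pixel's (x, y) location, and the pixel is assigned to the class with the larger spatially adjusted Gaussian density:

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

def make_class(mean_fn, n=100):
    """Labelled training pixels: (x, y) locations plus one noisy band value."""
    xy = rng.uniform(0, 10, size=(n, 2))
    band = mean_fn(xy) + rng.normal(0, 0.3, n)
    return xy, band

# Each class's band mean drifts across the scene (e.g. illumination or moisture gradients).
xy_a, band_a = make_class(lambda p: 2.0 + 0.2 * p[:, 0])
xy_b, band_b = make_class(lambda p: 3.5 - 0.1 * p[:, 1])

kernel = RBF(length_scale=3.0) + WhiteKernel(noise_level=0.1)
gp_a = GaussianProcessRegressor(kernel=kernel).fit(xy_a, band_a)
gp_b = GaussianProcessRegressor(kernel=kernel).fit(xy_b, band_b)

# Classify a test pixel using spatially adjusted class means and a fixed class noise of 0.3.
pixel_xy, pixel_band = np.array([[8.0, 2.0]]), 3.4
score_a = norm.pdf(pixel_band, loc=gp_a.predict(pixel_xy)[0], scale=0.3)
score_b = norm.pdf(pixel_band, loc=gp_b.predict(pixel_xy)[0], scale=0.3)
label = "A" if score_a > score_b else "B"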
