31

Machine Unlearning and hyperparameters optimization in Gaussian Process regression / Avinlärning och hyperparameteroptimering i regression av Gaussiska processer

Manthe, Matthis January 2021 (has links)
The establishment of the General Data Protection Regulation (GDPR) in Europe in 2018, including the "Right to be Forgotten", poses important questions about the need for efficient data-deletion techniques for trained machine learning models, since retraining such models from scratch whenever a data point must be deleted is impractical. We tackle this problem for Gaussian Process regression and define an efficient exact unlearning technique that fully includes the optimization of the kernel hyperparameters. The method is based on an efficient retracing of past optimizations by the Resilient Backpropagation (Rprop) algorithm through the online formulation of Gaussian Process regression. Furthermore, we extend the proposed method to the Product-of-Experts and Bayesian Committee Machine families of local approximations of Gaussian Process regression, further enhancing the unlearning capabilities through a random partitioning of the dataset. The performance of the proposed method depends largely on the regression task. We show through multiple experiments on different problems that several iterations of such optimization can be recomputed without any kernel matrix inversions, at the cost of saving intermediate states of the training phase. We also offer several ideas for extending this method to an approximate unlearning scheme, further improving its computational complexity.
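To make the core of an exact unlearning step concrete, the sketch below removes one training point from a fitted GP posterior by downdating the inverted kernel matrix with the standard block-inverse identity, avoiding a full O(n³) re-inversion. This is a minimal numpy illustration under assumed ingredients (an RBF kernel with fixed hyperparameters and Gaussian observation noise); the thesis's retracing of the Rprop hyperparameter optimization is not reproduced here.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel, a common default for GP regression."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def delete_point(K_inv, y, i):
    """Exact 'unlearning' of training point i.

    Given the inverse of the noise-augmented kernel matrix, return the
    inverse of that matrix with row/column i removed, via the block-matrix
    downdate A' = A - b b^T / beta -- no O(n^3) re-inversion needed.
    """
    mask = np.arange(len(y)) != i
    A = K_inv[np.ix_(mask, mask)]
    b = K_inv[mask, i]
    beta = K_inv[i, i]
    return A - np.outer(b, b) / beta, y[mask]

# usage sketch on toy data
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)

K = rbf_kernel(X, X) + 1e-2 * np.eye(50)   # kernel matrix with noise term
K_inv = np.linalg.inv(K)

K_inv_del, y_del = delete_point(K_inv, y, i=7)   # forget training point 7

# sanity check: identical to retraining from scratch without point 7
mask = np.arange(50) != 7
assert np.allclose(K_inv_del, np.linalg.inv(K[np.ix_(mask, mask)]))
```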
32

Gaussian Process Regression-based GPS Variance Estimation and Trajectory Forecasting / Regression med Gaussiska Processer för Estimering av GPS Varians och Trajektoriebaserade Tidtabellsprognoser

Kortesalmi, Linus January 2018 (has links)
Spatio-temporal data is a commonly used source of information, and analysing it with machine learning can yield many interesting and useful insights. In this thesis project, a novel public transportation spatio-temporal dataset is explored and analysed. The dataset contains 282 GB of positional events, spanning two weeks, from all public transportation vehicles in Östergötland county, Sweden. From the data exploration, three high-level problems are formulated: bus stop detection, GPS variance estimation, and arrival time prediction (trajectory forecasting). The bus stop detection problem is briefly discussed and solutions are proposed. Gaussian process regression is an effective method for solving regression problems. The GPS variance estimation problem is solved with a mixture of Gaussian processes, and a mixture of Gaussian processes is also used to predict the arrival time for public transportation buses. The arrival time is predicted from one bus stop to the next, not for the whole trajectory. The result of the arrival time prediction is a distribution of arrival times, from which the earliest and latest expected arrivals at the next bus stop can easily be determined, alongside the most probable arrival time. The naïve arrival time prediction model has a root mean square error of 5 to 19 seconds, and in general its absolute error decreases over time within each segment. The result of the GPS variance estimation problem is a model that can compare the variance across different environments along a given route.
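As a hedged illustration of the kind of predictive distribution described above, the toy sketch below fits a single plain GP (not the thesis's mixture of Gaussian processes) to synthetic position/travel-time pairs for one segment, then reads off the most probable arrival time and an earliest/latest band from the posterior mean and variance. All data and kernel settings are invented for the example.

```python
import numpy as np

def rbf(X1, X2, ls=100.0, var=400.0):
    """Squared-exponential kernel on 1-D inputs (position along segment, m)."""
    return var * np.exp(-0.5 * (X1[:, None] - X2[None, :]) ** 2 / ls**2)

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 400, 80)                        # observed positions (m)
y_train = 0.09 * X_train + 5 * rng.standard_normal(80)   # noisy travel times (s)

K = rbf(X_train, X_train) + 25.0 * np.eye(80)            # ~5 s observation noise
alpha = np.linalg.solve(K, y_train)

X_new = np.array([400.0])                                # the next bus stop
k_star = rbf(X_train, X_new)
mean = k_star.T @ alpha                                  # most probable arrival
cov = rbf(X_new, X_new) - k_star.T @ np.linalg.solve(K, k_star)
std = np.sqrt(np.diag(cov))

print(f"arrival: {mean[0]:.0f} s "
      f"(earliest/latest ~95%: {mean[0]-2*std[0]:.0f}..{mean[0]+2*std[0]:.0f} s)")
```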
33

Nonlinear Prediction in Credit Forecasting and Cloud Computing Deployment Optimization

Jarrett, Nicholas Walton Daniel January 2015 (has links)
This thesis presents data analysis and methodology for two prediction problems. The first is forecasting midlife credit ratings from personality information collected during early adulthood; the second is the analysis of matrix multiplication in cloud computing.

The goal of the credit forecasting problem is to determine whether personality assessments of young adults are linked to their propensity to develop credit in middle age. The data come from a longitudinal study spanning over 40 years. We do find an association between credit risk and personality in this cohort. Such a link has obvious implications for lenders, but it can also be used to improve social utility through more efficient resource allocation.

We analyze matrix multiplication in the cloud and model I/O and local computation for individual tasks. We establish conditions under which the distribution of job completion times can be obtained explicitly, and we generalize these results to cases where analytic derivations are intractable.

We develop models that emulate the multiplication procedure, allowing job times for different deployment parameter settings to be emulated after observing only a subset of tasks, or subsets of tasks for nearby deployment parameter settings.

The modeling framework developed sheds new light on the problem of determining the expected job completion time for sparse matrix multiplication. / Dissertation
34

Branching Gaussian Process Models for Computer Vision

Simek, Kyle January 2016 (has links)
Bayesian methods provide a principled approach to some of the hardest problems in computer vision: low signal-to-noise ratios, ill-posed problems, and missing data. This dissertation applies Bayesian modeling to infer multidimensional continuous manifolds (e.g., curves and surfaces) from image data using Gaussian process priors. Gaussian processes are ideal priors in this setting, providing a stochastic model over continuous functions while permitting efficient inference. We begin by introducing a formal mathematical representation of branching curvilinear structures called a curve tree, and we define a novel family of Gaussian processes over curve trees called branching Gaussian processes. We define two types of branching Gaussian processes and show how to extend them to branching surfaces and hypersurfaces. We then apply Gaussian processes in three computer vision applications. First, we perform 3D reconstruction of moving plants from 2D images; using a branching Gaussian process prior, we recover high-quality 3D trees while remaining robust to plant motion and camera calibration error. Second, we perform multi-part segmentation of plant leaves from highly occluded silhouettes using a novel Gaussian process model for stochastic shape; our method obtains good segmentations despite highly ambiguous shape evidence and minimal training data. Finally, we estimate 2D trees from microscope images of neurons with highly ambiguous branching structure. We first fit a tree to a blurred version of the image, where structure is less ambiguous, and then iteratively deform and expand the tree to fit finer images, using a branching Gaussian process as a regularizing prior for deformation. Our method infers natural tree topologies despite ambiguous branching and image data containing loops. Our work shows that Gaussian processes can be a powerful building block for modeling complex structure, and that they perform well in computer vision problems with significant noise and ambiguity.
35

Statistical adjustment, calibration, and uncertainty quantification of complex computer models

Yan, Huan 27 August 2014 (has links)
This thesis consists of three chapters on the statistical adjustment, calibration, and uncertainty quantification of complex computer models with applications in engineering. The first chapter systematically develops an engineering-driven statistical adjustment and calibration framework, the second chapter deals with the calibration of a potassium current model in a cardiac cell, and the third chapter develops an emulator-based approach for propagating input parameter uncertainty in a solid end milling process.

Engineering model development involves several simplifying assumptions made for mathematical tractability which are often unrealistic in practice, leading to discrepancies in the model predictions. A commonly used statistical approach to this problem is to build a statistical model for the discrepancies between the engineering model and observed data. In contrast, an engineering approach would be to find the causes of discrepancy and fix the engineering model using first principles. The engineering approach is time consuming, whereas the statistical approach is fast; the drawback of the statistical approach is that it treats the engineering model as a black box, so the statistically adjusted models lack physical interpretability. In the first chapter, we propose a new framework for model calibration and statistical adjustment. It opens up the black box using simple main-effects analysis and graphical plots, and introduces statistical models inside the engineering model. This approach leads to simpler adjustment models that are physically more interpretable. The approach is illustrated using a model for predicting the cutting forces in a laser-assisted mechanical micromachining process and a model for predicting the temperature of outlet air in a fluidized-bed process.

The second chapter studies the calibration of a computer model of potassium currents in a cardiac cell. The computer model is expensive to evaluate and contains twenty-four unknown parameters, which makes the calibration challenging for traditional kriging-based methods. A further difficulty is the presence of large cell-to-cell variation, which is modeled through random effects. We propose physics-driven strategies for approximating the computer model and an efficient method for identifying and estimating the parameters of this high-dimensional nonlinear mixed-effects statistical model.

Traditional sampling-based approaches to uncertainty quantification can be slow when the computer model is computationally expensive. In such cases, an easy-to-evaluate emulator can replace the computer model to improve computational efficiency. However, the traditional kriging technique performs poorly for the solid end milling process. In the third chapter, we develop a new emulator in which a base function captures the general trend of the output, and we propose optimal experimental design strategies for fitting it. We call this emulator the local base emulator. Using the solid end milling example, we show that the local base emulator is an efficient and accurate technique for uncertainty quantification with advantages over the traditional tools.
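The emulator idea in the third chapter, a base function capturing the general trend with a flexible model on top, can be sketched generically as ordinary least squares plus a GP on the residuals. The snippet below is an illustrative simplification on made-up toy data, not the thesis's local base emulator or its experimental design strategy.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# toy simulator output: a strong linear trend plus a smooth nonlinear deviation
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(40, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.3 * np.sin(6 * X[:, 0])

# base function: ordinary least squares capturing the overall trend
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# GP emulates only the residual deviation from the base function
gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-6),
                              normalize_y=True)
gp.fit(X, y - A @ coef)

def emulate(X_new):
    """Emulator prediction: base trend plus GP residual, with uncertainty."""
    base = np.column_stack([np.ones(len(X_new)), X_new]) @ coef
    resid, std = gp.predict(X_new, return_std=True)
    return base + resid, std

mean, std = emulate(np.array([[0.5, 0.5]]))
print(f"emulated output: {mean[0]:.3f} +/- {std[0]:.3f}")
```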
36

Bayesovská optimalizace hyperparametrů pomocí Gaussovských procesů / Bayesian Optimization of Hyperparameters Using Gaussian Processes

Arnold, Jakub January 2019 (has links)
The goal of this thesis was to implement a practical tool for optimizing hyperparameters of neural networks using Bayesian optimization. We show the theoretical foundations of Bayesian optimization, including the necessary mathematical background for Gaussian Process regression, and some extensions to Bayesian optimization. In order to evaluate the performance of Bayesian optimization, we performed multiple real-world experiments with different neural network architectures. In our comparison to a random search, Bayesian optimization usually obtained a higher objective function value, and achieved lower variance in repeated experiments. Furthermore, in three out of four experiments, the hyperparameters discovered by Bayesian optimization outperformed the manually designed ones. We also show how the underlying Gaussian Process regression can be a useful tool for visualizing the effects of each hyperparameter, as well as possible relationships between multiple hyperparameters.
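A minimal version of such a hyperparameter-optimization loop is sketched below: a GP surrogate plus the expected-improvement acquisition, minimizing a stand-in validation-loss function over the log learning rate. The objective, kernel, and search bounds are invented for illustration; the thesis's actual tool and experiments are not reproduced.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

np.random.seed(0)

def objective(lr):
    """Stand-in for 'train a network with learning rate lr, return val. loss'."""
    return (np.log10(lr) + 2.5) ** 2 + 0.1 * np.random.rand()

bounds = (-5.0, 0.0)                       # search over log10(learning rate)
X = list(np.random.uniform(*bounds, 3))    # a few random initial evaluations
Y = [objective(10.0 ** x) for x in X]

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(15):
    gp.fit(np.array(X).reshape(-1, 1), Y)
    grid = np.linspace(*bounds, 500).reshape(-1, 1)
    mu, sigma = gp.predict(grid, return_std=True)
    best = min(Y)
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = float(grid[np.argmax(ei)])                   # most promising point
    X.append(x_next)
    Y.append(objective(10.0 ** x_next))

print("best log10(lr) found:", X[int(np.argmin(Y))])
```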
37

Gaussian Process Multiclass Classification : Evaluation of Binarization Techniques and Likelihood Functions

Ringdahl, Benjamin January 2019 (has links)
In binary Gaussian process classification, the prior class membership probabilities are obtained by transforming a Gaussian process to the unit interval, typically with either the logistic likelihood function or the cumulative Gaussian likelihood function. Multiclass classification problems can be handled by any binary classifier by means of so-called binarization techniques, which reduce the multiclass problem to a number of binary problems. Besides introducing the mathematics and methods behind Gaussian process classification, we compare the binarization techniques one-against-all and one-against-one in the context of Gaussian process classification, and we compare the performance of the logistic likelihood and the cumulative Gaussian likelihood. This is done by means of two experiments: a general experiment where the methods are tested on several publicly available datasets, and a more specific experiment where the methods are compared with respect to class imbalance and class overlap on several artificially generated datasets. The results indicate no significant difference between the choices of binarization technique and likelihood function for typical datasets, although the one-against-one technique showed slightly more consistent performance. However, the second experiment revealed differences in how the methods react to varying degrees of class imbalance and class overlap. Most notably, the logistic likelihood was a dominant factor, and the one-against-one technique performed better than one-against-all.
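For readers who want to reproduce the flavor of this comparison, scikit-learn's GP classifier (which uses the logistic link with a Laplace approximation) exposes both binarization schemes directly; the snippet below contrasts them on a stock dataset. This is an assumed setup for illustration, not the thesis's experimental protocol.

```python
from sklearn.datasets import load_iris
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# compare the two binarization techniques discussed above
for scheme in ("one_vs_rest", "one_vs_one"):
    clf = GaussianProcessClassifier(kernel=RBF(1.0), multi_class=scheme)
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{scheme}: mean accuracy {score:.3f}")
```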
38

Simulation and optimization of steam-cracking processes

Campet, Robin 17 January 2019 (has links) (PDF)
Thermal cracking is an industrial process sensitive to both temperature and pressure operating conditions. The use of internally ribbed reactors is a passive method to enhance the chemical selectivity of the process, thanks to a significant increase in heat transfer. However, this method also increases the pressure loss, which is detrimental to the chemical yield and must be quantified. Because of the complexity of turbulence and chemical kinetics, and since detailed experimental measurements are difficult to conduct, the real advantage of such geometries in terms of selectivity is poorly known and difficult to assess. This work aims both at evaluating the real benefits of internally ribbed reactors in terms of chemical yield and at proposing innovative, optimized reactor designs. This is made possible by the Large Eddy Simulation (LES) approach, which allows a detailed study of the reactive flow inside several reactor geometries. The AVBP code, which solves the compressible Navier-Stokes equations for turbulent flows, is used to simulate thermal cracking with a dedicated numerical methodology. In particular, the effects of pressure loss and heat transfer on chemical conversion are compared for a smooth and a ribbed reactor in order to draw conclusions about the impact of wall roughness under industrial operating conditions. Finally, an optimization methodology based on series of LES and Gaussian processes is developed, and an innovative reactor design for thermal cracking applications, which maximizes the chemical yield, is proposed.
39

Optimizing process parameters to increase the quality of the output in a separator : An application of Deep Kernel Learning in combination with the Basin-hopping optimizer

Herwin, Eric January 2019 (has links)
Achieving optimal efficiency of production in the industrial sector is a process that is continuously under development. In several industrial installations separators, produced by Alfa Laval, may be found, and therefore it is of interest to make these separators operate more efficiently. The separator that is investigated separates impurities and water from crude oil, and the separation performance is partially affected by the settings of process parameters. In this thesis it is investigated whether optimal or near-optimal process parameter settings, which minimize the water content in the output, can be obtained. Furthermore, it is also investigated whether the settings of a session can be tested to conclude about their suitability for the separator. The data used in this investigation originates from sensors of a factory-installed separator. It consists of five variables which are related to the water content in the output; two additional variables, related to time, are created to enforce this relationship. Using this data, optimal or near-optimal process parameter settings may be found with an optimization technique. For this procedure, a Gaussian Process with the Deep Kernel Learning extension (GP-DKL) is used to model the relationship between the water content and the sensor data. Three models with different kernel functions are evaluated, and the GP-DKL with a Spectral Mixture kernel is demonstrated to be the most suitable option. This combination is used as the objective function in a Basin-hopping optimizer, resulting in settings which correspond to a lower water content. Thus, it is concluded that optimal or near-optimal settings can be obtained. Furthermore, the process parameter settings of a session can be tested by utilizing the Bayesian properties of the GP-DKL model. However, due to the large posterior variance of the model, it cannot be determined whether the process parameter settings are suitable for the separator.
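The optimization step can be sketched as follows: a surrogate model of the water content is fitted to sensor history and handed to scipy's basin-hopping optimizer as the objective. A plain GP stands in for the thesis's GP-DKL with a Spectral Mixture kernel, and the data, bounds, and parameter count below are invented for illustration.

```python
import numpy as np
from scipy.optimize import basinhopping
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# toy sensor history: 5 process parameters -> measured water content
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(200, 5))
y = ((X - 0.3) ** 2).sum(axis=1) + 0.05 * rng.standard_normal(200)

# surrogate for the separator (plain GP here, not the thesis's GP-DKL)
surrogate = GaussianProcessRegressor(
    kernel=RBF([0.3] * 5) + WhiteKernel(1e-3),
    normalize_y=True).fit(X, y)

def predicted_water_content(params):
    return float(surrogate.predict(params.reshape(1, -1))[0])

# basin-hopping escapes local minima via perturb-then-local-minimize steps
result = basinhopping(predicted_water_content, x0=np.full(5, 0.5),
                      niter=50, seed=4,
                      minimizer_kwargs={"method": "L-BFGS-B",
                                        "bounds": [(0, 1)] * 5})
print("suggested process parameter settings:", result.x.round(2))
```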
40

Geração de mapas de ambiente de rádio em sistemas de comunicações sem fio com incerteza de localização. / Generation of radio environment maps in wireless communications systems with location uncertainty.

Silva Junior, Ricardo Augusto da 17 December 2018 (has links)
The generation and use of radio environment maps (REM) in wireless systems has been the subject of recent research in the scientific literature. Among its possible applications, the REM provides important information for coverage prediction and optimization in wireless systems, since it is based on measurements collected directly from the network. In this context, the REM generation process depends on processing the measurements and their locations to construct the maps through spatial prediction. However, the location uncertainty of the collected measurements can significantly degrade the accuracy of the spatial predictions and, consequently, affect decisions based on the REM. This work addresses the problem of REM generation in a more realistic way, formulating a spatial prediction model that introduces location errors into the radio environment of a wireless communication system. The investigations show that the impact of location uncertainty on REM generation is significant, especially in the estimation techniques used to learn the parameters of the spatial prediction model.
Thus, a spatial prediction technique based on geostatistical tools is proposed to overcome the negative effects caused by the location uncertainty of the REM measurements. Computational simulations are developed to evaluate the performance of the main prediction techniques in the context of REM generation, considering the problem of location uncertainty. The simulation results of the proposed technique are promising and show that taking the statistical distribution of location errors into account can yield more accurate predictions for the REM generation process. The influence of different aspects of radio environment modeling is also analyzed, reinforcing the idea that learning the radio environment parameters plays an important role in the accuracy of the spatial predictions that are fundamental for reliable REM generation. Finally, an experimental study is carried out through a measurement campaign, generating the REM in practice and exploring the performance of the learning and prediction algorithms developed in this work.
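A naive baseline for REM generation, which this thesis improves upon, is to krige the measurements at their reported (GPS-corrupted) locations and absorb the location error into the noise term. The sketch below sets up exactly that baseline on a synthetic path-loss field; the proposed geostatistical correction for the error distribution is not shown, and all field and error parameters are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)

def rssi(pos):
    """True field: signal strength (dB) decaying from a transmitter at origin."""
    return -40.0 - 20.0 * np.log10(np.linalg.norm(pos, axis=1) + 1.0)

true_pos = rng.uniform(-100, 100, size=(300, 2))          # actual locations (m)
reported = true_pos + rng.normal(0, 5.0, size=(300, 2))   # GPS error, sigma = 5 m
z = rssi(true_pos) + rng.normal(0, 1.0, 300)              # measured signal (dB)

# naive REM: krige on the erroneous reported locations; location uncertainty
# is absorbed into an inflated nugget (WhiteKernel)
gp = GaussianProcessRegressor(kernel=RBF(30.0) + WhiteKernel(1.0),
                              normalize_y=True).fit(reported, z)

# predict the map on a regular grid, with pointwise uncertainty
gx, gy = np.meshgrid(np.linspace(-100, 100, 50), np.linspace(-100, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
rem, rem_std = gp.predict(grid, return_std=True)
print("REM range (dB):", rem.min().round(1), "to", rem.max().round(1))
```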
