1

Probabilistic machine learning for circular statistics : models and inference using the multivariate Generalised von Mises distribution

Wu Navarro, Alexandre Khae January 2018
Probabilistic machine learning and circular statistics (the branch of statistics concerned with data as angles and directions) are two research communities that have grown mostly in isolation from one another. On the one hand, the probabilistic machine learning community has developed powerful frameworks, such as Gaussian Processes, for problems whose data live in Euclidean spaces, but has generally neglected the other topologies studied by circular statistics. On the other hand, the approximate inference frameworks from probabilistic machine learning have only recently started to reach the circular statistics landscape. This thesis aims to bridge the gap between these two fields by contributing models and approximate inference algorithms to both. In particular, we introduce the multivariate Generalised von Mises distribution (mGvM), which allows the use of kernels in circular statistics akin to Gaussian Processes, together with an augmented representation. These models support a wide range of applications, comprising both latent variable modelling and regression of circular data. We then propose methods to conduct approximate inference on these models. In particular, we investigate the use of variational inference, Expectation Propagation and Markov chain Monte Carlo methods. The variational inference route taken is a mean-field approach that efficiently leverages the mGvM's tractable conditionals and creates a baseline for comparison with other methods. Next, an Expectation Propagation approach is presented, drawing on the Expectation Consistent framework for Ising models and connecting the approximations used to the augmented model introduced earlier. In the final MCMC chapter, efficient Gibbs and Hamiltonian Monte Carlo samplers are derived for the mGvM and the augmented model.
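To make the role of the tractable conditionals concrete, below is a minimal Gibbs sampler sketch for the closely related multivariate von Mises "sine" model, whose full conditionals are univariate von Mises distributions. This is an illustrative sketch under assumed sine-model potentials (`mu`, `kappa` and coupling matrix `lam` are our own toy choices), not the thesis's mGvM samplers.

```python
import numpy as np

def gibbs_sine_model(mu, kappa, lam, n_samples=5000, seed=0):
    """Gibbs sampling for the multivariate von Mises 'sine' model,
    p(theta) ∝ exp(kappa·cos(theta - mu) + 0.5·sin(theta - mu)ᵀ Λ sin(theta - mu)),
    exploiting that each full conditional is a univariate von Mises."""
    rng = np.random.default_rng(seed)
    d = len(mu)
    theta = mu.copy()
    samples = np.empty((n_samples, d))
    for t in range(n_samples):
        for j in range(d):
            s = np.sin(theta - mu)
            b = lam[j] @ s - lam[j, j] * s[j]   # coupling from the other angles
            r = np.hypot(kappa[j], b)           # conditional concentration
            phi = np.arctan2(b, kappa[j])       # conditional mean offset
            theta[j] = rng.vonmises(mu[j] + phi, r)
        samples[t] = theta
    return samples

# Two positively coupled angles
mu = np.array([0.0, np.pi / 2])
kappa = np.array([2.0, 2.0])
lam = np.array([[0.0, 1.0], [1.0, 0.0]])
draws = gibbs_sine_model(mu, kappa, lam)
print(np.round(np.angle(np.exp(1j * draws).mean(axis=0)), 3))  # circular means
```

Each sweep resamples one angle from its von Mises conditional, which is exactly the property the mean-field and Gibbs schemes above exploit.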
2

Efficient deterministic approximate Bayesian inference for Gaussian process models

Bui, Thang Duc January 2018
Gaussian processes are powerful nonparametric distributions over continuous functions that have become a standard tool in modern probabilistic machine learning. However, the applicability of Gaussian processes in the large-data regime and in hierarchical probabilistic models is severely limited by analytic and computational intractabilities. It is, therefore, important to develop practical approximate inference and learning algorithms that can address these challenges. To this end, this dissertation provides a comprehensive and unifying perspective of pseudo-point based deterministic approximate Bayesian learning for a wide variety of Gaussian process models, which connects previously disparate strands of literature, greatly extends them and allows new state-of-the-art approximations to emerge. We start by building a posterior approximation framework based on Power-Expectation Propagation for Gaussian process regression and classification. This framework relies on a structured approximate Gaussian process posterior based on a small number of pseudo-points, which are judiciously chosen to summarise the actual data and enable tractable and efficient inference and hyperparameter learning. Many existing sparse approximations are recovered as special cases of this framework, and can now be understood as performing approximate posterior inference using a common approximate posterior. Critically, extensive empirical evidence suggests that new approximation methods arising from this unifying perspective outperform existing approaches in many real-world regression and classification tasks. We explore the extensions of this framework to Gaussian process state space models, Gaussian process latent variable models and deep Gaussian processes, which also unify many recently developed approximation schemes for these models. Several mean-field and structured approximate posterior families for the hidden variables in these models are studied. We also discuss several methods for approximate uncertainty propagation in recurrent and deep architectures based on Gaussian projection, linearisation, and simple Monte Carlo. The benefits of the unified inference and learning frameworks for these models are illustrated in a variety of real-world state-space modelling and regression tasks.
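For reference, the sketch below implements the standard pseudo-point predictive equations in their DTC/VFE form, one of the special cases such frameworks recover; the RBF kernel, fixed hyperparameters and toy data are our assumptions, not the thesis's implementation. Inference costs O(NM²) for M pseudo-points rather than O(N³).

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between row-vector inputs a and b."""
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def sparse_gp_predict(X, y, Z, Xstar, noise=0.1, jitter=1e-6):
    """Pseudo-point (DTC/VFE-style) GP regression predictions."""
    M = len(Z)
    Kuu = rbf(Z, Z) + jitter * np.eye(M)
    Kuf = rbf(Z, X)
    Kus = rbf(Z, Xstar)
    Sigma = Kuu + Kuf @ Kuf.T / noise**2          # M x M system instead of N x N
    mean = Kus.T @ np.linalg.solve(Sigma, Kuf @ y) / noise**2
    var = (rbf(Xstar, Xstar).diagonal()
           - np.sum(Kus * np.linalg.solve(Kuu, Kus), axis=0)
           + np.sum(Kus * np.linalg.solve(Sigma, Kus), axis=0))
    return mean, var

# Toy 1D regression: 100 data points summarised by 10 pseudo-points
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (100, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(100)
Z = np.linspace(-3, 3, 10)[:, None]
Xstar = np.linspace(-3, 3, 50)[:, None]
mean, var = sparse_gp_predict(X, y, Z, Xstar)
```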
3

Approximate inference : new visions

Li, Yingzhen January 2018
Nowadays machine learning (especially deep learning) techniques are being incorporated into many intelligent systems affecting the quality of human life. The ultimate purpose of these systems is to perform automated decision making, and in order to achieve this, predictive systems need to return estimates of their confidence. Powered by the rules of probability, Bayesian inference is the gold standard method to perform coherent reasoning under uncertainty. It is generally believed that intelligent systems following the Bayesian approach can better incorporate uncertainty information for reliable decision making, and be less vulnerable to attacks such as data poisoning. Critically, the success of Bayesian methods in practice, including the recent resurgence of Bayesian deep learning, relies on fast and accurate approximate Bayesian inference applied to probabilistic models. These approximate inference methods perform (approximate) Bayesian reasoning at a relatively low cost in terms of time and memory, thus allowing the principles of Bayesian modelling to be applied to many practical settings. However, more work needs to be done to scale approximate Bayesian inference methods to big systems such as deep neural networks and to large-scale datasets such as ImageNet. In this thesis we develop new algorithms towards addressing the open challenges in approximate inference. In the first part of the thesis we develop two new approximate inference algorithms, by drawing inspiration from the well known expectation propagation and message passing algorithms. Both approaches provide a unifying view of existing variational methods from different algorithmic perspectives. We also demonstrate that they lead to better calibrated inference results for complex models such as neural network classifiers and deep generative models, and scale to large datasets containing hundreds of thousands of data points. In the second part of the thesis we propose a new research direction for approximate inference: developing algorithms for fitting posterior approximations of arbitrary form, by rethinking the fundamental principles of Bayesian computation and the necessity of algorithmic constraints in traditional inference schemes. We specify four algorithmic options for the development of such new-generation approximate inference methods, with one of them further investigated and applied to Bayesian deep learning tasks.
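As a minimal illustration of fitting a posterior approximation by stochastic optimisation, here is a sketch of black-box variational inference with the reparameterisation trick, fitting a Gaussian q(θ) to a toy unnormalised posterior; the target log p(θ) = −θ⁴/4 and all settings are assumptions made for illustration, not the thesis's algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative non-Gaussian target: log p(theta) = -theta^4 / 4 (unnormalised)
grad_logp = lambda th: -th**3

mu, rho = 0.0, 0.0          # variational parameters; sigma = exp(rho)
lr, n_mc = 0.05, 64
for step in range(2000):
    eps = rng.standard_normal(n_mc)
    theta = mu + np.exp(rho) * eps                     # reparameterised samples
    g = grad_logp(theta)
    grad_mu = g.mean()                                 # dELBO/dmu
    grad_rho = (g * eps).mean() * np.exp(rho) + 1.0    # dELBO/drho (entropy term = 1)
    mu += lr * grad_mu
    rho += lr * grad_rho
print(f"q(theta) ~ N({mu:.3f}, {np.exp(2 * rho):.3f})")
```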
4

Approximate inference in graphical models

Hennig, Philipp January 2011
Probability theory provides a mathematically rigorous yet conceptually flexible calculus of uncertainty, allowing the construction of complex hierarchical models for real-world inference tasks. Unfortunately, exact inference in probabilistic models is often computationally expensive or even intractable. A close inspection in such situations often reveals that computational bottlenecks are confined to certain aspects of the model, which can be circumvented by approximations without having to sacrifice the model's interesting aspects. The conceptual framework of graphical models provides an elegant means of representing probabilistic models and deriving both exact and approximate inference algorithms in terms of local computations. This makes graphical models an ideal aid in the development of generalizable approximations. This thesis contains a brief introduction to approximate inference in graphical models (Chapter 2), followed by three extensive case studies in which approximate inference algorithms are developed for challenging applied inference problems. Chapter 3 derives the first probabilistic game tree search algorithm. Chapter 4 provides a novel expressive model for inference in psychometric questionnaires. Chapter 5 develops a model for the topics of large corpora of text documents, conditional on document metadata, with a focus on computational speed. In each case, graphical models help in two important ways: They first provide important structural insight into the problem; and then suggest practical approximations to the exact probabilistic solution.
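The "local computations" mentioned above can be made concrete with a small loopy belief propagation (sum-product) sketch on a toy pairwise Markov random field; the graph and potentials below are illustrative assumptions, not drawn from the thesis's case studies.

```python
import numpy as np

# A small loopy pairwise MRF: 4 binary nodes arranged in a cycle
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
psi_node = np.array([[0.7, 0.3], [0.5, 0.5], [0.2, 0.8], [0.5, 0.5]])
coupling = np.array([[2.0, 0.5], [0.5, 2.0]])       # attractive pairwise potential
psi_edge = {e: coupling for e in edges}

neighbours = {i: [] for i in range(n)}
for i, j in edges:
    neighbours[i].append(j)
    neighbours[j].append(i)

# m[(i, j)] is the message from node i to node j, initialised uniform
m = {(i, j): np.ones(2) for i, j in edges + [(j, i) for i, j in edges]}

for _ in range(50):
    new = {}
    for (i, j) in m:
        pot = psi_edge[(i, j)] if (i, j) in psi_edge else psi_edge[(j, i)].T
        incoming = np.prod([m[(k, i)] for k in neighbours[i] if k != j], axis=0)
        msg = pot.T @ (psi_node[i] * incoming)       # sum over x_i, local computation
        new[(i, j)] = msg / msg.sum()                # normalise for stability
    m = new

beliefs = np.array([psi_node[i] * np.prod([m[(k, i)] for k in neighbours[i]], axis=0)
                    for i in range(n)])
beliefs /= beliefs.sum(axis=1, keepdims=True)
print(beliefs)   # approximate marginals
```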
5

Stochastic m-estimators: controlling accuracy-cost tradeoffs in machine learning

Dillon, Joshua V. 15 November 2011
m-Estimation represents a broad class of estimators, including least-squares and maximum likelihood, and is a widely used tool for statistical inference. Its successful application, however, often requires negotiating physical resources for desired levels of accuracy. These limiting factors, which we abstractly refer to as costs, may be computational, such as time-limited cluster access for parameter learning, or they may be financial, such as purchasing human-labeled training data under a fixed budget. This thesis explores these accuracy-cost tradeoffs by proposing a family of estimators that maximizes a stochastic variation of the traditional m-estimator. Such "stochastic m-estimators" (SMEs) are constructed by stitching together different m-estimators at random. Each such instantiation resolves the accuracy-cost tradeoff differently, and taken together they span a continuous spectrum of accuracy-cost tradeoff resolutions. We prove the consistency of the estimators and provide formulas for their asymptotic variance and statistical robustness. We also assess their cost for two concerns typical to machine learning: computational complexity and labeling expense. For the sake of concreteness, we discuss experimental results in the context of a variety of discriminative and generative Markov random fields, including Boltzmann machines, conditional random fields and model mixtures, among others. The theoretical and experimental studies demonstrate the effectiveness of the estimators when computational resources are insufficient or when obtaining additional labeled samples is necessary. We also demonstrate that in some cases the stochastic m-estimator is associated with robustness, thereby increasing its statistical accuracy and representing a win-win.
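A toy rendering of the stitching idea (deliberately simplified, and not the thesis's composite-likelihood construction): each observation is randomly assigned one of two m-estimator losses, and the mixing probability lam traces out a spectrum of tradeoff resolutions between the efficient least-squares estimator and a robust absolute-error estimator.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
# Location estimation with 10% gross outliers
x = np.concatenate([rng.normal(3.0, 1.0, 450), rng.normal(20.0, 1.0, 50)])

def sme_objective(lam):
    """Stitch two m-estimators at random: each point contributes either a
    squared-error term (efficient) or an absolute-error term (robust),
    chosen i.i.d. with P(squared) = lam."""
    mask = rng.random(len(x)) < lam
    def obj(theta):
        return np.sum((x[mask] - theta)**2 / 2) + np.sum(np.abs(x[~mask] - theta))
    return obj

for lam in (1.0, 0.5, 0.0):
    theta_hat = minimize_scalar(sme_objective(lam)).x
    print(f"lam={lam:.1f}  estimate={theta_hat:.3f}")
```

With lam = 1 the estimate is pulled by the outliers, with lam = 0 it behaves like the median, and intermediate values interpolate between the two resolutions.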
6

Accelerating Monte Carlo methods for Bayesian inference in dynamical models

Dahlin, Johan January 2016
Decision making and prediction from noisy observations are two important and challenging problems in many areas of society. Some examples of applications are recommendation systems for online shopping and streaming services, connecting genes with certain diseases and modelling climate change. In this thesis, we make use of Bayesian statistics to construct probabilistic models given prior information and historical data, which can be used for decision support and predictions. The main obstacle with this approach is that it often results in mathematical problems lacking analytical solutions. To cope with this, we make use of statistical simulation algorithms known as Monte Carlo methods to approximate the intractable solution. These methods enjoy well-understood statistical properties but are often computationally prohibitive to employ. The main contribution of this thesis is the exploration of different strategies for accelerating inference methods based on sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). That is, strategies for reducing the computational effort while keeping or improving the accuracy. A major part of the thesis is devoted to proposing such strategies for the MCMC method known as the particle Metropolis-Hastings (PMH) algorithm. We investigate two strategies: (i) introducing estimates of the gradient and Hessian of the target to better tailor the algorithm to the problem and (ii) introducing a positive correlation between the point-wise estimates of the target. Furthermore, we propose an algorithm based on the combination of SMC and Gaussian process optimisation, which can provide reasonable estimates of the posterior but with a significant decrease in computational effort compared with PMH. Moreover, we explore the use of sparseness priors for approximate inference in over-parametrised mixed effects models and autoregressive processes. This can potentially be a practical strategy for inference in the big data era. Finally, we propose a general method for increasing the accuracy of the parameter estimates in non-linear state space models by applying a designed input signal. / Should the Riksbank raise or lower the repo rate at its next meeting in order to reach the inflation target? Which genes are associated with a particular disease? How can Netflix and Spotify know which films and which music I will want next? These three problems are examples of questions where statistical models can be useful for providing guidance and support for decisions. Statistical models combine theoretical knowledge about, for example, the Swedish economic system with historical data to produce forecasts of future events. These forecasts can then be used to evaluate, for instance, what would happen to inflation in Sweden if unemployment falls, or how the value of my pension savings changes when the Stockholm stock exchange crashes. Applications such as these, and many others, make statistical models important for many parts of society. One way of constructing statistical models is to continuously update a model as more information is collected. This approach, called Bayesian statistics, is particularly useful when good prior insight into the model is available, or when only a small amount of historical data can be used to build the model. A drawback of Bayesian statistics is that the computations required to update the model with new information are often very complicated.
In such situations, one can instead simulate the outcome of millions of variants of the model and compare these against the historical observations at hand. One can then average over the variants that gave the best results to arrive at a final model. It can therefore sometimes take days or weeks to produce a model. The problem becomes particularly severe when using more advanced models that could give better forecasts but take too long to build. In this thesis we employ a number of strategies to ease or improve these simulations. For example, we propose taking more insights about the system into account, thereby reducing the number of model variants that need to be examined: certain models can be ruled out from the start because we have a good idea of roughly what a good model should look like. We can also modify the simulation so that it moves more easily between different types of models, so that the space of all possible models is explored more efficiently. We propose a number of combinations and modifications of existing methods to speed up the fitting of the model to the observations, and show that the computation time can in some cases be reduced from a few days to about an hour. Hopefully this will eventually make it practical to use more advanced models, which in turn will result in better forecasts and decisions.
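To fix ideas, here is a minimal particle Metropolis-Hastings sketch for a toy linear Gaussian state-space model: a bootstrap particle filter provides an unbiased likelihood estimate that is used inside a random-walk Metropolis-Hastings sampler. It shows plain PMH only, without the gradient/Hessian or correlated-estimator accelerations proposed in the thesis, and all model settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear Gaussian state-space model: x_t = phi*x_{t-1} + v_t, y_t = x_t + e_t
phi_true, sv, se, T = 0.8, 0.5, 0.5, 100
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + sv * rng.standard_normal()
y = x + se * rng.standard_normal(T)

def pf_loglik(phi, n_particles=100):
    """Bootstrap particle filter estimate of log p(y_{1:T} | phi)."""
    particles = np.zeros(n_particles)
    ll = 0.0
    for t in range(T):
        particles = phi * particles + sv * rng.standard_normal(n_particles)
        logw = -0.5 * ((y[t] - particles) / se) ** 2 - np.log(se * np.sqrt(2 * np.pi))
        c = logw.max()
        w = np.exp(logw - c)
        ll += c + np.log(w.mean())                   # log-likelihood increment
        particles = rng.choice(particles, n_particles, p=w / w.sum())  # resample
    return ll

# PMH: random-walk proposal on phi, flat prior on (-1, 1)
phi, ll = 0.5, pf_loglik(0.5)
chain = []
for it in range(1000):
    prop = phi + 0.05 * rng.standard_normal()
    if abs(prop) < 1.0:
        ll_prop = pf_loglik(prop)
        if np.log(rng.random()) < ll_prop - ll:      # accept using estimated likelihoods
            phi, ll = prop, ll_prop
    chain.append(phi)
print("posterior mean of phi ≈", np.mean(chain[200:]))
```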
7

Le statisticien neuronal : comment la perspective bayésienne peut enrichir les neurosciences / The neuronal statistician : how the Bayesian perspective can enrich neuroscience

Dehaene, Guillaume 09 September 2016
Bayesian inference answers key questions of perception such as: "What should I believe given what I have perceived?". As such, it is a rich source of models for cognitive science and neuroscience (Knill and Richards, 1996). This PhD manuscript explores two such models. We first investigate an efficient coding problem, asking how best to represent probabilistic information in unreliable neurons. We innovate over the state of the art by modelling finite input information. We then explore a new ideal observer model of sound localisation using the Interaural Time Difference cue, whereas current models are purely descriptive models of the electrophysiology. Finally, we explore the properties of the Expectation Propagation approximate inference algorithm, which offers great potential for both practical machine-learning applications and neuronal population models, but is currently very poorly understood.
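Since the final contribution concerns Expectation Propagation itself, a minimal moment-matching sketch may help: EP on Minka's classic "clutter" problem, with the tilted moments computed by brute-force grid quadrature purely for transparency (practical EP implementations use closed-form moments). The data and settings are illustrative assumptions, not the thesis's analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

def npdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Clutter problem: y_i ~ (1-w) N(theta, 1) + w N(0, 10), prior theta ~ N(0, 100)
w, theta_true, n = 0.3, 2.0, 30
clutter = rng.random(n) < w
y = np.where(clutter, rng.normal(0, np.sqrt(10), n), rng.normal(theta_true, 1, n))

grid = np.linspace(-30, 30, 6001)
dx = grid[1] - grid[0]
prior_prec = 1.0 / 100.0
a, b = np.zeros(n), np.zeros(n)     # Gaussian sites in natural parameters

for sweep in range(10):
    for i in range(n):
        prec, pm = prior_prec + b.sum(), a.sum()
        cav_prec, cav_pm = prec - b[i], pm - a[i]    # delete site i -> cavity
        if cav_prec <= 0:
            continue
        cav_m, cav_s = cav_pm / cav_prec, cav_prec ** -0.5
        f = (1 - w) * npdf(grid, y[i], 1.0) + w * npdf(y[i], 0.0, np.sqrt(10.0))
        tilted = npdf(grid, cav_m, cav_s) * f
        Z = tilted.sum() * dx
        mean = (grid * tilted).sum() * dx / Z        # moment matching by quadrature
        var = (grid ** 2 * tilted).sum() * dx / Z - mean ** 2
        b[i] = 1.0 / var - cav_prec                  # new site = tilted minus cavity
        a[i] = mean / var - cav_pm

prec = prior_prec + b.sum()
print(f"EP posterior: mean={a.sum() / prec:.3f}, var={1 / prec:.3f}")
```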
8

Parameter Estimation and Simulation of Driving Datasets / Parameteruppskattning och simulering av kördatauppsättningar

Qu, Bojian January 2023
The development of autonomous driving has been in full swing in recent years, and one aspect on which Autonomous Vehicles (AVs) must always focus is safety. Although the corresponding technology has gradually matured, and AVs have performed well in a large number of tests, it remains uncertain whether AVs can cope with all possible situations. The world is complex and ever-changing, subject to countless disturbances at every moment; as the butterfly effect suggests, even the most insignificant disturbance may set off a huge storm in the near future. If AVs really enter people's daily lives, they will inevitably encounter many unexpected situations never experienced before, so ensuring that AVs can handle these well has become the most important issue at the moment. It is necessary to give the Automated Driving System (ADS) sufficient challenges during training and testing to achieve acceptable safety and stability. However, dangerous and extreme driving scenarios are very rare in the real world, and such tests are very expensive to carry out in reality. Therefore, artificially creating a series of critical driving scenarios and then training and testing the ADS in a simulation environment has become the mainstream solution. This thesis project builds a complete framework for the automatic generation, simulation, and analysis of safety-critical driving scenarios. First, the specified scenarios and features are extracted from a naturalistic driving dataset through pre-defined rules; then a density estimation model is adopted to learn the features and find the distribution of the specified scenarios; once the distribution is obtained, synthetic driving scenarios can be generated by sampling. Finally, these synthetic scenarios are visualized in simulation for safety assessment and data analysis.
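A compressed sketch of the learn-then-sample step is shown below, using made-up stand-in features (the thesis extracts real features from a naturalistic driving dataset with pre-defined rules); here a kernel density estimate plays the role of the density estimation model, and synthetic scenarios are drawn from it. The feature names and distributions are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)

# Hypothetical stand-in features of extracted hard-braking scenarios:
# (initial speed [m/s], peak deceleration [m/s^2]); real features would
# come from rule-based extraction on a naturalistic driving dataset.
speed = rng.normal(25.0, 4.0, 500)
decel = -np.abs(rng.normal(4.0, 1.0, 500)) - 0.1 * (speed - 25.0)
features = np.vstack([speed, decel])        # shape (d, n), as gaussian_kde expects

kde = gaussian_kde(features)                # the density estimation model
synthetic = kde.resample(1000, seed=1)      # sample synthetic critical scenarios

print("synthetic feature means:", synthetic.mean(axis=1).round(2))
```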
9

Decision making under uncertainty

McInerney, Robert E. January 2014
Operating and interacting in an environment requires the ability to manage uncertainty and to choose definite courses of action. In this thesis we look to Bayesian probability theory as the means to achieve the former, and find that through rigorous application of the rules it prescribes we can, in theory, solve problems of decision making under uncertainty. Unfortunately such methodology is intractable in real-world problems, and thus approximation of one form or another is inevitable. Many techniques make use of heuristic procedures for managing uncertainty. We note that such methods suffer from unreliable performance and rely on the specification of ad-hoc variables. Performance is often judged according to long-term asymptotic measures, which we believe ignore the most complex and relevant parts of the problem domain. We therefore look to develop principled approximate methods that preserve the meaning of Bayesian theory but operate with the scalability of heuristics. We start by looking at function approximation in continuous state and action spaces using Gaussian Processes. We develop a novel family of covariance functions which allow tractable inference methods to accommodate some of the uncertainty lost by not following full Bayesian inference. We also investigate the exploration-versus-exploitation tradeoff in the context of the Multi-Armed Bandit, and demonstrate that principled approximations behave close to optimally and perform significantly better than heuristics on a range of experimental test beds.
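As a small illustration of principled approximation versus heuristic in the bandit setting (our toy example, not the thesis's test beds): Thompson sampling maintains a Beta posterior per arm and acts greedily on a posterior sample, whereas epsilon-greedy explores at an ad-hoc constant rate.

```python
import numpy as np

rng = np.random.default_rng(7)
p_true = np.array([0.35, 0.45, 0.55])   # unknown Bernoulli reward probabilities
n_arms, horizon = len(p_true), 5000

# Thompson sampling: Beta(1,1) posteriors, act greedily on a posterior sample
alpha, beta = np.ones(n_arms), np.ones(n_arms)
ts_reward = 0
for _ in range(horizon):
    arm = int(np.argmax(rng.beta(alpha, beta)))
    r = rng.random() < p_true[arm]
    alpha[arm] += r
    beta[arm] += 1 - r
    ts_reward += r

# Epsilon-greedy: same problem, exploration set by an ad-hoc constant
eps, counts, values = 0.1, np.zeros(n_arms), np.zeros(n_arms)
eg_reward = 0
for _ in range(horizon):
    arm = rng.integers(n_arms) if rng.random() < eps else int(np.argmax(values))
    r = rng.random() < p_true[arm]
    counts[arm] += 1
    values[arm] += (r - values[arm]) / counts[arm]   # running mean of rewards
    eg_reward += r

print("Thompson sampling reward:", ts_reward, " epsilon-greedy reward:", eg_reward)
```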
