1.
Le statisticien neuronal : comment la perspective bayésienne peut enrichir les neurosciences / The neuronal statistician: how the Bayesian perspective can enrich neuroscience. Dehaene, Guillaume, 09 September 2016.
Bayesian inference answers key questions of perception, such as: "What should I believe given what I have perceived?". As such, it is a rich source of models for cognitive science and neuroscience (Knill and Richards, 1996). This PhD manuscript explores two such models. We first investigate an efficient coding problem, asking how best to represent probabilistic information in unreliable neurons. We advance on earlier models of this kind by introducing finite input information into ours. We then explore a new ideal-observer model of sound-source localization from the interaural time difference cue, whereas current models are purely phenomenological descriptions of the electrophysiology. Finally, we explore the properties of the Expectation Propagation approximate-inference algorithm, which holds great promise both for practical machine-learning applications and for neuronal population models, but is currently very poorly understood.
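Since this abstract leans on Expectation Propagation, it may help to see EP's core operation in isolation: each intractable likelihood "site" is approximated by the Gaussian whose first two moments match those of the tilted distribution (cavity Gaussian times the exact site). A minimal sketch, assuming a probit site and a simple quadrature grid, both our own illustrative choices rather than anything from the thesis:

```python
# A sketch of EP's core projection step: replace an intractable "tilted"
# distribution (cavity Gaussian times one exact likelihood site) with the
# Gaussian matching its first two moments. The probit site and grid
# settings below are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def moment_match(cavity_mean, cavity_var, site, half_width=8.0, n=4001):
    """Project tilted(x) ∝ N(x; cavity) * site(x) onto a Gaussian."""
    sd = np.sqrt(cavity_var)
    x = np.linspace(cavity_mean - half_width * sd, cavity_mean + half_width * sd, n)
    dx = x[1] - x[0]
    tilted = norm.pdf(x, cavity_mean, sd) * site(x)
    Z = tilted.sum() * dx                                # tilted normalizer
    mean = (x * tilted).sum() * dx / Z                   # matched first moment
    var = ((x - mean) ** 2 * tilted).sum() * dx / Z      # matched second moment
    return mean, var

# One projection with a probit site Phi(x); the result agrees with the
# closed form in Rasmussen and Williams (2006), Section 3.6.
m, v = moment_match(0.0, 1.0, norm.cdf)
print(m, v)  # ≈ 0.564, 0.682
```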
2.
Branching Gaussian Process Models for Computer Vision. Simek, Kyle, January 2016.
Bayesian methods provide a principled approach to some of the hardest problems in computer vision—low signal-to-noise ratios, ill-posed problems, and problems with missing data. This dissertation applies Bayesian modeling to infer multidimensional continuous manifolds (e.g., curves, surfaces) from image data using Gaussian process priors. Gaussian processes are ideal priors in this setting, providing a stochastic model over continuous functions while permitting efficient inference. We begin by introducing a formal mathematical representation of branching curvilinear structures called a curve tree, and we define a novel family of Gaussian processes over curve trees called branching Gaussian processes. We define two types of branching Gaussian processes and show how to extend them to branching surfaces and hypersurfaces. We then apply Gaussian processes in three computer vision applications. First, we perform 3D reconstruction of moving plants from 2D images. Using a branching Gaussian process prior, we recover high-quality 3D trees while remaining robust to plant motion and camera calibration error. Second, we perform multi-part segmentation of plant leaves from highly occluded silhouettes using a novel Gaussian process model for stochastic shape. Our method obtains good segmentations despite highly ambiguous shape evidence and minimal training data. Finally, we estimate 2D trees from microscope images of neurons with highly ambiguous branching structure. We first fit a tree to a blurred version of the image, where structure is less ambiguous, then iteratively deform and expand the tree to fit finer images, using a branching Gaussian process as a regularizing prior for the deformation. Our method infers natural tree topologies despite ambiguous branching and image data containing loops. Our work shows that Gaussian processes can be a powerful building block for modeling complex structure, and that they perform well in computer vision problems with significant noise and ambiguity.
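The common building block behind these applications is ordinary GP regression on a single curve: a kernel defines a prior over continuous functions, and conditioning on noisy samples gives a closed-form posterior. A minimal sketch under assumed kernel and noise settings; the dissertation's branching construction extends this by coupling such processes at the branch points of a curve tree:

```python
# A sketch of GP regression on one 1D curve. Kernel form, lengthscale,
# and noise level are illustrative assumptions, not the dissertation's.
import numpy as np

def rbf_kernel(a, b, lengthscale=0.3, variance=1.0):
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

X = np.array([0.1, 0.4, 0.7, 1.0])                     # arc-length positions
y = np.sin(2 * np.pi * X) + 0.05 * np.random.randn(4)  # noisy curve samples
noise_var = 0.05 ** 2

Xs = np.linspace(0.0, 1.2, 100)                    # prediction locations
K = rbf_kernel(X, X) + noise_var * np.eye(len(X))  # training covariance + noise
Ks = rbf_kernel(Xs, X)
mean = Ks @ np.linalg.solve(K, y)                  # posterior mean of the curve
cov = rbf_kernel(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)  # posterior covariance
```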
3.
Robust Prediction of Large Spatio-Temporal Datasets. Chen, Yang, 24 May 2013.
This thesis describes St-RSTP, a robust and efficient Student-t-based Robust Spatio-Temporal Prediction model that provides estimates based on observations over spatio-temporal neighbors. Such prediction is crucial to many applications in geographic information systems, medical imaging, urban planning, economic studies, and climate forecasting. The proposed St-RSTP is more resilient to outliers and other small departures from model assumptions than its predecessor, the Spatio-Temporal Random Effects (STRE) model. STRE is a statistical model with linear-order complexity for processing large-scale spatio-temporal data, but it has been shown to be sensitive to outliers and anomalous observations. In our design, the St-RSTP model assumes that the measurement error follows a Student's t-distribution instead of the traditional Gaussian distribution. To handle the analytically intractable inference under the Student's t model, we propose an approximate inference algorithm in the framework of Expectation Propagation (EP). Extensive experimental evaluations, on both simulated and real-life data sets, demonstrate the robustness and efficiency of our Student-t prediction model compared with the STRE model. / Master of Science
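The robustness mechanism is visible even without the spatio-temporal machinery: Student's t is a scale mixture of Gaussians, so inference implicitly down-weights observations with large residuals. A minimal sketch for a plain location estimate via EM, with illustrative scale and degrees-of-freedom values (the thesis itself handles the full St-RSTP model with EP):

```python
# A sketch of Student-t robustness only: EM for a location parameter
# re-weights each residual, automatically discounting outliers.
# Scale, degrees of freedom, and data are illustrative assumptions.
import numpy as np

def student_t_location(y, nu=4.0, sigma=1.0, iters=50):
    mu = np.median(y)                      # robust initialization
    for _ in range(iters):
        r2 = ((y - mu) / sigma) ** 2
        w = (nu + 1.0) / (nu + r2)         # EM weights: large residuals -> small w
        mu = np.sum(w * y) / np.sum(w)     # weighted-mean update
    return mu

y = np.array([1.0, 1.2, 0.9, 1.1, 15.0])   # one gross outlier
print(student_t_location(y))               # stays near 1.0
print(y.mean())                            # Gaussian estimate dragged to 3.84
```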
4.
Solving Linear and Bilinear Inverse Problems using Approximate Message Passing Methods. Sarkar, Subrata, January 2020.
No description available.
5.
Inference on Markov random fields: methods and applications. Lienart, Thibaut, January 2017.
This thesis considers the problem of performing inference on undirected graphical models with continuous state spaces. These models represent conditional independence structures that arise in Bayesian machine learning. We focus on computational methods and applications, aiming to demonstrate that the factorisation structure corresponding to the conditional independence structure present in high-dimensional models can be exploited to decrease the computational complexity of inference algorithms. First, we consider the smoothing problem on Hidden Markov Models (HMMs) and discuss novel algorithms with sub-quadratic computational complexity in the number of particles used; we show they perform on par with existing state-of-the-art algorithms of quadratic complexity. Further, a novel class of rejection-free samplers for graphical models, the Local Bouncy Particle Sampler (LBPS), is explored and applied to a very large instance of the Probabilistic Matrix Factorisation (PMF) problem. We show the method performs slightly better than Hamiltonian Monte Carlo (HMC); it is also the first practical application of the method to a statistical model with hundreds of thousands of dimensions. In the second part of the thesis, we consider approximate Bayesian inference methods, in particular the Expectation Propagation (EP) algorithm. We show that EP can serve as the backbone of a novel distributed Bayesian inference mechanism. We also discuss novel variants of the EP algorithm and show that a specific update mechanism, analogous to the mirror descent algorithm, outperforms all existing variants and is robust to Monte Carlo noise. Lastly, we show that EP can be used within the Particle Belief Propagation (PBP) algorithm to form cheap, adaptive proposals that significantly outperform classical PBP.
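The distributed-inference point rests on EP's factorized form: the global approximation is a product of local "site" terms, and multiplying exponential-family sites amounts to adding natural parameters, so each worker can summarize its data shard independently. A minimal sketch on a toy conjugate-Gaussian model where the sites happen to be exact; in the thesis the sites are themselves approximations of non-conjugate likelihoods:

```python
# A sketch of the idea behind EP-based distributed inference: workers
# summarize data shards as Gaussian "sites", and combining sites is just
# adding natural parameters. Toy model and values are illustrative.
import numpy as np

def gaussian_site(y_shard, noise_var=1.0):
    """Natural parameters (precision, precision-weighted mean) of one shard's
    likelihood for a shared unknown mean under i.i.d. Gaussian noise."""
    return len(y_shard) / noise_var, y_shard.sum() / noise_var

y = 2.0 + np.random.randn(1000)            # data with true mean 2.0
shards = np.array_split(y, 4)              # four "workers"

prec, shift = 1.0, 0.0                     # N(0, 1) prior in natural form
for shard in shards:
    p, s = gaussian_site(shard)            # computed independently per worker
    prec, shift = prec + p, shift + s      # product of sites = sum of naturals

posterior_mean, posterior_var = shift / prec, 1.0 / prec
print(posterior_mean, posterior_var)       # mean ≈ 2.0, variance ≈ 1/1001
```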