81 |
Integrated modelling and Bayesian inference applied to population and disease dynamics in wildlife: M. bovis in badgers in Woodchester Park
Zijerveld, Leonardus Jacobus Johannes, January 2013 (has links)
Understanding demographic and disease processes in wildlife populations tends to be hampered by incomplete observations which can include significant errors. Models provide useful insights into the potential impacts of key processes, and the value of such models improves greatly through integration with available data in a way that includes all sources of stochasticity and error. To date, the impact on disease of the spatial and social structures observed in wildlife populations has not been widely addressed in modelling. I model the joint effects of differential fecundity and spatial heterogeneity on demography and disease dynamics, using a stochastic description of births, deaths, social-geographic migration, and disease transmission. A small set of rules governs the rates of births and movements in an environment where individuals compete for improved fecundity. This results in realistic population structures which, depending on the mode of disease transmission, can have a profound effect on disease persistence and therefore on disease control strategies in wildlife populations. I also apply a simple model with births, deaths and disease events to the long-term observations of TB (Mycobacterium bovis) in badgers in Woodchester Park. The model is a continuous-time, discrete-state-space Markov chain and is fitted to the data using an implementation of Bayesian parameter inference with an event-based likelihood. This provides a flexible framework for combining data with expert knowledge (in terms of model structure and prior distributions of parameters) and allows us to quantify the model parameters and their uncertainties. Ecological observations tend to be restricted in scope and in spatial and temporal coverage, and estimates are also affected by trapping efficiency and disease test sensitivity. My method accounts for such limitations as well as the stochastic nature of the processes.
I extend the likelihood function by including an error term that depends on the difference between observed and inferred state space variables. I also demonstrate that the estimates improve by increasing observation frequency, by combining the likelihoods of more than one group, and by allowing parameter values to vary through the application of hierarchical priors.
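The event-based, continuous-time formulation above can be sketched with Gillespie's direct method. This is a toy susceptible/infected birth-death system, not the Woodchester Park model: the rate constants, the frequency-dependent transmission term, and the absence of social-geographic structure are all simplifying assumptions made here for illustration.

```python
import random

def gillespie_sbd(s0, i0, t_max, birth=0.6, death=0.1, beta=0.8, seed=1):
    """Simulate a continuous-time birth/death/infection chain (Gillespie's
    direct method). States are (susceptible, infected) counts."""
    rng = random.Random(seed)
    s, i, t = s0, i0, 0.0
    events = [(t, s, i)]
    while t < t_max and s + i > 0:
        n = s + i
        rates = [birth * n,           # birth of a susceptible
                 death * s,           # death of a susceptible
                 death * i,           # death of an infected
                 beta * s * i / n]    # frequency-dependent infection
        total = sum(rates)
        if total == 0:
            break
        t += rng.expovariate(total)   # exponential waiting time to next event
        u = rng.random() * total      # pick which event fired
        if u < rates[0]:
            s += 1
        elif u < rates[0] + rates[1]:
            s -= 1
        elif u < rates[0] + rates[1] + rates[2]:
            i -= 1
        else:
            s -= 1; i += 1
        events.append((t, s, i))
    return events
```

A fitted model of this kind yields the event-based likelihood the abstract refers to, since every jump's rate and timing is explicit.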
|
82 |
Kernel Selection for Convergence and Efficiency in Markov Chain Monte Carlo
Potter, Christopher C. J., 24 April 2013 (has links)
Markov Chain Monte Carlo (MCMC) is a technique for sampling from a target probability distribution, and has risen in importance as faster computing hardware has made possible the exploration of hitherto difficult distributions. Unfortunately, this powerful technique is often misapplied by poor selection of transition kernel for the Markov chain that is generated by the simulation.
Some kernels are used without being checked against the convergence requirements for MCMC (total balance and ergodicity). In this work we prove the existence of a simple proxy for total balance that is not as demanding as detailed balance, the most widely used standard. We show, for discrete-state MCMC, that if a transition kernel is equivalent when it is “reversed” and applied to data which is also “reversed”, then it satisfies total balance. We go on to prove that the sequential single-variable update Metropolis kernel, where variables are simply updated in order, does indeed satisfy total balance for many discrete target distributions, such as the Ising model with uniform exchange constant.
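The sequential single-variable update kernel named above can be sketched on a one-dimensional Ising ring with uniform exchange constant. The inverse temperature, coupling and lattice size below are illustrative choices, and the 1-D geometry is an assumption made here; the relevant feature is the fixed, non-random visitation order.

```python
import math, random

def ising_sweep(spins, beta=0.5, J=1.0, rng=random):
    """One sequential single-variable Metropolis sweep over a 1-D Ising ring
    (periodic boundaries, uniform exchange constant J).

    Sites are visited in a fixed order, not chosen at random: this is the
    kernel whose total balance is discussed above."""
    n = len(spins)
    for k in range(n):                      # fixed visitation order
        # energy change from flipping spin k under H = -J * sum s_i s_j
        dE = 2.0 * J * spins[k] * (spins[k - 1] + spins[(k + 1) % n])
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[k] = -spins[k]
    return spins
```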
Also, two well-known papers by Gelman, Roberts, and Gilks (GRG) [1, 2] have proposed applying the results of an interesting mathematical proof to the realistic optimization of Markov Chain Monte Carlo computer simulations. In particular, they advocated tuning the simulation parameters to select an acceptance ratio of 0.234.
In this paper, we point out that although the proof is valid, its result’s application to practical computations is not advisable, as the simulation algorithm considered in the proof is so inefficient that it produces very poor results under all circumstances. The algorithm used by Gelman, Roberts, and Gilks is also shown to introduce subtle time-dependent correlations into the simulation of intrinsically independent variables. These correlations are of particular interest since they will be present in all simulations that use multi-dimensional MCMC moves.
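The tuning practice at issue can be illustrated with a toy random-walk Metropolis sampler. Two hedges: the 0.234 figure from GRG is an asymptotic, high-dimensional result (the one-dimensional optimum is nearer 0.44), and this standard-normal target and crude widening loop are only meant to show the mechanics of adjusting a proposal scale against a measured acceptance rate.

```python
import math, random

def rwm_acceptance(scale, n_steps=20000, seed=0):
    """Acceptance rate of random-walk Metropolis on a standard normal target."""
    rng = random.Random(seed)
    x, accepted = 0.0, 0
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, scale)
        lr = 0.5 * (x * x - y * y)       # log target ratio for N(0, 1)
        if lr >= 0 or rng.random() < math.exp(lr):
            x, accepted = y, accepted + 1
    return accepted / n_steps

# Crude tuning loop: widen the proposal until acceptance drops toward 0.234.
scale = 0.1
while rwm_acceptance(scale) > 0.234:
    scale *= 1.5
```

Small proposal scales accept almost everything but move slowly; large scales are rejected most of the time; tuning trades these off.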
|
83 |
Bayesian Model Discrimination and Bayes Factors for Normal Linear State Space Models
Frühwirth-Schnatter, Sylvia, January 1993 (has links) (PDF)
It is suggested to discriminate between different state space models for a given time series by means of a Bayesian approach which chooses the model that minimizes the expected loss. Practical implementation of this procedure requires a fully Bayesian analysis for both the state vector and the unknown hyperparameters, which is carried out by Markov chain Monte Carlo methods. Application to some non-standard situations, such as testing hypotheses on the boundary of the parameter space, discriminating between non-nested models, and discriminating between more than two models, is discussed in detail. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
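Under 0-1 loss, minimizing expected loss amounts to choosing the model with the higher posterior probability, which for equal prior model probabilities reduces to the Bayes factor. A minimal conjugate sketch (a toy example constructed here, not taken from the paper) in which the marginal likelihood is available in closed form:

```python
import math

def log_norm_pdf(x, mu, var):
    """Log density of a normal distribution with mean mu and variance var."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def log_bayes_factor(x):
    """Log Bayes factor of M1 (x ~ N(mu, 1), mu ~ N(0, 1)) against
    M0 (x ~ N(0, 1)) for a single observation x.

    Under M1 the marginal (evidence) is N(0, 2) in closed form, so no
    integration is needed."""
    return log_norm_pdf(x, 0.0, 2.0) - log_norm_pdf(x, 0.0, 1.0)
```

An observation near zero favours the simpler M0 (log BF < 0); an observation far in the tails favours M1.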
|
84 |
Multivariate Longitudinal Data Analysis with Mixed Effects Hidden Markov Models
Raffa, Jesse Daniel, January 2012 (has links)
Longitudinal studies, where data on study subjects are collected over time, increasingly involve multivariate longitudinal responses. Frequently, the heterogeneity observed in a multivariate longitudinal response can be attributed to underlying unobserved disease states in addition to any between-subject differences. We propose modeling such disease states using a hidden Markov model (HMM) approach, and we extend previous work, which incorporated random effects into HMMs for the analysis of univariate longitudinal data, to the setting of a multivariate longitudinal response. Multivariate longitudinal data are modeled jointly using separate but correlated random effects between longitudinal responses of mixed data types, in addition to a shared underlying hidden process. We use a computationally efficient Bayesian approach via Markov chain Monte Carlo (MCMC) to fit such models. We apply this methodology to bivariate longitudinal response data from a smoking cessation clinical trial. Under these models, we examine how to incorporate a treatment effect on the disease states, develop methods to classify observations by disease state, and attempt to understand patient dropout. Simulation studies were performed to evaluate the properties of such models and their applications under a variety of realistic situations.
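As a building block for models of this kind, the likelihood of a basic Gaussian-emission HMM can be evaluated with the forward algorithm. This sketch deliberately omits what makes the thesis's models interesting (subject-level random effects, mixed data types, multivariate responses); parameter names and the two-state setup are illustrative.

```python
import math

def hmm_log_likelihood(y, trans, means, var, init):
    """Forward-algorithm log-likelihood for a Gaussian-emission HMM.

    y: observations; trans[r][s]: transition probabilities; means[s], var:
    emission parameters; init[s]: initial state probabilities.
    (Exponentiating alpha directly is fine for short series; long series
    would need a log-sum-exp inside the loop.)"""
    n_states = len(init)

    def log_emit(obs, s):
        return -0.5 * (math.log(2 * math.pi * var) + (obs - means[s]) ** 2 / var)

    # log forward weights: alpha_t(s) = log p(y_1..t, state_t = s)
    alpha = [math.log(init[s]) + log_emit(y[0], s) for s in range(n_states)]
    for obs in y[1:]:
        alpha = [
            math.log(sum(math.exp(alpha[r]) * trans[r][s] for r in range(n_states)))
            + log_emit(obs, s)
            for s in range(n_states)
        ]
    m = max(alpha)
    return m + math.log(sum(math.exp(a - m) for a in alpha))
```

In the mixed-effects extension, the emission means would shift by subject-specific random effects, which is what the MCMC machinery integrates over.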
|
85 |
Model Discrimination Using Markov Chain Monte Carlo Methods
Masoumi, Samira, 24 April 2013 (has links)
Model discrimination deals with situations where there are several candidate models available to represent a system. The objective is to find the “best” model among rival models with respect to prediction of system behavior. Empirical and mechanistic models are two important categories of models. Mechanistic models are developed based on physical mechanisms. These types of models can be applied for prediction purposes, but they are also developed to gain improved understanding of the underlying physical mechanism or to estimate physico-chemical parameters of interest. When model discrimination is applied to mechanistic models, the main goal is typically to determine the “correct” underlying physical mechanism. This study focuses on mechanistic models and presents a model discrimination procedure which is applicable to mechanistic models for the purpose of studying the underlying physical mechanism.
Obtaining the data needed from the real system is one of the challenges particularly in applications where experiments are expensive or time consuming. Therefore, it is beneficial to get the maximum information possible from the real system using the least possible number of experiments.
In this research a new approach to model discrimination is presented that takes advantage of Monte Carlo (MC) methods. It combines a design of experiments (DOE) method with an adaptation of MC model selection methods to obtain a sequential Bayesian Markov Chain Monte Carlo model discrimination framework which is general and usable for a wide range of model discrimination problems.
The procedure has been applied to chemical engineering case studies, and the promising results are discussed. Four case studies are presented: order of reaction, rate of Fe(III) formation, copolymerization, and RAFT polymerization.
The first three benchmark problems allowed us to refine the proposed approach. Moreover, applying the sequential Bayesian Monte Carlo model discrimination framework to the RAFT problem made a contribution to the polymer community by recommending an approach to selecting the correct mechanism.
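The core idea of ranking rival mechanistic models by their evidence can be sketched with a brute-force Monte Carlo estimate of each model's marginal likelihood. This is not the sequential design-of-experiments framework of the thesis: the two rate laws, the uniform prior on the rate constant, and the Gaussian error model are all illustrative assumptions.

```python
import math, random

def log_evidence(model, t_obs, y_obs, sigma=0.05, n_prior=4000, seed=0):
    """Crude Monte Carlo estimate of log p(y | model): average the Gaussian
    likelihood over rate constants drawn from a Uniform(0, 2) prior."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_prior):
        k = rng.uniform(0.0, 2.0)
        ll = sum(-0.5 * ((y - model(t, k)) / sigma) ** 2
                 - math.log(sigma * math.sqrt(2 * math.pi))
                 for t, y in zip(t_obs, y_obs))
        total += math.exp(ll)
    return math.log(total / n_prior)

# Two rival rate laws for a batch reaction with C(0) = 1 (toy mechanisms):
first_order = lambda t, k: math.exp(-k * t)       # C(t) = exp(-k t)
second_order = lambda t, k: 1.0 / (1.0 + k * t)   # C(t) = 1 / (1 + k t)

# Data generated from the first-order law (k = 0.7) should favour it.
t_obs = [0.5, 1.0, 2.0, 4.0]
y_obs = [first_order(t, 0.7) for t in t_obs]
better = log_evidence(first_order, t_obs, y_obs) > \
         log_evidence(second_order, t_obs, y_obs)
```

Normalizing the two evidences (times prior model probabilities) gives the posterior model probabilities that a sequential procedure would update after each new experiment.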
|
86 |
Bayesian Analysis for Large Spatial Data
Park, Jincheol, 2012 August 1900
The Gaussian geostatistical model has been widely used in Bayesian modeling of spatial data. A core difficulty for this model lies in inverting the n x n covariance matrix, where n is the sample size. The computational complexity of matrix inversion increases as O(n^3). This difficulty affects almost all statistical inference approaches for the model, such as kriging and Bayesian modeling. In Bayesian inference, the inverse of the covariance matrix needs to be evaluated at each iteration of the posterior simulation, so the Bayesian approach is infeasible for large sample sizes n given current computational power.
In this dissertation, we propose two approaches to address this computational issue: the auxiliary lattice model (ALM) approach and the Bayesian site selection (BSS) approach. The key feature of ALM is to introduce a latent regular lattice which links a Gaussian Markov random field (GMRF) with the Gaussian field (GF) of the observations. The GMRF on the auxiliary lattice represents an approximation to the Gaussian process. What distinguishes ALM from other approximations is that it avoids the matrix inversion entirely by using the analytical likelihood of the GMRF. The computational complexity of ALM is therefore rather attractive, increasing linearly with sample size.
The second approach, Bayesian site selection (BSS), attempts to reduce the dimension of the data through a smart selection of a representative subset of the observations. The BSS method first splits the observations into two parts: the observations near the target prediction sites (part I) and the remainder (part II). Then, by treating the observations in part I as the response variable and those in part II as explanatory variables, BSS forms a regression model which relates all observations through a conditional likelihood derived from the original model. The dimension of the data can then be reduced by applying a stochastic variable selection procedure to the regression model, which selects only a subset of the part II data as explanatory data. BSS can provide more insight into the underlying true Gaussian process, as it works directly on the original process without any approximations involved.
The practical performance of ALM and BSS will be illustrated with simulated data and real data sets.
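The O(n^3) bottleneck described above is the dense factorisation inside the Gaussian likelihood. A pure-Python sketch makes the cost explicit; the exponential covariance model and all parameter names are choices made here for illustration, not the dissertation's.

```python
import math

def chol(a):
    """Dense Cholesky factorisation: the O(n^3) bottleneck of GP likelihoods."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                l[i][j] = math.sqrt(a[i][i] - s)
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return l

def gp_log_lik(y, x, sill=1.0, length_scale=1.0, nugget=1e-6):
    """Zero-mean Gaussian log-likelihood with an exponential covariance
    on 1-D locations x. Every evaluation pays the O(n^3) factorisation,
    which is exactly the cost MCMC pays at each iteration."""
    n = len(y)
    cov = [[sill * math.exp(-abs(x[i] - x[j]) / length_scale)
            + (nugget if i == j else 0.0) for j in range(n)] for i in range(n)]
    l = chol(cov)                                  # O(n^3)
    z = []                                         # solve L z = y, O(n^2)
    for i in range(n):
        z.append((y[i] - sum(l[i][k] * z[k] for k in range(i))) / l[i][i])
    logdet = 2.0 * sum(math.log(l[i][i]) for i in range(n))
    return -0.5 * (n * math.log(2 * math.pi) + logdet + sum(v * v for v in z))
```

ALM sidesteps this factorisation via the sparse-precision GMRF likelihood; BSS shrinks n before the factorisation is attempted.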
|
87 |
Structural Performance Evaluation of Actual Bridges by means of Modal Parameter-based FE Model Updating
Zhou, Xin, 23 March 2022 (has links)
Kyoto University / New system, course-based doctorate / Doctor of Engineering / Kō No. 23858 / Eng. Doc. No. 4945 / Shinsei||Kō||1772 (University Library) / Department of Civil and Earth Resources Engineering, Graduate School of Engineering, Kyoto University / (Chief examiner) Professor KIM Chul-Woo, Professor TAKAHASHI Yoshikazu, Associate Professor KITANE Yasuo / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
|
88 |
Exact Markov chain Monte Carlo and Bayesian linear regression
Bentley, Jason Phillip, January 2009 (has links)
In this work we investigate the use of perfect sampling methods within the context of Bayesian linear regression. We focus on inference problems related to the marginal posterior model probabilities. Model averaged inference for the response and Bayesian variable selection are considered. Perfect sampling is an alternate form of Markov chain Monte Carlo that generates exact sample points from the posterior of interest. This approach removes the need for burn-in assessment faced by traditional MCMC methods. For model averaged inference, we find the monotone Gibbs coupling from the past (CFTP) algorithm is the preferred choice. This requires the predictor matrix be orthogonal, preventing variable selection, but allowing model averaging for prediction of the response. Exploring choices of priors for the parameters in the Bayesian linear model, we investigate sufficiency for monotonicity assuming Gaussian errors. We discover that a number of other sufficient conditions exist, besides an orthogonal predictor matrix, for the construction of a monotone Gibbs Markov chain. Requiring an orthogonal predictor matrix, we investigate new methods of orthogonalizing the original predictor matrix. We find that a new method using the modified Gram-Schmidt orthogonalization procedure performs comparably with existing transformation methods, such as generalized principal components. Accounting for the effect of using an orthogonal predictor matrix, we discover that inference using model averaging for in-sample prediction of the response is comparable between the original and orthogonal predictor matrix. The Gibbs sampler is then investigated for sampling when using the original predictor matrix and the orthogonal predictor matrix. We find that a hybrid method, using a standard Gibbs sampler on the orthogonal space in conjunction with the monotone CFTP Gibbs sampler, provides the fastest computation and convergence to the posterior distribution. 
We conclude the hybrid approach should be used when the monotone Gibbs CFTP sampler becomes impractical, due to large backwards coupling times. We demonstrate large backwards coupling times occur when the sample size is close to the number of predictors, or when hyper-parameter choices increase model competition. The monotone Gibbs CFTP sampler should be taken advantage of when the backwards coupling time is small. For the problem of variable selection we turn to the exact version of the independent Metropolis-Hastings (IMH) algorithm. We reiterate the notion that the exact IMH sampler is redundant, being a needlessly complicated rejection sampler. We then determine a rejection sampler is feasible for variable selection when the sample size is close to the number of predictors and using Zellner’s prior with a small value for the hyper-parameter c. Finally, we use the example of simulating from the posterior of c conditional on a model to demonstrate how the use of an exact IMH view-point clarifies how the rejection sampler can be adapted to improve efficiency.
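The coupling-from-the-past construction underlying the monotone Gibbs CFTP sampler can be sketched on a toy monotone chain (a lazy reflecting walk on {0, ..., m}, an example chosen here, not the thesis's regression sampler): run coupled chains from the top and bottom states from ever further in the past, reusing the same randomness, until they have coalesced by time zero. The coalesced value is an exact draw from the stationary distribution, with the backwards coupling time playing the role discussed above.

```python
import random

def update(state, u, m):
    """Monotone stochastic recursion on {0, ..., m}: one shared uniform u
    drives every copy of the chain (down if u < 0.5, else up, reflecting)."""
    if u < 0.5:
        return max(state - 1, 0)
    return min(state + 1, m)

def monotone_cftp(m=10, seed=42):
    """Propp-Wilson CFTP with a doubling backwards coupling time.

    Because the update is monotone, sandwiching the whole state space
    between the bottom (0) and top (m) chains suffices: when they meet,
    every start state has coalesced."""
    rng = random.Random(seed)
    us = []       # shared randomness, ordered from the far past to time -1
    t = 1
    while True:
        while len(us) < t:
            us.insert(0, rng.random())   # extend the stream further back;
                                         # already-used draws keep their times
        lo, hi = 0, m
        for u in us:                     # run both chains up to time 0
            lo, hi = update(lo, u, m), update(hi, u, m)
        if lo == hi:
            return lo                    # exact stationary draw
        t *= 2                           # go further into the past
```

For this doubly stochastic walk the stationary distribution is uniform on {0, ..., m}, which gives a cheap sanity check on the sampler.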
|
89 |
Auxiliary variable Markov chain Monte Carlo methods
Graham, Matthew McKenzie, January 2018 (has links)
Markov chain Monte Carlo (MCMC) methods are a widely applicable class of algorithms for estimating integrals in statistical inference problems. A common approach in MCMC methods is to introduce additional auxiliary variables into the Markov chain state and perform transitions in the joint space of target and auxiliary variables. In this thesis we consider novel methods for using auxiliary variables within MCMC methods to allow approximate inference in otherwise intractable models and to improve sampling performance in models exhibiting challenging properties such as multimodality. We first consider the pseudo-marginal framework. This extends the Metropolis–Hastings algorithm to cases where we only have access to an unbiased estimator of the density of the target distribution. The resulting chains can sometimes show ‘sticking’ behaviour, where long series of proposed updates are rejected. Further, the algorithms can be difficult to tune, and it is not immediately clear how to generalise the approach to alternative transition operators. We show that if the auxiliary variables used in the density estimator are included in the chain state it is possible to use new transition operators such as those based on slice-sampling algorithms within a pseudo-marginal setting. This auxiliary pseudo-marginal approach leads to easier-to-tune methods and is often able to improve sampling efficiency over existing approaches. As a second contribution we consider inference in probabilistic models defined via a generative process with the probability density of the outputs of this process only implicitly defined. The approximate Bayesian computation (ABC) framework allows inference in such models when conditioning on the values of observed model variables by making the approximation that generated observed variables are ‘close’ rather than exactly equal to observed data.
Although it makes the inference problem more tractable, the approximation error introduced in ABC methods can be difficult to quantify, and standard algorithms tend to perform poorly when conditioning on high dimensional observations. This often requires further approximation by reducing the observations to lower dimensional summary statistics. We show how including all of the random variables used in generating model outputs as auxiliary variables in a Markov chain state can allow the use of more efficient and robust MCMC methods such as slice sampling and Hamiltonian Monte Carlo (HMC) within an ABC framework. In some cases this can allow inference when conditioning on the full set of observed values when standard ABC methods require reduction to lower dimensional summaries for tractability. Further we introduce a novel constrained HMC method for performing inference in a restricted class of differentiable generative models which allows conditioning the generated observed variables to be arbitrarily close to observed data while maintaining computational tractability. As a final topic we consider the use of an auxiliary temperature variable in MCMC methods to improve exploration of multimodal target densities and allow estimation of normalising constants. Existing approaches such as simulated tempering and annealed importance sampling use temperature variables which take on only a discrete set of values. The performance of these methods can be sensitive to the number and spacing of the temperature values used, and the discrete nature of the temperature variable prevents the use of gradient-based methods such as HMC to update the temperature alongside the target variables. We introduce new MCMC methods which instead use a continuous temperature variable. This both removes the need to tune the choice of discrete temperature values and allows the temperature variable to be updated jointly with the target variables within a HMC method.
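The pseudo-marginal mechanism described in the first contribution can be sketched on a toy target whose density we pretend is only available through an unbiased estimator. The lognormal noise model and all parameter values are illustrative; the essential detail is that the current density estimate is stored and recycled along with the state, which is what keeps the chain exact despite the noise.

```python
import math, random

def density_estimate(theta, rng, n_particles=8):
    """Unbiased but noisy estimate of an unnormalised N(0, 1) density:
    the exact value is multiplied by an average of positive, unit-mean
    lognormal weights (E[exp(N(0,1) - 0.5)] = 1)."""
    noise = sum(math.exp(rng.gauss(0.0, 1.0) - 0.5) for _ in range(n_particles))
    return math.exp(-0.5 * theta * theta) * noise / n_particles

def pseudo_marginal_mh(n_steps=20000, seed=3):
    """Pseudo-marginal Metropolis-Hastings: accept with the ratio of density
    *estimates*. A lucky (high) estimate is kept until it is replaced, which
    is the source of the 'sticking' behaviour noted above."""
    rng = random.Random(seed)
    theta, est = 0.0, density_estimate(0.0, rng)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, 1.0)
        prop_est = density_estimate(prop, rng)
        if rng.random() < prop_est / est:
            theta, est = prop, prop_est    # estimate recycled with the state
        chain.append(theta)
    return chain
```

The auxiliary pseudo-marginal view of the thesis treats the randomness inside `density_estimate` as part of the chain state, which is what opens the door to slice-sampling and other transition operators.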
|
90 |
Programming language semantics as a foundation for Bayesian inference
Szymczak, Marcin, January 2018 (has links)
Bayesian modelling, in which our prior belief about the distribution on model parameters is updated by observed data, is a popular approach to statistical data analysis. However, writing specific inference algorithms for Bayesian models by hand is time-consuming and requires significant machine learning expertise. Probabilistic programming promises to make Bayesian modelling easier and more accessible by letting the user express a generative model as a short computer program (with random variables), leaving inference to the generic algorithm provided by the compiler of the given language. However, it is not easy to design a probabilistic programming language correctly and define the meaning of programs expressible in it. Moreover, the inference algorithms used by probabilistic programming systems usually lack formal correctness proofs and bugs have been found in some of them, which limits the confidence one can have in the results they return. In this work, we apply ideas from the areas of programming language theory and statistics to show that probabilistic programming can be a reliable tool for Bayesian inference. The first part of this dissertation concerns the design, semantics and type system of a new, substantially enhanced version of the Tabular language. Tabular is a schema-based probabilistic language, which means that instead of writing a full program, the user only has to annotate the columns of a schema with expressions generating corresponding values. By adopting this paradigm, Tabular aims to be user-friendly, but this unusual design also makes it harder to define the syntax and semantics correctly and reason about the language. We define the syntax of a version of Tabular extended with user-defined functions and pseudo-deterministic queries, design a dependent type system for this language and endow it with a precise semantics. 
We also extend Tabular with a concise formula notation for hierarchical linear regressions, define the type system of this extended language and show how to reduce it to pure Tabular. In the second part of this dissertation, we present the first correctness proof for a Metropolis-Hastings sampling algorithm for a higher-order probabilistic language. We define a measure-theoretic semantics of the language by means of an operationally-defined density function on program traces (sequences of random variables) and a map from traces to program outputs. We then show that the distribution of samples returned by our algorithm (a variant of “Trace MCMC” used by the Church language) matches the program semantics in the limit.
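The trace-based view in the second part can be shown in miniature: a "program" maps a trace of latent draws to an output, a density is defined on traces, and a single-site MH kernel resamples one trace entry from its prior, so the prior terms cancel and the acceptance ratio reduces to a likelihood ratio. The one-variable model and its parameters are a toy example constructed here, not the Tabular or Church semantics themselves.

```python
import math, random

def program(trace):
    """A tiny generative 'program' whose trace is its list of latent draws:
    here just x ~ Normal(0, 1); the program's output is x itself."""
    return trace[0]

def log_lik(trace, y_obs=2.0):
    """Log-likelihood of the observation y_obs ~ Normal(x, 1)."""
    return -0.5 * (y_obs - trace[0]) ** 2

def trace_mh(n_steps=30000, seed=7):
    """Single-site trace MH: resample one trace entry from its prior.
    The prior proposal cancels against the prior factor of the trace
    density, leaving a likelihood-ratio acceptance test."""
    rng = random.Random(seed)
    trace = [rng.gauss(0.0, 1.0)]
    out = []
    for _ in range(n_steps):
        prop = [rng.gauss(0.0, 1.0)]       # fresh prior draw at the site
        log_ratio = log_lik(prop) - log_lik(trace)
        if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
            trace = prop
        out.append(program(trace))
    return out
```

For this conjugate toy model the posterior over the output is Normal(1, 0.5) in closed form, so the sampler's correctness is easy to check empirically, which is the kind of semantic match (in the limit) that the thesis proves for a full higher-order language.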
|