71
Critical behavior for the model of random spatial permutations. Kerl, John R. January 2010
We examine a phase transition in a model of random spatial permutations which originates in a study of the interacting Bose gas. Permutations are weighted according to point positions; the low-temperature onset of the appearance of arbitrarily long cycles is connected to the phase transition of Bose-Einstein condensates. In our simplified model, point positions are held fixed on the fully occupied cubic lattice and interactions are expressed as Ewens-type weights on cycle lengths of permutations. The critical temperature of the transition to long cycles depends on an interaction-strength parameter α. For weak interactions, the shift in critical temperature is expected to be linear in α with constant of linearity c. Using Markov chain Monte Carlo methods and finite-size scaling, we find c = 0.618 ± 0.086. This finding matches a similar analytical result of Ueltschi and Betz. We also examine the mean longest cycle length as a fraction of the number of sites in long cycles, recovering an earlier result of Shepp and Lloyd for non-spatial permutations.
The plan of this paper is as follows. We begin with a non-technical discussion of the historical context of the project, along with a mention of alternative approaches. Relevant previous works are cited, annotating the bibliography. The random-cycle approach to the BEC problem requires a model of spatial permutations. This model is of independent probabilistic interest; it is developed mathematically, without reference to the Bose gas. Our Markov chain Monte Carlo algorithms for sampling from the random-cycle distribution (the swap-only, swap-and-reverse, band-update, and worm algorithms) are presented, compared, and contrasted. Finite-size scaling techniques are used to obtain information about infinite-volume quantities from finite-volume computational data.
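As a minimal illustration of the sampling machinery, the sketch below implements a swap-only Metropolis update for random spatial permutations on a small periodic cubic lattice. It keeps only the quadratic displacement term of the energy and omits the Ewens-type cycle-weight term (and hence α) for brevity; the lattice size, temperature value, and sweep count are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

# Hedged sketch of the swap-only Metropolis update for random spatial
# permutations on a fully occupied periodic cubic lattice.  Only the
# quadratic displacement energy H(pi) = (T/4) * sum_x |x - pi(x)|^2 is
# kept; the Ewens-type cycle-weight term (parameter alpha) is omitted.

L = 8                                     # lattice side length (illustrative)
T = 6.0                                   # temperature-like parameter
rng = np.random.default_rng(0)

sites = np.array([(i, j, k) for i in range(L)
                  for j in range(L) for k in range(L)], dtype=float)
n = len(sites)
pi = np.arange(n)                         # pi[x] = image of site x

def disp2(x, tgt):
    """Squared distance |site_x - site_tgt|^2 with periodic wrap-around."""
    d = np.abs(sites[x] - sites[tgt])
    d = np.minimum(d, L - d)              # periodic boundary conditions
    return float(np.sum(d * d))

def sweep():
    """One Monte Carlo sweep of swap-only proposals."""
    for _ in range(n):
        x, y = rng.integers(n), rng.integers(n)
        if x == y:
            continue
        # Propose exchanging the images of x and y; Metropolis accept/reject.
        dH = (T / 4.0) * (disp2(x, pi[y]) + disp2(y, pi[x])
                          - disp2(x, pi[x]) - disp2(y, pi[y]))
        if dH <= 0 or rng.random() < np.exp(-dH):
            pi[x], pi[y] = pi[y], pi[x]

for _ in range(50):
    sweep()
print("mean squared displacement:", np.mean([disp2(x, pi[x]) for x in range(n)]))
```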
72
Latent Conditional Individual-Level Models and Related Topics in Infectious Disease Modeling. Deeth, Lorna E. 15 October 2012
Individual-level models are a class of complex statistical models, often fitted within a Bayesian Markov chain Monte Carlo framework, that have been effectively used to model the spread of infectious diseases. The ability of these models to incorporate individual-level covariate information allows them to be highly flexible, and to account for such characteristics as population heterogeneity. However, these models can be subject to inherent uncertainties often found in infectious disease data. As well, their complex nature can lead to a significant computational expense when fitting these models to epidemic data, particularly for large populations.
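For a concrete reference point, the sketch below computes the infection probability at the heart of a simple spatial ILM. The power-law kernel, the parameter values (alpha, beta), and the coordinates are illustrative assumptions in the spirit of Deardon et al. (2010), not the fitted models of this thesis; in the latent conditional variant described next, parameters such as alpha would depend on a discrete group indicator.

```python
import numpy as np

# Hedged sketch of a spatial ILM: susceptible individual i escapes
# infection at time t with probability exp(-alpha * sum_j d_ij^-beta),
# summing over currently infectious individuals j.

def infection_prob(coords, infectious, i, alpha=0.5, beta=2.0):
    """P(susceptible i becomes infected at time t) under a power-law kernel."""
    d = np.linalg.norm(coords[infectious] - coords[i], axis=1)
    return 1.0 - np.exp(-alpha * np.sum(d ** (-beta)))

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(50, 2))     # 50 individuals on a plane
infectious = np.array([3, 17, 42])            # indices of infectives at time t
print(infection_prob(coords, infectious, i=0))
```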
An individual-level model that incorporates a latent grouping structure into the modeling procedure, based on some heterogeneous population characteristics, is investigated. The dependence of this latent conditional individual-level model on a discrete latent grouping variable alleviates the need for explicit, although possibly unreliable, covariate information. A simulation study is used to assess the posterior predictive ability of this model, in comparison to individual-level models that utilize the full covariate information, or that assume population homogeneity. These models are also applied to data from the 2001 UK foot-and-mouth disease epidemic.
When attempting to compare complex models fitted within the Bayesian framework, the identification of appropriate model selection tools would be beneficial. The use of the deviance information criterion (DIC) as a model comparison tool, particularly for the latent conditional individual-level models, is investigated. A simulation study is used to compare five variants of the DIC, and the ability of each DIC variant to select the true model is determined.
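For orientation, the following sketch computes the classical DIC of Spiegelhalter et al. (2002) from MCMC output; the toy likelihood and draws are illustrative, and the five variants compared in the thesis differ in the details of this construction.

```python
import numpy as np

# Hedged sketch of the standard DIC computation from MCMC output:
# DIC = Dbar + pD, with pD = Dbar - D(theta_bar).

def dic(theta_draws, loglik):
    """theta_draws: (S, p) posterior samples; loglik(theta) -> log-likelihood."""
    deviances = np.array([-2.0 * loglik(th) for th in theta_draws])
    d_bar = deviances.mean()                   # posterior mean deviance
    d_at_mean = -2.0 * loglik(theta_draws.mean(axis=0))
    p_d = d_bar - d_at_mean                    # effective number of parameters
    return d_bar + p_d

# Toy example: normal likelihood for data y with unknown mean.
y = np.array([1.2, 0.8, 1.1, 0.9])
ll = lambda th: -0.5 * np.sum((y - th[0]) ** 2)
draws = np.random.default_rng(2).normal(1.0, 0.1, size=(1000, 1))
print(dic(draws, ll))
```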
Finally, an investigation into methods to reduce the computational burden associated with individual-level models is carried out, based on an individual-level model that also incorporates population heterogeneity through a discrete grouping variable. A simulation study is used to determine the effect of reducing the overall population size by aggregating the data into spatial clusters. Reparameterized individual-level models, accounting for the aggregation effect, are fitted to the aggregated data. The effect of data aggregation on the ability of two reparameterized individual-level models to identify a covariate effect, as well as on the computational expense of the model fitting procedure, is explored.
73
Modified Individual-Level Models of Infectious Disease. Fang, Mingying 15 September 2011
Infectious disease models can be used to understand the mechanisms by which diseases spread and thus may effectively guide control policies for potential outbreaks. Deardon et al. (2010) introduced a highly flexible class of individual-level models (ILMs). Parameter estimates for ILMs can be obtained by means of Markov chain Monte Carlo (MCMC) methods within a Bayesian framework. Here, we introduce an extended form of the ILM described by Deardon et al. (2010) and compare this model with the original ILM in the context of a simple spatial system. The two spatial ILMs are fitted to 70 simulated data sets and a real data set on tomato spotted wilt virus (TSWV) in pepper plants (Hughes et al., 1997). We find that the modified ILM is more flexible than the original ILM and may fit some data sets better.
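To indicate how simulated data sets of this kind can be generated, here is a hedged sketch of forward-simulating an SI epidemic from a spatial ILM with a power-law kernel; the parameter values, fixed infectious period, and population layout are illustrative assumptions, not the study's simulation settings.

```python
import numpy as np

# Hedged sketch of simulating epidemic data from a spatial ILM with a
# power-law kernel, under an SI process with a fixed infectious period.

rng = np.random.default_rng(3)
n, alpha, beta, infectious_period = 100, 0.4, 2.0, 3
coords = rng.uniform(0, 10, size=(n, 2))
inf_time = np.full(n, np.inf)
inf_time[rng.integers(n)] = 0                  # one initial infective

for t in range(1, 20):
    infective = np.where((inf_time < t) & (t <= inf_time + infectious_period))[0]
    susceptible = np.where(inf_time == np.inf)[0]
    if len(infective) == 0:
        break
    for i in susceptible:
        d = np.linalg.norm(coords[infective] - coords[i], axis=1)
        p = 1.0 - np.exp(-alpha * np.sum(d ** (-beta)))
        if rng.random() < p:
            inf_time[i] = t                    # i becomes infectious at t

print("final size:", int(np.sum(np.isfinite(inf_time))))
```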
74
Issues of Computational Efficiency and Model Approximation for Spatial Individual-Level Infectious Disease Models. Dobbs, Angie 06 January 2012
Individual-level models (ILMs) are models that can use the spatio-temporal nature of disease data to capture the disease dynamics. Parameter estimation is usually done via Markov chain Monte Carlo (MCMC) methods, but correlation between model parameters negatively affects MCMC mixing. Introducing a normalization constant to alleviate the correlation results in MCMC convergence over fewer iterations; however, it increases computation time.
It is important that model fitting is done as efficiently as possible. An upper-truncated distance kernel is introduced to quicken the computation of the likelihood, but this causes a loss in goodness-of-fit.
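The following sketch illustrates the truncation idea: the infectious pressure on a susceptible individual is computed once with the full power-law kernel and once with contributions beyond a cutoff distance dropped. All names and parameter values are illustrative assumptions; in a real implementation the cutoff would be paired with a spatial index so that far-away pairs are never visited at all.

```python
import numpy as np

# Hedged sketch of an upper-truncated distance kernel: pairs farther
# apart than a cutoff delta contribute nothing to the infectious
# pressure, so the likelihood sum can skip them (speed vs. fit).

def pressure_full(coords, infective, i, alpha, beta):
    d = np.linalg.norm(coords[infective] - coords[i], axis=1)
    return alpha * np.sum(d ** (-beta))

def pressure_truncated(coords, infective, i, alpha, beta, delta):
    d = np.linalg.norm(coords[infective] - coords[i], axis=1)
    d = d[d <= delta]                     # drop pairs beyond the cutoff
    return alpha * np.sum(d ** (-beta))

rng = np.random.default_rng(4)
coords = rng.uniform(0, 50, size=(2000, 2))
infective = rng.choice(2000, size=200, replace=False)
print(pressure_full(coords, infective, 0, 0.3, 2.0))
print(pressure_truncated(coords, infective, 0, 0.3, 2.0, delta=10.0))
```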
The normalization constant and upper-truncated distance kernel are evaluated as components in various ILMs via a simulation study. The normalization constant is found not to be worthwhile: the benefit of reduced correlation does not outweigh the increased computation time. The upper-truncated distance kernel reduces computation time but worsens model fit as the truncation distance decreases. / Studies have been funded by OMAFRA & NSERC, with computing equipment provided by CSI.
75
An Analysis of the 3-He Proportional Counter Data from the Sudbury Neutrino Observatory Using Pulse Shape Discrimination. Martin, Ryan 22 September 2009
This thesis presents an independent analysis of the data from the 3-He-filled proportional counters deployed during the third phase of the Sudbury Neutrino Observatory (SNO). These counters were installed in SNO's heavy water to independently detect neutrons produced by the neutral current interaction of 8-B solar neutrinos with deuterium. Previously published results from this phase were based on a spectral analysis of the energy deposited in the proportional counters. The work in this thesis introduces a new observable based on the time-profile of the ionization in the counters. The inclusion of this observable in a maximum-likelihood fit increases the potential to distinguish neutrons from backgrounds, which are primarily due to alpha-decays. The combination of this new observable with the energy deposited in the counters results in a more accurate determination of the number of neutrons.
The analysis presented in this thesis was limited to one third of the data from the proportional counters, uniformly distributed in time. This limitation was imposed to reconcile different time-lines between the submission of this thesis, a thorough review of this work by the SNO Collaboration, and results from an independent analysis that is still underway. Analysis of this reduced data set determined that 398 ± 29 (stat.) ± 9 (sys.) neutrons were detected. This number compares well to the previous analysis of the data, based only on a spectral analysis of the deposited energy, which determined that 410 ± 44 (stat.) ± 9 (sys.) neutrons were detected in the same time period. The analysis presented here has led to a substantial increase in statistical accuracy. Assuming that the statistical accuracy increases similarly when the full data set is analyzed, the results from this thesis would bring the uncertainty in the 8-B solar neutrino flux down to 6.8% from 8.5% in the previously published results. The work from the thesis is intended to be included in a future analysis of the SNO data and will result in a more accurate measurement of the total flux of solar neutrinos from 8-B, as well as reduce the uncertainty in the θ_12 neutrino oscillation mixing angle. / Thesis (Ph.D., Physics, Engineering Physics and Astronomy), Queen's University, 2009.
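To make the fitting idea concrete, the sketch below carries out an extended maximum-likelihood fit for two event yields using two observables per event (a deposited-energy-like and a time-profile-like variable). The Gaussian event densities and every parameter value are illustrative assumptions, not SNO's actual probability distributions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hedged sketch of an extended maximum-likelihood fit combining two
# observables per event to separate "neutron" events from "alpha"
# backgrounds.  All densities and numbers are toy assumptions.

rng = np.random.default_rng(5)
neutrons = np.column_stack([rng.normal(0.76, 0.05, 400),   # energy-like
                            rng.normal(1.0, 0.2, 400)])    # risetime-like
alphas = np.column_stack([rng.normal(1.5, 0.6, 300),
                          rng.normal(2.0, 0.5, 300)])
events = np.vstack([neutrons, alphas])

def pdf_n(x):   # assumed neutron density in (energy, risetime)
    return norm.pdf(x[:, 0], 0.76, 0.05) * norm.pdf(x[:, 1], 1.0, 0.2)

def pdf_a(x):   # assumed alpha density
    return norm.pdf(x[:, 0], 1.5, 0.6) * norm.pdf(x[:, 1], 2.0, 0.5)

def nll(params):
    """Extended negative log-likelihood in the yields (N_n, N_a)."""
    n_n, n_a = params
    if n_n < 0 or n_a < 0:
        return np.inf
    dens = n_n * pdf_n(events) + n_a * pdf_a(events)
    return (n_n + n_a) - np.sum(np.log(dens))

fit = minimize(nll, x0=[350.0, 350.0], method="Nelder-Mead")
print("fitted yields:", fit.x)        # expect roughly (400, 300)
```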
76
Monitoring and Improving Markov Chain Monte Carlo Convergence by Partitioning. VanDerwerken, Douglas January 2015
Since Bayes' Theorem was first published in 1763, many have argued for the Bayesian paradigm on purely philosophical grounds. For much of this time, however, practical implementation of Bayesian methods was limited to a relatively small class of "conjugate" or otherwise computationally tractable problems. With the development of Markov chain Monte Carlo (MCMC) and improvements in computers over the last few decades, the number of problems amenable to Bayesian analysis has increased dramatically. The ensuing spread of Bayesian modeling has led to new computational challenges as models become more complex and higher-dimensional, and both parameter sets and data sets become orders of magnitude larger. This dissertation introduces methodological improvements to deal with these challenges. These include methods for enhanced convergence assessment, for parallelization of MCMC, for estimation of the convergence rate, and for estimation of normalizing constants. A recurring theme across these methods is the utilization of one or more chain-dependent partitions of the state space. / Dissertation
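A minimal sketch of the partitioning idea follows: split the state space into regions and compare how often independent chains visit each region, since agreeing occupancy frequencies are consistent with convergence to a common stationary distribution. The quantile-based partition and the toy chains are illustrative, not the dissertation's constructions.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hedged sketch of a partition-based convergence check: count visits of
# several chains to each region of a state-space partition and test
# whether the occupancy rates agree across chains.

rng = np.random.default_rng(6)
chains = [rng.normal(0.0, 1.0, 5000),        # i.i.d. draws stand in for
          rng.normal(0.0, 1.0, 5000),        # post-burn-in MCMC output
          rng.normal(0.0, 1.0, 5000)]

# Build a partition from pooled quantiles, then count visits per region.
edges = np.quantile(np.concatenate(chains), [0.2, 0.4, 0.6, 0.8])
counts = np.array([np.bincount(np.digitize(c, edges), minlength=5)
                   for c in chains])

# Chi-squared test of homogeneity; a large p-value means the chains
# occupy the regions at similar rates.
stat, p, _, _ = chi2_contingency(counts)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")
```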
77
MCMC Estimation of Classical and Dynamic Switching and Mixture Models. Frühwirth-Schnatter, Sylvia January 1998
In the present paper we discuss Bayesian estimation of a very general model class where the distribution of the observations is assumed to depend on a latent mixture or switching variable taking values in a discrete state space. This model class covers e.g. finite mixture modelling, Markov switching autoregressive modelling, and dynamic linear models with switching. Joint Bayesian estimation of all latent variables, model parameters, and parameters determining the probability law of the switching variable is carried out by a new Markov chain Monte Carlo method called permutation sampling. Estimation of switching and mixture models is known to be faced with identifiability problems, as switching and mixture models are identifiable only up to permutations of the indices of the states. For a Bayesian analysis the posterior has to be constrained in such a way that identifiability constraints are fulfilled. The permutation sampler is designed to sample efficiently from the constrained posterior, by first sampling from the unconstrained posterior (which often can be done in a convenient multimove manner) and then applying a suitable permutation if the identifiability constraint is violated. We present simple conditions on the prior which ensure that this method is a valid Markov chain Monte Carlo method (that is, invariance, irreducibility and aperiodicity hold). Three case studies are presented, including finite mixture modelling of fetal lamb data, Markov switching autoregressive modelling of the U.S. quarterly real GDP data, and modelling the U.S./U.K. real exchange rate by a dynamic linear model with Markov switching heteroscedasticity. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
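The relabelling step of such a constrained sampler can be sketched as follows for a K-component mixture: draw from the unconstrained posterior, then apply the permutation that enforces an ordering constraint on the component means. The toy draws below are illustrative; in practice they would come from a multimove Gibbs sweep.

```python
import numpy as np

# Hedged sketch of the permutation (relabelling) step: restore the
# identifiability constraint mu_1 < mu_2 < mu_3 by permuting labels,
# applying the SAME permutation to every component-specific parameter.

rng = np.random.default_rng(7)
K, S = 3, 1000
mu = rng.normal([0.0, 2.0, 5.0], 0.3, size=(S, K))   # toy posterior draws
w = rng.dirichlet([5.0, 3.0, 2.0], size=S)           # component weights

# Simulate label switching: a random label permutation per draw.
switch = rng.permuted(np.tile(np.arange(K), (S, 1)), axis=1)
mu = np.take_along_axis(mu, switch, axis=1)
w = np.take_along_axis(w, switch, axis=1)

# Relabelling: sort by the means, permute the weights accordingly.
perm = np.argsort(mu, axis=1)
mu_c = np.take_along_axis(mu, perm, axis=1)
w_c = np.take_along_axis(w, perm, axis=1)
print("constraint holds:", bool(np.all(np.diff(mu_c, axis=1) > 0)))
```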
78
Monte Carlo Integration Using Importance Sampling and Gibbs Sampling. Hörmann, Wolfgang; Leydold, Josef January 2005
To evaluate the expectation of a simple function with respect to a complicated multivariate density, Monte Carlo integration has become the main technique. Gibbs sampling and importance sampling are the most popular methods for this task. In this contribution we propose a new, simple, general-purpose importance sampling procedure. In a simulation study we compare the performance of this method with the performance of Gibbs sampling and of importance sampling using a vector of independent variates. It turns out that the new procedure is much better than independent importance sampling; up to dimension five it is also better than Gibbs sampling. The simulation results indicate that for higher dimensions Gibbs sampling is superior. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
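As a point of reference, the sketch below implements plain self-normalized importance sampling with a heavy-tailed multivariate t proposal; the target, proposal, and integrand are illustrative assumptions, not the paper's proposal construction.

```python
import numpy as np
from scipy.stats import multivariate_normal, multivariate_t

# Hedged sketch of self-normalized importance sampling for E_f[h(X)]:
# draw from a tractable proposal g and weight each draw by f/g.

rng = np.random.default_rng(8)
dim, S = 3, 100_000
f = multivariate_normal(mean=np.zeros(dim))          # stand-in target
g = multivariate_t(loc=np.zeros(dim), shape=np.eye(dim), df=5)

x = g.rvs(size=S, random_state=rng)
w = np.exp(f.logpdf(x) - g.logpdf(x))                # importance weights
h = np.sum(x ** 2, axis=1)                           # h(x) = |x|^2, E = dim

est = np.sum(w * h) / np.sum(w)                      # self-normalized estimate
ess = np.sum(w) ** 2 / np.sum(w ** 2)                # effective sample size
print(f"estimate = {est:.3f} (true {dim}), ESS = {ess:.0f}")
```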
79
The Economic Role of Jumps and Recovery Rates in the Market for Corporate Default Risk. Schneider, Paul; Sögner, Leopold; Veza, Tanja January 2010
Using an extensive cross-section of US corporate CDS, this paper offers an economic understanding of implied loss given default (LGD) and jumps in default risk. We formulate and underpin empirical stylized facts about CDS spreads, which are then reproduced in our affine intensity-based jump-diffusion model. Implied LGD is well identified, with obligors possessing substantial tangible assets expected to recover more. Sudden increases in the default risk of investment-grade obligors are higher relative to speculative grade. The probability of structural migration to default is low for investment-grade and heavily regulated obligors, because investors fear distress arriving through rare but devastating events. (authors' abstract)
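To fix ideas, the following sketch simulates a default-intensity path from an affine jump-diffusion (a square-root diffusion plus compound-Poisson jumps), the broad class of dynamics the model builds on; all parameter values are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

# Hedged sketch of an Euler-discretized default intensity: square-root
# (CIR-type) diffusion with exponentially distributed upward jumps.

rng = np.random.default_rng(9)
kappa, theta, sigma = 0.5, 0.02, 0.08     # mean reversion, level, vol
jump_rate, jump_mean = 0.3, 0.01          # jumps/year, mean jump size
dt, n_steps = 1.0 / 252, 252 * 5          # daily steps over five years

lam = np.empty(n_steps + 1)
lam[0] = theta
for t in range(n_steps):
    diff = (kappa * (theta - lam[t]) * dt
            + sigma * np.sqrt(max(lam[t], 0.0) * dt) * rng.standard_normal())
    jump = rng.exponential(jump_mean) if rng.random() < jump_rate * dt else 0.0
    lam[t + 1] = max(lam[t] + diff + jump, 0.0)   # keep intensity nonnegative

# Survival probability over the horizon via the pathwise hazard integral.
print("P(survive 5y | path) ~", np.exp(-np.sum(lam[:-1]) * dt))
```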
80
Bayesian Variable Selection in Spatial Autoregressive Models. Crespo Cuaresma, Jesus; Piribauer, Philipp 07 1900
This paper compares the performance of Bayesian variable selection approaches for spatial autoregressive models. We present two alternative approaches that can be implemented in a straightforward way using Gibbs sampling methods and that allow us to deal with the problem of model uncertainty in spatial autoregressive models in a flexible and computationally efficient manner. In a simulation study we show that the variable selection approaches tend to outperform existing Bayesian model averaging techniques both in terms of in-sample predictive performance and computational efficiency.
(authors' abstract) / Series: Department of Economics Working Paper Series
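One ingredient of such a Gibbs-based variable-selection scheme can be sketched as follows: a spike-and-slab update of the inclusion indicators, shown here for a plain linear regression with the spatial lag term of the SAR model omitted for brevity. The priors, the known error variance, and the data are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of a Gibbs update for spike-and-slab inclusion
# indicators in a linear regression (SAR spatial lag omitted).
# Slab prior: beta ~ N(0, tau * I); error variance treated as known.

rng = np.random.default_rng(10)
n, p = 200, 5
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, 0.0, -0.8, 0.0, 0.0])
y = X @ beta_true + 0.5 * rng.standard_normal(n)

tau_slab, sigma2, prior_inc = 1.0, 0.25, 0.5
gamma = np.ones(p, dtype=bool)                 # inclusion indicators

def log_marginal(active):
    """Log marginal likelihood of y for the active set, dropping terms
    that are constant across models."""
    k = int(np.sum(active))
    if k == 0:
        return 0.0
    Xa = X[:, active]
    V = np.linalg.inv(Xa.T @ Xa / sigma2 + np.eye(k) / tau_slab)
    b = Xa.T @ y / sigma2
    return 0.5 * (np.linalg.slogdet(V)[1] - k * np.log(tau_slab) + b @ V @ b)

for _ in range(200):                           # Gibbs sweeps over indicators
    for j in range(p):
        logp = []
        for val in (False, True):
            gamma[j] = val
            prior = np.log(prior_inc if val else 1.0 - prior_inc)
            logp.append(log_marginal(gamma) + prior)
        # Bernoulli draw from the full conditional of gamma_j.
        gamma[j] = rng.random() < 1.0 / (1.0 + np.exp(logp[0] - logp[1]))

print("included:", gamma)                      # expect variables 0 and 2
```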