141

Effects of sample size, ability distribution, and the length of Markov Chain Monte Carlo burn-in chains on the estimation of item and testlet parameters

Orr, Aline Pinto 25 July 2011 (has links)
Item Response Theory (IRT) models are the basis of modern educational measurement. In order to increase testing efficiency, modern tests make ample use of groups of questions associated with a single stimulus (testlets). This violates the IRT assumption of local independence. However, a set of measurement models, testlet response theory (TRT), has been developed to address such dependency issues. This study investigates the effects of varying sample sizes and Markov Chain Monte Carlo burn-in chain lengths on the accuracy of estimation of a TRT model’s item and testlet parameters. The following outcome measures are examined: descriptive statistics, Pearson product-moment correlations between known and estimated parameters, and indices of measurement effectiveness for final parameter estimates. / text
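A minimal sketch (not the study's code) of one of the outcome measures described: the Pearson correlation between known and estimated parameters as the discarded burn-in grows. The synthetic AR(1) "chains" below merely stand in for real MCMC output from a testlet model; all names and values are illustrative assumptions.

```python
# Toy illustration: effect of burn-in length on recovery of known item difficulties.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_items, n_iter = 40, 5000
true_difficulty = rng.normal(0.0, 1.0, n_items)      # known generating values

# Synthetic chains: start at 3.0 (a poor initial value) and drift toward the truth.
chains = np.empty((n_items, n_iter))
chains[:, 0] = 3.0
for t in range(1, n_iter):
    chains[:, t] = true_difficulty + 0.95 * (chains[:, t - 1] - true_difficulty) \
                   + rng.normal(0.0, 0.3, n_items)

for burn_in in (0, 100, 500, 2000):
    post_mean = chains[:, burn_in:].mean(axis=1)      # estimate after discarding burn-in
    r, _ = pearsonr(true_difficulty, post_mean)
    bias = np.mean(post_mean - true_difficulty)
    print(f"burn-in={burn_in:5d}  r={r:.3f}  mean bias={bias:+.3f}")
```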
142

Critical behavior for the model of random spatial permutations

Kerl, John R. January 2010 (has links)
We examine a phase transition in a model of random spatial permutations which originates in a study of the interacting Bose gas. Permutations are weighted according to point positions; the low-temperature onset of the appearance of arbitrarily long cycles is connected to the phase transition of Bose-Einstein condensates. In our simplified model, point positions are held fixed on the fully occupied cubic lattice and interactions are expressed as Ewens-type weights on cycle lengths of permutations. The critical temperature of the transition to long cycles depends on an interaction-strength parameter α. For weak interactions, the shift in critical temperature is expected to be linear in α with constant of linearity c. Using Markov chain Monte Carlo methods and finite-size scaling, we find c = 0.618 ± 0.086. This finding matches a similar analytical result of Ueltschi and Betz. We also examine the mean longest cycle length as a fraction of the number of sites in long cycles, recovering an earlier result of Shepp and Lloyd for non-spatial permutations. The plan of this paper is as follows. We begin with a non-technical discussion of the historical context of the project, along with a mention of alternative approaches. Relevant previous works are cited, thus annotating the bibliography. The random-cycle approach to the BEC problem requires a model of spatial permutations. This model is of probabilistic interest in its own right; it is developed mathematically, without reference to the Bose gas. Our Markov-chain Monte Carlo algorithms for sampling from the random-cycle distribution - the swap-only, swap-and-reverse, band-update, and worm algorithms - are presented, compared, and contrasted. Finite-size scaling techniques are used to obtain information about infinite-volume quantities from finite-volume computational data.
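A hedged sketch of the simplest of the samplers named above, the swap-only update: propose exchanging the images of two sites and accept with the Metropolis ratio. The weight used here — a Gaussian spatial term plus a simple per-cycle factor theta — is an illustrative Ewens-type stand-in, not the thesis's exact interaction; the lattice is small and boundaries are ignored.

```python
# Minimal swap-only Metropolis sketch for random spatial permutations (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)
L, beta, theta = 4, 0.8, 0.5                       # lattice side, temperature parameter, cycle weight
sites = np.array([(i, j, k) for i in range(L) for j in range(L) for k in range(L)], float)
N = len(sites)
perm = np.arange(N)                                # start from the identity permutation

def spatial_energy(p):
    d = sites - sites[p]
    return np.sum(d * d) / (4.0 * beta)

def n_cycles(p):
    seen, count = np.zeros(N, bool), 0
    for start in range(N):
        if not seen[start]:
            count += 1
            x = start
            while not seen[x]:
                seen[x] = True
                x = p[x]
    return count

def log_weight(p):                                 # recomputed from scratch for clarity, not speed
    return -spatial_energy(p) + n_cycles(p) * np.log(theta)

cur = log_weight(perm)
for sweep in range(2000):
    x, y = rng.integers(N, size=2)
    perm[x], perm[y] = perm[y], perm[x]            # propose swapping the images of x and y
    new = log_weight(perm)
    if np.log(rng.random()) < new - cur:
        cur = new                                  # accept
    else:
        perm[x], perm[y] = perm[y], perm[x]        # reject: undo the swap

print("cycles in final permutation:", n_cycles(perm))
```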
143

Latent Conditional Individual-Level Models and Related Topics in Infectious Disease Modeling

Deeth, Lorna E. 15 October 2012 (has links)
Individual-level models are a class of complex statistical models, often fitted within a Bayesian Markov chain Monte Carlo framework, that have been effectively used to model the spread of infectious diseases. The ability of these models to incorporate individual-level covariate information allows them to be highly flexible, and to account for such characteristics as population heterogeneity. However, these models can be subject to inherent uncertainties often found in infectious disease data. As well, their complex nature can lead to a significant computational expense when fitting these models to epidemic data, particularly for large populations. An individual-level model that incorporates a latent grouping structure into the modeling procedure, based on some heterogeneous population characteristics, is investigated. The dependence of this latent conditional individual-level model on a discrete latent grouping variable alleviates the need for explicit, although possibly unreliable, covariate information. A simulation study is used to assess the posterior predictive ability of this model, in comparison to individual-level models that utilize the full covariate information, or that assume population homogeneity. These models are also applied to data from the 2001 UK foot-and-mouth disease epidemic. When attempting to compare complex models fitted within the Bayesian framework, the identification of appropriate model selection tools would be beneficial. The use of the deviance information criterion (DIC) as a model comparison tool, particularly for the latent conditional individual-level models, is investigated. A simulation study is used to compare five variants of the DIC, and the ability of each DIC variant to select the true model is determined. Finally, an investigation into methods to reduce the computational burden associated with individual-level models is carried out, based on an individual-level model that also incorporates population heterogeneity through a discrete grouping variable. A simulation study is used to determine the effect of reducing the overall population size by aggregating the data into spatial clusters. Reparameterized individual-level models, accounting for the aggregation effect, are fitted to the aggregated data. The effect of data aggregation on the ability of two reparameterized individual-level models to identify a covariate effect, as well as on the computational expense of the model fitting procedure, is explored.
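For reference, a sketch of the standard DIC calculation — one of the variants such a comparison could include — with DIC = Dbar + pD and pD = Dbar - D(theta_bar). The log-likelihood and the toy model below are placeholders, not the thesis's ILMs.

```python
# Hedged DIC sketch: posterior mean deviance plus effective number of parameters.
import numpy as np

def dic(posterior_draws, log_likelihood, data):
    """posterior_draws: (n_draws, n_params) array of MCMC output."""
    deviances = np.array([-2.0 * log_likelihood(theta, data) for theta in posterior_draws])
    d_bar = deviances.mean()                              # posterior mean deviance
    d_at_mean = -2.0 * log_likelihood(posterior_draws.mean(axis=0), data)
    p_d = d_bar - d_at_mean                               # effective number of parameters
    return d_bar + p_d, p_d

# Toy usage: normal model with unknown mean and known unit variance.
if __name__ == "__main__":
    data = np.random.default_rng(2).normal(1.0, 1.0, 50)
    def log_lik(theta, y):
        return -0.5 * np.sum((y - theta[0]) ** 2) - 0.5 * len(y) * np.log(2 * np.pi)
    draws = np.random.default_rng(3).normal(data.mean(), 1 / np.sqrt(len(data)), (1000, 1))
    print(dic(draws, log_lik, data))
```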
144

MODIFIED INDIVIDUAL-LEVEL MODELS OF INFECTIOUS DISEASE

Fang, Mingying 15 September 2011 (has links)
Infectious disease models can be used to understand mechanisms of the spread of diseases and thus may effectively guide control policies for potential outbreaks. Deardon et al. (2010) introduced a class of individual-level models (ILMs) which are highly flexible. Parameter estimates for ILMs can be achieved by means of Markov chain Monte Carlo (MCMC) methods within a Bayesian framework. Here, we introduce an extended form of the ILM described by Deardon et al. (2010) and compare this model with the original ILM in the context of a simple spatial system. The two spatial ILMs are fitted to 70 simulated data sets and a real data set on tomato spotted wilt virus (TSWV) in pepper plants (Hughes et al., 1997). We find that the modified ILM is more flexible than the original ILM and may fit some data sets better.
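A minimal sketch of the infection probability in a simple spatial ILM of the Deardon et al. (2010) form, P(i infected at t) = 1 - exp{ -alpha * sum over infectious j of d_ij^(-beta) - eps }; the parameter names alpha, beta and eps and the toy layout are illustrative assumptions, not the paper's specification.

```python
# Simple spatial ILM infection probability (illustrative parameterisation).
import numpy as np

def infection_prob(i, coords, infectious_idx, alpha, beta, eps=1e-6):
    d = np.linalg.norm(coords[infectious_idx] - coords[i], axis=1)   # distances to infectious hosts
    return 1.0 - np.exp(-(alpha * np.sum(d ** (-beta)) + eps))

# Toy usage on a handful of plants laid out on a grid.
coords = np.array([[0, 0], [0, 1], [1, 0], [2, 2], [3, 1]], float)
print(infection_prob(3, coords, infectious_idx=[0, 1], alpha=0.2, beta=2.0))
```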
145

Issues of Computational Efficiency and Model Approximation for Spatial Individual-Level Infectious Disease Models

Dobbs, Angie 06 January 2012 (has links)
Individual-level models (ILMs) are models that can use the spatial-temporal nature of disease data to capture the disease dynamics. Parameter estimation is usually done via Markov chain Monte Carlo (MCMC) methods, but correlation between model parameters negatively affects MCMC mixing. Introducing a normalization constant to alleviate the correlation results in MCMC convergence over fewer iterations; however, this negatively affects computation time. It is important that model fitting is done as efficiently as possible. An upper-truncated distance kernel is introduced to quicken the computation of the likelihood, but this causes a loss in goodness-of-fit. The normalization constant and upper-truncated distance kernel are evaluated as components in various ILMs via a simulation study. The normalization constant is seen not to be worthwhile, as the effect of increased computation time is not outweighed by the reduced correlation. The upper-truncated distance kernel reduces computation time but worsens model fit as the truncation distance decreases. / Studies have been funded by OMAFRA & NSERC, with computing equipment provided by CSI.
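A hedged sketch of the upper-truncated distance kernel idea: contributions beyond a truncation distance delta are set to zero, so each individual's likelihood contribution involves only its near neighbours, which can be precomputed. The power-law form and all names below are illustrative assumptions.

```python
# Truncated distance kernel and neighbour pre-pruning (illustrative only).
import numpy as np

def truncated_kernel(d, beta, delta):
    """Power-law kernel d^(-beta), truncated to 0 beyond distance delta."""
    return np.where(d <= delta, d ** (-beta), 0.0)

def neighbour_lists(coords, delta):
    """For each individual, the indices of others within the truncation radius."""
    diff = coords[:, None, :] - coords[None, :, :]
    d = np.linalg.norm(diff, axis=2)
    return [np.where((d[i] <= delta) & (d[i] > 0))[0] for i in range(len(coords))]

coords = np.random.default_rng(4).uniform(0, 10, (200, 2))
nbrs = neighbour_lists(coords, delta=1.5)
print("kernel at d = 1.0 and 3.0:", truncated_kernel(np.array([1.0, 3.0]), beta=2.0, delta=1.5))
print("average neighbours kept:", np.mean([len(n) for n in nbrs]))
```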
146

An Analysis of the 3-He Proportional Counter Data from the Sudbury Neutrino Observatory Using Pulse Shape Discrimination

Martin, Ryan 22 September 2009 (has links)
This thesis presents an independent analysis of the data from 3-He-filled proportional counters from the third phase of the Sudbury Neutrino Observatory (SNO). These counters were deployed in SNO's heavy water to independently detect neutrons produced by the neutral current interaction of 8-B solar neutrinos with deuterium. Previously published results from this phase were based on a spectral analysis of the energy deposited in the proportional counters. The work in this thesis introduces a new observable based on the time-profile of the ionization in the counters. The inclusion of this observable in a maximum-likelihood fit increases the potential to distinguish neutrons from backgrounds which are primarily due to alpha-decays. The combination of this new observable with the energy deposited in the counters results in a more accurate determination of the number of neutrons. The analysis presented in this thesis was limited to one third of the data from the proportional counters, uniformly distributed in time. This limitation was imposed to reconcile different time-lines between the submission of this thesis, a thorough review of this work by the SNO Collaboration and results from an independent analysis that is still underway. Analysis of this reduced data set determined that 398 +/- 29 (stat.) +/- 9 (sys.) neutrons were detected. This number compares well with the previous analysis of the data, based only on a spectral analysis of the deposited energy, which determined that 410 +/- 44 (stat.) +/- 9 (sys.) neutrons were detected in the same time period. The analysis presented here has led to a substantial increase in the statistical accuracy. Assuming that the statistical accuracy will increase when the full data set is analyzed, the results from this thesis would bring the uncertainty in the 8-B solar neutrino flux down to 6.8% from 8.5% in the previously published results. The work from the thesis is intended to be included in a future analysis of the SNO data and will result in a more accurate measurement of the total flux of solar neutrinos from 8-B as well as reduce the uncertainty in the $\theta_{12}$ neutrino oscillation mixing angle. / Thesis (Ph.D, Physics, Engineering Physics and Astronomy) -- Queen's University, 2009-09-16 15:56:28.195
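An illustrative sketch only of the kind of two-observable extended maximum-likelihood fit described — separating a "neutron" component from an "alpha" background using energy and a pulse-shape variable. The PDF shapes, parameter values and variable names below are invented stand-ins, not SNO's actual distributions.

```python
# Extended maximum-likelihood fit with two observables (all shapes and values are assumptions).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, expon

rng = np.random.default_rng(5)

# Simulated data: 400 "neutrons" and 800 "alphas".
e_n = rng.normal(0.76, 0.05, 400);  w_n = rng.normal(0.20, 0.04, 400)
e_a = rng.exponential(1.5, 800);    w_a = rng.normal(0.45, 0.10, 800)
energy = np.concatenate([e_n, e_a]); width = np.concatenate([w_n, w_a])

def pdf_n(e, w):   # assumed neutron PDF (factorised for simplicity)
    return norm.pdf(e, 0.76, 0.05) * norm.pdf(w, 0.20, 0.04)

def pdf_a(e, w):   # assumed alpha-background PDF
    return expon.pdf(e, scale=1.5) * norm.pdf(w, 0.45, 0.10)

def neg_log_like(counts):
    n_n, n_a = counts
    if n_n <= 0 or n_a <= 0:
        return np.inf
    dens = n_n * pdf_n(energy, width) + n_a * pdf_a(energy, width)
    return (n_n + n_a) - np.sum(np.log(dens + 1e-300))   # extended likelihood

fit = minimize(neg_log_like, x0=[300.0, 900.0], method="Nelder-Mead")
print("fitted neutron count:", fit.x[0], " fitted alpha count:", fit.x[1])
```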
147

Decision Making for Information Security Investments

Yeo, M. Lisa Unknown Date
No description available.
148

Méthodes de Monte Carlo EM et approximations particulaires : Application à la calibration d'un modèle de volatilité stochastique.

09 December 2013 (has links) (PDF)
This thesis pursues a twofold perspective on the joint use of sequential Monte Carlo (SMC) methods and the Expectation-Maximisation (EM) algorithm in the setting of hidden Markov models whose unobserved component exhibits a Markovian dependence structure of order greater than 1. We begin with a concise account of the theoretical foundations of the two statistical concepts in Chapters 1 and 2, which are devoted to them. We then turn, in Chapter 3, to the simultaneous use of the two concepts in the usual setting where the dependence structure is of order 1. The contribution of SMC methods in this work lies in their ability to efficiently approximate bounded conditional functionals, in particular filtering and smoothing quantities, in a nonlinear, non-Gaussian setting. The EM algorithm, in turn, is motivated by the presence of both observable and unobservable (or partially observed) variables in hidden Markov models, and in particular in the stochastic volatility models studied here. Having presented the EM algorithm and SMC methods, together with some of their properties, in Chapters 1 and 2 respectively, we illustrate the two statistical tools through the calibration of a stochastic volatility model. This application is carried out on exchange rates as well as several stock indices in Chapter 3. We conclude that chapter with a slight departure from the canonical stochastic volatility model used, together with Monte Carlo simulations of the resulting model. Finally, in Chapters 4 and 5, we endeavour to provide the theoretical and practical foundations for extending sequential Monte Carlo methods, in particular particle filtering and smoothing, when the Markovian structure is more pronounced. As an illustration, we give the example of a degenerate stochastic volatility model, an approximation of which exhibits such a dependence property.
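A minimal bootstrap particle filter for the canonical stochastic volatility model h_t = mu + phi*(h_{t-1} - mu) + sigma*eta_t, y_t = exp(h_t/2)*eps_t — the kind of SMC approximation of filtering quantities used inside a Monte Carlo EM calibration. The parameter values below are illustrative assumptions, not the thesis's estimates.

```python
# Bootstrap particle filter for a stochastic volatility model (illustrative parameter values).
import numpy as np

def particle_filter(y, mu, phi, sigma, n_part=1000, seed=0):
    rng = np.random.default_rng(seed)
    T = len(y)
    h = rng.normal(mu, sigma / np.sqrt(1 - phi**2), n_part)            # stationary initial draw
    loglik, filt_mean = 0.0, np.empty(T)
    for t in range(T):
        h = mu + phi * (h - mu) + sigma * rng.standard_normal(n_part)  # propagate particles
        w = np.exp(-0.5 * (y[t] ** 2) * np.exp(-h) - 0.5 * h)          # N(y; 0, exp(h)) up to a constant
        loglik += np.log(w.mean()) - 0.5 * np.log(2 * np.pi)
        w /= w.sum()
        filt_mean[t] = np.sum(w * h)                                   # filtered log-volatility mean
        h = h[rng.choice(n_part, n_part, p=w)]                         # multinomial resampling
    return loglik, filt_mean

# Toy usage on simulated returns.
rng = np.random.default_rng(6)
T, mu, phi, sigma = 300, -1.0, 0.97, 0.15
h_true = np.empty(T); h_true[0] = mu
for t in range(1, T):
    h_true[t] = mu + phi * (h_true[t-1] - mu) + sigma * rng.standard_normal()
y = np.exp(h_true / 2) * rng.standard_normal(T)
print("approx log-likelihood:", particle_filter(y, mu, phi, sigma)[0])
```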
149

Monitoring and Improving Markov Chain Monte Carlo Convergence by Partitioning

VanDerwerken, Douglas January 2015 (has links)
Since Bayes' Theorem was first published in 1762, many have argued for the Bayesian paradigm on purely philosophical grounds. For much of this time, however, practical implementation of Bayesian methods was limited to a relatively small class of "conjugate" or otherwise computationally tractable problems. With the development of Markov chain Monte Carlo (MCMC) and improvements in computers over the last few decades, the number of problems amenable to Bayesian analysis has increased dramatically. The ensuing spread of Bayesian modeling has led to new computational challenges as models become more complex and higher-dimensional, and both parameter sets and data sets become orders of magnitude larger. This dissertation introduces methodological improvements to deal with these challenges. These include methods for enhanced convergence assessment, for parallelization of MCMC, for estimation of the convergence rate, and for estimation of normalizing constants. A recurring theme across these methods is the utilization of one or more chain-dependent partitions of the state space. / Dissertation
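A hedged, simplified sketch of the partition idea (not the dissertation's actual method): partition the state space using quantiles of the pooled draws, then compare each chain's occupancy of the partition elements; large between-chain discrepancies suggest the sampler has not yet converged.

```python
# Partition-based convergence check across parallel chains (simplified illustration).
import numpy as np

def occupancy_by_chain(chains, n_cells=10):
    """chains: (n_chains, n_draws) array of a scalar summary of the state."""
    pooled = chains.ravel()
    edges = np.quantile(pooled, np.linspace(0, 1, n_cells + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    occ = np.array([np.histogram(c, bins=edges)[0] / c.size for c in chains])
    return occ                                  # rows sum to 1; compare rows across chains

rng = np.random.default_rng(7)
good = rng.normal(0, 1, (4, 5000))                        # four well-mixed chains
stuck = np.vstack([rng.normal(0, 1, (3, 5000)),
                   rng.normal(3, 1, (1, 5000))])          # one chain stuck in another region
for name, chains in [("well-mixed", good), ("one chain stuck", stuck)]:
    occ = occupancy_by_chain(chains)
    print(name, " max between-chain spread per cell:",
          np.round((occ.max(axis=0) - occ.min(axis=0)).max(), 3))
```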
150

MCMC Estimation of Classical and Dynamic Switching and Mixture Models

Frühwirth-Schnatter, Sylvia January 1998 (has links) (PDF)
In the present paper we discuss Bayesian estimation of a very general model class where the distribution of the observations is assumed to depend on a latent mixture or switching variable taking values in a discrete state space. This model class covers e.g. finite mixture modelling, Markov switching autoregressive modelling and dynamic linear models with switching. Joint Bayesian estimation of all latent variables, model parameters and parameters determining the probability law of the switching variable is carried out by a new Markov Chain Monte Carlo method called permutation sampling. Estimation of switching and mixture models is known to be faced with identifiability problems, as such models are identifiable only up to permutations of the indices of the states. For a Bayesian analysis the posterior has to be constrained in such a way that identifiability constraints are fulfilled. The permutation sampler is designed to sample efficiently from the constrained posterior, by first sampling from the unconstrained posterior - which often can be done in a convenient multimove manner - and then by applying a suitable permutation, if the identifiability constraint is violated. We present simple conditions on the prior which ensure that this method is a valid Markov Chain Monte Carlo method (that is, invariance, irreducibility and aperiodicity hold). Three case studies are presented, including finite mixture modelling of fetal lamb data, Markov switching autoregressive modelling of the U.S. quarterly real GDP data, and modelling the U.S./U.K. real exchange rate by a dynamic linear model with Markov switching heteroscedasticity. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
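A sketch of the permutation step for a finite normal mixture (illustrative only, assuming an ordering constraint on the component means): after an unconstrained draw, if mu_1 < mu_2 < ... < mu_K is violated, relabel the components, permuting means, variances, weights and latent allocations consistently.

```python
# Label permutation enforcing an identifiability constraint on component means.
import numpy as np

def enforce_ordering(mu, sigma2, weights, alloc):
    """Apply the label permutation that sorts the component means."""
    order = np.argsort(mu)                 # permutation of component indices
    relabel = np.empty_like(order)
    relabel[order] = np.arange(len(mu))    # map: old label -> new label
    return mu[order], sigma2[order], weights[order], relabel[alloc]

# Toy usage: a draw whose labels violate the constraint.
mu = np.array([2.1, -0.5, 0.8])
sigma2 = np.array([0.9, 1.2, 0.7])
weights = np.array([0.2, 0.5, 0.3])
alloc = np.array([0, 1, 2, 1, 0])          # latent allocations of five observations
print(enforce_ordering(mu, sigma2, weights, alloc))
```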
