  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

A statistical framework for estimating output-specific efficiencies

Gstach, Dieter January 2003 (has links) (PDF)
This paper presents a statistical framework for estimating output-specific efficiencies for the 2-output case based upon a DEA frontier estimate. The key to the approach is the concept of a target output-mix. Since target output-mixes of firms are usually unobserved, they are modelled as missing data. Using this concept, the relevant data-generating process can be formulated. The resulting likelihood function is analytically intractable, so a data-augmented Bayesian approach, adapted to the present purpose, is proposed for estimation. Some implementation issues are discussed, leading to an empirical Bayes setup with data-informed priors. A proof of scale invariance is provided. (author's abstract) / Series: Department of Economics Working Paper Series
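The data-augmentation idea the abstract describes, treating unobserved quantities as missing data inside a sampler, can be sketched on a toy problem. This is not the author's DEA model; the normal-mean setup, seed, and all names here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a normal sample with unknown mean; the first 20 values are missing.
mu_true = 2.0
y = rng.normal(mu_true, 1.0, size=100)
missing = np.zeros(100, dtype=bool)
missing[:20] = True
y_obs = y[~missing]

# Data-augmented Gibbs sampler: alternate between (1) imputing the missing
# values given the current mu, and (2) drawing mu given the completed data
# (flat prior, known unit variance).
n_iter, mu = 2000, 0.0
draws = []
y_full = y.copy()
for t in range(n_iter):
    y_full[missing] = rng.normal(mu, 1.0, size=missing.sum())      # impute
    mu = rng.normal(y_full.mean(), 1.0 / np.sqrt(len(y_full)))     # update mu
    draws.append(mu)

post_mean = np.mean(draws[500:])   # posterior mean after discarding burn-in
```

Because the imputed values carry no information of their own, the chain's posterior for `mu` concentrates around the observed-data mean, which is the standard sanity check for a data-augmentation scheme.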
52

Annealing and Tempering for Sampling and Counting

Bhatnagar, Nayantara 09 July 2007 (has links)
The Markov chain Monte Carlo (MCMC) method has been widely used in practice since the 1950s in areas such as biology, statistics, and physics. However, it is only in the last few decades that powerful techniques for obtaining rigorous performance guarantees on the running time have been developed. Today, with only a few notable exceptions, most known algorithms for approximately uniform sampling and approximate counting rely on the MCMC method. This thesis focuses on algorithms that combine MCMC with simulated annealing, an algorithm from optimization, for sampling and counting problems.

Annealing is a heuristic for finding the global optimum of a function over a large search space. It has recently emerged as a powerful technique used in conjunction with the MCMC method for sampling problems, for example in the estimation of the permanent and in algorithms for computing the volume of a convex body. We examine other applications of annealing to sampling problems, as well as scenarios in which it fails to converge in polynomial time.

We consider the problem of randomly generating 0-1 contingency tables. This is a well-studied problem in statistics as well as in the theory of random graphs, since it is equivalent to generating a random bipartite graph with a prescribed degree sequence. Previously, the only algorithm known for all degree sequences worked by reduction to approximating the permanent of a 0-1 matrix. We give a direct and more efficient combinatorial algorithm that relies on simulated annealing.

Simulated tempering is a variant of annealing used for sampling, in which a temperature parameter is randomly raised or lowered during the simulation. The idea is that by extending the state space of the Markov chain to a polynomial number of progressively smoother distributions, parameterized by temperature, the chain can cross bottlenecks in the original space that cause slow mixing. We show that simulated tempering mixes torpidly for the 3-state ferromagnetic Potts model on the complete graph. Moreover, we disprove the conventional belief that tempering can slow down fixed-temperature algorithms by at most a polynomial factor in the number of temperatures, showing that it can converge at a rate slower by at least an exponential factor.
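The tempering mechanism the abstract describes can be sketched on a toy bimodal target. The ladder of inverse temperatures `betas`, the proposal scale, and the target are illustrative choices, and the sketch omits the pseudo-prior (normalizing-constant) corrections a careful tempering implementation needs; it only demonstrates how flattened distributions let the chain cross a bottleneck:

```python
import numpy as np

rng = np.random.default_rng(1)

# Bimodal target: mixture of normals at -3 and +3. Raising the temperature
# (lowering beta) flattens the density so the chain can cross the middle.
def log_target(x):
    return np.logaddexp(-0.5 * (x - 3) ** 2, -0.5 * (x + 3) ** 2)

betas = np.linspace(0.05, 1.0, 8)   # inverse temperatures: smooth -> target

x, k = 0.0, len(betas) - 1          # current state and temperature level
samples = []
for t in range(20000):
    # Metropolis move in x at the current temperature.
    prop = x + rng.normal(0.0, 1.0)
    if np.log(rng.random()) < betas[k] * (log_target(prop) - log_target(x)):
        x = prop
    # Propose moving one temperature level up or down.
    j = k + (1 if rng.random() < 0.5 else -1)
    if 0 <= j < len(betas):
        if np.log(rng.random()) < (betas[j] - betas[k]) * log_target(x):
            k = j
    if k == len(betas) - 1:         # record only at the target temperature
        samples.append(x)

samples = np.array(samples)
```

A fixed-temperature random-walk chain started at one mode essentially never visits the other within this budget; the tempered chain visits both.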
53

Bayesian multivariate spatial models and their applications

Song, Joon Jin 15 November 2004 (has links)
Univariate hierarchical Bayes models are being vigorously researched for use in disease mapping, engineering, geology, and ecology. This dissertation shows how the models can also be used to build model-based risk maps for area-based roadway traffic crashes. County-level vehicle crash records and roadway data from Texas are used to illustrate the method. A potential extension that uses univariate hierarchical models to develop network-based risk maps is also discussed. Several Bayesian multivariate spatial models for simultaneously estimating the traffic crash rates from different types of crashes are then developed. The specific class of spatial models considered is the conditional autoregressive (CAR) model. The univariate CAR model is generalized for several multivariate cases. A general theorem for each case is provided to ensure that the posterior distribution is proper under an improper flat prior. The performance of the various multivariate spatial models is compared using a Bayesian information criterion. Markov chain Monte Carlo (MCMC) computational techniques are used for model parameter estimation and statistical inference. These models are illustrated and compared again with the Texas crash data. There are many directions in which this study can be extended; this dissertation concludes with a short summary of the research and recommends several promising extensions.
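The univariate CAR prior at the core of such models can be sketched concretely. The grid size, `tau`, and `rho` below are illustrative, not values from the dissertation; the proper CAR precision matrix Q = tau (D - rho W) is positive definite for |rho| < 1 because it is strictly diagonally dominant:

```python
import numpy as np

rng = np.random.default_rng(2)

# 4x4 lattice with rook neighbours, standing in for adjacent counties.
n = 4
idx = lambda i, j: i * n + j
W = np.zeros((n * n, n * n))        # symmetric adjacency matrix
for i in range(n):
    for j in range(n):
        for di, dj in [(1, 0), (0, 1)]:
            if i + di < n and j + dj < n:
                W[idx(i, j), idx(i + di, j + dj)] = 1
                W[idx(i + di, j + dj), idx(i, j)] = 1

D = np.diag(W.sum(axis=1))          # diagonal matrix of neighbour counts
tau, rho = 1.0, 0.9                 # precision scale, spatial dependence
Q = tau * (D - rho * W)             # proper CAR precision matrix

# Draw one spatially correlated field: if Q = L L^T, solving L^T x = z
# with z ~ N(0, I) gives x ~ N(0, Q^{-1}).
L = np.linalg.cholesky(Q)
x = np.linalg.solve(L.T, rng.normal(size=n * n))
```

Stacking several such fields with a cross-covariance structure is, roughly, what the multivariate generalizations in the dissertation formalize.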
54

Probability calculations of orthologous genes

Lagervik Öster, Alice January 2005 (has links)
The aim of this thesis is to formulate and implement an algorithm that calculates the probability of two genes being orthologs, given a gene tree and a species tree. To do this, reconciliations between the gene tree and the species tree are used. A birth-death process is used to model the evolution and to calculate the orthology probability. The birth and death parameters are estimated with Markov chain Monte Carlo (MCMC). An MCMC framework for probability calculations of reconciliations written by Arvestad et al. (2003) is used. Rules for orthologous reconciliations are developed and implemented to calculate the probability of the reconciliations that have two genes as orthologs. The rules were integrated with the Arvestad et al. (2003) framework, and the algorithm was then validated and tested.
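A minimal sketch of the birth-death machinery underlying such calculations (not the reconciliation algorithm itself): for a linear birth-death process with per-lineage birth rate `lam` greater than death rate `mu`, a single starting lineage goes extinct with probability mu/lam, a classical fact a small Gillespie-style simulation can check. All rates and caps here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)

def goes_extinct(lam, mu, tmax=200.0, cap=200):
    # Simulate lineage count until extinction (0) or escape (cap).
    n, t = 1, 0.0
    while 0 < n < cap:
        rate = n * (lam + mu)                  # total event rate
        t += rng.exponential(1.0 / rate)       # time to next event
        if t > tmax:
            return False
        n += 1 if rng.random() < lam / (lam + mu) else -1
    return n == 0

lam, mu = 2.0, 1.0
trials = 1000
p_ext = sum(goes_extinct(lam, mu) for _ in range(trials)) / trials
```

With lam = 2 and mu = 1 the extinction probability is 1/2; a population that reaches the cap is treated as surviving, since extinction from there is vanishingly unlikely.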
55

Ideology and interests : a hierarchical Bayesian approach to spatial party preferences

Mohanty, Peter Cushner 04 December 2013 (has links)
This paper presents a spatial utility model of support for multiple political parties. The model includes a "valence" term, which I reparameterize to include both party competence and the voters' key sociodemographic concerns. The paper shows how this spatial utility model can be interpreted as a hierarchical model using data from the 2009 European Elections Study. I estimate this model via Bayesian Markov chain Monte Carlo (MCMC) using a block Gibbs sampler and show that the model can capture broad European-wide trends while allowing for significant amounts of heterogeneity. This approach, however, which assumes a normal dependent variable, is only able to partially reproduce the data generating process. I show that the data generating process can be reproduced more accurately with an ordered probit model. Finally, I discuss trade-offs between parsimony and descriptive richness and other practical challenges that may be encountered when building models of party support, and make recommendations for capturing the best of both approaches. / text
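The ordered probit model the abstract favours can be sketched briefly. The data-generating setup below (one covariate, three support categories, the cutpoints, and the seed) is an illustrative assumption, not the paper's specification:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(9)

def ncdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ordered_probit_loglik(beta, cuts, x, y):
    # Latent y* = x*beta + e with e ~ N(0,1); category k is observed when
    # y* falls between cutpoints c_k and c_{k+1}.
    c = np.concatenate(([-np.inf], cuts, [np.inf]))
    ll = 0.0
    for xi, yi in zip(x * beta, y):
        ll += np.log(ncdf(c[yi + 1] - xi) - ncdf(c[yi] - xi))
    return ll

# Simulated party-support data with three ordered response categories.
x = rng.normal(size=500)
ystar = 1.0 * x + rng.normal(size=500)   # true slope beta = 1
cuts = np.array([-0.5, 0.5])
y = np.digitize(ystar, cuts)             # categories 0, 1, 2
```

Evaluating this likelihood inside a Gibbs or Metropolis step is the usual route to the Bayesian version of the model.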
56

Bayesian parsimonious covariance estimation for hierarchical linear mixed models

Frühwirth-Schnatter, Sylvia, Tüchler, Regina January 2004 (has links) (PDF)
We consider a non-centered parameterization of the standard random-effects model, which is based on the Cholesky decomposition of the variance-covariance matrix. The regression-type structure of the non-centered parameterization allows us to choose a simple, conditionally conjugate normal prior on the Cholesky factor. Based on the non-centered parameterization, we search for a parsimonious variance-covariance matrix by identifying the non-zero elements of the Cholesky factors using Bayesian variable selection methods. With this method we are able to learn from the data, for each effect, whether it is random or not, and whether covariances among random effects are zero or not. An application in marketing shows a substantial reduction in the number of free elements of the variance-covariance matrix. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
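The non-centered parameterization can be illustrated directly: writing the random effects as b = C z with z ~ N(0, I) makes the entries of the Cholesky factor C act like regression coefficients, so zeroing an entry is a variable-selection decision. The particular factor below is a hypothetical example, not one estimated in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical lower-triangular Cholesky factor; the zero at position (3,1)
# plays the role of a "selected-out" element, and here it also forces zero
# covariance between the first and third random effects.
C = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.8, 0.0],
              [0.0, 0.3, 0.6]])

z = rng.normal(size=(10000, 3))   # standard normal "non-centered" draws
b = z @ C.T                       # implied random effects, b ~ N(0, C C^T)

Sigma = C @ C.T                   # implied variance-covariance matrix
Sigma_hat = np.cov(b, rowvar=False)
```

In the paper's scheme a spike-and-slab style prior decides, element by element, which entries of C the data support keeping.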
57

Accelerating Markov chain Monte Carlo via parallel predictive prefetching

Angelino, Elaine Lee 21 October 2014 (has links)
We present a general framework for accelerating a large class of widely used Markov chain Monte Carlo (MCMC) algorithms. This dissertation demonstrates that MCMC inference can be accelerated in a model of parallel computation that uses speculation to predict and complete computational work ahead of when it is known to be useful. By exploiting fast, iterative approximations to the target density, we can speculatively evaluate many potential future steps of the chain in parallel. In Bayesian inference problems, this approach can accelerate sampling from the target distribution, without compromising exactness, by exploiting subsets of data. It takes advantage of whatever parallel resources are available, but produces results exactly equivalent to standard serial execution. In the initial burn-in phase of chain evaluation, it achieves speedup over serial evaluation that is close to linear in the number of available cores. / Engineering and Applied Sciences
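The "exactly equivalent to standard serial execution" claim can be illustrated with a toy version of speculative prefetching: enumerate every accept/reject path a few steps ahead, evaluate the (nominally expensive) target at all reachable states up front, which is the work a parallel system would farm out to idle cores, then replay the realized decisions. The target, block depth, and proposal scheme below are illustrative assumptions, not the dissertation's system:

```python
import numpy as np

rng = np.random.default_rng(4)

def log_target(x):
    # Stand-in for an expensive log-density (standard normal here).
    return -0.5 * x * x

def mh_serial(x0, innovations, uniforms):
    # Plain Metropolis with fixed random inputs.
    x = x0
    for e, u in zip(innovations, uniforms):
        prop = x + e
        if np.log(u) < log_target(prop) - log_target(x):
            x = prop
    return x

def mh_prefetch(x0, innovations, uniforms, depth=3):
    # Speculative prefetching: precompute the target at every state reachable
    # within `depth` accept/reject steps, then replay the true decisions.
    x, i = x0, 0
    while i < len(innovations):
        d = min(depth, len(innovations) - i)
        reachable = {x}
        for path in range(2 ** d):          # all 2^d accept/reject patterns
            s = x
            for k in range(d):
                if (path >> k) & 1:
                    s = s + innovations[i + k]
                reachable.add(s)
        cache = {s: log_target(s) for s in reachable}   # the "parallel" work
        for k in range(d):                              # replay decisions
            prop = x + innovations[i + k]
            if np.log(uniforms[i + k]) < cache[prop] - cache[x]:
                x = prop
        i += d
    return x

innovations = rng.normal(0.0, 1.0, 50)
uniforms = rng.random(50)
```

Because both versions consume identical random inputs and perform identical floating-point operations, the prefetched chain lands on exactly the same state as the serial one.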
58

A review on computation methods for Bayesian state-space model with case studies

Yang, Mengta, 1979- 24 November 2010 (has links)
Sequential Monte Carlo (SMC) and Forward Filtering Backward Sampling (FFBS) are the two most frequently used algorithms for Bayesian state-space model analysis. Various results regarding their applicability have been either claimed or shown: it is said that SMC excels under nonlinear, non-Gaussian conditions and is less computationally expensive. On the other hand, it has been shown that with techniques such as grid approximation (Hore et al. 2010), FFBS-based methods do no worse and, though they can still be computationally expensive, provide more exact information. The purpose of this report is to compare the two methods on simulated data sets, and further to explore whether there exist clear criteria that may be used to determine a priori which method would suit a study better. / text
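The flavour of such a comparison can be sketched in a setting where the exact answer is available: a bootstrap SMC (particle) filter against the Kalman filter on a linear-Gaussian state-space model. All model settings below are illustrative, not from the report:

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear-Gaussian state space: x_t = 0.9 x_{t-1} + w_t, y_t = x_t + v_t.
T, phi, sw, sv = 100, 0.9, 1.0, 1.0
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(0.0, sw)
y = x + rng.normal(0.0, sv, T)

# Bootstrap SMC: propagate particles, weight by the likelihood, resample.
N = 5000
p = rng.normal(0.0, 1.0, N)                       # prior particles
means = []
for t in range(T):
    p = phi * p + rng.normal(0.0, sw, N)          # propagate
    logw = -0.5 * ((y[t] - p) / sv) ** 2          # weight
    w = np.exp(logw - logw.max()); w /= w.sum()
    means.append(np.sum(w * p))                   # filtered mean
    p = p[rng.choice(N, N, p=w)]                  # multinomial resample

# Exact Kalman filter for comparison.
m, P, kmeans = 0.0, 1.0, []
for t in range(T):
    m, P = phi * m, phi ** 2 * P + sw ** 2        # predict
    K = P / (P + sv ** 2)                         # gain
    m, P = m + K * (y[t] - m), (1 - K) * P        # update
    kmeans.append(m)
```

In this linear-Gaussian case the particle filter's filtered means track the exact Kalman answer up to Monte Carlo error; the interesting comparisons arise once the model leaves this tractable regime.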
59

Effects of sample size, ability distribution, and the length of Markov Chain Monte Carlo burn-in chains on the estimation of item and testlet parameters

Orr, Aline Pinto 25 July 2011 (has links)
Item Response Theory (IRT) models are the basis of modern educational measurement. In order to increase testing efficiency, modern tests make ample use of groups of questions associated with a single stimulus (testlets). This violates the IRT assumption of local independence. However, a set of measurement models, testlet response theory (TRT), has been developed to address such dependency issues. This study investigates the effects of varying sample sizes and Markov Chain Monte Carlo burn-in chain lengths on the accuracy of estimation of a TRT model’s item and testlet parameters. The following outcome measures are examined: descriptive statistics, Pearson product-moment correlations between known and estimated parameters, and indices of measurement effectiveness for final parameter estimates. / text
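Why burn-in length matters for estimation accuracy can be shown in miniature with an autocorrelated chain started far from its stationary distribution; the AR(1) surrogate, starting value, and cutoffs below are illustrative assumptions, not the study's TRT chains:

```python
import numpy as np

rng = np.random.default_rng(7)

# AR(1) chain started far from its stationary mean of 0 -- a stand-in for
# an MCMC sampler initialized at a poor starting value.
T, rho = 20000, 0.99
x = np.empty(T)
x[0] = 500.0
for t in range(1, T):
    x[t] = rho * x[t - 1] + rng.normal()

# The transient decays on a scale of about 1/(1 - rho) = 100 steps.
est_short = x[10:].mean()     # burn-in far too short: transient leaks in
est_long = x[2000:].mean()    # burn-in well past the transient
```

The too-short burn-in leaves a visible upward bias in the posterior-mean estimate, which is exactly the kind of effect the study quantifies for item and testlet parameters.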
60

Critical behavior for the model of random spatial permutations

Kerl, John R. January 2010 (has links)
We examine a phase transition in a model of random spatial permutations which originates in a study of the interacting Bose gas. Permutations are weighted according to point positions; the low-temperature onset of the appearance of arbitrarily long cycles is connected to the phase transition of Bose-Einstein condensates. In our simplified model, point positions are held fixed on the fully occupied cubic lattice and interactions are expressed as Ewens-type weights on the cycle lengths of permutations. The critical temperature of the transition to long cycles depends on an interaction-strength parameter α. For weak interactions, the shift in critical temperature is expected to be linear in α with constant of linearity c. Using Markov chain Monte Carlo methods and finite-size scaling, we find c = 0.618 ± 0.086. This finding matches a similar analytical result of Ueltschi and Betz. We also examine the mean longest cycle length as a fraction of the number of sites in long cycles, recovering an earlier result of Shepp and Lloyd for non-spatial permutations. The plan of this paper is as follows. We begin with a non-technical discussion of the historical context of the project, along with a mention of alternative approaches. Relevant previous works are cited, thus annotating the bibliography. The random-cycle approach to the BEC problem requires a model of spatial permutations. This model is of independent probabilistic interest; it is developed mathematically, without reference to the Bose gas. Our Markov chain Monte Carlo algorithms for sampling from the random-cycle distribution (the swap-only, swap-and-reverse, band-update, and worm algorithms) are presented, compared, and contrasted. Finite-size scaling techniques are used to obtain information about infinite-volume quantities from finite-volume computational data.
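The swap-only sampler mentioned above can be sketched in a stripped-down, non-spatial setting: a Metropolis chain over permutations with Ewens weights theta^{#cycles}, whose transposition moves change the cycle count by exactly one. The estimated mean number of cycles can then be checked against the exact Ewens expectation; `n`, `theta`, and the run length are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(6)

def num_cycles(perm):
    # Count cycles by following the permutation until every index is seen.
    seen = np.zeros(len(perm), dtype=bool)
    c = 0
    for i in range(len(perm)):
        if not seen[i]:
            c += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return c

# Swap-only Metropolis for the Ewens distribution P(pi) ∝ theta^{#cycles(pi)}.
n, theta = 8, 2.0
perm = np.arange(n)
counts = []
for t in range(60000):
    i, j = rng.integers(n, size=2)
    if i != j:
        cand = perm.copy()
        cand[i], cand[j] = cand[j], cand[i]    # compose with a transposition
        dC = num_cycles(cand) - num_cycles(perm)
        if rng.random() < theta ** dC:         # Metropolis acceptance
            perm = cand
    counts.append(num_cycles(perm))

est = np.mean(counts[10000:])                  # estimate after burn-in
exact = sum(theta / (theta + i) for i in range(n))   # Ewens E[#cycles]
```

The spatial model in the dissertation adds point-position weights on top of this cycle-length weighting, but the Markov chain skeleton is the same.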
