131

Annealing and Tempering for Sampling and Counting

Bhatnagar, Nayantara 09 July 2007 (has links)
The Markov Chain Monte Carlo (MCMC) method has been widely used in practice since the 1950s in areas such as biology, statistics, and physics. However, it is only in the last few decades that powerful techniques for obtaining rigorous performance guarantees on the running time have been developed. Today, with only a few notable exceptions, most known algorithms for approximately uniform sampling and approximate counting rely on the MCMC method. This thesis focuses on algorithms for sampling and counting problems that combine MCMC with simulated annealing, an algorithm from optimization. Annealing is a heuristic for finding the global optimum of a function over a large search space. It has recently emerged as a powerful technique used in conjunction with the MCMC method for sampling problems, for example in the estimation of the permanent and in algorithms for computing the volume of a convex body. We examine other applications of annealing to sampling problems, as well as scenarios in which it fails to converge in polynomial time.

We consider the problem of randomly generating 0-1 contingency tables. This is a well-studied problem in statistics as well as in the theory of random graphs, since it is equivalent to generating a random bipartite graph with a prescribed degree sequence. Previously, the only algorithm known for all degree sequences was by reduction to approximating the permanent of a 0-1 matrix. We give a direct and more efficient combinatorial algorithm which relies on simulated annealing.

Simulated tempering is a variant of annealing used for sampling, in which a temperature parameter is randomly raised or lowered during the simulation. The idea is that by extending the state space of the Markov chain to a polynomial number of progressively smoother distributions, parameterized by temperature, the chain can cross bottlenecks in the original space that cause slow mixing. We show that simulated tempering mixes torpidly for the 3-state ferromagnetic Potts model on the complete graph. Moreover, we disprove the conventional belief that tempering can slow down fixed-temperature algorithms by at most a polynomial factor in the number of temperatures: we show that it can converge at a rate that is slower by at least an exponential factor.
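The tempering dynamics described above can be sketched in a few lines of Python; the bimodal target, the three-level temperature ladder, and the equal pseudo-prior weights below are illustrative assumptions, not details from the thesis.

```python
import math
import random

def simulated_tempering(log_target, betas, n_steps, x0=0.0, step=1.0, seed=0):
    """Toy simulated tempering: the chain state is (x, k), where betas[k]
    is the current inverse temperature. Assumes equal pseudo-prior
    weights on all temperature levels."""
    rng = random.Random(seed)
    x, k = x0, 0
    samples = []
    for _ in range(n_steps):
        # Metropolis move in x under the tempered density pi(x)^betas[k].
        y = x + rng.gauss(0.0, step)
        if math.log(rng.random() + 1e-300) < betas[k] * (log_target(y) - log_target(x)):
            x = y
        # Random-walk move on the temperature ladder.
        j = k + rng.choice((-1, 1))
        if 0 <= j < len(betas):
            if math.log(rng.random() + 1e-300) < (betas[j] - betas[k]) * log_target(x):
                k = j
        if k == len(betas) - 1:   # record only at the target temperature
            samples.append(x)
    return samples
```

At high temperature (small beta) the smoothed density lets the random walk cross the bottleneck between modes; the torpid-mixing result above shows this mechanism can fail on the mean-field Potts model.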
132

Improved cement quality and grinding efficiency by means of closed mill circuit modeling

Mejeoumov, Gleb Gennadievich 15 May 2009 (has links)
Grinding of clinker is the last and most energy-consuming stage of the cement manufacturing process, drawing on average 40% of the total energy required to produce one ton of cement. During this stage, the clinker particles are substantially reduced in size to reach a certain level of fineness, which directly influences such performance characteristics of the final product as rate of hydration, water demand, and strength development. The grinding objectives tying together the energy and fineness requirements were formulated based on a review of the state of the art of clinker grinding and numerical simulation employing Markov chain theory. The literature survey revealed that not only the specific surface of the final product, but also the shape of its particle size distribution (PSD), is responsible for the cement performance characteristics. While it is feasible to engineer the desired PSD in the laboratory, process-specific recommendations on how to generate the desired PSD in an industrial mill are not available. Based on a population balance principle and a stochastic representation of particle movement within the grinding system, a Markov chain model for the circuit consisting of a tube ball mill and a high-efficiency separator was introduced through the matrices of grinding and classification. The grinding matrix was calculated using the selection and breakage functions, whereas the classification matrix was defined from the Tromp curve of the separator. The results of field experiments carried out at a pilot cement plant were used to identify the model's parameters. The retrospective process data pertaining to the operation of the pilot grinding circuit were employed to validate the model and define the process constraints.
Through numerical simulation, the relationships between the controlled (fresh feed rate; separator cut size) and observed (fineness characteristics of cement; production rate; specific energy consumption) parameters of the circuit were defined. The analysis of the simulation results allowed formulation of the process control procedures with the objectives of decreasing the specific energy consumption of the mill, maintaining the targeted specific surface area of the final product, and governing the shape of its PSD.
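The mill-plus-separator loop described above can be sketched as a fixed-point iteration on the recycle stream; the grinding matrix, recycle probabilities, and four size classes below are invented toy values, not the fitted plant parameters from the dissertation.

```python
import numpy as np

# Toy closed grinding circuit with 4 size classes, coarsest first.
# G is a column-stochastic grinding matrix (each column says where the
# mass in that size class goes after one pass through the mill); c is
# a Tromp-style probability of the separator returning each class to
# the mill. Both are illustrative, not plant data.
G = np.array([
    [0.3, 0.0, 0.0, 0.0],
    [0.4, 0.4, 0.0, 0.0],
    [0.2, 0.4, 0.5, 0.0],
    [0.1, 0.2, 0.5, 1.0],
])
c = np.array([0.9, 0.6, 0.2, 0.05])   # recycle probability per size class

def steady_state_product(feed, G, c, tol=1e-12, max_iter=10_000):
    """Iterate mill -> separator until the recycle stream converges."""
    recycle = np.zeros_like(feed)
    for _ in range(max_iter):
        mill_out = G @ (feed + recycle)
        product = (1.0 - c) * mill_out
        new_recycle = c * mill_out
        if np.linalg.norm(new_recycle - recycle) < tol:
            return product
        recycle = new_recycle
    return product
```

Because the grinding matrix is column-stochastic, mass is conserved at steady state, and the iteration converges whenever the separator returns strictly less than all of each class to the mill.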
133

Medium Access Control in Wireless Networks with Multipacket Reception and Queueing

Chen, Guan-Mei 26 July 2005 (has links)
In this thesis, we propose the predictive multicast polling scheme for medium access control in wireless networks with multipacket reception capability. We concentrate on the case in which the packet arrival process is general and the maximum queue size is finite but larger than one. We derive both analytical and simulation results. We use the theory of discrete-time Markov chains to analyze the evolution of the system state. In addition, we propose using Markov reward processes to calculate the throughput. Furthermore, we obtain the average system size, the packet blocking probability, and the average packet delay. The proposed analysis approach is applicable whether or not perfect state information is available to the controller. We also use simulation results to justify the use of the proposed approach. Our study shows that the system performance can be significantly improved with a few additional buffers in the queues. The proposed medium access control scheme can be used in single-hop wireless local area networks and multi-hop wireless mesh networks.
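The Markov-reward throughput computation mentioned above can be sketched as follows; the 3-state transition matrix and the per-state reward vector (expected packet departures per slot) are hypothetical stand-ins, not the thesis's system model.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an ergodic DTMC: solve pi P = pi with sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy birth-death chain over queue occupancy {0, 1, 2}.
P = np.array([
    [0.7, 0.3, 0.0],
    [0.4, 0.4, 0.2],
    [0.0, 0.5, 0.5],
])
r = np.array([0.0, 0.4, 0.7])   # hypothetical expected departures per slot

pi = stationary(P)
throughput = float(pi @ r)      # long-run reward rate = pi . r
```

The long-run throughput of a Markov reward process is the stationary distribution dotted with the one-step reward vector; the same recipe extends to blocking probability or delay by changing the reward.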
134

A Preemptive Channel Allocation Mechanism for GSM/GPRS Cellular Networks

Yang, Wei-Chun 23 August 2001 (has links)
In the near future, the integration of GSM and GPRS services will bring wireless personal communication networks into a new era. With the extreme growth in the number of users contending for limited resources, an efficient channel allocation scheme for GSM/GPRS users becomes very important. Currently, existing channel allocation schemes do not consider the various characteristics of traffic classes. Consequently, users cannot obtain optimal channel resources when delivering different types of traffic. In this thesis, a preemptive channel allocation mechanism is introduced for GSM/GPRS cellular networks. Based on the call requests for different types of services, we classify the traffic into GSM, real-time GPRS, and non-real-time GPRS. Two channel thresholds are defined: TGSM/GPRS is used to separate the channels between GSM and GPRS users, while TGPRS_rt is used to separate the channels between real-time and non-real-time GPRS users. Since the two thresholds can be dynamically adjusted based on the number of call requests, channel utilization is increased and fewer resources are wasted. Note that in our proposed scheme, high-priority users (i.e., GSM handoff calls) can preempt the channels being used by low-priority users (i.e., non-real-time GPRS calls). Hence, the call blocking probability of high-priority calls can be significantly reduced and their quality of service can be guaranteed as well. We build a 3-D Markov chain mathematical model to analyze our proposed channel allocation schemes. The parameters of interest include the call blocking probability, the average number of active calls, the average call completion rate, and the overall channel utilization. To verify our mathematical results, we employ the OPNET simulator to simulate the proposed schemes.
Through the mathematical and simulation results, we have observed that with preemptive channel allocation, the high-priority calls (i.e., GSM and real-time GPRS) can achieve relatively low blocking probability while only slightly increasing the blocking probability of non-real-time GPRS calls. Besides, the overall channel utilization is greatly improved due to the appropriate channel allocation.
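As a much simpler, single-class analogue of the blocking-probability analysis above, the Erlang B formula gives the blocking probability of an M/M/c/c loss system; the full thesis model is a 3-D Markov chain with thresholds and preemption, which this sketch does not attempt.

```python
import math

def erlang_b(c, offered_load):
    """Blocking probability of an M/M/c/c loss system (Erlang B formula).

    c            -- number of channels
    offered_load -- arrival rate / service rate, in Erlangs
    """
    denom = sum(offered_load ** k / math.factorial(k) for k in range(c + 1))
    return (offered_load ** c / math.factorial(c)) / denom
```

As expected, adding channels at a fixed offered load drives the blocking probability down, which is the same qualitative effect the thesis obtains for high-priority calls by letting them preempt low-priority channels.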
135

Bayesian multivariate spatial models and their applications

Song, Joon Jin 15 November 2004 (has links)
Univariate hierarchical Bayes models are being vigorously researched for use in disease mapping, engineering, geology, and ecology. This dissertation shows how the models can also be used to build model-based risk maps for area-based roadway traffic crashes. County-level vehicle crash records and roadway data from Texas are used to illustrate the method. A potential extension that uses univariate hierarchical models to develop network-based risk maps is also discussed. Several Bayesian multivariate spatial models for simultaneously estimating the crash rates of different types of crashes are then developed. The specific class of spatial models considered is the conditional autoregressive (CAR) model. The univariate CAR model is generalized to several multivariate cases. A general theorem for each case is provided to ensure that the posterior distribution is proper under an improper flat prior. The performance of the various multivariate spatial models is compared using a Bayesian information criterion. Markov chain Monte Carlo (MCMC) computational techniques are used for model parameter estimation and statistical inference. These models are illustrated and compared with the Texas crash data. There are many directions in which this study can be extended. This dissertation concludes with a short summary of this research and recommends several promising extensions.
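The dissertation's propriety theorems concern the posterior; a related, elementary fact about CAR models can be sketched numerically: the proper-CAR precision matrix Q = tau(D - rho*W) is positive definite for |rho| < 1, while the intrinsic CAR (rho = 1) is only semi-definite. The four-region adjacency below is an invented toy example.

```python
import numpy as np

# Adjacency of a hypothetical 4-county chain: W[i, j] = 1 if counties
# i and j share a border; D holds the number of neighbors per county.
W = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)
D = np.diag(W.sum(axis=1))

def car_precision(rho, tau=1.0):
    """Precision matrix of a proper CAR prior, Q = tau * (D - rho * W)."""
    return tau * (D - rho * W)
```

For |rho| < 1 the matrix is strictly diagonally dominant, hence positive definite, so the prior is a genuine Gaussian; at rho = 1 it becomes the graph Laplacian, which is singular, and the intrinsic CAR prior is improper.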
136

Probability calculations of orthologous genes

Lagervik Öster, Alice January 2005 (has links)
The aim of this thesis is to formulate and implement an algorithm that calculates the probability of two genes being orthologs, given a gene tree and a species tree. To do this, reconciliations between the gene tree and the species tree are used. A birth and death process is used to model the evolution and to calculate the orthology probability. The birth and death parameters are approximated with Markov chain Monte Carlo (MCMC). An MCMC framework for probability calculations of reconciliations, written by Arvestad et al. (2003), is used. Rules for orthologous reconciliations are developed and implemented to calculate the probability of the reconciliations that have two genes as orthologs. The rules were integrated with the Arvestad et al. (2003) framework, and the algorithm was then validated and tested.
137

Ideology and interests : a hierarchical Bayesian approach to spatial party preferences

Mohanty, Peter Cushner 04 December 2013 (has links)
This paper presents a spatial utility model of support for multiple political parties. The model includes a "valence" term, which I reparameterize to include both party competence and the voters' key sociodemographic concerns. The paper shows how this spatial utility model can be interpreted as a hierarchical model using data from the 2009 European Elections Study. I estimate this model via Bayesian Markov Chain Monte Carlo (MCMC) using a block Gibbs sampler and show that the model can capture broad European-wide trends while allowing for significant amounts of heterogeneity. This approach, however, which assumes a normal dependent variable, is only able to partially reproduce the data generating process. I show that the data generating process can be reproduced more accurately with an ordered probit model. Finally, I discuss trade-offs between parsimony and descriptive richness, as well as other practical challenges that may be encountered when building models of party support, and make recommendations for capturing the best of both approaches.
138

Bayesian parsimonious covariance estimation for hierarchical linear mixed models

Frühwirth-Schnatter, Sylvia, Tüchler, Regina January 2004 (has links) (PDF)
We consider a non-centered parameterization of the standard random-effects model, which is based on the Cholesky decomposition of the variance-covariance matrix. The regression-type structure of the non-centered parameterization allows us to choose a simple, conditionally conjugate normal prior on the Cholesky factor. Based on the non-centered parameterization, we search for a parsimonious variance-covariance matrix by identifying the non-zero elements of the Cholesky factors using Bayesian variable selection methods. With this method we are able to learn from the data, for each effect, whether it is random or not, and whether the covariances among random effects are zero or not. An application in marketing shows a substantial reduction in the number of free elements of the variance-covariance matrix. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
139

Accelerating Markov chain Monte Carlo via parallel predictive prefetching

Angelino, Elaine Lee 21 October 2014 (has links)
We present a general framework for accelerating a large class of widely used Markov chain Monte Carlo (MCMC) algorithms. This dissertation demonstrates that MCMC inference can be accelerated in a model of parallel computation that uses speculation to predict and complete computational work ahead of when it is known to be useful. By exploiting fast, iterative approximations to the target density, we can speculatively evaluate many potential future steps of the chain in parallel. In Bayesian inference problems, this approach can accelerate sampling from the target distribution, without compromising exactness, by exploiting subsets of data. It takes advantage of whatever parallel resources are available, but produces results exactly equivalent to standard serial execution. In the initial burn-in phase of chain evaluation, it achieves speedup over serial evaluation that is close to linear in the number of available cores. / Engineering and Applied Sciences
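A serial-equivalence sketch of the prefetching idea, assuming a random-walk Metropolis chain: because the proposal increments and uniforms do not depend on the state, they can be pre-drawn, and the target density can then be evaluated over the whole speculation tree ahead of time (the part a real system would farm out to parallel cores). The toy target and all parameters are illustrative, not from the dissertation.

```python
import math
import random

def log_target(x):
    """Toy standard-normal log-density; stands in for an expensive model."""
    return -0.5 * x * x

def serial_mh(x0, n, seed):
    """Baseline serial random-walk Metropolis-Hastings chain."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        y = x + rng.gauss(0.0, 1.0)
        if math.log(rng.random() + 1e-300) < log_target(y) - log_target(x):
            x = y
        out.append(x)
    return out

def prefetched_mh(x0, n, depth, seed):
    """Speculative prefetching: lay out the binary tree of states the
    next `depth` accept/reject outcomes can reach, evaluate the target
    at every node up front, then replay the decisions. Output is
    bitwise identical to serial_mh."""
    rng = random.Random(seed)
    x, out = x0, []
    while len(out) < n:
        d = min(depth, n - len(out))
        incs, unis = [], []
        for _ in range(d):                       # same RNG call order as serial_mh
            incs.append(rng.gauss(0.0, 1.0))
            unis.append(rng.random())
        levels = [[x]]
        for i in range(d):                       # reject keeps s, accept moves
            levels.append([t for s in levels[-1] for t in (s, s + incs[i])])
        # "Parallel" density evaluations, memoized by state value.
        dens = {s: log_target(s) for level in levels for s in level}
        for i in range(d):                       # replay the actual path
            y = x + incs[i]
            if math.log(unis[i] + 1e-300) < dens[y] - dens[x]:
                x = y
            out.append(x)
    return out
```

prefetched_mh reproduces serial_mh exactly because both consume the RNG stream in the same order; only the density evaluations are hoisted ahead of the accept/reject decisions, which is the source of the parallel speedup.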
140

A review on computation methods for Bayesian state-space model with case studies

Yang, Mengta, 1979- 24 November 2010 (has links)
Sequential Monte Carlo (SMC) and Forward Filtering Backward Sampling (FFBS) are the two most commonly seen algorithms for the analysis of Bayesian state-space models. Various results regarding their applicability have been either claimed or shown. It is said that SMC excels under nonlinear, non-Gaussian situations and is less computationally expensive. On the other hand, it has been shown that with techniques such as grid approximation (Hore et al. 2010), FFBS-based methods do no worse, though they can still be computationally expensive, but provide more exact information. The purpose of this report is to compare the two methods on simulated data sets and to further explore whether there exist clear criteria that may be used to determine a priori which method would suit a study better.
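A bootstrap particle filter (the simplest SMC variant) can be checked against the exact Kalman filter on a toy linear-Gaussian model, where both answers are available; all model parameters below are illustrative, and the Hore et al. grid approximation is not attempted here.

```python
import math
import random

# Linear-Gaussian state-space model, chosen so the exact answer is
# available from the Kalman filter:
#   x_t = a * x_{t-1} + N(0, q),   y_t = x_t + N(0, r)
a, q, r = 0.9, 0.5, 1.0

def bootstrap_filter(ys, n_particles=2000, seed=0):
    """Bootstrap SMC: propagate particles through the transition,
    weight by the observation likelihood, then resample."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]  # prior N(0, 1)
    means = []
    for y in ys:
        parts = [a * x + rng.gauss(0.0, math.sqrt(q)) for x in parts]
        ws = [math.exp(-0.5 * (y - x) ** 2 / r) for x in parts]
        total = sum(ws)
        means.append(sum(w * x for w, x in zip(ws, parts)) / total)
        parts = rng.choices(parts, weights=ws, k=n_particles)  # multinomial resampling
    return means

def kalman_means(ys, m0=0.0, p0=1.0):
    """Exact filtering means for the same model, for comparison."""
    m, p = m0, p0
    out = []
    for y in ys:
        m, p = a * m, a * a * p + q            # predict
        k = p / (p + r)                        # Kalman gain
        m, p = m + k * (y - m), (1 - k) * p    # update
        out.append(m)
    return out
```

On this model the particle-filter means track the exact Kalman means to within Monte Carlo error, illustrating the kind of simulated-data comparison the report carries out between SMC and FFBS.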
