151

Slice Sampling with Multivariate Steps

Thompson, Madeleine 11 January 2012 (has links)
Markov chain Monte Carlo (MCMC) allows statisticians to sample from a wide variety of multidimensional probability distributions. Unfortunately, MCMC is often difficult to use when components of the target distribution are highly correlated or have disparate variances. This thesis presents three results that attempt to address this problem. First, it demonstrates a means for graphical comparison of MCMC methods, which allows researchers to compare the behavior of a variety of samplers on a variety of distributions. Second, it presents a collection of new slice-sampling MCMC methods. These methods either adapt globally or use the adaptive crumb framework for sampling with multivariate steps. They perform well with minimal tuning on distributions where popular methods do not. Methods in the first group learn an approximation to the covariance of the target distribution and use its eigendecomposition to take non-axis-aligned steps. Methods in the second group use the gradients at rejected proposed moves to approximate the local shape of the target distribution so that subsequent proposals move more efficiently through the state space. Finally, this thesis explores the scaling of slice sampling with multivariate steps with respect to dimension, resulting in a formula for optimally choosing scale tuning parameters. It shows that the scaling of untransformed methods can sometimes be improved by alternating steps from those methods with radial steps based on those of the polar slice sampler.
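As a point of reference for the abstract above, plain univariate slice sampling (with the standard stepping-out and shrinkage procedures) can be sketched as follows. This is the textbook building block, not the thesis's multivariate or adaptive variants:

```python
import math
import random

def slice_sample(log_f, x0, w=1.0, n_samples=1000):
    """Univariate slice sampler with stepping-out and shrinkage.

    log_f: log of an (unnormalized) target density.
    x0: starting point; w: initial estimate of the slice width.
    """
    samples = []
    x = x0
    for _ in range(n_samples):
        # Auxiliary height: log(u) with u ~ Uniform(0, f(x)).
        log_y = log_f(x) - random.expovariate(1.0)
        # Step out to find an interval [l, r] that contains the slice.
        l = x - w * random.random()
        r = l + w
        while log_f(l) > log_y:
            l -= w
        while log_f(r) > log_y:
            r += w
        # Shrinkage: sample uniformly from [l, r], shrinking on rejection.
        while True:
            x1 = l + (r - l) * random.random()
            if log_f(x1) > log_y:
                x = x1
                break
            if x1 < x:
                l = x1
            else:
                r = x1
        samples.append(x)
    return samples
```

Run on a standard normal (`log_f = lambda x: -0.5 * x * x`), the sampler needs essentially no tuning beyond a rough width `w`; the thesis's concern is precisely that this ease of use is hard to retain when the steps become multivariate.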
153

Phase transitions in spin systems: uniqueness, reconstruction and mixing time

Yang, Linji 02 April 2013 (has links)
Spin systems are powerful mathematical models widely used and studied in Statistical Physics and Computer Science. This thesis focuses on the study of spin systems on colorings and weighted independent sets (the hard-core model). In many spin systems, there exist phase transition phenomena: there is a threshold value of a parameter such that when the parameter is on one side of the threshold, the system exhibits the so-called spatial decay of correlation, i.e., the influence from a set of vertices to another set of vertices diminishes as the distance between the two sets grows; when the parameter is on the other side, long-range correlations persist. The uniqueness problem and the reconstruction problem are two major threshold problems that are concerned with the decay of correlations in the Gibbs measure from different perspectives. In Computer Science, the study of spin systems has mainly focused on finding an efficient algorithm that samples configurations from a distribution that is very close to the Gibbs measure. Glauber dynamics is a typical Markov chain algorithm for performing such sampling. In many systems, the convergence time of the Glauber dynamics also exhibits a threshold behavior: the speed of convergence experiences a dramatic change around the threshold of the parameter. The first two parts of this thesis focus on making connections between the phase transition of the convergence time of the dynamics and the phase transition of the reconstruction phenomenon in both colorings and the hard-core model on regular trees. A relatively sharp threshold is established for the change of the convergence time, which coincides with the reconstruction threshold. A general technique for upper bounding the conductance of the dynamics via analyzing the sensitivity of the reconstruction algorithm is proposed and proven to be very effective for lower bounding the convergence time of the dynamics.
The third part of the thesis provides an innovative analytical method for establishing a strong version of the decay of correlation of the Gibbs distributions for many two spin systems on various classes of graphs. In particular, the method is applied to the hard-core model on the square lattice, a very important graph that is of great interest in both Statistical Physics and Computer Science. As a result, we significantly improve the lower bound of the uniqueness threshold on the square lattice and extend the range of parameter where the Glauber dynamics is rapidly mixing.
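For readers unfamiliar with the Glauber dynamics referred to above, here is a minimal generic sketch of single-site dynamics for the hard-core model on an adjacency list. It is illustrative only, not the thesis's analysis; the state is always an independent set by construction:

```python
import random

def glauber_hard_core(adj, lam, steps, seed=0):
    """Single-site Glauber dynamics for the hard-core model with fugacity lam.

    adj: adjacency list {v: [neighbours]}. The state is the set of occupied
    vertices. Each step picks a vertex uniformly at random; if no neighbour
    is occupied, the vertex is occupied with probability lam/(1+lam) and
    vacated otherwise; a blocked vertex is always vacated.
    """
    rng = random.Random(seed)
    occupied = set()
    vertices = list(adj)
    for _ in range(steps):
        v = rng.choice(vertices)
        if any(u in occupied for u in adj[v]):
            occupied.discard(v)          # v is blocked: it must stay vacant
        elif rng.random() < lam / (1 + lam):
            occupied.add(v)              # heads: occupy v
        else:
            occupied.discard(v)          # tails: vacate v
    return occupied
```

The thesis's threshold results concern how the mixing time of exactly this kind of chain changes as the fugacity crosses the uniqueness/reconstruction thresholds.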
154

Model Discrimination Using Markov Chain Monte Carlo Methods

Masoumi, Samira 24 April 2013 (has links)
Model discrimination deals with situations where there are several candidate models available to represent a system. The objective is to find the “best” model among rival models with respect to prediction of system behavior. Empirical and mechanistic models are two important categories of models. Mechanistic models are developed based on physical mechanisms. These types of models can be applied for prediction purposes, but they are also developed to gain improved understanding of the underlying physical mechanism or to estimate physico-chemical parameters of interest. When model discrimination is applied to mechanistic models, the main goal is typically to determine the “correct” underlying physical mechanism. This study focuses on mechanistic models and presents a model discrimination procedure which is applicable to mechanistic models for the purpose of studying the underlying physical mechanism. Obtaining the data needed from the real system is one of the challenges particularly in applications where experiments are expensive or time consuming. Therefore, it is beneficial to get the maximum information possible from the real system using the least possible number of experiments. In this research a new approach to model discrimination is presented that takes advantage of Monte Carlo (MC) methods. It combines a design of experiments (DOE) method with an adaptation of MC model selection methods to obtain a sequential Bayesian Markov Chain Monte Carlo model discrimination framework which is general and usable for a wide range of model discrimination problems. The procedure has been applied to chemical engineering case studies and the promising results have been discussed. Four case studies, order of reaction, rate of FeIII formation, copolymerization, and RAFT polymerization, are presented in this study. The first three benchmark problems allowed us to refine the proposed approach. 
Moreover, applying the Sequential Bayesian Monte Carlo model discrimination framework to the RAFT problem made a contribution to the polymer community by recommending an approach to selecting the correct mechanism.
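The core of Bayesian model discrimination is comparing the marginal likelihoods (evidences) of rival models. A deliberately crude generic sketch, using prior-sampling Monte Carlo rather than the thesis's sequential MCMC-with-DOE framework; the model names and priors below are illustrative assumptions:

```python
import math
import random

def log_marginal_likelihood(log_lik, prior_draw, n=4000, seed=1):
    """Crude Monte Carlo estimate of log p(data | model): average the
    likelihood over parameter values drawn from the model's prior."""
    rng = random.Random(seed)
    logs = [log_lik(prior_draw(rng)) for _ in range(n)]
    m = max(logs)
    return m + math.log(sum(math.exp(v - m) for v in logs) / n)

def posterior_model_probs(log_evidences):
    """Posterior model probabilities assuming equal prior model weights."""
    m = max(log_evidences)
    ws = [math.exp(v - m) for v in log_evidences]
    return [w / sum(ws) for w in ws]
```

With data simulated from a linear law, the evidence for a linear model dwarfs that of a rival constant model, so the posterior probability concentrates on the correct mechanism; the thesis's contribution is doing this sequentially while choosing maximally discriminating experiments.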
155

A statistical framework for estimating output-specific efficiencies

Gstach, Dieter January 2003 (has links) (PDF)
This paper presents a statistical framework for estimating output-specific efficiencies for the 2-output case based upon a DEA frontier estimate. The key to the approach is the concept of target output-mix. Being usually unobserved, target output-mixes of firms are modelled as missing data. Using this concept the relevant data generating process can be formulated. The resulting likelihood function is analytically intractable, so a data augmented Bayesian approach is proposed for estimation purposes. This technique is adapted to the present purpose. Some implementation issues are discussed, leading to an empirical Bayes setup with data-informed priors. A proof of scale invariance is provided. (author's abstract) / Series: Department of Economics Working Paper Series
156

Provision Quality-of-Service Controlled Content Distribution in Vehicular Ad Hoc Networks

Luan, Hao 23 August 2012 (has links)
By equipping vehicles with on-board wireless facilities, the newly emerged field of vehicular networking aims to provide broadband services to vehicles. A variety of novel and exciting applications can then be offered to vehicular users to enhance their road safety and travel comfort, ultimately transforming their on-road life. As content distribution and media/video streaming services such as YouTube and Netflix have become the most popular Internet applications, enabling efficient content distribution and audio/video streaming is thus of paramount importance to the success of vehicular networking. This, however, is fraught with fundamental challenges due to the distinctive nature of vehicular networking. On one hand, vehicular communication is challenged by the spotty and volatile wireless connections caused by the high mobility of vehicles. This makes the download performance of connections very unstable, changing dramatically over time, which directly threatens the media applications running on top. On the other hand, a vehicular network typically involves an extremely large-scale node population (e.g., hundreds or thousands of vehicles in a region) with intense spatial and temporal variations across the network geometry at different times. This dictates that any design be scalable and fully distributed: it should not only be resilient to the network dynamics, but also provide guaranteed quality-of-service (QoS) to users. The purpose of this dissertation is to address the challenges imposed by the intrinsic dynamic and large-scale nature of vehicular networking, and to build efficient, scalable and, more importantly, practical systems that enable cost-effective and QoS-guaranteed content distribution and media streaming services to vehicular users. Note that to effectively deliver content from the remote Internet to in-motion vehicles, three parts are typically involved: 1.)
an infrastructure grid of gateways which behave as the data depots or injection points of Internet contents and services to vehicles, 2.) a protocol at gateways which schedules the bandwidth resource at gateways and coordinates the parallel transmissions to different vehicles, and 3.) an end-system control mechanism at receivers which adapts the receiver's content download/playback strategy based on the available network throughput to provide users with the desired service experience. With the above three parts in mind, the research in this dissertation takes a systematic view, addressing each part in one topic: 1.) design of a large-scale cost-effective content distribution infrastructure, 2.) MAC (media access control) performance evaluation and channel time scheduling, and 3.) receiver adaptation and adaptive playout in a dynamic download environment. Specifically, in the first topic, we propose a practical solution to form a large-scale and cost-effective content distribution infrastructure in the city. We argue that a large-scale infrastructure with dedicated resources, including storage, computing and communication capacity, is necessary for the vehicular network to become an alternative to the 3G/4G cellular network as the dominant approach to ubiquitous content distribution and data services for vehicles. To address this issue, we propose a fully distributed scheme to form a large-scale infrastructure from the contributions of individual entities in the city, such as grocery stores, movie theaters, etc.; that is, the installation and maintenance costs are shared by many individuals. In this topic, we explain the design rationale on how to motivate individuals to contribute, and specify the detailed design of the system, including distributed protocols and a performance evaluation.
The second topic investigates the MAC throughput performance of vehicle-to-infrastructure (V2I) communications when vehicles drive through RSUs, namely the drive-thru Internet. Note that with a large-scale population of fast-moving nodes contending for the channel, the MAC performance determines the achievable nodal throughput and is crucial to the applications on top. In this topic, using a simple yet accurate Markovian model, we first show the impacts of mobility (characterized by node velocity and moving direction) on the nodal and system throughput performance, respectively. Based on this analysis, we then propose three enhancement schemes that adjust the MAC parameters in a timely manner, in tune with the vehicle mobility, to maximize the system throughput. The last topic investigates the end-system design needed to deliver the user-desired media streaming services in the vehicular environment. Specifically, vehicular communications are notorious for intermittent connectivity and dramatically varying throughput. Video streaming on top of vehicular networks therefore inevitably suffers from severe network dynamics, resulting in frequently jerky or even frozen video playback. To address this issue, an analytical model is first developed to unveil the impacts of network dynamics on the video performance experienced by users, in terms of video start-up delay and smoothness of playback. Based on this analysis, an adaptive playout buffer mechanism is developed to adapt the video playback strategy at receivers towards a user-defined video quality. The proposals developed in the three topics are validated with extensive and high-fidelity simulations.
We believe that the analysis developed in this dissertation provides insight into the fundamental performance of vehicular content distribution networks from the aspects of session-level download performance in urban vehicular networks (topic 1), MAC throughput performance (topic 2), and user-perceived media quality (topic 3). The protocols developed in the three topics, respectively, offer practical and efficient solutions to build and optimize vehicular content distribution networks.
157

Knotting statistics after a local strand passage in unknotted self-avoiding polygons in Z^3

Szafron, Michael Lorne 15 April 2009 (has links)
We study here a model for a strand passage in a ring polymer about a randomly chosen location at which two strands of the polymer have been brought "close" together. The model is based on Θ-SAPs, which are unknotted self-avoiding polygons in Z^3 that contain a fixed structure Θ that forces two segments of the polygon to be close together. To study this model, the Composite Markov Chain Monte Carlo (CMCMC) algorithm, referred to as the CMC Θ-BFACF algorithm, that I developed and proved to be ergodic for unknotted Θ-SAPs in my M.Sc. thesis, is used. Ten simulations (each consisting of 9.6×10^10 time steps) of the CMC Θ-BFACF algorithm are performed and the results from a statistical analysis of the simulated data are presented. To this end, a new maximum likelihood method, based on previous work of Berretti and Sokal, is developed for obtaining maximum likelihood estimates of the growth constants and critical exponents associated respectively with the numbers of unknotted (2n)-edge Θ-SAPs, unknotted (2n)-edge successful-strand-passage Θ-SAPs, unknotted (2n)-edge failed-strand-passage Θ-SAPs, and (2n)-edge after-strand-passage-knot-type-K unknotted successful-strand-passage Θ-SAPs. The maximum likelihood estimates are consistent with the result (proved here) that the growth constants are all equal, and provide evidence that the associated critical exponents are all equal.

We then investigate the question "Given that a successful local strand passage occurs at a random location in a (2n)-edge knot-type K Θ-SAP, with what probability will the Θ-SAP have knot-type K′ after the strand passage?". To this end, the CMCMC data is used to obtain estimates for the probability of knotting given a (2n)-edge successful-strand-passage Θ-SAP and the probability of an after-strand-passage polygon having knot-type K given a (2n)-edge successful-strand-passage Θ-SAP.

The computed estimates numerically support the unproven conjecture that these probabilities, in the n→∞ limit, go to a value lying strictly between 0 and 1. We further prove here that the rate of approach to each of these limits (should the limits exist) is less than exponential.

We conclude with a study of whether or not there is a difference in the "size" of an unknotted successful-strand-passage Θ-SAP whose after-strand-passage knot-type is K when compared to the "size" of a Θ-SAP whose knot-type does not change after strand passage. The two measures of "size" used are the expected lengths of, and the expected mean-square radius of gyration of, subsets of Θ-SAPs. How these two measures of "size" behave as a function of a polygon's length and its after-strand-passage knot-type is investigated.
158

Conditions for Rapid and Torpid Mixing of Parallel and Simulated Tempering on Multimodal Distributions

Woodard, Dawn Banister 14 September 2007 (has links)
Stochastic sampling methods are ubiquitous in statistical mechanics, Bayesian statistics, and theoretical computer science. However, when the distribution that is being sampled is multimodal, many of these techniques converge slowly, so that a great deal of computing time is necessary to obtain reliable answers. Parallel and simulated tempering are sampling methods that are designed to converge quickly even for multimodal distributions. In this thesis, we assess the extent to which this goal is achieved. We give conditions under which a Markov chain constructed via parallel or simulated tempering is guaranteed to be rapidly mixing, meaning that it converges quickly. These conditions are applicable to a wide range of multimodal distributions arising in Bayesian statistical inference and statistical mechanics. We provide lower bounds on the spectral gaps of parallel and simulated tempering. These bounds imply a single set of sufficient conditions for rapid mixing of both techniques. A direct consequence of our results is rapid mixing of parallel and simulated tempering for several normal mixture models in R^M as M increases, and for the mean-field Ising model. We also obtain upper bounds on the convergence rates of parallel and simulated tempering, yielding a single set of sufficient conditions for torpid mixing of both techniques. These conditions imply torpid mixing of parallel and simulated tempering on a normal mixture model with unequal covariances in R^M as M increases and on the mean-field Potts model with q ≥ 3, regardless of the number and choice of temperatures, as well as on the mean-field Ising model if an insufficient (fixed) set of temperatures is used. The latter result is in contrast to the rapid mixing of parallel and simulated tempering on the mean-field Ising model with a linearly increasing set of temperatures. / Dissertation
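As a generic illustration of the parallel tempering scheme analyzed above (a one-dimensional sketch with illustrative tuning choices, not the thesis's constructions): each temperature runs its own Metropolis chain, and adjacent replicas occasionally swap states so that mode-crossing moves made at high temperature propagate down to the cold chain.

```python
import math
import random

def parallel_tempering(log_f, temps, x0, n_sweeps, seed=0):
    """Parallel tempering: one Metropolis random-walk chain per temperature,
    with replica-swap proposals between adjacent temperatures.
    temps[0] should be 1.0 so the first chain targets log_f itself."""
    rng = random.Random(seed)
    xs = [x0] * len(temps)
    trace = []  # samples from the cold chain
    for _ in range(n_sweeps):
        # Within-chain Metropolis update at each temperature.
        for i, T in enumerate(temps):
            prop = xs[i] + rng.gauss(0, 1.0)
            if math.log(max(rng.random(), 1e-300)) < (log_f(prop) - log_f(xs[i])) / T:
                xs[i] = prop
        # Propose swapping one pair of adjacent replicas.
        i = rng.randrange(len(temps) - 1)
        a = (1 / temps[i] - 1 / temps[i + 1]) * (log_f(xs[i + 1]) - log_f(xs[i]))
        if math.log(max(rng.random(), 1e-300)) < a:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
        trace.append(xs[0])
    return trace
```

On a well-separated two-component normal mixture, the cold chain visits both modes; the thesis's rapid/torpid mixing results say when this behavior is, or is not, guaranteed in high dimension.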
159

Annealing and Tempering for Sampling and Counting

Bhatnagar, Nayantara 09 July 2007 (has links)
The Markov Chain Monte Carlo (MCMC) method has been widely used in practice since the 1950s in areas such as biology, statistics, and physics. However, it is only in the last few decades that powerful techniques for obtaining rigorous performance guarantees with respect to the running time have been developed. Today, with only a few notable exceptions, most known algorithms for approximately uniform sampling and approximate counting rely on the MCMC method. This thesis focuses on algorithms that use MCMC combined with an algorithm from optimization called simulated annealing, for sampling and counting problems. Annealing is a heuristic for finding the global optimum of a function over a large search space. It has recently emerged as a powerful technique used in conjunction with the MCMC method for sampling problems, for example in the estimation of the permanent and in algorithms for computing the volume of a convex body. We examine other applications of annealing to sampling problems as well as scenarios when it fails to converge in polynomial time. We consider the problem of randomly generating 0-1 contingency tables. This is a well-studied problem in statistics, as well as the theory of random graphs, since it is also equivalent to generating a random bipartite graph with a prescribed degree sequence. Previously, the only algorithm known for all degree sequences was by reduction to approximating the permanent of a 0-1 matrix. We give a direct and more efficient combinatorial algorithm which relies on simulated annealing. Simulated tempering is a variant of annealing used for sampling in which a temperature parameter is randomly raised or lowered during the simulation. The idea is that by extending the state space of the Markov chain to a polynomial number of progressively smoother distributions, parameterized by temperature, the chain could cross bottlenecks in the original space which cause slow mixing.
We show that simulated tempering mixes torpidly for the 3-state ferromagnetic Potts model on the complete graph. Moreover, we disprove the conventional belief that tempering can slow fixed temperature algorithms by at most a polynomial in the number of temperatures and show that it can converge at a rate that is slower by at least an exponential factor.
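Simulated tempering, as described above, differs from parallel tempering in running a single chain whose temperature level is part of the state. A minimal generic sketch, assuming uniform pseudo-prior weights over the levels (a simplification: with uniform weights the chain occupies the levels unevenly, which is exactly the kind of effect behind the torpid mixing results above; practical implementations tune these weights):

```python
import math
import random

def simulated_tempering(log_f, temps, x0, n_steps, seed=0):
    """Simulated tempering: a single chain on the augmented state
    (x, temperature level); the level itself is raised or lowered by
    Metropolis moves. Pseudo-prior weights are taken as uniform here."""
    rng = random.Random(seed)
    x, k = x0, 0
    cold = []  # x-samples collected while at the cold level temps[0] == 1.0
    for _ in range(n_steps):
        # Metropolis move in x at the current temperature.
        prop = x + rng.gauss(0, 1.0)
        if math.log(max(rng.random(), 1e-300)) < (log_f(prop) - log_f(x)) / temps[k]:
            x = prop
        # Propose raising or lowering the temperature level.
        k2 = k + rng.choice([-1, 1])
        if 0 <= k2 < len(temps):
            a = log_f(x) * (1 / temps[k2] - 1 / temps[k])
            if math.log(max(rng.random(), 1e-300)) < a:
                k = k2
        if k == 0:
            cold.append(x)
    return cold
```

On an easy bimodal target this single chain hops between levels and crosses between modes; the thesis shows that for the 3-state mean-field Potts model no choice of polynomially many levels rescues it, and that tempering can even be exponentially slower than the fixed-temperature chain.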
160

Improved cement quality and grinding efficiency by means of closed mill circuit modeling

Mejeoumov, Gleb Gennadievich 15 May 2009 (has links)
Grinding of clinker is the last and most energy-consuming stage of the cement manufacturing process, drawing on average 40% of the total energy required to produce one ton of cement. During this stage, the clinker particles are substantially reduced in size to generate a certain level of fineness, as fineness has a direct influence on such performance characteristics of the final product as rate of hydration, water demand, and strength development, among others. The grinding objectives tying together the energy and fineness requirements were formulated based on a review of the state of the art of clinker grinding and numerical simulation employing the Markov chain theory. The literature survey revealed that not only the specific surface of the final product, but also the shape of its particle size distribution (PSD), is responsible for the cement performance characteristics. While it is feasible to engineer the desired PSD in the laboratory, process-specific recommendations on how to generate the desired PSD in the industrial mill are not available. Based on a population balance principle and a stochastic representation of the particle movement within the grinding system, the Markov chain model for the circuit consisting of a tube ball mill and a high efficiency separator was introduced through the matrices of grinding and classification. The grinding matrix was calculated using the selection and breakage functions, whereas the classification matrix was defined from the Tromp curve of the separator. The results of field experiments carried out at a pilot cement plant were used to identify the model's parameters. The retrospective process data pertaining to the operation of the pilot grinding circuit was employed to validate the model and define the process constraints.
Through numerical simulation, the relationships between the controlled (fresh feed rate; separator cut size) and observed (fineness characteristics of cement; production rate; specific energy consumption) parameters of the circuit were defined. The analysis of the simulation results allowed formulation of the process control procedures with the objectives of decreasing the specific energy consumption of the mill, maintaining the targeted specific surface area of the final product, and governing the shape of its PSD.
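The closed-circuit structure described above (mill characterized by a grinding matrix, separator by a Tromp selectivity curve, with recycle back to the mill) can be caricatured as a fixed-point iteration over size classes. The matrices and selectivity values below are illustrative placeholders, not parameters identified in the thesis:

```python
def closed_circuit_steady_state(feed, G, c, tol=1e-10, max_iter=1000):
    """Steady-state size distribution of a closed mill-separator circuit.

    feed: fresh-feed mass per size class (coarsest first).
    G:    grinding matrix; G[i][j] is the fraction of class-j material
          reporting to class i after one mill pass (lower triangular,
          columns summing to 1 so that mass is conserved).
    c:    separator selectivity (Tromp value) per class: the fraction of
          each class returned to the mill; 1 - c[i] reports to product.
    """
    n = len(feed)
    recycle = [0.0] * n
    mill_out = feed[:]
    for _ in range(max_iter):
        mill_in = [feed[i] + recycle[i] for i in range(n)]
        mill_out = [sum(G[i][j] * mill_in[j] for j in range(n)) for i in range(n)]
        new_recycle = [c[i] * mill_out[i] for i in range(n)]
        if max(abs(new_recycle[i] - recycle[i]) for i in range(n)) < tol:
            recycle = new_recycle
            break
        recycle = new_recycle
    product = [(1 - c[i]) * mill_out[i] for i in range(n)]
    return product, recycle
```

At steady state the product stream carries exactly the fresh-feed mass (mass balance), but shifted toward the fine classes; in the thesis the analogous matrices are identified from the selection/breakage functions and the measured Tromp curve rather than assumed.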
