161

Model Discrimination Using Markov Chain Monte Carlo Methods

Masoumi, Samira 24 April 2013 (has links)
Model discrimination deals with situations where there are several candidate models available to represent a system. The objective is to find the “best” model among rival models with respect to prediction of system behavior. Empirical and mechanistic models are two important categories of models. Mechanistic models are developed based on physical mechanisms. These types of models can be applied for prediction purposes, but they are also developed to gain improved understanding of the underlying physical mechanism or to estimate physico-chemical parameters of interest. When model discrimination is applied to mechanistic models, the main goal is typically to determine the “correct” underlying physical mechanism. This study focuses on mechanistic models and presents a model discrimination procedure which is applicable to mechanistic models for the purpose of studying the underlying physical mechanism. Obtaining the data needed from the real system is one of the challenges, particularly in applications where experiments are expensive or time consuming. Therefore, it is beneficial to extract the maximum information possible from the real system using the least possible number of experiments. In this research a new approach to model discrimination is presented that takes advantage of Monte Carlo (MC) methods. It combines a design of experiments (DOE) method with an adaptation of MC model selection methods to obtain a sequential Bayesian Markov chain Monte Carlo model discrimination framework which is general and applicable to a wide range of model discrimination problems. The procedure has been applied to chemical engineering case studies, and the promising results are discussed. Four case studies, order of reaction, rate of Fe(III) formation, copolymerization, and RAFT polymerization, are presented in this study. The first three benchmark problems allowed us to refine the proposed approach. Moreover, applying the sequential Bayesian Monte Carlo model discrimination framework to the RAFT problem made a contribution to the polymer community by recommending an approach to selecting the correct mechanism.
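The core computation in Bayesian model discrimination is the posterior probability of each rival model, obtained from marginal likelihoods. The sketch below is not the thesis's sequential DOE framework; it estimates marginal likelihoods for two hypothetical kinetic models by plain Monte Carlo averaging over a parameter prior. The data, models, noise level, and priors are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical concentration data generated from first-order kinetics,
# C(t) = C0 * exp(-k*t), with known measurement noise.
t = np.linspace(0.0, 10.0, 15)
C_obs = 2.0 * np.exp(-0.3 * t) + rng.normal(0.0, 0.05, t.size)
sigma = 0.05

def log_likelihood(C_pred):
    return np.sum(-0.5 * ((C_obs - C_pred) / sigma) ** 2
                  - np.log(sigma * np.sqrt(2.0 * np.pi)))

# Two rival mechanistic models: first-order vs. second-order kinetics.
def model_first_order(k):
    return 2.0 * np.exp(-k * t)

def model_second_order(k):
    return 2.0 / (1.0 + 2.0 * k * t)

def log_marginal_likelihood(model, n_samples=5000):
    """Crude Monte Carlo estimate of log p(data | model): average the likelihood
    over a uniform prior on the rate constant k, computed stably in log space."""
    k_prior = rng.uniform(0.01, 1.0, n_samples)
    log_liks = np.array([log_likelihood(model(k)) for k in k_prior])
    return np.logaddexp.reduce(log_liks) - np.log(n_samples)

log_ml = np.array([log_marginal_likelihood(model_first_order),
                   log_marginal_likelihood(model_second_order)])
probs = np.exp(log_ml - log_ml.max())
print("posterior model probabilities:", probs / probs.sum())  # equal model priors assumed
```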
162

Bayesian Analysis for Large Spatial Data

Park, Jincheol 2012 August 1900 (has links)
The Gaussian geostatistical model has been widely used in Bayesian modeling of spatial data. A core difficulty for this model is inverting the n x n covariance matrix, where n is the sample size. The computational complexity of matrix inversion increases as O(n^3). This difficulty arises in almost all statistical inference approaches for the model, such as Kriging and Bayesian modeling. In Bayesian inference, the inverse of the covariance matrix must be evaluated at each iteration of the posterior simulation, so the Bayesian approach is infeasible for large sample sizes given current computational power. In this dissertation, we propose two approaches to address this computational issue, namely, the auxiliary lattice model (ALM) approach and the Bayesian site selection (BSS) approach. The key feature of ALM is the introduction of a latent regular lattice which links a Gaussian Markov random field (GMRF) with the Gaussian field (GF) of the observations. The GMRF on the auxiliary lattice represents an approximation to the Gaussian process. What distinguishes ALM from other approximations is that it completely avoids the matrix inversion by using the analytical likelihood of the GMRF. The computational complexity of ALM is attractive, increasing linearly with the sample size. The second approach, Bayesian site selection (BSS), attempts to reduce the dimension of the data through a smart selection of a representative subset of the observations. The BSS method first splits the observations into two parts: the observations near the target prediction sites (part I) and the remainder (part II). Then, by treating the observations in part I as the response variable and those in part II as explanatory variables, BSS forms a regression model which relates all observations through a conditional likelihood derived from the original model. The dimension of the data can then be reduced by applying a stochastic variable selection procedure to the regression model, which selects only a subset of the part II data as explanatory data. BSS can provide more insight into the underlying true Gaussian process, as it works directly on the original process without any approximation. The practical performance of ALM and BSS is illustrated with simulated data and real data sets.
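To make the bottleneck concrete, the sketch below evaluates one exact zero-mean Gaussian process log-likelihood on made-up spatial data; the Cholesky factorization of the dense n x n covariance is the O(n^3) step that methods such as ALM and BSS are designed to avoid. The exponential covariance family and all parameter values are arbitrary choices for illustration.

```python
import numpy as np
from scipy.linalg import solve_triangular
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)

# Hypothetical spatial locations and observations.
n = 500
coords = rng.uniform(0.0, 1.0, size=(n, 2))
y = rng.normal(size=n)

def exact_gp_loglik(y, coords, variance=1.0, length_scale=0.2, nugget=1e-6):
    """Exact zero-mean Gaussian log-likelihood with an exponential covariance.
    The Cholesky factorization of the dense n x n matrix costs O(n^3)."""
    d = cdist(coords, coords)
    K = variance * np.exp(-d / length_scale) + nugget * np.eye(len(y))
    L = np.linalg.cholesky(K)                       # O(n^3) bottleneck
    alpha = solve_triangular(L, y, lower=True)      # triangular solve, O(n^2)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (alpha @ alpha + logdet + len(y) * np.log(2.0 * np.pi))

print(exact_gp_loglik(y, coords))
```

Repeating this evaluation at every MCMC iteration is what makes the exact approach infeasible once n grows into the tens of thousands.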
163

Algebraic Multigrid for Markov Chains and Tensor Decomposition

Miller, Killian January 2012 (has links)
The majority of this thesis is concerned with the development of efficient and robust numerical methods based on adaptive algebraic multigrid to compute the stationary distribution of Markov chains. It is shown that classical algebraic multigrid techniques can be applied in an exact interpolation scheme framework to compute the stationary distribution of irreducible, homogeneous Markov chains. A quantitative analysis shows that algebraically smooth multiplicative error is locally constant along strong connections in a scaled system operator, which suggests that classical algebraic multigrid coarsening and interpolation can be applied to the class of nonsymmetric irreducible singular M-matrices with zero column sums. Acceleration schemes based on fine-level iterant recombination and on over-correction of the coarse-grid correction are developed to improve the rate of convergence and scalability of simple adaptive aggregation multigrid methods for Markov chains. Numerical tests over a wide range of challenging nonsymmetric test problems demonstrate the effectiveness of the proposed multilevel method and the acceleration schemes. This thesis also investigates the application of adaptive algebraic multigrid techniques for computing the canonical decomposition of higher-order tensors. The canonical decomposition is formulated as a least squares optimization problem, for which local minimizers are computed by solving the first-order optimality equations. The proposed multilevel method consists of two phases: an adaptive setup phase that uses a multiplicative correction scheme in conjunction with bootstrap algebraic multigrid interpolation to build the necessary operators on each level, and a solve phase that uses additive correction cycles based on the full approximation scheme to efficiently obtain an accurate solution. The alternating least squares method, which is a standard one-level iterative method for computing the canonical decomposition, is used as the relaxation scheme. Numerical tests show that, for certain test problems arising from the discretization of high-dimensional partial differential equations on regular lattices, the proposed multilevel method significantly outperforms the standard alternating least squares method when a high level of accuracy is required.
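For concreteness, the target problem is to find x with Px = x, x >= 0 and sum(x) = 1 for a column-stochastic matrix P, i.e. the null-space vector of the singular M-matrix I - P with zero column sums. The sketch below applies plain power iteration to a tiny invented chain; it is only the one-level baseline whose slow convergence multilevel methods are designed to accelerate, not the adaptive multigrid method of the thesis.

```python
import numpy as np

# Column-stochastic transition matrix P of a small irreducible Markov chain
# (hypothetical example). The stationary distribution x solves P x = x,
# i.e. (I - P) x = 0 with x >= 0 and sum(x) = 1.
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.3],
              [0.2, 0.3, 0.4]])

def stationary_power_iteration(P, tol=1e-12, max_iter=100_000):
    """One-level baseline: plain power iteration on the column-stochastic P.
    Multigrid methods aim to accelerate this kind of fixed-point iteration."""
    x = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        x_new = P @ x
        x_new /= x_new.sum()           # keep the iterate a probability vector
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        x = x_new
    return x

print(stationary_power_iteration(P))
```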
164

Structural Performance Evaluation of Actual Bridges by means of Modal Parameter-based FE Model Updating / モーダルパラメータベースのFEモデルアップデートによる実際の橋の構造性能評価

Zhou, Xin 23 March 2022 (has links)
Kyoto University / New system, course doctorate / Doctor of Engineering / Kou No. 23858 / Engineering Doctorate No. 4945 / 新制||工||1772 (University Library) / Department of Civil and Earth Resources Engineering, Graduate School of Engineering, Kyoto University / (Chief examiner) Professor KIM Chul-Woo, Professor 高橋 良和, Associate Professor 北根 安雄 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
165

Exact Markov chain Monte Carlo and Bayesian linear regression

Bentley, Jason Phillip January 2009 (has links)
In this work we investigate the use of perfect sampling methods within the context of Bayesian linear regression. We focus on inference problems related to the marginal posterior model probabilities. Model averaged inference for the response and Bayesian variable selection are considered. Perfect sampling is an alternative form of Markov chain Monte Carlo that generates exact sample points from the posterior of interest. This approach removes the need for burn-in assessment faced by traditional MCMC methods. For model averaged inference, we find that the monotone Gibbs coupling from the past (CFTP) algorithm is the preferred choice. This requires that the predictor matrix be orthogonal, preventing variable selection, but allowing model averaging for prediction of the response. Exploring choices of priors for the parameters in the Bayesian linear model, we investigate sufficiency for monotonicity assuming Gaussian errors. We discover that a number of other sufficient conditions exist, besides an orthogonal predictor matrix, for the construction of a monotone Gibbs Markov chain. Requiring an orthogonal predictor matrix, we investigate new methods of orthogonalizing the original predictor matrix. We find that a new method using the modified Gram-Schmidt orthogonalization procedure performs comparably with existing transformation methods, such as generalized principal components. Accounting for the effect of using an orthogonal predictor matrix, we discover that inference using model averaging for in-sample prediction of the response is comparable between the original and orthogonal predictor matrices. The Gibbs sampler is then investigated for sampling when using the original predictor matrix and the orthogonal predictor matrix. We find that a hybrid method, using a standard Gibbs sampler on the orthogonal space in conjunction with the monotone CFTP Gibbs sampler, provides the fastest computation and convergence to the posterior distribution. We conclude that the hybrid approach should be used when the monotone Gibbs CFTP sampler becomes impractical due to large backwards coupling times. We demonstrate that large backwards coupling times occur when the sample size is close to the number of predictors, or when hyper-parameter choices increase model competition. The monotone Gibbs CFTP sampler should be taken advantage of when the backwards coupling time is small. For the problem of variable selection we turn to the exact version of the independent Metropolis-Hastings (IMH) algorithm. We reiterate the notion that the exact IMH sampler is redundant, being a needlessly complicated rejection sampler. We then determine that a rejection sampler is feasible for variable selection when the sample size is close to the number of predictors and when using Zellner’s prior with a small value for the hyper-parameter c. Finally, we use the example of simulating from the posterior of c conditional on a model to demonstrate how the use of an exact IMH view-point clarifies how the rejection sampler can be adapted to improve efficiency.
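The modified Gram-Schmidt orthogonalization mentioned above is simple to state. The sketch below, on a made-up predictor matrix, shows the transformation that produces the orthogonal (here orthonormal) columns required by the monotone CFTP sampler; it illustrates only the orthogonalization step, not the sampler itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical predictor matrix with correlated columns.
n, p = 50, 4
X = rng.normal(size=(n, p))
X[:, 1] += 0.8 * X[:, 0]  # induce correlation between the first two predictors

def modified_gram_schmidt(X):
    """Return Q with orthonormal columns spanning the same column space as X."""
    Q = X.astype(float)
    for j in range(Q.shape[1]):
        Q[:, j] /= np.linalg.norm(Q[:, j])
        # Remove the component along column j from all later columns.
        for k in range(j + 1, Q.shape[1]):
            Q[:, k] -= (Q[:, j] @ Q[:, k]) * Q[:, j]
    return Q

Q = modified_gram_schmidt(X)
print(np.allclose(Q.T @ Q, np.eye(p)))  # True: the columns are orthonormal
```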
166

A Markov Chain Approach to IEEE 802.11 WLAN Performance Analysis

Xiong, Lixiang January 2008 (has links)
Doctor of Philosophy (PhD) / Wireless communication always attracts extensive research interest, as it is a core part of modern communication technology. During my PhD study, I have focused on two research areas of wireless communication: IEEE 802.11 network performance analysis, and wireless cooperative retransmission. The first part of this thesis focuses on IEEE 802.11 network performance analysis. Since IEEE 802.11 technology is the most popular wireless access technology, IEEE 802.11 network performance analysis is always an important research area. In this area, my work includes the development of three analytical models for various aspects of IEEE 802.11 network performance analysis. First, a two-dimensional Markov chain model is proposed for analysing the performance of IEEE 802.11e EDCA (Enhanced Distributed Channel Access). With this analytical model, the saturated throughput is obtained. Compared with the existing analytical models of EDCA, the proposed model includes more correct details of EDCA, and accordingly its results are more accurate. This better accuracy is also proved by the simulation study. Second, another two-dimensional Markov chain model is proposed for analysing the coexistence performance of IEEE 802.11 DCF (Distributed Coordination Function) and IEEE 802.11e EDCA wireless devices. The saturated throughput is obtained with the proposed analytical model. The simulation study verifies the proposed analytical model, and it shows that the channel access priority of DCF is similar to that of the best effort access category in EDCA in the coexistence environment. The final work in this area is a hierarchical Markov chain model for investigating the impact of data-rate switching on the performance of IEEE 802.11 DCF. With this analytical model, the saturated throughput can be obtained. The simulation study verifies the accuracy of the model and shows the impact of the data-rate switching under different network conditions. A series of threshold values for the channel condition as well as the number of stations is obtained to decide whether the data-rate switching should be active or not. The second part of this thesis focuses on wireless cooperative retransmission. In this thesis, two uncoordinated distributed wireless cooperative retransmission strategies for single-hop connections are presented. In the proposed strategies, each uncoordinated cooperative neighbour randomly decides whether it should transmit to help the frame delivery, depending on pre-calculated optimal transmission probabilities. In Strategy 1, the source only transmits once in the first slot, and only the neighbours are involved in the retransmission attempts in the subsequent slots. In Strategy 2, both the source and the neighbours participate in the retransmission attempts. Both strategies are first analysed with a simple memoryless channel model, and the results show the superior performance of Strategy 2. With the elementary results for the memoryless channel model, a more realistic two-state Markov fading channel model is used to investigate the performance of Strategy 2. The simulation study verifies the accuracy of our analysis and indicates the superior performance of Strategy 2 compared with the simple retransmission strategy and the traditional two-hop strategy.
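Two-dimensional Markov chain models of this kind commonly extend the classic Bianchi fixed point for saturated DCF, which couples a station's transmission probability tau with the conditional collision probability p. The sketch below solves that baseline fixed point numerically; the contention-window parameters and station count are illustrative, and the EDCA, coexistence, and data-rate-switching extensions of the thesis are not modeled here.

```python
from scipy.optimize import brentq

# Classic Bianchi-style fixed point for saturated IEEE 802.11 DCF:
# tau = per-slot transmission probability of a station,
# p   = conditional collision probability seen by a transmitting station.
def solve_dcf(n_stations, W=32, m=5):
    def tau_of_p(p):
        return (2 * (1 - 2 * p)) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))

    def fixed_point(p):
        # p must equal the probability that at least one other station transmits.
        return p - (1 - (1 - tau_of_p(p)) ** (n_stations - 1))

    p = brentq(fixed_point, 1e-9, 1 - 1e-9)  # solve the coupled equations numerically
    return tau_of_p(p), p

tau, p = solve_dcf(n_stations=10)
print(f"tau = {tau:.4f}, collision probability p = {p:.4f}")
```

From tau and p, the saturation throughput follows from the usual slot-time accounting of idle, successful, and collided slots.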
167

Economic Design of X̄ Control Charts for Monitoring Autocorrelated Processes / Planejamento econômico de gráficos de controle X̄ para monitoramento de processos autocorrelacionados

Franco, Bruno Chaves. January 2011 (has links)
Abstract: This research proposes an economic design of an X̄ control chart used to monitor a quality characteristic whose observations fit a first-order autoregressive model with additional error. Duncan's cost model is used to select the control chart parameters, namely the sample size, the sampling interval and the control limit coefficient, with a genetic algorithm searching for the minimum monitoring cost. Markov chains are used to determine the average number of samples until the signal and the number of false alarms. A sensitivity analysis showed that autocorrelation has an adverse effect on the chart parameters, increasing the monitoring cost and significantly reducing its efficiency / Advisor: Marcela Aparecida Guerreiro Machado / Co-advisor: Antonio Fernando Branco Costa / Examining committee: Fernando Augusto Silva Marins / Examining committee: Anderson Paula de Paiva / Master's degree
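The observation model above, a first-order autoregressive process with an additional random error, is easy to simulate, and doing so shows why autocorrelation degrades a chart designed under an independence assumption. The sketch below simulates the model and applies a plain 3-sigma X̄ chart to subgroup means; all parameter values are invented, and the Duncan cost model and genetic-algorithm search are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Quality characteristic following a first-order autoregressive model with an
# additional random error: x_t = mu + y_t + eps_t, where y_t = phi*y_{t-1} + a_t.
def simulate_process(n_obs, mu=10.0, phi=0.7, sigma_a=1.0, sigma_eps=0.5):
    y = np.zeros(n_obs)
    for t in range(1, n_obs):
        y[t] = phi * y[t - 1] + rng.normal(0.0, sigma_a)
    return mu + y + rng.normal(0.0, sigma_eps, n_obs)

mu0, phi, sigma_a, sigma_eps = 10.0, 0.7, 1.0, 0.5
sigma0 = np.sqrt(sigma_a**2 / (1 - phi**2) + sigma_eps**2)  # marginal std of x_t

# Plain 3-sigma X-bar chart on subgroup means of size n, built as if the
# observations were independent.
def xbar_signals(x, n=5):
    means = x[: len(x) // n * n].reshape(-1, n).mean(axis=1)
    ucl = mu0 + 3 * sigma0 / np.sqrt(n)
    lcl = mu0 - 3 * sigma0 / np.sqrt(n)
    return (means > ucl) | (means < lcl)

x = simulate_process(100_000)          # in-control process, so every signal is false
signals = xbar_signals(x)
# With independent data the false-alarm rate would be about 0.0027; positive
# autocorrelation inflates it, which is the adverse effect the study quantifies.
print(f"observed false-alarm rate: {signals.mean():.4f}")
```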
168

Auxiliary variable Markov chain Monte Carlo methods

Graham, Matthew McKenzie January 2018 (has links)
Markov chain Monte Carlo (MCMC) methods are a widely applicable class of algorithms for estimating integrals in statistical inference problems. A common approach in MCMC methods is to introduce additional auxiliary variables into the Markov chain state and perform transitions in the joint space of target and auxiliary variables. In this thesis we consider novel methods for using auxiliary variables within MCMC methods to allow approximate inference in otherwise intractable models and to improve sampling performance in models exhibiting challenging properties such as multimodality. We first consider the pseudo-marginal framework. This extends the Metropolis–Hastings algorithm to cases where we only have access to an unbiased estimator of the density of the target distribution. The resulting chains can sometimes show ‘sticking’ behaviour where long series of proposed updates are rejected. Further, the algorithms can be difficult to tune and it is not immediately clear how to generalise the approach to alternative transition operators. We show that if the auxiliary variables used in the density estimator are included in the chain state it is possible to use new transition operators, such as those based on slice-sampling algorithms, within a pseudo-marginal setting. This auxiliary pseudo-marginal approach leads to easier-to-tune methods and is often able to improve sampling efficiency over existing approaches. As a second contribution we consider inference in probabilistic models defined via a generative process with the probability density of the outputs of this process only implicitly defined. The approximate Bayesian computation (ABC) framework allows inference in such models when conditioning on the values of observed model variables by making the approximation that generated observed variables are ‘close’ rather than exactly equal to observed data. Although this makes the inference problem more tractable, the approximation error introduced in ABC methods can be difficult to quantify and standard algorithms tend to perform poorly when conditioning on high-dimensional observations. This often requires further approximation by reducing the observations to lower-dimensional summary statistics. We show how including all of the random variables used in generating model outputs as auxiliary variables in a Markov chain state can allow the use of more efficient and robust MCMC methods, such as slice sampling and Hamiltonian Monte Carlo (HMC), within an ABC framework. In some cases this can allow inference when conditioning on the full set of observed values where standard ABC methods require reduction to lower-dimensional summaries for tractability. Further we introduce a novel constrained HMC method for performing inference in a restricted class of differentiable generative models which allows conditioning the generated observed variables to be arbitrarily close to observed data while maintaining computational tractability. As a final topic, we consider the use of an auxiliary temperature variable in MCMC methods to improve exploration of multimodal target densities and allow estimation of normalising constants. Existing approaches such as simulated tempering and annealed importance sampling use temperature variables which take on only a discrete set of values.
The performance of these methods can be sensitive to the number and spacing of the temperature values used, and the discrete nature of the temperature variable prevents the use of gradient-based methods such as HMC to update the temperature alongside the target variables. We introduce new MCMC methods which instead use a continuous temperature variable. This both removes the need to tune the choice of discrete temperature values and allows the temperature variable to be updated jointly with the target variables within an HMC method.
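The pseudo-marginal idea described above needs only an unbiased estimator of the target density. The sketch below runs a plain pseudo-marginal Metropolis–Hastings chain on a toy latent-variable model, with the likelihood estimated by importance sampling over the latents; keeping the stored estimate attached to the current state is what makes the chain exact. The model, flat prior, and all settings are invented, and this is the baseline scheme rather than the auxiliary pseudo-marginal or constrained HMC methods of the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model: y_i ~ Normal(theta + z_i, 1) with latent z_i ~ Normal(0, 1),
# so marginally y_i ~ Normal(theta, 2). The marginal likelihood p(y | theta)
# is estimated unbiasedly by Monte Carlo over the latents.
y = rng.normal(1.5, np.sqrt(2.0), size=20)  # synthetic data, true theta = 1.5

def log_lik_estimate(theta, n_latent=64):
    z = rng.normal(size=(n_latent, y.size))               # auxiliary variables
    log_w = -0.5 * (y - theta - z) ** 2 - 0.5 * np.log(2 * np.pi)
    per_obs = np.logaddexp.reduce(log_w, axis=0) - np.log(n_latent)
    return per_obs.sum()

def pseudo_marginal_mh(n_iter=5000, step=0.3):
    theta = 0.0
    log_lik = log_lik_estimate(theta)
    samples = []
    for _ in range(n_iter):
        theta_prop = theta + step * rng.normal()
        log_lik_prop = log_lik_estimate(theta_prop)        # fresh noisy estimate
        # Accept/reject with the *estimated* likelihoods; flat prior assumed.
        if np.log(rng.uniform()) < log_lik_prop - log_lik:
            theta, log_lik = theta_prop, log_lik_prop      # keep the estimate with the state
        samples.append(theta)
    return np.array(samples)

print(np.mean(pseudo_marginal_mh()[1000:]))  # posterior mean after burn-in
```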
169

Programming language semantics as a foundation for Bayesian inference

Szymczak, Marcin January 2018 (has links)
Bayesian modelling, in which our prior belief about the distribution on model parameters is updated by observed data, is a popular approach to statistical data analysis. However, writing specific inference algorithms for Bayesian models by hand is time-consuming and requires significant machine learning expertise. Probabilistic programming promises to make Bayesian modelling easier and more accessible by letting the user express a generative model as a short computer program (with random variables), leaving inference to the generic algorithm provided by the compiler of the given language. However, it is not easy to design a probabilistic programming language correctly and define the meaning of programs expressible in it. Moreover, the inference algorithms used by probabilistic programming systems usually lack formal correctness proofs and bugs have been found in some of them, which limits the confidence one can have in the results they return. In this work, we apply ideas from the areas of programming language theory and statistics to show that probabilistic programming can be a reliable tool for Bayesian inference. The first part of this dissertation concerns the design, semantics and type system of a new, substantially enhanced version of the Tabular language. Tabular is a schema-based probabilistic language, which means that instead of writing a full program, the user only has to annotate the columns of a schema with expressions generating corresponding values. By adopting this paradigm, Tabular aims to be user-friendly, but this unusual design also makes it harder to define the syntax and semantics correctly and reason about the language. We define the syntax of a version of Tabular extended with user-defined functions and pseudo-deterministic queries, design a dependent type system for this language and endow it with a precise semantics. We also extend Tabular with a concise formula notation for hierarchical linear regressions, define the type system of this extended language and show how to reduce it to pure Tabular. In the second part of this dissertation, we present the first correctness proof for a Metropolis-Hastings sampling algorithm for a higher-order probabilistic language. We define a measure-theoretic semantics of the language by means of an operationally-defined density function on program traces (sequences of random variables) and a map from traces to program outputs. We then show that the distribution of samples returned by our algorithm (a variant of “Trace MCMC” used by the Church language) matches the program semantics in the limit.
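To make the notion of a density on program traces concrete, the sketch below runs a tiny hypothetical generative program, records each random choice in a named trace, and accumulates the trace's log-density together with the log-likelihood of the observations; this is the kind of object a trace-based Metropolis-Hastings sampler manipulates. The program, data, and trace representation are invented for illustration and are unrelated to Tabular or Church.

```python
import math
import random

# A tiny generative "program": flip a coin, draw a mean from one of two normals,
# then observe data under that mean. Each random choice is recorded by name in a
# trace, and the program induces a joint log-density over complete traces.
data = [0.2, -0.1, 0.4]

def normal_logpdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def run_program(trace=None):
    """Sample fresh random choices (trace=None) or replay a supplied trace;
    return the recorded trace and its total log-density including observations."""
    t = {} if trace is None else dict(trace)
    if "coin" not in t:
        t["coin"] = random.random() < 0.5
    logp = math.log(0.5)                            # density of the coin flip
    prior_mu = 1.0 if t["coin"] else -1.0
    if "mu" not in t:
        t["mu"] = random.gauss(prior_mu, 1.0)
    logp += normal_logpdf(t["mu"], prior_mu, 1.0)   # density of the mu choice
    for x in data:                                  # conditioning on observed data
        logp += normal_logpdf(x, t["mu"], 0.5)
    return t, logp

trace, logp = run_program()
print(trace, logp)
```

A trace-based sampler would propose a change to one entry of the trace, replay the program under the modified trace, and accept or reject using the ratio of the two log-densities.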
170

Analysing plant closure effects using time-varying mixture-of-experts Markov chain clustering

Frühwirth-Schnatter, Sylvia, Pittner, Stefan, Weber, Andrea, Winter-Ebmer, Rudolf January 2018 (has links) (PDF)
In this paper we study data on discrete labor market transitions from Austria. In particular, we follow the careers of workers who experience a job displacement due to plant closure and observe, over a period of 40 quarters, whether these workers manage to return to a steady career path. To analyse these discrete-valued panel data, we apply a new method of Bayesian Markov chain clustering analysis based on inhomogeneous first-order Markov transition processes with time-varying transition matrices. In addition, a mixture-of-experts approach allows us to model the probability of belonging to a certain cluster as depending on a set of covariates via a multinomial logit model. Our cluster analysis identifies five career patterns after plant closure and reveals that some workers cope quite easily with a job loss whereas others suffer large losses over extended periods of time.
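The building block of the clustering model above is the likelihood of a worker's discrete state sequence under a cluster-specific, time-varying (inhomogeneous) transition matrix. The sketch below evaluates that likelihood for invented states, clusters, and transition probabilities and converts it into cluster-membership probabilities under equal priors; the multinomial-logit mixture-of-experts prior and the full Bayesian estimation are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

# Three hypothetical labor-market states: 0 = employed, 1 = unemployed, 2 = out of labor force.
# Each cluster has its own sequence of transition matrices, one per quarter
# (an inhomogeneous first-order Markov chain).
n_states, n_quarters = 3, 40

def random_transition_matrices(concentration):
    # Shape (n_quarters, n_states, n_states); each row of the last axis sums to 1.
    return rng.dirichlet(np.full(n_states, concentration), size=(n_quarters, n_states))

cluster_transitions = [random_transition_matrices(5.0),   # e.g. "stable" careers
                       random_transition_matrices(0.8)]   # e.g. "turbulent" careers

def log_likelihood(sequence, P):
    """Log-likelihood of one worker's state sequence under time-varying matrices P,
    where P[t, i, j] = Pr(state j in quarter t+1 | state i in quarter t)."""
    return sum(np.log(P[t, sequence[t], sequence[t + 1]])
               for t in range(len(sequence) - 1))

# Posterior cluster probabilities for one (simulated) career, with equal priors;
# in the full model these priors come from a multinomial-logit mixture of experts.
seq = rng.integers(0, n_states, size=n_quarters)
logliks = np.array([log_likelihood(seq, P) for P in cluster_transitions])
probs = np.exp(logliks - logliks.max())
print("cluster membership probabilities:", probs / probs.sum())
```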
