21

The Application of Markov Chain Monte Carlo Techniques in Non-Linear Parameter Estimation for Chemical Engineering Models

Mathew, Manoj January 2013 (has links)
Modeling of chemical engineering systems often necessitates using non-linear models. These models can range in complexity from a simple analytical equation to a system of differential equations. Regardless of the type of model being utilized, determining parameter estimates is essential in everyday chemical engineering practice. One promising approach to non-linear regression is a technique called Markov Chain Monte Carlo (MCMC). This method produces reliable parameter estimates and generates joint confidence regions (JCRs) with correct shape and correct probability content. Despite these advantages, its application in the chemical engineering literature has been limited. Therefore, in this project, MCMC methods were applied to a variety of chemical engineering models. The objectives of this research are to (1) illustrate how to implement MCMC methods in complex non-linear models, (2) show the advantages of using MCMC techniques over classical regression approaches, and (3) provide practical guidelines on how to reduce the computational time. MCMC methods were first applied to the biological oxygen demand (BOD) problem. In this case study, an implementation procedure was outlined using specific examples from the BOD problem. The results from the study illustrated the importance of estimating the pure error variance as a parameter rather than fixing its value based on the mean square error. In addition, a comparison was carried out between the MCMC results and the results obtained from classical regression approaches. The findings show that although similar point estimates are obtained, JCRs generated from approximation methods cannot model the parameter uncertainty adequately. Markov Chain Monte Carlo techniques were then applied to estimating reactivity ratios in the Mayo-Lewis model, the Meyer-Lowry model, the direct numerical integration model and the triad fraction multiresponse model. The implementation steps for each of these models were discussed in detail and the results from this research were once again compared to previously used approximation methods. Once again, the conclusion drawn from this work showed that MCMC methods must be employed in order to obtain JCRs with the correct shape and correct probability content. MCMC methods were also applied to estimating the kinetic parameters used in a solid oxide fuel cell study. More specifically, the kinetics of the water-gas shift reaction, which is used in generating hydrogen for the fuel cell, were studied. The results from this case study showed how the MCMC output can be analyzed in order to diagnose parameter observability and correlation. A significant portion of the model needed to be reduced due to these issues of observability and correlation. Point estimates and JCRs were then generated using the reduced model, and diagnostic checks were carried out in order to ensure the model was able to capture the data adequately. A few select parameters in the Waterloo Polymer Simulator were estimated using the MCMC algorithm. Previous studies have shown that accurate parameter estimates and JCRs could not be obtained using classical regression approaches. However, when MCMC techniques were applied to the same problem, reliable parameter estimates and confidence regions with correct shape and correct probability content were obtained. This case study offers a strong argument as to why classical regression approaches should be replaced by MCMC techniques.
Finally, a very brief overview of the computational times for each non-linear model used in this research was provided. In addition, a serial farming approach was proposed and a significant decrease in computational time was observed when this procedure was implemented.
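As a concrete illustration of the kind of sampler such an analysis relies on, the sketch below is a minimal random-walk Metropolis-Hastings implementation for a hypothetical BOD-style exponential model, with the pure error variance treated as an unknown parameter rather than fixed at the mean square error. It is a generic illustration, not the thesis code: the model form, synthetic data, priors, and step sizes are all illustrative assumptions.

```python
import numpy as np

# Hypothetical BOD-style model: y = theta1 * (1 - exp(-theta2 * t)).
def model(theta, t):
    return theta[0] * (1.0 - np.exp(-theta[1] * t))

def log_posterior(theta, t, y):
    """Gaussian likelihood with flat priors restricted to positive values.
    The pure error variance sigma^2 (theta[2]) is estimated as a parameter
    rather than fixed at the mean square error."""
    if np.any(np.asarray(theta) <= 0):
        return -np.inf
    resid = y - model(theta, t)
    return -0.5 * len(y) * np.log(theta[2]) - 0.5 * np.sum(resid**2) / theta[2]

def random_walk_mh(log_post, theta0, step, n_iter, rng):
    """Plain random-walk Metropolis-Hastings sampler."""
    theta = np.asarray(theta0, dtype=float)
    step = np.asarray(step, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

rng = np.random.default_rng(0)
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 7.0])
y = model([20.0, 0.24], t) + rng.normal(0.0, 0.5, t.size)   # synthetic data

chain = random_walk_mh(lambda th: log_posterior(th, t, y),
                       theta0=[15.0, 0.5, 1.0], step=[0.8, 0.03, 0.2],
                       n_iter=20000, rng=rng)
posterior = chain[5000:]   # discard burn-in
# A scatter of posterior[:, 0] vs posterior[:, 1] traces the joint credible region.
```

Because the region is read directly off the posterior samples, it is not constrained to the elliptical shape that linearization-based JCRs assume.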
22

Selection of the number of states by birth-death processes

Sögner, Leopold January 2000 (has links) (PDF)
In this article we use spatial birth-death processes to estimate the number of states k of a switching model. Following Preston (1976) and Stephens (1998), matching the detailed balance condition for the underlying birth-death process results in a unique invariant probability measure with the corresponding stationary distribution of the number of states. This concept can easily be integrated into Bayesian sampling to derive the marginal posterior distribution of the number of states within the sampling procedure. We apply this technique to simulated AR(1) data and to quarterly Austrian data on unemployment and real gross domestic product. (author's abstract) / Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
23

Bayesian Variable Selection for Logistic Models Using Auxiliary Mixture Sampling

Tüchler, Regina January 2006 (has links) (PDF)
The paper presents a Markov Chain Monte Carlo algorithm for both variable and covariance selection in the context of logistic mixed effects models. This algorithm allows us to sample solely from standard densities, with no additional tuning needed. We apply a stochastic search variable selection approach to select explanatory variables as well as to determine the structure of the random effects covariance matrix. For logistic mixed effects models, prior determination of explanatory variables and random effects is no longer a prerequisite, since the definitive structure is chosen in a data-driven manner in the course of the modeling procedure. As an illustration, two real-data examples from finance and tourism studies are given. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
24

Uncertainty Analysis in Upscaling Well Log Data by Markov Chain Monte Carlo Method

Hwang, Kyubum 16 January 2010 (has links)
Exploring economically valuable reservoirs is becoming more difficult because most reservoirs have already been developed since the beginning of seismic exploration of the subsurface. In order to efficiently analyze heterogeneous fine-scale properties in subsurface layers, one ongoing challenge is accurately upscaling fine-scale (high frequency) logging measurements to coarse-scale data, such as surface seismic images. In addition, numerically efficient modeling cannot use models defined on the scale of log data. At this point, we need an upscaling method that replaces the small-scale data with simple large-scale models. However, numerous unavoidable uncertainties still exist in the upscaling process, and these problems have been an important emphasis in geophysics for years. There are both predictable and unpredictable uncertainties in upscaling, arising, for example, from the averaging method, the upscaling algorithm, and the analysis of results. To minimize these uncertainties, a Bayesian framework could be a useful tool, providing posterior information that gives a better estimate for a chosen model via conditional probability. In addition, the likelihood in a Bayesian framework plays an important role in quantifying misfits between the measured data and the calculated parameters. Therefore, Bayesian methodology can provide a good solution for the quantification of uncertainties in upscaling. When analyzing the many uncertainties in porosities, wave velocities, densities, and thicknesses of rocks in upscaling well log data, the Markov Chain Monte Carlo (MCMC) method is a potentially beneficial tool that uses randomly generated parameters within a Bayesian framework to produce the posterior information. In addition, the method provides reliable model parameters to estimate economic values of hydrocarbon reservoirs, even though log data include numerous unknown factors due to geological heterogeneity. In this thesis, finely layered well log data from the North Sea, over a depth range of 1600 m to 1740 m, were selected for upscaling using an MCMC implementation. The results allow us to automatically identify important depths where interfaces should be located, along with quantitative estimates of uncertainty in the data. Specifically, interfaces in the example are required near depths of 1650 m, 1695 m, 1710 m, and 1725 m. Therefore, the number and location of blocked layers can be effectively quantified in spite of uncertainties in upscaling log data.
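To make the blocked-layer idea concrete, the following is a minimal sketch of a random-walk Metropolis sampler over interface depths for a synthetic log, with a Gaussian misfit likelihood between the fine-scale log and its blocked (layer-averaged) version. It is a simplified, hypothetical illustration rather than the thesis implementation: the synthetic log, the fixed number of interfaces, the noise level, and the flat priors are all assumptions, and the thesis additionally treats the number of layers and other rock properties as uncertain.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fine-scale "log": piecewise-constant velocity plus noise,
# loosely mimicking the 1600-1740 m depth window described above.
depth = np.linspace(1600.0, 1740.0, 561)
true_ifaces = [1650.0, 1695.0, 1710.0, 1725.0]
true_vals = [2400.0, 2650.0, 2500.0, 2800.0, 2700.0]
log_data = np.select([depth < d for d in true_ifaces], true_vals[:-1], true_vals[-1])
log_data = log_data + rng.normal(0.0, 60.0, depth.size)

def blocked_model(ifaces):
    """Blocked (upscaled) model: the mean of the log within each layer."""
    edges = np.concatenate(([depth[0]], np.sort(ifaces), [depth[-1]]))
    pred = np.empty_like(log_data)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (depth >= lo) & (depth <= hi)
        if mask.any():
            pred[mask] = log_data[mask].mean()
    return pred

def log_likelihood(ifaces, sigma=60.0):
    """Gaussian misfit between the fine-scale log and its blocked version."""
    resid = log_data - blocked_model(ifaces)
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis over the interface depths, with an implicit flat prior.
# The number of interfaces is held fixed here; the thesis also treats it as unknown.
n_iface = 4
ifaces = np.linspace(1610.0, 1730.0, n_iface)        # crude starting guess
ll = log_likelihood(ifaces)
samples = []
for _ in range(10000):
    prop = np.clip(ifaces + rng.normal(0.0, 2.0, n_iface),
                   depth[0] + 1.0, depth[-1] - 1.0)   # perturb the depths
    ll_prop = log_likelihood(prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        ifaces, ll = prop, ll_prop
    samples.append(np.sort(ifaces))
samples = np.array(samples[2000:])
# The spread in each column of `samples` quantifies the interface-depth uncertainty.
```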
26

Bayesian Inference for Stochastic Volatility Models

Men, Zhongxian January 2012 (has links)
Stochastic volatility (SV) models provide a natural framework for representing time series of financial asset returns. As a result, they have become increasingly popular in the finance literature, although they have also been applied in other fields such as signal processing, telecommunications, engineering, and biology. In working with SV models, an important issue is how to estimate their parameters efficiently and how to assess how well they fit real data. In the literature, commonly used estimation methods for SV models include the generalized method of moments, simulated maximum likelihood methods, the quasi-maximum likelihood method, and Markov Chain Monte Carlo (MCMC) methods. Among these approaches, MCMC methods are the most flexible in dealing with the complicated structure of the models. However, due to the difficulty of selecting the proposal distribution for Metropolis-Hastings methods, they are in general not easy to implement, and in some cases we may also encounter convergence problems at the implementation stage. In light of these concerns, we propose in this thesis new estimation methods for univariate and multivariate SV models. In the simulation of the latent states of heavy-tailed SV models, we recommend the slice sampler algorithm as the main tool to sample the proposal distribution when the Metropolis-Hastings method is applied. For SV models without heavy tails, a simple Metropolis-Hastings method is developed for simulating the latent states. Since the slice sampler can adapt to the analytical structure of the underlying density, it is more efficient. A sample point can be obtained from the target distribution with a few iterations of the sampler, whereas in the original Metropolis-Hastings method many sampled values often need to be discarded. In the analysis of multivariate time series, multivariate SV models with more general specifications have been proposed to capture the correlations between the innovations of the asset returns and those of the latent volatility processes. Due to some restrictions on the variance-covariance matrix of the innovation vectors, the estimation of the multivariate SV (MSV) model is challenging. To tackle this issue, for a very general setting of an MSV model we propose a straightforward MCMC method in which a Metropolis-Hastings step is employed to sample the constrained variance-covariance matrix, with an inverse Wishart distribution as the proposal distribution. Again, the log volatilities of the asset returns can then be simulated via a single-move slice sampler. Recently, factor SV models have been proposed to extract hidden market changes. Geweke and Zhou (1996) propose a factor SV model based on factor analysis to measure pricing errors in the context of the arbitrage pricing theory by letting the factors follow the univariate standard normal distribution. Modifications of this model have been proposed by, among others, Pitt and Shephard (1999a) and Jacquier et al. (1999). The main feature of these factor SV models is that the factors follow a univariate SV process, where the loading matrix is a lower triangular matrix with unit entries on the main diagonal. Although factor SV models have been successful in practice, it has been recognized that the ordering of the components may affect the sample likelihood and the selection of the factors. Therefore, in applications, the component order has to be considered carefully.
For instance, the factor SV model should be fitted to several permutations of the data to check whether the ordering affects the estimation results. In this thesis, a new factor SV model is proposed. Instead of setting the loading matrix to be lower triangular, we set it to be column-orthogonal and assume that each column has unit length. Our method removes the permutation problem, since the model does not need to be refitted when the order is changed. Because a strong assumption is imposed on the loading matrix, the estimation seems even harder than for the previous factor models. For example, we have to sample the columns of the loading matrix while keeping them orthonormal. To tackle this issue, we use the Metropolis-Hastings method to sample the loading matrix one column at a time, while the orthonormality between the columns is maintained using the technique proposed by Hoff (2007). A von Mises-Fisher distribution is sampled and the generated vector is accepted or rejected through the Metropolis-Hastings algorithm. Simulation studies and applications to real data are conducted to examine our inference methods and test the fit of our model. Empirical evidence illustrates that our slice sampler within MCMC methods works well in terms of parameter estimation and volatility forecasting. Examples using financial asset return data are provided to demonstrate that the proposed factor SV model is able to characterize the hidden market factors that mainly govern the financial time series. The Kolmogorov-Smirnov tests conducted on the estimated models indicate that the models do a reasonable job of describing the real data.
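The single-move slice-sampling update for the latent log-volatilities can be sketched as follows for the basic (univariate, non-heavy-tailed) SV model, y_t = exp(h_t/2) e_t with h_t following a stationary AR(1) process. This is a generic illustration with the model parameters held fixed, not the thesis's full algorithm (which also samples the parameters and covers the heavy-tailed, multivariate, and factor extensions); the parameter values and the simplified treatment of the end points are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def slice_sample_1d(logf, x0, w=1.0, max_steps=50):
    """Univariate slice sampler with stepping-out and shrinkage (Neal, 2003)."""
    logy = logf(x0) + np.log(rng.uniform())            # log-height of the slice
    left = x0 - w * rng.uniform()
    right = left + w
    for _ in range(max_steps):                          # step out to cover the slice
        if logf(left) < logy:
            break
        left -= w
    for _ in range(max_steps):
        if logf(right) < logy:
            break
        right += w
    while True:                                         # shrink until a point is accepted
        x1 = rng.uniform(left, right)
        if logf(x1) >= logy:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1

def log_cond_h(h_t, y_t, mean_t, var_t):
    """Full conditional of one log-volatility: Gaussian AR(1) part times the
    observation density y_t ~ N(0, exp(h_t))."""
    return (-0.5 * h_t - 0.5 * y_t**2 * np.exp(-h_t)
            - 0.5 * (h_t - mean_t)**2 / var_t)

def sweep_latent_states(h, y, mu, phi, sigma2):
    """One single-move slice-sampling sweep over all latent log-volatilities."""
    T = len(y)
    for t in range(T):
        if t == 0:                                      # simplified end-point treatment
            mean_t, var_t = mu + phi * (h[1] - mu), sigma2
        elif t == T - 1:
            mean_t, var_t = mu + phi * (h[T - 2] - mu), sigma2
        else:                                           # combine links to h[t-1] and h[t+1]
            mean_t = mu + phi * ((h[t - 1] - mu) + (h[t + 1] - mu)) / (1.0 + phi**2)
            var_t = sigma2 / (1.0 + phi**2)
        h[t] = slice_sample_1d(lambda x: log_cond_h(x, y[t], mean_t, var_t), h[t])
    return h

# Synthetic returns from the basic SV model, then repeated sweeps with the
# model parameters held fixed at their true values for illustration.
T, mu, phi, sigma = 300, -1.0, 0.95, 0.2
h_true = np.empty(T)
h_true[0] = mu
for t in range(1, T):
    h_true[t] = mu + phi * (h_true[t - 1] - mu) + sigma * rng.standard_normal()
y = np.exp(h_true / 2.0) * rng.standard_normal(T)

h = np.full(T, mu)
for _ in range(100):
    h = sweep_latent_states(h, y, mu, phi, sigma**2)
```

Each latent state is drawn from its univariate full conditional without any proposal tuning and without discarding rejected draws, which is the efficiency advantage the abstract attributes to the slice sampler.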
27

Bayesian Anatomy of Galaxy Structure

Yoon, Ilsang 01 February 2013 (has links)
In this thesis I develop a Bayesian approach to modeling galaxy surface brightness and apply it to a bulge-disc decomposition analysis of galaxies in the near-infrared band from the Two Micron All Sky Survey (2MASS). The thesis has three main parts. The first part is the technical development of the Bayesian galaxy image decomposition package Galphat, based on a Markov chain Monte Carlo algorithm. I implement a fast and accurate galaxy model image generation algorithm to reduce computation time and make the Bayesian approach feasible for real science analysis using a large ensemble of galaxies. I perform a benchmark test of Galphat and demonstrate a significant improvement in parameter estimation with correct statistical confidence. The second part is a performance test of the full Bayesian application to galaxy bulge-disc decomposition analysis, including not only parameter estimation but also model comparison to classify different galaxy populations. The test demonstrates that Galphat has enough statistical power to make reliable model inferences using galaxy photometric survey data. Bayesian prior updating is also tested for parameter estimation and Bayes factor model comparison, and it shows that an informative prior significantly improves the model inference in every aspect. The last part is a Bayesian bulge-disc decomposition analysis using 2MASS Ks-band selected samples. I characterise the luminosity distributions of spheroids, bulges and discs separately in the local Universe and study the correlations of galaxy morphology by fully utilising the ensemble parameter posterior of the entire galaxy sample. It shows that, to avoid biased inference, the parameter covariance and model degeneracy have to be carefully characterised by the full probability distribution.
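As a rough indication of the kind of model such a bulge-disc decomposition fits, the sketch below evaluates a circularly symmetric Sérsic bulge plus exponential disc surface-brightness model and a Gaussian pixel likelihood on a synthetic image. It is not Galphat's implementation: ellipticity, position angle, PSF convolution, and the careful image-generation machinery the thesis describes are omitted, and the parameter values are illustrative assumptions.

```python
import numpy as np

def sersic(r, I_e, r_e, n):
    """Sersic surface-brightness profile; b_n uses the widely used approximation
    b_n ~ 1.9992 n - 0.3271 (reasonable for roughly 0.5 < n < 10)."""
    b_n = 1.9992 * n - 0.3271
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

def exponential_disc(r, I_0, r_d):
    return I_0 * np.exp(-r / r_d)

def model_image(params, x, y):
    """Circularly symmetric bulge + disc model image (no ellipticity or PSF)."""
    x0, y0, I_e, r_e, n, I_0, r_d = params
    r = np.hypot(x - x0, y - y0)
    return sersic(r, I_e, r_e, n) + exponential_disc(r, I_0, r_d)

def log_likelihood(params, x, y, data, sigma):
    """Gaussian pixel likelihood with a known noise level `sigma`."""
    resid = data - model_image(params, x, y)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Synthetic image and a single likelihood evaluation, as the building block an
# MCMC sampler over `params` would call at every iteration.
ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]
truth = [32.0, 32.0, 50.0, 4.0, 3.5, 20.0, 10.0]
rng = np.random.default_rng(3)
data = model_image(truth, xx, yy) + rng.normal(0.0, 2.0, (ny, nx))
print(log_likelihood(truth, xx, yy, data, sigma=2.0))
```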
28

On Particle Methods for Uncertainty Quantification in Complex Systems

Yang, Chao January 2017 (has links)
No description available.
29

On a Selection of Advanced Markov Chain Monte Carlo Algorithms for Everyday Use: Weighted Particle Tempering, Practical Reversible Jump, and Extensions

Carzolio, Marcos Arantes 08 July 2016 (has links)
We are entering an exciting era, rich in the availability of data via sources such as the Internet, satellites, particle colliders, telecommunication networks, computer simulations, and the like. The confluence of increasing computational resources, volumes of data, and variety of statistical procedures has brought us to a modern enlightenment. Within the next century, these tools will combine to reveal unforeseeable insights into the social and natural sciences. Perhaps the largest headwind we now face is our collectively slow-moving imagination. Like a car on an open road, learning is limited by its own rate. Historically, slow information dissemination and the unavailability of experimental resources limited our learning. To that point, any methodological contribution that helps in the conversion of data into knowledge will accelerate us along this open road. Furthermore, if that contribution is accessible to others, the speedup in knowledge discovery scales exponentially. Markov chain Monte Carlo (MCMC) is a broad class of powerful algorithms, typically used for Bayesian inference. Despite their variety and versatility, these algorithms rarely become mainstream workhorses because they can be difficult to implement. The humble goal of this work is to bring to the table a few more highly versatile and robust, yet easily-tuned algorithms. Specifically, we introduce weighted particle tempering, a parallelizable MCMC procedure that is adaptable to large computational resources. We also explore and develop a highly practical implementation of reversible jump, the most generalized form of Metropolis-Hastings. Finally, we combine these two algorithms into reversible jump weighted particle tempering, and apply it to a model and dataset that was partially collected by the author and his collaborators, halfway around the world. It is our hope that by introducing, developing, and exhibiting these algorithms, we can make a reasonable contribution to the ever-growing body of MCMC research. / Ph. D.
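Weighted particle tempering itself is specific to this thesis, but the tempering idea it builds on can be illustrated with a standard parallel tempering sketch: several chains target increasingly flattened versions of a multimodal density, and occasional swaps let the cold chain escape local modes. The target, temperature ladder, and tuning values below are made up for illustration; this is not the thesis's algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_target(x):
    """A deliberately multimodal 1-D target: mixture of two well-separated normals."""
    return np.logaddexp(-0.5 * (x + 4.0) ** 2, -0.5 * (x - 4.0) ** 2)

def parallel_tempering(log_target, n_iter=20000, betas=(1.0, 0.5, 0.25, 0.1), step=1.0):
    """Standard parallel tempering: one random-walk Metropolis chain per inverse
    temperature beta, plus swap moves between adjacent temperatures."""
    K = len(betas)
    x = np.zeros(K)
    lp = np.array([log_target(xi) for xi in x])
    cold_samples = np.empty(n_iter)
    for it in range(n_iter):
        # Within-temperature random-walk updates against pi(x)^beta.
        for k in range(K):
            prop = x[k] + step * rng.standard_normal()
            lp_prop = log_target(prop)
            if np.log(rng.uniform()) < betas[k] * (lp_prop - lp[k]):
                x[k], lp[k] = prop, lp_prop
        # Attempt one swap between a randomly chosen pair of adjacent temperatures.
        k = rng.integers(K - 1)
        log_alpha = (betas[k] - betas[k + 1]) * (lp[k + 1] - lp[k])
        if np.log(rng.uniform()) < log_alpha:
            x[k], x[k + 1] = x[k + 1], x[k]
            lp[k], lp[k + 1] = lp[k + 1], lp[k]
        cold_samples[it] = x[0]   # the beta = 1 chain targets the actual posterior
    return cold_samples

samples = parallel_tempering(log_target)
# Both modes near -4 and +4 should be visited; a single untempered chain with
# the same step size would typically remain stuck in one mode.
```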
30

Bayesian Methods for Intensity Measure and Ground Motion Selection in Performance-Based Earthquake Engineering

Dhulipala, Lakshmi Narasimha Somayajulu 19 March 2019 (has links)
The objective of quantitative Performance-Based Earthquake Engineering (PBEE) is designing buildings that meet the specified performance objectives when subjected to an earthquake. One challenge to completely relying upon a PBEE approach in design practice is the open-ended nature of characterizing the earthquake ground motion by selecting appropriate ground motions and Intensity Measures (IM) for seismic analysis. This open-ended nature changes the quantified building performance depending upon the ground motions and IMs selected. So, improper ground motion and IM selection can lead to errors in structural performance prediction and thus to poor designs. Hence, the goal of this dissertation is to propose methods and tools that enable an informed selection of earthquake IMs and ground motions, with the broader goal of contributing toward a robust PBEE analysis. In doing so, the change of perspective and the mechanism to incorporate additional information provided by Bayesian methods will be utilized. Evaluating the ability of IMs to predict the response of a building with precision and accuracy for a future, unknown earthquake is a fundamental problem in PBEE analysis. Whereas current methods for IM quality assessment are subjective and have multiple criteria (hence making IM selection challenging), a unified method is proposed that enables rating the numerous IMs. This is done by proposing the first quantitative metric for assessing IM accuracy in predicting the building response to a future earthquake, and then by investigating the relationship between precision and accuracy. This unified metric is further expected to provide a pathway toward improving PBEE analysis by allowing the consideration of multiple IMs. Similar to IM selection, ground motion selection is important for PBEE analysis. Consensus on the "right" input motions for conducting seismic response analyses is often varied and dependent on the analyst. Hence, a general and flexible tool is proposed to aid ground motion selection. General here means the tool encompasses several structural types by considering their sensitivities to different ground motion characteristics. Flexible here means the tool can consider additional information about the earthquake process when it is available to the analyst. Additionally, in support of this ground motion selection tool, a simplified method for seismic hazard analysis for a vector of IMs is developed. This dissertation addresses four critical issues in IM and ground motion selection for PBEE by proposing: (1) a simplified method for performing vector hazard analysis given multiple IMs; (2) a Bayesian framework to aid ground motion selection which is flexible and general to incorporate preferences of the analyst; (3) a unified metric to aid IM quality assessment for seismic fragility and demand hazard assessment; (4) Bayesian models for capturing heteroscedasticity (non-constant standard deviation) in seismic response analyses which may further influence IM selection. / Doctor of Philosophy / Earthquake ground shaking is a complex phenomenon since there is no unique way to assess its strength. Yet, the strength of ground motion (shaking) becomes an integral part for predicting the future earthquake performance of buildings using the Performance-Based Earthquake Engineering (PBEE) framework. The PBEE framework predicts building performance in terms of expected financial losses, possible downtime, and the potential of the building to collapse under a future earthquake.
Much prior research has shown that the predictions made by the PBEE framework are heavily dependent upon how the strength of a future earthquake ground motion is characterized. This dependency leads to uncertainty in the predicted building performance and hence in its seismic design. The goal of this dissertation, therefore, is to employ Bayesian reasoning, which takes into account alternative explanations or perspectives of a research problem, and to propose robust quantitative methods that aid IM selection and ground motion selection in PBEE. The fact that the local intensity of an earthquake can be characterized in multiple ways using Intensity Measures (IMs; e.g., peak ground acceleration) is problematic for PBEE because it leads to different PBEE results for different choices of the IM. While formal procedures for selecting an optimal IM exist, they may be considered subjective and have multiple criteria, making their use difficult and inconclusive. Bayes' rule provides a mechanism called change of perspective, by which a problem that is difficult to solve from one perspective can be tackled from a different perspective. This change-of-perspective mechanism is used to propose a quantitative, unified metric for rating alternative IMs. The immediate application of this metric is aiding the selection of the best IM, which would predict the building earthquake performance with the least bias. Structural analysis for performance assessment in PBEE is conducted by selecting ground motions which match a target response spectrum (a representation of future ground motions). The definition of a target response spectrum lacks general consensus and is dependent on the analysts' preferences. To encompass all these preferences and requirements of analysts, a Bayesian target response spectrum which is general and flexible is proposed. While the generality of this Bayesian target response spectrum allows analysts to select those ground motions to which their structures are the most sensitive, its flexibility permits the incorporation of additional information (preferences) into the target response spectrum development. This dissertation addresses four critical questions in PBEE: (1) how can we best define ground motion at a site; (2) if ground motion can only be defined by multiple metrics, how can we easily derive the probability of such shaking at a site; (3) how do we use these multiple metrics to select a set of ground motion records that best captures the site's unique seismicity; and (4) when those records are used to analyze the response of a structure, how can we be sure that a standard linear regression technique accurately captures the uncertainty in structural response at low and high levels of shaking?
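The heteroscedasticity point in item (4) can be made concrete with a small sketch: a log-linear probabilistic seismic demand model in which the dispersion of ln(EDP) is itself a log-linear function of ln(IM), fitted by random-walk Metropolis. The data are synthetic and the parameterization of sigma(IM) is an illustrative assumption, not the dissertation's specific models.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical seismic demand data: ln(EDP) vs ln(IM) with IM-dependent scatter.
ln_im = rng.uniform(np.log(0.05), np.log(2.0), 200)          # e.g. spectral acceleration in g
true_a, true_b, true_c, true_d = -1.0, 1.2, -1.5, 0.3
ln_edp = (true_a + true_b * ln_im
          + np.exp(true_c + true_d * ln_im) * rng.standard_normal(200))

def log_posterior(theta):
    """Heteroscedastic demand model: ln(EDP) ~ N(a + b*ln(IM), sigma(IM)^2),
    with log sigma(IM) = c + d*ln(IM) and flat priors on (a, b, c, d)."""
    a, b, c, d = theta
    mu = a + b * ln_im
    log_sigma = c + d * ln_im
    return np.sum(-log_sigma - 0.5 * ((ln_edp - mu) / np.exp(log_sigma)) ** 2)

# Random-walk Metropolis over (a, b, c, d).
theta = np.zeros(4)
lp = log_posterior(theta)
step = np.array([0.05, 0.05, 0.05, 0.05])
chain = []
for _ in range(30000):
    prop = theta + step * rng.standard_normal(4)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[10000:])
# A posterior for d concentrated away from zero indicates that the response
# dispersion varies with shaking intensity (heteroscedasticity).
```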
