21

Bayesian Variable Selection for Logistic Models Using Auxiliary Mixture Sampling

Tüchler, Regina January 2006 (has links) (PDF)
The paper presents a Markov Chain Monte Carlo algorithm for both variable and covariance selection in the context of logistic mixed effects models. This algorithm allows us to sample solely from standard densities, with no additional tuning needed. We apply a stochastic search variable selection approach to select explanatory variables as well as to determine the structure of the random effects covariance matrix. For logistic mixed effects models, prior determination of explanatory variables and random effects is no longer a prerequisite, since the definitive structure is chosen in a data-driven manner in the course of the modeling procedure. As an illustration, two real-data examples from finance and tourism studies are given. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
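The stochastic search idea in this abstract can be illustrated with a minimal sketch. The Python code below runs a spike-and-slab Gibbs sampler in the spirit of George and McCulloch's stochastic search variable selection on a Gaussian linear model; the paper's auxiliary mixture representation of the logistic likelihood is not reproduced, and the data, prior variances, and iteration counts are illustrative assumptions.

```python
# Minimal sketch of stochastic search variable selection (SSVS) for a Gaussian
# linear model. The auxiliary mixture step that would turn a logistic likelihood
# into a conditionally Gaussian one is assumed to have been applied already.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: only the first two of five predictors matter (illustrative).
n, p = 200, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(size=n)

sigma2 = 1.0            # observation variance, assumed known for the sketch
tau2, c2 = 0.01, 10.0   # "spike" and "slab" prior variances (assumed)
n_iter = 2000

beta = np.zeros(p)
gamma = np.ones(p, dtype=int)
inclusion = np.zeros(p)

def normal_pdf(x, var):
    return np.exp(-0.5 * x**2 / var) / np.sqrt(2 * np.pi * var)

for it in range(n_iter):
    # 1) Draw beta | gamma, y from its Gaussian full conditional.
    D_inv = np.diag(1.0 / np.where(gamma == 1, c2, tau2))
    A = np.linalg.inv(X.T @ X / sigma2 + D_inv)
    mean = A @ X.T @ y / sigma2
    beta = rng.multivariate_normal(mean, A)

    # 2) Draw each indicator gamma_j | beta_j (Bernoulli full conditional).
    for j in range(p):
        a = 0.5 * normal_pdf(beta[j], c2)    # slab
        b = 0.5 * normal_pdf(beta[j], tau2)  # spike
        gamma[j] = rng.random() < a / (a + b)

    if it >= n_iter // 2:                    # crude burn-in
        inclusion += gamma

print("posterior inclusion probabilities:", inclusion / (n_iter - n_iter // 2))
```

Predictors with high posterior inclusion probability are retained, which is the data-driven structure choice the abstract describes.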
22

Uncertainty Analysis in Upscaling Well Log data By Markov Chain Monte Carlo Method

Hwang, Kyubum 16 January 2010 (has links)
More difficulties are now expected in exploring economically valuable reservoirs because most reservoirs have already been developed since the beginning of seismic exploration of the subsurface. In order to efficiently analyze heterogeneous fine-scale properties in subsurface layers, one ongoing challenge is accurately upscaling fine-scale (high frequency) logging measurements to coarse-scale data, such as surface seismic images. In addition, numerically efficient modeling cannot use models defined on the scale of log data. At this point, we need an upscaling method that replaces the small-scale data with simple large-scale models. However, numerous unavoidable uncertainties still exist in the upscaling process, and these problems have been an important emphasis in geophysics for years. Regarding upscaling problems, there are predictable and unpredictable uncertainties in upscaling processes, such as the averaging method, the upscaling algorithm, the analysis of results, and so forth. To minimize these uncertainties, a Bayesian framework can be a useful tool, providing posterior information that gives a better estimate for a chosen model with a conditional probability. In addition, the likelihood in a Bayesian framework plays an important role in quantifying misfits between the measured data and the calculated parameters. Therefore, Bayesian methodology can provide a good solution for the quantification of uncertainties in upscaling. When analyzing the many uncertainties in porosities, wave velocities, densities, and thicknesses of rocks through upscaling well log data, the Markov Chain Monte Carlo (MCMC) method is a potentially beneficial tool that uses randomly generated parameters within a Bayesian framework to produce the posterior information. In addition, the method provides reliable model parameters for estimating the economic value of hydrocarbon reservoirs, even though log data include numerous unknown factors due to geological heterogeneity. In this thesis, finely layered well log data from the North Sea, over a depth range of 1,600 m to 1,740 m, were selected for upscaling using an MCMC implementation. The results allow us to automatically identify important depths where interfaces should be located, along with quantitative estimates of the uncertainty in the data. Specifically, interfaces in the example are required near depths of 1,650 m, 1,695 m, 1,710 m, and 1,725 m. Therefore, the number and location of blocked layers can be effectively quantified in spite of the uncertainties in upscaling log data.
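As a hedged illustration of the kind of sampling involved, the sketch below runs a random-walk Metropolis sampler over a toy blocked model with a single interface depth and two layer values, and reports the posterior uncertainty in the interface location. The depth range, noise level, proposal scales, and synthetic data are assumptions for illustration, not the thesis's actual parameterization.

```python
# Random-walk Metropolis for a "blocked" log model: infer one interface depth h
# and two layer values (v1 above, v2 below) from noisy fine-scale measurements.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic fine-scale "log": an interface at 1695 m (illustrative values).
z = np.linspace(1600.0, 1740.0, 280)
true = np.where(z < 1695.0, 2.3, 2.6)             # a density-like property
data = true + rng.normal(scale=0.05, size=z.size)
noise = 0.05

def log_post(h, v1, v2):
    if not (z.min() < h < z.max()):
        return -np.inf                            # uniform prior on h
    model = np.where(z < h, v1, v2)
    misfit = -0.5 * np.sum((data - model) ** 2) / noise**2
    prior = -0.5 * ((v1 - 2.5) ** 2 + (v2 - 2.5) ** 2) / 1.0**2  # broad priors
    return misfit + prior

theta = np.array([1650.0, 2.4, 2.4])              # (h, v1, v2) starting point
step = np.array([5.0, 0.01, 0.01])                # proposal std devs (assumed)
lp = log_post(*theta)
samples = []

for it in range(20000):
    prop = theta + step * rng.normal(size=3)
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis accept/reject
        theta, lp = prop, lp_prop
    if it >= 10000:                               # discard burn-in
        samples.append(theta.copy())

samples = np.array(samples)
print("posterior mean interface depth:", samples[:, 0].mean())
print("95% interval:", np.percentile(samples[:, 0], [2.5, 97.5]))
```

The width of the reported interval is the quantified uncertainty in where the blocked layer boundary should be placed.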
23

Bayesian Inference for Stochastic Volatility Models

Men, Zhongxian January 2012 (has links)
Stochastic volatility (SV) models provide a natural framework for representing time series of financial asset returns. As a result, they have become increasingly popular in the finance literature, although they have also been applied in other fields such as signal processing, telecommunications, engineering, and biology. In working with SV models, an important issue arises as to how to estimate their parameters efficiently and to assess how well they fit real data. In the literature, commonly used estimation methods for SV models include the generalized method of moments, simulated maximum likelihood methods, the quasi-maximum likelihood method, and Markov Chain Monte Carlo (MCMC) methods. Among these approaches, MCMC methods are the most flexible in dealing with the complicated structure of the models. However, due to the difficulty of selecting the proposal distribution for Metropolis-Hastings methods, they are in general not easy to implement, and in some cases we may also encounter convergence problems in the implementation stage. In light of these concerns, we propose in this thesis new estimation methods for univariate and multivariate SV models. In simulating the latent states of heavy-tailed SV models, we recommend the slice sampler algorithm as the main tool for sampling the proposal distribution when the Metropolis-Hastings method is applied. For SV models without heavy tails, a simple Metropolis-Hastings method is developed for simulating the latent states. Since the slice sampler can adapt to the analytical structure of the underlying density, it is more efficient: a sample point can be obtained from the target distribution within a few iterations of the sampler, whereas in the original Metropolis-Hastings method many sampled values often need to be discarded. In the analysis of multivariate time series, multivariate SV models with more general specifications have been proposed to capture the correlations between the innovations of the asset returns and those of the latent volatility processes. Due to restrictions on the variance-covariance matrix of the innovation vectors, the estimation of the multivariate SV (MSV) model is challenging. To tackle this issue, for a very general setting of an MSV model we propose a straightforward MCMC method in which a Metropolis-Hastings step is employed to sample the constrained variance-covariance matrix, with an inverse Wishart distribution as the proposal. Again, the log volatilities of the asset returns can then be simulated via a single-move slice sampler. Recently, factor SV models have been proposed to extract hidden market changes. Geweke and Zhou (1996) propose a factor SV model based on factor analysis to measure pricing errors in the context of the arbitrage pricing theory, letting the factors follow the univariate standard normal distribution. Modifications of this model have been proposed, among others, by Pitt and Shephard (1999a) and Jacquier et al. (1999). The main feature of these factor SV models is that the factors follow univariate SV processes, while the loading matrix is a lower triangular matrix with unit entries on the main diagonal. Although factor SV models have been successful in practice, it has been recognized that the ordering of the components may affect the sample likelihood and the selection of the factors. Therefore, in applications, the component order has to be considered carefully. For instance, the factor SV model should be fitted to several permuted versions of the data to check whether the ordering affects the estimation results. In this thesis, a new factor SV model is proposed. Instead of setting the loading matrix to be lower triangular, we set it to be column-orthogonal and assume that each column has unit length. Our method removes the permutation problem, since the model does not need to be refitted when the order is changed. Since a strong assumption is imposed on the loading matrix, the estimation seems even harder than for the previous factor models; for example, we have to sample the columns of the loading matrix while keeping them orthonormal. To tackle this issue, we use the Metropolis-Hastings method to sample the loading matrix one column at a time, while the orthonormality between the columns is maintained using the technique proposed by Hoff (2007): a von Mises-Fisher distribution is sampled and the generated vector is accepted or rejected through the Metropolis-Hastings algorithm. Simulation studies and applications to real data are conducted to examine our inference methods and test the fit of our model. Empirical evidence illustrates that our slice-sampler-within-MCMC methods work well in terms of parameter estimation and volatility forecasting. Examples using financial asset return data are provided to demonstrate that the proposed factor SV model is able to characterize the hidden market factors that mainly govern the financial time series. The Kolmogorov-Smirnov tests conducted on the estimated models indicate that the models do a reasonable job in terms of describing real data.
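To make the slice sampling step concrete, here is a minimal sketch of a univariate stepping-out/shrinkage slice sampler (in the style of Neal, 2003) applied to the full conditional of a single latent log-volatility in a basic SV model. The conditional mean, variance, and observation below are illustrative assumptions, and the thesis's single-move scheme over the whole latent path is not reproduced.

```python
import numpy as np

def slice_sample(x0, logf, w=1.0, max_steps=50, rng=None):
    """One stepping-out/shrinkage slice-sampler update for a univariate density."""
    rng = rng or np.random.default_rng()
    logy = logf(x0) + np.log(rng.random())   # log of the auxiliary slice level
    u = rng.random()
    L, R = x0 - w * u, x0 + w * (1.0 - u)    # randomly placed initial interval
    for _ in range(max_steps):               # step out until outside the slice
        if logf(L) < logy:
            break
        L -= w
    for _ in range(max_steps):
        if logf(R) < logy:
            break
        R += w
    while True:                              # shrinkage: sample until accepted
        x1 = L + rng.random() * (R - L)
        if logf(x1) >= logy:
            return x1
        if x1 < x0:
            L = x1
        else:
            R = x1

# Full conditional of one log-volatility h_t given its AR(1) neighbours and the
# return y_t: a Gaussian "prior" part times the SV likelihood term
# exp(-h/2 - y^2 exp(-h)/2). The values of y_t, m_t and s2 are assumed.
y_t, m_t, s2 = 0.8, -0.2, 0.15

def log_cond(h):
    return -0.5 * (h - m_t) ** 2 / s2 - 0.5 * h - 0.5 * y_t**2 * np.exp(-h)

rng = np.random.default_rng(2)
h, draws = 0.0, []
for _ in range(5000):
    h = slice_sample(h, log_cond, w=0.5, rng=rng)
    draws.append(h)
print("posterior mean of h_t:", np.mean(draws))
```

Because the interval adapts to the density, every iteration returns an accepted draw, which is the efficiency argument made in the abstract.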
24

Bayesian Anatomy of Galaxy Structure

Yoon, Ilsang 01 February 2013 (has links)
In this thesis I develop a Bayesian approach to modelling galaxy surface brightness and apply it to a bulge-disc decomposition analysis of galaxies in the near-infrared band, from the Two Micron All Sky Survey (2MASS). The thesis has three main parts. The first part is the technical development of the Bayesian galaxy image decomposition package Galphat, based on a Markov chain Monte Carlo algorithm. I implement a fast and accurate galaxy model image generation algorithm to reduce computation time and make the Bayesian approach feasible for real science analysis using a large ensemble of galaxies. I perform a benchmark test of Galphat and demonstrate significant improvement in parameter estimation, with correct statistical confidence. The second part is a performance test of the full Bayesian application to galaxy bulge-disc decomposition analysis, including not only parameter estimation but also model comparison to classify different galaxy populations. The test demonstrates that Galphat has enough statistical power to make reliable model inferences using galaxy photometric survey data. Bayesian prior updating is also tested, for both parameter estimation and Bayes factor model comparison, and it shows that an informative prior significantly improves the model inference in every aspect. The last part is a Bayesian bulge-disc decomposition analysis using 2MASS Ks-band selected samples. I characterise the luminosity distributions of spheroids, bulges and discs separately in the local Universe and study the correlation with galaxy morphology, by fully utilising the ensemble parameter posteriors of the entire galaxy sample. It shows that, to avoid biased inference, the parameter covariance and model degeneracy have to be carefully characterised by the full probability distribution.
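As a hedged, one-dimensional analogue of the bulge-disc decomposition described here, the sketch below fits a Sérsic bulge plus exponential disc to a synthetic surface-brightness profile with the general-purpose emcee ensemble sampler. Galphat itself is a dedicated 2D image-fitting code with its own sampler; emcee, the profile forms, the b_n approximation, and all numerical values are stand-in assumptions.

```python
# 1D bulge-disc decomposition sketch: Sersic bulge + exponential disc fit with
# the emcee ensemble sampler (a stand-in for the thesis's dedicated code).
import numpy as np
import emcee

rng = np.random.default_rng(3)

def sersic(r, Ie, re, n):
    bn = 2.0 * n - 1.0 / 3.0                  # common approximation to b_n
    return Ie * np.exp(-bn * ((r / re) ** (1.0 / n) - 1.0))

def disc(r, I0, rd):
    return I0 * np.exp(-r / rd)

# Synthetic profile (arbitrary units): bulge (Ie=5, re=1, n=2) + disc (I0=3, rd=4).
r = np.linspace(0.2, 15.0, 60)
truth = sersic(r, 5.0, 1.0, 2.0) + disc(r, 3.0, 4.0)
sigma = 0.05 * truth
data = truth + rng.normal(scale=sigma)

def log_prob(theta):
    Ie, re, n, I0, rd = theta
    if min(Ie, re, I0, rd) <= 0 or not (0.5 < n < 8.0):
        return -np.inf                        # flat priors with simple bounds
    model = sersic(r, Ie, re, n) + disc(r, I0, rd)
    return -0.5 * np.sum(((data - model) / sigma) ** 2)

ndim, nwalkers = 5, 16
p0 = np.array([4.0, 1.5, 1.5, 2.5, 3.0]) * (1 + 0.05 * rng.normal(size=(nwalkers, ndim)))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 4000, progress=False)
chain = sampler.get_chain(discard=1000, flat=True)
print("posterior medians (Ie, re, n, I0, rd):", np.median(chain, axis=0))
```

Inspecting the joint posterior of the chain (rather than only the medians) is exactly the point the thesis makes about parameter covariance and model degeneracy.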
25

ON PARTICLE METHODS FOR UNCERTAINTY QUANTIFICATION IN COMPLEX SYSTEMS

Yang, Chao January 2017 (has links)
No description available.
26

Bayesian Methods for Intensity Measure and Ground Motion Selection in Performance-Based Earthquake Engineering

Dhulipala, Lakshmi Narasimha Somayajulu 19 March 2019 (has links)
The objective of quantitative Performance-Based Earthquake Engineering (PBEE) is designing buildings that meet specified performance objectives when subjected to an earthquake. One challenge to relying completely upon a PBEE approach in design practice is the open-ended nature of characterizing the earthquake ground motion by selecting appropriate ground motions and Intensity Measures (IMs) for seismic analysis. This open-endedness changes the quantified building performance depending upon the ground motions and IMs selected, so improper ground motion and IM selection can lead to errors in structural performance prediction and thus to poor designs. Hence, the goal of this dissertation is to propose methods and tools that enable an informed selection of earthquake IMs and ground motions, with the broader goal of contributing toward a robust PBEE analysis. In doing so, the change of perspective and the mechanism for incorporating additional information provided by Bayesian methods are utilized. Evaluating the ability of IMs to predict the response of a building, with precision and accuracy, for a future, unknown earthquake is a fundamental problem in PBEE analysis. Whereas current methods for IM quality assessment are subjective and rely on multiple criteria (hence making IM selection challenging), a unified method is proposed that enables rating the numerous IMs. This is done by proposing the first quantitative metric for assessing IM accuracy in predicting the building response to a future earthquake, and then by investigating the relationship between precision and accuracy. This unified metric is further expected to provide a pathway toward improving PBEE analysis by allowing the consideration of multiple IMs. Similar to IM selection, ground motion selection is important for PBEE analysis. Consensus on the "right" input motions for conducting seismic response analyses is often varied and dependent on the analyst. Hence, a general and flexible tool is proposed to aid ground motion selection. General here means the tool encompasses several structural types by considering their sensitivities to different ground motion characteristics. Flexible here means the tool can consider additional information about the earthquake process when it is available to the analyst. Additionally, in support of this ground motion selection tool, a simplified method for seismic hazard analysis for a vector of IMs is developed. This dissertation addresses four critical issues in IM and ground motion selection for PBEE by proposing: (1) a simplified method for performing vector hazard analysis given multiple IMs; (2) a Bayesian framework to aid ground motion selection which is flexible and general enough to incorporate the preferences of the analyst; (3) a unified metric to aid IM quality assessment for seismic fragility and demand hazard assessment; and (4) Bayesian models for capturing heteroscedasticity (non-constant standard deviation) in seismic response analyses, which may further influence IM selection. / Doctor of Philosophy / Earthquake ground shaking is a complex phenomenon, since there is no unique way to assess its strength. Yet the strength of ground motion (shaking) is an integral input for predicting the future earthquake performance of buildings using the Performance-Based Earthquake Engineering (PBEE) framework. The PBEE framework predicts building performance in terms of expected financial losses, possible downtime, and the potential of the building to collapse under a future earthquake. Much prior research has shown that the predictions made by the PBEE framework are heavily dependent upon how the strength of a future earthquake ground motion is characterized. This dependency leads to uncertainty in the predicted building performance and hence in its seismic design. The goal of this dissertation, therefore, is to employ Bayesian reasoning, which takes into account alternative explanations or perspectives of a research problem, and to propose robust quantitative methods that aid IM selection and ground motion selection in PBEE. The fact that the local intensity of an earthquake can be characterized in multiple ways using Intensity Measures (IMs; e.g., peak ground acceleration) is problematic for PBEE, because it leads to different PBEE results for different choices of the IM. While formal procedures for selecting an optimal IM exist, they may be considered subjective and rely on multiple criteria, making their use difficult and inconclusive. Bayes' rule provides a change-of-perspective mechanism by which a problem that is difficult to solve from one perspective can be tackled from a different perspective. This mechanism is used to propose a quantitative, unified metric for rating alternative IMs. The immediate application of this metric is aiding the selection of the IM that would predict the building's earthquake performance with the least bias. Structural analysis for performance assessment in PBEE is conducted by selecting ground motions that match a target response spectrum (a representation of future ground motions). The definition of a target response spectrum lacks general consensus and depends on the analyst's preferences. To encompass these preferences and requirements, a Bayesian target response spectrum, which is general and flexible, is proposed. While its generality allows analysts to select those ground motions to which their structures are most sensitive, its flexibility permits the incorporation of additional information (preferences) into the development of the target response spectrum. This dissertation addresses four critical questions in PBEE: (1) how can we best define ground motion at a site?; (2) if ground motion can only be defined by multiple metrics, how can we easily derive the probability of such shaking at a site?; (3) how do we use these multiple metrics to select a set of ground motion records that best capture the site's unique seismicity?; and (4) when those records are used to analyze the response of a structure, how can we be sure that a standard linear regression technique accurately captures the uncertainty in structural response at low and high levels of shaking?
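A hedged sketch of the heteroscedasticity point in item (4): below, a Bayesian seismic demand model ln(EDP) = a + b ln(IM) with a log-linear standard deviation sigma(IM) = exp(c + d ln(IM)) is fit to synthetic cloud-analysis data by random-walk Metropolis. The functional forms, priors, proposal scales, and data are illustrative assumptions rather than the dissertation's actual models.

```python
# Bayesian heteroscedastic demand model: ln(EDP) = a + b*ln(IM) + eps,
# with non-constant dispersion sigma(IM) = exp(c + d*ln(IM)).
import numpy as np

rng = np.random.default_rng(4)

# Synthetic cloud-analysis data: IM (e.g. spectral acceleration) vs response EDP.
log_im = rng.uniform(np.log(0.05), np.log(2.0), size=150)
a_t, b_t, c_t, d_t = 0.5, 1.1, -1.8, 0.3          # "true" values (illustrative)
log_edp = a_t + b_t * log_im + np.exp(c_t + d_t * log_im) * rng.normal(size=150)

def log_post(theta):
    a, b, c, d = theta
    sigma = np.exp(c + d * log_im)
    resid = log_edp - (a + b * log_im)
    loglik = np.sum(-np.log(sigma) - 0.5 * (resid / sigma) ** 2)
    logprior = -0.5 * np.sum(theta**2) / 10.0**2  # weak N(0, 10^2) priors
    return loglik + logprior

theta = np.zeros(4)
lp = log_post(theta)
step = np.array([0.05, 0.05, 0.05, 0.05])         # proposal scales (assumed)
keep = []
for it in range(30000):
    prop = theta + step * rng.normal(size=4)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis accept/reject
        theta, lp = prop, lp_prop
    if it >= 10000:
        keep.append(theta.copy())

keep = np.array(keep)
print("posterior means (a, b, c, d):", keep.mean(axis=0))
# Non-constant dispersion: compare sigma at low vs high IM levels.
for im in (0.1, 1.0):
    print(f"sigma at IM={im}:", np.exp(keep[:, 2] + keep[:, 3] * np.log(im)).mean())
```

A posterior for d concentrated away from zero is the signal that a constant-dispersion regression would misstate the response uncertainty at low and high shaking levels.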
27

When Infinity is Too Long to Wait: On the Convergence of Markov Chain Monte Carlo Methods

Olsen, Andrew Nolan 08 October 2015 (has links)
No description available.
28

Critical slowing down and error analysis of lattice QCD simulations

Virotta, Francesco 07 May 2012 (has links)
In this work we investigate the critical slowing down of lattice QCD simulations. We perform a preliminary study in the quenched approximation, where we find that our estimate of the exponential autocorrelation time scales as $\tau_{\mathrm{exp}}(a) \sim a^{-5}$, where $a$ is the lattice spacing. In unquenched simulations with O(a)-improved Wilson fermions we do not obtain a scaling law, but find results compatible with the behavior found in the pure gauge theory. The discussion is supported by a large set of ensembles, both in pure gauge theory and in the theory with two degenerate sea quarks. We have moreover investigated the effect of slow algorithmic modes on the error analysis of the expectation values of typical lattice QCD observables (hadronic matrix elements and masses). In the context of simulations affected by slow modes, we propose and test a method to obtain reliable estimates of statistical errors. The method is intended to help in the typical setup of lattice QCD simulations, namely when the total statistics collected is of order $O(10)\,\tau_{\mathrm{exp}}$. This is the typical case when simulating close to the continuum limit, where the computational cost of producing two independent data points can be extremely large. We finally discuss the scale setting in $N_f = 2$ simulations using the kaon decay constant $f_K$ as physical input. The method is explained together with a thorough discussion of the error analysis employed. A description of the publicly available code used for the error analysis is included.
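The error-analysis issue described here can be illustrated with a minimal sketch: the integrated autocorrelation time of a Monte Carlo time series is estimated with a self-consistent summation window (in the spirit of the Gamma-method/Madras-Sokal approach commonly used for lattice QCD error analysis), and the naive error on the mean is inflated accordingly. The AR(1) test series, window factor, and autocorrelation time below are illustrative assumptions, and this is not the publicly available analysis code the thesis refers to.

```python
# Autocorrelation-aware error analysis for a Monte Carlo time series.
import numpy as np

rng = np.random.default_rng(5)

# Synthetic observable with a slow mode: AR(1) with autocorrelation time ~ 50.
n, rho = 100_000, np.exp(-1.0 / 50.0)
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * rng.normal()

def tau_int(series, c=6.0):
    """Integrated autocorrelation time with a self-consistent window W ~ c*tau."""
    y = series - series.mean()
    m = len(y)
    f = np.fft.rfft(y, 2 * m)                     # FFT-based autocovariance
    acf = np.fft.irfft(f * np.conj(f))[:m] / np.arange(m, 0, -1)
    acf /= acf[0]                                 # normalised autocorrelation
    tau = 0.5
    for w in range(1, m // 2):
        tau += acf[w]
        if w >= c * tau:                          # stop once window exceeds c*tau
            return tau, w
    return tau, m // 2

tau, window = tau_int(x)
naive_err = x.std(ddof=1) / np.sqrt(n)
corrected_err = naive_err * np.sqrt(2 * tau)      # error inflated by slow modes
print(f"tau_int ~ {tau:.1f} (window {window}); naive error {naive_err:.4f}, "
      f"autocorrelation-corrected error {corrected_err:.4f}")
```

With a statistics of only a few autocorrelation times, the corrected error is several times the naive one, which is the practical point the thesis makes about simulations near the continuum limit.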
29

[en] ENERGY PRICE SIMULATION IN BRAZIL THROUGH DEMAND SIDE BIDDING / [pt] SIMULAÇÃO DOS PREÇOS DE ENERGIA NO LEILÃO DE EFICIÊNCIA ENERGÉTICA NO BRASIL

JAVIER LINKOLK LOPEZ GONZALES 18 May 2016 (has links)
Energy Efficiency (EE) can be considered synonymous with environmental preservation, because the energy saved avoids the construction of new generation plants and transmission lines. Demand-Side Bidding (DSB) could represent a very interesting alternative for stimulating and promoting EE practices in Brazil. However, it is important to note that this presupposes confidence in the amount of energy saved, which can only become a reality with the implementation and development of a measurement and verification (M&V) system for energy consumption. In this context, the main objective is to simulate the energy prices of the demand-side bidding auction in the regulated environment, in order to assess whether it could become viable in Brazil. The methodology used to perform the simulations was Monte Carlo; beforehand, a kernel method was used to fit the data to a curve using polynomials. Once the best-fitting curve was obtained, each scenario (in the different rounds) was analysed with each sample size (500, 1,000, 5,000 and 10,000) to find the probability of the price falling within the interval of 110 to 140 reais (the optimal prices proposed for the DSB). Finally, the results show that the probability of the price falling between 110 and 140 reais is 28.20 percent for the sample of 500, 33.00 percent for 1,000, 29.96 percent for 5,000, and 32.36 percent for 10,000.
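A hedged sketch of the simulation step just described: a density is fitted to observed auction prices and Monte Carlo samples are drawn from it to estimate the probability that the price falls between 110 and 140 reais, for each of the sample sizes mentioned. The thesis fits the density with a kernel/polynomial approach; here scipy's Gaussian KDE stands in for it, and the "observed" prices are synthetic, illustrative values.

```python
# Monte Carlo estimate of P(110 <= price <= 140) from a fitted price density.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)

# Stand-in historical price data (R$/MWh), not the thesis data.
prices = rng.normal(loc=150.0, scale=35.0, size=300)

kde = gaussian_kde(prices)                             # fitted price density

for n_sim in (500, 1000, 5000, 10000):
    sims = kde.resample(n_sim, seed=rng).ravel()       # Monte Carlo draws
    prob = np.mean((sims >= 110.0) & (sims <= 140.0))  # hit rate in the band
    print(f"n = {n_sim:>5}: P(110 <= price <= 140) ~ {100 * prob:.2f}%")
```

Running the same estimate at several sample sizes, as the thesis does, shows how much the probability estimate itself fluctuates with the Monte Carlo sample.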
30

Branching Out with Mixtures: Phylogenetic Inference That’s Not Afraid of a Little Uncertainty / Förgreningar med mixturer: Fylogenetisk inferens som inte räds lite osäkerhet

Molén, Ricky January 2023 (has links)
Phylogeny, the study of evolutionary relationships among species and other taxa, plays a crucial role in understanding the history of life. Bayesian analysis using Markov chain Monte Carlo (MCMC) is a widely used approach for inferring phylogenetic trees, but it suffers from slow convergence in higher dimensions. This thesis focuses on exploring variational inference (VI), a methodology that is believed to offer improved speed and accuracy for phylogenetic models. However, VI models are known to concentrate the density of the learned approximation in high-likelihood areas. This thesis evaluates the current state of variational Bayesian phylogenetic inference (VBPI) and proposes a solution using a mixture of components to improve the VBPI method's performance on complex datasets and multimodal latent spaces. Additionally, the basics of phylogenetics are covered to provide a comprehensive understanding of the field.
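As a hedged toy analogue of why mixtures help with multimodal targets, the sketch below compares the Monte Carlo ELBO achieved by a single-Gaussian variational approximation against a two-component Gaussian mixture on a bimodal one-dimensional target. VBPI itself operates over tree topologies and branch lengths and learns its parameters by optimization; here the variational parameters are hand-picked assumptions purely to illustrate the effect of the richer family.

```python
# Compare ELBO = E_q[log p~(z) - log q(z)] for a single Gaussian vs a mixture q.
import numpy as np

rng = np.random.default_rng(7)

def log_target(z):
    """Unnormalised log density of a bimodal target: modes near -3 and +3."""
    return np.logaddexp(-0.5 * (z + 3.0) ** 2, -0.5 * (z - 3.0) ** 2)

def log_q_mixture(z, weights, means, sds):
    """Log density of a Gaussian mixture q(z)."""
    comps = [
        np.log(w) - 0.5 * ((z - m) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))
        for w, m, s in zip(weights, means, sds)
    ]
    return np.logaddexp.reduce(np.stack(comps), axis=0)

def sample_mixture(n, weights, means, sds):
    idx = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(np.asarray(means)[idx], np.asarray(sds)[idx])

def elbo(weights, means, sds, n=200_000):
    """Monte Carlo ELBO estimate under the mixture approximation q."""
    z = sample_mixture(n, weights, means, sds)
    return np.mean(log_target(z) - log_q_mixture(z, weights, means, sds))

# Single Gaussian straddling both modes vs a two-component mixture on the modes.
print("single Gaussian ELBO :", elbo([1.0], [0.0], [3.0]))
print("mixture ELBO         :", elbo([0.5, 0.5], [-3.0, 3.0], [1.0, 1.0]))
```

The mixture attains a visibly higher ELBO because it can place mass on both modes instead of bridging them, which is the motivation for the mixture-of-components proposal in the thesis.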
