About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Modeling the Spread of Infectious Disease Using Genetic Information Within a Marked Branching Process

Leman, Scotland C., Levy, Foster, Walker, Elaine S. 20 December 2009 (has links)
Accurate assessment of disease dynamics requires quantifying many unknown parameters that govern disease transmission processes. While infection control strategies within hospital settings are stringent, some disease will still be propagated through human interactions (patient-to-patient or patient-to-caregiver-to-patient). In order to understand infectious transmission rates within the hospital, it is necessary to isolate the amount of disease that is endemic to the outside environment. While discerning the origins of disease is difficult with ordinary spatio-temporal data (locations and times of disease detection), genotypes that are common to pathogens with common sources help distinguish nosocomial infections from independent arrivals of the disease. The purpose of this study was to demonstrate a Bayesian modeling procedure for identifying nosocomial infections and to quantify the rate of these transmissions. We demonstrate our method using a 10-year history of Moraxella catarrhalis. Results show the degree to which pathogen-specific genotypic information impacts inferences about the nosocomial rate of infection.
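The genotype argument in the abstract above can be illustrated with a one-line application of Bayes' rule. All numbers below are hypothetical illustrations, not estimates from the study, and the full marked-branching-process model is far richer than this sketch.

```python
# Bayes' rule for the probability that a newly detected case is nosocomial,
# given that its genotype matches an earlier in-hospital isolate.
def p_nosocomial(prior_noso, p_match_if_noso, p_match_if_community):
    num = p_match_if_noso * prior_noso
    den = num + p_match_if_community * (1.0 - prior_noso)
    return num / den

# A genotype match is far likelier under within-hospital transmission,
# so observing one sharply raises the nosocomial probability.
posterior = p_nosocomial(0.2, 0.9, 0.05)
print(round(posterior, 3))
```

Even with a modest prior (20%), a match under these hypothetical rates pushes the posterior above 80%, which is the sense in which genotypic information "aids in distinguishing" the two sources.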
102

Peptide Refinement by Using a Stochastic Search

Lewis, Nicole H., Hitchcock, David B., Dryden, Ian L., Rose, John R. 01 November 2018 (has links)
Identifying a peptide on the basis of a scan from a mass spectrometer is an important yet highly challenging problem. To identify peptides, we present a Bayesian approach that uses prior information about the average relative abundances of bond cleavages and the prior probability of any particular amino acid sequence. The proposed scoring function is composed of two overall distance measures, which quantify how close an observed spectrum is to the theoretical scan for a peptide. Our scoring function, which approximates a likelihood, connects to the generalization of the Bayesian framework presented by Bissiri and co-workers. A Markov chain Monte Carlo algorithm is employed to simulate candidate choices from the posterior distribution of the peptide sequence, and the true peptide is estimated as the one with the largest posterior density.
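A toy illustration of the MCMC search over peptide sequences: here the spectrum-based scoring function is replaced by a hypothetical match-count score against a made-up target, used (scaled by a temperature) as a log-density surrogate. This sketches the sampler's mechanics only, not the thesis's actual score.

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def score(seq, target):
    # Number of matching positions; stands in for the spectrum-distance score.
    return sum(a == b for a, b in zip(seq, target))

def metropolis_peptide(target, n_iter=4000, beta=3.0, seed=0):
    rng = random.Random(seed)
    current = [rng.choice(AMINO_ACIDS) for _ in target]
    best = list(current)
    for _ in range(n_iter):
        proposal = list(current)
        i = rng.randrange(len(proposal))      # mutate one random position
        proposal[i] = rng.choice(AMINO_ACIDS)
        delta = score(proposal, target) - score(current, target)
        # Metropolis acceptance with beta * score as the log-density.
        if delta >= 0 or rng.random() < math.exp(beta * delta):
            current = proposal
        if score(current, target) > score(best, target):
            best = list(current)
    return "".join(best)

best = metropolis_peptide("PEPTIDE")
print(best)
```

The chain proposes single-residue mutations and keeps the highest-scoring sequence visited, mirroring the "peptide with the largest posterior density" estimate.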
103

A Method for Reconstructing Historical Destructive Earthquakes Using Bayesian Inference

Ringer, Hayden J. 04 August 2020 (has links)
Seismic hazard analysis is concerned with estimating risk to human populations due to earthquakes and the other natural disasters that they cause. In many parts of the world, earthquake-generated tsunamis are especially dangerous. Assessing the risk for seismic disasters relies on historical data that indicate which fault zones are capable of supporting significant earthquakes. Due to the nature of geologic time scales, the era of seismological data collection with modern instruments has captured only a part of the Earth's seismic hot zones. However, non-instrumental records, such as anecdotal accounts in newspapers, personal journals, or oral tradition, provide limited information on earthquakes that occurred before the modern era. Here, we introduce a method for reconstructing the source earthquakes of historical tsunamis based on anecdotal accounts. We frame the reconstruction task as a Bayesian inference problem by making a probabilistic interpretation of the anecdotal records. Utilizing robust models for simulating earthquakes and tsunamis provided by the software package GeoClaw, we implement a Metropolis-Hastings sampler for the posterior distribution on source earthquake parameters. In this work, we present our analysis of the 1852 Banda Arc earthquake and tsunami as a case study for the method. Our method is implemented as a Python package, which we call tsunamibayes. It is available, open-source, on GitHub: https://github.com/jwp37/tsunamibayes.
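The Metropolis-Hastings step described above can be sketched compactly. The GeoClaw forward model is replaced here by a hypothetical one-parameter log-posterior over earthquake magnitude, so this illustrates the sampler, not tsunamibayes itself; prior and pseudo-likelihood numbers are invented.

```python
import math
import random

def log_posterior(magnitude):
    # Hypothetical stand-in: prior N(8.5, 0.5^2) plus a pseudo-likelihood
    # N(8.8, 0.2^2); the analytic posterior mean is about 8.76.
    lp = -0.5 * ((magnitude - 8.5) / 0.5) ** 2
    ll = -0.5 * ((magnitude - 8.8) / 0.2) ** 2
    return lp + ll

def metropolis_hastings(log_post, x0, step, n_iter, seed=0):
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)            # symmetric random walk
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # Metropolis acceptance
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

draws = metropolis_hastings(log_posterior, 8.0, 0.1, 20000)[5000:]
print(sum(draws) / len(draws))  # posterior mean, close to 8.76
```

In the real method, evaluating `log_post` means running a tsunami simulation and scoring it against the anecdotal accounts, which is why each posterior sample is expensive.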
104

Modern Monte Carlo Methods and Their Application in Semiparametric Regression

Thomas, Samuel Joseph 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / The essence of Bayesian data analysis is to ascertain posterior distributions. Posteriors generally do not have closed-form expressions for direct computation in practical applications. Analysts, therefore, resort to Markov Chain Monte Carlo (MCMC) methods for the generation of sample observations that approximate the desired posterior distribution. Standard MCMC methods simulate sample values from the desired posterior distribution via random proposals. As a result, the mechanism used to generate the proposals inevitably determines the efficiency of the algorithm. One of the modern MCMC techniques designed to explore the high-dimensional space more efficiently is Hamiltonian Monte Carlo (HMC), based on the Hamiltonian differential equations. Inspired by classical mechanics, these equations incorporate a latent variable to generate MCMC proposals that are likely to be accepted. This dissertation discusses how such a powerful computational approach can be used for implementing statistical models. Along this line, I created a unified computational procedure for using HMC to fit various types of statistical models. The procedure that I proposed can be applied to a broad class of models, including linear models, generalized linear models, mixed-effects models, and various types of semiparametric regression models. To facilitate the fitting of a diverse set of models, I incorporated new parameterization and decomposition schemes to ensure the numerical performance of Bayesian model fitting without sacrificing the procedure’s general applicability. As a concrete application, I demonstrate how to use the proposed procedure to fit a multivariate generalized additive model (GAM), a nonstandard statistical model with a complex covariance structure and numerous parameters. 
Byproducts of the research include two software packages that allow practical data analysts to use the proposed computational method to fit their own models. The research's main methodological contribution is the unified computational approach it presents for Bayesian fitting of both standard and nonstandard statistical models. The availability of such a procedure greatly enhances statistical modelers' toolbox for implementing new and nonstandard statistical models.
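A minimal HMC sketch, assuming a 1-D standard normal target: momentum is resampled, Hamilton's equations are integrated with the leapfrog scheme, and the endpoint is accepted or rejected by the Hamiltonian. This illustrates only the proposal mechanism described above, not the dissertation's unified fitting procedure.

```python
import math
import random

def hmc_sample(grad_log_p, log_p, x0, eps=0.2, n_leap=10, n_iter=5000, seed=1):
    rng = random.Random(seed)
    x = x0
    out = []
    for _ in range(n_iter):
        p = rng.gauss(0.0, 1.0)                    # resample latent momentum
        x_new, p_new = x, p
        p_new += 0.5 * eps * grad_log_p(x_new)     # leapfrog: half momentum step
        for _ in range(n_leap):
            x_new += eps * p_new                   # full position step
            p_new += eps * grad_log_p(x_new)       # full momentum step
        p_new -= 0.5 * eps * grad_log_p(x_new)     # undo the extra half step
        # Accept/reject using the Hamiltonian (potential + kinetic energy).
        h_old = -log_p(x) + 0.5 * p * p
        h_new = -log_p(x_new) + 0.5 * p_new * p_new
        if math.log(rng.random()) < h_old - h_new:
            x = x_new
        out.append(x)
    return out

# Standard normal: log p(x) = -x^2/2, gradient -x.
draws = hmc_sample(lambda x: -x, lambda x: -0.5 * x * x, 0.0)
mean = sum(draws) / len(draws)
var = sum(d * d for d in draws) / len(draws)
```

Because the leapfrog trajectory follows the gradient, proposals land far from the current state yet are accepted with high probability, which is the efficiency gain HMC offers over random-walk MCMC.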
105

Investigating Convergence of Markov Chain Monte Carlo Methods for Bayesian Phylogenetic Inference

Spade, David Allen 29 August 2013 (has links)
No description available.
106

Break Point Detection for Strategic Asset Allocation / Detektering av brytpunkter för strategisk tillgångsslagsallokering

Madebrink, Erika January 2019 (has links)
This paper focuses on how to improve strategic asset allocation in practice. Strategic asset allocation is perhaps the most fundamental issue in portfolio management, and it has been thoroughly discussed in previous research. We take our starting point in the traditional work of Markowitz on portfolio optimization. We provide a new solution for how to perform portfolio optimization in practice, or more specifically how to estimate the covariance matrix needed for conventional portfolio optimization. Many researchers in this field have noted that the return distribution of financial assets seems to vary over time, so-called regime switching, which makes it difficult to estimate the covariance matrix. We solve this problem by using a Bayesian approach, developing a Markov chain Monte Carlo algorithm that detects break points in the return distribution of financial assets and thus enables us to improve the estimation of the covariance matrix. We find two break points during the time period studied; the main difference between the periods is that volatility was substantially higher for all assets during the period corresponding to the financial crisis, whereas correlations were less affected. By evaluating the performance of the algorithm, we find that it can increase the Sharpe ratio of a portfolio, and thus that it can improve strategic asset allocation over time.
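The break-point idea can be sketched in a heavily simplified form: a single break in the mean of a Gaussian series, with the posterior over break locations computed exactly on a grid. Segment means are profiled out rather than integrated against a prior, and the full covariance structure from the thesis is omitted; the data below are synthetic.

```python
import math
import random

def breakpoint_posterior(y, sigma=1.0):
    # Discrete (unnormalized log-)posterior over "break after position k".
    n = len(y)
    log_post = []
    for k in range(1, n):
        ll = 0.0
        for seg in (y[:k], y[k:]):
            m = sum(seg) / len(seg)            # segment mean, profiled out
            ll += -sum((v - m) ** 2 for v in seg) / (2.0 * sigma ** 2)
        log_post.append(ll)
    mx = max(log_post)                         # stabilize before exponentiating
    w = [math.exp(v - mx) for v in log_post]
    s = sum(w)
    return [v / s for v in w]

# Synthetic series: mean 0 for 50 points, then mean 3 for 50 points.
random.seed(0)
y = [random.gauss(0, 1) for _ in range(50)] + [random.gauss(3, 1) for _ in range(50)]
post = breakpoint_posterior(y)
k_hat = max(range(len(post)), key=post.__getitem__) + 1
print(k_hat)  # most probable break location
```

The thesis's MCMC version plays the same role for an unknown number of breaks in a full return-covariance model, where the posterior cannot be enumerated this way.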
107

Exact Markov Chain Monte Carlo for a Class of Diffusions

Qi Wang (14157183) 05 December 2022 (has links)
This dissertation focuses on the simulation efficiency of the Markov process for two scenarios: stochastic differential equations (SDEs) and simulated weather data.

For SDEs, we propose a novel Gibbs sampling algorithm that allows sampling from a particular class of SDEs without any discretization error, and we show that the proposed algorithm improves sampling efficiency by orders of magnitude over existing popular algorithms.

In the weather data simulation study, we investigate how representative the simulated data are for three popular stochastic weather generators. Our results suggest the need for more than a single realization when generating weather data to obtain suitable representations of climate.
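The thesis's exact Gibbs scheme is specific to its class of SDEs; as a generic illustration of what Gibbs sampling is, the sketch below alternates draws from the full conditionals of a bivariate normal with correlation rho (a standard textbook example, not the thesis's algorithm).

```python
import math
import random

def gibbs_bivariate_normal(rho, n_iter=20000, seed=0):
    # For a standard bivariate normal, each full conditional is
    # N(rho * other, 1 - rho^2), so both Gibbs updates are exact draws.
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sd = math.sqrt(1.0 - rho ** 2)
    samples = []
    for _ in range(n_iter):
        x = rng.gauss(rho * y, sd)   # draw x | y
        y = rng.gauss(rho * x, sd)   # draw y | x
        samples.append((x, y))
    return samples

draws = gibbs_bivariate_normal(0.8)
corr = sum(x * y for x, y in draws) / len(draws)
```

"Exact" here means every conditional draw is from the true distribution, with no discretization; the thesis achieves the analogous property for its SDE class, where naive methods must discretize time.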
108

Robust Method for Reservoir Simulation History Matching Using Bayesian Inversion and Long-Short Term Memory Network (LSTM) Based Proxy

Zhang, Zhen 11 1900 (has links)
History matching is a critical process for calibrating simulation models and assessing subsurface uncertainties. This common technique aims to align reservoir models with the observed data. However, achieving this goal is often challenging due to the non-uniqueness of the solution, underlying subsurface uncertainties, and the usually high computational cost of simulations. The traditional approach is often based on trial and error, which is exhaustive and labor-intensive. Analytical and numerical proxies combined with Monte Carlo simulations are sometimes used to reduce the computational time, but these approaches suffer from low accuracy and may not fully capture subsurface uncertainties. This study proposes a new robust method utilizing Bayesian Markov chain Monte Carlo (MCMC) to perform assisted history matching under uncertainty. We propose a novel three-step workflow that includes 1) multi-resolution low-fidelity models to guarantee high-quality matching; 2) a Long-Short Term Memory (LSTM) network as a low-fidelity model that reproduces the continuous time-response of the simulation model, combined with Bayesian optimization to obtain the optimum low-fidelity model; and 3) Bayesian MCMC runs to obtain the Bayesian inversion of the uncertainty parameters. We perform sensitivity analysis on the LSTM's architecture, hyperparameters, training set, number of chains, and chain length to obtain the optimum setup for Bayesian-LSTM history matching. We also compare the performance of predicting the recovery factor using different surrogate methods, including polynomial chaos expansions (PCE), kriging, and support vector machines for regression (SVR). We demonstrate the proposed method on a water-flooding problem for the upper Tarbert formation of the tenth SPE comparative model. This case study represents a highly heterogeneous nearshore environment.
Results showed that the Bayesian-optimized LSTM successfully captured the physics of the high-fidelity model. The Bayesian-LSTM MCMC produces accurate predictions with narrow uncertainty ranges. Posterior prediction through the high-fidelity model ensures the robustness and accuracy of the workflow. This approach provides an efficient and practical history-matching method for reservoir simulation and subsurface flow modeling with significant uncertainties.
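The proxy-in-the-loop structure of this workflow can be sketched as follows. The reservoir simulator and the LSTM surrogate are replaced by hypothetical toy functions (sinh and its cubic Taylor approximation), and the observed value and noise level are invented; only the pattern — cheap surrogate inside the MCMC loop, expensive model reserved for a final check — mirrors the abstract.

```python
import math
import random

def forward_model(x):
    return math.sinh(x)                  # "expensive" simulator stand-in

def surrogate(x):
    return x + x ** 3 / 6.0              # cheap approximation of sinh(x)

def log_likelihood(pred, observed=0.5, noise=0.2):
    return -0.5 * ((pred - observed) / noise) ** 2

def mh_with_surrogate(n_iter=8000, seed=0):
    rng = random.Random(seed)
    x, lp = 0.0, log_likelihood(surrogate(0.0))
    out = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, 0.3)
        lp_prop = log_likelihood(surrogate(prop))  # surrogate inside the loop
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out.append(x)
    return out

draws = mh_with_surrogate()[2000:]       # discard burn-in
post_mean = sum(draws) / len(draws)
# Posterior check through the expensive model, run only once at the end.
check = forward_model(post_mean)
```

Thousands of likelihood evaluations hit only the surrogate; the forward model is invoked a single time, which is the source of the computational savings claimed in the abstract.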
109

Bayesian Non-parametric Models for Time Series Decomposition

Granados-Garcia, Guillermo 05 January 2023 (has links)
The standard approach to analyzing brain electrical activity is to examine the spectral density function (SDF) and identify frequency bands, defined a priori, that make the most substantial relative contributions to the overall variance of the signal. However, a limitation of this approach is that the precise frequency and bandwidth of oscillations are not uniform across cognitive demands, so these bands should not be arbitrarily fixed in any analysis. To overcome this limitation, we propose three Bayesian non-parametric models for time series decomposition: data-driven approaches that identify (i) the number of prominent spectral peaks, (ii) the frequency peak locations, and (iii) their corresponding bandwidths (the spread of power around the peaks). The standardized SDF is represented as a Dirichlet process mixture based on a kernel derived from second-order auto-regressive processes, which completely characterize the location (peak) and scale (bandwidth) parameters. A Metropolis-Hastings-within-Gibbs algorithm is developed for sampling from the posterior distribution of the mixture parameters for each project. Simulation studies demonstrate the robustness and performance of the proposed methods. The methods were applied to local field potential (LFP) activity from the hippocampus of laboratory rats across different conditions in a non-spatial sequence memory experiment, to identify the most prominent frequency bands and examine the link between specific patterns of brain oscillatory activity and trial-specific cognitive demands. The second application studies 61 EEG channels from two subjects performing a visual recognition task to discover frequency-specific oscillations present across brain zones. The third application extends the model to characterize data from 10 alcoholics and 10 controls across three experimental conditions and 30 trials.
The proposed models provide a framework for condensing the oscillatory behavior of populations across different tasks, isolating the fundamental components and offering the practitioner multiple perspectives for analysis.
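The AR(2) kernel underlying the mixture can be made concrete: an AR(2) process with complex characteristic roots rho * exp(±i * omega) has a spectral density that peaks near omega, with rho controlling the bandwidth. The numeric check below uses hypothetical values omega = 1.0, rho = 0.95.

```python
import cmath
import math

def ar2_spectrum(freq, peak_freq, rho, sigma2=1.0):
    # AR(2) with roots rho * exp(+/- i * peak_freq):
    phi1 = 2.0 * rho * math.cos(peak_freq)   # location parameter
    phi2 = -rho ** 2                         # scale (bandwidth) parameter
    z = cmath.exp(-1j * freq)
    denom = abs(1 - phi1 * z - phi2 * z * z) ** 2
    return sigma2 / (2.0 * math.pi * denom)

# Locate the spectral peak on a grid over (0, pi).
grid = [i * math.pi / 1000 for i in range(1, 1000)]
vals = [ar2_spectrum(w, 1.0, 0.95) for w in grid]
peak = grid[max(range(len(vals)), key=vals.__getitem__)]
print(round(peak, 2))  # peak lies near omega = 1.0
```

As rho approaches 1 the peak sharpens, which is why (peak_freq, rho) can serve as the (location, bandwidth) parameters of each mixture component.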
110

Bayesian Restricted Likelihood Methods

Lewis, John Robert January 2014 (has links)
No description available.
