121
Application of Bootstrap in Approximate Bayesian Computation (ABC). Nyman, Ellinor, January 2023.
The ABC algorithm is a Bayesian method that simulates samples from the posterior distribution. In this thesis, the method is applied to both synthetic and observed data from a regression model. Under a normal error distribution, a conjugate prior and the likelihood function are used in the algorithm. Additionally, a bootstrap method is implemented in a modified algorithm to provide an alternative that does not require a normal error distribution. The results of both methods are then presented and compared with the analytic posterior under a conjugate prior in order to evaluate their performance. Lastly, advantages and possible issues are discussed.
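As an illustration of the rejection-ABC idea this abstract describes, here is a minimal sketch for a toy linear regression. The prior, tolerance, and summary statistics below are illustrative assumptions, not the thesis's actual choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: data from a simple linear regression y = a + b*x + noise.
true_a, true_b = 1.0, 2.0
x = np.linspace(0.0, 1.0, 50)
y_obs = true_a + true_b * x + rng.normal(0.0, 0.5, size=x.size)

def summary(y):
    """Summary statistics: least-squares intercept and slope."""
    slope, intercept = np.polyfit(x, y, 1)   # polyfit returns highest degree first
    return np.array([intercept, slope])

s_obs = summary(y_obs)

# Rejection step: draw (a, b) from the prior, simulate data, and keep draws
# whose simulated summaries fall within tolerance eps of the observed ones.
n_draws, eps = 20000, 0.3
accepted = []
for _ in range(n_draws):
    a, b = rng.normal(0.0, 2.0), rng.normal(0.0, 2.0)   # vague prior (illustrative)
    y_sim = a + b * x + rng.normal(0.0, 0.5, size=x.size)
    if np.linalg.norm(summary(y_sim) - s_obs) < eps:
        accepted.append((a, b))

post = np.array(accepted)
print(post.mean(axis=0))   # approximate posterior mean of (a, b)
```

Shrinking `eps` tightens the approximation at the cost of fewer accepted draws, which is the central trade-off in rejection ABC.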
122
Bayesian Uncertainty Quantification while Leveraging Multiple Computer Model Runs. Walsh, Stephen A., 22 June 2023.
In the face of spatially correlated data, Gaussian process regression is a very common modeling approach. Given observational data, the kriging equations provide the best linear unbiased predictor for the mean at unobserved locations. However, when a computer model provides a complete grid of forecasted values, kriging does not apply. To quantify the uncertainty of computer model output in this setting, we leverage information from a collection of computer model runs (e.g., historical forecast and observation pairs for tropical cyclone precipitation totals) through a Bayesian hierarchical framework. This framework allows us to combine information and account for the spatial correlation within and across computer model outputs. Maximum likelihood estimates and the corresponding Hessian matrices for the Gaussian process parameters are input to a Gibbs sampler, which provides posterior distributions for the parameters of interest. These samples are used to generate predictions that quantify the uncertainty of a given computer model run (e.g., a tropical cyclone precipitation forecast). We then extend this framework using deep Gaussian processes to allow for nonstationary covariance structure, applied to multiple computer model runs from a cosmology application. We also perform sensitivity analyses to understand which parameter inputs most strongly affect the cosmological computer model output. / Doctor of Philosophy / A crucial theme when analyzing spatial data is that locations closer together are more likely to have similar output values (for example, daily precipitation totals). For a particular event, a common modeling approach for spatial data is to observe data at numerous locations and make predictions for locations that were unobserved. In this work, we extend this within-event modeling approach by additionally learning about the uncertainty across different events.
Through this extension, we are able to quantify uncertainty for a particular computer model (which may model tropical cyclone precipitation, for example) that provides no uncertainty on its own. This framework can be used to quantify uncertainty across a vast array of computer model outputs wherever more than one event or model run has been obtained. We also study how different input values to a computer model influence the values it produces.
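The kriging predictor mentioned in the abstract can be written in a few lines. This is a generic zero-mean simple-kriging sketch with an assumed squared-exponential covariance, not the hierarchical model developed in the thesis.

```python
import numpy as np

def cov(d, sigma2=1.0, ell=0.3):
    """Squared-exponential covariance as a function of distance (assumed form)."""
    return sigma2 * np.exp(-(d / ell) ** 2)

# Observed 1-D locations and values (toy data)
locs = np.array([0.0, 0.2, 0.5, 0.9])
y = np.sin(2 * np.pi * locs)

# Simple-kriging predictor at a new location s0: k' K^{-1} y,
# with kriging variance sigma^2 - k' K^{-1} k.
s0 = 0.35
K = cov(np.abs(locs[:, None] - locs[None, :])) + 1e-8 * np.eye(locs.size)
k = cov(np.abs(locs - s0))
Kinv_k = np.linalg.solve(K, k)
y_hat = Kinv_k @ y                 # best linear unbiased predictor at s0
var_hat = cov(0.0) - k @ Kinv_k    # kriging variance at s0
print(y_hat, var_hat)
```

The kriging variance shrinks toward zero as `s0` approaches an observed location, which is the sense in which kriging interpolates the data exactly.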
123
Dry Static Friction in Metals: Experiments and Micro-Asperity Based Modeling. Sista, Sri Narasimha Bhargava, January 2014.
No description available.
124
Quantification of Model-Form, Predictive, and Parametric Uncertainties in Simulation-Based Design. Riley, Matthew E., 7 September 2011.
No description available.
125
Bayesian Nonparametric Methods with Applications in Longitudinal, Heterogeneous and Spatiotemporal Data. Duan, Li, 19 October 2015.
No description available.
126
Bayesian estimation by sequential Monte Carlo sampling for nonlinear dynamic systems. Chen, Wen-shiang, 17 June 2004.
No description available.
127
Measuring the Effects of Satisfaction: Linking Customers, Employees, and Firm Financial Performance. Dotson, Jeffrey P., 15 July 2009.
No description available.
128
Generating Learning Algorithms: Hidden Markov Models as a Case Study. Szymczak, Daniel, 04 1900.
This thesis presents the design and implementation of a source code generator for Bayesian statistics. The specific focus of this case study is to produce usable source code for handling Hidden Markov Models (HMMs) from a Domain Specific Language (DSL). Domain specific languages allow domain experts to design their source code from the perspective of the problem domain. The goal of designing in this way is to increase development productivity without requiring extensive programming knowledge. / Master of Applied Science (MASc)
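For context, the kind of HMM computation that such generated code must support can be sketched directly. This hand-written forward recursion uses made-up parameters and is not output of the thesis's DSL.

```python
import numpy as np

# Tiny 2-state HMM over the symbol alphabet {0, 1} (illustrative parameters)
pi = np.array([0.6, 0.4])                  # initial state distribution
A = np.array([[0.7, 0.3], [0.4, 0.6]])     # state transition matrix
B = np.array([[0.9, 0.1], [0.2, 0.8]])     # emission probabilities per state

def forward(obs):
    """Return P(obs) under the HMM via the forward recursion."""
    alpha = pi * B[:, obs[0]]              # initialize with first emission
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # propagate and weight by emission
    return alpha.sum()

print(forward([0, 1, 0]))   # ~0.1089
```

The probabilities of all length-T observation sequences sum to one, which gives a quick sanity check on any generated implementation.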
129
A Probabilistic Characterization of Shark Movement Using Location Tracking Data. Ackerman, Samuel, January 2018.
Our data consist of measurements of 22 sharks' movements within a 366-acre tidal basin. The measurements are made at irregular time points over a 16-month interval; constant-length observation intervals would have been desirable but are often infeasible in practice. We model the sharks' paths at short, constant-length intervals by inferring their behavior (feeding vs. transiting), interpolating their locations, and estimating their parameters of motion (speed and turning angle) in environmental and ecological contexts. We are interested in inferring regional differences in the sharks' behavior, as well as behavioral interaction between them. Our method uses particle filters, a computational Bayesian technique designed to sequentially model a dynamic system. We discuss how resampling is used to approximate arbitrary densities and illustrate its use in a simple example of a particle-filter implementation of a state-space model. We then introduce a particular model formulation that uses conditioning to introduce unobserved parameters for the sharks' behaviors. We show how the irregularly observed shark locations can be modeled by interpolation as a set of movements at constant-length time intervals. We use a spline method to generate approximations of the ground truth at these intervals for comparison with our model. Finally, we demonstrate our model's estimates of the sharks' behavioral and ecological parameters of interest on a subset of the observed data. / Statistics
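A bootstrap particle filter of the kind the abstract describes (propagate, weight, resample) can be sketched for a toy one-dimensional state-space model. The random-walk dynamics and noise levels here are illustrative, not the shark-movement model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a 1-D random-walk latent state with noisy observations
T, q, r = 50, 0.1, 0.5
x = np.cumsum(rng.normal(0.0, q, T))      # latent state trajectory
y = x + rng.normal(0.0, r, T)             # noisy observations

# Bootstrap particle filter: propagate particles through the dynamics,
# weight by the observation likelihood, then resample.
N = 1000
particles = rng.normal(0.0, 1.0, N)
est = []
for t in range(T):
    particles = particles + rng.normal(0.0, q, N)       # propagate
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)    # Gaussian likelihood weights
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                    # multinomial resampling
    particles = particles[idx]
    est.append(particles.mean())

est = np.array(est)
print(np.mean((est - x) ** 2))   # filtered MSE, well below the obs variance r**2
```

The resampling step is what lets the weighted particle cloud approximate an arbitrary filtering density, as the abstract's simple example discusses.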
130
Evaluating and Improving Performance of Bisulfite Short Reads Alignment and the Identification of Differentially Methylated Sites. Tran, Hong Thi Thanh, 18 January 2018.
Large-scale bisulfite treatment combined with short-read sequencing technology allows comprehensive estimation of the methylation states of cytosines in the genomes of different tissues, cell types, and developmental stages. Accurate characterization of DNA methylation is essential for understanding genotype-phenotype associations, gene-environment interactions, diseases, and cancer. The thesis first evaluates the performance of several commonly used bisulfite short-read mappers and investigates how pre-processing the data might affect their performance. Aligning bisulfite short reads to a reference genome remains a challenging task: in practice, only a limited proportion of bisulfite-treated DNA reads can be mapped uniquely (around 50-70%), while a significant proportion of reads (called multireads) align to multiple genomic locations. The thesis outlines a strategy to improve the mapping efficiency of existing bisulfite short-read software by finding unique locations for multireads. Analyses of both simulated data and real hairpin bisulfite sequencing data show that our strategy can effectively assign approximately 70% of the multireads to their best locations with up to 90% accuracy, leading to a significant increase in overall mapping efficiency.
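The core difficulty of bisulfite alignment (unmethylated cytosines sequence as thymines) is easy to illustrate. The sequences below are made up, and real mappers operate on fully converted genomes rather than toy strings.

```python
# After bisulfite treatment, unmethylated Cs read as Ts, so a read no longer
# matches its reference exactly. Mappers therefore align in C->T converted
# space, where both read and reference have all Cs replaced by Ts.
def bs_convert(seq):
    """In-silico C->T conversion of a DNA sequence."""
    return seq.replace("C", "T")

ref = "ACGTCCGTAC"    # hypothetical reference fragment
read = "ATGTTCGTAT"   # bisulfite read: three Cs converted, one methylated C retained

print(read == ref)                           # direct comparison fails
print(bs_convert(read) == bs_convert(ref))   # converted-space comparison succeeds
```

Because conversion collapses C and T, many distinct genomic locations become indistinguishable in converted space, which is one reason the multiread problem described above is so pronounced for bisulfite data.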
The most common and essential downstream task in DNA methylation analysis is to detect differentially methylated cytosines (DMCs). Although many statistical methods have been applied to detect DMCs, the available tools remain inconsistent with one another in the sites they identify. We adapt wavelet-based functional mixed models (WFMM) to detect DMCs. Analyses of simulated Arabidopsis data show that WFMM has higher sensitivity and specificity in detecting DMCs than existing methods, especially when methylation differences are small. Analyses of data from monozygotic twins with different pain sensitivity also show that WFMM finds more DMCs relevant to pain sensitivity than methylKit does. In addition, we provide a strategy for modifying the default settings of both WFMM and methylKit to better suit a given methylation profile, thus improving the accuracy of DMC detection.
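For a concrete sense of what "detecting a DMC" means at a single site, here is the simplest possible baseline: a two-proportion test on methylated read counts in two samples. This is a generic illustration, not the WFMM approach the thesis develops.

```python
import math

def dmc_pvalue(meth1, cov1, meth2, cov2):
    """Two-sided p-value for a difference in methylation proportion at one site.

    meth*/cov* are methylated read counts and total coverage in each sample.
    """
    p1, p2 = meth1 / cov1, meth2 / cov2
    pooled = (meth1 + meth2) / (cov1 + cov2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / cov1 + 1 / cov2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))   # two-sided normal p-value

print(dmc_pvalue(45, 50, 10, 50))   # strongly differential site: tiny p-value
print(dmc_pvalue(25, 50, 24, 50))   # similar methylation levels: large p-value
```

Per-site tests like this ignore spatial correlation along the genome, which is exactly the structure that functional approaches such as WFMM are designed to exploit.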
Population growth and climate change leave billions of people around the world living under water-scarcity conditions. Therefore, the use of reclaimed water (treated wastewater) is pivotal for water sustainability. Recently, researchers discovered microbial regrowth problems in reclaimed water distribution systems (RWDs). The third part of the thesis involves: 1) identifying fundamental conditions that affect the proliferation of antibiotic resistance genes (ARGs), 2) identifying the effects of water chemistry and water age on microbial regrowth, and 3) characterizing the co-occurrence of ARGs and/or mobile genetic elements (MGEs), i.e., plasmids, in simulated RWDs. Analyses of preliminary results from simulated RWDs show that biofilms, the bulk water environment, temperature, and disinfectant type significantly influence the composition of antibiotic-resistant bacteria (ARB) communities. In particular, biofilms create a favorable environment for ARGs to diversify, though with lower total ARG populations. ARGs are least diverse at 30°C and most diverse at 22°C. Disinfectants reduce both ARG populations and ARG diversity, with chloramines keeping ARG populations and diversity lowest. Disinfectants shape the resistome more effectively in the bulk water environment than in biofilms. Network analysis on assembly data is performed to determine which ARG pairs most often co-occur. The Bayesian network is more consistent with the co-occurrence network constructed from assembly data than is the network based on Spearman's correlation of ARG abundance profiles. / Ph. D. / The Human Genome Project has lately attracted much public attention. Amid the flood of big genomic data, understanding and extracting valuable information from the data remains a challenge. The thesis first evaluates the performance of different genome analysis tools.
After that, the thesis outlines a strategy to improve the overall performance of whole-genome analysis tools, contributing to more accurate identification of the mutations responsible for cancer and disease. Population growth and climate change leave billions of people around the world living under water-scarcity conditions, so the use of reclaimed water (treated wastewater) is pivotal for water sustainability. Recently, researchers discovered microbial regrowth problems in reclaimed water distribution systems, which can worsen the existing problem of antibiotic resistance spread. The thesis identifies fundamental factors that shape the microbial communities in reclaimed water systems in order to limit the spread of antibiotic resistance.
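The Spearman-correlation co-occurrence network mentioned in the technical summary can be sketched on synthetic abundance profiles. The gene names and the edge threshold here are placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic ARG abundance profiles across 30 samples: argA and argB
# co-occur (shared latent signal), argC varies independently.
n_samples = 30
base = rng.normal(size=n_samples)
abundance = {
    "argA": base + rng.normal(0.0, 0.1, n_samples),
    "argB": base + rng.normal(0.0, 0.1, n_samples),
    "argC": rng.normal(size=n_samples),
}

def spearman(x, y):
    """Spearman rank correlation (no tie handling; fine for continuous data)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

# Draw an edge between genes whose |rho| exceeds a (placeholder) threshold
genes = list(abundance)
edges = [(g, h) for i, g in enumerate(genes) for h in genes[i + 1:]
         if abs(spearman(abundance[g], abundance[h])) > 0.8]
print(edges)
```

Rank correlation is a natural choice for abundance data because it is robust to the heavy-tailed, non-normal distributions typical of metagenomic counts, though as the summary notes, such correlation networks can disagree with assembly-based co-occurrence evidence.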