1 
Bayesian Updating and Statistical Inference for Beta-binomial Models. January 2018 (has links)
The Beta-binomial distribution is often employed as a model for count data in cases where the observed dispersion is greater than would be expected under the standard binomial distribution. Parameter estimation in this setting is typically performed using a Bayesian approach, which requires specifying appropriate prior distributions for the parameters. In many applications, incorporating estimates from previous analyses can offer advantages over naive or diffuse priors. An example is the food security setting, where baseline consumption surveys can inform parameter estimation in crisis situations during which data must be collected hastily from smaller samples of individuals. We have developed an approach for Bayesian updating in the beta-binomial model that incorporates adjustable prior weights and enables inference using a bivariate normal approximation at the mode of the posterior distribution. Our methods, which are implemented in the R programming environment, include tools for estimating the statistical power to detect changes in parameter values. / Gorzycka, Aleksandra
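The abstract does not spell out the updating scheme, but one plausible reading of "adjustable prior weights" is a conjugate Beta prior whose effective sample size is scaled by a weight before being combined with new count data. The sketch below illustrates that idea in Python; the function name and weighting rule are assumptions for illustration, not the author's R implementation (which must also handle the overdispersion parameter, for which no conjugate update exists).

```python
def beta_binomial_update(a0, b0, k, n, w=1.0):
    """Weighted conjugate Beta update for a binomial proportion.

    a0, b0 : Beta prior parameters (e.g. from a baseline survey)
    k, n   : successes and trials in the new sample
    w      : prior weight in [0, 1]; w=1 keeps the full prior,
             w=0 discards it (reduces to a flat Beta(1, 1) start)
    """
    a = 1.0 + w * (a0 - 1.0) + k
    b = 1.0 + w * (b0 - 1.0) + (n - k)
    return a, b

# Baseline survey suggested p ~ 0.3 (Beta(30, 70)); hasty crisis sample: 18/40
a, b = beta_binomial_update(30, 70, k=18, n=40, w=0.5)
posterior_mean = a / (a + b)
```

Down-weighting the prior (w = 0.5 here) lets the smaller crisis sample move the posterior more than a full-strength baseline prior would.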

2 
Applying an Intrinsic Conditional Autoregressive Reference Prior for Areal Data. Porter, Erica May. 09 July 2019 (has links)
Bayesian hierarchical models are useful for modeling spatial data because they have flexibility to accommodate complicated dependencies that are common to spatial data. In particular, intrinsic conditional autoregressive (ICAR) models are commonly assigned as priors for spatial random effects in hierarchical models for areal data corresponding to spatial partitions of a region. However, selection of prior distributions for these spatial parameters presents a challenge to researchers. We present and describe ref.ICAR, an R package that implements an objective Bayes intrinsic conditional autoregressive prior on a vector of spatial random effects. This model provides an objective Bayesian approach for modeling spatially correlated areal data. ref.ICAR enables analysis of spatial areal data for a specified region, given user-provided data and information about the structure of the study region. The ref.ICAR package performs Markov Chain Monte Carlo (MCMC) sampling and outputs posterior medians, intervals, and trace plots for fixed effect and spatial parameters. Finally, the functions provide regional summaries, including medians and credible intervals for fitted values by subregion. / Master of Science / Spatial data is increasingly relevant in a wide variety of research areas. Economists, medical researchers, ecologists, and policymakers all make critical decisions about populations using data that naturally display spatial dependence. One such data type is areal data; data collected at county, habitat, or tract levels are often spatially related. Most convenient software platforms provide analyses for independent data, as the introduction of spatial dependence increases the complexity of corresponding models and computation. Use of analyses with an independent data assumption can lead researchers and policymakers to make incorrect, simplistic decisions.
Bayesian hierarchical models can be used to effectively model areal data because they have flexibility to accommodate complicated dependencies that are common to spatial data. However, use of hierarchical models increases the number of model parameters and requires specification of prior distributions. We present and describe ref.ICAR, an R package available to researchers that automatically implements an objective Bayesian analysis that is appropriate for areal data.
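As a rough illustration of the structure an ICAR prior places on spatial random effects, the sketch below builds the ICAR precision matrix Q = D − W from a binary neighbourhood (adjacency) matrix. Python and the helper name are illustrative choices only; the ref.ICAR package itself is written in R.

```python
import numpy as np

def icar_precision(W):
    """ICAR precision matrix Q = D - W for a binary adjacency matrix W.

    Q is rank-deficient (every row sums to zero), so the ICAR prior is
    improper; in practice a sum-to-zero constraint on the random effects
    makes the posterior well defined.
    """
    D = np.diag(W.sum(axis=1))
    return D - W

# Four subregions arranged in a line: 1-2, 2-3, 3-4 are neighbours
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
Q = icar_precision(W)
```

The diagonal of Q counts each subregion's neighbours, so interior regions are shrunk toward more neighbours than edge regions.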

3 
Incorporating Historical Data via Bayesian Analysis Based on the Logit Model. Chenxi, Yu. January 2018 (has links)
This thesis presents a Bayesian approach to incorporating historical data. In statistical inference, a large sample is usually required to establish strong evidence; in most bioassay experiments, however, the dataset is of limited size. We propose a method that incorporates control-group data from historical studies. The approach is framed in the context of testing whether an increased dosage of a chemical is associated with an increased probability of an adverse event. To test whether such a relationship exists, the proposed approach compares two logit models via a Bayes factor. In particular, we eliminate the effect of survival time by using the poly-k test. We evaluate the performance of the proposed approach by applying it to six simulated scenarios. / Thesis / Master of Science (MSc)
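The model comparison described above can be sketched with a BIC approximation to the Bayes factor, which is a standard stand-in and not necessarily the thesis's exact prior-based computation; the poly-k survival adjustment and historical-control prior are omitted, and the dosing design and all names below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_loglik(beta, dose, y):
    # Bernoulli log-likelihood under logit(p) = b0 + b1 * dose
    p = np.clip(expit(beta[0] + beta[1] * dose), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def bic_bayes_factor(dose, y):
    """BF10 for 'dose effect' (M1) vs 'no dose effect' (M0), BIC-approximated."""
    n = len(y)
    p0 = y.mean()                                   # M0: intercept only
    ll0 = np.sum(y * np.log(p0) + (1 - y) * np.log(1 - p0))
    bic0 = -2 * ll0 + 1 * np.log(n)
    fit = minimize(neg_loglik, x0=np.zeros(2), args=(dose, y))   # M1
    bic1 = 2 * fit.fun + 2 * np.log(n)
    return np.exp((bic0 - bic1) / 2)                # BF10 > 1 favours M1

rng = np.random.default_rng(0)
dose = np.repeat([0.0, 1.0, 2.0], 40)               # hypothetical design
y = rng.binomial(1, expit(-1.0 + 1.2 * dose))       # simulated adverse events
bf10 = bic_bayes_factor(dose, y)
```

With a genuine dose effect in the simulation, the Bayes factor should favour the dose-effect model.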

4 
Bayesian numerical and approximation techniques for ARMA time series. Marriott, John M. January 1989 (has links)
No description available.

5 
Robust variational Bayesian clustering for underdetermined speech separation. Zohny, Zeinab Y. January 2016 (has links)
The main focus of this thesis is the enhancement of the statistical framework employed for underdetermined time-frequency (TF) masking blind separation of speech. While humans are capable of extracting a speech signal of interest in the presence of interference and noise, speech recognition systems and hearing aids cannot match this psychoacoustic ability: they perform well in noise-free, reverberation-free environments but suffer in realistic ones. Time-frequency masking algorithms based on computational auditory scene analysis attempt to separate multiple sound sources from only two reverberant stereo mixtures. They essentially rely on the sparsity that binaural cues exhibit in the time-frequency domain to generate masks which extract individual sources from their corresponding spectrogram points, thereby addressing the problem of underdetermined convolutive speech separation. Statistically, this can be interpreted as a classical clustering problem. For analytical simplicity, a finite mixture of Gaussian distributions is commonly used in TF masking algorithms for modelling interaural cues. Such a model is, however, sensitive to outliers; therefore, a robust probabilistic model based on the Student's t-distribution is first proposed to improve the robustness of the statistical framework. This heavy-tailed distribution, compared with the Gaussian, can better capture outlier values and thereby lead to more accurate probabilistic masks for source separation. This non-Gaussian approach is applied to the state-of-the-art MESSL algorithm, and comparative studies confirm the improved separation quality. A Bayesian clustering framework that can better model uncertainties in reverberant environments is then exploited to replace the conventional expectation-maximization (EM) algorithm within a maximum likelihood estimation (MLE) framework.

A variational Bayesian (VB) approach is then applied to the MESSL algorithm to cluster interaural phase differences, thereby avoiding the drawbacks of MLE, specifically the probable presence of singularities; experimental results confirm an improvement in separation performance. Finally, the joint modelling of interaural phase and level differences, and the integration of their non-Gaussian modelling within a variational Bayesian framework, is proposed. This approach combines the advantages of the robust estimation provided by the Student's t-distribution and the robust clustering inherent in the Bayesian approach. In other words, this general framework avoids the difficulties associated with MLE and makes use of the heavy-tailed Student's t-distribution to improve the estimation of the soft probabilistic masks at various reverberation times, particularly for sources in close proximity. Through an extensive set of simulation studies comparing the proposed approach with other TF masking algorithms under different scenarios, a significant improvement in terms of objective and subjective performance measures is achieved.
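One way to see why the Student's t model is robust to outliers is through its EM weights: each observation is down-weighted according to its standardised distance from the cluster centre, whereas under a Gaussian model every point effectively counts equally. A minimal sketch, assuming a univariate location model rather than the interaural-cue mixtures MESSL actually clusters:

```python
import numpy as np

def t_em_weights(x, mu, sigma, nu):
    """E-step weights (nu + 1) / (nu + d2) for a Student's t location model,
    where d2 is each point's squared standardised distance from mu.
    Small weights mean the point barely influences the next M-step."""
    d2 = ((x - mu) / sigma) ** 2
    return (nu + 1.0) / (nu + d2)

x = np.array([-0.2, 0.1, 0.05, -0.1, 8.0])   # last value is a gross outlier
w = t_em_weights(x, mu=0.0, sigma=1.0, nu=3.0)
```

The outlier receives a weight near 0.06 while the inliers receive weights above 1, which is precisely the mechanism that yields more accurate probabilistic masks in reverberant conditions.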

6 
Re-evaluating archaeomagnetic dates of the vitrified hillforts of Scotland. Suttie, Neil; Batt, Catherine M. 04 March 2020 (has links)
A re-analysis of archaeomagnetic data from seven vitrified hillforts in Scotland, sampled in the 1980s, shows excellent agreement with recent radiocarbon dates. In the past thirty years our knowledge of the secular variation of the geomagnetic field has greatly improved, especially for the first millennium BC, allowing earlier archaeomagnetic data to be reconsidered. We evaluate the likelihood of the data with respect to a state-of-the-art geomagnetic field model and find close coherence between the observed directions and the model for the closing centuries of the first millennium BC. A new Bayesian method of calibration gives the most likely number of separate events required to produce a series of magnetic directions. We then show that the burning of three of the four oblong forts most likely took place around the same time, and our estimate for the date of this is indistinguishable from recent radiocarbon dates from another fort of similar type.

7 
On 14C Bayesian Analysis and the Japanese Version of the Calibration Software OxCal. NAKAMURA, Toshio; NISHIMOTO, Hiroshi; OMORI, Takayuki. 03 1900 (has links)
Report of the 23rd Symposium of the Nagoya University Center for Chronological Research, fiscal year Heisei 22 (2010)

8 
Productivity prediction model based on Bayesian analysis and productivity console. Yun, Seok Jun. 29 August 2005 (has links)
Software project management is one of the most critical activities in modern software development projects. Without realistic and objective management, the software development process cannot be managed effectively. There are three general problems in project management: effort estimates are inaccurate, the actual status of a project is difficult to understand, and projects are often geographically dispersed. Estimating software development effort is one of the most challenging problems in project management. Various attempts have been made to solve the problem; so far, however, it remains a complex one. The error rate of a renowned effort estimation model can be higher than 30% of the actual productivity, so inaccurate estimation results in poor planning and prevents effective control of time and budget. In this research, we have built a productivity prediction model that uses productivity data from an ongoing project to re-evaluate the initial productivity estimate, providing managers with a better productivity estimate for project management. The actual status of a software project is not easy to understand due to problems inherent in software project attributes: the attributes are dispersed across various CASE (Computer-Aided Software Engineering) tools and are difficult to measure because they are not hard material like building blocks. In this research, we have also created a productivity console that incorporates an expert system to measure project attributes objectively and provides graphical charts to visualize project status. The productivity console uses project attributes gathered in the KB (Knowledge Base) of PAMPA II (Project Attributes Monitoring and Prediction Associate), which works with CASE tools and collects project attributes from the tools' databases. The productivity console and PAMPA II work on a network, so geographically dispersed projects can be managed via the Internet without difficulty.
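The abstract does not give the updating rule that revises the initial productivity estimate with in-project data, but a standard way to combine the two is a normal-normal conjugate update, weighting prior and observations by their precisions. The sketch below is that generic scheme, not the PAMPA II model; all names and numbers are illustrative.

```python
def update_productivity(prior_mean, prior_var, obs, obs_var):
    """Normal-normal conjugate update: combine an initial productivity
    estimate (prior) with observed per-period productivity measurements,
    weighting each source by its precision (1 / variance)."""
    n = len(obs)
    obs_mean = sum(obs) / n
    post_prec = 1.0 / prior_var + n / obs_var
    post_mean = (prior_mean / prior_var + n * obs_mean / obs_var) / post_prec
    return post_mean, 1.0 / post_prec

# Initial estimate: 10 units/week (variance 4); four observed weeks near 7
post_mean, post_var = update_productivity(10.0, 4.0, [7.0, 8.0, 6.0, 7.0], 1.0)
```

Even a few weeks of observed data pull the estimate sharply toward the project's actual rate while shrinking its uncertainty, which is the behaviour a manager wants from a re-evaluated estimate.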

9 
Integration in Computer Experiments and Bayesian Analysis. Karuri, Stella. January 2005 (has links)
Mathematical models are commonly used in science and industry to simulate complex physical processes. These models are implemented by computer codes which are often complex. For this reason, the codes are also expensive in terms of computation time, and this limits the number of simulations in an experiment. The codes are also deterministic, which means that output from a code has no measurement error.

One modelling approach to deterministic output from computer experiments is to assume that the output is composed of a drift component and systematic errors, which are stationary Gaussian stochastic processes. A Bayesian approach is desirable as it takes into account all sources of model uncertainty. Apart from prior specification, one of the main challenges in a complete Bayesian model is integration. We take a Bayesian approach with a Jeffreys prior on the model parameters. To integrate over the posterior, we use two approximation techniques on the log-scaled posterior of the correlation parameters. First, we approximate the Jeffreys prior on the untransformed parameters; this enables us to specify a uniform prior on the transformed parameters, which makes Markov Chain Monte Carlo (MCMC) simulations run faster. Second, we approximate the posterior with a Normal density.

A large part of the thesis is focused on the problem of integration. Integration is often a goal in computer experiments and, as previously mentioned, necessary for inference in Bayesian analysis. Sampling strategies are more challenging in computer experiments, particularly when dealing with computationally expensive functions. We focus on the problem of integration by using a sampling approach which we refer to as "GaSP integration". This approach assumes that the integrand over some domain is a Gaussian random variable. It follows that the integral itself is a Gaussian random variable and the Best Linear Unbiased Predictor (BLUP) can be used as an estimator of the integral. We show that the integration estimates from GaSP integration have lower absolute errors. We also develop the Adaptive Subregion Sampling Integration Algorithm (ASSIA) to improve GaSP integration estimates. The algorithm recursively partitions the integration domain into subregions in which GaSP integration can be applied more effectively. As a result, the adaptive algorithm varies sampling to suit the variation of the integrand. This "strategic sampling" can be used to explore the structure of functions in computer experiments.
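The BLUP idea above can be sketched directly: under a zero-mean GP prior with a Gaussian kernel, the estimate of the integral is z Kᐨ¹ y, where each entry of z is the kernel integrated against one design point (available in closed form via the error function). The kernel, length-scale, jitter, and test function below are illustrative assumptions, not the thesis's setup.

```python
import numpy as np
from scipy.special import erf

def gasp_integral(x, y, ell=0.2, jitter=1e-6):
    """BLUP of the integral of f over [0, 1] under a zero-mean GP prior with
    Gaussian kernel k(x, x') = exp(-(x - x')^2 / (2 ell^2)).

    Returns z @ K^{-1} y, where z_i = integral over [0, 1] of k(., x_i)."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / ell) ** 2)
    s = ell * np.sqrt(2.0)
    z = ell * np.sqrt(np.pi / 2.0) * (erf((1.0 - x) / s) + erf(x / s))
    alpha = np.linalg.solve(K + jitter * np.eye(len(x)), y)
    return z @ alpha

x = np.linspace(0.0, 1.0, 15)   # 15 "computer experiment" runs
y = np.sin(np.pi * x)           # deterministic code output; true integral is 2/pi
est = gasp_integral(x, y)
```

Because the GP posterior mean interpolates the (noise-free) code output, its integral tracks the true integral closely even with few runs, which is the appeal of GaSP integration for expensive functions.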

10 
Statistical analysis in downscaling climate models: wavelet and Bayesian methods in multi-model ensembles. Cai, Yihua. 2009 August 1900 (has links)
Various climate models have been developed to analyze and predict climate change; however, model uncertainties cannot be easily overcome. A statistical approach is presented here to calculate the distributions of future climate change based on an ensemble of Weather Research and Forecasting (WRF) model runs. Wavelet analysis is adopted to denoise the WRF model output. Using the denoised output, we carry out Bayesian analysis to decrease uncertainties in the CAM_KF, RRTM_KF, and RRTM_GRELL models for each downscaling region.
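The abstract does not say which wavelet or thresholding rule is applied to the WRF output; as a generic illustration of wavelet denoising, here is a one-level Haar transform with soft thresholding in plain NumPy, applied to a synthetic signal rather than model output.

```python
import numpy as np

def haar_denoise(x, threshold):
    """One-level Haar wavelet denoising with soft thresholding.
    len(x) must be even. Detail coefficients (mostly noise for a smooth
    signal) are shrunk toward zero, then the transform is inverted."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)    # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)        # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 256)
clean = np.sin(2 * np.pi * t)               # slowly varying "signal"
noisy = clean + rng.normal(0.0, 0.3, t.size)
denoised = haar_denoise(noisy, threshold=0.4)
```

For a smooth signal the detail band carries mostly noise, so thresholding it reduces mean squared error before any downstream Bayesian weighting of ensemble members.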
