1

Bayesian Updating and Statistical Inference for Beta-binomial Models

January 2018
The Beta-binomial distribution is often employed as a model for count data in cases where the observed dispersion is greater than would be expected for the standard binomial distribution. Parameter estimation in this setting is typically performed using a Bayesian approach, which requires specifying appropriate prior distributions for parameters. In the context of many applications, incorporating estimates from previous analyses can offer advantages over naive or diffuse priors. An example of this is in the food security setting, where baseline consumption surveys can inform parameter estimation in crisis situations during which data must be collected hastily on smaller samples of individuals. We have developed an approach for Bayesian updating in the beta-binomial model that incorporates adjustable prior weights and enables inference using a bivariate normal approximation for the mode of the posterior distribution. Our methods, which are implemented in the R programming environment, include tools for the estimation of statistical power to detect changes in parameter values. / Aleksandra Gorzycka
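As a rough illustration of the kind of updating described above, the sketch below fits a beta-binomial model by finding the posterior mode and taking a bivariate normal (Laplace-style) approximation around it. It is written in Python rather than R, and the penalty prior, prior weight, and toy data are illustrative assumptions, not the thesis's code or prior specification.

```python
# Hedged sketch: normal approximation at the posterior mode of a beta-binomial
# model, loosely following the approach described above. Names, priors, and
# data are illustrative assumptions, not the thesis code.
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, gammaln

def beta_binomial_logpmf(y, n, a, b):
    """log P(y | n, a, b) for the beta-binomial distribution."""
    return (gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)
            + betaln(y + a, n - y + b) - betaln(a, b))

def neg_log_posterior(theta, y, n, prior_mode, prior_weight):
    a, b = np.exp(theta)                      # work on the log scale
    loglik = beta_binomial_logpmf(y, n, a, b).sum()
    # simple quadratic penalty toward baseline estimates, scaled by a
    # user-chosen weight (a stand-in for "adjustable prior weights")
    logprior = -0.5 * prior_weight * np.sum((theta - np.log(prior_mode)) ** 2)
    return -(loglik + logprior)

# toy crisis-survey data: y successes out of n trials per cluster
y = np.array([3, 5, 2, 7, 4]); n = np.array([10, 12, 9, 15, 11])
fit = minimize(neg_log_posterior, x0=np.log([2.0, 4.0]),
               args=(y, n, np.array([2.0, 4.0]), 1.0), method="BFGS")
mode = np.exp(fit.x)       # posterior mode of (alpha, beta)
cov = fit.hess_inv         # bivariate normal approximation on the log scale
print("posterior mode (alpha, beta):", mode)
```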
2

Incorporating Historical Data via Bayesian Analysis Based on The Logit Model

Chenxi, Yu January 2018
This thesis presents a Bayesian approach to incorporating historical data. Usually, in statistical inference, a large sample size is required to establish strong evidence; however, in most bioassay experiments the dataset is of limited size. Here, we propose a method that incorporates control-group data from historical studies. The approach is framed in the context of testing whether an increased dosage of a chemical is associated with an increased probability of an adverse event. To test whether such a relationship exists, the proposed approach compares two logit models via a Bayes factor. In particular, we eliminate the effect of survival time by using the poly-k test. We test the performance of the proposed approach by applying it to six simulated scenarios. / Thesis / Master of Science (MSc)
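A minimal sketch of the kind of comparison described above: two logit models, with and without a dose effect, compared through an approximate Bayes factor. The BIC approximation, the simulated dose-response data, and the statsmodels fitting are illustrative stand-ins; the thesis's prior specification and its poly-k survival adjustment are not reproduced here.

```python
# Hedged sketch: comparing a dose-effect logit model against a null logit model
# with an approximate Bayes factor. Uses a BIC approximation rather than the
# thesis's exact priors; data and names are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
dose = np.repeat([0.0, 1.0, 2.0, 4.0], 25)           # four dose groups
p = 1 / (1 + np.exp(-(-2.0 + 0.4 * dose)))           # true dose-response curve
y = rng.binomial(1, p)                                # adverse-event indicator

X1 = sm.add_constant(dose)                            # model with a dose effect
X0 = np.ones((len(y), 1))                             # intercept-only null model
fit1 = sm.Logit(y, X1).fit(disp=0)
fit0 = sm.Logit(y, X0).fit(disp=0)

# BIC approximation: log BF_10 ~= -(BIC_1 - BIC_0) / 2
log_bf10 = -(fit1.bic - fit0.bic) / 2
print("approximate log Bayes factor (dose vs. null):", log_bf10)
```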
3

Applying an Intrinsic Conditional Autoregressive Reference Prior for Areal Data

Porter, Erica May 09 July 2019
Bayesian hierarchical models are useful for modeling spatial data because they have flexibility to accommodate complicated dependencies that are common to spatial data. In particular, intrinsic conditional autoregressive (ICAR) models are commonly assigned as priors for spatial random effects in hierarchical models for areal data corresponding to spatial partitions of a region. However, selection of prior distributions for these spatial parameters presents a challenge to researchers. We present and describe ref.ICAR, an R package that implements an objective Bayes intrinsic conditional autoregressive prior on a vector of spatial random effects. This model provides an objective Bayesian approach for modeling spatially correlated areal data. ref.ICAR enables analysis of spatial areal data for a specified region, given user-provided data and information about the structure of the study region. The ref.ICAR package performs Markov Chain Monte Carlo (MCMC) sampling and outputs posterior medians, intervals, and trace plots for fixed effect and spatial parameters. Finally, the functions provide regional summaries, including medians and credible intervals for fitted values by subregion. / Master of Science / Spatial data is increasingly relevant in a wide variety of research areas. Economists, medical researchers, ecologists, and policymakers all make critical decisions about populations using data that naturally display spatial dependence. One such data type is areal data; data collected at county, habitat, or tract levels are often spatially related. Most convenient software platforms provide analyses for independent data, as the introduction of spatial dependence increases the complexity of corresponding models and computation. Use of analyses with an independent data assumption can lead researchers and policymakers to make incorrect, simplistic decisions. Bayesian hierarchical models can be used to effectively model areal data because they have flexibility to accommodate complicated dependencies that are common to spatial data. However, use of hierarchical models increases the number of model parameters and requires specification of prior distributions. We present and describe ref.ICAR, an R package available to researchers that automatically implements an objective Bayesian analysis that is appropriate for areal data.
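The sketch below is not the ref.ICAR API; it only illustrates, under stated assumptions, the intrinsic CAR structure that such priors build on: for an adjacency matrix W with diagonal degree matrix D, the ICAR prior on a vector of spatial effects has unnormalised log density -0.5 * tau * phi' (D - W) phi, usually paired with a sum-to-zero constraint. The four-subregion adjacency is a made-up example.

```python
# Hedged sketch (not the ref.ICAR API): the intrinsic CAR structure underlying
# such priors, on a toy four-subregion map.
import numpy as np

W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # neighbour structure of 4 subregions
D = np.diag(W.sum(axis=1))                   # degree matrix
Q = D - W                                    # ICAR precision matrix (rank-deficient)

def icar_logdensity(phi, tau):
    """Unnormalised ICAR log density for spatial effects phi (sum-to-zero)."""
    phi = phi - phi.mean()                   # impose the sum-to-zero constraint
    return -0.5 * tau * phi @ Q @ phi

print(icar_logdensity(np.array([0.3, -0.1, 0.2, -0.4]), tau=1.0))
```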
4

Bayesian numerical and approximation techniques for ARMA time series

Marriott, John M. January 1989
No description available.
5

On the Japanese version of OxCal, a software package for 14C Bayesian analysis and calibration

NAKAMURA, Toshio, NISHIMOTO, Hiroshi, OMORI, Takayuki 03 1900
Report of the 23rd Nagoya University Center for Chronological Research Symposium, fiscal year 2010 (Heisei 22)
6

Robust variational Bayesian clustering for underdetermined speech separation

Zohny, Zeinab Y. January 2016
The main focus of this thesis is the enhancement of the statistical framework employed for underdetermined T-F masking blind separation of speech. While humans are capable of extracting a speech signal of interest in the presence of other interference and noise, actual speech recognition systems and hearing aids cannot match this psychoacoustic ability. They perform well in noise-free and reverberation-free environments but suffer in realistic environments. Time-frequency masking algorithms based on computational auditory scene analysis attempt to separate multiple sound sources from only two reverberant stereo mixtures. They essentially rely on the sparsity that binaural cues exhibit in the time-frequency domain to generate masks which extract individual sources from their corresponding spectrogram points, thereby solving the problem of underdetermined convolutive speech separation. Statistically, this can be interpreted as a classical clustering problem. Due to its analytical simplicity, a finite mixture of Gaussian distributions is commonly used in T-F masking algorithms for modelling interaural cues. Such a model is, however, sensitive to outliers; therefore, a robust probabilistic model based on the Student's t-distribution is first proposed to improve the robustness of the statistical framework. This heavy-tailed distribution, as compared to the Gaussian distribution, can potentially better capture outlier values and thereby lead to more accurate probabilistic masks for source separation. This non-Gaussian approach is applied to the state-of-the-art MESSL algorithm and comparative studies are undertaken to confirm the improved separation quality. A Bayesian clustering framework that can better model uncertainties in reverberant environments is then exploited to replace the conventional expectation-maximization (EM) algorithm within a maximum likelihood estimation (MLE) framework. A variational Bayesian (VB) approach is then applied to the MESSL algorithm to cluster interaural phase differences, thereby avoiding the drawbacks of MLE, specifically the possible presence of singularities; experimental results confirm an improvement in the separation performance. Finally, the joint modelling of the interaural phase and level differences, together with the integration of their non-Gaussian modelling within a variational Bayesian framework, is proposed. This approach combines the advantages of the robust estimation provided by the Student's t-distribution and the robust clustering inherent in the Bayesian approach. In other words, this general framework avoids the difficulties associated with MLE and makes use of the heavy-tailed Student's t-distribution to improve the estimation of the soft probabilistic masks at various reverberation times, particularly for sources in close proximity. Through an extensive set of simulation studies that compare the proposed approach with other T-F masking algorithms under different scenarios, a significant improvement in terms of objective and subjective performance measures is achieved.
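As a simplified illustration of the clustering step discussed above, the sketch below applies a variational Bayesian mixture to synthetic interaural phase and level differences and reads off soft masks. It uses scikit-learn's Gaussian VB mixture as a stand-in for the Student's t and MESSL-based models; the cue distributions and parameters are invented for illustration.

```python
# Hedged sketch: variational Bayesian clustering of synthetic interaural cues
# (phase and level differences) into soft source masks. A simplified stand-in
# for the Student's t / MESSL-style models discussed above.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
# two "sources": clusters of (IPD, ILD) values over time-frequency points
src1 = rng.normal(loc=[0.5, 3.0], scale=[0.2, 1.0], size=(500, 2))
src2 = rng.normal(loc=[-0.8, -4.0], scale=[0.2, 1.0], size=(500, 2))
cues = np.vstack([src1, src2])

vb = BayesianGaussianMixture(n_components=4, weight_concentration_prior=0.1,
                             max_iter=500, random_state=0).fit(cues)
masks = vb.predict_proba(cues)     # soft probabilistic masks per T-F point
print("effective components:", np.sum(vb.weights_ > 0.05))
```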
7

Re-evaluating archaeomagnetic dates of the vitrified hillforts of Scotland

Suttie, Neil, Batt, Catherine M. 04 March 2020
A re-analysis of archaeomagnetic data from seven vitrified hillforts in Scotland, sampled in the 1980s, shows excellent agreement with recent radiocarbon dates. In the past thirty years our knowledge of the secular variation of the geomagnetic field has greatly improved, especially in the 1st millennium BC, allowing earlier archaeomagnetic data to be reconsidered. We evaluate the likelihood of the data with respect to a state-of-the-art geomagnetic field model and find close coherence between the observed directions and the model for the closing centuries of the first millennium BC. A new Bayesian method of calibration gives the most likely number of separate events required to produce a series of magnetic directions. We then show that the burning of three of the four oblong forts most likely took place around the same time, and our estimate for the date of this is indistinguishable from recent radiocarbon dates from another fort of similar type.
8

Statistical analysis in downscaling climate models : wavelet and Bayesian methods in multimodel ensembles

Cai, Yihua 2009 August 1900
Various climate models have been developed to analyze and predict climate change; however, model uncertainties cannot be easily overcome. A statistical approach is presented in this work to calculate the distributions of future climate change based on an ensemble of Weather Research and Forecasting (WRF) model runs. Wavelet analysis is adopted to de-noise the WRF model output. Using the de-noised model output, we carry out Bayesian analysis to decrease uncertainties in the CAM_KF, RRTM_KF and RRTM_GRELL model configurations for each downscaling region.
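A hedged sketch of the two ingredients mentioned above: wavelet de-noising of a model output series with PyWavelets, followed by a naive likelihood-based weighting of ensemble members. The series, threshold rule, noise scale, and weighting scheme are illustrative assumptions rather than the thesis's procedure.

```python
# Hedged sketch: wavelet de-noising of climate-model output followed by a simple
# Bayesian-style weighting of ensemble members. Illustrative only.
import numpy as np
import pywt

rng = np.random.default_rng(2)
obs = np.sin(np.linspace(0, 6 * np.pi, 256))                # stand-in observations
members = {name: obs + bias + rng.normal(0, 0.3, obs.size)  # noisy WRF-like members
           for name, bias in [("CAM_KF", 0.1), ("RRTM_KF", -0.2), ("RRTM_GRELL", 0.3)]}

def denoise(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise scale from finest level
    thr = sigma * np.sqrt(2 * np.log(x.size))               # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: x.size]

denoised = {k: denoise(v) for k, v in members.items()}
# naive weights: Gaussian likelihood of each de-noised member against the observations
loglik = {k: -0.5 * np.sum((v - obs) ** 2) / 0.3**2 for k, v in denoised.items()}
w = np.exp(np.array(list(loglik.values())) - max(loglik.values()))
print(dict(zip(loglik, w / w.sum())))
```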
9

Modified Cocomo Model For Maintenance cost Estimation of Real Time System Software

Chakraverti, Sugandha, Kumar, Sheo, Agarwal, S. C., Chakraverti, Ashish Kumar 15 February 2012
Software maintenance is an important activity in software engineering. Over the decades, software maintenance costs have been continually reported to account for a large majority of software costs [Zelkowitz 1979, Boehm 1981, McKee 1984, Boehm 1988, Erlikh 2000]. This fact is not surprising. On the one hand, software environments and requirements are constantly changing, which leads to new software system upgrades to keep pace with the changes. On the other hand, the economic benefits of software reuse have encouraged the software industry to reuse and enhance existing systems rather than build new ones [Boehm 1981, 1999]. Thus, it is crucial for project managers to estimate and manage software maintenance costs effectively. / Accurate cost estimation of software projects is one of the most desired capabilities in the software development process. Accurate cost estimates not only help the customer make successful investments but also assist the software project manager in drawing up appropriate plans for the project and making reasonable decisions during its execution. Although there have been reports that software maintenance accounts for the majority of total software cost, software estimation research has focused considerably on new development and much less on maintenance. Cost estimation for real-time software system (RTSS) development and maintenance does not differ greatly from that for ordinary software, but some critical factors must be considered for RTSS, such as the software's response time to inputs and the processing time required to produce correct output. As with ordinary software maintenance cost estimation, existing models (i.e., Modified COCOMO-II) can be used, but only after including some critical parameters related to RTSS. Hypothetical expert input and an industry data set of eighty completed software maintenance projects were used to build the model for RTSS maintenance cost. The full model, which was derived through Bayesian analysis, yields effort estimates within 30% of the actual values 51% of the time, outperforming the original COCOMO II model, which achieved this on 34% of these projects. Further performance improvement was obtained when calibrating the full model to each individual program, generating effort estimates within 30% of the actual values 80% of the time.
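For orientation, the sketch below combines a COCOMO II-style effort equation with a PRED(30) accuracy check of the kind quoted above. The A and B constants follow the published COCOMO II calibration; the RTSS-related effort multipliers, scale factors, and project data are hypothetical.

```python
# Hedged sketch: a COCOMO II-style effort equation plus a PRED(30) check
# ("estimates within 30% of the actual values X% of the time").
import numpy as np

def cocomo_effort(ksloc, scale_factors, effort_multipliers, A=2.94, B=0.91):
    """Person-months = A * Size^E * prod(EM), with E = B + 0.01 * sum(SF)."""
    E = B + 0.01 * sum(scale_factors)
    return A * ksloc ** E * np.prod(effort_multipliers)

def pred(actual, estimated, level=0.30):
    """Fraction of projects whose estimate falls within `level` of the actual."""
    actual, estimated = np.asarray(actual), np.asarray(estimated)
    return np.mean(np.abs(estimated - actual) / actual <= level)

# hypothetical RTSS maintenance projects: size (KSLOC) and actual effort (PM)
sizes = np.array([12.0, 30.0, 8.5, 55.0])
actual_pm = np.array([55.0, 160.0, 40.0, 310.0])
estimates = [cocomo_effort(s, scale_factors=[3.1, 4.0, 4.2, 3.0, 3.5],
                           effort_multipliers=[1.10, 0.95, 1.15])  # hypothetical RTSS drivers
             for s in sizes]
print("PRED(30):", pred(actual_pm, estimates))
```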
10

Integration in Computer Experiments and Bayesian Analysis

Karuri, Stella January 2005
Mathematical models are commonly used in science and industry to simulate complex physical processes. These models are implemented by computer codes which are often complex. For this reason, the codes are also expensive in terms of computation time, and this limits the number of simulations in an experiment. The codes are also deterministic, which means that output from a code has no measurement error.

One modelling approach in dealing with deterministic output from computer experiments is to assume that the output is composed of a drift component and systematic errors, which are stationary Gaussian stochastic processes. A Bayesian approach is desirable as it takes into account all sources of model uncertainty. Apart from prior specification, one of the main challenges in a complete Bayesian model is integration. We take a Bayesian approach with a Jeffreys prior on the model parameters. To integrate over the posterior, we use two approximation techniques on the log-scaled posterior of the correlation parameters. First, we approximate the Jeffreys prior on the untransformed parameters, which enables us to specify a uniform prior on the transformed parameters. This makes Markov Chain Monte Carlo (MCMC) simulations run faster. For the second approach, we approximate the posterior with a Normal density.

A large part of the thesis is focused on the problem of integration. Integration is often a goal in computer experiments and, as previously mentioned, necessary for inference in Bayesian analysis. Sampling strategies are more challenging in computer experiments, particularly when dealing with computationally expensive functions. We focus on the problem of integration by using a sampling approach which we refer to as "GaSP integration". This approach assumes that the integrand over some domain is a Gaussian random variable. It follows that the integral itself is a Gaussian random variable and the Best Linear Unbiased Predictor (BLUP) can be used as an estimator of the integral. We show that the integration estimates from GaSP integration have lower absolute errors. We also develop the Adaptive Sub-region Sampling Integration Algorithm (ASSIA) to improve GaSP integration estimates. The algorithm recursively partitions the integration domain into sub-regions in which GaSP integration can be applied more effectively. As a result of the adaptive partitioning of the integration domain, the adaptive algorithm varies sampling to suit the variation of the integrand. This "strategic sampling" can be used to explore the structure of functions in computer experiments.
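A minimal sketch of the "GaSP integration" idea in one dimension: model the integrand as a Gaussian process and take the BLUP of its integral, which is available in closed form for an RBF kernel on [0, 1]. The kernel, length-scale, nugget, and test integrand are illustrative assumptions, not the thesis's setup or its ASSIA refinement.

```python
# Hedged sketch: GaSP integration in 1-D. Model the integrand as a zero-mean GP
# with an RBF kernel and use the BLUP of its integral over [0, 1].
import numpy as np
from scipy.stats import norm

def f(x):                       # toy integrand; true integral over [0, 1] is 1 - cos(1)
    return np.sin(x)

ell, nugget = 0.2, 1e-8
x = np.linspace(0.05, 0.95, 10)            # small design, as with expensive computer codes
y = f(x)

K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2) + nugget * np.eye(x.size)
# z_i = integral over [0, 1] of k(t, x_i) dt, closed form for the RBF kernel
z = ell * np.sqrt(2 * np.pi) * (norm.cdf((1 - x) / ell) - norm.cdf((0 - x) / ell))

blup = z @ np.linalg.solve(K, y)           # BLUP of the integral of f over [0, 1]
print("GaSP estimate:", blup, " true value:", 1 - np.cos(1))
```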
