1. Model Structure Estimation and Correction Through Data Assimilation
Bulygina, Nataliya. January 2007.
The main philosophy underlying this research is that a model should constitute a representation of both what we know and what we do not know about the structure and behavior of a system. In other words, it should summarize, as far as possible, both our degree of certainty and our degree of uncertainty, so that it facilitates statements about prediction uncertainty arising from model structural uncertainty. Based on this philosophy, the following issues were explored in the dissertation:
- Identification of a hydrologic system model based on assumptions about the perceptual and conceptual model structure only, without strong additional assumptions about its mathematical structure
- Development of a novel data assimilation method for extracting mathematical relationships between modeled variables using a Bayesian probabilistic framework, as an alternative to up-scaling of governing equations
- Evaluation of the uncertainty in predicted system response arising from three uncertainty types:
  o uncertainty caused by initial conditions,
  o uncertainty caused by inputs,
  o uncertainty caused by mathematical structure
- Merging of theory and data to identify a system, as an alternative to parameter calibration and state-updating approaches
- The possibility of correcting existing models and including descriptions of uncertainty about their mapping relationships using the proposed method
- Investigation of a simple hydrological conceptual mass balance model with two-dimensional input, one-dimensional state, and two-dimensional output at watershed scale and at different temporal scales using the method
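The assimilation machinery the abstract builds on is, at its core, sequential Bayesian conditioning of an uncertain state on noisy output observations. A minimal grid-based sketch of one such update, assuming a toy mass-balance state with a linear outflow relation Q = kS (the relation, the numbers, and the Gaussian noise are illustrative assumptions, not the dissertation's model):

```python
import numpy as np

# Grid-based Bayesian update for a one-dimensional storage state S in a toy
# mass-balance setting: outflow Q = k * S is observed with Gaussian noise.
# All values are made up for illustration.

grid = np.linspace(0.0, 100.0, 501)                 # candidate storage (mm)
prior = np.exp(-0.5 * ((grid - 40.0) / 15.0) ** 2)
prior /= prior.sum()                                # prior belief about S

def update(prior, q_obs, k=0.1, sigma=2.0):
    """One assimilation step: multiply the prior by the likelihood of the
    observed outflow and renormalize."""
    likelihood = np.exp(-0.5 * ((q_obs - k * grid) / sigma) ** 2)
    posterior = prior * likelihood
    return posterior / posterior.sum()

posterior = update(prior, q_obs=5.5)
mean = (grid * posterior).sum()
sd = np.sqrt(((grid - mean) ** 2 * posterior).sum())
print(f"posterior storage: {mean:.1f} +/- {sd:.1f} mm")
```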

2. Estimating phylogenetic trees from discrete morphological data
Wright, April Marie. 04 September 2015.
Morphological characters have a long history of use in the estimation of phylogenetic trees. Datasets consisting of morphological characters are most often analyzed using the maximum parsimony criterion, which seeks to minimize the amount of character change across a phylogenetic tree. When combined with molecular data, characters are often analyzed using model-based methods, such as maximum likelihood or, more commonly, Bayesian estimation. The efficacy of likelihood and Bayesian methods using a common model for estimating topology from discrete morphological characters, the Mk model, is poorly explored. In Chapter One, I explore the efficacy of Bayesian estimation of phylogeny, using the Mk model, under conditions that are commonly encountered in paleontological studies. Using simulated data, I describe the relative performances of parsimony and the Mk model under a range of realistic conditions that include common scenarios of missing data and rate heterogeneity. I further examine the use of the Mk model in Chapter Two. Like any model, the Mk model makes a number of assumptions. One is that transitions between character states are symmetric (i.e., there is an equal probability of changing from state 0 to state 1 and from state 1 to state 0). Many characters, including alleged Dollo characters and extremely labile characters, may not fit this assumption. I tested methods for relaxing this assumption in a Bayesian context. Using empirical datasets, I performed model fitting to demonstrate cases in which modeling asymmetric transitions among characters is preferred. I used simulated datasets to demonstrate that choosing the best-fit model of transition state symmetry can improve model fit and phylogenetic estimation. In my final chapter, I looked at the use of partitions to model datasets more appropriately. Common in molecular studies, partitioning breaks up the dataset into pieces that evolve according to similar mechanisms. These pieces, called partitions, are then modeled separately. This practice has not been widely adopted in morphological studies. I extended the PartitionFinder software, which is used in molecular studies to score different possible partition schemes to find the one that best models the dataset. I used empirical datasets to demonstrate the effects of partitioning datasets on model likelihoods and on the phylogenetic trees estimated from those datasets.
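The symmetry assumption that Chapter Two relaxes is easy to see numerically: under the Mk model the instantaneous rates of gain and loss are equal, so the probabilities of the two changes match on any branch. A sketch with assumed rates and branch length (scipy's matrix exponential gives the transition probabilities):

```python
import numpy as np
from scipy.linalg import expm

# Transition probabilities for a 2-state character: rows of the rate matrix Q
# sum to zero, and P(t) = expm(Q * t) for branch length t.

def p_matrix(q01, q10, t):
    """q01 is the 0->1 rate, q10 the 1->0 rate."""
    Q = np.array([[-q01, q01],
                  [q10, -q10]])
    return expm(Q * t)

# Symmetric (Mk) case: P(0->1) equals P(1->0) for every branch length.
print(p_matrix(1.0, 1.0, t=0.5))

# Asymmetric case, relaxing the Mk assumption: gains five times faster
# than losses, as might suit a labile or Dollo-like character.
print(p_matrix(1.0, 0.2, t=0.5))
```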

3. Parallel Stochastic Estimation on Multicore Platforms
Rosén, Olov. January 2015.
The main part of this thesis concerns parallelization of recursive Bayesian estimation methods, both linear and nonlinear. Recursive estimation deals with the problem of extracting information about parameters or states of a dynamical system, given noisy measurements of the system output, and plays a central role in signal processing, system identification, and automatic control. Solving the recursive Bayesian estimation problem is known to be computationally expensive, which often makes the methods infeasible in real-time applications and problems of large dimension. As the computational power of hardware is today increased by adding more processors on a single chip rather than by increasing the clock frequency and shrinking the logic circuits, parallelization is one of the most powerful ways of improving the execution time of an algorithm. The work of this thesis found that several of the optimal filtering methods are suitable for parallel implementation, in certain ranges of problem sizes. For many of the suggested parallelizations, a linear speedup in the number of cores has been achieved, providing up to an 8-times speedup on a dual quad-core computer. As parallel computer architectures evolve rapidly, many more processors on the same chip will soon become available. The developed methods do not, of course, scale indefinitely, but they can certainly exploit some of the computational power of the next generation of parallel platforms, allowing for optimal state estimation in real-time applications.
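One reason several optimal filters parallelize well is that their heaviest step is independent across particles or ensemble members. A sketch of that idea for a bootstrap particle filter's weight evaluation, with an assumed scalar Gaussian measurement model (not the thesis's implementation):

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def weights_chunk(args):
    """Likelihood of the observation for each particle in one chunk;
    chunks are processed on separate cores."""
    particles, y_obs, sigma = args
    return np.exp(-0.5 * ((y_obs - particles) / sigma) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    particles = rng.normal(0.0, 1.0, size=100_000)
    y_obs, sigma, n_workers = 0.3, 0.5, 4

    chunks = np.array_split(particles, n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(weights_chunk, [(c, y_obs, sigma) for c in chunks])
    w = np.concatenate(list(parts))
    w /= w.sum()   # normalized importance weights, ready for resampling
```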

4. Learning, Evolution, and Bayesian Estimation in Games and Dynamic Choice Models
Monte Calvo, Alexander. 29 September 2014.
This dissertation explores the modeling and estimation of learning in strategic and individual choice settings. While learning has been used extensively in economics, I introduce the concept into standard models in unorthodox ways. In each case, changing the perspective on what learning is drastically changes the standard models. Estimation proceeds using advanced Bayesian techniques that perform very well on simulated data.
The first chapter proposes a framework called Experienced-Based Ability (EBA) in which players increase the payoffs of a particular strategy in the future through using the strategy today. This framework is then introduced into a model of differentiated duopoly in which firms can utilize price or quantity contracts, and I explore how the resulting equilibrium is affected by changes in model parameters.
The second chapter extends the EBA model into an evolutionary setting. This new model offers a simple and intuitive way to theoretically explain complicated dynamics. Moreover, this chapter demonstrates how to estimate posterior distributions of the model's parameters using a particle filter and Metropolis-Hastings algorithm, a technique that can also be used in estimating standard evolutionary models. This allows researchers to recover estimates of unobserved fitness and skill across time while only observing population share data.
The third chapter investigates individual learning in a dynamic discrete choice setting. This chapter relaxes the assumption that individuals base decisions on an optimal policy and investigates the importance of policy learning. Q-learning is proposed as a model of individual choice when optimal policies are unknown, and I demonstrate how it can be used in the estimation of dynamic discrete choice (DDC) models. Using Bayesian Markov chain Monte Carlo techniques on simulated data, I show that the Q-learning model performs well at recovering true parameter values and thus functions as an alternative structural DDC model for researchers who want to move away from the rationality assumption. In addition, the simulated data are used to illustrate possible issues with standard structural estimation if the rationality assumption is incorrect. Lastly, using marginal likelihood analysis, I demonstrate that the Q-learning model can be used to test for the significance of learning effects if this is a concern.
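The Q-learning rule at the heart of the third chapter is compact enough to state directly: after each transition, the value of the chosen action is nudged toward the reward plus the discounted value of the best next action. A toy tabular sketch (the environment, learning rate, and reward are invented; the chapter's DDC application is far richer):

```python
import numpy as np

n_states, n_actions = 5, 2
alpha, gamma, eps = 0.1, 0.95, 0.1     # learning rate, discount, exploration
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(1)

def step(s, a):
    """Toy environment: action 1 moves forward, reward at the last state."""
    s_next = min(s + a, n_states - 1)
    return s_next, 1.0 if s_next == n_states - 1 else 0.0

for episode in range(500):
    s = 0
    while s < n_states - 1:
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next, r = step(s, a)
        # Core update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
```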

5. The "Fair" Triathlon: Equating Standard Deviations Using Non-Linear Bayesian Models
Curtis, Steven McKay. 14 May 2004.
The Ironman triathlon was created in 1978 by combining events with the longest distances for races then contested in Hawaii in swimming, cycling, and running. The Half Ironman triathlon was formed using half the distances of each of the events in the Ironman. The Olympic distance triathlon was created by combining events with the longest distances for races sanctioned by the major federations for swimming, cycling, and running. The relative importance of each event in overall race outcome was not given consideration when determining the distances of each of the races in modern triathlons. Thus, there is a general belief among triathletes that the swimming portion of the standard-distance triathlons is underweighted. We present a nonlinear Bayesian model for triathlon finishing times that models time and the standard deviation of time as functions of distance. We use this model to create "fair" triathlons by equating the standard deviations of the times taken to complete the swimming, cycling, and running events. Thus, in these "fair" triathlons, a one standard deviation improvement in any event has an equivalent impact on overall race time.
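Once a form for the standard deviation as a function of distance is fixed, the equating step is simple arithmetic: pick one event's distance and solve for the others so that all three standard deviations match. A sketch assuming a power law sd(d) = c * d**b with invented coefficients (the thesis fits its own nonlinear Bayesian model, so these numbers are purely illustrative):

```python
def fair_distance(c1, b1, d1, c2, b2):
    """Distance for event 2 matching event 1's sd, given sd(d) = c * d**b."""
    target_sd = c1 * d1 ** b1
    return (target_sd / c2) ** (1.0 / b2)

# Made-up coefficients (minutes of sd at d km): swim sd grows faster than run sd.
run_c, run_b = 2.0, 1.0
swim_c, swim_b = 8.0, 1.1

d_run = 10.0   # fix a 10 km run
d_swim = fair_distance(run_c, run_b, d_run, swim_c, swim_b)
print(f"swim distance equating sds: {d_swim:.2f} km")
```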

6. A Bayesian inversion framework for subsurface seismic imaging problems
Urozayev, Dias. 11 1900.
This thesis considers the reconstruction of subsurface models from seismic observations, a well-known high-dimensional and ill-posed problem. As a first regularization of this problem, a reduction of the parameter space is considered, following a truncated Discrete Cosine Transform (DCT). This helps regularize the seismic inverse problem and alleviates its computational complexity. A second regularization based on Laplace priors, as a way of accounting for sparsity in the model, is further proposed to enhance the reconstruction quality. More specifically, two Laplace-based penalizations are applied: one for the DCT coefficients and another for the spatial variations of the subsurface model, which leads to an enhanced representation of cross-correlations of the DCT coefficients. The Laplace priors are represented by hierarchical forms that are suitable for deriving efficient inversion schemes. The corresponding inverse problem, formulated within a Bayesian framework, lies in computing the joint posterior of the target model parameters and the hyperparameters of the introduced priors. This joint posterior is approximated using the Variational Bayesian (VB) approach with a separable form of marginals, under minimization of the Kullback-Leibler divergence criterion. In contrast with classical deterministic optimization methods, the VB approach provides an efficient means of obtaining not only point estimates but also closed forms of the posterior probability distributions of the quantities of interest. The case in which the observations are contaminated with outliers is further considered. For that case, a robust inversion scheme is proposed based on a Student-t prior for the observation noise. The proposed approaches are applied to successfully reconstruct the subsurface acoustic impedance model of the Volve oilfield.
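The first regularization, the truncated DCT, can be sketched directly: transform the model grid, keep only the low-frequency block, and invert. The grid size and truncation level below are arbitrary stand-ins, not the thesis's settings:

```python
import numpy as np
from scipy.fft import dctn, idctn

# A smooth stand-in for a 2-D subsurface model on a 64 x 64 grid.
x = np.linspace(0.0, 3.0, 64)
model = np.add.outer(np.sin(x), np.cos(2.0 * x))

coeffs = dctn(model, norm="ortho")

k = 16                                # keep the k x k low-frequency block
truncated = np.zeros_like(coeffs)
truncated[:k, :k] = coeffs[:k, :k]    # 256 parameters instead of 4096

approx = idctn(truncated, norm="ortho")   # model implied by the reduced basis
print(f"relative error: {np.linalg.norm(model - approx) / np.linalg.norm(model):.2e}")
```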

7. Bayesian ridge estimation of age-period-cohort models
Xu, Minle. 02 October 2014.
Age-period-cohort (APC) models offer a useful framework for studying trends of time-specific phenomena in various areas. Yet the perfect linear relationship among age, period, and cohort induces a singular design matrix and brings about the identification issue of the APC model, due to the identity Cohort = Period - Age. Over the last few decades, multiple methods have been proposed to cope with the identification issue, e.g., the intrinsic estimator (IE), which may be viewed as a limiting form of ridge regression. This study views the ridge estimator from a Bayesian perspective by introducing prior distributions for the ridge parameters. Data used in this study describe the incidence rate of cervical cancer among Ontario women from 1960 to 1994. Results indicate that a Bayesian ridge model with a common prior for the ridge parameter yields estimates of age, period, and cohort effects similar to those based on the intrinsic estimator and those based on a ridge estimator. The performance of Bayesian models with distinct priors for the ridge parameters of age, period, and cohort effects is affected more by the choice of prior distributions. In sum, a Bayesian ridge model is an alternative way to deal with the identification problem of the APC model. Future studies should further investigate the influence of different prior choices on Bayesian ridge models.
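The ridge/Bayes correspondence the study exploits: with a Gaussian prior beta ~ N(0, tau^2 I), the posterior mode is the ridge estimate with penalty lambda = sigma^2 / tau^2, which stays well-defined even under the exact collinearity that the identity Cohort = Period - Age creates. A toy sketch (invented data, not the Ontario cervical cancer series):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))
X[:, 4] = X[:, 2] - X[:, 1]            # exact linear dependence, as in APC designs
beta_true = np.array([1.0, -0.5, 0.3, 0.0, 0.8])
y = X @ beta_true + rng.normal(scale=0.5, size=100)

# Ordinary least squares fails here (X'X is singular); ridge does not.
lam = 1.0                              # lambda = noise variance / prior variance
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
print(beta_ridge)
```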

8. Predicting influenza hospitalizations
Ramakrishnan, Anurekha. 15 October 2014.
Seasonal influenza epidemics are a major public health concern, causing three to five million cases of severe illness and about 250,000 to 500,000 deaths worldwide each year. Given the unpredictability of these epidemics, hospitals and health authorities are often left unprepared to handle the sudden surge in demand. Hence, early detection of disease activity is fundamental to reducing the burden on the healthcare system, providing the most effective care for infected patients, and optimizing the timing of control efforts. Early detection requires reliable forecasting methods that make efficient use of surveillance data. We developed a dynamic Bayesian estimator to predict weekly hospitalizations due to influenza-related illnesses in the state of Texas. The prediction of peak hospitalizations using our model is accurate both in terms of the number of hospitalizations and the time at which the peak occurs. For 1- to 8-week predictions, the predicted number of hospitalizations was within 8% of the actual value, and the predicted time of occurrence was within a week of the actual peak.
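The abstract does not spell out the estimator's form, so the following is only a generic stand-in: a local-level Kalman filter tracking weekly counts, with invented variances and synthetic data, to illustrate the predict-update cycle such a sequential estimator runs each week:

```python
import numpy as np

def kalman_step(m, P, y, q=25.0, r=100.0):
    """One week: random-walk prediction, then update on the observed count y."""
    m_pred, P_pred = m, P + q           # time update
    K = P_pred / (P_pred + r)           # Kalman gain
    return m_pred + K * (y - m_pred), (1 - K) * P_pred

m, P = 50.0, 1000.0                     # rough initial guess of weekly admissions
for y in [48, 55, 63, 80, 104, 130]:    # synthetic weekly hospitalizations
    m, P = kalman_step(m, P, y)
    print(f"filtered mean {m:6.1f}, state forecast sd {np.sqrt(P + 25.0):5.1f}")
```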

9. An estimated two-country DSGE model of Austria and the Euro Area
Breuss, Fritz; Rabitsch, Katrin. January 2008.
We present a two-country New Open Economy Macro model of the Austrian economy within the European Union's Economic and Monetary Union (EMU). The model includes both nominal and real frictions that have proven to be important in matching business cycle facts, and it allows for an investigation of the effects and cross-country transmission of a number of structural shocks: shocks to technologies, shocks to preferences, cost-push-type shocks, and policy shocks. The model is estimated using Bayesian methods on quarterly data covering the period 1976:Q1-2005:Q1. In addition to assessing the relative importance of various shocks, the model also allows us to investigate the effects of the monetary regime switch with the final stage of EMU, and how far this switch has altered macroeconomic transmission. We find that Austria's economy appears to react more strongly to demand shocks, while in the rest of the Euro Area supply shocks have a stronger impact. Comparing estimations on pre-EMU and EMU subsamples, we find that the contribution of (rest of the) Euro Area shocks to Austria's business cycle fluctuations has increased significantly.
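Bayesian estimation of such a model typically means sampling the posterior of the structural parameters by Markov chain Monte Carlo. A deliberately tiny random-walk Metropolis sketch with one parameter and a stand-in log posterior (in the paper the likelihood comes from solving the DSGE model and filtering the quarterly data):

```python
import numpy as np

def log_post(theta):
    # Stand-in: N(0,1) prior plus a Gaussian pseudo-likelihood term.
    return -0.5 * theta ** 2 - 0.5 * ((1.3 - theta) / 0.4) ** 2

rng = np.random.default_rng(4)
theta, draws = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(scale=0.3)            # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop                                 # accept, else keep theta
    draws.append(theta)
print(f"posterior mean: {np.mean(draws[1000:]):.2f}")   # discard burn-in
```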

10. Reservoir History Matching Using Ensemble Kalman Filters with Anamorphosis Transforms
Aman, Beshir M. 12 1900.
This work aims to enhance the performance of the Ensemble Kalman Filter by transforming the non-Gaussian state variables into Gaussian variables, to be a step closer to optimality. This is done by using univariate and multivariate Box-Cox transformations.
Several history-matching methods, such as the Kalman filter, the particle filter, and the ensemble Kalman filter, are reviewed and applied to a test case in a reservoir application. The key idea is to apply the transformation before the update step and then transform back after applying the Kalman correction. In general, the results of the multivariate method were promising, despite the fact that it over-estimated some variables.
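The transform-update-back-transform ordering is easy to sketch. Below, a scalar ensemble and a toy Kalman-type correction stand in for the full EnKF, and the observation and noise variance are invented; only the cycle itself (Box-Cox forward, Gaussian-space update, inverse Box-Cox) mirrors the key idea:

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(5)
ensemble = rng.lognormal(mean=0.0, sigma=0.7, size=500)  # skewed, non-Gaussian

z, lam = stats.boxcox(ensemble)          # forward transform; lambda is fitted

y_obs, r = 0.7, 0.05                     # toy observation in transformed space
gain = z.var() / (z.var() + r)           # scalar Kalman gain
z_updated = z + gain * (y_obs - z.mean())

updated = inv_boxcox(z_updated, lam)     # back to physical space after update
```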