181
Properties of the maximum likelihood and Bayesian estimators of availability. Kuo, Way, January 2011.
Typescript (photocopy). / Digitized by Kansas Correctional Industries

182
Adaptive Threat Detector Testing Using Bayesian Gaussian Process Models. Ferguson, Bradley Thomas, 18 May 2011.
Detection of biological and chemical threats is an important consideration in modern national defense policy. Much of the testing and evaluation of threat detection technologies is performed without appropriate uncertainty quantification. This paper proposes an approach to analyzing the effect of threat concentration on the probability of detecting chemical and biological threats. The approach uses a semi-parametric probit formulation relating threat concentration level to the probability of instrument detection. It also utilizes a Bayesian adaptive design to determine the threat concentrations at which tests should be performed. The approach offers unique advantages, namely the flexibility to model non-monotone curves and the ability to test in a more informative way. We compare the performance of this approach with current threat detection models and designs via a simulation study.
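The probit relationship between concentration and detection probability, together with a variance-driven choice of the next test concentration, can be sketched in a few lines. The coefficients, the concentration grid and the design criterion below are hypothetical illustrations, not the paper's actual semi-parametric model or adaptive design.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical probit dose-response: P(detect | concentration c) = Phi(b0 + b1*log(c)).
# The semi-parametric (spline) part of the paper's formulation is omitted here.
def detection_prob(conc, b0=-2.0, b1=1.5):
    return norm.cdf(b0 + b1 * np.log(conc))

# Crude adaptive-design idea: given posterior draws of (b0, b1), run the next test
# at the concentration where the predicted detection probability is most uncertain.
def next_test_concentration(grid, b_draws):
    probs = np.array([norm.cdf(b0 + b1 * np.log(grid)) for b0, b1 in b_draws])
    return grid[np.argmax(probs.var(axis=0))]

grid = np.linspace(0.1, 10.0, 50)                # candidate test concentrations
b_draws = np.random.default_rng(0).normal([-2.0, 1.5], 0.3, size=(200, 2))
print(detection_prob(1.0), next_test_concentration(grid, b_draws))
```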

183
An investigation of a Bayesian decision-theoretic procedure in the context of mastery tests. Hsieh, Ming-Chuan, 01 January 2007.
The purpose of this study was to extend Glas and Vos's (1998) Bayesian procedure to the 3PL IRT model by using the MCMC method. In the context of fixed-length mastery tests, the Bayesian decision-theoretic procedure was compared with two conventional procedures (conventional proportion-correct and conventional EAP) across different simulation conditions. Several simulation conditions were investigated, including two loss functions (linear and threshold loss functions), three item pools (high discrimination, moderate discrimination and a real item pool) and three test lengths (20, 40 and 60 items). Different loss parameters were manipulated in the Bayesian decision-theoretic procedure to examine its effectiveness in controlling false positive and false negative errors. The degree of decision accuracy for the Bayesian decision-theoretic procedure using both the 3PL and 1PL models was also compared. Four criteria, including the percentages of correct classifications, false positive error rates, false negative error rates, and phi correlations between the true and observed classification status, were used to evaluate the results of this study. According to these criteria, the Bayesian decision-theoretic procedure appeared to effectively control false negative and false positive error rates. The differences in the percentages of correct classifications and the phi correlations between true and predicted status for the Bayesian decision-theoretic procedure and the conventional procedures were quite small. The results also showed that there was no consistent advantage for either the linear or the threshold loss function: in relation to the four criteria used in this study, the values produced by these two loss functions were very similar. One of the purposes of this study was to extend the Bayesian procedure from the 1PL to the 3PL model. The results showed that when the datasets were simulated to fit the 3PL model, using the 1PL model in the Bayesian procedure yielded less accurate results. However, when the datasets were simulated to fit the 1PL model, using the 3PL model in the Bayesian procedure yielded reasonable classification accuracies in most cases. Thus, the use of the Bayesian decision-theoretic procedure with the 3PL model seems quite promising in the context of fixed-length mastery tests.
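A minimal sketch of the decision rule described above, assuming posterior draws of the examinee's ability are already available from the MCMC fit: classify by minimizing posterior expected loss under a threshold loss function. The loss values, cutoff and posterior draws are hypothetical placeholders, not the study's calibrated quantities.

```python
import numpy as np

def classify_master(theta_draws, cutoff, loss_fp=2.0, loss_fn=1.0):
    """Threshold loss: loss_fp is incurred for passing a true non-master,
    loss_fn for failing a true master (values are illustrative)."""
    p_master = np.mean(theta_draws >= cutoff)       # posterior P(theta >= cutoff)
    exp_loss_pass = loss_fp * (1.0 - p_master)      # expected loss of declaring mastery
    exp_loss_fail = loss_fn * p_master              # expected loss of declaring non-mastery
    return ("master" if exp_loss_pass < exp_loss_fail else "non-master", p_master)

# Hypothetical posterior draws of ability from an IRT model fitted by MCMC.
theta_draws = np.random.default_rng(1).normal(0.3, 0.4, size=5000)
print(classify_master(theta_draws, cutoff=0.0))
```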

184
Labor market policies in an equilibrium matching model with heterogeneous agents and on-the-job search. Stavrunova, Olena, 01 January 2007.
This dissertation quantitatively evaluates selected labor market policies in a search-matching model with skill heterogeneity, where high-skilled workers can take temporary jobs with skill requirements below their skill levels. The joint posterior distribution of the structural parameters of the theoretical model is obtained conditional on data on the labor market histories of NLSY79 respondents. Information on the AFQT scores of individuals and the skill requirements of occupations is utilized to identify the skill levels of workers and the complexity levels of jobs in the job-worker matches realized in the data. The model and the data are used to simulate the posterior distributions of the impacts of labor market policies on the endogenous variables of interest to a policy-maker, including the unemployment rates, durations and wages of low- and high-skilled workers. In particular, the effects of the following policies are analyzed: an increase in the proportion of high-skilled workers, subsidies for employing or hiring high- and low-skilled workers, and an increase in unemployment income.

185
Mathematical modeling of the transmission dynamics of malaria in South Sudan. Mukhtar, Abdulaziz Yagoub Abdelrahman, January 2019.
Philosophiae Doctor - PhD / Malaria is a common infection in tropical areas, transmitted between humans through the bites of female Anopheles mosquitoes as they seek the blood meals needed for egg production. The infection poses a direct threat to the lives of many people in South Sudan. Reports show that malaria causes a large proportion of morbidity and mortality in the fledgling nation, accounting for 20% to 40% of morbidity and 20% to 25% of mortality, with the majority of those affected being children and pregnant mothers. In this thesis, we construct and analyze mathematical models for malaria transmission in the South Sudan context, incorporating the national malaria control strategic plan. In addition, we investigate important factors, such as climatic conditions and population mobility, that may drive malaria in South Sudan. Furthermore, we study a stochastic version of the deterministic model obtained by introducing white noise.
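The final step, turning the deterministic model into a stochastic one by adding white noise, can be illustrated with a toy Ross-Macdonald-style host-vector model integrated by Euler-Maruyama. The model form, parameter values and the way the noise enters are assumptions made for this sketch, not the thesis's actual equations.

```python
import numpy as np

def simulate(T=200.0, dt=0.01, sigma=0.05, seed=0):
    """Euler-Maruyama integration of a toy host-vector malaria model in which
    white noise perturbs the human infection term (illustrative only)."""
    a, b, c, m, r, mu = 0.3, 0.5, 0.5, 10.0, 0.05, 0.1   # hypothetical parameters
    rng = np.random.default_rng(seed)
    Ih, Iv = 0.01, 0.01                # infected fractions of humans and mosquitoes
    path = np.empty((int(T / dt), 2))
    for k in range(path.shape[0]):
        dW = rng.normal(0.0, np.sqrt(dt))
        dIh = (a * b * m * Iv * (1 - Ih) - r * Ih) * dt + sigma * Iv * (1 - Ih) * dW
        dIv = (a * c * Ih * (1 - Iv) - mu * Iv) * dt
        Ih, Iv = np.clip(Ih + dIh, 0, 1), np.clip(Iv + dIv, 0, 1)
        path[k] = Ih, Iv
    return path

print(simulate()[-1])                  # infected fractions at the end of the horizon
```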

186
Misclassification of the dependent variable in binary choice models. Gu, Yuanyuan, Economics, Australian School of Business, UNSW, January 2006.
Survey data are often subject to a number of measurement errors. The measurement error associated with a multinomial variable is called a misclassification error. In this dissertation we study such errors when the outcome is binary. It is known that ignoring such misclassification errors may affect the parameter estimates; see, for example, Hausman, Abrevaya and Scott-Morton (1998). However, previous studies have shown that robust estimation of the parameters is achievable if we take misclassification into account. There have been many attempts to do so in the literature, and the major problem in implementing them is avoiding poor or fragile identifiability of the misclassification probabilities. Generally we restrict these parameters by imposing prior information on them. Such prior constraints on the parameters are simple to impose within a Bayesian framework. Hence we consider a Bayesian logistic regression model that takes into account the misclassification of the dependent variable. A very convenient way to implement such a Bayesian analysis is to estimate the hierarchical model using the WinBUGS software package developed by the MRC Biostatistics group, Institute of Public Health, at Cambridge University. WinBUGS allows us to estimate the posterior distributions of all the parameters using relatively little programming, and once the program is written it is trivial to change the link function, for example from logit to probit. If we wish to have more control over the sampling scheme or to deal with more complex models, then we propose a data augmentation approach using the Metropolis-Hastings algorithm within a Gibbs sampling framework. The sampling scheme can be made more efficient by using a one-step Newton-Raphson algorithm to form the Metropolis-Hastings proposal. Results from empirically analyzing real data and from the simulation studies suggest that if suitable priors are specified for the misclassification parameters and the regression parameters, then logistic regression allowing for misclassification results in better estimators than estimators that do not take misclassification into account.
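The heart of the adjustment described above fits in one formula: if alpha0 is the probability of recording a 1 for a true 0 and alpha1 the probability of recording a 0 for a true 1, the observed success probability becomes alpha0 + (1 - alpha0 - alpha1) * logit^{-1}(x'beta). The sketch below evaluates only that likelihood, with placeholder data; the WinBUGS and Metropolis-within-Gibbs estimation itself is not reproduced.

```python
import numpy as np

def loglik_misclassified_logit(beta, alpha0, alpha1, X, y):
    """Log-likelihood of a logistic regression whose binary outcome is misclassified:
    alpha0 = P(record 1 | true 0), alpha1 = P(record 0 | true 1)."""
    p_true = 1.0 / (1.0 + np.exp(-X @ beta))            # P(true outcome = 1 | x)
    p_obs = alpha0 + (1.0 - alpha0 - alpha1) * p_true   # P(observed outcome = 1 | x)
    return np.sum(y * np.log(p_obs) + (1 - y) * np.log1p(-p_obs))

# Placeholder data; in practice (alpha0, alpha1) receive informative priors
# to keep them identified, as the abstract emphasizes.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = rng.binomial(1, 0.05 + 0.9 / (1.0 + np.exp(-(X @ np.array([-0.5, 1.0])))))
print(loglik_misclassified_logit(np.array([-0.5, 1.0]), 0.05, 0.05, X, y))
```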

187
Bayesian estimation of decomposable Gaussian graphical models. Armstrong, Helen, School of Mathematics, UNSW, January 2005.
This thesis explains to statisticians what graphical models are and how to use them for statistical inference; in particular, how to use decomposable graphical models for efficient inference in covariance selection and multivariate regression problems. The first aim of the thesis is to show that decomposable graphical models are worth using within a Bayesian framework. The second aim is to make the techniques of graphical models fully accessible to statisticians. To achieve these aims the thesis makes a number of statistical contributions. First, it proposes a new prior for decomposable graphs and a simulation methodology for estimating this prior. Second, it proposes a number of Markov chain Monte Carlo sampling schemes based on graphical techniques. The thesis also presents some new graphical results, and some existing results are re-proved to make them more readily understood. Appendix 8.1 contains all the programs written to carry out the inference discussed in the thesis, together with both a summary of the theory on which they are based and a line-by-line description of how each routine works.
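To make "decomposable" concrete: for a decomposable graph, the joint Gaussian density factorizes into marginal densities over cliques divided by marginal densities over separators, which is what makes covariance selection tractable. The sketch below evaluates that factorized log-density for a small hypothetical chain graph; it is not the thesis's prior on graphs or its MCMC sampling schemes.

```python
import numpy as np
from scipy.stats import multivariate_normal

def decomposable_logpdf(y, Sigma, cliques, separators):
    """Log-density of a zero-mean Gaussian that is Markov with respect to a decomposable
    graph: product of clique marginals divided by product of separator marginals."""
    logp = 0.0
    for C in cliques:
        idx = np.array(C)
        logp += multivariate_normal.logpdf(y[idx], cov=Sigma[np.ix_(idx, idx)])
    for S in separators:
        idx = np.array(S)
        logp -= multivariate_normal.logpdf(y[idx], cov=Sigma[np.ix_(idx, idx)])
    return logp

# Hypothetical 4-node chain 0-1-2-3: cliques {0,1}, {1,2}, {2,3}; separators {1}, {2}.
Sigma = np.array([[1.000, 0.50, 0.25, 0.125],
                  [0.500, 1.00, 0.50, 0.250],
                  [0.250, 0.50, 1.00, 0.500],
                  [0.125, 0.25, 0.50, 1.000]])
y = np.array([0.2, -0.1, 0.4, 0.0])
print(decomposable_logpdf(y, Sigma, [[0, 1], [1, 2], [2, 3]], [[1], [2]]))
```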

188
Testing specifications in partial observability models: a Bayesian encompassing approach. Almeida, Carlos, 04 October 2007.
A structural approach to modelling a statistical problem makes it possible to introduce a contextual theory based on previous knowledge. This approach makes the parameters fully meaningful; but, in the intermediate steps, some unobservable characteristics are introduced because of their contextual meaning. Once the model is completely specified, it is marginalised onto the observed variables in order to obtain a statistical model.
The variables can be discrete or continuous, both at the level of the unobserved variables and at the level of the observed or manifest variables. In the behavioural sciences especially, we are often faced with ordinal variables; this is the case of the so-called Likert scales.
An ordinal variable can therefore be interpreted as a discrete version of a latent concept (the discretization model). The normality of the latent variables reduces the study of this model to the analysis of the structure of the covariance matrix of the "ideally" measured variables, but only a sub-parameter of this matrix can be identified and consistently estimated (namely, the matrix of polychoric correlations). Consequently, two questions arise here: Is the normality of the latent variables testable? If not, what aspect of this hypothesis could be testable?
In the discretization model, we observe a loss of information relative to the information contained in the latent variables. To treat this situation we introduce the concept of partial observability through a (non-bijective) measurable function of the latent variable. We explore this definition and verify that other models can be accommodated within this concept. The definition of partial observability lets us distinguish between two cases, depending on whether or not the function involved depends on a Euclidean parameter. Once partial observability is introduced, we set out conditions for building a specification test at the level of the latent variables. The test is built using the encompassing principle in a Bayesian framework.
More precisely, the problem treated in this thesis is: how to test, in a Bayesian framework, the multivariate normality of a latent vector when only a discretized version of that vector is observed. More generally, the problem can be extended to (or rephrased as): how to test, in a Bayesian framework, a parametric specification on latent variables against a nonparametric alternative when only a partial observation of these latent variables is available.
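The discretization model itself is easy to picture: a latent normal vector is observed only through ordinal categories cut at fixed thresholds, so only the thresholds and the polychoric correlation matrix are identified from the data. A minimal simulation of that measurement process, with an assumed correlation and assumed cut points, is sketched below.

```python
import numpy as np

def discretize(latent, thresholds):
    """The (non-bijective) measurement map of the discretization model:
    a latent continuous value becomes an ordinal category 0..len(thresholds)."""
    return np.searchsorted(thresholds, latent)

rng = np.random.default_rng(3)
rho = 0.6                                   # assumed polychoric correlation
cov = np.array([[1.0, rho], [rho, 1.0]])
latent = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)

thresholds = np.array([-1.0, 0.0, 1.0])     # assumed cut points (4-point Likert scale)
obs = np.column_stack([discretize(latent[:, 0], thresholds),
                       discretize(latent[:, 1], thresholds)])

# Only the joint distribution of the ordinal categories is observed; the latent
# normality generating it is precisely the hypothesis the encompassing test targets.
counts = np.zeros((4, 4), dtype=int)
np.add.at(counts, (obs[:, 0], obs[:, 1]), 1)
print(counts)
```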

189
Bayesian Methods for On-Line Gross Error Detection and Compensation. Gonzalez, Ruben, 11 1900.
Data reconciliation and gross error detection are traditional methods for detecting mass balance inconsistencies within process instrument data. These methods use a static approach for statistical evaluation. This thesis is concerned with using an alternative statistical approach (Bayesian statistics) to detect mass balance inconsistency in real time.
The proposed dynamic Bayesian solution makes use of a state space process model which incorporates mass balance relationships, so that a governing set of mass balance variables can be estimated using a Kalman filter. Due to the incorporation of mass balances, many model parameters are defined by first principles. However, some parameters, namely the observation and state covariance matrices, need to be estimated from process data before the dynamic Bayesian methods can be applied. This thesis makes use of Bayesian machine learning techniques to estimate these parameters, separating process disturbances from instrument measurement noise. / Process Control
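A minimal sketch of the state-space idea: choose a governing set of flows as the state and let the observation matrix encode the mass balances, so that redundant instrument readings are reconciled by a standard Kalman filter. The single-splitter process, random-walk dynamics and covariance values below are hypothetical placeholders for quantities the thesis estimates from plant data.

```python
import numpy as np

# Splitter: feed F divides into streams B and C, so C = F - B.  State x = [F, B];
# three instruments measure F, B and C, hence H encodes the mass balance.
A = np.eye(2)                                  # assumed random-walk state dynamics
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0]])                    # third row: C = F - B
Q = 0.01 * np.eye(2)                           # state covariance (learned from data in the thesis)
R = 0.10 * np.eye(3)                           # measurement noise covariance (likewise)

def kalman_step(x, P, z):
    """One predict/update cycle reconciling the three flow measurements z."""
    x_pred, P_pred = A @ x, A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.array([10.0, 4.0]), np.eye(2)
x, P = kalman_step(x, P, np.array([10.3, 3.9, 6.5]))   # a gross error shows up as a large residual
print(x)
```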

190
An efficient Bayesian formulation for production data integration into reservoir models. Leonardo, Vega Velasquez, 17 February 2005.
Current techniques for production data integration into reservoir models can be broadly grouped into two categories: deterministic and Bayesian. The deterministic approach relies on imposing parameter smoothness constraints using spatial derivatives to ensure large-scale changes consistent with the low resolution of the production data. The Bayesian approach is based on prior estimates of model statistics such as parameter covariance and data errors and attempts to generate posterior models consistent with the static and dynamic data. Both approaches have been successful for field-scale applications although the computational costs associated with the two methods can vary widely. This is particularly the case for the Bayesian approach that utilizes a prior covariance matrix that can be large and full. To date, no systematic study has been carried out to examine the scaling properties and relative merits of the methods. The main purpose of this work is twofold. First, we systematically investigate the scaling of the computational costs for the deterministic and the Bayesian approaches for realistic field-scale applications. Our results indicate that the deterministic approach exhibits a linear increase in the CPU time with model size compared to a quadratic increase for the Bayesian approach. Second, we propose a fast and robust adaptation of the Bayesian formulation that preserves the statistical foundation of the Bayesian method and at the same time has a scaling property similar to that of the deterministic approach. This can lead to orders of magnitude savings in computation time for model sizes greater than 100,000 grid blocks. We demonstrate the power and utility of our proposed method using synthetic examples and a field example from the Goldsmith field, a carbonate reservoir in west Texas. The use of the new efficient Bayesian formulation along with the Randomized Maximum Likelihood method allows straightforward assessment of uncertainty. The former provides computational efficiency and the latter avoids rejection of expensive conditioned realizations.
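The Bayesian objective behind both the formulation and the Randomized Maximum Likelihood uncertainty assessment can be sketched with a linear toy forward model: each conditioned realization minimizes a misfit built from perturbed data and a perturbed prior model. The forward operator, covariances and dimensions are illustrative, and the paper's fast reformulation of the prior covariance term is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
n_m, n_d = 50, 20
G = rng.normal(size=(n_d, n_m))               # toy linear forward model d = G m
C_M = np.eye(n_m)                             # prior model covariance (illustrative)
C_D = 0.01 * np.eye(n_d)                      # data error covariance (illustrative)
m_prior = np.zeros(n_m)
d_obs = G @ rng.normal(size=n_m) + rng.multivariate_normal(np.zeros(n_d), C_D)

def rml_realization():
    """One RML sample: perturb the data and the prior mean, then minimize the Bayesian
    objective.  For a linear forward model the minimizer has a closed form."""
    d_pert = d_obs + rng.multivariate_normal(np.zeros(n_d), C_D)
    m_pert = m_prior + rng.multivariate_normal(np.zeros(n_m), C_M)
    lhs = G.T @ np.linalg.solve(C_D, G) + np.linalg.inv(C_M)
    rhs = G.T @ np.linalg.solve(C_D, d_pert) + np.linalg.solve(C_M, m_pert)
    return np.linalg.solve(lhs, rhs)

samples = np.array([rml_realization() for _ in range(100)])
print(samples.mean(axis=0)[:5], samples.std(axis=0)[:5])   # posterior spread of the first few parameters
```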