161.
Analysis of structural equation models by Bayesian computation methods. January 1996 (has links)
by Jian-Qing Shi. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1996. / Includes bibliographical references (leaves 118-123).
Contents:
Chapter 1. Introduction and overview --- p.1
Chapter 2. General methodology --- p.8
Chapter 3. A Bayesian approach to confirmatory factor analysis --- p.16
3.1 Confirmatory factor analysis model and its prior --- p.16
3.2 The algorithm of data augmentation --- p.19
3.2.1 Data augmentation and one-run method --- p.19
3.2.2 Rao-Blackwellized estimation --- p.22
3.3 Asymptotic properties --- p.28
3.3.1 Asymptotic normality and posterior covariance matrix --- p.28
3.3.2 Goodness-of-fit statistic --- p.31
Chapter 4. Bayesian inference for structural equation models --- p.34
4.1 LISREL model and prior information --- p.34
4.2 Algorithm and conditional distributions --- p.38
4.2.1 Data augmentation algorithm --- p.38
4.2.2 Conditional distributions --- p.39
4.3 Posterior analysis --- p.44
4.3.1 Rao-Blackwellized estimation --- p.44
4.3.2 Asymptotic properties and goodness-of-fit statistic --- p.45
4.4 Simulation study --- p.47
Chapter 5. A Bayesian estimation of factor scores with non-standard data --- p.52
5.1 General Bayesian approach to polytomous data --- p.52
5.2 Covariance matrix of the posterior distribution --- p.61
5.3 Data augmentation --- p.65
5.4 EM algorithm --- p.68
5.5 Analysis of censored data --- p.72
5.5.1 General Bayesian approach --- p.72
5.5.2 EM algorithm --- p.76
5.6 Analysis of truncated data --- p.78
Chapter 6. Structural equation model with continuous and polytomous data --- p.82
6.1 Factor analysis model with continuous and polytomous data --- p.83
6.1.1 Model and Bayesian inference --- p.83
6.1.2 Gibbs sampler algorithm --- p.85
6.1.3 Threshold parameters --- p.89
6.1.4 Posterior analysis --- p.92
6.2 LISREL model with continuous and polytomous data --- p.94
6.2.1 LISREL model and Bayesian inference --- p.94
6.2.2 Posterior analysis --- p.101
6.3 Simulation study --- p.103
Chapter 7. Further development --- p.108
7.1 More about the one-run method --- p.108
7.2 Structural equation model with censored data --- p.111
7.3 Multilevel structural equation model --- p.114
References --- p.118
Appendix --- p.124
A.1 The derivation of conditional distributions --- p.124
A.2 Generating a random variate from a normal density restricted to an interval --- p.129
Tables --- p.132
Figures --- p.155
162.
Bayesian approach for a multigroup structural equation model with fixed covariates. January 2003 (has links)
Oi-Ping Chiu. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 45-46). / Abstracts in English and Chinese.
Contents:
1 Introduction --- p.1
2 Model --- p.4
2.1 General Model --- p.4
2.2 Constraint --- p.5
3 Bayesian Estimation via Gibbs Sampler --- p.7
3.1 Conditional Distributions --- p.10
3.2 Constraint --- p.15
3.3 Bayesian Estimation --- p.16
4 Model Comparison using the Bayes Factor --- p.18
5 Simulation Study --- p.22
6 Real Example --- p.27
6.1 Model Selection --- p.29
6.2 Bayesian Estimate --- p.30
6.3 Sensitivity Analysis --- p.31
7 Discussion --- p.32
Appendix A --- p.34
Bibliography --- p.45
163.
FBST seqüencial / Sequential FBST. Marcelo Leme de Arruda, 04 June 2012 (has links)
The FBST (Full Bayesian Significance Test) is a procedure developed by Pereira and Stern (1999) as a Bayesian alternative to tests of precise hypotheses. Since its introduction, the FBST has proved a very useful tool for problems that had no frequentist solution. The test, however, requires that the sample be collected in a single batch, after which the posterior distribution of the parameters is obtained and the evidence measure computed. Motivated by this limitation, analytic and computational approaches are presented for extending the FBST to the sequential decision context (DeGroot, 2004). An algorithm for executing the Sequential FBST is presented and analyzed, together with the source code of software based on that algorithm.
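The evidence measure the abstract refers to lends itself to a short Monte Carlo sketch. In a simple conjugate setting (a binomial likelihood with a Beta posterior; this toy setup is an assumption for illustration, not an example from the thesis), the e-value for a precise hypothesis theta = theta0 is one minus the posterior probability of the set where the posterior density exceeds its value at theta0:

```python
import math
import random

def beta_logpdf(theta, a, b):
    # Log density of Beta(a, b), including the normalizing constant.
    return ((a - 1) * math.log(theta) + (b - 1) * math.log(1 - theta)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def fbst_evalue(theta0, a, b, n_draws=20_000, seed=0):
    """Monte Carlo e-value for the precise hypothesis theta = theta0
    under a Beta(a, b) posterior: ev = 1 - Pr[p(theta | x) > p(theta0 | x)]."""
    rng = random.Random(seed)
    ref = beta_logpdf(theta0, a, b)
    # Count posterior draws falling in the "tangential set" (higher density).
    tangent = sum(1 for _ in range(n_draws)
                  if beta_logpdf(rng.betavariate(a, b), a, b) > ref)
    return 1.0 - tangent / n_draws

# 7 successes in 10 trials with a uniform prior gives a Beta(8, 4) posterior.
ev = fbst_evalue(0.5, 8, 4)
```

A small e-value counts as evidence against the precise hypothesis; at the posterior mode the tangential set is empty and the e-value is 1.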
164.
Making diagnoses with multiple tests under no gold standard. Zhang, Jingyang, 01 May 2012 (has links)
In many applications, it is common to have multiple diagnostic tests on each subject. When multiple tests are available, combining them to incorporate information from various aspects of the subjects may be necessary in order to obtain a better diagnosis. For continuous tests, in the presence of a gold standard, we could combine the tests linearly (Su and Liu, 1993) or sequentially (Thompson, 2003), or using the risk score as studied by McIntosh and Pepe (2002). The gold standard, however, is not always available in practice. This dissertation concentrates on deriving classification methods based on multiple tests in the absence of a gold standard. Motivated by a lab data set consisting of two tests for an antibody in 100 blood samples, we first develop a mixture model of four bivariate normal distributions with the mixture probabilities depending on a two-stage latent structure. The proposed two-stage latent structure is based on the biological mechanism of the tests. A Bayesian classification method incorporating the available prior information is derived using Bayesian decision theory. The proposed method is illustrated by the motivating example, and the properties of the estimation and the classification are described via simulation studies. Sensitivity to the choice of the prior distribution is also studied. We also investigate the general problem of combining multiple continuous tests without any gold standard or reference test. We thoroughly study the existing methods for combining multiple tests and develop optimal classification rules corresponding to these methods in the situation without a gold standard. We justify the proposed methods both theoretically and numerically through extensive simulation studies and illustrate them with the motivating example. In the end, we conclude the thesis with remarks and some interesting open questions extending from the dissertation.
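The kind of decision-theoretic classification rule the abstract describes can be sketched in a few lines. This is a hedged illustration only: the class-conditional densities are assumed to be bivariate normal with known, diagonal covariances and made-up parameter values, not the mixture model or estimates from the thesis. A sample is labeled positive when the posterior log-odds exceed zero:

```python
import math

def diag_normal_logpdf(x, mean, var):
    # Log density of an independent (diagonal-covariance) bivariate normal.
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def classify(x, prior_pos, mean_pos, var_pos, mean_neg, var_neg):
    """Bayes rule for two continuous tests x = (t1, t2):
    label positive when the posterior odds of the positive class exceed 1."""
    log_odds = (math.log(prior_pos) - math.log(1 - prior_pos)
                + diag_normal_logpdf(x, mean_pos, var_pos)
                - diag_normal_logpdf(x, mean_neg, var_neg))
    return 1 if log_odds > 0 else 0

# Illustrative parameters: positives center near (2, 2), negatives near (0, 0).
label = classify((2.0, 1.8), 0.3, (2.0, 2.0), (0.25, 0.25), (0.0, 0.0), (0.25, 0.25))
```

Under zero-one loss this rule minimizes the expected misclassification cost, which is the sense in which the Bayesian classification methods above are optimal.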
165.
Strongly coupled Bayesian models for interacting object and scene classification processes. Ehtiati, Tina. January 2007 (has links)
No description available.
166.
Discretization for Naive-Bayes learning. Yang, Ying, January 2003 (has links)
Abstract not available
167.
Bayesian statistical models for predicting software effort using small datasets. Van Koten, Chikako, n/a, January 2007 (has links)
The need of today's society for new technology has resulted in the development of a growing number of software systems. Developing a software system is a complex endeavour that requires a large amount of time. This amount of time is referred to as software development effort. Software development effort is the sum of hours spent by all individuals involved; it is therefore not equal to the duration of the development.
Accurate prediction of the effort at an early stage of development is an important factor in the successful completion of a software system, since it enables the developing organization to allocate and manage their resource effectively. However, for many software systems, accurately predicting the effort is a challenge. Hence, a model that assists in the prediction is of active interest to software practitioners and researchers alike.
Software development effort varies depending on many variables that are specific to the system, its developmental environment and the organization in which it is being developed. An accurate model for predicting software development effort can often be built specifically for the target system and its developmental environment. A local dataset of similar systems to the target system, developed in a similar environment, is then used to calibrate the model.
However, such a dataset often consists of fewer than 10 software systems, causing a serious problem in the prediction, since predictive accuracy of existing models deteriorates as the size of the dataset decreases.
This research addressed this problem with a new approach using Bayesian statistics. This particular approach was chosen, since the predictive accuracy of a Bayesian statistical model is not so dependent on a large dataset as other models. As the size of the dataset decreases to fewer than 10 software systems, the accuracy deterioration of the model is expected to be less than that of existing models. The Bayesian statistical model can also provide additional information useful for predicting software development effort, because it is also capable of selecting important variables from multiple candidates. In addition, it is parametric and produces an uncertainty estimate.
This research developed new Bayesian statistical models for predicting software development effort. Their predictive accuracy was then evaluated in four case studies using different datasets, and compared with other models applicable to the same small dataset.
The results have confirmed that the best new models are not only accurate but also consistently more accurate than their regression counterparts when calibrated with fewer than 10 systems. They can thus replace the regression model when using small datasets. Furthermore, one case study has shown that the best new models are more accurate than a simple model that predicts the effort as the average value of the calibration data. Two case studies have also indicated that the best new models can be more accurate for some software systems than a case-based reasoning model.
Since the case studies provided sufficient empirical evidence that, in the case of small datasets, the new models are generally more accurate than the existing models compared, this research has produced a methodology for predicting software development effort using the new models.
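The small-sample behaviour described above can be sketched with a minimal conjugate model. This is a sketch under strong simplifying assumptions (a single size predictor, effort proportional to size, known noise variance, and illustrative numbers rather than any dataset from the case studies): with few calibration projects the posterior slope stays close to the prior, and as projects accumulate it moves toward the least-squares estimate:

```python
def bayes_slope_posterior(sizes, efforts, prior_mean, prior_var, noise_var):
    """Conjugate update for effort = beta * size + noise, noise ~ N(0, noise_var),
    with a N(prior_mean, prior_var) prior on the slope beta.
    Returns the posterior mean and variance of beta."""
    precision = 1.0 / prior_var + sum(s * s for s in sizes) / noise_var
    mean = (prior_mean / prior_var
            + sum(s * e for s, e in zip(sizes, efforts)) / noise_var) / precision
    return mean, 1.0 / precision

# Five hypothetical projects: size (e.g. KLOC) versus effort (person-hours).
sizes = [2.0, 3.5, 1.0, 4.0, 2.5]
efforts = [210.0, 340.0, 95.0, 410.0, 260.0]
post_mean, post_var = bayes_slope_posterior(sizes, efforts, 80.0, 400.0, 900.0)
# post_mean lies between the prior mean (80) and the least-squares slope (~101),
# and post_var is smaller than the prior variance: the data sharpen the estimate.
```

The posterior variance returned alongside the mean is the uncertainty estimate mentioned above, something a point-estimate regression calibrated on fewer than 10 systems cannot provide.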
168.
Development of high performance implantable cardioverter defibrillator based statistical analysis of electrocardiography. Kwan, Siu-ki. January 2007 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2007. / Title proper from title frame. Also available in printed format.
169.
Approximation methods for efficient learning of Bayesian networks / Riggelsen, Carsten. January 1900 (has links)
Thesis (Ph.D.)--Utrecht University, 2006. / Includes bibliographical references (p. [133]-137).
170.
Logic sampling, likelihood weighting and AIS-BN: an exploration of importance sampling. Wang, Haiou, 21 June 2001
Logic Sampling, Likelihood Weighting and AIS-BN are three variants of stochastic sampling, one class of approximate inference for Bayesian networks. We summarize the ideas underlying each algorithm and the relationships among them. The results from a set of empirical experiments comparing Logic Sampling, Likelihood Weighting and AIS-BN are presented. We also test the impact of each of the proposed heuristics and the learning method, separately and in combination, in order to take a deeper look into AIS-BN and see how the heuristics and the learning method contribute to the power of the algorithm.
Key words: belief network, probabilistic inference, Logic Sampling, Likelihood Weighting, Importance Sampling, Adaptive Importance Sampling Algorithm for Evidential Reasoning in Large Bayesian Networks (AIS-BN), Mean Percentage Error (MPE), Mean Square Error (MSE), convergence rate, heuristic, learning method. / Graduation date: 2002
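Likelihood weighting, one of the three algorithms compared above, can be sketched on a toy two-node network. The network and its conditional probability values here are illustrative assumptions, not taken from the thesis: non-evidence nodes are sampled from their conditional distributions, and each sample is weighted by the likelihood of the evidence instead of being rejected:

```python
import random

# Toy network A -> B with illustrative conditional probability tables.
P_A = 0.3                                # P(A = true)
P_B_GIVEN_A = {True: 0.9, False: 0.2}    # P(B = true | A)

def likelihood_weighting(evidence_b, n_samples=50_000, seed=0):
    """Estimate P(A = true | B = evidence_b): sample the non-evidence node A
    from its prior and weight each sample by the likelihood of the evidence."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        a = rng.random() < P_A           # sample A from its (unconditioned) CPT
        p_b = P_B_GIVEN_A[a]
        w = p_b if evidence_b else 1.0 - p_b
        num += w * a                      # weighted count of A = true
        den += w                          # total weight
    return num / den

posterior = likelihood_weighting(True)    # exact answer is 0.27/0.41, about 0.659
```

Unlike logic sampling, no sample is wasted: every draw contributes with a weight, which is why likelihood weighting typically converges faster when the evidence is unlikely.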