11 |
Belief representation for counts in Bayesian inference and experimental design
Wilson, Kevin James January 2011 (has links)
Bayesian inference for collections of related binomial or Poisson distributions typically involves rather indirect prior specifications and intensive numerical methods (usually Markov chain Monte Carlo) for posterior evaluations. As well as requiring some rather unnatural prior judgements, this creates practical difficulties in problems such as experimental design. This thesis investigates possible alternative approaches to this problem, with the aims of making prior specification more feasible and making the calculations necessary for updating beliefs or for designing experiments less demanding, while maintaining coherence. Both fully Bayesian and Bayes linear approaches are considered initially. The most promising utilises Bayes linear kinematics, in which simple conjugate specifications for individual counts are linked through a Bayes linear belief structure; intensive numerical methods are not required. The use of transformations of the binomial and Poisson parameters is proposed. The approach is illustrated in two examples from reliability analysis, one involving Poisson counts of failures, the other involving binomial counts in an analysis of failure times. A survival example based on a piecewise constant hazards model is also investigated. Applying this approach to the design of experiments greatly reduces the computational burden compared to standard fully Bayesian approaches, and the problem can be solved without the need for intensive numerical methods. The method is illustrated using two examples, one based on usability testing and the other on bioassay.
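The "simple conjugate specifications for individual counts" that this approach links together can be sketched as a generic gamma-Poisson update; the prior parameters and failure counts below are invented for illustration and are not the thesis's elicited values:

```python
def gamma_poisson_update(a, b, counts, exposure):
    """Conjugate update for a Poisson failure rate: with a Gamma(a, b) prior
    and counts observed over a total exposure time, the posterior is
    Gamma(a + sum(counts), b + exposure)."""
    return a + sum(counts), b + exposure

def gamma_mean(a, b):
    return a / b

# Hypothetical prior belief: failure rate around 0.5 per unit time, Gamma(2, 4).
a_post, b_post = gamma_poisson_update(2.0, 4.0, counts=[3, 1, 2], exposure=10.0)
print(gamma_mean(a_post, b_post))  # posterior mean rate 8/14 ≈ 0.571
```

In the Bayes linear kinematics approach, a conjugate update of this form for one count is then propagated to beliefs about the other counts through the linear belief structure, rather than by a joint MCMC run.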
|
12 |
Topics in statistics of spatial-temporal disease modelling
Richardson, Jennifer January 2009 (has links)
This thesis is concerned with providing further statistical development in the area of space-time modelling with particular application to disease data. We briefly consider the non-Bayesian approaches of empirical mode decomposition and generalised linear modelling for analysing space-time data, but our main focus is on the increasingly popular Bayesian hierarchical approach and topics surrounding it. We begin by introducing the hierarchical Poisson regression model of Mugglin et al. [36] and a data set provided by NHS Direct, which will be used to illustrate our results throughout the remainder of the thesis. We provide details of how a Bayesian analysis can be performed using Markov chain Monte Carlo (MCMC) via the software WinBUGS, then go on to consider two particular issues associated with such analyses. Firstly, a problem with the efficiency of MCMC for the Poisson regression model is likely to be due to the presence of non-standard conditional distributions. We develop and test the 'improved auxiliary mixture sampling' method, which introduces auxiliary variables to the conditional distribution in such a way that it becomes multivariate Normal and an efficient block Gibbs sampling scheme can be used to simulate from it. Secondly, since MCMC allows modelling of such complexity, inputs such as priors can only be elicited in a casual way, thereby increasing the need to check how sensitive our output is to changes to the prior. We therefore develop and test the 'marginal sensitivity' method which, using only one MCMC output sample, quantifies how sensitive the marginal posterior distributions are to changes to prior parameters.
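One generic way to reassess a posterior from a single MCMC output sample after a prior change, in the spirit of the sensitivity question raised above, is importance reweighting: each draw is weighted by the ratio of the new prior to the old one. The sketch below is a minimal illustration with invented normal priors, not the 'marginal sensitivity' method as developed in the thesis:

```python
import math
import random

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def reweighted_mean(samples, prior_old, prior_new):
    """Approximate the posterior mean under a changed prior by reweighting
    a single MCMC sample: w_i = p_new(theta_i) / p_old(theta_i)."""
    w = [prior_new(t) / prior_old(t) for t in samples]
    return sum(wi * ti for wi, ti in zip(w, samples)) / sum(w)

random.seed(1)
# Pretend these are MCMC draws from a posterior obtained under a N(0, 2) prior.
samples = [random.gauss(1.0, 0.5) for _ in range(5000)]
old_prior = lambda t: normal_pdf(t, 0.0, 2.0)
new_prior = lambda t: normal_pdf(t, 0.0, 1.0)  # tighter prior around zero
print(reweighted_mean(samples, old_prior, new_prior))
```

Tightening the prior around zero pulls the reweighted posterior mean below the plain sample mean, quantifying the sensitivity without rerunning the chain.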
|
13 |
Bayesian spatio-temporal modelling for inspection and prediction of complex problems in the petrochemical industry
Little, John January 2003 (has links)
No description available.
|
14 |
Bayesian decision theoretic approach to experimental design with application to usability experiments
Valks, Pamela January 2004 (has links)
This thesis looks at the practicality of applying a Bayesian Decision Theoretic approach to the design of HCI usability experiments. It looks at the particular issues involved in following the Bayesian experimental design framework of developing a stochastic model, eliciting priors and utility functions, and choosing the option with the maximum expected utility. HCI usability testing may involve user and analyst experimentation, and various courses of action may be employed using either one or both types of experiment. The thesis shows that HCI usability experiments can be represented diagrammatically by a decision tree, so that courses of action and consequences can be shown in sequential order, and consequently that decision theory can be applied to experimental design. A structure of three decisions is proposed for the user experiment, where the design of the experiment is a decision within the larger decision of whether to launch or rewrite. A structure of a single decision is proposed for the analyst experiment. The thesis shows that stochastic models can be developed which give solutions using realistic priors and utility functions. For the user experiment, the problem of a joint prior distribution for two dependent binomial parameters is overcome by developing a method using copula functions. For the analyst experiment, a two-factor capture-recapture model for the identification of potential HCI problems is developed. Two ways of representing the utility function, either in terms of monetary rewards only or as a bivariate utility function, are investigated. The thesis shows that for realistic utility functions both ways require numerical methods to calculate the expected utilities, but a bivariate utility function has computational and elicitation advantages. HCI usability experiments pose many questions, including the following. Should a user experiment be performed, or is it better to launch or rewrite without performing an experiment?
If a user experiment is performed, what is the optimal number of subjects? After a user experiment, is it better to launch or rewrite? What is the optimal number of analysts to take part in an experiment? How many problems remain in the system after an analyst experiment? This thesis shows how models currently described in the HCI literature can be generalised using a Bayesian Decision Theoretic approach and used to give answers to these questions.
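A Gaussian copula gives one concrete way to build a joint prior for two dependent parameters with given marginals, as the copula approach mentioned above does for two dependent binomial parameters. The sketch below uses invented Beta marginals and correlation, and a crude grid-based Beta quantile, purely for illustration:

```python
import bisect
import math
import random
from statistics import NormalDist

PHI = NormalDist().cdf  # standard normal CDF

def beta_quantile_fn(a, b, n=1000):
    """Grid-based inverse CDF for Beta(a, b): integrate the density once on
    a midpoint grid, then invert with a binary search. Adequate for a sketch."""
    logB = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    xs = [(i + 0.5) / n for i in range(n)]
    cdf, c = [], 0.0
    for x in xs:
        c += math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - logB) / n
        cdf.append(c)
    total = cdf[-1]
    def quantile(u):
        idx = bisect.bisect_left(cdf, u * total)  # renormalised target
        return xs[min(idx, n - 1)]
    return quantile

def joint_prior_draw(rho, q1, q2, rng):
    """One draw from a Gaussian-copula joint prior: correlated normals are
    pushed through Phi to correlated uniforms, then through Beta quantiles."""
    z1 = rng.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
    return q1(PHI(z1)), q2(PHI(z2))

rng = random.Random(42)
q1, q2 = beta_quantile_fn(4, 8), beta_quantile_fn(6, 6)
draws = [joint_prior_draw(0.7, q1, q2, rng) for _ in range(300)]
print(draws[0])
```

Each component keeps its elicited Beta marginal while the copula correlation parameter controls the dependence between the two success probabilities.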
|
15 |
Elicitation of subjective probability distributions
Elfadaly, Fadlalla Ghaly Hassan Mohamed January 2012 (has links)
To incorporate expert opinion into a Bayesian analysis, it must be quantified as a prior distribution through an elicitation process that asks the expert meaningful questions whose answers determine this distribution. The aim of this thesis is to fill some gaps in the available techniques for eliciting prior distributions for Generalized Linear Models (GLMs) and multinomial models. A general method for quantifying opinion about GLMs was developed in Garthwaite and Al-Awadhi (2006). They model the relationship between each continuous predictor and the dependent variable as a piecewise-linear function with a regression coefficient at each of its dividing points. However, coefficients were assumed a priori independent if associated with different predictors. We relax this simplifying assumption and propose three new methods for eliciting positive-definite variance-covariance matrices of a multivariate normal prior distribution. In addition, we extend the method of Garthwaite and Dickey (1988) for eliciting an inverse chi-squared conjugate prior for the error variance in normal linear models. We also propose a novel method for eliciting a lognormal prior distribution for the scale parameter of a gamma GLM. For multinomial models, novel methods are proposed that quantify expert opinion about a conjugate Dirichlet distribution and, additionally, about three more general and flexible prior distributions. First, an elicitation method is proposed for the generalized Dirichlet distribution that was introduced by Connor and Mosimann (1969). Second, a method is developed for eliciting the Gaussian copula as a multivariate distribution with marginal beta priors. Third, a further novel method is constructed that quantifies expert opinion about the most flexible alternate prior, the logistic normal distribution (Aitchison, 1986). This third method is extended to the case of multinomial models with explanatory covariates.
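As a minimal illustration of turning elicited summaries into a conjugate Dirichlet prior (a simple moment-matching device, not one of the elicitation methods developed in the thesis): given assessed category means and an assessed variance for the first category, the total concentration follows from Var(p_1) = m_1(1 - m_1)/(alpha_0 + 1). The assessed values below are invented:

```python
def dirichlet_from_assessments(means, var_first):
    """Moment-match a Dirichlet(alpha_1, ..., alpha_k): E[p_i] = m_i implies
    alpha_i = m_i * alpha_0, and the assessed Var(p_1) fixes the total
    concentration via alpha_0 = m_1 * (1 - m_1) / Var(p_1) - 1."""
    m1 = means[0]
    alpha0 = m1 * (1 - m1) / var_first - 1
    return [m * alpha0 for m in means]

# Hypothetical assessments for a three-category multinomial.
alphas = dirichlet_from_assessments([0.5, 0.3, 0.2], var_first=0.025)
print(alphas)  # ≈ [4.5, 2.7, 1.8] (alpha_0 = 9)
```

A larger assessed variance yields a smaller concentration alpha_0, i.e. a weaker prior; this is the kind of mean/variance trade-off an elicitation scheme must respect.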
|
16 |
Semantic enhanced argumentation based group decision making for complex problems
Jia, Haibo January 2012 (has links)
This thesis is concerned with issues arising from group argumentation based decision making support. An investigation was carried out into the semantic representation of argumentation schema ontology and its influence on the decision making support problem. Previous research has shown that argumentation, as a process of communication and reasoning, is a powerful way of discovering the structure and identifying various aspects of ill-structured problems. The literature review revealed that many researchers have covered different aspects of representing and evaluating argumentation for decision making purposes; however, there is no clearly defined comprehensive conceptual group argumentation framework for decision making support. In most cases, group argumentation and decision making are regarded as separate processes, which makes it difficult to fully integrate the argumentation process with the decision making process. In this thesis, the main elements of group argumentation and decision making are identified. A new conceptual framework is designed to glue those two sets of elements together to support decision making fully using an argumentation approach. In order to better integrate different sources of argumentative information, a semantic based approach is employed to model the argumentation schema ontology. The design of this ontology considers not only the basic discussion and group interaction concepts, but also the notion of strength of the claim and pro/con arguments, and different argument types from practical and epistemic views. In this research, the semantic support is not constrained to the structure of the argumentation but also extends to the topic of the argumentation content.
The experiment has shown that the semantic topic annotation of utterances can enable the intelligent agent to discover, retrieve and map related information, which brings new benefits for supporting decision making such as better presenting the perspectives of decision problems, automatically identifying the criteria for evaluating solutions, and modelling and updating experts' credibility at the topic level. Rather than a fully automatic or manual semantic annotation approach, a middle-way solution for semantic annotation is proposed which allows users to manually label the content with a simple keyword and then automatically conceptualize the keyword using the formal ontological term queried from the cross-domain ontology knowledge base, DBpedia. Based on the designed framework and the semantics of the defined argumentative ontology, a prototype agent based distributed group argumentation system for decision making was developed. This prototype system, acting as a test bed, was used in a group argumentation experiment to test the proposed hypothesis. The experimental results were gathered from observation and from users' experience via a questionnaire. The analysis of the results indicates that this semantic enhanced group argumentation based decision making approach not only can advise the solution route for a decision task with a high degree of user satisfaction but can also present more perspectives of the decision problems, enabling an iterative process of problem solving. This is consistent with the new vision of group decision making support.
A metric based evaluation was conducted to compare the proposed approach with other related approaches across different aspects of group argumentation based decision making support; the conclusion shows that our approach not only shares many common features with others, but also has many unique characteristics enabled by the comprehensive argumentation model and semantic support, which are essential for the new decision support paradigm. It is considered that the expectations given in the initial aims have been achieved. Existing methods focus either on the reasoning capability of argumentation for decision making or on its communicative capability for discovering different problem perspectives and iterating the problem solving process. In our proposed approach, a comprehensive argumentation ontology for argumentation structure and a semantic annotation mechanism to conceptualize the argumentative content are designed so that the semantic support covers both the argumentation structure level and the content level, via which the system can better interpret and manage the information generated in the process of group argumentation and provide more semantic services such as argumentation process iteration, decision rationale reuse, and decision problem discovery. The findings from this study may contribute to the development of a new paradigm of group decision making systems based on group argumentation.
|
17 |
Bayesian analysis of multi-species demography : synchrony and integrated population models for a breeding community of seabirds
Lahoz-Monfort, José Joaquín January 2012 (has links)
In the study of wildlife populations, demographic data have traditionally been analysed independently for different species, even within communities. With environmental conditions changing rapidly, there is a need to move beyond single-species models and consider how communities respond to environmental drivers. This thesis proposes a modelling framework to study multi-species synchrony in demographic parameters, using random effects to partition year-to-year variation into synchronous and asynchronous components. The approach also allows us to quantify the contribution of environmental covariates as synchronising/desynchronising agents.
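The random-effects partition described above can be sketched by simulating year effects as a common (synchronous) component shared across species plus species-specific (asynchronous) residuals; the species count, number of years and standard deviations below are invented for illustration:

```python
import random

def simulate_synchrony(n_species, n_years, sd_common, sd_species, rng):
    """Year effects delta[s][t] = gamma_t + eps_{s,t}: a common year effect
    shared by all species (synchronous part) plus a species-specific
    residual (asynchronous part)."""
    gamma = [rng.gauss(0, sd_common) for _ in range(n_years)]
    return [[g + rng.gauss(0, sd_species) for g in gamma]
            for _ in range(n_species)]

def synchrony_proportion(sd_common, sd_species):
    # share of year-to-year variance attributed to the synchronous component
    return sd_common ** 2 / (sd_common ** 2 + sd_species ** 2)

effects = simulate_synchrony(3, 50, 0.3, 0.1, random.Random(0))
print(synchrony_proportion(0.3, 0.1))  # ≈ 0.9: variation is mostly synchronous
```

In the Bayesian fit, the two variance components are estimated from data, and the ratio above becomes a posterior summary of how synchronous the community is.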
|
18 |
Contributions to the Bayesian analysis of mixture models
Hernandez-Vela, Carlos Erwin Rodriguez January 2013 (has links)
Mixture models can be used to approximate irregular densities or to model heterogeneity. When a density estimate is needed, we can approximate any distribution on the real line using an infinite number of normals (Ferguson (1983)). On the other hand, when a mixture model is used to model heterogeneity, there is a proper interpretation for each element of the model. If the distributional assumptions about the components are met and the number of underlying clusters within the data is known, then in a Bayesian setting, to perform classification analysis and in general component-specific inference, methods to undo the label switching and recover the interpretation of the components need to be applied. If latent allocations are included in the design of the Markov chain Monte Carlo (MCMC) strategy, and the sampler has converged, then labels assigned to each component may change from iteration to iteration. However, observations allocated together must remain similar, and we use this fundamental fact to derive an easy and efficient solution to the label switching problem. We compare our strategy with other relabeling algorithms on univariate and multivariate data examples and demonstrate improvements over alternative strategies. When there is no further information about the shape of components and the number of clusters within the data, a common theme is the use of the normal distribution as the "benchmark" component distribution. However, if a cluster is skewed or heavy tailed, then the normal distribution will be inefficient and many may be needed to model a single cluster. In this thesis, we present an attempt to solve this problem. We define a cluster to be a group of data which can be modeled by a unimodal density function. Hence, our intention is to use a family of univariate distribution functions, to replace the normal, for which the only constraint is unimodality.
With this aim, we devise a new family of nonparametric unimodal distributions, which has large support over the space of univariate unimodal distributions. The difficult aspect of the Bayesian model is to construct a suitable MCMC algorithm to sample from the correct posterior distribution. The key will be the introduction of strategic latent variables and the use of the product space (Godsill (2001)) view of reversible jump (Green (1995)) methodology. We illustrate and compare our methodology against the classic mixture of normals using simulated and real data sets. To solve the label switching problem we use the new relabeling algorithm.
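The fact that observations allocated together stay together, which the relabeling solution exploits, can be illustrated by a toy relabeling step that permutes component labels at one MCMC iteration to maximise agreement with a reference allocation. This is a brute-force sketch for a small number of components, not the thesis's algorithm:

```python
from itertools import permutations

def relabel(reference, allocations):
    """Permute the component labels of one iteration's latent allocations so
    that they agree as much as possible with a reference labelling; for small
    k we simply try every permutation of the labels."""
    k = max(max(reference), max(allocations)) + 1
    best, best_score = allocations, -1
    for perm in permutations(range(k)):
        relabelled = [perm[z] for z in allocations]
        score = sum(r == z for r, z in zip(reference, relabelled))
        if score > best_score:
            best, best_score = relabelled, score
    return best

ref = [0, 0, 1, 1, 2, 2]
print(relabel(ref, [2, 2, 0, 0, 1, 1]))  # -> [0, 0, 1, 1, 2, 2]
```

After applying such a step to every iteration, component-specific posterior summaries (means, allocation probabilities) become interpretable again.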
|
19 |
Bayesian nonparametric modelling of financial data
Delatola, Eleni-Ioanna January 2012 (has links)
This thesis presents a class of discrete time univariate stochastic volatility models using Bayesian nonparametric techniques. In particular, the models that will be introduced are not only the basic stochastic volatility model, but also the heavy-tailed model using scale mixtures of Normals and the leverage model. The aim is to capture flexibly the distribution of the logarithm of the squared return under the aforementioned models using an infinite mixture of Normals. Parameter estimates for these models will be obtained using Markov chain Monte Carlo methods and the Kalman filter. Links between the return distribution and the distribution of the logarithm of the squared returns will be established. The one-step-ahead predictive ability of the model will be measured using log-predictive scores. Asset returns, stock indices and exchange rates will be fitted using the developed methods.
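The transformation underlying this linearisation can be sketched directly: for the basic SV model y_t = exp(h_t/2) * eps_t, taking log(y_t^2) = h_t + log(eps_t^2) gives a linear observation equation whose non-Gaussian error, log chi-squared with one degree of freedom, is what the mixture of Normals approximates. The toy simulation below uses invented parameter values:

```python
import math
import random

def log_squared_returns(returns, offset=1e-6):
    """log(y_t^2) with a small offset guarding against zero returns; the
    observation error log(eps_t^2) is log chi^2_1, the distribution that
    the mixture of Normals is used to approximate."""
    return [math.log(y * y + offset) for y in returns]

random.seed(0)
# Toy SV path (invented parameters): h_t follows an AR(1), y_t = exp(h_t/2) * eps_t.
h, ys = 0.0, []
for _ in range(1000):
    h = 0.95 * h + random.gauss(0, 0.2)
    ys.append(math.exp(h / 2) * random.gauss(0, 1))
z = log_squared_returns(ys)
print(sum(z) / len(z))  # sample mean; E[log chi^2_1] ≈ -1.27 pulls it below zero
```

Once the error is replaced by a (finite or infinite) mixture of Normals, the model is conditionally linear-Gaussian and the Kalman filter mentioned above applies component-wise.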
|
20 |
Bayesian density estimation and classification of incomplete data using semi-parametric and nonparametric models
Zhang, Jufen January 2006 (has links)
No description available.
|