21

Sequential decision making with adaptive utility

Houlding, Brett January 2008 (has links)
Decision making with adaptive utility provides a generalisation of classical Bayesian decision theory, allowing the creation of a normative theory for decision selection when preferences are initially uncertain. The theory of adaptive utility was introduced by Cyert & DeGroot [27], but had since received little attention or development. In particular, foundational issues had not been explored and no consideration had been given to the generalisation of traditional utility concepts such as value of information or risk aversion. This thesis addresses such issues. An in-depth review of the decision theory literature is given, detailing differences in assumptions between various proposed normative theories and their possible generalisations. Motivation is provided for generalising expected utility theory to permit uncertain preferences, and it is argued that in such a situation, under the acceptance of traditional utility axioms, the decision maker should seek to select decisions so as to maximise expected adaptive utility. The possible applications of the theory for sequential decision making are illustrated by some small-scale examples, including examples of relevance within reliability theory.
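The core idea, that uncertainty over one's own preferences can be folded into the expectation, admits a tiny numerical sketch. All numbers below are invented for illustration and do not come from the thesis:

```python
import numpy as np

# Hypothetical setup: the decision maker is unsure which of two utility
# functions reflects their true preferences. Preference states theta in {0, 1}
# carry a prior p(theta); the adaptive utility of a decision d is
# E_theta[u_theta(d)], and the DM picks the decision maximising it.

# Utilities of three decisions under each preference state (assumed values).
u = np.array([[1.0, 0.2, 0.6],   # utilities if theta = 0
              [0.1, 0.9, 0.6]])  # utilities if theta = 1
prior = np.array([0.5, 0.5])

adaptive_u = prior @ u            # expected adaptive utility of each decision
best = int(np.argmax(adaptive_u))
print(adaptive_u, best)
```

Note how the "hedging" third decision, mediocre under either preference state, maximises expected adaptive utility even though it is optimal under neither.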
22

Bayesian analysis of linear spatio-temporal models

Garside, Linda Michelle January 2004 (has links)
Spatio-temporal models provide a mechanism for analysing data that occur naturally in space and time, such as pollution levels, functional magnetic resonance imaging data and temperature data. These models aim to capture the important features of the space-time structure that can be overlooked by examining the spatial and temporal features separately. In this thesis a dynamic linear model (DLM) is used to describe a lattice Markov spatio-temporal system, with Markov chain Monte Carlo (MCMC) techniques used to obtain estimates for the model parameters from the marginal posterior distributions. This thesis is concerned with the modelling of the latent structure of a Bayesian spatio-temporal model with a view to improving parameter inference, smoothing and prediction. The equilibrium distribution of a time-stationary system will be examined, paying particular attention to edge effects and the effect of grid coarsening. In order to develop an effective MCMC algorithm the latent process is integrated out of the model. These techniques are illustrated using both simulated data and North Atlantic ocean temperature data.
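The DLM machinery can be illustrated in miniature with a univariate local-level model, far simpler than the thesis's lattice Markov system; the parameter values here are invented for the sketch:

```python
import numpy as np

# Local-level DLM (assumed toy example, not the thesis's model):
#   state x_t = x_{t-1} + w_t,  w_t ~ N(0, W);  obs y_t = x_t + v_t, v_t ~ N(0, V).
# The Kalman recursions give the filtered posterior mean/variance of the state.
def kalman_filter(y, W=0.1, V=1.0, m0=0.0, C0=10.0):
    m, C = m0, C0
    means = []
    for yt in y:
        R = C + W                 # prior variance of the state at time t
        A = R / (R + V)           # Kalman gain
        m = m + A * (yt - m)      # filtered mean
        C = A * V                 # filtered variance
        means.append(m)
    return np.array(means)

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0, 0.3, 100))   # latent random walk
y = x + rng.normal(0, 1.0, 100)          # noisy observations
m = kalman_filter(y)
# Filtered means track the latent state more closely than the raw data do.
print(np.mean((m - x) ** 2) < np.mean((y - x) ** 2))
```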
23

Bayesian analysis of genetic mapping and related problems via simulation based techniques

Sibanda, Nokuthaba January 2007 (has links)
No description available.
24

Bayesian inference for mixture models via Monte Carlo computation

Jasra, Ajay January 2006 (has links)
In the past fifteen years there has been a dramatic increase of interest in Bayesian mixture models. This is because we are now able to apply Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) methods. This thesis is concerned with Bayesian mixture modelling via such approaches. The following topics are considered. Firstly, an important problem with Bayesian mixture models is that of label switching. This is caused by the nonidentifiability of the components under symmetric priors. Therefore, Monte Carlo estimates of component-specific quantities will be identical and thus useless for inference. We review the existing solutions to the label switching problem and develop a probabilistic method to deal with it. Secondly, another difficulty is actually simulating from trans-dimensional measures induced by mixtures. To solve this problem, we present an extension of population-based MCMC. We provide a result related to the uniform ergodicity of a population Markov kernel. We also give an example of a population algorithm for simulating from a Bayesian multivariate mixture model. Thirdly, a problem for SMC samplers (Del Moral et al., 2005) is that of low particle diversity for multimodal targets. In such situations, even under reasonable Markov kernels, poor Monte Carlo estimates may be derived. We present an interacting SMC method which seeks to maintain a diverse set of particles. We establish some convergence results. Additionally, we show how the methodology may be used for a Bayesian mixture model and, using adaptive methods, a mixture problem in population genetics. Finally, we develop statistical methodology for geochronological data. The main characteristics of geochronological data are heterogeneity and measurement error. The former property means that mixture models are appropriate for its analysis. We establish that current methods for analyzing such data are not always suitable. Therefore, we develop Bayesian mixture models for this task.
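The label-switching symptom is easy to reproduce with simulated draws. The sketch below uses a toy two-component setup and the simplest remedy, an ordering constraint, rather than the probabilistic relabelling method developed in the thesis:

```python
import numpy as np

# Assumed toy setup: under a symmetric prior the sampler visits both
# labellings of a two-component mixture, so raw component-wise averages of
# the means coincide and are useless; an ordering constraint recovers them.
rng = np.random.default_rng(1)
true_means = np.array([-2.0, 2.0])

# Fake MCMC draws: in each iteration the labels flip with probability 1/2.
draws = true_means + rng.normal(0, 0.1, size=(5000, 2))
flip = rng.random(5000) < 0.5
draws[flip] = draws[flip][:, ::-1]

raw_est = draws.mean(axis=0)          # both near 0 -- meaningless
relabelled = np.sort(draws, axis=1)   # impose mu_1 < mu_2 in every draw
fixed_est = relabelled.mean(axis=0)   # near (-2, 2)
print(raw_est, fixed_est)
```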
25

Bayesian inference for volatility models in financial time series

Surapaitoolkorn, Wantanee January 2006 (has links)
The aim of the thesis is to study the two principal volatility models used in financial time series, and to perform inference using a Bayesian approach. The first model is the Deterministic Time-Varying volatility represented by Autoregressive Conditional Heteroscedastic (ARCH) models. The second model is the Stochastic Time-Varying volatility, or Stochastic Volatility (SV), model. The thesis concentrates on financial Foreign Exchange (FX) data, including time series for four Asian economies (Thailand, Singapore, Japan and Hong Kong) and FX data sets from other countries. The time period this FX data set covers includes the major Asian financial crisis of 1997. The analysis involves exploring high-frequency financial FX data, where the sets of data used are the daily and hourly opening FX rates. The key development of the thesis is the implementation of changepoint models to allow for non-stationarity in the volatility process. The changepoint approach has only rarely been implemented for volatility data. In this thesis, the changepoint model for SV-type volatility structures is formulated. The variable-dimensional nature of the inference problem, that is, that the number as well as the locations of the volatility changepoints are unknown, is acknowledged and incorporated, as is the potentially leptokurtic nature of financial returns. The Bayesian computational approach used for making inference about the model parameters is Markov chain Monte Carlo (MCMC). Another contribution of this thesis is the study of reparameterizations of parameters in both ARCH and SV models. The objective is to improve the performance of the MCMC method.
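The ARCH(1) recursion underlying the first model class can be sketched in a few lines; the parameter values below are illustrative, not estimates from the FX data:

```python
import numpy as np

# ARCH(1) simulation (assumed illustrative parameters):
#   sigma_t^2 = a0 + a1 * r_{t-1}^2,   r_t = sigma_t * eps_t,  eps_t ~ N(0, 1).
def simulate_arch1(n, a0=0.2, a1=0.5, seed=2):
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    for t in range(1, n):
        sigma2 = a0 + a1 * r[t - 1] ** 2
        r[t] = np.sqrt(sigma2) * rng.normal()
    return r

r = simulate_arch1(20000)
kurtosis = np.mean(r**4) / np.mean(r**2) ** 2
print(kurtosis > 3.0)   # heavier tails than Gaussian returns
```

Even with Gaussian innovations, the feedback of past squared returns into the variance produces the leptokurtic return distribution mentioned in the abstract (for these parameter values the theoretical kurtosis is 9 rather than the Gaussian 3).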
26

Variational approximations in Bayesian model selection

McGrory, Clare Anne January 2005 (has links)
The research presented in this thesis is on the topic of the Bayesian approach to statistical inference. In particular it focuses on the analysis of mixture models. Mixture models are a useful tool for representing complex data and are widely applied in many areas of statistics (see, for example, Titterington et al. (1985)). The representation of mixture models as missing data models is often useful as it makes more techniques of inference available to us. In addition, it allows us to introduce further dependencies within the mixture model hierarchy leading to the definition of the hidden Markov model and the hidden Markov random field model (see Titterington (1990)). Chapter 1 introduces the main themes of the thesis. It provides an overview of variational methods for approximate Bayesian inference and describes the Deviance Information Criterion for Bayesian model selection. Chapter 2 reviews the theory of finite mixture models and extends the variational approach and the Deviance Information Criterion to mixtures of Gaussians. Chapter 3 examines the use of the variational approximation for general mixtures of exponential family models and considers the specific application to mixtures of Poisson and Exponential densities. Chapter 4 describes how the variational approach can be used in the context of hidden Markov models. It also describes how the Deviance Information Criterion can be used for model selection with this class of model. Chapter 5 explores the use of variational Bayes and the Deviance Information Criterion in hidden Markov random field analysis. In particular, the focus is on the application to image analysis. Chapter 6 summarises the research presented in this thesis and suggests some possible avenues of future development. The material in chapter 2 was presented at the ISBA 2004 world conference in Viña del Mar, Chile and was awarded a prize for best student presentation.
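The Deviance Information Criterion discussed above can be computed directly from posterior draws. Here is a sketch for a toy normal-mean model with known variance; the setup is assumed for illustration and is not from the thesis:

```python
import numpy as np

# DIC = D_bar + p_D, with p_D = D_bar - D(theta_bar), where D is the deviance
# (-2 log-likelihood), D_bar its posterior mean, and theta_bar the posterior
# mean of the parameter. Toy model: y_i ~ N(mu, 1) with a flat prior on mu.
rng = np.random.default_rng(3)
y = rng.normal(1.0, 1.0, 50)

# Conjugate posterior for the mean: N(ybar, 1/n).
post = rng.normal(y.mean(), 1 / np.sqrt(len(y)), 4000)

def deviance(mu):
    return np.sum((y - mu) ** 2) + len(y) * np.log(2 * np.pi)

D_bar = np.mean([deviance(mu) for mu in post])
p_D = D_bar - deviance(post.mean())
print(round(p_D, 2))  # effective number of parameters, close to 1 here
```

The same recipe applies whenever the likelihood can be evaluated at each draw, which is what makes the criterion convenient alongside variational or MCMC output.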
27

Chain event graphs : theory and application

Thwaites, Peter January 2008 (has links)
This thesis is concerned with the Graphical model known as the Chain Event Graph (CEG) [1][60][61], and develops the theory that appears in the currently published papers on this work. Results derived are analogous to those produced for Bayesian Networks (BNs), and I show that for asymmetric problems the CEG is generally superior to the BN both as a representation of the problem and as an analytical tool. The CEG is designed to embody the conditional independence structure of problems whose state spaces are asymmetric and do not admit a natural Product Space structure. In this they differ from BNs and other structures with variable-based topologies. Chapter 1 details researchers' attempts to adapt BNs to model such problems, and outlines the advantages CEGs have over these adaptations. Chapter 2 describes the construction of CEGs. In chapter 3 I create a semantic structure for the reading of CEGs, and derive results expressible in the form of context-specific conditional independence statements, that allow us to delve much more deeply into the independence structure of a problem than we can do with BNs. In chapter 4 I develop algorithms for the updating of a CEG following observation of an event, analogous to the Local Message Passing algorithms used with BNs. These are more efficient than the BN-based algorithms when used with asymmetric problems. Chapter 5 develops the theory of Causal manipulation of CEGs, and introduces the singular manipulation, a class of interventions containing the set of interventions possible with BNs. I produce Back Door and Front Door Theorems analogous to those of Pearl [42], but more flexible as they allow asymmetric manipulations of asymmetric problems. The ideas and results of chapters 2 to 5 are summarised in chapter 6.
28

Estimation in causal graphical models

Daneshkhah, Alireza January 2004 (has links)
Pearl (2000), Spirtes et al (1993) and Lauritzen (2001) set up a new framework to encode the causal relationships between random variables by a causal Bayesian network. The estimation of the conditional probabilities in a Bayesian network has received considerable attention from several investigators (e.g., Jordan (1998), Geiger and Heckerman (1997), Heckerman et al (1995)), but this issue has not been studied in a causal Bayesian network. In this thesis, we define the multicausal essential graph on the equivalence class of Bayesian networks in which each member of this class manifests a strong type of invariance under (causal) manipulation called hypercausality. We then characterise the families of prior distributions on the parameters of the Bayesian networks which are consistent with hypercausality and show that their unmanipulated uncertain Bayesian networks must demonstrate the independence assumptions. As a result, such prior distributions satisfy a generalisation of the Geiger and Heckerman condition. In particular, when the corresponding essential graph is undirected, the mentioned class of prior distributions will reduce to the Hyper-Dirichlet family (see Chapter 6). In the second part of this thesis, we calculate certain local sensitivity measures and through them we are able to provide answers to the following questions: Is the network structure that is learned from data robust with respect to changes of the directionality of some specific arrows? Are the local conditional distributions associated with a specified node robust with respect to changes to its prior distribution, or with respect to changes to the local conditional distribution of another node? Most importantly, is the posterior distribution associated with the parameters of any node robust with respect to changes to the prior distribution associated with the parameters of one specific node? Finally, are the quantities mentioned above robust with respect to changes in the independence assumptions described in Chapter 3? Most of the local sensitivity measures (particularly, local measures of overall posterior sensitivity) developed in the last decade tend to diverge to infinity as the sample size becomes very large (Gustafson (1994) and Gustafson et al (1996)). This is in contrast to our knowledge that, starting from different priors, posteriors tend to agree as the data accumulate. Here we define a new class of metrics with more satisfactory asymptotic behaviour. The advantage of the corresponding local sensitivity measures is that they remain bounded for large sample sizes.
29

Estimation and confidence intervals for certain scale parameters

Ηλιόπουλος, Γεώργιος 23 October 2009 (has links)
No description available.
30

Bayesian methods for sparse data decomposition and blind source separation

Roussos, Evangelos January 2012 (has links)
In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or 'sources' via a generally unknown mapping. Reconstructing sources from their mixtures is an extremely ill-posed problem in general. However, solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. This thesis proposes the use of sparse statistical decomposition methods for exploratory analysis of datasets. We make use of the fact that many natural signals have a sparse representation in appropriate signal dictionaries. The work described in this thesis is mainly driven by problems in the analysis of large datasets, such as those from functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant 'maps' from the data. We first propose Bayesian Iterative Thresholding, a general method for solving blind linear inverse problems under sparsity constraints, and we apply it to the problem of blind source separation. The algorithm is derived by maximizing a variational lower bound on the likelihood. The algorithm generalizes the recently proposed method of Iterative Thresholding. The probabilistic view enables us to automatically estimate various hyperparameters, such as those that control the shape of the prior and the threshold, in a principled manner. We then derive an efficient fully Bayesian sparse matrix factorization model for exploratory analysis and modelling of spatio-temporal data such as fMRI. We view sparse representation as a problem in Bayesian inference, following a machine learning approach, and construct a structured generative latent-variable model employing adaptive sparsity-inducing priors. The construction allows for automatic complexity control and regularization as well as denoising. The performance and utility of the proposed algorithms is demonstrated on a variety of experiments using both simulated and real datasets. Experimental results with benchmark datasets show that the proposed algorithms outperform state-of-the-art tools for model-free decompositions such as independent component analysis.
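The non-Bayesian ancestor of Bayesian Iterative Thresholding, plain iterative soft-thresholding with a hand-tuned threshold, can be sketched as follows (assumed toy problem; the thesis's contribution is precisely that the threshold and prior shape need not be hand-tuned):

```python
import numpy as np

# Iterative soft-thresholding for a sparse linear inverse problem
# y = A x + noise (toy dimensions; the threshold lam is fixed by hand).
def ista(A, y, lam=0.1, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + A.T @ (y - A @ x) / L        # gradient step on 0.5*||y - Ax||^2
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0)  # soft threshold
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(60, 120)) / np.sqrt(60)  # under-determined mixing matrix
x_true = np.zeros(120)
x_true[[5, 40, 90]] = [3.0, -2.0, 4.0]        # sparse source
y = A @ x_true + rng.normal(0, 0.01, 60)

x_hat = ista(A, y)
print(sorted(np.nonzero(np.abs(x_hat) > 1.0)[0]))  # indices of large coefficients
```

Despite the system being under-determined, the sparsity-promoting threshold recovers the support of the source; wrapping this iteration in a generative probabilistic model is what allows the hyperparameters to be estimated rather than tuned.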
