11 |
On risk-coherent input design and Bayesian methods for nonlinear system identification / Valenzuela Pacheco, Patricio E. January 2016 (has links)
System identification deals with the estimation of mathematical models from experimental data. As mathematical models are built for specific purposes, ensuring that the estimated model represents the system with sufficient accuracy is a relevant aspect of system identification. Factors affecting the accuracy of the estimated model include the experimental data, the manner in which the estimation method accounts for prior knowledge about the system, and the uncertainties arising when designing the experiment and initializing the search performed by the estimation method. As the accuracy of the estimated model depends on factors that can be influenced by the user, it is important to guarantee that the user's decisions are optimal. Hence, it is of interest to explore how to optimally perform an experiment on the system, how to account for prior knowledge about the system, and how to deal with uncertainties that can potentially degrade the model accuracy. This thesis is divided into three topics. The first contribution concerns an input design framework for the identification of nonlinear dynamical models. The method designs an input as a realization of a stationary Markov process. As the true system description is uncertain, the resulting optimization problem takes the uncertainty about the true value of the parameters into account. The stationary distribution of the Markov process is designed over a prescribed set of marginal cumulative distribution functions associated with stationary processes. By restricting the input alphabet to be a finite set, the feasible set can be parametrized using graph-theoretical tools. Based on this graph-theoretical framework, the problem formulation turns out to be convex in the decision variables. The method is then illustrated by an application to model estimation of systems with quantized measurements. The second contribution of this thesis is on Bayesian techniques for input design and estimation of dynamical models. Regarding input design, we explore the application of Bayesian optimization methods to input design for the identification of nonlinear dynamical models. By imposing a Gaussian process prior over the scalar cost function of the Fisher information matrix, the method iteratively computes the predictive posterior distribution based on samples of the feasible set. To drive the exploration of this set, a user-defined acquisition function selects, at every iteration, the sample used to update the predictive posterior distribution. In this sense, the method tries to explore the feasible space only in those regions where an improvement in the cost function is expected. Regarding the estimation of dynamical models, this thesis discusses a Bayesian framework to account for prior information about the model parameters when estimating linear time-invariant dynamical models. Specifically, we discuss how to encode information about the model complexity through a prior distribution over the Hankel singular values of the model. Given the prior distribution and the likelihood function, the posterior distribution is approximated by means of a Metropolis-Hastings sampler. Finally, the existence of the posterior distribution and the correctness of the Metropolis-Hastings sampler are analyzed and established. As the last contribution of this thesis, we study the problem of uncertainty in system identification, with special focus on input design.
By adopting a risk-theoretical perspective, we show how uncertainty can be handled in the problems arising in input design. In particular, we introduce the notion of a coherent measure of risk and its use in the input design formulation to account for the uncertainty about the true system description. The discussion also introduces the conditional value at risk, which is a coherent risk measure accounting for the mean behavior of the cost function over the undesired cases. Coherent risk measures are also employed in application-oriented input design, where the input is designed to achieve a prescribed performance in the intended model application.
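The conditional value at risk mentioned above has a simple sample-based form. The sketch below is a minimal, hypothetical illustration (not code from the thesis): it estimates the CVaR of a scalar design cost from Monte Carlo samples drawn under an assumed parameter prior. Risk-coherent input design then optimizes the design variables against this tail average rather than the plain expectation.

```python
import numpy as np

def cvar(costs, alpha=0.9):
    """Sample-based conditional value at risk: the mean cost over the
    worst (1 - alpha) fraction of outcomes."""
    costs = np.sort(np.asarray(costs, dtype=float))
    var = np.quantile(costs, alpha)      # value at risk at level alpha
    tail = costs[costs >= var]           # the undesired (worst) cases
    return tail.mean()

# Toy use: cost of a fixed input design evaluated over sampled parameter values
rng = np.random.default_rng(0)
theta = rng.normal(1.0, 0.3, size=5_000)   # uncertainty about the true parameters
costs = (theta - 1.2) ** 2                 # hypothetical scalar design cost per sample
print(cvar(costs, alpha=0.9))
```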
|
12 |
Métodos bayesianos em metanálise: especificação da distribuição a priori para a variabilidade entre os estudos / Bayesian methods in meta-analysis: specification of prior distributions for the between-studies variability / Mazin, Suleimy Cristina. 27 November 2009 (has links)
MAZIN, S. C. Bayesian methods in meta-analysis: specification of prior distributions for the between-studies variability. 2009. 175 p. Dissertation (Master's degree) - Faculty of Medicine of Ribeirão Preto, University of São Paulo, Ribeirão Preto, 2009. Health professionals, researchers and others responsible for health policy are often overwhelmed by amounts of information that cannot always be managed, which makes the systematic review an efficient way to integrate existing knowledge and generate information that helps decision making. In a systematic review, data from different studies can be quantitatively combined by statistical methods called meta-analysis. Meta-analysis is a statistical tool used to combine or integrate the results of several independent studies on the same topic. Among the studies that make up a meta-analysis there may be variability that is not due to chance, called heterogeneity. Heterogeneity is usually tested with the Q test or quantified by the I² statistic. The investigation of heterogeneity in meta-analysis is of great importance because its absence or presence indicates the most appropriate statistical model: in the absence of this variability a fixed-effect model is used, while in its presence a random-effects model is used to incorporate the between-studies variability into the meta-analysis. Many meta-analyses are composed of few studies, and in those cases it is difficult to estimate the meta-analytic effect measures with classical theory because of its asymptotic assumptions. The Bayesian approach does not suffer from this problem, but the specification of the prior distribution requires great care. One advantage of Bayesian inference is the ability to predict an outcome for a future study. In this work we carried out a study on the specification of the prior distribution for the parameter that expresses the variance between studies, and found that there is no single choice of prior distribution that can be considered uninformative in all situations. The choice of an uninformative prior distribution depends on the heterogeneity among the studies in the meta-analysis. Thus, the prior distribution should be chosen very carefully and followed by a sensitivity analysis, especially when the number of studies is small.
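As a concrete illustration of the heterogeneity quantities mentioned above, the short sketch below (illustrative only, with made-up study data, not taken from the dissertation) computes Cochran's Q and the I² statistic from study effects and their variances, the usual first step before choosing between a fixed-effect and a random-effects model.

```python
import numpy as np

def q_and_i2(effects, variances):
    """Cochran's Q statistic and the I^2 heterogeneity statistic for k studies."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)       # fixed-effect pooled estimate
    q = np.sum(w * (effects - pooled) ** 2)
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Hypothetical log odds ratios and their variances from five studies
q, i2 = q_and_i2([0.20, 0.35, -0.10, 0.50, 0.15], [0.04, 0.06, 0.05, 0.08, 0.03])
print(f"Q = {q:.2f}, I2 = {i2:.1f}%")
```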
|
14 |
Statistical mechanical models for image processing. 16 October 2001 (has links) (PDF)
No description available.
|
15 |
A Geometric Approach for Inference on Graphical Models / Lunagomez, Simon. January 2009 (has links)
We formulate a novel approach to infer conditional independence models or the Markov structure of a multivariate distribution. Specifically, our objective is to place informative prior distributions over graphs (decomposable and unrestricted) and sample efficiently from the induced posterior distribution. We also explore the idea of factorizing according to complete sets of a graph, which implies working with a hypergraph that cannot be retrieved from the graph alone. The key idea we develop in this paper is a parametrization of hypergraphs using the geometry of points in $R^m$. This induces informative priors on graphs from specified priors on finite sets of points. Constructing hypergraphs from finite point sets has been well studied in the fields of computational topology and random geometric graphs. We develop the framework underlying this idea and illustrate its efficacy using simulations. / Dissertation
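One way to read the geometric construction described above is through random geometric graphs: points placed in $R^m$ induce a graph by connecting every pair within a given radius, so a prior on the point configuration induces a prior on graphs. The sketch below is a minimal, hypothetical illustration of that mapping, not the authors' construction.

```python
import numpy as np

def geometric_graph(points, radius):
    """Adjacency matrix of the graph obtained by joining points within `radius`."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    return (dist <= radius) & ~np.eye(len(points), dtype=bool)

# A prior on finite point sets (here uniform on the unit square) induces a prior on graphs
rng = np.random.default_rng(1)
points = rng.uniform(size=(8, 2))          # 8 vertices embedded in R^2
print(geometric_graph(points, radius=0.4).astype(int))
```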
|
16 |
Robust manufacturing system design using Petri nets and Bayesian methods / Sharda, Bikram. 10 October 2008 (has links)
Manufacturing system design decisions are costly and involve significant investment in terms of allocation of resources. These decisions are complex, due to uncertainties related to uncontrollable factors such as processing times and part demands. Designers often need to find a robust manufacturing system design that meets certain objectives under these uncertainties. Failure to find a robust design can lead to expensive consequences in terms of lost sales and high production costs. In order to find a robust design configuration, designers need accurate methods to model various uncertainties and efficient ways to search for feasible configurations.

The dissertation work uses a multi-objective Genetic Algorithm (GA) and a Petri net based modeling framework for robust manufacturing system design. The Petri nets are coupled with Bayesian Model Averaging (BMA) to capture uncertainties associated with uncontrollable factors. BMA provides a unified framework to capture the model, parameter and stochastic uncertainties associated with the representation of various manufacturing activities, and it overcomes limitations associated with uncertainty representation using the classical methods presented in the literature. Petri net based modeling is used to capture interactions among various subsystems and operation precedence, and to identify bottleneck or conflicting situations. When coupled with Bayesian methods, Petri nets provide an accurate assessment of manufacturing system dynamics and performance in the presence of uncertainties. A multi-objective Genetic Algorithm (GA) is used to search over manufacturing system designs, allowing designers to consider multiple objectives. The dissertation work provides algorithms for integrating Bayesian methods with Petri nets. Two manufacturing system design examples are presented to demonstrate the proposed approach; the results obtained using Bayesian methods are compared with classical methods, and the effect of choosing different types of priors is evaluated.

In summary, the dissertation provides a new, integrated Petri net based modeling framework coupled with a BMA-based approach for the modeling and performance analysis of manufacturing system designs. The dissertation work allows designers to obtain accurate performance estimates of design configurations by considering the model, parameter and stochastic uncertainties associated with the representation of uncontrollable factors. The multi-objective GA coupled with Petri nets provides a flexible and time-saving approach for searching and evaluating alternative manufacturing system designs.
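To make the Bayesian Model Averaging step concrete, the sketch below (a simplified, hypothetical illustration using a crude plug-in approximation of the marginal likelihood, not the dissertation's procedure) weights two candidate processing-time distributions by approximate posterior model probabilities and averages their predictions.

```python
import numpy as np
from scipy import stats

# Hypothetical observed processing times for one manufacturing operation
times = np.array([4.1, 5.0, 4.6, 5.3, 4.8, 4.4])

# Two candidate models for the processing-time distribution, with equal prior probability
models = {
    "normal": stats.norm(loc=times.mean(), scale=times.std(ddof=1)),
    "gamma": stats.gamma(*stats.gamma.fit(times, floc=0)),
}

# Crude plug-in approximation of each model's marginal likelihood (log-likelihood at the fit)
log_lik = np.array([m.logpdf(times).sum() for m in models.values()])
weights = np.exp(log_lik - log_lik.max())
weights /= weights.sum()

# Model-averaged prediction of the mean processing time
bma_mean = sum(w * m.mean() for w, m in zip(weights, models.values()))
print(dict(zip(models, weights)), bma_mean)
```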
|
17 |
Bayesian Approach to Three-Arm Non Inferiority Trials / Britton, Marcus Chenier. 03 May 2007 (links)
In non-inferiority trials, the goal is to show that an experimental treatment is statistically and clinically not inferior to the active control. The three-arm clinical trial is usually recommended for non-inferiority trials by the FDA. The three-arm trial consists of a placebo, a reference, and an experimental treatment. The three-arm trial establishes the superiority of the reference over the placebo and allows comparison of the reference with the experimental treatment. In this paper, I will assess a non-inferiority trial with Bayesian methods. By employing Bayesian analysis, the parameters are treated as random and assigned vague prior distributions. I will compare models involving different prior distributions to assess the best-fitting model.
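For intuition, a minimal Monte Carlo sketch of this kind of three-arm comparison is given below. The data, the non-inferiority margin, and the normal posterior approximation (a vague-prior, known-variance shortcut) are all assumptions made for illustration, not the model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical summary data (mean, sd, n) for the placebo, reference and experimental arms
arms = {"placebo": (1.0, 2.0, 60), "reference": (3.0, 2.0, 60), "experimental": (2.7, 2.0, 60)}

# Vague-prior normal approximation to each arm-mean posterior: N(xbar, sd^2 / n)
post = {name: rng.normal(m, s / np.sqrt(n), size=50_000) for name, (m, s, n) in arms.items()}

# Assay sensitivity: posterior probability that the reference beats the placebo
p_superiority = np.mean(post["reference"] > post["placebo"])

# Non-inferiority: experimental retains at least a fraction f of the reference-vs-placebo effect
f = 0.8
p_noninferior = np.mean(post["experimental"] - post["placebo"] > f * (post["reference"] - post["placebo"]))

print(p_superiority, p_noninferior)
```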
|
18 |
Power of QTL mapping of different genome-wide association methods for traits under different genetic structures: a simulation study / Poder de mapear QTL de diferentes métodos de associação genômica ampla para características com diferentes estruturas genéticas: estudo de simulação / Garcia Neto, Baltasar Fernandes. 27 February 2018 (links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / The complexity of traits, which can present different genetic structures, for example polygenic action or the effect of major genes, together with different heritabilities, among other factors, makes the detection of QTLs challenging. Several methods have been employed to perform genome-wide association studies (GWAS) aimed at mapping QTL. The weighted single-step GBLUP (wssGBLUP) method, for example, is an alternative for performing GWAS that allows the simultaneous use of genotypic, pedigree and phenotypic information, even from non-genotyped animals. Bayesian methods are also used to perform GWAS, starting from the basic premise that the variance may differ at each locus according to a specific prior distribution. The objective of the present study was to evaluate, through simulation, which of the evaluated methods best assists in the identification of QTLs for polygenic traits and for traits affected by major genes, under different heritabilities. We used the following methods: wssGBLUP, with or without additional phenotypic information from non-genotyped animals and with two different weights for the markers, where w1 assigned the same weight to every marker (w1 = 1) and w2 was calculated from the previous iteration (w1); Bayes C, assuming two values for π (π = 0.99 and π = 0.999), where π is the proportion of SNPs not included in the model; and the Bayesian LASSO. The results showed that for polygenic scenarios the detection power is lower, and the additional use of phenotypes from non-genotyped animals may help detection, although only slightly. For scenarios with a major-effect gene, all methods showed greater power to detect QTL than in the polygenic scenarios, with a slightly superior performance for the Bayes C method. However, the inclusion of additional phenotypic information caused bias in the estimates and harmed the performance of wssGBLUP in the presence of a major QTL. Increasing the heritability improved the performance of the methods and the mapping power for both genetic structures. The most suitable method for the detection of QTL depends on the genetic structure and the heritability of the trait, and no single method is superior in all scenarios.
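As a rough illustration of the marker-weighting idea behind wssGBLUP, the sketch below updates SNP weights from previously estimated effects. The specific rule shown, squared effect times heterozygosity with a normalization to an average weight of one, is a commonly used choice and is an assumption here, not necessarily the exact rule used in the thesis.

```python
import numpy as np

def update_snp_weights(effects, allele_freq):
    """Weight each SNP by its estimated effect variance, u_j^2 * 2 p_j (1 - p_j),
    rescaled so the weights average to one (a common wssGBLUP-style update)."""
    effects = np.asarray(effects, dtype=float)
    p = np.asarray(allele_freq, dtype=float)
    w = effects ** 2 * 2.0 * p * (1.0 - p)
    return w * len(w) / w.sum()

# Hypothetical SNP effects and allele frequencies estimated in the previous round (w1 = 1)
w2 = update_snp_weights([0.01, -0.30, 0.02, 0.15], [0.5, 0.2, 0.4, 0.3])
print(w2)
```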
|
19 |
Bayesian Visual Analytics: Interactive Visualization for High Dimensional Data / Han, Chao. 07 December 2012 (links)
In light of advancements made in data collection techniques over the past two decades, data mining has become common practice for summarizing large, high dimensional datasets, in hopes of discovering noteworthy data structures. However, one concern is that most data mining approaches rely upon strict criteria that may mask information in the data that analysts may find useful. We propose a new approach called Bayesian Visual Analytics (BaVA) which merges Bayesian statistics with visual analytics to address this concern. The BaVA framework enables experts to interact with the data and the feature discovery tools by modeling the "sense-making" process using Bayesian sequential updating. In this paper, we use the BaVA idea to enhance high dimensional visualization techniques such as Probabilistic PCA (PPCA). However, for real-world datasets, important structures can be arbitrarily complex and a single data projection such as the PPCA technique may fail to provide useful insights. One way to visualize such a dataset is to characterize it by a mixture of local models. For example, Tipping and Bishop [Tipping and Bishop, 1999] developed an algorithm called Mixture Probabilistic PCA (MPPCA) that extends PCA to visualize data via a mixture of projectors. Based on MPPCA, we developed a new visualization algorithm called Covariance-Guided MPPCA, which groups clusters with similar covariance structure together to provide more meaningful and cleaner visualizations. Another way to visualize a very complex dataset is to use nonlinear projection methods such as the Generative Topographic Mapping (GTM) algorithm. We developed an interactive version of GTM to discover interesting local data structures. We demonstrate the performance of our approaches using both synthetic and real datasets and compare our algorithms with existing ones. / Ph. D.
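The probabilistic PCA projection used as the starting point above has a closed-form maximum-likelihood solution (Tipping and Bishop). The sketch below, written purely for illustration and not taken from the dissertation, recovers a two-dimensional latent projection of a data matrix using that solution.

```python
import numpy as np

def ppca_project(X, q=2):
    """Maximum-likelihood probabilistic PCA: posterior latent means of X in q dimensions."""
    Xc = X - X.mean(axis=0)
    n, d = Xc.shape
    _, s, Vt = np.linalg.svd(Xc / np.sqrt(n), full_matrices=False)
    eigvals = s ** 2                                 # eigenvalues of the sample covariance
    sigma2 = eigvals[q:].mean()                      # ML noise variance (mean of discarded eigenvalues)
    W = Vt[:q].T * np.sqrt(eigvals[:q] - sigma2)     # ML loadings: eigvecs_q (lambda_q - sigma2)^(1/2)
    M = W.T @ W + sigma2 * np.eye(q)
    return Xc @ W @ np.linalg.inv(M)                 # posterior means <z_n> = M^{-1} W^T (x_n - mu)

# Toy use on hypothetical data
rng = np.random.default_rng(3)
Z = ppca_project(rng.normal(size=(100, 5)), q=2)
print(Z.shape)  # (100, 2)
```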
|
20 |
Bayesian Pollution Source Apportionment Incorporating Multiple Simultaneous Measurements / Christensen, Jonathan Casey. 12 March 2012 (PDF)
We describe a method to estimate pollution profiles and contribution levels for distinct prominent pollution sources in a region, based on daily pollutant concentration measurements from multiple measurement stations over a period of time. In an extension of existing work, we estimate common source profiles but distinct contribution levels based on the measurements from each station. In addition, we explore the possibility of extending existing work to allow adjustments for synoptic regimes, that is, large-scale weather patterns which may affect the amount of pollution measured from individual sources as well as for particular pollutants. For both extensions we propose Bayesian methods to estimate pollution source profiles and contributions.
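The receptor model behind this kind of source apportionment is typically a nonnegative factorization: measured concentrations are approximately station-specific source contributions multiplied by shared source profiles. The sketch below is a hypothetical forward simulation of that structure (the dimensions, priors and noise model are assumptions for illustration, not the paper's model); it shows the quantity a Bayesian sampler would invert.

```python
import numpy as np

rng = np.random.default_rng(4)
n_days, n_sources, n_species, n_stations = 30, 3, 10, 2

# Shared source profiles: each row is a composition over pollutant species (rows sum to 1)
profiles = rng.dirichlet(np.ones(n_species), size=n_sources)

# Station-specific daily contributions from each source (nonnegative)
contributions = rng.gamma(shape=2.0, scale=1.0, size=(n_stations, n_days, n_sources))

# Measured concentrations: contributions x profiles, perturbed by multiplicative noise
measurements = (contributions @ profiles) * rng.lognormal(0.0, 0.1, size=(n_stations, n_days, n_species))

print(measurements.shape)  # (2, 30, 10)
```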
|