61 |
Identification and photometric redshifts for type-I quasars with medium- and narrow-band filter surveys / Identificação e redshifts fotométricos para quasares do tipo-I com sistemas de filtros de bandas médias e estreitas. Silva, Carolina Queiroz de Abreu, 16 November 2015 (has links)
Quasars are valuable sources for several cosmological applications. In particular, they can be used to trace some of the most massive halos, and their high intrinsic luminosities allow them to be detected at high redshifts. This implies that quasars (or active galactic nuclei, in a more general sense) have a huge potential to map the large-scale structure. However, this potential has not yet been fully realized, because instruments that rely on broad-band imaging to pre-select spectroscopic targets usually miss most quasars and, consequently, are not able to properly separate broad-line emitting quasars from other point-like sources (such as stars and unresolved galaxies). This work is an initial attempt to investigate the realistic gains in the identification and separation of quasars and stars when medium- and narrow-band optical filters are employed. The main novelty of our approach is the use of Bayesian priors both for the angular distribution of stars of different types on the sky and for the distribution of quasars as a function of redshift. Since the evidence from these priors convolves the angular dependence of stars with the redshift dependence of quasars, it allows us to control for the near degeneracy between these objects. However, our results are inconclusive in quantifying the efficiency of star-quasar separation with this approach, and some critical refinements and improvements are still necessary.
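A minimal sketch of how such priors can enter a two-class Bayesian classifier is given below, assuming toy Gaussian color likelihoods, a schematic star prior that falls with Galactic latitude, and a schematic quasar redshift prior; every distribution, the color-redshift relation and all numbers are illustrative placeholders, not the survey's actual filters or templates.

```python
import numpy as np

# Toy Bayesian star/quasar separation: combine color likelihoods with an
# angular prior for stars and a redshift prior for quasars. Every
# distribution here is an illustrative placeholder.

def star_prior(gal_lat_deg):
    # Stellar density rises toward the Galactic plane (schematic).
    return np.exp(-np.abs(gal_lat_deg) / 40.0)

def quasar_redshift_prior(z):
    # Broad dN/dz peaking near z ~ 2 (schematic, unnormalized).
    return z**2 * np.exp(-z)

def color_likelihood(color, mu, sigma=0.1):
    return np.exp(-0.5 * ((color - mu) / sigma) ** 2)

def p_quasar(color, gal_lat_deg):
    z_grid = np.linspace(0.1, 5.0, 200)
    template_mu = 0.3 + 0.2 * z_grid      # toy color-redshift relation
    # Marginalize the quasar likelihood over redshift: the "convolution"
    # of color information with the redshift prior mentioned above.
    dz = z_grid[1] - z_grid[0]
    lq = np.sum(color_likelihood(color, template_mu)
                * quasar_redshift_prior(z_grid)) * dz
    ls = color_likelihood(color, mu=0.0) * star_prior(gal_lat_deg)
    return lq / (lq + ls)

print(p_quasar(color=0.70, gal_lat_deg=60.0))  # quasar-like color, high latitude
print(p_quasar(color=0.05, gal_lat_deg=5.0))   # star-like color, near the plane
```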
|
62 |
Construção de redes usando estatística clássica e Bayesiana - uma comparação / Building complex networks through classical and Bayesian statistics - a comparison. Lina Dornelas Thomas, 13 March 2012 (has links)
This research studies and compares two different ways of building complex networks. The main goal of our study is to find an effective way to build networks, particularly when we have fewer observations than variables. We construct networks by estimating the partial correlation coefficient under classical statistics (the inverse method) and under Bayesian statistics (a Normal-inverse-Wishart conjugate prior). To address the problem of having fewer observations than variables, we propose a new methodology, called local partial correlation, which consists of selecting, for each pair of variables, the other variables most correlated with the pair. We applied these methods to simulated data and compared them through ROC curves. The most attractive result is that, despite its high computational cost, Bayesian inference performs better when we have fewer observations than variables. In other cases, both approaches give satisfactory results.
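The "inverse method" mentioned above reads partial correlations off the inverse covariance matrix, which breaks down when there are fewer observations than variables; a conjugate inverse-Wishart prior keeps that inversion well defined. The sketch below illustrates both routes under stated assumptions (zero-mean data, an IW(I, p+2) prior, an arbitrary edge threshold); it is not the thesis code, and the local-partial-correlation selection step is omitted.

```python
import numpy as np

def partial_corr_from_precision(K):
    # Partial correlation: rho_ij = -K_ij / sqrt(K_ii * K_jj).
    d = np.sqrt(np.diag(K))
    rho = -K / np.outer(d, d)
    np.fill_diagonal(rho, 1.0)
    return rho

def classical_network(X, threshold=0.2):
    # "Inverse method": invert the sample covariance (needs n > p).
    rho = partial_corr_from_precision(np.linalg.inv(np.cov(X, rowvar=False)))
    return (np.abs(rho) > threshold) & ~np.eye(X.shape[1], dtype=bool)

def bayesian_network(X, threshold=0.2):
    # Conjugate inverse-Wishart prior IW(I, nu) on the covariance: the
    # posterior mean (S + I) / (n + nu - p - 1) stays invertible even
    # when n < p, the regime targeted above. Assumes zero-mean data.
    n, p = X.shape
    nu = p + 2
    post_mean = (X.T @ X + np.eye(p)) / (n + nu - p - 1)
    rho = partial_corr_from_precision(np.linalg.inv(post_mean))
    return (np.abs(rho) > threshold) & ~np.eye(p, dtype=bool)

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 50))      # fewer observations than variables
print(bayesian_network(X).sum())       # classical_network(X) is singular here
```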
|
63 |
Um ambiente computacional para um teste de significância bayesiano / A computational environment for a Bayesian significance test. Silvio Rodrigues de Faria Junior, 09 October 2006 (has links)
In 1999, Pereira and Stern [Pereira and Stern, 1999] introduced the Full Bayesian Significance Test (FBST), designed to provide a value of evidence supporting a precise hypothesis H. Despite its good conceptual properties and its ability to handle virtually any class of precise hypotheses under parametric models, the FBST has not achieved wide diffusion in the scientific community, owing to the absence of an integrated computational environment in which researchers can formulate and implement the test of interest. In this work we propose the implementation of a flexible computational environment for the FBST, able to handle a large class of problems, and present a case study on a classical problem in population genetics, the Hardy-Weinberg equilibrium law.
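For concreteness, the following Monte Carlo sketch computes an FBST e-value for the Hardy-Weinberg hypothesis under a Dirichlet(1,1,1) prior on the genotype probabilities: the evidence ev(H) is one minus the posterior mass of the set where the posterior density exceeds its supremum over H. The genotype counts are invented for illustration.

```python
import numpy as np
from scipy.stats import dirichlet
from scipy.optimize import minimize_scalar

# Monte Carlo sketch of the FBST e-value for Hardy-Weinberg equilibrium.
# Genotype counts (AA, Aa, aa) are multinomial with probabilities theta;
# a Dirichlet(1,1,1) prior gives a Dirichlet posterior. The precise
# hypothesis H: theta = (q^2, 2q(1-q), (1-q)^2) for some allele
# frequency q. The counts are invented for illustration.
counts = np.array([60.0, 40.0, 10.0])
alpha = counts + 1.0                         # Dirichlet posterior parameters

def neg_log_post_on_H(q):
    theta = np.array([q**2, 2 * q * (1 - q), (1 - q) ** 2])
    return -dirichlet.logpdf(theta, alpha)

# Supremum of the posterior density over H (one-dimensional search in q).
res = minimize_scalar(neg_log_post_on_H, bounds=(1e-6, 1 - 1e-6),
                      method="bounded")
log_sup_H = -res.fun

# ev(H) = 1 - posterior probability of the "tangential" set, i.e. of the
# region where the posterior density exceeds its supremum over H.
rng = np.random.default_rng(1)
samples = rng.dirichlet(alpha, size=50_000)
log_post = dirichlet.logpdf(samples.T, alpha)   # components along axis 0
print("FBST e-value:", 1.0 - np.mean(log_post > log_sup_H))
```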
|
64 |
[en] EVOLUTIONARY INFERENCE APPROACHES FOR ADAPTIVE MODELS / [pt] ABORDAGENS DE INFERÊNCIA EVOLUCIONÁRIA EM MODELOS ADAPTATIVOS. Edison Americo Huarsaya Tito, 17 July 2003 (has links)
[en] In many real-world signal processing applications, the phenomenon's observations arrive sequentially in time; consequently, the data analysis task involves estimating unknown quantities at each observation of the phenomenon. In most of these applications, however, some prior knowledge about the phenomenon being modeled is available. This prior knowledge allows us to formulate a Bayesian model, that is, a prior distribution for the unknown quantities and a likelihood function relating these quantities to the observations. Within this setting, Bayesian inference on the unknown quantities is based on the posterior distribution obtained from Bayes' theorem. Unfortunately, it is not always possible to obtain a closed-form analytical solution for this posterior distribution. Thanks to the advent of formidable computational power at low cost, together with recent developments in stochastic simulation, this problem has been overcome, since the posterior distribution can be approximated numerically by a discrete distribution formed by a set of samples. In this context, this work approaches the field of stochastic simulation from the viewpoint of Mendelian genetics and the evolutionary principle of the survival of the fittest. In this view, the set of samples that approximates the posterior distribution can be seen as a population of individuals trying to survive in a Darwinian environment, where the strongest individual is the one with the highest probability. Based on this analogy, we introduce into the stochastic simulation field (a) new definitions of transition kernels inspired by the genetic operators of crossover and mutation, and (b) new definitions of the acceptance probability inspired by the selection scheme used in Genetic Algorithms. The main contribution of this work is the establishment of an equivalence between Bayes' theorem and the evolutionary principle, which allows the development of a new search mechanism for the optimal solution of the unknown quantities, called evolutionary inference. Further contributions are (a) the Genetic Particle Filter, an evolutionary online learning algorithm, and (b) the Evolution Filter, an evolutionary batch learning algorithm. Moreover, we show that the Evolution Filter is in essence a Genetic Algorithm: besides its capacity to converge to probability distributions, it also converges to its global mode. As a consequence, the theoretical foundation of the Evolution Filter demonstrates, analytically, the convergence of Genetic Algorithms in continuous search spaces. The theoretical convergence analysis of the learning algorithms based on evolutionary inference, together with the results of numerical experiments, shows that this approach applies to real signal processing problems, since it allows the analysis of complex signals characterized by non-linear, non-Gaussian and non-stationary behaviors.
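A generic population-based sketch, close in flavor to evolutionary Monte Carlo rather than to the thesis's own Genetic Particle Filter or Evolution Filter, shows how mutation and crossover can serve as transition kernels with Metropolis-style selection: a mutation move perturbs one individual, a crossover move exchanges a coordinate between two individuals, and both are accepted with ratios that leave the posterior invariant. The bimodal target below is a stand-in.

```python
import numpy as np

# Evolutionary-Monte-Carlo-style sketch: a population samples a posterior
# via mutation (random-walk) and crossover (coordinate-swap) moves, each
# accepted with a Metropolis ratio so that high-posterior ("fitter")
# individuals persist. A generic sketch, not the thesis algorithm.

def log_target(x):
    # Bimodal 2-D posterior stand-in.
    return np.logaddexp(-0.5 * np.sum((x - 2.0) ** 2),
                        -0.5 * np.sum((x + 2.0) ** 2))

rng = np.random.default_rng(2)
pop = rng.standard_normal((20, 2))          # population of 20 individuals

for _ in range(5000):
    if rng.random() < 0.8:                  # mutation move
        i = rng.integers(len(pop))
        prop = pop[i] + 0.5 * rng.standard_normal(2)
        if np.log(rng.random()) < log_target(prop) - log_target(pop[i]):
            pop[i] = prop
    else:                                   # crossover: swap first coordinate
        i, j = rng.choice(len(pop), size=2, replace=False)
        a, b = pop[i].copy(), pop[j].copy()
        a[0], b[0] = b[0], a[0]
        log_r = (log_target(a) + log_target(b)
                 - log_target(pop[i]) - log_target(pop[j]))
        if np.log(rng.random()) < log_r:
            pop[i], pop[j] = a, b

print("population mean:", pop.mean(axis=0))  # individuals cover both modes
```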
|
65 |
Terrain Aided Underwater Navigation using Bayesian Statistics / Terrängstöttad undervattensnavigering baserad på Bayesiansk statistik. Karlsson, Tobias, January 2002 (has links)
For many years, terrain navigation has been successfully used in military airborne applications. Terrain navigation can substantially improve the performance of traditional inertial navigation, which is typically built around gyros and accelerometers measuring kinetic state changes. Although inertial systems benefit from their high independence, they suffer from growing error due to the accumulation of continuous measurement errors. Undersea, the number of options for navigation support is fairly limited. Still, the navigation accuracy demands on autonomous underwater vehicles are increasing. For many military applications, surfacing to receive a GPS position update is not an option. Lately, some attention has instead shifted towards terrain aided navigation. One fundamental aim of this work has been to show what can be done within the field of terrain aided underwater navigation using relatively simple means. A concept has been built around a narrow-beam altimeter measuring the depth directly beneath the vehicle as it moves ahead. To estimate the vehicle location based on the depth measurements, a particle filter algorithm has been implemented. A number of MATLAB simulations have given a qualitative evaluation of the chosen algorithm. In order to acquire data from actual underwater terrain, a small area of the Swedish lake Vättern has been charted. Results from simulations on these data strongly indicate that the particle filter performs surprisingly well, even within areas of relatively modest terrain variation.
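A minimal bootstrap particle filter along these lines can be sketched in a few lines; the synthetic depth map, motion model and noise levels below are invented stand-ins for the charted Lake Vättern data and the actual altimeter.

```python
import numpy as np

# Bootstrap particle filter for terrain-aided navigation: particles carry
# candidate positions, are propagated by dead reckoning plus noise, and
# are weighted by how well the charted depth at each particle matches the
# altimeter ping. The depth map and noise levels are invented.
rng = np.random.default_rng(3)

def depth_map(x, y):
    # Smooth synthetic bathymetry standing in for the charted lake bed.
    return 20.0 + 5.0 * np.sin(0.1 * x) + 3.0 * np.cos(0.07 * y)

n, steps, meas_sigma = 2000, 50, 0.5
true_pos = np.array([10.0, 10.0])
velocity = np.array([1.0, 0.5])                   # dead-reckoned motion
particles = rng.uniform(0.0, 100.0, size=(n, 2))  # uniform prior over area

for _ in range(steps):
    true_pos = true_pos + velocity
    z = depth_map(*true_pos) + rng.normal(0.0, meas_sigma)       # ping

    particles += velocity + rng.normal(0.0, 0.2, size=(n, 2))    # predict
    w = np.exp(-0.5 * ((depth_map(particles[:, 0], particles[:, 1]) - z)
                       / meas_sigma) ** 2)                       # weight
    particles = particles[rng.choice(n, size=n, p=w / w.sum())]  # resample

print("true position:", true_pos, "estimate:", particles.mean(axis=0))
```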
|
66 |
The use of Bayesian confidence propagation neural network in pharmacovigilance. Bate, Andrew, January 2003 (has links)
The WHO database contains more than 2.8 million case reports of suspected adverse drug reactions reported from 70 countries worldwide since 1968. The Uppsala Monitoring Centre maintains and analyses this database for new signals on behalf of the WHO Programme for International Drug Monitoring. A goal of the Programme is to detect signals, where a signal is defined as "Reported information on a possible causal relationship between an adverse event and a drug, the relationship being unknown or incompletely documented previously." The analysis of such a large amount of data on a case-by-case basis is impossible with the resources available. Therefore a quantitative data mining procedure has been developed to improve the focus of the clinical signal detection process. The method used is referred to as the BCPNN (Bayesian Confidence Propagation Neural Network). It not only assists in the early detection of adverse drug reactions (ADRs) but also supports further analysis of such signals. The method uses Bayesian statistical principles to quantify apparent dependencies in the data set, measuring the degree to which a specific drug-ADR combination differs from a background (in this case the WHO database). The measure of disproportionality used is referred to as the Information Component (IC) because of its origins in information theory. A confidence interval is calculated for the IC of each combination. A neural network approach allows all drug-ADR combinations in the database to be analysed in an automated manner. Evaluations of the effectiveness of the BCPNN in signal detection are described. To assess how a drug association compares in unexpectedness to related drugs that might be used for the same clinical indication, the method is extended to groups of drugs. The benefits and limitations of this approach are discussed with examples of known group effects (ACE inhibitors and coughing; antihistamines and heart rate and rhythm disorders). An example of a clinically important, novel signal found using the BCPNN approach is also presented: the signal of antipsychotics linked with heart muscle disorder was detected using the BCPNN and reported. The BCPNN is now routinely used in signal detection to search single-drug, single-ADR combinations. The extension of the BCPNN to discover 'unexpected' complex dependencies between groups of drugs and adverse reactions is described. A recurrent neural network method has been developed for finding complex patterns in incomplete and noisy data sets. The method is demonstrated on an artificial test set, and implementation on real data is demonstrated by examining the pattern of adverse reactions highlighted for the drug haloperidol. Clinically important, complex relationships in this kind of data were previously unexplored. The BCPNN method has been shown and tested for use in routine signal detection, in refining signals and in finding complex patterns. The usefulness of the output is influenced by the quality of the data in the database; therefore, this method should be used to detect, rather than evaluate, signals. The need for clinical analyses of case series remains crucial.
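As a hedged illustration of the disproportionality measure, the sketch below draws the 2x2 drug-by-ADR cell probabilities from a Dirichlet posterior and computes Monte Carlo draws of IC = log2(p(drug, ADR) / (p(drug) p(ADR))). The thesis's exact prior specification and the closed-form interval used in practice differ in detail, and the counts are invented.

```python
import numpy as np

# Information Component with a Monte Carlo credible interval: a Dirichlet
# posterior over the 2x2 drug x ADR contingency table yields draws of
# IC = log2( p(drug, adr) / (p(drug) * p(adr)) ). Counts are invented.
#                 [drug & adr, drug only, adr only, neither]
counts = np.array([25, 1_000, 2_000, 500_000])

rng = np.random.default_rng(4)
draws = rng.dirichlet(counts + 1.0, size=50_000)
p11, p10, p01 = draws[:, 0], draws[:, 1], draws[:, 2]
ic = np.log2(p11 / ((p11 + p10) * (p11 + p01)))

lo, med, hi = np.percentile(ic, [2.5, 50.0, 97.5])
print(f"IC = {med:.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
# A lower bound above zero flags the pair as disproportionally reported.
```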
|
67 |
Signal decompositions using trans-dimensional Bayesian methods. Roodaki, Alireza, 14 May 2012 (has links) (PDF)
This thesis addresses the challenges encountered when dealing with signal decomposition problems with an unknown number of components in a Bayesian framework. In particular, we focus on the issue of summarizing the variable-dimensional posterior distributions that typically arise in such problems. Such posterior distributions are defined over a union of subspaces of differing dimensionality and can be sampled from using modern Monte Carlo techniques, for instance the increasingly popular Reversible-Jump MCMC (RJ-MCMC) sampler. No generic approach is available, however, to summarize the resulting variable-dimensional samples and extract from them component-specific parameters. One of the main challenges to be addressed to this end is the label-switching issue, caused by the invariance of the posterior distribution under permutation of the components. We propose a novel approach to this problem, which consists in approximating the complex posterior of interest by a "simple", but still variable-dimensional, parametric distribution. We develop stochastic EM-type algorithms, driven by the RJ-MCMC sampler, to estimate the parameters of the model through the minimization of a divergence measure between the two distributions. Two signal decomposition problems are considered to show the capability of the proposed approach both for relabeling and for summarizing variable-dimensional posterior distributions: the classical problem of detecting and estimating sinusoids in white Gaussian noise on the one hand, and a particle counting problem motivated by the Pierre Auger project in astrophysics on the other hand.
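A toy computation makes the label-switching invariance tangible: for a two-sinusoid model, the likelihood, and hence the posterior, is unchanged when the components are permuted, so every posterior mode appears in symmetric copies and raw MCMC samples cannot be read component by component without relabeling. All parameters below are arbitrary.

```python
import numpy as np

# Label switching in a "two sinusoids in white Gaussian noise" model:
# the likelihood is identical under permutation of the components, so
# the posterior has symmetric copies of every mode.
t = np.arange(100)

def log_lik(freqs, amps, y, sigma=1.0):
    model = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))
    return -0.5 * np.sum((y - model) ** 2) / sigma**2

rng = np.random.default_rng(5)
y = (1.0 * np.sin(2 * np.pi * 0.10 * t)
     + 0.5 * np.sin(2 * np.pi * 0.23 * t)
     + rng.normal(0.0, 1.0, t.size))

print(log_lik([0.10, 0.23], [1.0, 0.5], y))  # component order (1, 2)
print(log_lik([0.23, 0.10], [0.5, 1.0], y))  # swapped order, same value
```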
|
70 |
Bayesian Adjustment for Multiplicity. Scott, James Gordon, January 2009 (has links)
This thesis is about Bayesian approaches for handling multiplicity. It considers three main kinds of multiple-testing scenarios: tests of exchangeable experimental units, tests for variable inclusion in linear regression models, and tests for conditional independence in jointly normal vectors. Multiplicity adjustment in these three areas will be seen to have many common structural features. Though the modeling approach throughout is Bayesian, frequentist reasoning regarding error rates will often be employed. Chapter 1 frames the issues in the context of historical debates about Bayesian multiplicity adjustment. Chapter 2 confronts the problem of large-scale screening of functional data, where control over Type-I error rates is a crucial issue. Chapter 3 develops new theory for comparing Bayes and empirical-Bayes approaches for multiplicity correction in regression variable selection. Chapters 4 and 5 describe new theoretical and computational tools for Gaussian graphical-model selection, where multiplicity arises in performing many simultaneous tests of pairwise conditional independence. Chapter 6 introduces a new approach to sparse-signal modeling based upon local shrinkage rules. Here the focus is not on multiplicity per se, but rather on using ideas from Bayesian multiple-testing models to motivate a new class of multivariate scale-mixture priors. Finally, Chapter 7 describes some directions for future study, many of which are the subjects of my current research agenda.
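A small two-groups sketch conveys the flavor of the automatic Bayesian adjustment: placing a Uniform(0,1) prior on a shared inclusion probability w lets a growing number of null tests pull w, and with it every posterior inclusion probability, downward. The normal two-groups model and all numbers below are illustrative assumptions, not the thesis's models.

```python
import numpy as np
from scipy.stats import norm

# Two-groups sketch of automatic Bayesian multiplicity adjustment: each
# z_i is null N(0,1) or signal N(0, signal_sd^2); all tests share an
# unknown inclusion probability w with a flat prior. More nulls shrink
# the posterior on w and with it every inclusion probability.

def posterior_inclusion(z, signal_sd=3.0):
    grid = np.linspace(1e-3, 1.0 - 1e-3, 500)     # grid over w
    f0, f1 = norm.pdf(z, 0.0, 1.0), norm.pdf(z, 0.0, signal_sd)
    # log marginal likelihood of the whole data set at each w
    logm = np.array([np.sum(np.log(w * f1 + (1 - w) * f0)) for w in grid])
    post_w = np.exp(logm - logm.max())
    post_w /= post_w.sum()                        # posterior over w
    # P(signal_i | data) = E_w[ w f1 / (w f1 + (1 - w) f0) ]
    w = grid[:, None]
    return post_w @ (w * f1 / (w * f1 + (1 - w) * f0))

rng = np.random.default_rng(6)
for n_nulls in (10, 1000):
    z = np.concatenate([[4.0], rng.normal(0.0, 1.0, n_nulls)])
    print(n_nulls, "nulls: P(signal | z=4) =",
          round(posterior_inclusion(z)[0], 3))
```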
|