  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Approaches to Find the Functionally Related Experiments Based on Enrichment Scores: Infinite Mixture Model Based Cluster Analysis for Gene Expression Data

Li, Qian 18 October 2013 (has links)
No description available.
32

Bayesian Nonparametric Methods with Applications in Longitudinal, Heterogeneous and Spatiotemporal Data

Duan, Li 19 October 2015 (has links)
No description available.
33

Semi-parametric Bayesian Models Extending Weighted Least Squares

Wang, Zhen 31 August 2009 (has links)
No description available.
34

A Bayesian Semi-parametric Model for Realized Volatility

Feng, Tian 10 1900 (has links)
Due to advances in computing power and the availability of high-frequency data, the analysis of high-frequency stock data and market microstructure has become increasingly important in econometrics. In the high-frequency setting, volatility is a key indicator of stock-price movement and a measure of risk; it is a central input to asset pricing, portfolio reallocation, and risk management. In this thesis, we use the Heterogeneous Autoregressive (HAR) model of realized volatility, combined with Bayesian inference and Markov chain Monte Carlo methods, to estimate the innovation density of daily realized volatility. A Dirichlet process is used as the prior in a countably infinite mixture model. The semi-parametric model provides a robust alternative to the models used in the literature. I find evidence of thick tails in the density of innovations to log-realized volatility. / Master of Science (MSc)
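As a rough illustration of the model class, the sketch below builds the HAR regressors (daily, weekly and monthly averages of log realized volatility) and fits them by ordinary least squares on simulated data. This is a classical Gaussian-error HAR fit only; the thesis's Dirichlet-process mixture for the innovation density is not reproduced here, and all data and settings are synthetic.

```python
# Minimal HAR-RV sketch: construct the heterogeneous autoregressive
# regressors and fit by OLS. The Bayesian semi-parametric innovation
# density used in the thesis is NOT implemented; data are simulated.
import math
import random

random.seed(0)
log_rv = [math.log(1e-4)]                     # initial log realized variance
for _ in range(499):                          # persistent toy log-RV series
    log_rv.append(0.95 * log_rv[-1] + 0.05 * math.log(1e-4)
                  + random.gauss(0, 0.3))

def har_design(y, t):
    """HAR regressors at day t: daily, weekly (5-day), monthly (22-day) averages."""
    daily = y[t - 1]
    weekly = sum(y[t - 5:t]) / 5
    monthly = sum(y[t - 22:t]) / 22
    return [1.0, daily, weekly, monthly]

X = [har_design(log_rv, t) for t in range(22, len(log_rv))]
Y = log_rv[22:]

def lstsq(X, Y):
    """Solve the normal equations X'X b = X'Y by Gauss-Jordan elimination."""
    k = len(X[0])
    A = [[sum(x[i] * x[j] for x in X) for j in range(k)]
         + [sum(x[i] * y for x, y in zip(X, Y))] for i in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(A[r][c]))   # partial pivot
        A[c], A[p] = A[p], A[c]
        for r in range(k):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[i][k] / A[i][i] for i in range(k)]

beta = lstsq(X, Y)       # intercept, daily, weekly, monthly coefficients
print(len(beta))
```

The sum of the three lag coefficients plays the role of the persistence of volatility; in the thesis the Gaussian error assumption behind this OLS fit is what the Dirichlet process mixture relaxes.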
35

Out-of-distribution Recognition and Classification of Time-Series Pulsed Radar Signals / Out-of-distribution Igenkänning och Klassificering av Pulserade Radar Signaler

Hedvall, Paul January 2022 (has links)
This thesis investigates out-of-distribution recognition for time-series data of pulsed radar signals. The classifier is a naive Bayesian classifier based on Gaussian mixture models and Dirichlet process mixture models. In the mixture models, we model the distribution of three pulse features in the time series: the radio frequency of the pulse, the duration of the pulse, and the pulse repetition interval, i.e. the time between pulses. We found that simple thresholds on the likelihood can effectively determine whether samples are out-of-distribution or belong to one of the classes trained on. In addition, we present a simple method that can be used for deinterleaving/pulse classification, and show that it can robustly classify 100 interleaved signals while simultaneously determining whether pulses are out-of-distribution.
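The likelihood-threshold idea can be sketched with a deliberately simplified classifier: one Gaussian per feature per class (a stand-in for the Gaussian/Dirichlet process mixture models used in the thesis), with a pulse declared out-of-distribution when its best class log-likelihood falls below a threshold. Class names, feature centres, and the threshold value are all invented for illustration.

```python
# Naive Bayes OOD sketch: one Gaussian per pulse feature per class,
# threshold on the best class log-likelihood. The thesis uses mixture
# models; the single-Gaussian version keeps the sketch stdlib-only.
import math
import random
from statistics import NormalDist, mean, stdev

random.seed(1)
# per class: synthetic (radio frequency MHz, pulse width us, PRI ms) centres
classes = {"radar_a": (9000, 1.0, 1.0), "radar_b": (9400, 2.5, 0.5)}
train = {c: [[random.gauss(m, 0.05 * m) for m in mus] for _ in range(200)]
         for c, mus in classes.items()}

# fit one NormalDist per feature per class
models = {c: [NormalDist(mean(col), stdev(col)) for col in zip(*rows)]
          for c, rows in train.items()}

def log_pdf(x, d):
    # computed by hand to avoid underflow of d.pdf(x) far from the mean
    return (-0.5 * ((x - d.mean) / d.stdev) ** 2
            - math.log(d.stdev * math.sqrt(2 * math.pi)))

def log_lik(pulse, dists):
    return sum(log_pdf(x, d) for x, d in zip(pulse, dists))

def classify(pulse, threshold=-25.0):
    best = max(models, key=lambda c: log_lik(pulse, models[c]))
    return best if log_lik(pulse, models[best]) > threshold else "out-of-distribution"

print(classify([9000, 1.0, 1.0]))    # pulse near radar_a's centre
print(classify([2000, 50.0, 10.0]))  # pulse far from both classes
```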
36

A Bayesian nonparametric approach for the two-sample problem / Uma abordagem bayesiana não paramétrica para o problema de duas amostras

Console, Rafael de Carvalho Ceregatti de 19 November 2018 (has links)
In this work, we discuss the so-called two-sample problem (Pearson and Neyman, 1930) under a nonparametric Bayesian approach. Given X1, ..., Xn and Y1, ..., Ym, two independent i.i.d. samples generated from P1 and P2, respectively, the two-sample problem consists in deciding whether P1 and P2 are equal. Assuming a nonparametric prior, we propose an evidence index for the null hypothesis H0 : P1 = P2 based on the posterior distribution of the distance d(P1, P2) between P1 and P2. This evidence index is easy to compute, has an intuitive interpretation, and can also be justified in the Bayesian decision-theoretic context. Further, in a Monte Carlo simulation study, our method performed well when compared with the well-known Kolmogorov-Smirnov test, the Wilcoxon test, and a recent testing procedure based on the Pólya tree process proposed by Holmes et al. (2015). Finally, we applied our method to a data set of scale measurements for three groups of patients submitted to a questionnaire for Alzheimer's disease diagnosis.
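A crude sketch of the evidence-index idea, with the Bayesian bootstrap standing in for the nonparametric posterior: draw Dirichlet weights over each sample, form weighted empirical CDFs, and record a Kolmogorov-type distance between them. The index reported here (posterior probability that the distance exceeds a tolerance) is an illustrative choice, not the thesis's exact construction; the data and the tolerance are synthetic.

```python
# Bayesian-bootstrap sketch of a posterior over d(P1, P2): Dirichlet
# weights on each sample induce random weighted empirical CDFs, and a
# sup-distance between them is recorded per draw. Illustrative only.
import random

random.seed(2)
x = [random.gauss(0, 1) for _ in range(100)]
y = [random.gauss(1, 1) for _ in range(100)]   # shifted, so H0 is false

def dirichlet(n):
    """Uniform Dirichlet weights via normalized Gamma(1, 1) draws."""
    g = [random.gammavariate(1.0, 1.0) for _ in range(n)]
    s = sum(g)
    return [v / s for v in g]

def ks_distance(x, wx, y, wy):
    """Sup distance between the two weighted empirical CDFs."""
    pts = sorted([(v, w, 0.0) for v, w in zip(x, wx)] +
                 [(v, 0.0, w) for v, w in zip(y, wy)])
    fx = fy = best = 0.0
    for _, a, b in pts:
        fx += a
        fy += b
        best = max(best, abs(fx - fy))
    return best

draws = [ks_distance(x, dirichlet(100), y, dirichlet(100)) for _ in range(200)]
index = sum(d > 0.1 for d in draws) / len(draws)   # evidence against H0
print(round(index, 2))  # close to 1 for these clearly shifted samples
```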
37

Advances in Bayesian Modelling and Computation: Spatio-Temporal Processes, Model Assessment and Adaptive MCMC

Ji, Chunlin January 2009 (has links)
The modelling and analysis of complex stochastic systems with increasingly large data sets, state-spaces and parameters provides major stimulus to research in Bayesian nonparametric methods and Bayesian computation. This dissertation presents advances in both nonparametric modelling and statistical computation, stimulated by challenging problems of analysis in complex spatio-temporal systems and core computational issues in model fitting and model assessment. The first part of the thesis, represented by Chapters 2 to 4, concerns novel nonparametric Bayesian mixture models for spatial point processes, with advances in modelling, computation and applications in biological contexts. Chapter 2 describes and develops models for spatial point processes in which the point outcomes are latent, where indirect observations related to the point outcomes are available, and in which the underlying spatial intensity functions are typically highly heterogeneous. Spatial intensities of inhomogeneous Poisson processes are represented via flexible nonparametric Bayesian mixture models. Computational approaches are presented for this new class of spatial point process mixtures and extended to the context of unobserved point process outcomes. Two examples drawn from a central, motivating context, that of immunofluorescence histology analysis in biological studies generating high-resolution imaging data, demonstrate the modelling approach and computational methodology. Chapters 3 and 4 extend this framework to define a class of flexible Bayesian nonparametric models for inhomogeneous spatio-temporal point processes, adding dynamic models for underlying intensity patterns. Dependent Dirichlet process mixture models are introduced as core components of this new time-varying spatial model. Utilizing such nonparametric mixture models for the spatial process intensity functions allows the introduction of time variation via dynamic, state-space models for parameters characterizing the intensities. Bayesian inference and model fitting are addressed via novel particle filtering ideas and methods. Illustrative simulation examples include studies of extended target tracking, and substantive data analysis in cell-tracking problems from fluorescence microscopy imaging.

The second part of the thesis, consisting of Chapters 5 and 6, concerns advances in computational methods for some core, generic Bayesian inferential problems. Chapter 5 develops a novel approach to estimating upper and lower bounds for marginal likelihoods in Bayesian modelling using refinements of existing variational methods. Traditional variational approaches provide only lower-bound estimation; the new lower/upper bound analysis provides accurate and tight bounds in many problems, and so facilitates more reliable computation for Bayesian model comparison while also providing a way to assess the adequacy of variational densities as approximations to exact, intractable posteriors. The advances also include demonstration of the significant improvements that may be achieved in marginal likelihood estimation by marginalizing some parameters in the model. A distinct contribution to Bayesian computation is covered in Chapter 6. This concerns a generic framework for designing adaptive MCMC algorithms, emphasizing the adaptive Metropolized independence sampler and an effective adaptation strategy using a family of mixture distribution proposals. This work is coupled with the development of a novel adaptive approach to computation in nonparametric modelling with large data sets; here a sequential learning approach is defined that iteratively utilizes smaller data subsets. Under the general framework of importance-sampling-based marginal likelihood computation, the proposed adaptive Monte Carlo method and sequential learning approach facilitate improved accuracy in marginal likelihood computation. The approaches are exemplified in studies of synthetic data examples, and in a real data analysis arising in astro-statistics.

Finally, Chapter 7 summarizes the dissertation and discusses possible extensions of the specific modelling and computational innovations, as well as potential future work. / Dissertation
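The adaptive Metropolized independence sampler mentioned above can be caricatured in a few lines: run an independence Metropolis-Hastings chain and periodically refit the proposal to the accumulated samples. Chapter 6's scheme uses a family of mixture proposals; the single-Gaussian adaptation and the toy bimodal target below are simplifications for the sake of a short sketch.

```python
# Independence Metropolis-Hastings with a crude adaptation rule:
# every 500 iterations the Gaussian proposal is moment-matched to the
# chain so far. A one-component stand-in for the mixture-proposal
# scheme described in the text; target and settings are toys.
import math
import random
from statistics import mean, stdev

random.seed(3)

def log_target(x):
    """Unnormalized log density: equal mixture of N(-2, 1) and N(2, 1)."""
    return math.log(0.5 * math.exp(-0.5 * (x - 2) ** 2)
                    + 0.5 * math.exp(-0.5 * (x + 2) ** 2))

mu, sigma = 0.0, 5.0            # initial, deliberately wide proposal

def log_prop(x):
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma)

chain, x = [], 0.0
for i in range(5000):
    cand = random.gauss(mu, sigma)
    # independence MH ratio: target(cand)/target(x) * prop(x)/prop(cand)
    log_a = log_target(cand) - log_target(x) + log_prop(x) - log_prop(cand)
    if math.log(random.random()) < log_a:
        x = cand
    chain.append(x)
    if i % 500 == 499:          # adapt: moment-match proposal to history
        mu, sigma = mean(chain), max(stdev(chain), 0.5)

print(round(mean(chain[1000:]), 1))  # near 0 for this symmetric target
```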
38

Patterns of somatic genome rearrangement in human cancer

Roberts, Nicola Diane January 2018 (has links)
Cancer development is driven by somatic genome alterations, ranging from single point mutations to larger structural variants (SV) affecting kilobases to megabases of one or more chromosomes. Studies of somatic rearrangement have previously been limited by a paucity of whole genome sequencing data, and a lack of methods for comprehensive structural classification and downstream analysis. The ICGC project on the Pan-Cancer Analysis of Whole Genomes provides an unprecedented opportunity to analyse somatic SVs at base-pair resolution in more than 2500 samples from 30 common cancer types. In this thesis, I build on a recently developed SV classification pipeline to present a census of rearrangement across the pan-cancer cohort, including chromoplexy, replicative two-jumps, and templated insertions connecting as many as eight distant loci. By identifying the precise structure of individual breakpoint junctions and separating out complex clusters, the classification scheme empowers detailed exploration of all simple SV properties and signatures. After illustrating the various SV classes and their frequency across cancer types and samples, Chapter 2 focuses on structural properties including event size and breakpoint homology. Then, in Chapter 3, I consider the SV distribution across the genome, and show patterns of association with various genome properties. Upon examination of rearrangement hotspot loci, I describe tissue-specific fragile site deletion patterns, and a variety of SV profiles around known cancer genes, including recurrent templated insertion cycles affecting TERT and RB1. Turning to co-occurring alteration patterns, Chapter 4 introduces the Hierarchical Dirichlet Process as a non-parametric Bayesian model of mutational signatures. 
After developing methods for consensus signature extraction, I detour to the domain of single nucleotide variants to test the HDP method on real and simulated data, and to illustrate its utility for simultaneous signature discovery and matching. Finally, I return to the PCAWG SV dataset, and extract SV signatures delineated by structural class, size, and replication timing. In Chapter 5, I move on to the complex SV clusters (largely set aside throughout Chapters 2-4), and develop an improved breakpoint clustering method to subdivide the complex rearrangement landscape. I propose a raft of summary metrics for groups of five or more breakpoint junctions, and explore their utility for preliminary classification of chromothripsis and other complex phenomena. This comprehensive study of somatic genome rearrangement provides detailed insight into SV patterns and properties across event classes, genome regions, samples, and cancer types. To extrapolate from the progress made in this thesis, Chapter 6 suggests future strategies for addressing unanswered questions about complex SV mechanisms, annotation of functional consequences, and selection analysis to discover novel drivers of the cancer phenotype.
39

Classification bayésienne non supervisée de données fonctionnelles en présence de covariables / Unsupervised Bayesian clustering of functional data in the presence of covariates

Juery, Damien 18 December 2014 (has links)
One of the major objectives of unsupervised clustering is to find similarity groups in a dataset. With the current development of phenotyping, in which data are collected in continuous time, more and more users need tools capable of clustering curves.

The work presented in this thesis is grounded in Bayesian statistics. Specifically, we are interested in unsupervised Bayesian clustering of functional data. Nonparametric Bayesian priors allow the construction of flexible and robust models. We generalize a Dirichlet process mixture (DPM) clustering model to the functional framework. Unlike current methods, which work in finite dimension either by projecting curves onto bases of functions or by considering only the curves' values at the observation times, the proposed method handles complete curves, in infinite dimension. Reproducing kernel Hilbert space (RKHS) theory allows us to compute, in infinite dimension, probability densities of curves with respect to a Gaussian measure. In the same way, we derive a posterior distribution given the complete curves, not only their discretized values. We propose an algorithm that generalizes the "Gibbs sampling with auxiliary parameters" algorithm of Neal (2000). The numerical implementation requires computing inner products, which are approximated by numerical methods. Some applications to real and simulated data are presented and discussed.

Finally, adding an extra level of hierarchy to our model allows us to take functional covariates into account. We show that several such models can be defined, and the algorithmic method proposed above is extended to each of them. Some applications to simulated data are presented.
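The numerical ingredient mentioned at the end, approximating inner products of curves, can be sketched with a trapezoidal rule. This is a plain L2 inner product on a grid; the RKHS inner product used in the thesis depends on the chosen reproducing kernel, which this sketch does not model.

```python
# Trapezoidal-rule approximation of the L2 inner product of two curves,
# as a stand-in for the numerically approximated inner products the
# abstract refers to (the kernel-dependent RKHS product is not modelled).
import math

def trapezoid_inner(f, g, a, b, n=1000):
    """Approximate the integral of f(t) * g(t) over [a, b]."""
    h = (b - a) / n
    ts = [a + i * h for i in range(n + 1)]
    vals = [f(t) * g(t) for t in ts]
    return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

# sin and cos are orthogonal on [0, 2*pi]; sin has squared L2 norm pi
print(abs(trapezoid_inner(math.sin, math.cos, 0, 2 * math.pi)) < 1e-6)
print(abs(trapezoid_inner(math.sin, math.sin, 0, 2 * math.pi) - math.pi) < 1e-4)
```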
40

Bayesian Nonparametric Modeling and Inference for Multiple Object Tracking

January 2019 (has links)
The problem of multiple object tracking seeks to jointly estimate the time-varying cardinality and trajectory of each object. Numerous challenges are encountered in tracking multiple objects, including a time-varying number of measurements, varying constraints, and changing environmental conditions. In this thesis, the proposed statistical methods integrate physical-based models with Bayesian nonparametric methods to address the main challenges of a tracking problem. In particular, Bayesian nonparametric methods are exploited to efficiently and robustly infer object identity and learn time-dependent cardinality; together with Bayesian inference methods, they are also used to associate measurements with objects and estimate object trajectories. These methods differ fundamentally from current approaches, which are mainly based on random finite set theory.

The first contribution proposes dependent nonparametric models, such as the dependent Dirichlet process and the dependent Pitman-Yor process, to capture the inherent time dependency in the problem at hand. These processes are used as priors for object state distributions to learn dependent information between previous and current time steps. Markov chain Monte Carlo sampling methods exploit the learned information to sample from posterior distributions and update the estimated object parameters. The second contribution proposes a novel, robust, and fast nonparametric approach based on a diffusion process over infinite random trees to infer object cardinality and trajectory. This method follows the hierarchy induced by objects entering and leaving a scene and the time dependency between unknown object parameters. Markov chain Monte Carlo sampling methods integrate the prior distributions over the infinite random trees with time-dependent diffusion processes to update object states.

The third contribution develops hierarchical models to form a prior for statistically dependent measurements in a single-object tracking setup. Dependency among the sensor measurements provides extra information, which is incorporated to achieve optimal tracking performance. The hierarchical Dirichlet process prior provides the flexibility required for inference, and the Bayesian tracker is integrated with it to accurately estimate the object trajectory. The fourth contribution proposes an approach that models both multiple dependent objects and multiple dependent measurements, integrating dependent Dirichlet process modeling of the objects with hierarchical Dirichlet process modeling of the measurements to fully capture the dependency among both. These Bayesian nonparametric models can associate each measurement with the corresponding object and exploit the dependency among them to more accurately infer object trajectories; Markov chain Monte Carlo methods combine the dependent Dirichlet process with the hierarchical Dirichlet process to infer object identity and cardinality. Simulations demonstrate the improvement in multiple object tracking performance compared to approaches based on random finite set theory. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2019
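A minimal sketch of the Dirichlet-process mechanism underlying such identity inference is the Chinese restaurant process: each new measurement joins an existing object with probability proportional to that object's current count, or starts a new object with probability proportional to a concentration parameter alpha. This is purely illustrative and ignores the dependence structure (dependent and hierarchical Dirichlet processes) that the thesis builds on top.

```python
# Chinese restaurant process sketch: sequentially assign "measurements"
# to "objects", with new objects appearing at rate governed by alpha.
# Illustrative prior draw only; no likelihoods or dependence modelled.
import random

random.seed(4)

def crp(n, alpha):
    """Sample a partition of n items from a CRP with concentration alpha."""
    assignments, counts = [], []
    for i in range(n):
        weights = counts + [alpha]          # existing objects + a new one
        r = random.uniform(0, i + alpha)    # total weight is i + alpha
        acc, k = 0.0, 0
        for k, w in enumerate(weights):
            acc += w
            if r < acc:
                break
        if k == len(counts):
            counts.append(1)                # a new object appears
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

labels = crp(200, alpha=2.0)
print(len(set(labels)))  # number of objects grows roughly like alpha * log(n)
```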
