  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The Geometry of Data: Distance on Data Manifolds

Chu, Casey 01 January 2016 (has links)
The increasing importance of data in the modern world has created a need for new mathematical techniques to analyze this data. We explore and develop the use of geometry—specifically differential geometry—as a means for such analysis, in two parts. First, we provide a general framework to discover patterns contained in time series data using a geometric framework of assigning distance, clustering, and then forecasting. Second, we attempt to define a Riemannian metric on the space containing the data in order to introduce a notion of distance intrinsic to the data, providing a novel way to probe the data for insight.
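As a rough, illustrative sketch of the idea of intrinsic distance (an Isomap-style graph approximation, not the method developed in the thesis; the circle data, the choice of k, and the edge weighting are all assumptions for the example): sample points from a curved one-dimensional manifold, connect each point to its nearest neighbours, and use graph shortest paths as estimates of geodesic distance.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(0)

# Sample 200 points from a noisy unit circle: a 1-D manifold embedded in R^2.
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
X = np.column_stack([np.cos(theta), np.sin(theta)])
X += 0.01 * rng.normal(size=X.shape)

# Pairwise Euclidean (ambient) distances.
D = cdist(X, X)

# Build a k-nearest-neighbour graph; np.inf marks "no edge".
k = 10
knn = np.full_like(D, np.inf)
nbrs = np.argsort(D, axis=1)[:, 1 : k + 1]
for i in range(len(X)):
    knn[i, nbrs[i]] = D[i, nbrs[i]]

# Shortest paths through the graph approximate geodesic (intrinsic) distance.
geo = shortest_path(knn, method="D", directed=False)

# Indices of points near angle 0 and near angle pi (roughly antipodal).
i = int(np.argmin(theta))
j = int(np.argmin(np.abs(theta - np.pi)))
```

For the two roughly antipodal points, the graph distance follows the arc (about pi) while the ambient straight-line distance is the chord (about 2); that gap between intrinsic and ambient distance is exactly what a metric adapted to the data is meant to capture.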
2

From here to infinity: sparse finite versus Dirichlet process mixtures in model-based clustering

Frühwirth-Schnatter, Sylvia, Malsiner-Walli, Gertraud January 2019 (has links) (PDF)
In model-based clustering, mixture models are used to group data points into clusters. A useful concept, introduced for Gaussian mixtures by Malsiner Walli et al. (Stat Comput 26:303-324, 2016), is the sparse finite mixture, where the prior on the weight distribution of a mixture with K components is chosen so that, a priori, the number of clusters in the data is random and is allowed to be smaller than K with high probability. The number of clusters is then inferred a posteriori from the data. The present paper makes the following contributions in the context of sparse finite mixture modelling. First, it is illustrated that the concept of the sparse finite mixture is very generic and easily extends to clustering various types of non-Gaussian data, in particular discrete data and continuous multivariate data arising from non-Gaussian clusters. Second, sparse finite mixtures are compared to Dirichlet process mixtures with respect to their ability to identify the number of clusters. For both model classes, a random hyperprior is considered for the parameters determining the weight distribution. By suitably matching these priors, it is shown that the choice of this hyperprior is far more influential on the cluster solution than whether a sparse finite mixture or a Dirichlet process mixture is fitted.
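A minimal simulation of the mechanism behind sparse finite mixtures (a sketch under assumed hyperparameter values, not the paper's full model): a symmetric Dirichlet prior on the K weights with a very small concentration e0 puts most prior mass on weight vectors in which only a few components are occupied, so the number of filled clusters can fall well below K.

```python
import numpy as np

def mean_occupied_clusters(e0, K=10, n=500, reps=200, seed=0):
    """Average number of mixture components actually occupied when n
    observations are allocated to K components with Dirichlet(e0) weights."""
    rng = np.random.default_rng(seed)
    occupied = []
    for _ in range(reps):
        weights = rng.dirichlet(np.full(K, e0))    # prior draw of the weights
        labels = rng.choice(K, size=n, p=weights)  # latent allocations
        occupied.append(len(np.unique(labels)))
    return float(np.mean(occupied))

sparse = mean_occupied_clusters(e0=0.01)  # sparse finite mixture regime
dense = mean_occupied_clusters(e0=4.0)    # ordinary "dense" prior
```

With e0 = 0.01 almost all prior mass concentrates on one or two components, while e0 = 4 fills essentially all K; inferring the number of clusters then amounts to learning, a posteriori, how many components the data actually occupy.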
3

Bayesian shrinkage in mixture-of-experts models: identifying robust determinants of class membership

Zens, Gregor 13 February 2019 (has links) (PDF)
A method for implicit variable selection in mixture-of-experts frameworks is proposed. We introduce a prior structure in which information is taken from a set of independent covariates. Robust predictors of class membership are identified using a normal-gamma prior. The resulting model setup is used in a finite mixture of Bernoulli distributions to find homogeneous clusters of women in Mozambique based on their information sources on HIV. Fully Bayesian inference is carried out via a Gibbs sampler.
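To illustrate what a normal-gamma prior does (the hyperparameter values here are assumptions for the demo, not those of the paper): compared with a Gaussian prior of the same marginal variance, the normal-gamma places a sharper spike at zero, so coefficients of irrelevant covariates are shrunk harder while genuinely large effects can survive in its heavier tails.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
shape, scale = 0.1, 2.0  # small shape => aggressive shrinkage toward zero

# Normal-gamma hierarchy: psi_j ~ Gamma(shape, scale), beta_j | psi_j ~ N(0, psi_j).
psi = rng.gamma(shape, scale, size=n)
beta_ng = rng.normal(0.0, np.sqrt(psi))

# Gaussian reference with matched marginal variance Var(beta) = shape * scale.
beta_gauss = rng.normal(0.0, np.sqrt(shape * scale), size=n)

# Fraction of draws shrunk to (essentially) zero under each prior.
near_zero_ng = np.mean(np.abs(beta_ng) < 0.05)
near_zero_gauss = np.mean(np.abs(beta_gauss) < 0.05)
```

This spike-versus-tails behavior is what makes the prior act as implicit variable selection: covariates without a robust effect on class membership get coefficients concentrated near zero.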
4

On standard conjugate families for natural exponential families with bounded natural parameter space.

Hornik, Kurt, Grün, Bettina 04 1900 (has links) (PDF)
Diaconis and Ylvisaker (1979) give necessary conditions for conjugate priors for distributions from the natural exponential family to be proper and to have the property of linear posterior expectation of the mean parameter of the family. Their conditions for propriety and linear posterior expectation are also sufficient if the natural parameter space is equal to the set of all d-dimensional real numbers. In this paper their results are extended to characterize when conjugate priors are proper if the natural parameter space is bounded. For the special case where the natural exponential family is generated through a spherical probability distribution ν, we show that the proper conjugate priors can be characterized by the behavior of the moment generating function of ν at the boundary of the natural parameter space, or by the second-order tail behavior of ν. In addition, we show that if these families are non-regular, then linear posterior expectation never holds. The results for this special case are also extended to natural exponential families generated through elliptical probability distributions.
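For orientation, the standard setup from Diaconis and Ylvisaker (1979) that the paper extends (textbook material, not the paper's new results): a natural exponential family generated by a measure ν, its standard conjugate prior, and the linear-posterior-expectation property.

```latex
% Natural exponential family (NEF) generated by a measure \nu:
p(x \mid \theta) \;=\; \exp\bigl\{\theta^{\top}x - \kappa(\theta)\bigr\}\,\nu(\mathrm{d}x),
\qquad \theta \in \Theta \subseteq \mathbb{R}^{d}.

% Standard conjugate prior with hyperparameters n_0 > 0 and x_0:
\pi(\theta \mid n_0, x_0) \;\propto\; \exp\bigl\{ n_0\, x_0^{\top}\theta \;-\; n_0\,\kappa(\theta) \bigr\}.

% Linear posterior expectation of the mean parameter \nabla\kappa(\theta):
\mathbb{E}\bigl[\nabla\kappa(\theta) \,\big|\, x_1,\dots,x_N\bigr]
\;=\; \frac{n_0\,x_0 + \sum_{i=1}^{N} x_i}{n_0 + N}.
```

Diaconis and Ylvisaker characterize when the prior is proper and this linearity holds for Θ equal to all of ℝᵈ; the paper's contribution is the corresponding characterization when Θ is bounded.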
