21

Modeling Endogenous Treatment Effects with Heterogeneity: A Bayesian Nonparametric Approach

Hu, Xuequn 01 January 2011 (has links)
This dissertation explores the estimation of endogenous treatment effects in the presence of heterogeneous responses. A Bayesian nonparametric approach is taken to model the heterogeneity in treatment effects. Specifically, I adopt the Dirichlet Process Mixture (DPM) model to capture the heterogeneity and show that the DPM often outperforms the Finite Mixture Model (FMM) by providing more flexible functional forms and thus better model fit. Rather than fixing the number of components in a mixture model, the DPM allows the data and prior knowledge to determine the number of components, thus providing an automatic mechanism for model selection. Two DPM models are presented in this dissertation. The first is based on a two-equation selection model: a Dirichlet Process (DP) prior is specified on some or all of the parameters of the structural equation, and marginal likelihoods are calculated to select the best DPM model. This model is used to study the incentive and selection effects of prescription drug coverage on total drug expenditures among Medicare beneficiaries. The second DPM model uses a three-equation Roy-type framework to model the observed heterogeneity that arises from treatment status, while the unobserved heterogeneity is handled by separate DPM models for the treated and untreated outcomes. This Roy-type DPM model is applied to a data set of 33,081 independent individuals from the Medical Expenditure Panel Survey (MEPS) to estimate the treatment effects of private medical insurance on outpatient expenditures.
Key Words: Treatment Effects, Endogeneity, Heterogeneity, Finite Mixture Model, Dirichlet Process Prior, Dirichlet Process Mixture, Roy-type Modeling, Importance Sampling, Bridge Sampling
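The "automatic mechanism for model selection" the abstract describes comes from the DP's predictive rule. Below is a minimal generic sketch (not the dissertation's estimator) of the Chinese Restaurant Process, the clustering rule a DP prior implies: the number of components is never fixed in advance but emerges from the data, growing roughly like alpha*log(n).

```python
# A minimal sketch of the Chinese Restaurant Process implied by a
# Dirichlet Process prior: no fixed number of components K is chosen;
# the concentration parameter alpha and the data size n jointly
# determine how many clusters appear.
import numpy as np

def crp_assignments(n, alpha, rng=None):
    """Draw cluster labels for n observations from a CRP(alpha)."""
    rng = np.random.default_rng(rng)
    counts = []                      # observations at each existing table
    labels = np.empty(n, dtype=int)
    for i in range(n):
        # Existing table k is chosen w.p. counts[k]/(i + alpha),
        # a new table w.p. alpha/(i + alpha).
        probs = np.array(counts + [alpha], dtype=float) / (i + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)         # open a new component
        else:
            counts[k] += 1
        labels[i] = k
    return labels

labels = crp_assignments(n=500, alpha=1.0, rng=0)
print("components used:", labels.max() + 1)   # typically ~ alpha * log(n)
```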
22

Bayesian Hierarchical Models for Model Choice

Li, Yingbo January 2013 (has links)
With the development of modern data collection approaches, researchers may collect hundreds to millions of variables, yet may not need to use every available explanatory variable in a predictive model. Hence, choosing models that consist of a subset of the variables is often a crucial step. In linear regression, variable selection not only reduces model complexity but also prevents over-fitting. From a Bayesian perspective, the prior specification of model parameters plays an important role in model selection as well as parameter estimation, and often prevents over-fitting through shrinkage and model averaging.

We develop two novel hierarchical priors for selection and model averaging, for Generalized Linear Models (GLMs) and normal linear regression, respectively. They can be considered "spike-and-slab" prior distributions or, more appropriately, "spike-and-bell" distributions. Under these priors we achieve dimension reduction, since their point masses at zero allow predictors to be excluded with positive posterior probability. In addition, these hierarchical priors have heavy tails to provide robustness when MLEs are far from zero.

Zellner's g-prior is widely used in linear models. It preserves the correlation structure among predictors in its prior covariance and yields closed-form marginal likelihoods, which lead to huge computational savings by avoiding sampling in the parameter space. Mixtures of g-priors avoid fixing g in advance and can resolve consistency problems that arise with a fixed g. For GLMs, we show that the mixture of g-priors based on a Compound Confluent Hypergeometric distribution unifies existing choices in the literature and maintains their good properties, such as tractable (approximate) marginal likelihoods and asymptotic consistency for model selection and parameter estimation under specific values of the hyperparameters.

While the g-prior is invariant under rotation within a model, a potential problem is that it inherits the instability of ordinary least squares (OLS) estimates when predictors are highly correlated. We build a hierarchical prior based on scale mixtures of independent normals, which incorporates invariance under rotations within models, like ridge regression and the g-prior, but has heavy tails, like the Zellner-Siow Cauchy prior. We find that this method outperforms the gold-standard mixture of g-priors and other methods when predictors are highly correlated in Gaussian linear models. We also incorporate a nonparametric structure, the Dirichlet Process (DP), as a hyperprior to allow more flexibility and adaptivity to the data. / Dissertation
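The "closed-form marginal likelihoods" the abstract credits to the g-prior are concrete: in the Gaussian linear model with fixed g, the Bayes factor of a candidate model against the null intercept-only model depends only on n, the model size, and its R-squared (the form given by Liang et al., 2008). The sketch below assumes that fixed-g Gaussian case; the function names and example data are mine, and the dissertation's CCH mixture instead integrates over g.

```python
# A minimal sketch, assuming a Gaussian linear model and a fixed g:
# the Bayes factor of model M (p predictors, coefficient of
# determination Rsq) against the null model is available in closed
# form, so model comparison needs no sampling in the parameter space.
import numpy as np

def log_bf_gprior(n, p, Rsq, g):
    """log Bayes factor of M vs. the null model under a fixed-g prior."""
    return 0.5 * (n - p - 1) * np.log1p(g) \
         - 0.5 * (n - 1) * np.log1p(g * (1.0 - Rsq))

def r_squared(X, y):
    """Ordinary R^2 of y regressed on X (with an intercept)."""
    Xc = np.column_stack([np.ones(len(y)), X])
    resid = y - Xc @ np.linalg.lstsq(Xc, y, rcond=None)[0]
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Example: compare two nested candidate models with the
# unit-information choice g = n; mixtures of g-priors (hyper-g, CCH)
# would average this quantity over a prior on g.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
y = X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(100)
print(log_bf_gprior(100, 2, r_squared(X[:, :2], y), g=100))
print(log_bf_gprior(100, 3, r_squared(X, y), g=100))
```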
23

Bayesian Methods for Two-Sample Comparison

Soriano, Jacopo January 2015 (has links)
Two-sample comparison is a fundamental problem in statistics: given two samples of data, the interest lies in understanding whether the two samples were generated by the same distribution. Traditional two-sample comparison methods are not suitable for modern data, where the underlying distributions are multivariate and highly multi-modal and the differences across distributions are often locally concentrated. The focus of this thesis is to develop novel statistical methodology for two-sample comparison that is effective in such scenarios. Tools from the nonparametric Bayesian literature are used to flexibly describe the distributions. Additionally, the two-sample comparison problem is decomposed into a collection of local tests on individual parameters describing the distributions. This strategy not only yields high statistical power, but also allows one to identify the nature of the distributional difference; in many real-world applications, detecting the nature of the difference is as important as detecting the difference itself. Generalizations to multi-sample comparison and to more complex statistical problems, such as multi-way analysis of variance, are also discussed. / Dissertation
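To make the local-testing idea concrete, here is a toy univariate sketch, not the thesis's actual methodology: bin the sample space and, within each bin, compute a Beta-Binomial Bayes factor for whether the split between the two samples is consistent with a common distribution. A difference confined to a few bins is flagged locally, illustrating both the power gain and the ability to say where the distributions differ.

```python
# A toy sketch (binning scheme, prior, and threshold are my choices)
# of decomposing a two-sample test into local tests. H0 in each bin:
# every point there belongs to sample 1 with probability 1/2 (equal
# sample sizes); H1: that probability is Beta(a, b)-distributed.
import numpy as np
from scipy.special import betaln

def local_log_bf(n1, n2, a=1.0, b=1.0):
    """Log Bayes factor of H1 vs. H0 for one bin's counts."""
    log_m1 = betaln(a + n1, b + n2) - betaln(a, b)  # marginal under H1
    log_m0 = (n1 + n2) * np.log(0.5)                # likelihood under H0
    return log_m1 - log_m0

rng = np.random.default_rng(2)
x = rng.normal(0, 1, 2000)
y = np.concatenate([rng.normal(0, 1, 1800),
                    rng.normal(2.5, 0.1, 200)])     # locally concentrated
edges = np.linspace(-4, 4, 17)                      # 16 bins
c1, _ = np.histogram(x, edges)
c2, _ = np.histogram(y, edges)
for lo, hi, n1, n2 in zip(edges, edges[1:], c1, c2):
    bf = local_log_bf(n1, n2)
    if bf > 3:                                      # flag strong evidence
        print(f"[{lo:+.1f}, {hi:+.1f}): n1={n1}, n2={n2}, log BF={bf:.1f}")
```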
24

ADAPTIVE LEARNING OF NEURAL ACTIVITY DURING DEEP BRAIN STIMULATION

January 2015 (has links)
abstract: Parkinson's disease is a neurodegenerative condition diagnosed from clinical history and the motor signs of tremor, rigidity, and bradykinesia; an estimated seven to ten million people worldwide live with the disease. Deep brain stimulation (DBS) provides substantial relief of the motor signs of Parkinson's disease. It is an advanced surgical technique used when drug therapy is no longer sufficient. DBS alleviates the motor symptoms of Parkinson's disease by targeting the subthalamic nucleus with high-frequency electrical stimulation. This work proposes a behavior recognition model for patients with Parkinson's disease. In particular, an adaptive learning method is proposed to classify behavioral tasks of Parkinson's disease patients using local field potential and electrocorticography signals collected during DBS implantation surgeries. Unique patterns exhibited between these signals in a matched feature space lead to a distinction between motor and language behavioral tasks. Unique features are first extracted from the deep brain signals in the time-frequency space using the matching pursuit decomposition algorithm. A Dirichlet process Gaussian mixture model then uses the extracted features to cluster the different behavioral signal patterns, without training or any prior information. The performance of the method is compared with other machine learning methods, and the advantages of each method are discussed under different conditions. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2015
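A hedged sketch of the clustering stage described above, using synthetic 2-D features as a stand-in for the matching-pursuit time-frequency features (the thesis's own implementation is not shown here): scikit-learn's BayesianGaussianMixture with a Dirichlet-process weight prior prunes unneeded components, so the number of behavioral signal patterns need not be fixed in advance.

```python
# A minimal sketch with synthetic features, not the surgical
# recordings: a truncated DP Gaussian mixture discovers the number
# of clusters by driving unneeded component weights toward zero.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)
# Stand-in for matching-pursuit time-frequency features (2-D here).
features = np.vstack([rng.normal([0, 0], 0.3, (200, 2)),
                      rng.normal([3, 1], 0.3, (200, 2)),
                      rng.normal([0, 3], 0.3, (200, 2))])

dpgmm = BayesianGaussianMixture(
    n_components=10,                     # truncation level, not a fixed K
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
).fit(features)

labels = dpgmm.predict(features)
# Components with non-negligible weight are the discovered patterns.
print("effective clusters:", np.sum(dpgmm.weights_ > 0.01))
```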
25

Nonparametric Bayesian Modelling in Machine Learning

Habli, Nada January 2016 (has links)
Nonparametric Bayesian inference has widespread applications in statistics and machine learning. In this thesis, we examine the most popular priors used in nonparametric Bayesian inference. The Dirichlet process and its extensions are priors on an infinite-dimensional space. Originally introduced by Ferguson (1973), the Dirichlet process has a conjugacy property that allows tractable posterior inference, which has lately given rise to significant developments in applications related to machine learning. Another widespread prior used in nonparametric Bayesian inference is the Beta process and its extensions. It was originally introduced by Hjort (1990) for applications in survival analysis; it is a prior on the space of cumulative hazard functions, and it has recently been widely used as a prior on an infinite-dimensional space for latent feature models. Our contribution in this thesis is to collect many diverse groups of nonparametric Bayesian tools and to explore algorithms for sampling from them. We also explore the machinery behind the theory and expose some distinguished features of these procedures. These tools can be used by practitioners in many applications.
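One of the standard sampling algorithms such a survey covers is the stick-breaking (Sethuraman) construction of the Dirichlet process. A minimal sketch follows (the truncation level and base measure are my choices): a draw G ~ DP(alpha, G0) is approximated as G = sum_k w_k * delta(theta_k), with v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k}(1 - v_j), and theta_k ~ G0.

```python
# A minimal sketch of the stick-breaking construction of a DP draw:
# each weight is the piece broken off the remaining stick, so the
# weights sum to (almost) one after enough breaks.
import numpy as np

def stick_breaking_dp(alpha, base_sampler, trunc=500, rng=None):
    """Truncated draw from DP(alpha, G0): returns atoms and weights."""
    rng = np.random.default_rng(rng)
    v = rng.beta(1.0, alpha, size=trunc)
    # prod_{j<k}(1 - v_j) for k = 1..trunc, then scale by v_k.
    w = v * np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
    atoms = base_sampler(trunc, rng)
    return atoms, w

# Example with a standard normal base measure G0.
atoms, w = stick_breaking_dp(
    alpha=2.0, base_sampler=lambda n, r: r.standard_normal(n), rng=4)
print("weight captured by 20 largest atoms:", np.sort(w)[::-1][:20].sum())
```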
26

Bayesian mixture models for frequent itemset mining

He, Ruofei January 2012 (has links)
In binary-transaction data mining, traditional frequent itemset mining often produces results that are not straightforward to interpret. To overcome this problem, probability models are often used to produce more compact and conclusive results, albeit with some loss of accuracy. Bayesian methods have been widely used in the development of probability models in machine learning in recent years, and they have many advantages, including their ability to avoid over-fitting. In this thesis, we develop two Bayesian mixture models, with a Dirichlet distribution prior and a Dirichlet process (DP) prior, to improve on a previous non-Bayesian mixture model developed for transaction-dataset mining. First, we develop a finite Bayesian mixture model by introducing conjugate priors. Then, we extend this model to an infinite Bayesian mixture using a Dirichlet process prior. The Dirichlet process mixture model is a nonparametric Bayesian model that allows automatic determination of an appropriate number of mixture components. We implement inference for both mixture models using two methods: a collapsed Gibbs sampling scheme and a variational approximation algorithm. Experiments on several benchmark problems show that both mixture models achieve better performance than the non-Bayesian mixture model. The variational algorithm is the faster of the two approaches, while the Gibbs sampling method achieves a more accurate result. The Dirichlet process mixture model can automatically grow to an appropriate complexity for a better approximation. However, these approaches also show that mixture models underestimate the probabilities of frequent itemsets; consequently, the models have a higher sensitivity but a lower specificity.
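To show what "the probability of a frequent itemset" means under such a model, here is a minimal sketch with hypothetical fitted parameters, assuming the common mixture-of-independent-Bernoullis form for binary transaction data: the support of itemset I is sum_k pi_k * prod_{i in I} theta[k, i], the quantity the experiments above find to be underestimated.

```python
# A minimal sketch (hypothetical parameters, Bernoulli-mixture form
# assumed): scoring an itemset under a fitted transaction mixture.
import numpy as np

def itemset_support(pi, theta, itemset):
    """P(all items in `itemset` appear) under a Bernoulli mixture."""
    return float(pi @ np.prod(theta[:, itemset], axis=1))

# Hypothetical 3-component model over 5 items.
pi = np.array([0.5, 0.3, 0.2])                  # mixture weights
theta = np.array([[0.9, 0.8, 0.1, 0.1, 0.2],    # per-component item
                  [0.2, 0.1, 0.9, 0.8, 0.1],    # inclusion probabilities
                  [0.5, 0.5, 0.5, 0.5, 0.5]])
print(itemset_support(pi, theta, [0, 1]))       # support of itemset {0, 1}
```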
27

Automatic variance adjusted Bayesian inference with pseudo likelihood under unequal probability sampling: imputation and data synthetic

Almomani, Ayat January 2021 (has links)
No description available.
28

Semi-parametric Survival Analysis via Dirichlet Process Mixtures of the First Hitting Time Model

Race, Jonathan Andrew January 2019 (has links)
No description available.
29

Transformations and Bayesian Estimation of Skewed and Heavy-Tailed Densities

Bean, Andrew Taylor January 2017 (has links)
No description available.
30

A Non-parametric Bayesian Method for Hierarchical Clustering of Longitudinal Data

Ren, Yan 23 October 2012 (has links)
No description available.
