  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Statistical inference in high dimensional linear and AFT models

Chai, Hao 01 July 2014
Variable selection procedures for high-dimensional data have been proposed and studied in a large body of literature in recent years. Most of this research focuses on selection properties and point estimation. In this thesis, our goal is to construct confidence intervals for low-dimensional parameters in the high-dimensional setting. The models we study are partially penalized linear and accelerated failure time (AFT) models. In our setup, the variables are split into two groups: the first consists of a relatively small number of variables of primary interest, and the second consists of a large number of variables that are potentially correlated with the response. We propose an approach that selects variables from the second group and produces confidence intervals for the parameters in the first group. We show the sign consistency of the selection procedure and give a bound on the estimation error. Based on this result, we provide sufficient conditions for the asymptotic normality of the low-dimensional parameter estimates. The high-dimensional selection consistency and the low-dimensional asymptotic normality are established for both linear and AFT models.
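The two-group strategy described in the abstract can be sketched in a few lines of numpy for the linear model: penalize only the second (high-dimensional) group in a least-squares fit, then refit on the selected variables and form Wald-type intervals for the first group. This is a minimal illustration of the general idea only, not the thesis's actual procedure; the function names, the lasso (l1) penalty choice, and the naive refit-based intervals are assumptions made for the sketch.

```python
import numpy as np

def partially_penalized_lasso(X, y, d1, lam, n_iter=500, tol=1e-8):
    """Coordinate descent for (1/2n)||y - X b||^2 + lam * sum_{j >= d1} |b_j|.
    The first d1 columns (the 'interesting' group) are left unpenalized;
    only the remaining columns are subject to selection."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n       # per-column curvature
    r = y.copy()                            # running residual y - X @ beta
    for _ in range(n_iter):
        max_change = 0.0
        for j in range(p):
            old = beta[j]
            z = old * col_sq[j] + (X[:, j] @ r) / n
            if j < d1:
                new = z / col_sq[j]          # unpenalized update
            else:                            # soft-thresholded (lasso) update
                new = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
            if new != old:
                r -= X[:, j] * (new - old)
                beta[j] = new
                max_change = max(max_change, abs(new - old))
        if max_change < tol:
            break
    return beta

def wald_ci_group1(X, y, d1, beta, z_crit=1.96):
    """Illustrative follow-up step: refit OLS on group one plus the selected
    group-two variables, and return naive 95% Wald intervals for group one."""
    selected = d1 + np.flatnonzero(np.abs(beta[d1:]) > 1e-10)
    idx = np.concatenate([np.arange(d1), selected])
    Xs = X[:, idx]
    n = len(y)
    bhat, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ bhat
    s2 = (resid @ resid) / (n - len(idx))    # residual variance estimate
    cov = s2 * np.linalg.inv(Xs.T @ Xs)
    se = np.sqrt(np.diag(cov))[:d1]
    return bhat[:d1] - z_crit * se, bhat[:d1] + z_crit * se
```

In the thesis the error bound and the conditions for asymptotic normality are developed rigorously; the sketch simply shows how "unpenalized group one, penalized group two" changes a single line of the coordinate update.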
2

Concave selection in generalized linear models

Jiang, Dingfeng 01 May 2012
A family of concave penalties, including the smoothly clipped absolute deviation (SCAD) and minimax concave penalty (MCP), has been shown to have attractive properties for variable selection. Computing concave penalized solutions, however, is a difficult task. We propose a majorization minimization by coordinate descent (MMCD) algorithm to compute the solutions of concave penalized generalized linear models (GLMs). In contrast to existing algorithms that use local quadratic or local linear approximations of the penalty, the MMCD algorithm majorizes the negative log-likelihood by a quadratic loss but applies no approximation to the penalty. This strategy avoids computing scaling factors in the iterative steps and hence improves the efficiency of coordinate descent. Under certain regularity conditions, we establish the convergence of the MMCD algorithm. We implement the algorithm for penalized logistic regression with the SCAD and MCP penalties. Simulation studies and a data example demonstrate that the MMCD is sufficiently fast for penalized logistic regression in high-dimensional settings where the number of covariates is much larger than the sample size.

Grouping structure among predictors arises in many regression applications. We first propose an l2 grouped concave penalty to incorporate such group information into a regression model. The l2 grouped concave penalty performs group selection and includes the group Lasso as a special case. An efficient algorithm is developed and its convergence is established under certain regularity conditions. The group selection property of the l2 grouped concave penalty is desirable in some applications, while other applications require selection at both the group and individual levels. Hence, we propose an l1 grouped concave penalty for variable selection at both levels, along with an efficient algorithm for computing it. Simulation studies evaluate the finite-sample performance of the two grouped concave selection methods, and the new grouped penalties are also applied to two motivating datasets. The results from both the simulations and the real data analyses demonstrate the benefits of grouped penalties, making the proposed concave group penalties valuable alternatives to the standard concave penalties.
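The core MMCD idea for logistic regression — majorize the logistic curvature by the uniform bound p(1-p) <= 1/4, then solve each coordinate's concave-penalized problem exactly — can be sketched as follows. This is an illustrative reading with the MCP penalty only, standardized covariates, and hypothetical function names; it is not the thesis's implementation, and gamma * v > 1 (here v = 1/4) is assumed so that each univariate problem is convex.

```python
import numpy as np

def mmcd_logistic_mcp(X, y, lam, gamma=8.0, max_iter=200, tol=1e-6):
    """Sketch of MMCD for MCP-penalized logistic regression.
    Assumes the columns of X are standardized so that x_j'x_j / n = 1,
    and that gamma * v > 1 with v = 1/4 (convex coordinate problems)."""
    n, p = X.shape
    v = 0.25                         # uniform majorizer bound on p(1 - p)
    assert gamma * v > 1.0
    beta = np.zeros(p)
    b0 = 0.0
    eta = np.zeros(n)                # running linear predictor b0 + X @ beta
    for _ in range(max_iter):
        beta_old = beta.copy()
        # unpenalized intercept step under the quadratic majorizer
        r = y - 1.0 / (1.0 + np.exp(-eta))
        d = r.mean() / v
        b0 += d
        eta += d
        for j in range(p):
            r = y - 1.0 / (1.0 + np.exp(-eta))
            z = beta[j] + (X[:, j] @ r) / (n * v)
            if abs(z) > gamma * lam:
                new = z              # flat region of MCP: no shrinkage
            else:                    # exact MCP solution: soft-threshold, rescale
                st = np.sign(z) * max(abs(z) - lam / v, 0.0)
                new = st / (1.0 - 1.0 / (gamma * v))
            if new != beta[j]:
                eta += X[:, j] * (new - beta[j])
                beta[j] = new
        if np.max(np.abs(beta - beta_old)) < tol:
            break
    return b0, beta
```

Because the penalty is minimized exactly rather than approximated, each coordinate step is a single soft-threshold-and-rescale with fixed constants, with no per-iteration scaling factors to recompute — which is where the claimed efficiency gain over local quadratic approximation comes from.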
