Estimating large covariance and precision (inverse covariance) matrices has become increasingly important in high-dimensional statistics because of its wide applications. The estimation problem is challenging not only theoretically, due to the positive-definiteness constraint, but also computationally, because of the curse of dimensionality. Many types of estimators have been proposed, such as thresholding under a sparsity assumption on the target matrix, and banding or tapering the sample covariance matrix. However, these estimators are not always guaranteed to be positive definite, especially in finite samples, and the sparsity assumption is rather restrictive. We propose a novel two-stage adaptive method based on the Cholesky decomposition of a general covariance matrix. By banding the precision matrix in the first stage and feeding these estimates into the second-stage estimation, we develop a computationally efficient and statistically accurate method for estimating high-dimensional precision matrices. We demonstrate the finite-sample performance of the proposed method through simulations from autoregressive, moving average, and long-range dependent processes. We illustrate its wide applicability by analyzing financial data, such as the S&P 500 index and IBM stock returns, as well as electric power consumption of individual households. The theoretical properties of the proposed method are also investigated within a large class of covariance matrices.
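The banded-Cholesky idea mentioned in the abstract can be illustrated with a minimal sketch. This is not the author's two-stage method; it is a standard modified-Cholesky construction in which each variable is regressed on its k nearest predecessors, so the resulting precision estimate is banded and positive definite by construction. The function name and bandwidth parameter k are illustrative choices, not taken from the thesis.

```python
import numpy as np

def banded_cholesky_precision(X, k):
    """Illustrative banded modified-Cholesky precision estimator.

    Regress each column of X on its k immediate predecessors to build
    a unit lower-triangular factor T and innovation variances d, then
    form Omega = T' D^{-1} T, which is positive definite whenever all
    residual variances are positive.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)          # center the data
    T = np.eye(p)                    # unit lower-triangular factor
    d = np.empty(p)                  # innovation variances
    d[0] = Xc[:, 0].var()
    for j in range(1, p):
        lo = max(0, j - k)
        Z = Xc[:, lo:j]              # the k nearest predecessors
        # least-squares regression coefficients of column j on Z
        phi, *_ = np.linalg.lstsq(Z, Xc[:, j], rcond=None)
        T[j, lo:j] = -phi
        d[j] = (Xc[:, j] - Z @ phi).var()
    # Omega = T' D^{-1} T: symmetric and positive definite
    return T.T @ np.diag(1.0 / d) @ T
```

Because the factorization only ever inverts the scalar innovation variances, positive definiteness comes for free, which is the structural advantage Cholesky-based approaches have over directly thresholded sample covariance matrices.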
Identifier | oai:union.ndltd.org:unt.edu/info:ark/67531/metadc1538782 |
Date | 08 1900 |
Creators | Rajendran, Rajanikanth |
Contributors | Song, Kai-Sheng, Liu, Jianguo, Iaia, Joseph A. |
Publisher | University of North Texas |
Source Sets | University of North Texas |
Language | English |
Detected Language | English |
Type | Thesis or Dissertation |
Format | vii, 73 pages, Text |
Rights | Use restricted to UNT Community, Rajendran, Rajanikanth, Copyright, Copyright is held by the author, unless otherwise noted. All rights reserved. |