About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

A Novel Two-Stage Adaptive Method for Estimating Large Covariance and Precision Matrices

Rajendran, Rajanikanth 08 1900 (has links)
Estimating large covariance and precision (inverse covariance) matrices has become increasingly important in high-dimensional statistics because of its wide applications. The estimation problem is challenging not only theoretically, due to the positive-definiteness constraint, but also computationally, because of the curse of dimensionality. Many types of estimators have been proposed, such as thresholding under a sparsity assumption on the target matrix, and banding and tapering of the sample covariance matrix. However, these estimators are not always guaranteed to be positive definite, especially for finite samples, and the sparsity assumption is rather restrictive. We propose a novel two-stage adaptive method based on the Cholesky decomposition of a general covariance matrix. By banding the precision matrix in the first stage and feeding these estimates into the second-stage estimation, we develop a computationally efficient and statistically accurate method for estimating high-dimensional precision matrices. We demonstrate the finite-sample performance of the proposed method by simulations from autoregressive, moving average, and long-range dependent processes. We illustrate its wide applicability by analyzing financial data such as the S&P 500 index and IBM stock returns, and the electric power consumption of individual households. The theoretical properties of the proposed method are also investigated within a large class of covariance matrices.
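The banding idea in the first stage can be sketched in a few lines: regress each variable on at most its k immediate predecessors, collect the coefficients into a unit lower-triangular Cholesky factor T and the residual variances into a diagonal D, and form Omega = T'D^{-1}T, which is positive definite by construction. This is a minimal numpy illustration of banding a modified Cholesky factor, not the authors' exact two-stage procedure; the function name and the plain least-squares fits are ours.

```python
import numpy as np

def banded_cholesky_precision(X, k):
    """Sketch: banded modified-Cholesky estimate of a precision matrix.

    Each variable is regressed on at most its k immediate predecessors;
    the negated coefficients fill a unit lower-triangular factor T and
    the residual variances a diagonal D, giving Omega = T' D^{-1} T.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    T = np.eye(p)
    d = np.empty(p)
    d[0] = Xc[:, 0].var()
    for j in range(1, p):
        lo = max(0, j - k)
        Z = Xc[:, lo:j]                      # banded set of predecessors
        coef, *_ = np.linalg.lstsq(Z, Xc[:, j], rcond=None)
        T[j, lo:j] = -coef
        d[j] = (Xc[:, j] - Z @ coef).var()   # residual variance
    return T.T @ np.diag(1.0 / d) @ T        # positive definite whenever d > 0
```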
2

High frequency and large dimension volatility

Shi, Zhangbo January 2010 (has links)
Three main issues are explored in this thesis: volatility measurement, volatility spillover, and large-dimension covariance matrices. For the first question, volatility measurement, this thesis compares two newly proposed high-frequency volatility measurement models, namely realized volatility and realized range-based volatility, with the aim of using empirical results to assess whether one model is better than the other. The two models are compared across three markets, five forecast models, two data frequencies, and two volatility proxies, making sixty scenarios in total. Seven different loss functions are also used for the evaluation tests, which makes the empirical results highly robust. After some simple adjustments to the original realized range-based volatility, this thesis concludes that the scaled realized range-based volatility model clearly outperforms the realized volatility model. For the second research question, volatility spillover, the realized range-based volatility and realized volatility models are employed to study volatility spillover among the S&P 500 index markets, with the aim of establishing empirically whether spillover exists between the markets. Volatility spillover is divided into two categories: statistically significant spillover and economically significant spillover. Economically significant spillover is defined as spillover that can help forecast the volatility of another market, and is therefore a more demanding criterion than statistical significance. The findings show that, in practice, the existence of volatility spillover depends on the choice of model, the choice of volatility proxy, and the values of the parameters used. The third and final research question involves the comparison of various large-dimension multivariate models. The main contribution made by this study is threefold. First, a number of well-performing multivariate volatility models are introduced by adjusting some commonly used models. Second, different models and various parameter choices are tested on 26 currency pairs. Third, the evaluation criteria adopted have far more practical implications than those used in most other papers in this area.
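The two measures being compared have simple closed forms. Below is a small numpy sketch assuming arrays of intraday prices and intraday high/low ranges; the 1/(4 ln 2) constant is the standard Parkinson scaling for the range estimator, though the "simple adjustments" the thesis applies are not spelled out in the abstract.

```python
import numpy as np

def realized_variance(prices):
    """Realized variance: sum of squared intraday log returns."""
    r = np.diff(np.log(prices))
    return np.sum(r ** 2)

def realized_range(highs, lows):
    """Realized range-based variance: scaled sum of squared log
    high/low ranges.  The 1/(4 ln 2) factor makes each term unbiased
    for a driftless Brownian motion."""
    rng = np.log(np.asarray(highs) / np.asarray(lows))
    return np.sum(rng ** 2) / (4.0 * np.log(2.0))
```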
3

Comparative Analysis of Ledoit's Covariance Matrix and Comparative Adjustment Liability Model (CALM) Within the Markowitz Framework

McArthur, Gregory D 09 May 2014 (has links)
Estimation of the covariance matrix of asset returns is a key component of portfolio optimization. Inherent in any estimation technique is the capacity to inaccurately reflect current market conditions. Typical of Markowitz portfolio optimization theory, which we use as the basis for our analysis, is to assume that asset returns are stationary. This assumption inevitably causes an optimized portfolio to fail during a market crash, since estimates of covariance matrices of asset returns no longer reflect current conditions. We use the market crash of 2008 to exemplify this fact. A current industry-standard benchmark for estimation is the Ledoit covariance matrix, which attempts to adjust a portfolio's aggressiveness during varying market conditions. We test this technique against the CALM (Covariance Adjustment for Liability Management Method), which incorporates forward-looking signals for market volatility to reduce portfolio variance, and assess under certain criteria how well each model performs during the recent market crash. We show that CALM should be preferred to the sample covariance matrix and the Ledoit covariance matrix under some reasonable weight constraints.
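For readers who want to try a Ledoit-type estimate, scikit-learn's LedoitWolf implements the shrinkage-toward-scaled-identity recipe, Sigma-hat = (1-d)S + d*mu*I with a data-driven intensity d. A minimal sketch follows; the returns matrix here is synthetic, and the thesis may use a different Ledoit shrinkage target (e.g. constant correlation), so treat this only as the generic idea.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

# Hypothetical returns matrix: 250 trading days x 50 assets.
rng = np.random.default_rng(0)
R = rng.standard_normal((250, 50)) * 0.01

lw = LedoitWolf().fit(R)
sigma = lw.covariance_              # shrunk estimate (1-d)*S + d*mu*I
print("shrinkage intensity:", lw.shrinkage_)
```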
4

WALD TYPE TESTS WITH THE WRONG DISPERSION MATRIX

Rajapaksha, Kosman Watte Gedara Dimuthu Hansana 01 September 2021 (has links)
A Wald-type test with the wrong dispersion matrix is used when the dispersion matrix is not a consistent estimator of the asymptotic covariance matrix of the test statistic. One class of such tests occurs when there are k groups and it is assumed that the population covariance matrices of the k groups are equal, but the common-covariance-matrix assumption does not hold. The pooled t test, the one-way ANOVA F test, and the one-way MANOVA F test are examples of this class. Two bootstrap confidence regions are modified to obtain large-sample Wald-type tests with the wrong dispersion matrix.
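A concrete two-group instance of this setup: a Wald statistic built from the pooled covariance (wrong when the two groups have unequal covariances), with a bootstrap cutoff in place of the chi-square limit. This numpy sketch is illustrative and is not the thesis's two specific modified confidence regions; centering each group before resampling imposes the null while preserving each group's own covariance.

```python
import numpy as np

def wald_stat(x, y):
    """Wald statistic for H0: mu_x = mu_y using the *pooled* covariance,
    a dispersion matrix that is wrong when Cov(x) != Cov(y)."""
    nx, ny = len(x), len(y)
    d = x.mean(axis=0) - y.mean(axis=0)
    Sp = ((nx - 1) * np.cov(x, rowvar=False) +
          (ny - 1) * np.cov(y, rowvar=False)) / (nx + ny - 2)
    V = Sp * (1.0 / nx + 1.0 / ny)
    return d @ np.linalg.solve(V, d)

def bootstrap_cutoff(x, y, B=2000, alpha=0.05, seed=0):
    """Resample each *centered* group to mimic the null distribution of
    the statistic, then use the (1-alpha) quantile as the critical value."""
    rng = np.random.default_rng(seed)
    xc, yc = x - x.mean(axis=0), y - y.mean(axis=0)
    stats = np.empty(B)
    for b in range(B):
        xb = xc[rng.integers(0, len(x), len(x))]
        yb = yc[rng.integers(0, len(y), len(y))]
        stats[b] = wald_stat(xb, yb)
    return np.quantile(stats, 1 - alpha)
```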
5

Contributions to Large Covariance and Inverse Covariance Matrices Estimation

Kang, Xiaoning 25 August 2016 (has links)
Estimation of the covariance matrix and its inverse is of great importance in multivariate statistics, with broad applications such as dimension reduction, portfolio optimization, linear discriminant analysis, and gene expression analysis. However, accurate estimation of covariance or inverse covariance matrices is challenging due to the positive-definiteness constraint and the large number of parameters, especially in high-dimensional cases. In this thesis, I develop several approaches for estimating large covariance and inverse covariance matrices with different applications. In Chapter 2, I consider the estimation of time-varying covariance matrices in the analysis of multivariate financial data. An order-invariant Cholesky-log-GARCH model is developed for estimating the time-varying covariance matrices based on the modified Cholesky decomposition. This decomposition provides a statistically interpretable parametrization of the covariance matrix. The key idea of the proposed model is to consider an ensemble estimate of the covariance matrix based on multiple permutations of the variables. Chapter 3 investigates the sparse estimation of the inverse covariance matrix for high-dimensional data. This problem has attracted wide attention, since zero entries in the inverse covariance matrix imply conditional independence among variables. I propose an order-invariant sparse estimator based on the modified Cholesky decomposition. The proposed estimator is obtained by assembling a set of estimates from multiple permutations of the variables. Hard thresholding is imposed on the ensemble Cholesky factor to encourage sparsity in the estimated inverse covariance matrix. The proposed method is able to recover the correct sparse structure of the inverse covariance matrix. Chapter 4 focuses on the sparse estimation of a large covariance matrix. The traditional estimation approach is known to perform poorly in high dimensions. I propose a positive-definite estimator for the covariance matrix using the modified Cholesky decomposition. Such a decomposition provides the flexibility to obtain a set of covariance matrix estimates. The proposed method considers an ensemble estimator as the "center" of these available estimates with respect to the Frobenius norm. The proposed estimator is not only guaranteed to be positive definite, but also able to recover the underlying sparse structure of the true matrix. / Ph. D.
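The order-invariance device of Chapter 3 can be sketched as follows: estimate the precision matrix by the modified Cholesky decomposition under many random variable orderings, average the estimates after mapping each back to the original ordering, and hard-threshold small off-diagonal entries. The plain per-ordering estimator below (sequential least-squares regressions, suitable when n > p) is a stand-in for the thesis's regularized version; names and defaults are ours.

```python
import numpy as np

def cholesky_precision(X):
    """Unregularized modified-Cholesky precision estimate,
    Omega = T' D^{-1} T, via sequential regressions (needs n > p)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    T = np.eye(p)
    d = np.empty(p)
    d[0] = Xc[:, 0].var()
    for j in range(1, p):
        coef, *_ = np.linalg.lstsq(Xc[:, :j], Xc[:, j], rcond=None)
        T[j, :j] = -coef
        d[j] = (Xc[:, j] - Xc[:, :j] @ coef).var()
    return T.T @ np.diag(1.0 / d) @ T

def ensemble_sparse_precision(X, n_perm=30, thresh=0.1, seed=0):
    """Average modified-Cholesky estimates over random variable
    orderings, then hard-threshold small off-diagonal entries."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    acc = np.zeros((p, p))
    for _ in range(n_perm):
        perm = rng.permutation(p)
        Om = cholesky_precision(X[:, perm])
        inv = np.argsort(perm)
        acc += Om[np.ix_(inv, inv)]          # undo the permutation
    Omega = acc / n_perm
    off = ~np.eye(p, dtype=bool)
    Omega[off & (np.abs(Omega) < thresh)] = 0.0   # hard thresholding
    return Omega
```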
6

Econometric computing with HC and HAC covariance matrix estimators

Zeileis, Achim January 2004 (has links) (PDF)
Data described by econometric models typically contains autocorrelation and/or heteroskedasticity of unknown form, and for inference in such models it is essential to use covariance matrix estimators that can consistently estimate the covariance of the model parameters. Hence, suitable heteroskedasticity-consistent (HC) and heteroskedasticity and autocorrelation consistent (HAC) estimators have been receiving attention in the econometric literature over the last 20 years. To apply these estimators in practice, an implementation is needed that preferably translates the conceptual properties of the underlying theoretical frameworks into computational tools. In this paper, such an implementation in the package sandwich for the R system for statistical computing is described. It is shown how the suggested functions provide reusable components that build on readily existing functionality, and how they can be integrated easily into new inferential procedures or applications. The toolbox contained in sandwich is extremely flexible and comprehensive, including specific functions for the most important HC and HAC estimators from the econometric literature. Several real-world data sets are used to illustrate how the functionality can be integrated into applications. / Series: Research Report Series / Department of Statistics and Mathematics
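The paper's reference implementation is the R package sandwich; as a language-neutral illustration of the HC formulas it computes, here is a numpy sketch of the classic White-type estimator, (X'X)^{-1} X' diag(w_i e_i^2) X (X'X)^{-1}, with the standard HC0-HC3 weight conventions. This mirrors the formulas only, not the package's interface.

```python
import numpy as np

def hc_ols(X, y, hc="HC1"):
    """OLS fit plus a heteroskedasticity-consistent covariance:
    bread @ meat @ bread with meat = X' diag(w_i * e_i^2) X."""
    n, k = X.shape
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    e = y - X @ beta
    if hc == "HC0":
        w = np.ones(n)
    elif hc == "HC1":
        w = np.full(n, n / (n - k))              # degrees-of-freedom correction
    else:
        # Leverages h_i = x_i' (X'X)^{-1} x_i for HC2 / HC3.
        h = np.einsum('ij,jk,ik->i', X, np.linalg.inv(X.T @ X), X)
        w = 1.0 / (1.0 - h) if hc == "HC2" else 1.0 / (1.0 - h) ** 2
    bread = np.linalg.inv(X.T @ X)
    meat = (X * (w * e ** 2)[:, None]).T @ X
    return beta, bread @ meat @ bread
```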
7

Object-oriented Computation of Sandwich Estimators

Zeileis, Achim January 2006 (has links) (PDF)
Sandwich covariance matrix estimators are a popular tool in applied regression modeling for performing inference that is robust to certain types of model misspecification. Suitable implementations are available in the R system for statistical computing for certain model fitting functions only (in particular lm()), but not for other standard regression functions, such as glm(), nls(), or survreg(). Therefore, conceptual tools and their translation to computational tools in the package sandwich are discussed, enabling the computation of sandwich estimators in general parametric models. Object orientation can be achieved by providing a few extractor functions (most importantly for the empirical estimating functions) from which various types of sandwich estimators can be computed. / Series: Research Report Series / Department of Statistics and Mathematics
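The general-parametric-model recipe the abstract describes boils down to: extract the per-observation estimating functions (scores), take meat = average outer product of the scores, bread = inverse of minus the average Hessian, and form bread @ meat @ bread / n. A numpy sketch for a Poisson regression follows; the Poisson example and Newton fit are our illustration of the pattern, not the paper's R code.

```python
import numpy as np

def poisson_sandwich(X, y, iters=25):
    """Sandwich variance from estimating functions for a Poisson GLM:
    fit the MLE by Newton's method, then combine
    bread (inverse of minus the average Hessian) and
    meat (average outer product of the per-observation scores)."""
    n, k = X.shape
    beta = np.zeros(k)
    for _ in range(iters):                   # Newton-Raphson for the MLE
        mu = np.exp(X @ beta)
        score = X.T @ (y - mu)
        hess = -(X * mu[:, None]).T @ X
        beta -= np.linalg.solve(hess, score)
    mu = np.exp(X @ beta)
    psi = X * (y - mu)[:, None]              # n x k estimating functions
    bread = np.linalg.inv((X * mu[:, None]).T @ X / n)
    meat = psi.T @ psi / n
    return beta, bread @ meat @ bread / n
```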
8

Gráficos de controle para monitoramento de processos multivariados [Control charts for monitoring multivariate processes]

Machado, Marcela Aparecida Guerreiro [UNESP] 24 April 2009 (has links) (PDF)
This thesis offers some contributions to the field of monitoring multivariate processes. Regarding the monitoring of the mean vector, we investigated the performance of the T² charts based on principal components and also the performance of mean charts used simultaneously, where each chart is assigned to control one quality characteristic. Regarding the monitoring of the covariance matrix, we propose a new statistic based on the sample variances (the VMAX statistic). The VMAX chart is more efficient than the generalized variance |S| chart, which is the usual chart for monitoring the covariance matrix. An additional advantage of this new statistic is that the user is already well familiar with the calculation of sample variances; the same cannot be said of the generalized variance |S| statistic. We also studied the performance of the VMAX chart with double sampling, with adaptive chart parameters, with the EWMA procedure, and with special run rules. We also investigated the performance of control charts designed for monitoring the mean vector and the covariance matrix simultaneously. / Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
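The VMAX statistic charts, for each rational subgroup, the maximum of the standardized sample variances, max_i S_i²/σ₀ᵢ². A minimal scipy sketch is below; the control limit shown treats the p variances as independent chi-square variables, which is only an approximation when the quality characteristics are correlated (the thesis derives limits that account for this), so the limit here is an assumption for illustration.

```python
import numpy as np
from scipy.stats import chi2

def vmax_chart(subgroups, sigma0, alpha=0.005):
    """Chart VMAX = max_i S_i^2 / sigma0_i^2 for each subgroup
    (an n x p array).  Under independence, (n-1)S^2/sigma0^2 is
    chi-square with n-1 df, giving the approximate limit below."""
    n, p = subgroups[0].shape
    cl = chi2.ppf((1 - alpha) ** (1.0 / p), df=n - 1) / (n - 1)
    stats = [np.max(s.var(axis=0, ddof=1) / sigma0 ** 2) for s in subgroups]
    return np.array(stats), cl               # points above cl signal a shift
```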
9

Practical usage of optimal portfolio diversification using maximum entropy principle

Chopyk, Ostap January 2015 (has links)
"Practical usage of optimal portfolio diversification using maximum entropy principle" by Ostap Chopyk Abstract This thesis enhances the investigation of the principle of maximum entropy, implied in the portfolio diversification problem, when portfolio consists of stocks. Entropy, as a measure of diversity, is used as the objective function in the optimization problem with given side constraints. The principle of maximum entropy, by the nature itself, suggests the solution for two problems; it reduces the estimation error of inputs, as it has a shrinkage interpretation and it leads to more diversified portfolio. Furthermore, improvement to the portfolio optimization is made by using design-free estimation of variance-covariance matrices of stock returns. Design-free estimation is proven to provide superior estimate of large variance-covariance matrices and for data with heavy-tailed densities. To asses and compare the performance of the portfolios, their out-of-sample Sharpe ratios are used. In nominal terms, the out-of- sample Sharpe ratios are almost always lower for the portfolios, created using maximum entropy principle, than for 'classical' Markowitz's efficient portfolio. However, this out-of-sample Sharpe ratios are not statistically different, as it was tested by constructing studentized time-series...
10

Comparative Analysis of Ledoit's Covariance Matrix and Comparative Adjustment Liability Management (CALM) Model Within the Markowitz Framework

Zhang, Yafei 08 May 2014 (has links)
Estimation of the covariance matrix of asset returns is a key component of portfolio optimization. Inherent in any estimation technique is the capacity to inaccurately reflect current market conditions. Typical of Markowitz portfolio optimization theory, which we use as the basis for our analysis, is to assume that asset returns are stationary. This assumption inevitably causes an optimized portfolio to fail during a market crash, since estimates of covariance matrices of asset returns no longer reflect current conditions. We use the market crash of 2008 to exemplify this fact. A current industry-standard benchmark for estimation is the Ledoit covariance matrix, which attempts to adjust a portfolio's aggressiveness during varying market conditions. We test this technique against the CALM (Covariance Adjustment for Liability Management Method), which incorporates forward-looking signals for market volatility to reduce portfolio variance, and assess under certain criteria how well each model performs during the recent market crash. We show that CALM should be preferred to the sample covariance matrix and the Ledoit covariance matrix under some reasonable weight constraints.
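Both this thesis and entry 3 compare covariance estimators by plugging them into a Markowitz-style portfolio. The unconstrained global minimum-variance weights have the closed form w = Σ⁻¹1 / (1'Σ⁻¹1), sketched below; substituting the sample, Ledoit, or CALM-adjusted covariance gives the competing portfolios. The papers' weight constraints would require a quadratic-programming solver instead of this closed form.

```python
import numpy as np

def min_variance_weights(sigma):
    """Global minimum-variance portfolio for a covariance estimate:
    w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)  (unconstrained closed form)."""
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma, ones)
    return w / w.sum()
```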
