1 |
Comparative Analysis of Ledoit's Covariance Matrix and Comparative Adjustment Liability Model (CALM) Within the Markowitz Framework
McArthur, Gregory D, 09 May 2014 (has links)
Estimation of the covariance matrix of asset returns is a key component of portfolio optimization. Inherent in any estimation technique is the capacity to inaccurately reflect current market conditions. Markowitz portfolio optimization theory, which we use as the basis for our analysis, typically assumes that asset returns are stationary. This assumption inevitably causes an optimized portfolio to fail during a market crash, since estimates of the covariance matrix of asset returns no longer reflect current conditions. We use the market crash of 2008 to exemplify this fact. A current industry-standard benchmark for estimation is the Ledoit covariance matrix, which attempts to adjust a portfolio's aggressiveness under varying market conditions. We test this technique against CALM (the Covariance Adjustment for Liability Management Method), which incorporates forward-looking signals for market volatility to reduce portfolio variance, and assess under certain criteria how well each model performs during the recent market crash. We show that CALM should be preferred over the sample covariance matrix and the Ledoit covariance matrix under some reasonable weight constraints.
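For intuition, the Ledoit-style shrinkage step can be sketched as below: shrink the sample covariance toward a structured target and feed the result into a Markowitz minimum-variance allocation. This is a minimal sketch of the Ledoit-Wolf (2004) estimator with a scaled-identity target; the Ledoit covariance matrix used in the thesis may employ a different target, and all function names and data here are illustrative.

```python
import numpy as np

def ledoit_wolf_shrinkage(returns):
    """Sketch of Ledoit-Wolf shrinkage toward a scaled-identity target.
    (Illustrative only; the thesis's Ledoit estimator may use another target.)"""
    X = returns - returns.mean(axis=0)
    n, p = X.shape
    S = X.T @ X / n                          # sample covariance
    mu = np.trace(S) / p
    F = mu * np.eye(p)                       # shrinkage target
    d2 = np.sum((S - F) ** 2)                # squared distance sample -> target
    b2 = sum(np.sum((np.outer(x, x) - S) ** 2) for x in X) / n**2
    b2 = min(b2, d2)                         # estimated estimation error
    rho = b2 / d2                            # shrinkage intensity in [0, 1]
    return rho * F + (1 - rho) * S

def min_variance_weights(Sigma):
    """Unconstrained Markowitz minimum-variance weights, w proportional to Sigma^{-1} 1."""
    w = np.linalg.solve(Sigma, np.ones(Sigma.shape[0]))
    return w / w.sum()

rng = np.random.default_rng(0)
returns = rng.normal(size=(60, 10))          # toy data: 60 periods, 10 assets
Sigma_lw = ledoit_wolf_shrinkage(returns)
w = min_variance_weights(Sigma_lw)
```

Shrinkage keeps the estimate invertible and well conditioned even when the number of observations is small relative to the number of assets, which is the regime where the plain sample covariance fails.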
2 |
Comparative Analysis of Ledoit's Covariance Matrix and Comparative Adjustment Liability Management (CALM) Model Within the Markowitz Framework
Zhang, Yafei, 08 May 2014 (has links)
Estimation of the covariance matrix of asset returns is a key component of portfolio optimization. Inherent in any estimation technique is the capacity to inaccurately reflect current market conditions. Markowitz portfolio optimization theory, which we use as the basis for our analysis, typically assumes that asset returns are stationary. This assumption inevitably causes an optimized portfolio to fail during a market crash, since estimates of the covariance matrix of asset returns no longer reflect current conditions. We use the market crash of 2008 to exemplify this fact. A current industry-standard benchmark for estimation is the Ledoit covariance matrix, which attempts to adjust a portfolio's aggressiveness under varying market conditions. We test this technique against CALM (the Covariance Adjustment for Liability Management Method), which incorporates forward-looking signals for market volatility to reduce portfolio variance, and assess under certain criteria how well each model performs during the recent market crash. We show that CALM should be preferred over the sample covariance matrix and the Ledoit covariance matrix under some reasonable weight constraints.
3 |
The effects of high dimensional covariance matrix estimation on asset pricing and generalized least squares
Kim, Soo-Hyun, 23 June 2010 (has links)
High dimensional covariance matrix estimation is considered in the context of empirical asset pricing. To see the effects of covariance matrix estimation on asset pricing, we explore parameter estimation, model specification tests, and misspecification problems. Alongside existing techniques, a heuristic diagonal variance matrix, not yet tested in applications, is simulated to evaluate performance on these problems. We found that the modified Stein-type estimator outperforms all other methods in all three cases. In addition, the heuristic diagonal variance matrix turned out to work far better than existing methods in the Hansen-Jagannathan distance test. The high dimensional covariance matrix as a transformation matrix in generalized least squares (GLS) is also studied. Since the feasible generalized least squares estimator requires ex ante knowledge of the covariance structure, it is not applicable in general cases. We propose a fully banding strategy as a new estimation technique. First we examine the sparsity of the covariance matrix and the performance of GLS. We then discuss the diagonals of the covariance matrix and the column sums of the inverse covariance matrix to see their effects on GLS estimation. In addition, factor analysis is employed to model the covariance matrix, and it turns out that communality truly matters for the efficiency of GLS estimation.
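The banding idea behind a fully banding strategy can be sketched as follows: zero out covariance entries far from the diagonal and use the banded matrix as the GLS transformation. This is a toy sketch under an assumed AR(1) error structure, where correlations decay off the diagonal so banding preserves the essential structure; function names and parameters are illustrative, not the thesis's implementation.

```python
import numpy as np

def band(S, k):
    """Banding estimator: zero out entries more than k off the diagonal."""
    i, j = np.indices(S.shape)
    return np.where(np.abs(i - j) <= k, S, 0.0)

def gls(X, y, Omega):
    """GLS estimator: beta = (X' Omega^{-1} X)^{-1} X' Omega^{-1} y."""
    Oi_X = np.linalg.solve(Omega, X)
    Oi_y = np.linalg.solve(Omega, y)
    return np.linalg.solve(X.T @ Oi_X, X.T @ Oi_y)

# Toy setup: AR(1)-correlated errors, banded before use in GLS.
rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))
beta_true = np.array([1.0, -2.0, 0.5])
Omega_true = 0.5 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
y = X @ beta_true + np.linalg.cholesky(Omega_true) @ rng.normal(size=n)
beta_hat = gls(X, y, band(Omega_true, 5))
```

The banded matrix is sparse, so in large problems the linear solves can exploit banded factorizations rather than dense inversion.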
4 |
Explicit Estimators for a Banded Covariance Matrix in a Multivariate Normal Distribution
Karlsson, Emil, January 2014 (has links)
The problem of estimating the mean and covariance of a multivariate normally distributed random vector has been studied in many forms. This thesis focuses on the estimators proposed in [15] for a banded covariance structure with m-dependence. It presents the previous results for the estimator and rewrites the estimator for m = 1, making it easier to analyze. This leads to an adjustment, and a proposition for an unbiased estimator is presented, followed by a new and simpler proof of consistency. The theory is then generalized to a general linear model, where the corresponding theorems and propositions establish unbiasedness and consistency. In the last chapter, simulations with the previous and new estimators verify that the theoretical results indeed make an impact.
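For intuition about the structure involved, a minimal sketch of estimating a banded covariance with m = 1 (tridiagonal, from 1-dependence): compute the unbiased sample covariance and keep only the diagonals the dependence structure allows. The thesis's explicit estimators are refinements of this idea with adjusted weights; this sketch only illustrates the banded pattern.

```python
import numpy as np

def tridiagonal_covariance(X):
    """Sketch: sample covariance truncated to the banded (m = 1) pattern
    implied by 1-dependence. Not the thesis's adjusted explicit estimator."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / (n - 1)          # unbiased sample covariance
    i, j = np.indices(S.shape)
    return np.where(np.abs(i - j) <= 1, S, 0.0)

# Data from a true 1-dependent (tridiagonal) covariance.
rng = np.random.default_rng(4)
p = 6
Sigma = np.eye(p) + 0.3 * (np.eye(p, k=1) + np.eye(p, k=-1))
X = rng.multivariate_normal(np.zeros(p), Sigma, size=5000)
S_hat = tridiagonal_covariance(X)
```

When the true covariance really is 1-dependent, discarding the out-of-band entries removes pure noise, which is why such estimators can beat the full sample covariance.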
5 |
Methods for Covariance Matrix Estimation: A Comparison of Shrinkage Estimators in Financial Applications
Spector, Erik, January 2024 (has links)
This paper explores different covariance matrix estimators as applied to geometric Brownian motion, with particular interest in shrinkage estimation methods. In collaboration with the Söderberg & Partners risk management team, the goal is to find an estimator that performs well in low-data scenarios and is robust against erroneous model assumptions, particularly the Gaussian assumption on the stock price distribution. Estimators are compared by two criteria: the Frobenius norm distance between the estimate and the true covariance matrix, and the condition number of the estimate. Considering four estimators, namely the sample covariance matrix, Ledoit-Wolf, the Tyler M-estimator, and a novel Tyler-Ledoit-Wolf (TLW) estimator, this paper concludes that the TLW estimator performs best on both criteria.
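One way such a Tyler-plus-shrinkage combination can be sketched (the thesis's exact TLW construction may differ) is a regularized Tyler fixed-point iteration: each outer product is weighted by its inverse Mahalanobis distance, giving robustness to heavy tails, while a shrinkage term toward the identity keeps the estimate well conditioned. Names and parameters below are illustrative.

```python
import numpy as np

def shrunk_tyler(X, rho=0.1, n_iter=100):
    """Regularized Tyler M-estimator via fixed-point iteration (a sketch;
    assumes centered data; the thesis's TLW estimator may differ).
    The iterate is renormalized to trace p at every step."""
    n, p = X.shape
    Sigma = np.eye(p)
    for _ in range(n_iter):
        # Mahalanobis distances x_i' Sigma^{-1} x_i for every sample
        d = np.einsum('ij,jk,ik->i', X, np.linalg.inv(Sigma), X)
        M = (p / n) * (X.T / d) @ X          # reweighted outer products
        Sigma = (1 - rho) * M + rho * np.eye(p)
        Sigma *= p / np.trace(Sigma)
    return Sigma

# Heavy-tailed data, where the plain sample covariance is noisy.
rng = np.random.default_rng(5)
X = rng.standard_t(df=3, size=(200, 10))
Sigma_t = shrunk_tyler(X)
```

Because each sample is normalized by its Mahalanobis distance, a single extreme observation cannot dominate the estimate, which is the property that matters when the Gaussian assumption fails.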
6 |
Neural Networks for improved signal source enumeration and localization with unsteered antenna arrays
Rogers, John T, II, 08 December 2023 (has links) (PDF)
Direction of arrival (DOA) estimation using unsteered antenna arrays, unlike mechanically scanned or phased arrays, requires complex algorithms that perform poorly with small-aperture arrays or without a large number of observations, or snapshots. In general, these algorithms compute a sample covariance matrix to obtain the direction of arrival, and some require a prior estimate of the number of signal sources. Herein, artificial neural network architectures are proposed which demonstrate improved estimation of the number of signal sources, the true signal covariance matrix, and the direction of arrival. The proposed source enumeration network demonstrates robust performance in the case of coherent signals, where conventional methods fail. For covariance matrix estimation, four different network architectures are assessed, and the best-performing architecture achieves a 20-fold performance improvement over the sample covariance matrix; it can also match the sample covariance matrix's performance with one eighth the number of snapshots. For direction of arrival estimation, preliminary results are provided comparing six architectures, all of which demonstrate high accuracy and illustrate the benefits of progressively training artificial neural networks on a sequence of sub-problems and extending the network to encapsulate the entire process.
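For context, the conventional eigenvalue-based source enumeration such networks are measured against can be sketched with the MDL criterion (Wax and Kailath) applied to the sample covariance of array snapshots. The uniform linear array model and all parameter values below are illustrative, not taken from the dissertation; note this classical approach is exactly the one that degrades for coherent sources.

```python
import numpy as np

def mdl_num_sources(R, n_snapshots):
    """Estimate the number of sources from the eigenvalues of a sample
    covariance R using the MDL criterion (complex-data penalty)."""
    p = R.shape[0]
    ev = np.sort(np.linalg.eigvalsh(R))[::-1].clip(min=1e-12)
    scores = []
    for k in range(p):
        tail = ev[k:]
        m = p - k
        # log of (geometric mean / arithmetic mean) of assumed noise eigenvalues
        log_ratio = np.sum(np.log(tail)) - m * np.log(tail.mean())
        penalty = 0.5 * k * (2 * p - k) * np.log(n_snapshots)
        scores.append(-n_snapshots * log_ratio + penalty)
    return int(np.argmin(scores))

# Toy scenario: 8-sensor ULA, 2 uncorrelated sources, 200 snapshots, high SNR.
rng = np.random.default_rng(2)
p, N, K = 8, 200, 2
angles = np.deg2rad([-20.0, 30.0])
A = np.exp(1j * np.pi * np.outer(np.arange(p), np.sin(angles)))  # steering matrix
S = (rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(p, N)) + 1j * rng.normal(size=(p, N))) / np.sqrt(2)
Y = A @ S + noise
R = Y @ Y.conj().T / N        # sample covariance from snapshots
k_hat = mdl_num_sources(R, N)
```

With coherent (fully correlated) sources the signal eigenvalues collapse together, which is why eigenvalue-threshold criteria like MDL fail there and learned enumeration becomes attractive.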
7 |
Efficient formulation and implementation of ensemble based methods in data assimilation
Nino Ruiz, Elias David, 11 January 2016 (links)
Ensemble-based methods have gained widespread popularity in the field of data assimilation. An ensemble of model realizations encapsulates information about the error correlations driven by the physics and the dynamics of the numerical model. This information can be used to obtain improved estimates of the state of non-linear dynamical systems such as the atmosphere and/or the ocean. This work develops efficient ensemble-based methods for data assimilation.
A major bottleneck in ensemble Kalman filter (EnKF) implementations is the solution of a linear system at each analysis step. To alleviate it, an EnKF implementation based on an iterative Sherman-Morrison formula is proposed. The rank deficiency of the ensemble covariance matrix is exploited in order to efficiently compute the analysis increments during the assimilation process. The computational effort of the proposed method is comparable to that of the best EnKF implementations found in the current literature. The stability of the new algorithm is theoretically proven based on the positive definiteness of the data error covariance matrix.
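The core linear-algebra trick can be sketched as follows: with a diagonal observation-error covariance R and an ensemble-derived low-rank term, the analysis system (R + U Uᵀ) z = b is solved by applying the Sherman-Morrison identity once per ensemble column, never forming or inverting the full matrix. Variable names are illustrative and the sketch omits the EnKF bookkeeping (scaling by ensemble size, mapping through the observation operator) around it.

```python
import numpy as np

def sherman_morrison_solve(r_diag, U, b):
    """Solve (diag(r_diag) + U U^T) z = b by repeated rank-1
    Sherman-Morrison updates, one per column of U."""
    # Z tracks A^{-1} [U, b], starting from A = diag(r_diag).
    Z = np.column_stack([U, b]) / r_diag[:, None]
    for i in range(U.shape[1]):
        u = U[:, i]
        Au = Z[:, i].copy()                   # current A^{-1} u_i
        denom = 1.0 + u @ Au
        Z -= np.outer(Au, u @ Z) / denom      # A grows to A + u_i u_i^T
    return Z[:, -1]

rng = np.random.default_rng(3)
m, N = 20, 5                                   # observations, ensemble members
r = rng.uniform(0.5, 2.0, size=m)
U = rng.normal(size=(m, N))
b = rng.normal(size=m)
z = sherman_morrison_solve(r, U, b)
```

Each update costs O(mN) work, so the whole solve is O(mN²) rather than the O(m³) of a dense factorization, which is the point when the ensemble is much smaller than the observation space.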
In order to improve the background error covariance matrices in ensemble-based data assimilation, we explore the use of shrinkage covariance matrix estimators computed from ensembles. The resulting filter has attractive features in terms of both memory usage and computational complexity. Numerical results show that it performs better than traditional EnKF formulations.
In geophysical applications, the correlations between errors corresponding to distant model components decrease rapidly with distance. We propose a new and efficient implementation of the EnKF based on a modified Cholesky decomposition for inverse covariance matrix estimation. This approach exploits the conditional independence of background errors between distant model components with respect to a predefined radius of influence. Consequently, sparse estimators of the inverse background error covariance matrix can be obtained, which implies huge memory savings during the assimilation process under realistic weather forecast scenarios. Rigorous error bounds for the resulting estimator in the context of data assimilation are theoretically proved. The conclusion is that the estimator converges to the true inverse background error covariance matrix when the ensemble size is of the order of the logarithm of the number of model components.
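A sketch of the modified Cholesky idea under a simple 1-D ordering of model components (operational versions handle 2-D/3-D grids; names here are illustrative): regress each component only on predecessors within the radius of influence. The resulting regression factor is banded, so the assembled precision matrix is sparse.

```python
import numpy as np

def sparse_precision_modified_cholesky(X, radius):
    """Sparse inverse-covariance estimate via modified Cholesky:
    precision = T' D^{-1} T, with T unit lower triangular from
    regressions restricted to a radius of influence (1-D ordering)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    T = np.eye(p)
    d = np.empty(p)
    d[0] = Xc[:, 0].var()
    for i in range(1, p):
        lo = max(0, i - radius)          # only predecessors within the radius
        Z = Xc[:, lo:i]
        phi, *_ = np.linalg.lstsq(Z, Xc[:, i], rcond=None)
        T[i, lo:i] = -phi
        d[i] = (Xc[:, i] - Z @ phi).var()
    return T.T @ np.diag(1.0 / d) @ T

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 6))            # toy ensemble: 400 samples, 6 components
P_sparse = sparse_precision_modified_cholesky(X, radius=1)
```

With the full radius the construction reproduces the exact inverse of the sample covariance; shrinking the radius zeroes out entries for conditionally independent distant components, which is where the memory savings come from.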
We explore high-performance implementations of the proposed EnKF algorithms. When the observational operator can be locally approximated for different regions of the domain, efficient parallel implementations of the EnKF formulations presented in this dissertation can be obtained. The parallel computation of the analysis increments makes use of domain decomposition: local analysis increments are computed on (possibly) different processors and, once all have been computed, mapped back onto the global domain to recover the global analysis. Tests performed with an atmospheric general circulation model at T-63 resolution, varying the number of processors from 96 to 2,048, reveal that the assimilation time can be decreased multiple fold for all the proposed EnKF formulations.

Ensemble-based methods can also be used to reformulate strong-constraint four-dimensional variational data assimilation so as to avoid the construction of adjoint models, which can be complicated for operational models. We propose a trust-region approach based on ensembles in which the analysis increments are computed in the space spanned by an ensemble of snapshots. The quality of the resulting increments in the ensemble space is compared against the gains in the full space, and decisions on whether to accept or reject solutions rely on trust-region updating formulas. Results based on an atmospheric general circulation model at T-42 resolution reveal that this methodology can improve the analysis accuracy. / Ph. D.