1.
Portfolio Value at Risk and Expected Shortfall Using High-Frequency Data. Zváč, Marek, January 2015.
The main objective of this thesis is to investigate whether multivariate models using high-frequency data provide significantly more accurate forecasts of Value at Risk and Expected Shortfall than multivariate models using only daily data. The question is very topical, since the Basel Committee announced in 2013 that it is going to change the risk measure used for the calculation of capital requirements from Value at Risk to Expected Shortfall. Further improvement in the accuracy of both risk measures can also be achieved by incorporating high-frequency data, which are increasingly available thanks to significant technological progress. We therefore employ the parsimonious Heterogeneous Autoregression (HAR) and its asymmetric version, which use high-frequency data to model the realized covariance matrix. The well-established DCC-GARCH and EWMA models serve as benchmarks. Value at Risk (VaR) and Expected Shortfall (ES) are computed using parametric and semi-parametric approaches as well as Monte Carlo simulation. The multivariate loss distributions considered are the Gaussian, the Student t, distributions simulated via copula functions, and filtered historical simulations; the univariate loss distributions used are the Generalized Pareto Distribution from extreme value theory, the empirical distribution, and standard parametric distributions. The main finding is that the Heterogeneous Autoregression model using high-frequency data delivered VaR forecasts that were at least as accurate as, and often superior to, those of the benchmark models based on daily data. Finally, backtesting ES remains very challenging: the applied Test I and Test II did not provide credible validation of the forecasts.
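
As an illustration of the modelling pipeline described in this abstract, the following is a minimal sketch, simplified to a univariate HAR forecast of the portfolio's realized variance and the implied Gaussian VaR/ES. The thesis models the full realized covariance matrix and considers richer loss distributions; the series name rv and all choices below are illustrative rather than taken from the thesis.

    # Minimal sketch: daily HAR forecast of portfolio realized variance and the
    # implied Gaussian VaR/ES. `rv` is assumed to be a pandas Series of daily
    # realized variances of a fixed-weight portfolio (illustrative, not thesis code).
    import numpy as np
    import pandas as pd
    from scipy.stats import norm

    def har_forecast(rv: pd.Series) -> float:
        """One-step-ahead HAR forecast: RV_{t+1} on daily, weekly and monthly RV."""
        regressors = pd.DataFrame({
            "rv_d": rv,
            "rv_w": rv.rolling(5).mean(),    # weekly component (5 trading days)
            "rv_m": rv.rolling(22).mean(),   # monthly component (22 trading days)
        })
        y = rv.shift(-1)                     # next-day realized variance
        data = pd.concat([y.rename("y"), regressors], axis=1).dropna()
        X = np.column_stack([np.ones(len(data)), data[["rv_d", "rv_w", "rv_m"]]])
        beta, *_ = np.linalg.lstsq(X, data["y"], rcond=None)
        x_last = np.r_[1.0, regressors.iloc[-1].values]
        return float(x_last @ beta)

    def gaussian_var_es(rv_forecast: float, alpha: float = 0.01) -> tuple[float, float]:
        """Parametric VaR/ES of the next-day return under a zero-mean Gaussian."""
        sigma = np.sqrt(rv_forecast)
        z = norm.ppf(alpha)
        var = -sigma * z                     # loss quantile, reported as a positive number
        es = sigma * norm.pdf(z) / alpha     # expected loss beyond the VaR threshold
        return var, es

The weekly and monthly components are the usual 5- and 22-day averages of daily realized variance; swapping the Gaussian quantile for a Student t or an EVT tail would mirror the alternative loss distributions mentioned above.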
2.
Forecasting Large Realized Covariance Matrices: The Benefits of Factor Models and Shrinkage. Diego Siebra de Brito, 19 September 2018.
We propose a model to forecast very large realized covariance matrices of returns, applying it to the constituents of the S&P 500 on a daily basis. To deal with the curse of dimensionality, we decompose the return covariance matrix using standard firm-level factors (e.g. size, value, profitability) and impose sectoral restrictions on the residual covariance matrix. This restricted model is then estimated using Vector Heterogeneous Autoregressive (VHAR) models fitted with the Least Absolute Shrinkage and Selection Operator (LASSO). Our methodology improves forecasting precision relative to standard benchmarks and leads to better estimates of minimum variance portfolios.
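
A rough sketch of the VHAR-LASSO step described above, assuming the daily realized covariance matrices have already been vectorised into a T-by-k array; the factor decomposition and sectoral restrictions central to the thesis are omitted here, and all function and variable names are illustrative.

    # Sketch of a VHAR with LASSO shrinkage: each element of the vectorised
    # realized covariance matrix is regressed on daily, weekly and monthly
    # averages of all elements. Illustrative only; the thesis applies this to a
    # factor/residual decomposition rather than to the raw covariance matrix.
    import numpy as np
    from sklearn.linear_model import LassoCV

    def vhar_design(Y: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        """Y has shape (T, k): T days, k vectorised covariance elements.
        Returns HAR-type regressors of shape (T-22, 3k) and targets (T-22, k)."""
        T, k = Y.shape
        rows_X, rows_y = [], []
        for t in range(22, T):
            daily = Y[t - 1]
            weekly = Y[t - 5:t].mean(axis=0)
            monthly = Y[t - 22:t].mean(axis=0)
            rows_X.append(np.concatenate([daily, weekly, monthly]))
            rows_y.append(Y[t])
        return np.array(rows_X), np.array(rows_y)

    def fit_vhar_lasso(Y: np.ndarray) -> list[LassoCV]:
        """Fit one cross-validated LASSO regression per covariance element."""
        X, targets = vhar_design(Y)
        models = []
        for j in range(targets.shape[1]):
            models.append(LassoCV(cv=5).fit(X, targets[:, j]))
        return models

With hundreds of assets, k grows quadratically in the number of names, which is exactly why shrinkage of the 3k coefficients in each equation becomes essential.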
3.
Essays on Multivariate Volatility and Dependence Models for Financial Time Series. Noureldin, Diaa, January 2011.
This thesis investigates the modelling and forecasting of multivariate volatility and dependence in financial time series.

The first paper proposes a new model for forecasting changes in the term structure (TS) of interest rates. Using the level, slope and curvature factors of the dynamic Nelson-Siegel model, we build a time-varying copula model for the factor dynamics, allowing for departures from the normality assumption typically adopted in TS models. To induce relative immunity to structural breaks, we model and forecast the factor changes rather than the factor levels. Using US Treasury yields for the period 1986:3-2010:12, our in-sample analysis indicates model stability, and we show statistically significant gains from allowing for a time-varying dependence structure that permits joint extreme factor movements. Our out-of-sample analysis indicates the model's superior ability to forecast the conditional mean in terms of root mean square error reductions and directional forecast accuracy. The forecast gains are stronger during the recent financial crisis. We also conduct out-of-sample model evaluation based on conditional density forecasts.

The second paper introduces a new class of multivariate volatility models, the high-frequency-based volatility (HEAVY) models, that utilizes high-frequency data. We discuss the models' dynamics and highlight their differences from multivariate GARCH models. We also discuss their covariance targeting specification and provide closed-form formulas for multi-step forecasts. Estimation and inference strategies are outlined. Empirical results suggest that the HEAVY model outperforms the multivariate GARCH model out-of-sample, with the gains being particularly significant at short forecast horizons. Forecast gains are obtained for both forecast variances and correlations.

The third paper introduces a new class of multivariate volatility models which is easy to estimate using covariance targeting. The key idea is to rotate the returns and then fit them using a BEKK model for the conditional covariance with the identity matrix as the covariance target. The extension to DCC-type models is given, enriching this class. We focus primarily on diagonal BEKK and DCC models, and on a related parameterisation which imposes common persistence on all elements of the conditional covariance matrix. Inference for these models is computationally attractive, and the asymptotic theory is standard. The techniques are illustrated using recent data on the S&P 500 ETF and some DJIA stocks, including comparisons to the related orthogonal GARCH models.
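
A brief sketch of the rotation-plus-covariance-targeting idea from the third paper: rotating returns by the inverse square root of their unconditional covariance makes the identity matrix the natural covariance target. A scalar BEKK recursion is used below for brevity (the paper focuses on diagonal BEKK and DCC specifications), and the parameter values are placeholders rather than estimates from the thesis.

    # Sketch of rotated returns with identity covariance targeting (illustrative;
    # a scalar BEKK stands in for the diagonal BEKK/DCC models studied in the thesis).
    import numpy as np

    def rotate_returns(returns: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        """returns has shape (T, n). Rotate by the inverse symmetric square root
        of the sample covariance, so the rotated returns have identity covariance."""
        H = np.cov(returns, rowvar=False)
        vals, vecs = np.linalg.eigh(H)
        H_half = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
        H_inv_half = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
        return returns @ H_inv_half, H_half

    def scalar_bekk_identity_target(e: np.ndarray, a: float = 0.05, b: float = 0.93) -> np.ndarray:
        """Scalar BEKK on rotated returns e with the identity as covariance target.
        Returns the conditional covariance of the rotated returns for the last day."""
        n = e.shape[1]
        G = np.eye(n)                          # start the recursion at the target
        intercept = (1.0 - a - b) * np.eye(n)  # targeting: long-run covariance is I
        for t in range(1, e.shape[0]):
            G = intercept + a * np.outer(e[t - 1], e[t - 1]) + b * G
        return G

    # The conditional covariance of the original returns is recovered as
    # H_half @ G @ H_half, where H_half comes from rotate_returns.

Because the target is the identity, the intercept involves no additional parameters, which is what makes estimation by covariance targeting straightforward in this class.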