41 |
Volatility Modelling in the Swedish and US Fixed Income Market : A comparative study of GARCH, ARCH, E-GARCH and GJR-GARCH Models on Government Bonds
Mortimore, Sebastian, Sturehed, William January 2023 (has links)
Volatility is an important variable in financial markets, risk management and investment decisions, and volatility models are useful tools for predicting future volatility. The purpose of this study is to compare the accuracy of various volatility models, including ARCH, GARCH and extensions of the GARCH framework. The study applies these models to Swedish and American government bonds in the fixed income market. Model performance is assessed on out-of-sample forecasts evaluated with the loss functions RMSE, MAE and MSE, specifically measuring the ability to forecast future volatility. Daily volatility forecasts, computed from daily bid prices of Swedish and American 2-, 5- and 10-year government bonds, are compared against realized volatility, which acts as the proxy for volatility. The results show that the US government bonds, excluding the US 2-year YTM, exhibit no significant negative volatility, volatility asymmetry or leverage effects. Overall, the ARCH and GARCH models outperformed E-GARCH and GJR-GARCH; the exception is the US 2-year YTM, which shows negative volatility, asymmetry and leverage effects and for which the GJR-GARCH model outperformed the ARCH and GARCH models.
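The out-of-sample evaluation described in this abstract can be sketched with the three loss functions it names; the function name and the sample values below are illustrative, not taken from the thesis:

```python
import numpy as np

def forecast_losses(forecast, realized):
    """Out-of-sample loss functions comparing volatility forecasts
    against a realized-volatility proxy."""
    e = np.asarray(forecast, dtype=float) - np.asarray(realized, dtype=float)
    mse = float(np.mean(e ** 2))
    return {"MSE": mse, "RMSE": mse ** 0.5, "MAE": float(np.mean(np.abs(e)))}
```

Ranking models by these three losses can disagree, since RMSE penalizes large errors more heavily than MAE; that is why studies of this kind typically report several of them.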
|
42 |
Evaluating volatility forecasts, A study in the performance of volatility forecasting methods / Utvärdering av volatilitetsprognoser, En undersökning av kvaliteten av metoder för volatilitetsprognostisering
Verhage, Billy January 2023 (has links)
In this thesis, the foundations of evaluating the performance of volatility forecasting methods are explored, and a mathematical framework is created to determine overall forecasting performance based on observed daily returns across multiple financial instruments. Multiple volatility responses are investigated, and theoretical corrections are derived under the assumption that log returns follow a normal distribution. Performance measures that are independent of the long-term volatility profile are explored and tested. Well-established volatility forecasting methods, such as moving-average and GARCH(p,q) models, are implemented and validated on multiple volatility responses. The results reveal no significant difference in performance between the moving-average and GARCH(1,1) volatility forecasts. However, the observed non-zero bias and a separate analysis of the distribution of the log returns reveal that the theoretically derived corrections are insufficient for the non-normally distributed log returns. Furthermore, absolute performance depends strongly on the evaluation period considered, suggesting that comparisons between periods should not be made. This study is limited by the fact that the bootstrapped confidence regions are ill-suited for determining significant performance differences between forecasting methods. In future work, statistical significance can be gained by bootstrapping the difference in performance measures. Furthermore, a more in-depth analysis is needed to determine more appropriate theoretical corrections for the volatility responses based on the observed distribution of the log returns. This will increase overall forecasting performance and improve the overall quality of the evaluation framework.
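A minimal sketch of the moving-average benchmark compared above (the window length and function name are assumptions, not the thesis's choices):

```python
import numpy as np

def moving_average_forecast(log_returns, window=21):
    """One-step-ahead moving-average volatility forecast: the sample
    standard deviation of the last `window` log returns."""
    r = np.asarray(log_returns, dtype=float)
    return np.array([r[t - window:t].std(ddof=1) for t in range(window, len(r) + 1)])
```

The last element of the returned array is the forecast for the day after the sample ends; earlier elements can be scored against subsequently observed volatility responses.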
|
43 |
Bootstrapping high frequency data
Hounyo, Koomla Ulrich 07 1900 (has links)
We develop in this thesis bootstrap methods for high-frequency financial data. The first two chapters focus on bootstrap methods for the "pre-averaging" approach, which is robust to the presence of market microstructure effects. The main idea underlying this approach is that we can reduce the impact of the noise by pre-averaging high-frequency returns that are possibly contaminated with market microstructure noise before applying a realized-volatility-like statistic. Based on this approach, we develop several bootstrap methods which preserve the dependence structure and the heterogeneity in the mean of the original data. The third chapter shows how and to what extent the local Gaussianity assumption can be exploited to generate a bootstrap approximation for covolatility measures.
The first chapter is entitled "Bootstrap inference for pre-averaged realized volatility based on non-overlapping returns". The main contribution of this chapter is to propose bootstrap methods for realized volatility-like estimators defined on pre-averaged returns. In particular, we focus on the pre-averaged realized volatility estimator proposed by Podolskij and Vetter (2009). This statistic can be written (up to a bias correction term) as the (scaled) sum of squared pre-averaged returns, where the pre-averaging is done over all possible non-overlapping blocks of consecutive observations. Pre-averaging reduces the influence of the noise and allows for realized volatility estimation on the pre-averaged returns. The non-overlapping nature of the pre-averaged returns implies that these are asymptotically independent, but possibly heteroskedastic. This motivates the application of the wild bootstrap in this context. We provide a proof of the first order asymptotic validity of this method for percentile and percentile-t intervals. Our Monte Carlo simulations show that the wild bootstrap can improve the finite sample properties of the existing first order asymptotic theory provided we choose the external random variable appropriately.
The second chapter is entitled "Bootstrapping pre-averaged realized volatility under market microstructure noise ". In this chapter we propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are m-dependent with m growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the “blocks of blocks” bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure that combines the wild bootstrap with the blocks of blocks bootstrap. We provide a proof of the first order asymptotic validity of this method for percentile intervals. Our Monte Carlo simulations show that the wild blocks of blocks bootstrap improves the finite sample properties of the existing first order asymptotic theory.
The third chapter is entitled "Bootstrapping realized volatility and realized beta under a local Gaussianity assumption". The financial econometrics literature on high-frequency data often assumes local constancy of volatility and Gaussianity of high-frequency returns in order to carry out inference. In this chapter, we show how and to what extent the local Gaussianity assumption can be exploited to generate a bootstrap approximation. We show the first-order asymptotic validity of the new wild bootstrap method, which uses the conditional local normality properties of financial high-frequency returns. In addition, we use Edgeworth expansions and Monte Carlo simulations to compare the accuracy of the bootstrap with other existing approaches. It is shown that at second order the new wild bootstrap matches the cumulants of realized-beta-based t-statistics, whereas it provides a third-order asymptotic refinement for realized volatility. Monte Carlo simulations suggest that our new wild bootstrap methods improve upon the first-order asymptotic theory in finite samples and outperform the existing bootstrap methods for realized covolatility measures. We use empirical work to illustrate their use in practice.
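The wild bootstrap idea running through these chapters can be sketched as follows. The external variable is taken to be standard normal here (one common choice satisfying E[v²] = 1, not necessarily the one used in the thesis), and the sketch ignores pre-averaging and bias-correction details:

```python
import numpy as np

def wild_bootstrap_rv(returns, n_boot=2000, seed=0):
    """Wild bootstrap for a realized-volatility-like statistic:
    each (pre-averaged) return is multiplied by an external random
    variable v with E[v^2] = 1, which preserves the possibly
    heteroskedastic scale of each observation."""
    rng = np.random.default_rng(seed)
    r = np.asarray(returns, dtype=float)
    v = rng.standard_normal((n_boot, r.size))
    return np.sum((r * v) ** 2, axis=1)   # bootstrap draws of RV
```

Percentile-type confidence intervals for integrated volatility are then read off the quantiles of the returned draws.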
|
44 |
Análise de previsões de volatilidade para modelos de Valor em Risco (VaR)
Vargas, Rafael de Morais 27 February 2018
Given the importance of market risk measures such as value at risk (VaR), in this paper we compare traditionally accepted volatility forecast models, in particular the GARCH-family models, with more recent models such as HAR-RV and GAS in terms of the accuracy of their VaR forecasts. For this purpose, we use intraday prices, at the 5-minute frequency, of the S&P 500 index and General Electric stock for the period from January 4, 2010 to December 30, 2013. Based on the tick loss function and the Diebold-Mariano test, we found no difference in the predictive performance of the HAR-RV and GAS models compared with the Exponential GARCH (EGARCH) model, considering daily VaR forecasts at the 1% and 5% significance levels for the return series of the S&P 500 index. Regarding the return series of General Electric, the 1% VaR forecasts obtained from the HAR-RV models, assuming a Student-t distribution for the daily returns, are more accurate than the forecasts of the EGARCH model. In the case of the 5% VaR forecasts, all variations of the HAR-RV model perform better than the EGARCH. Our empirical study provides evidence of the good performance of HAR-RV models in forecasting value at risk.
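The tick loss that feeds the Diebold-Mariano comparison above can be sketched as follows (the function name and the sign convention for the VaR forecasts are assumptions):

```python
import numpy as np

def tick_loss(returns, var_forecasts, alpha=0.01):
    """Tick (quantile) loss used to score VaR forecasts; the per-day loss
    differential between two models is what the Diebold-Mariano test
    examines. Forecasts are the alpha-quantiles of the return
    distribution (negative numbers for left-tail VaR)."""
    r = np.asarray(returns, dtype=float)
    q = np.asarray(var_forecasts, dtype=float)
    hit = (r < q).astype(float)          # 1 on days the VaR is violated
    return float(np.mean((alpha - hit) * (r - q)))
```

The asymmetry is deliberate: a violation (return below the forecast quantile) is penalized with weight 1 − α, a non-violation with weight α, so the loss is minimized by the true α-quantile.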
|
45 |
資產配置，波動率與交易密集度 / Asset allocation, Volatility and Trading Intensity
張炳善, Chang, Ping Shan Unknown Date (has links)
This paper examines whether volatility measures that account for trading intensity help investors make better asset allocation decisions. Specifically, we consider two versions of realized volatility (RV): one (RV-C) constructed by regular calendar-time sampling, and one (RV-T) constructed by transaction-time sampling. Compared to models in the GARCH family, both RVs can capture intraday variations of asset return dynamics. In particular, RV-T incorporates intraday trading intensity, and hence provides even more valuable information for investors. With the utility-based approach developed by West, Edison, and Cho (1993), we compare the predictive performance of RV-C, RV-T, and the EGARCH model in terms of the utility generated with each of these three volatility measures. Our empirical results show that the three measures differ from each other mostly when investors are less risk-averse. Most interestingly, the time-deformed RV-T weakly dominates RV-C and the EGARCH model when markets are extremely volatile.
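A minimal sketch of the two sampling schemes compared above, assuming tick timestamps in seconds and log prices (function names and the toy data are illustrative):

```python
import numpy as np

def rv_calendar(times, log_prices, step):
    """Realized volatility with calendar-time sampling: take the last
    observed log price in each `step`-second interval, then sum the
    squared increments."""
    times = np.asarray(times, dtype=float)
    p = np.asarray(log_prices, dtype=float)
    grid = np.arange(times[0], times[-1] + step, step)
    idx = np.searchsorted(times, grid, side="right") - 1
    return float(np.sum(np.diff(p[idx]) ** 2))

def rv_transaction(log_prices, k):
    """Realized volatility with transaction-time sampling: every k-th trade."""
    p = np.asarray(log_prices, dtype=float)[::k]
    return float(np.sum(np.diff(p) ** 2))
```

On the same tick data the two schemes generally give different RV values, because transaction-time sampling adapts to trading intensity while calendar-time sampling does not.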
|
46 |
S&P500波動度的預測 - 考慮狀態轉換與指數風險中立偏態及VIX期貨之資訊內涵 / The Information Content of S&P 500 Risk-neutral Skewness and VIX Futures for S&P 500 Volatility Forecasting: Markov Switching Approach
黃郁傑, Huang, Yu Jie Unknown Date (has links)
This paper explores whether the information implied in VIX futures prices has incremental explanatory power for future volatility of the S&P 500 index. Most prior studies adopt linear forecasting models to investigate the usefulness of historical volatility, implied volatility and risk-neutral skewness for volatility forecasting. However, the previous literature documents long-memory and nonlinear properties in volatility, so this study focuses on nonlinear forecasting models and examines their effectiveness for volatility forecasting. In particular, we concentrate on the Heterogeneous Autoregressive model of Realized Volatility and Implied Volatility (HAR-RV-IV) under different market conditions (i.e., high- and low-volatility states).
This study has three main goals: first, to investigate whether the information extracted from VIX futures prices can improve the accuracy of future volatility forecasts; second, to examine whether combining the information content of risk-neutral skewness and VIX futures enhances the predictive power for future volatility; and last, to explore whether nonlinear models are superior to linear models.
This study finds that VIX futures prices contain additional information about future volatility, relative to past realized volatilities and implied volatility. Out-of-sample analysis confirms that VIX futures significantly improve the accuracy of future volatility forecasts. However, after incorporating the information of risk-neutral skewness and VIX futures prices into the volatility forecasting model, the improvement in forecast accuracy is significant only at the daily horizon. Last, the volatility forecasting models are superior after taking regime switching into account.
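The HAR-RV backbone of the model above is, in its standard form, a regression of today's RV on daily, weekly and monthly RV averages; the 1/5/22-day horizons below are the conventional choice and the design ignores the implied-volatility and regime-switching extensions this study adds:

```python
import numpy as np

def har_design(rv):
    """Regressors of the heterogeneous autoregressive (HAR-RV) model:
    today's RV is regressed on yesterday's RV and on RV averaged over
    the past 5 (weekly) and 22 (monthly) trading days."""
    rv = np.asarray(rv, dtype=float)
    X, y = [], []
    for t in range(22, rv.size):
        X.append([1.0, rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()])
        y.append(rv[t])
    return np.array(X), np.array(y)

# OLS coefficients: beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Extensions of the kind studied here add implied volatility as a further regressor and let the coefficients switch with a latent Markov state.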
|
47 |
[en] ASYMMETRIC EFFECTS AND LONG MEMORY IN THE VOLATILITY OF DJIA STOCKS / [pt] EFEITOS DE ASSIMETRIA E MEMÓRIA LONGA NA VOLATILIDADE DE AÇÕES DO ÍNDICE DOW JONES
MARCEL SCHARTH FIGUEIREDO PINTO 16 October 2006 (has links)
[en] Does volatility reflect lasting reactions to past shocks, or do changes in the markets induce shifts in this variable's dynamics? In this work, we argue that price variations are an essential source of information about multiple regimes in the realized volatility of stocks, with large falls (rises) in prices bringing persistent regimes of high (low) variance. The study shows that this asymmetric effect is highly significant (we estimate that falls of different magnitudes over less than two months are associated with volatility levels 20% and 60% higher than the average of periods with stable or rising prices) and supports large empirical values of long-memory parameter estimates. We show that a model based on those findings significantly improves out-of-sample performance in relation to standard methods, especially in periods of high volatility.
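The driver of the asymmetric effect described above, accumulated returns over a short past window, can be sketched as a simple regime classifier; the window length and thresholds are illustrative assumptions, not the estimates reported in the abstract:

```python
import numpy as np

def regime_from_cum_returns(returns, window=40, fall=-0.10, rise=0.10):
    """Classify each date by the cumulated return over the past `window`
    days: large falls signal a persistent high-variance regime, large
    rises a low-variance one, anything in between a stable regime."""
    r = np.asarray(returns, dtype=float)
    labels = []
    for t in range(window, r.size + 1):
        c = r[t - window:t].sum()
        labels.append("high" if c <= fall else "low" if c >= rise else "stable")
    return labels
```

Regime shifts of this kind in the volatility level are exactly what can masquerade as long memory in fractionally integrated models.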
|
48 |
O uso da volatilidade realizada na simulação histórica ajustada para cálculo do VaR
Costa, Fabiola Medina 26 May 2010
This paper proposes a historical simulation model to calculate VaR, with returns adjusted by the realized volatility measured from intraday returns. The database consists of five shares, among the most liquid in the Bovespa index, from different segments. The proposed methodology combines two approaches from the empirical literature: adjusted historical simulation and realized volatility. The Kupiec and Christoffersen tests are used to analyze and verify the performance of the proposed methodology.
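The volatility-adjusted historical simulation described above can be sketched as follows; the function name and toy data are illustrative, and in the paper the scaling volatilities are realized volatilities computed from intraday returns:

```python
import numpy as np

def adjusted_hs_var(returns, past_vols, current_vol, alpha=0.01):
    """Volatility-adjusted historical simulation: rescale each past
    return by current_vol / the volatility prevailing on its date,
    then take the empirical alpha-quantile of the rescaled sample
    as the VaR forecast."""
    r = np.asarray(returns, dtype=float)
    v = np.asarray(past_vols, dtype=float)
    return float(np.quantile(r * current_vol / v, alpha))
```

The rescaling makes the historical sample reflect today's volatility level, which is what plain historical simulation misses when volatility clusters.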
|
49 |
[en] DISTRIBUTIONS OF RETURNS, VOLATILITIES AND CORRELATIONS IN THE BRAZILIAN STOCK MARKET / [pt] DISTRIBUIÇÕES DE RETORNOS, VOLATILIDADES E CORRELAÇÕES NO MERCADO ACIONÁRIO BRASILEIRO
MARCO AURELIO SIMAO FREIRE 24 February 2005 (has links)
[en] The normality assumption is commonly used in the risk management area to describe the distributions of returns standardized by volatilities. However, using five of the most actively traded stocks on Bovespa, this paper shows that this assumption is not compatible with volatilities estimated by EWMA or GARCH models. In sharp contrast, when we use the information contained in high-frequency data to construct realized volatility measures, we attain normality of the standardized returns, giving promise of improvements in Value at Risk statistics. We also describe the distributions of volatilities and correlations of Brazilian stocks, showing that the distributions of volatilities are nearly lognormal and the distributions of correlations are nearly Gaussian. The analysis is carried out in both univariate and multivariate frameworks and provides background for improved high-dimensional volatility and correlation modelling in the Brazilian stock market.
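A hand-rolled normality check of the kind this analysis relies on, here the Jarque-Bera statistic applied to standardized returns (the paper does not necessarily use this particular test):

```python
import numpy as np

def jarque_bera(x):
    """Jarque-Bera statistic for standardized returns r_t / sigma_t:
    under normality it is asymptotically chi-squared with 2 degrees
    of freedom, so large values reject the normality assumption."""
    x = np.asarray(x, dtype=float)
    n = x.size
    z = (x - x.mean()) / x.std()
    skew = np.mean(z ** 3)
    kurt = np.mean(z ** 4)
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
```

Returns standardized by EWMA or GARCH volatilities typically fail this test (excess kurtosis), while returns standardized by realized volatility come much closer to passing it.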
|
50 |
Mémoire longue, volatilité et gestion de portefeuille / Long memory, volatility and portfolio management
Coulon, Jérôme 20 May 2009 (has links)
This PhD thesis studies the long memory of the volatility of asset returns. In the first part, we offer an interpretation of long memory in terms of agents' behavior through a long-memory volatility model whose parameters are linked to the heterogeneous behavior of agents, who may be rational or boundedly rational. We determine theoretically the conditions necessary to obtain long memory. We then calibrate our model on the daily realized volatility series of mid- and large-cap American stocks, and observe the change in agents' behavior between the period before the internet bubble burst and the period after. The second part is devoted to incorporating long memory into portfolio management. We start by proposing a stochastic-volatility portfolio model in which the dynamics of the log-volatility are characterized by an Ornstein-Uhlenbeck process. We show that an increase in the uncertainty about the future volatility level induces a revision of the consumption and investment plan. In a second model, we introduce a long-memory component through a fractional Brownian motion. As a consequence, the economic system moves from a Markovian framework to a non-Markovian one, and we provide a new resolution method based on Monte Carlo techniques. Finally, we show the importance of modelling volatility correctly and warn the portfolio manager against model misspecification errors.
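The Ornstein-Uhlenbeck log-volatility dynamics of the first portfolio model can be sketched with a simple Euler discretization; all parameter values below are illustrative assumptions, not calibrated values from the thesis:

```python
import numpy as np

def simulate_ou_log_vol(n, dt=1.0 / 252, kappa=5.0, theta=None, sigma=0.5, seed=0):
    """Euler scheme for an Ornstein-Uhlenbeck log-volatility,
    d ln v = kappa * (theta - ln v) dt + sigma dW."""
    theta = np.log(0.2) if theta is None else theta
    rng = np.random.default_rng(seed)
    x = np.empty(n + 1)
    x[0] = theta                       # start at the long-run mean
    for t in range(n):
        x[t + 1] = x[t] + kappa * (theta - x[t]) * dt \
                   + sigma * np.sqrt(dt) * rng.standard_normal()
    return np.exp(x)                   # volatility path v_t

```

Replacing the Gaussian increments with fractional-Brownian-motion increments is what introduces long memory in the second model and breaks the Markov property, motivating Monte Carlo resolution.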
|