  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Předpovídání realizované volatility: Záleží na skocích v cenách? / Forecasting realized volatility: Do jumps in prices matter?

Lipták, Štefan January 2012 (has links)
This thesis applies Heterogeneous Autoregressive (HAR) models of realized volatility to five-minute data on three of the most liquid financial assets: S&P 500 futures, Euro FX and Light Crude NYMEX. The main contribution lies in the length of the datasets, which span 25 years (13 years in the case of Euro FX). Our aim is to show that decomposing realized variance into continuous and jump components improves the predictability of RV even on extremely long high-frequency datasets. The main goal is to investigate the dynamics of the HAR model parameters over time. We also examine whether the volatilities of different assets behave differently. The results reveal that decomposing RV into its components indeed improves the modeling and forecasting of volatility on all datasets. However, we found that forecasts are most accurate when based on short pre-forecast periods of one to two years, owing to the strong time variation in the HAR model's parameters. This variation is also revealed by year-by-year estimation on all datasets. Consequently, we consider HAR models inappropriate for modeling RV on such long datasets, as they cannot capture the dynamics of RV. This was indicated on all three datasets; we therefore conclude that volatility behaves similarly across assets of similar liquidity.
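The HAR regression underlying this kind of study can be sketched in a few lines. The following is a minimal illustration with synthetic data; the function name and the simulated RV series are our own assumptions, not the thesis's code.

```python
import numpy as np

def har_features(rv, horizons=(1, 5, 22)):
    """Build daily, weekly and monthly averages of realized variance,
    the standard HAR regressors (horizons in trading days)."""
    T = len(rv)
    start = max(horizons)
    X = np.column_stack([
        np.array([rv[t - h:t].mean() for t in range(start, T)])
        for h in horizons
    ])
    y = rv[start:]                                  # next-day RV to be forecast
    return X, y

rng = np.random.default_rng(0)
rv = np.abs(rng.standard_normal(300)) * 0.01        # synthetic daily RV series
X, y = har_features(rv)
X1 = np.column_stack([np.ones(len(X)), X])          # add intercept
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)       # OLS estimate of HAR coefficients
forecast = X1[-1] @ beta                            # one-step-ahead RV forecast
```

In the decomposition variants the thesis studies, the daily RV regressor would be split into its continuous and jump parts before running the same regression.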
12

Jump Detection With Power And Bipower Variation Processes

Dursun, Havva Ozlem 01 September 2007 (has links) (PDF)
In this study, we show that realized bipower variation, an extension of realized power variation, is an alternative estimator of integrated variance, like realized variance. Realized bipower variation is robust to rare jumps: if rare jumps are added to a stochastic volatility process, the realized bipower variation process continues to estimate integrated variance, whereas realized variance estimates integrated variance plus the quadratic variation of the jump component. This robustness is crucial, since it isolates the discontinuous component of quadratic variation, which comes from the jump part of the logarithmic price process. We thus demonstrate that if the logarithmic price process belongs to the class of stochastic volatility plus rare-jump processes, then the difference between the realized variance and realized bipower variation processes estimates the discontinuous component of the quadratic variation. The quadratic variation of the jump component can therefore be estimated, and jump detection can be achieved.
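The estimator relationship described above (jump part as the difference RV minus BV) can be sketched as follows; the synthetic return series and the injected jump are illustrative assumptions.

```python
import numpy as np

def realized_variance(r):
    """Realized variance: sum of squared intraday returns."""
    return np.sum(r ** 2)

def bipower_variation(r):
    """Realized bipower variation, scaled by mu1^{-2} where
    mu1 = E|Z| = sqrt(2/pi) for standard normal Z."""
    mu1 = np.sqrt(2.0 / np.pi)
    return np.sum(np.abs(r[1:]) * np.abs(r[:-1])) / mu1 ** 2

rng = np.random.default_rng(1)
r = 0.001 * rng.standard_normal(288)    # synthetic 5-minute returns, one day
r[100] += 0.02                          # inject a single rare price jump
# BV stays near integrated variance; RV also picks up the squared jump,
# so the (truncated) difference estimates the jump component.
jump_part = max(realized_variance(r) - bipower_variation(r), 0.0)
```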
13

Optimizable Multiresolution Quadratic Variation Filter For High-frequency Financial Data

Sen, Aykut 01 February 2009 (has links) (PDF)
As tick-by-tick data on financial transactions become easier to obtain, processing that much information efficiently and correctly to estimate integrated volatility gains importance. However, empirical findings show that such data may become unusable due to microstructure effects. The most common way to overcome this problem is to sample the data at equidistant intervals on a calendar, tick or business time scale. Comparative research on the subject generally asserts that the most successful scheme is calendar time sampling every 5 to 20 minutes, but this typically means discarding more than 99 percent of the data, so a more efficient sampling method is clearly needed. Although some research has explored alternative techniques, none has been proven best. Our study is concerned with a sampling scheme that uses information at different frequency scales and is less prone to microstructure effects. We introduce a new concept of business intensity, whose sampler is named the Optimizable Multiresolution Quadratic Variation Filter. The filter uses multiresolution analysis techniques to decompose the data into different scales and quadratic variation to build up the new business time scale. Our empirical findings show that the filter is clearly less prone to microstructure effects than any other common sampling method. We use classified tick-by-tick data for the Turkish Interbank FX market. The market is closed for nearly 14 hours a day, so large jumps occur between closing and opening prices; we therefore also propose a new smoothing algorithm to reduce the effect of these jumps.
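For contrast, the common calendar-time scheme the abstract criticizes can be sketched in a few lines: last-tick sampling on a fixed grid. The synthetic tick data below are an assumption for illustration.

```python
import numpy as np

def calendar_sample(times, prices, interval=300.0):
    """Last-tick calendar sampling: for each point on a fixed time grid,
    keep the last observed price at or before that point."""
    grid = np.arange(times[0], times[-1] + interval, interval)
    idx = np.searchsorted(times, grid, side="right") - 1
    idx = idx[idx >= 0]
    return grid[: len(idx)], prices[idx]

rng = np.random.default_rng(2)
times = np.sort(rng.uniform(0, 3600, 5000))               # one hour of irregular ticks (seconds)
prices = 100 + np.cumsum(0.01 * rng.standard_normal(5000))
grid, sampled = calendar_sample(times, prices)            # 5-minute calendar-time samples
```

With 5,000 ticks per hour, a 5-minute grid keeps only about a dozen observations, which illustrates the "more than 99 percent discarded" point above.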
14

Univariate GARCH models with realized variance

Börjesson, Carl, Löhnn, Ossian January 2019 (has links)
This essay investigates how realized variance affects GARCH-family models (GARCH, EGARCH, GJR-GARCH) when added as an external regressor. The GARCH models are estimated under three different distributions: the Normal, Student's t and Normal Inverse Gaussian. The results are ambiguous: adding realized variance improves the in-sample model fit, but in forecasting, the models with realized variance produce Value at Risk predictions similar to those of the models without it.
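A GARCH(1,1) variance recursion with realized variance as an external regressor, the kind of specification the essay estimates, might be sketched as follows. The parameter values and simulated series are illustrative assumptions, not estimates from the essay.

```python
import numpy as np

def garchx_variance(returns, rv, omega, alpha, beta, gamma):
    """Conditional variance recursion for GARCH(1,1) with realized variance
    as an external regressor:
        sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1} + gamma*RV_{t-1}
    """
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns.var()            # initialize at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = (omega + alpha * returns[t - 1] ** 2
                     + beta * sigma2[t - 1] + gamma * rv[t - 1])
    return sigma2

rng = np.random.default_rng(3)
r = 0.01 * rng.standard_normal(500)      # synthetic daily returns
rv = r ** 2 + 1e-6                       # stand-in for intraday realized variance
sigma2 = garchx_variance(r, rv, omega=1e-6, alpha=0.05, beta=0.85, gamma=0.05)
```

In practice the parameters would be fitted by maximum likelihood under the chosen return distribution; setting `gamma=0` recovers the plain GARCH(1,1) baseline.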
15

Análise de previsões de volatilidade para modelos de Valor em Risco (VaR) / Analysis of volatility forecasts for Value at Risk (VaR) models

Vargas, Rafael de Morais 27 February 2018 (has links)
Given the importance of market risk measures such as value at risk (VaR), in this paper we compare traditionally accepted volatility forecasting models, in particular the GARCH family, with more recent models such as HAR-RV and GAS in terms of the accuracy of their VaR forecasts. For this purpose, we use intraday prices, at the 5-minute frequency, of the S&P 500 index and General Electric stock for the period from January 4, 2010 to December 30, 2013. Based on the tick loss function and the Diebold-Mariano test, we find no difference in the predictive performance of the HAR-RV and GAS models relative to the Exponential GARCH (EGARCH) model for daily VaR forecasts at the 1% and 5% significance levels for the S&P 500 return series. For the General Electric return series, the 1% VaR forecasts obtained from HAR-RV models assuming a Student's t distribution for daily returns are more accurate than those of the EGARCH model, and for the 5% VaR forecasts, all variations of the HAR-RV model outperform the EGARCH. Our empirical study provides evidence of the good performance of HAR-RV models in forecasting value at risk.
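The tick (quantile) loss used above to rank VaR forecasts can be sketched as follows; the return series and the two candidate VaR paths are illustrative assumptions, not the paper's data.

```python
import numpy as np

def tick_loss(returns, var, alpha):
    """Quantile ('tick') loss for comparing VaR forecasts:
    mean of (alpha - 1{r < VaR}) * (r - VaR). Every term is non-negative,
    and the loss is minimized when VaR equals the true alpha-quantile."""
    hit = (returns < var).astype(float)
    return np.mean((alpha - hit) * (returns - var))

rng = np.random.default_rng(4)
r = 0.01 * rng.standard_normal(1000)
# two hypothetical 1% VaR forecast paths: one near the true quantile,
# one overly conservative
var_a = np.full(1000, -0.0233)
var_b = np.full(1000, -0.04)
loss_a = tick_loss(r, var_a, 0.01)
loss_b = tick_loss(r, var_b, 0.01)
# the forecast with the lower tick loss is judged more accurate;
# the Diebold-Mariano test then checks whether the loss difference is significant
```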
16

Statistical properties of the liquidity and its influence on the volatility prediction

Brandejs, David January 2016 (has links)
This master's thesis concentrates on the influence of liquidity measures on the prediction of volatility and, given the magic triangle phenomenon, subsequently on the expected return. The liquidity measures Amihud Illiquidity, Amivest Liquidity and the Roll measure adjusted for high-frequency data are utilized. The dataset used for the modeling consisted of 98 shares traded on the S&P 100, covering 1 January 2013 to 31 December 2014. We find that liquidity truly enters into the return-volatility relationship and influences these variables: the magic triangle interacts. However, contrary to our hypothesis, the model shows that lower liquidity is associated with lower realized risk. This inference is suggested by all three models (3SLS, 2SLS and OLS). Furthermore, we use realized variance and bi-power variation to separate the jump component. Our second hypothesis, that lower liquidity implies a higher frequency of jumps, is confirmed for only one of the two liquidity proxies (Roll) included in the resulting logit FE model. Keywords: liquidity, risk, volatility, expected return, magic triangle, price jumps, realized variance, bi-power variation, three-stage least squares model, logit, high-frequency data, S&P 100
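One of the liquidity proxies named above, the Amihud illiquidity measure, can be sketched in a few lines; the simulated returns and volumes are illustrative assumptions (the thesis applies the measure to S&P 100 constituents).

```python
import numpy as np

def amihud_illiquidity(returns, dollar_volumes):
    """Amihud illiquidity proxy: average of |daily return| / dollar volume.
    A higher value means a larger price impact per unit traded, i.e. lower liquidity."""
    return np.mean(np.abs(returns) / dollar_volumes)

rng = np.random.default_rng(5)
ret = 0.01 * rng.standard_normal(252)       # one year of daily returns
vol = rng.uniform(1e6, 5e6, 252)            # daily dollar volume
illiq = amihud_illiquidity(ret, vol)        # higher -> less liquid
```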
17

Modélisation des données financières par les modèles à chaîne de Markov cachée de haute dimension / Modeling financial data with high-dimensional hidden Markov models

Maoude, Kassimou Abdoul Haki 04 1900 (has links)
Hidden Markov Models (HMMs) are popular tools to interpret, model and forecast financial data. In these models, the return dynamics on a financial asset evolve according to a non-observed variable, a Markov chain, which generally represents the volatility of the asset. This volatility is notoriously difficult to reproduce with statistical models as it is very persistent in time. HMMs allow the volatility to vary according to the states of a Markov chain. 
Historically, these models are estimated with a very small number of regimes (states), because the number of parameters to be estimated grows quickly with the number of regimes and the optimization becomes difficult. The objective of this thesis is to propose a general framework to construct HMMs with a richer state space and a higher level of volatility persistence. In the first part, this thesis studies a general class of high-dimensional HMMs, called factorial HMMs, and derives its theoretical properties. In these models, the volatility is linked to a high-dimensional Markov chain built by multiplying lower-dimensional Markov chains, called components. We discuss how previously proposed models based on two-dimensional components adhere to the factorial HMM framework. Furthermore, we propose a new process---the Multifractal Discrete Stochastic Volatility (MDSV) process---which generalizes existing factorial HMMs to dimensions larger than two. The particular parametrization of the MDSV model allows for enough flexibility to reproduce different decay rates of the autocorrelation function, akin to those observed on financial data. A framework is also proposed to model financial log-returns and realized variances, either separately or jointly. An empirical analysis on 31 financial indices reveals that the MDSV model outperforms the realized EGARCH model in terms of fitting and forecasting performance. Our MDSV model requires us to pre-specify the number of components and assumes that there is no uncertainty on that number. In the second part of the thesis, we propose the infinite Factorial Hidden Markov Volatility (iFHMV) model as part of a Bayesian framework to let the data drive the selection of the number of components and take into account the uncertainty related to the number of components in the fitting and forecasting procedure. We also develop an algorithm inspired by the Indian Buffet Process (IBP) to estimate the iFHMV model on financial log-returns. 
Empirical analyses on two financial indices and two stocks show that the iFHMV model outperforms popular benchmarks in terms of forecasting performance.
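The core construction described above, a high-dimensional chain built by multiplying low-dimensional component chains, can be sketched with a Kronecker product under the assumption that the components evolve independently. The two-state components below are illustrative (the MDSV model generalizes to higher-dimensional components).

```python
import numpy as np

def component_chain(p):
    """Two-state Markov component that stays in its current state
    with probability p and switches with probability 1 - p."""
    return np.array([[p, 1 - p],
                     [1 - p, p]])

def factorial_transition(persistences):
    """Transition matrix of the product chain: the Kronecker product of
    N independent components yields a 2^N-state hidden Markov chain."""
    P = np.array([[1.0]])
    for p in persistences:
        P = np.kron(P, component_chain(p))
    return P

# three components with decreasing persistence -> an 8x8 transition matrix
P = factorial_transition([0.99, 0.95, 0.90])
```

Mixing components with different persistence levels is what lets such models approximate the slow autocorrelation decay of volatility while keeping the number of free parameters equal to the number of components, not the number of joint states.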
