51 |
Four essays on the econometric modelling of volatility and durations. Amado, Cristina. January 2009.
The thesis "Four Essays on the Econometric Modelling of Volatility and Durations" consists of four research papers in the area of financial econometrics on topics of the modelling of financial market volatility and the econometrics of ultra-high-frequency data. The aim of the thesis is to develop new econometric methods for modelling and hypothesis testing in these areas. The second chapter introduces a new model, the time-varying GARCH (TV-GARCH) model, in which volatility has a smooth time-varying structure of either additive or multiplicative type. To characterize smooth changes in the (un)conditional variance we assume that the parameters vary smoothly over time according to the logistic transition function. A data-based modelling technique is used for specifying the parametric structure of the TV-GARCH models. This is done by testing a sequence of hypotheses by Lagrange multiplier tests presented in the chapter. Misspecification tests are also provided for evaluating the adequacy of the estimated model. The third chapter addresses the issue of modelling deterministic changes in the unconditional variance over a long return series. The modelling strategy is illustrated with an application to the daily returns of the Dow Jones Industrial Average (DJIA) index from 1920 until 2003. The empirical results sustain the hypothesis that the assumption of constancy of the unconditional variance is not adequate over long return series and indicate that deterministic changes in the unconditional variance may be associated with macroeconomic factors. In the fourth chapter we propose an extension of the univariate multiplicative TV-GARCH model to the multivariate Conditional Correlation GARCH (CC-GARCH) framework. The variance equations are parameterized such that they combine the long-run and the short-run dynamic behaviour of the volatilities. In this framework, the long-run behaviour is described by the individual unconditional variances, and it is allowed to vary smoothly over time according to the logistic transition function. The effects of modelling the nonstationary variance component are examined empirically in several CC-GARCH models using pairs of seven daily stock return series from the S&P 500 index. The results show that the magnitude of such effect varies across different stock series and depends on the structure of the conditional correlation matrix. An important feature of financial durations is the evidence of a strong diurnal variation over the trading day. In the fifth chapter we propose a new parameterization for describing the diurnal pattern of trading activity. The parametric structure of the diurnal component allows the duration process to change smoothly over the time-of-day according to the logistic transition function. The empirical results suggest that the diurnal variation may not always have the inverted U-shaped pattern for the trade durations as documented in earlier studies.
52 |
Stochastic Modelling of Random Variables with an Application in Financial Risk Management. Moldovan, Max. January 2003.
The problem of determining whether a theoretical model is an accurate representation of an empirically observed phenomenon is one of the most challenging in empirical scientific investigation. This study explores the problem of stochastic model validation. Special attention is devoted to the unusual two-peaked shape of the empirically observed distributions of financial returns conditional on realised volatility. The application of statistical hypothesis testing and simulation techniques leads to the conclusion that returns conditional on realised volatility follow a specific, previously undocumented distribution. The probability density that represents this distribution is derived, characterised and applied to the validation of the financial model.
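The simulation-based side of such a validation exercise can be illustrated with a generic Monte Carlo goodness-of-fit sketch: compute a distance between the observed sample and the candidate model, then compare it with the distances of samples simulated under that model. The Kolmogorov-Smirnov statistic and the standard normal placeholder null below are assumptions for illustration only; the thesis' actual test concerns its specific, newly derived density.

```python
import numpy as np
from scipy import stats

def validate_by_simulation(observed, simulate_model, n_sims=999, seed=0):
    """Monte Carlo goodness-of-fit check based on the KS distance.

    Returns the share of model-simulated samples whose KS distance to the
    reference CDF is at least that of the observed sample (a p-value)."""
    rng = np.random.default_rng(seed)
    d_obs = stats.kstest(observed, "norm").statistic   # placeholder null
    exceed = sum(
        stats.kstest(simulate_model(len(observed), rng), "norm").statistic
        >= d_obs
        for _ in range(n_sims)
    )
    return (1 + exceed) / (1 + n_sims)

# Example: are volatility-standardized returns Gaussian? (Heavy-tailed here.)
rng = np.random.default_rng(1)
returns = rng.standard_t(df=5, size=1000)
std_returns = returns / returns.std()
p_value = validate_by_simulation(std_returns, lambda n, g: g.standard_normal(n))
print(p_value)
```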
53 |
Análise das cotações e transações intradiárias da Petrobrás utilizando dados irregularmente espaçados / Analysis of Petrobras' intraday quotes and trades using irregularly spaced data. Silva, Marília Gabriela Elias da. 27 August 2014.
This study uses data provided by BM&FBovespa to analyse Petrobras' stock during July and August 2010 and October 2008. First, we present a detailed discussion of data handling and show that the mid-price quote cannot be used, owing to the large number of buy/sell orders posted at extremely high/low prices. We then check some of the empirical stylized facts pointed out by Cont (2001), among others established in the market microstructure literature; in general, the data replicate these stylized facts. We apply the filter proposed by Brownlees and Gallo (2006) to Petrobras' stock and analyse how sensitive the number of possible outliers flagged by the filter is to variation in the filter's parameters. Finally, we propose using the Akaike criterion to rank and select conditional duration models whose estimation samples differ in length. The selected models are not always those estimated on filtered data: for the ACD(1,1) specification, when only well-fitted models (those with non-autocorrelated residuals) are considered, the Akaike criterion points to the model in which the data were not filtered.
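For intuition, here is a hedged sketch of an outlier filter in the spirit of Brownlees and Gallo (2006): a price is flagged when it lies too far from a trimmed mean of its neighbours, with a granularity term guarding against zero variance in flat stretches. The window size, trimming proportion and granularity value are illustrative assumptions.

```python
import numpy as np

def brownlees_gallo_filter(prices, k=20, delta=0.10, gamma=0.02):
    """Flag price outliers in the spirit of Brownlees and Gallo (2006).

    An observation is flagged when it lies at least 3 trimmed standard
    deviations (plus a granularity term gamma) away from the delta-trimmed
    mean of its k surrounding neighbours. All defaults are illustrative."""
    p = np.asarray(prices, dtype=float)
    n = len(p)
    trim = int(np.floor(k * delta / 2))            # observations cut per tail
    flagged = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - k // 2)
        hi = min(n, lo + k + 1)
        nb = np.delete(p[lo:hi], i - lo)           # neighbourhood without p_i
        nb = np.sort(nb)[trim:len(nb) - trim]      # trimmed neighbourhood
        flagged[i] = abs(p[i] - nb.mean()) >= 3.0 * nb.std() + gamma
    return flagged
```

To rank duration models estimated on samples of different lengths, one natural device (stated here as an assumption, since the exact normalization used in the study is not spelled out above) is to compare the Akaike criterion per observation, AIC/n, rather than raw AIC values.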
54 |
Descoberta de preço nas opções de Petrobrás / Price discovery in Petrobras options. Suzuki, Yurie Yassunaga. January 2015.
This work studies the behaviour of Petrobras' stock and options markets through the price discovery methodology. Using high-frequency data provided by BM&FBOVESPA, the econometric models underlying this methodology were estimated and the Information Share (IS) and Component Share (CS) measures were computed. The results indicate dominance of the spot market in the price discovery process: for this market, IS values above 66% and CS values above 74% were observed. Graphical analysis of the impulse response function also indicates that the spot market is more efficient than the options market.
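For the bivariate case, both measures can be sketched in a few lines. Given VECM adjustment coefficients alpha and a residual covariance matrix Omega (the numbers below are hypothetical, not estimates from this work), the Gonzalo-Granger component shares are the normalized orthogonal complement of alpha, and Hasbrouck's information shares follow from a Cholesky factorization of Omega; because the Cholesky ordering matters, practitioners usually report upper and lower bounds.

```python
import numpy as np

def component_share(alpha):
    """Gonzalo-Granger component shares from VECM adjustment coefficients."""
    a1, a2 = alpha
    return np.array([a2, -a1]) / (a2 - a1)

def information_share(alpha, omega):
    """Hasbrouck information shares for a given Cholesky ordering.

    alpha: (2,) adjustment coefficients; omega: (2, 2) residual covariance.
    Bounds are obtained in practice by permuting the market ordering."""
    psi = component_share(alpha)
    f = np.linalg.cholesky(omega)                  # lower-triangular factor
    contrib = (psi @ f) ** 2
    return contrib / (psi @ omega @ psi)

alpha = np.array([-0.04, 0.09])                    # hypothetical spot/option
omega = np.array([[1.0, 0.6], [0.6, 1.2]])         # hypothetical covariance
print(component_share(alpha))                      # e.g. ~ (0.69, 0.31)
print(information_share(alpha, omega))
```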
55 |
Efektivnost trhu a automatické obchodní systémy / Market efficiency and automated trading. ZEMAN, Petr. January 2013.
The dissertation thesis deals with the efficiency of the spot currency market. The main aim is to verify the efficient-market hypothesis on the major foreign exchange pairs, especially in the short term. The author focuses on the effective functioning of foreign exchange markets. The behaviour of the five main spot foreign exchange pairs (EUR/USD, GBP/USD, USD/CHF, USD/JPY and USD/CAD) is analysed in the thesis. Owing to the rise of intraday trading and the growing popularity of margin accounts among retail investors, spot rates are investigated primarily through high-frequency data collected over periods of one day or less. The hypothesis of efficient exchange-rate behaviour is verified both with statistical methods and with automated trading systems, which were designed to assess the economic significance of the theory and to confirm or rule out the possibility of retail investors earning above-average profits on the foreign exchange markets.
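One standard statistical check of weak-form efficiency on high-frequency returns is the variance-ratio statistic of Lo and MacKinlay, under which a random walk implies VR(q) close to 1 at all horizons q. The sketch below gives the point estimate only, omitting the heteroskedasticity-robust standard errors used in formal tests, and the simulated 5-minute returns are purely illustrative.

```python
import numpy as np

def variance_ratio(returns, q):
    """Lo-MacKinlay variance ratio VR(q) = Var(q-period) / (q * Var(1-period)).

    Under a random walk (weak-form efficiency), VR(q) is close to 1."""
    r = np.asarray(returns, dtype=float)
    r = r - r.mean()
    var1 = np.mean(r ** 2)
    rq = np.convolve(r, np.ones(q), mode="valid")  # overlapping q-period sums
    return np.mean(rq ** 2) / (q * var1)

# Illustration on simulated 5-minute log returns (white noise, so VR ~ 1).
rng = np.random.default_rng(0)
r = 1e-4 * rng.standard_normal(10_000)
print([round(variance_ratio(r, q), 3) for q in (2, 5, 10)])
```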
56 |
Modelování durací mezi finančními transakcemi / Modeling of duration between financial transactions. Voráčková, Andrea. January 2018.
This diploma thesis deals with the properties of the ACD process and methods for its estimation. First, the basic definitions and the relations between ARMA and GARCH processes are stated. In the second part of the thesis, the ACD process is defined and the relation between ARMA and ACD is shown. We then present methods for data adjustment, estimation, prediction and verification of the ACD model. After that, particular cases of the ACD process (EACD, WACD, GACD, GEVACD) are introduced, together with their properties and motivating examples. The numerical part is performed in R and concerns the precision of the estimates and predictions of the special cases of the ACD model, depending on the length of the series and the number of simulations. In the last part, we apply the methods stated in the theoretical part to real data. The adjustment of the data and the estimation of the parameters are performed, as well as the verification of the ACD model. After that, we predict a few steps ahead and compare the predictions with the real durations.
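As a minimal illustration of the model class, the sketch below simulates and estimates the simplest special case, an EACD(1,1) with unit-exponential innovations, by maximum likelihood; the parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_eacd(n, omega=0.1, alpha=0.2, beta=0.7, seed=0):
    """Simulate EACD(1,1): x_i = psi_i * eps_i, eps_i ~ Exp(1)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    psi = np.zeros(n)
    psi[0] = omega / (1.0 - alpha - beta)          # unconditional mean duration
    x[0] = psi[0] * rng.exponential()
    for i in range(1, n):
        psi[i] = omega + alpha * x[i - 1] + beta * psi[i - 1]
        x[i] = psi[i] * rng.exponential()
    return x

def eacd_neg_loglik(params, x):
    """Negative log-likelihood of ACD(1,1) with exponential innovations."""
    omega, alpha, beta = params
    psi = np.empty_like(x)
    psi[0] = x.mean()                              # simple initialization
    for i in range(1, len(x)):
        psi[i] = omega + alpha * x[i - 1] + beta * psi[i - 1]
    return np.sum(np.log(psi) + x / psi)

x = simulate_eacd(5000)
res = minimize(eacd_neg_loglik, x0=[0.1, 0.1, 0.8], args=(x,),
               bounds=[(1e-6, None), (0.0, 1.0), (0.0, 1.0)])
print(res.x)   # estimates should be near (0.1, 0.2, 0.7)
```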
57 |
Combinação de projeções de volatilidade baseadas em medidas de risco para dados em alta frequência / Volatility forecast combination using risk measures based on high frequency data. Alcides Carlos de Araújo. 29 April 2016.
High-frequency trading (HFT) has grown significantly in recent years, raising the need to study the Brazilian stock market in the context of high-frequency data. The volatility estimators of asset prices based on high-frequency trading data are the main objects of study. According to Aldridge (2010) and Vuorenmaa (2013), HFT is defined as the rapid reallocation of trading capital, with transactions occurring within milliseconds through complex algorithms that manage order submission, analyse the incoming data and make the best buy and sell decisions. The main source of information for HFT analysis is tick-by-tick data, known as high-frequency data.
Realized volatility is a risk measure derived from high-frequency data analysis and is used for risk management. According to Andersen et al. (2003), Pong et al. (2004), Koopman et al. (2005) and Corsi (2009), there is a consensus in the finance field that volatility forecasts based on this risk measure produce better results than volatility estimates from GARCH models. Volatility forecasting is a key issue in financial management for provisioning reserves against possible losses; because several forecasting methods exist, one must either select a single model or combine several forecasts. The main challenge in combining forecasts is the choice of weights, and much research in the field has focused on developing methods for choosing weights that minimize forecast errors. The literature lacks, however, a method that accounts for the risk of forecasting volatility below its realized level. Aiming to fill this gap, the main goal of the thesis is to propose a combination of asset-price volatility forecasts using high-frequency data for the Brazilian stock market. As its main innovation, the thesis proposes, in an unprecedented way, the use of a function based on the lower partial moment (LPM) to estimate the weights for combining volatility forecasts. Although the LPM measure is well known in the literature, its use for forecast combination had not yet been studied. The thesis contributes to the literature on combining forecasts from the HAR, MIDAS, ARFIMA and nearest-neighbour models, and proposes two new combination methods, referred to as LPMFE (lower partial moment forecast error) and DLPMFE (discounted LPMFE). The methods show promising results for applications that aim to avoid larger-than-expected losses without causing excessive provisioning from a budgetary standpoint.
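One plausible reading of an LPM-based weighting scheme, given as an illustration rather than the exact LPMFE/DLPMFE formulas proposed in the thesis, penalizes each model by the lower partial moment of its volatility under-forecasts and weights models inversely to that penalty:

```python
import numpy as np

def lpm(realized, forecast, order=2):
    """Lower partial moment of forecast errors: penalizes only the periods
    in which volatility is under-forecast (realized above the forecast)."""
    shortfall = np.maximum(realized - forecast, 0.0)
    return np.mean(shortfall ** order)

def lpm_combination_weights(realized, forecasts, order=2):
    """Weight each model inversely to its LPM, normalized to sum to one.

    forecasts: array of shape (n_models, n_periods)."""
    penalties = np.array([lpm(realized, f, order) for f in forecasts])
    inv = 1.0 / np.maximum(penalties, 1e-12)       # guard against zero LPM
    return inv / inv.sum()

# Hypothetical realized volatility and three competing model forecasts.
rng = np.random.default_rng(2)
rv = 0.01 * np.abs(rng.standard_normal(250))
fc = np.vstack([s * rv + rng.normal(0.0, 0.002, 250) for s in (0.9, 1.0, 1.1)])
w = lpm_combination_weights(rv, fc)
combined = w @ fc                                  # combined forecast path
```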
58 |
Bootstrapping high frequency data. Hounyo, Koomla Ulrich. 07 1900.
We develop in this thesis bootstrap methods for high-frequency financial data. The first two chapters focus on bootstrap methods for the "pre-averaging" approach, which is robust to the presence of market microstructure effects. The main idea underlying this approach is that we can reduce the impact of the noise by pre-averaging high-frequency returns that are possibly contaminated with market microstructure noise before applying a realized volatility-like statistic. Based on this approach, we develop several bootstrap methods which preserve the dependence structure and the heterogeneity in the mean of the original data. The third chapter shows how and to what extent the local Gaussianity assumption can be exploited to generate a bootstrap approximation for covolatility measures.
The first chapter is entitled "Bootstrap inference for pre-averaged realized volatility based on non-overlapping returns". The main contribution of this chapter is to propose bootstrap methods for realized volatility-like estimators defined on pre-averaged returns. In particular, we focus on the pre-averaged realized volatility estimator proposed by Podolskij and Vetter (2009). This statistic can be written (up to a bias correction term) as the (scaled) sum of squared pre-averaged returns, where the pre-averaging is done over all possible non-overlapping blocks of consecutive observations. Pre-averaging reduces the influence of the noise and allows for realized volatility estimation on the pre-averaged returns. The non-overlapping nature of the pre-averaged returns implies that these are asymptotically independent, but possibly heteroskedastic. This motivates the application of the wild bootstrap in this context. We provide a proof of the first order asymptotic validity of this method for percentile and percentile-t intervals. Our Monte Carlo simulations show that the wild bootstrap can improve the finite sample properties of the existing first order asymptotic theory provided we choose the external random variable appropriately.
The second chapter is entitled "Bootstrapping pre-averaged realized volatility under market microstructure noise ". In this chapter we propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are m-dependent with m growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the “blocks of blocks” bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure that combines the wild bootstrap with the blocks of blocks bootstrap. We provide a proof of the first order asymptotic validity of this method for percentile intervals. Our Monte Carlo simulations show that the wild blocks of blocks bootstrap improves the finite sample properties of the existing first order asymptotic theory.
The third chapter is entitled "Bootstrapping realized volatility and realized beta under a local Gaussianity assumption". The financial econometrics literature on high-frequency data often assumes local constancy of volatility and Gaussianity of high-frequency returns in order to carry out inference. In this chapter, we show how and to what extent the local Gaussianity assumption can be exploited to generate a bootstrap approximation. We show the first-order asymptotic validity of the new wild bootstrap method, which uses the conditional local normality properties of financial high-frequency returns. In addition, we use Edgeworth expansions and Monte Carlo simulations to compare the accuracy of the bootstrap with other existing approaches. It is shown that, at second order, the new wild bootstrap matches the cumulants of realized-beta-based t-statistics, whereas it provides a third-order asymptotic refinement for realized volatility. Monte Carlo simulations suggest that our new wild bootstrap methods improve upon the first-order asymptotic theory in finite samples and outperform the existing bootstrap methods for realized covolatility measures. We use empirical work to illustrate its uses in practice.
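The core wild bootstrap idea for realized volatility can be sketched in a few lines: multiply each intraday return by an external random variable with unit second moment and recompute the statistic over many draws. The standard normal external variable and the plain percentile interval below are simplifying assumptions; the chapters above work with pre-averaged returns and carefully chosen external variables.

```python
import numpy as np

def wild_bootstrap_rv(returns, n_boot=999, alpha=0.05, seed=0):
    """Wild bootstrap percentile interval for realized volatility.

    RV = sum of squared intraday returns; each bootstrap draw sets
    r*_i = r_i * eta_i with E[eta^2] = 1 (standard normal here, a
    simplifying choice; other external variables can refine accuracy)."""
    rng = np.random.default_rng(seed)
    r = np.asarray(returns)
    rv = np.sum(r ** 2)
    eta = rng.standard_normal((n_boot, r.size))
    rv_star = np.sum((r * eta) ** 2, axis=1)
    lo, hi = np.percentile(rv_star, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return rv, (lo, hi)

# Illustration on simulated 1-minute returns over a 390-minute session.
rng = np.random.default_rng(3)
r = 0.001 * rng.standard_normal(390)
print(wild_bootstrap_rv(r))
```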
59 |
Analysis of Interdependencies among Central European Stock Markets. Mašková, Jana. January 2011.
The objective of the thesis is to examine interdependencies among the stock markets of the Czech Republic, Hungary, Poland and Germany in the period 2008-2010. Two main methods are applied in the analysis. The first method is based on the use of high-frequency data and consists in the computation of realized correlations, which are then modeled using the heterogeneous autoregressive (HAR) model. In addition, we employ realized bipower correlations, which should be robust to the presence of jumps in prices. The second method involves modeling of correlations by means of the Dynamic Conditional Correlation GARCH (DCC-GARCH) model, which is applied to daily data. The results indicate that when high-frequency data are used, the correlations are biased towards zero (the so-called "Epps effect"). We also find quite significant differences between the dynamics of the correlations from the DCC-GARCH models and those of the realized correlations. Finally, we show that accuracy of the forecasts of correlations can be improved by combining results obtained from different models (HAR models for realized correlations, HAR models for realized bipower correlations, DCC-GARCH models).
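A minimal sketch of fitting a HAR model to a realized-correlation series follows. In practice correlations are often Fisher-transformed before fitting to respect the [-1, 1] bounds; the simulated series and the lag structure (1, 5, 22) below are illustrative conventions.

```python
import numpy as np

def har_design(x, lags=(1, 5, 22)):
    """Build HAR regressors: lagged daily, weekly and monthly averages."""
    d, w, m = lags
    t0 = max(lags)
    rows = [[1.0, x[t - d], x[t - w:t].mean(), x[t - m:t].mean()]
            for t in range(t0, len(x))]
    return np.array(rows), x[t0:]

def fit_har(series, lags=(1, 5, 22)):
    """OLS fit of the HAR model; returns coefficients and a 1-step forecast."""
    d, w, m = lags
    X, y = har_design(series, lags)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    x_next = np.array([1.0, series[-d],
                       series[-w:].mean(), series[-m:].mean()])
    return beta, x_next @ beta

# Hypothetical realized-correlation series (Fisher transform it in practice).
rng = np.random.default_rng(4)
rc = 0.5 + 0.3 * np.sin(np.arange(500) / 50) + rng.normal(0, 0.05, 500)
beta, one_step = fit_har(rc)
```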
60 |
Three essays on the econometric analysis of high-frequency data. Malec, Peter. 27 June 2013.
In three essays, this thesis deals with the econometric analysis of financial market data sampled at intraday frequencies. Chapter 1 presents a novel approach to modelling serially dependent positive-valued variables that realize a nontrivial proportion of zero outcomes, a typical phenomenon in financial high-frequency time series. We introduce a flexible point-mass mixture distribution, a tailor-made semiparametric specification test and a new type of multiplicative error model (MEM). Chapter 2 addresses the problem that fixed symmetric kernel density estimators exhibit low precision for positive-valued variables with a large probability mass near zero, which is common in high-frequency data. We show that gamma kernel estimators are superior, although their relative performance depends on the specific density and kernel shape. We suggest a refined gamma kernel and a data-driven method for choosing the appropriate type of gamma kernel estimator. Chapter 3 turns to the debate about the merits of high-frequency data in large-scale portfolio allocation. We consider the problem of constructing global minimum variance portfolios based on the constituents of the S&P 500. We show that forecasts based on high-frequency data can yield significantly lower portfolio volatility than approaches using daily returns, implying noticeable utility gains for a risk-averse investor.
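As a pointer to the method behind Chapter 2, here is a hedged sketch of a gamma kernel density estimator in the spirit of Chen (2000): each data point contributes a gamma density whose shape varies with the evaluation point, so the kernel is asymmetric and free of boundary bias at zero. The bandwidth and data are illustrative, and the refined kernel proposed in the thesis differs in detail.

```python
import numpy as np
from scipy import stats

def gamma_kernel_density(x_grid, data, b=0.05):
    """Gamma kernel density estimator in the spirit of Chen (2000).

    f_hat(x) = mean_i gamma_pdf(X_i; shape = x/b + 1, scale = b).
    The asymmetric gamma kernel avoids the boundary bias that fixed
    symmetric kernels suffer at x = 0. The bandwidth b is illustrative."""
    fhat = np.empty(len(x_grid), dtype=float)
    for j, x in enumerate(np.asarray(x_grid, dtype=float)):
        fhat[j] = stats.gamma.pdf(data, a=x / b + 1.0, scale=b).mean()
    return fhat

# Positive-valued data with substantial mass near zero (e.g. trade durations).
rng = np.random.default_rng(5)
data = rng.exponential(scale=0.5, size=2000)
grid = np.linspace(0.0, 3.0, 100)
density = gamma_kernel_density(grid, data)
```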