41

High-Frequency Data: Investigating the Impact of Market Microstructure and Intraday Seasonality on Jump Detection and Quadratic Variation Estimation

Marmitt, Juliano January 2012 (has links)
In this work, we present the usual characteristics of high-frequency data and estimate the variance/volatility of such data using nonparametric models. After reviewing market microstructure, intraday seasonality, quadratic variation and jumps, we use PETR4 data to estimate realized variance and bipower variation. With these series determined, we test them for jumps. We then analyze the impact that market microstructure and intraday seasonality have on jump detection. We conclude that, while microstructure noise indicates fewer jumps than expected, intraday seasonality works in the opposite direction: it biases the tests toward detecting more jumps, since the typical inverted-J shape of the intraday volatility curve leads to jumps being incorrectly detected in the most volatile period of the day (the market open).
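The two estimators this abstract contrasts — realized variance (jump-sensitive) and bipower variation (jump-robust) — can be sketched in a few lines. This is a minimal illustration on simulated five-minute returns, not the thesis's code; the injected jump and the simulation parameters are assumptions for demonstration only.

```python
import numpy as np

def realized_variance(returns):
    """Realized variance: sum of squared intraday returns (picks up jumps)."""
    return np.sum(returns ** 2)

def bipower_variation(returns):
    """Bipower variation: pi/2 times the sum of products of adjacent absolute
    returns; robust to finite-activity jumps."""
    r = np.abs(returns)
    return (np.pi / 2) * np.sum(r[1:] * r[:-1])

# Simulated 5-minute returns with one injected jump (hypothetical example)
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.001, 100)
r[50] += 0.02  # jump

rv = realized_variance(r)
bv = bipower_variation(r)
jump_component = max(rv - bv, 0.0)  # RV - BV estimates the jump contribution
```

The gap between the two measures, RV minus BV, is what jump tests of the Barndorff-Nielsen and Shephard type build on: under no jumps both estimate integrated variance, so a large positive gap signals a jump.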
42

Measuring, Modeling, and Forecasting Volatility and Correlations from High-Frequency Data

Vander Elst, Harry-Paul 20 May 2016 (has links)
This dissertation contains four essays that share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modeling and forecasting of financial asset volatility and correlations. The first two chapters provide tools for univariate applications, while the last two develop multivariate methodologies. In Chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on a data set of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains from FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared with classical and Realized GARCH models. In Chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step-ahead conditional distribution and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model that allows for a time-varying intercept and is implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk-measure forecasts over long horizons. The results are based on several loss functions and tests, including the Model Confidence Set. Chapter 3 is joint work with David Veredas.
We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyze different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite-sample properties are studied under four data-generating processes, with and without microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged version of disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provides a precise, computationally efficient, and easy alternative for measuring integrated covariances from noisy and asynchronous prices. Along these lines, a minimum-variance portfolio application shows the superiority of this disentangled realized estimator across numerous performance metrics. Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose using the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high-quality forecasts of pairwise correlations between commodities, which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it with models used in industry. We demonstrate significant economic gains in a realistic setting that includes short-selling constraints and transaction costs. / Doctorate in Economics and Management Sciences
43

Portfolio Value at Risk and Expected Shortfall using High-frequency Data

Zváč, Marek January 2015 (has links)
The main objective of this thesis is to investigate whether multivariate models using high-frequency data provide significantly more accurate forecasts of Value at Risk and Expected Shortfall than multivariate models using only daily data. The question is very topical, since the Basel Committee announced in 2013 that it is going to change the risk measure used for calculating capital requirements from Value at Risk to Expected Shortfall. Further accuracy improvements for both risk measures can also be achieved by incorporating high-frequency data, which is increasingly available thanks to significant technological progress. We therefore employ the parsimonious Heterogeneous Autoregression and its asymmetric version, which use high-frequency data to model the realized covariance matrix. The benchmark models are the well-established DCC-GARCH and EWMA. Value at Risk (VaR) and Expected Shortfall (ES) are computed via parametric and semi-parametric approaches and Monte Carlo simulation. The loss distributions are represented by the multivariate Gaussian, Student t, multivariate distributions simulated by copula functions, and multivariate filtered historical simulations. The univariate loss distributions used are the Generalized Pareto Distribution from EVT, the empirical distribution, and standard parametric distributions. The main finding is that the Heterogeneous Autoregression model using high-frequency data delivered VaR forecasts of superior or at least equal accuracy relative to the benchmark models based on daily data. Finally, backtesting of ES remains very challenging, and the applied Tests I and II did not provide credible validation of the forecasts.
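The simplest parametric case mentioned above — Gaussian losses — has closed-form VaR and ES, which can be sketched as follows. This is only an illustration of the definitions, not the thesis's implementation; the thesis also considers Student t, copula-simulated and EVT-based distributions, where ES must be computed differently.

```python
from math import sqrt, exp, pi
from statistics import NormalDist

def gaussian_var_es(mu, sigma, alpha=0.975):
    """VaR and ES at level alpha for a Gaussian loss L ~ N(mu, sigma^2).
    VaR_a = mu + sigma * z_a;  ES_a = mu + sigma * phi(z_a) / (1 - a)."""
    z = NormalDist().inv_cdf(alpha)
    phi_z = exp(-z * z / 2) / sqrt(2 * pi)  # standard normal density at z
    var = mu + sigma * z
    es = mu + sigma * phi_z / (1 - alpha)
    return var, es

# Standardized example: 97.5% is the ES level adopted by the Basel Committee
var_975, es_975 = gaussian_var_es(0.0, 1.0, 0.975)
```

For standard normal losses this gives VaR of about 1.96 standard deviations and ES of about 2.34, which is why the move from 99% VaR to 97.5% ES keeps capital levels broadly comparable under Gaussian assumptions.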
44

Measuring and Forecasting Volatility for Liquid Assets

Chaker, Selma 04 1900 (has links)
The observed high-frequency price series is contaminated with market microstructure frictions, or noise. We explore the measurement and forecasting of the fundamental volatility through novel approaches to the frictions problem. In the first paper, while maintaining the standard additive model of noise plus frictionless price, we use trading volume, quoted depths, a trade-direction indicator and the bid-ask spread to absorb the noise. The econometric model is a price-impact linear regression.
We show that incorporating these liquidity-cost variables delivers more precise volatility estimators. If the noise is only partially absorbed, the remaining noise is closer to white noise than the original, which lessens misspecification of the noise characteristics. Our approach is also robust to a specific form of endogeneity under which common noise-robust measures are inconsistent. In the second paper, we model the variance of the market microstructure noise that contaminates the frictionless price as an affine function of the fundamental volatility. Under our model, the noise is time-varying intraday. Using the eigenfunction representation of the general class of stochastic volatility models, we quantify the forecasting performance of several volatility measures under our model assumptions. In the third paper, instead of assuming the standard additive model for the observed price series, we specify the conditional distribution of the frictionless price given the available information, which includes quotes and volumes. We derive new volatility measures by characterizing the conditional mean of the integrated variance.
45

Volatility forecast combination using risk measures based on high-frequency data

Araújo, Alcides Carlos de 29 April 2016 (has links)
High-frequency trading (HFT) has grown significantly in recent years, raising the need to study the Brazilian stock market in the context of high-frequency data. The main objects of study are estimators of stock-price volatility based on high-frequency trading data. Following Aldridge (2010) and Vuorenmaa (2013), HFT is defined as the rapid reallocation of trading capital, with transactions occurring within milliseconds through complex algorithms that manage order submission, analyze the resulting data, and make the best buy and sell decisions. The principal source of information for HFT analysis is tick-by-tick data, known as high-frequency data. Realized volatility is a risk metric derived from high-frequency data analysis and used for risk management. According to Andersen et al. (2003), Pong et al. (2004), Koopman et al. (2005) and Corsi (2009), there is a consensus in finance that volatility forecasts based on this risk measure are more efficient than volatility estimates from GARCH models. In financial management, volatility forecasting is a fundamental tool for provisioning reserves against possible losses; given the many available forecasting methods, it becomes necessary to select a single model or to combine several forecasts. The main challenge in combining forecasts is the choice of weights: research in the area has focused on developing weighting methods that minimize forecast errors. The existing literature, however, lacks a method that accounts for the problem of volatility forecasts falling below the realized level.
Seeking to fill this gap, the main objective of this thesis is to propose a combination of stock-price volatility estimators using high-frequency trading data for the Brazilian market. As its main innovation, the thesis proposes, for the first time, the use of a function based on the Lower Partial Moment (LPM) to estimate the weights for combining forecasts. Although the LPM metric is well known in the literature, its use for forecast combination had not yet been analyzed. This work contributes to the study of combinations of forecasts produced by HAR, MIDAS, ARFIMA and Nearest Neighbor models, and proposes two new combination methods, named LPMFE (Lower Partial Moment Forecast Error) and DLPMFE (Discounted LPMFE). The methods show promising results for cases where the aim is to avoid larger-than-expected losses while avoiding excessive provisioning from a budgetary point of view.
46

Essays on nonparametric estimation of asset pricing models

Dalderop, Jeroen Wilhelmus Paulus January 2018 (has links)
This thesis studies the use of nonparametric econometric methods to reconcile the empirical behaviour of financial asset prices with theoretical valuation models. The confrontation of economic theory with asset price data requires various functional form assumptions about the preferences and beliefs of investors. Nonparametric methods provide a flexible class of models that can prevent misspecification of agents' utility functions or the distribution of asset returns. Evidence for potential nonlinearity is seen in the presence of non-Gaussian distributions and excessive volatility of stock returns, or non-monotonic stochastic discount factors in option prices. More robust model specifications are therefore likely to contribute to risk management and return predictability, and lend credibility to economists' assertions. Each of the chapters in this thesis relaxes certain functional form assumptions that seem most important for understanding certain asset price data. Chapter 1 focuses on the state-price density in option prices, which confounds the nonlinearity in both the preferences and the beliefs of investors. To understand both sources of nonlinearity in equity prices, Chapter 2 introduces a semiparametric generalization of the standard representative-agent consumption-based asset pricing model. Chapter 3 returns to option prices to understand the relative importance of changes in the distribution of returns and in the shape of the pricing kernel. More specifically, Chapter 1 studies the use of noisy high-frequency data to estimate the time-varying state-price density implicit in European option prices. A dynamic kernel estimator of the conditional pricing function and its derivatives is proposed that can be used for model-free risk measurement. Infill asymptotic theory is derived that applies when the pricing function is either smoothly varying or driven by diffusive state variables.
Trading times and moneyness levels are modelled by marked point processes to capture intraday trading patterns. A simulation study investigates the performance of the estimator using an iterated plug-in bandwidth in various scenarios. Empirical results using S&P 500 E-mini European option quotes find significant time-variation at intraday frequencies. An application to delta- and minimum-variance hedging further illustrates the use of the estimator. Chapter 2 proposes a semiparametric asset pricing model to measure how consumption and dividend policies depend on unobserved state variables, such as economic uncertainty and risk aversion. Under a flexible specification of the stochastic discount factor, the state variables are recovered from cross-sections of asset prices and volatility proxies, and the shape of the policy functions is identified from the pricing functions. The model leads to closed-form price-dividend ratios under polynomial approximations of the unknown functions and affine state variable dynamics. In the empirical application, uncertainty and risk aversion are separately identified from size-sorted stock portfolios, exploiting the heterogeneous impact of uncertainty on dividend policy across small and large firms. I find an asymmetric and convex response in consumption (-) and dividend growth (+) to uncertainty shocks, which, together with moderate uncertainty aversion, can generate large leverage effects and divergence between macroeconomic and stock market volatility. Chapter 3 studies the nonparametric identification and estimation of projected pricing kernels implicit in the pricing of options, the underlying asset, and a risk-free bond. The sieve minimum-distance estimator based on conditional moment restrictions avoids the need to compute ratios of estimated risk-neutral and physical densities, and leads to stable estimates even in regions with low probability mass.
The conditional empirical likelihood (CEL) variant of the estimator is used to extract implied densities that satisfy the pricing restrictions while incorporating the forward-looking information from option prices. Moreover, I introduce density combinations in the CEL framework to measure the relative importance of changes in the physical return distribution and in the pricing kernel. The nonlinear dynamic pricing kernels can be used to understand return predictability, and provide model-free quantities that can be compared against those implied by structural asset pricing models.
47

Forecasting realized volatility: Do jumps in prices matter?

Lipták, Štefan January 2012 (has links)
This thesis applies Heterogeneous Autoregressive models of Realized Volatility to five-minute data on three of the most liquid financial assets: the S&P 500 futures index, Euro FX and Light Crude NYMEX. The main contribution lies in the length of the datasets, which span a period of 25 years (13 years in the case of Euro FX). Our aim is to show that decomposing realized variance into continuous and jump components improves the predictability of RV even on extremely long high-frequency datasets. The main goal is to investigate the dynamics of the HAR model parameters over time. We also examine whether the volatilities of different assets behave differently. The results reveal that decomposing RV into its components indeed improves the modeling and forecasting of volatility on all datasets. However, we found that forecasts are best when based on short (1-2 year) pre-forecast periods, due to the strong time variation of the HAR model's parameters. These dynamics are also revealed by year-by-year estimation on all datasets. Consequently, we consider HAR models inappropriate for modeling RV on such long datasets, as they cannot capture the dynamics of RV. This was indicated on all three datasets; we therefore conclude that volatility behaves similarly across different types of assets with similar liquidity.
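The HAR-RV model used here regresses next-day realized variance on its daily, weekly (5-day) and monthly (22-day) averages, estimated by simple OLS. A minimal sketch on simulated data (the simulated RV series is an assumption for illustration; real applications would use realized variances computed from five-minute returns):

```python
import numpy as np

def har_design(rv):
    """Build the HAR-RV regression: rv[t+1] on a constant plus the daily,
    weekly (5-day mean) and monthly (22-day mean) RV components at t."""
    rows, y = [], []
    for t in range(21, len(rv) - 1):
        daily = rv[t]
        weekly = rv[t - 4 : t + 1].mean()
        monthly = rv[t - 21 : t + 1].mean()
        rows.append([1.0, daily, weekly, monthly])
        y.append(rv[t + 1])
    return np.array(rows), np.array(y)

# Hypothetical daily realized-variance series
rng = np.random.default_rng(1)
rv = np.abs(rng.normal(1e-4, 2e-5, 300))

X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS, as in Corsi (2009)
forecast = X[-1] @ beta  # one-step-ahead RV forecast
```

The jump decomposition discussed in the abstract (HAR-RV-CJ) extends this design with separate continuous and jump regressors, obtained by splitting RV using bipower variation; re-estimating beta on rolling windows is how the parameter dynamics over time can be traced.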
48

Five contributions to econometric theory and the econometrics of ultra-high-frequency data

Meitz, Mika January 2006 (has links)
No description available.
49

Improvement And Development Of High-frequency Wireless Token-ring Protocol

Kurtulus, Taner 01 January 2011 (has links) (PDF)
STANAG 5066 Edition 2 is a node-to-node protocol developed by NATO for communication over HF media. IP integration was added to broaden the use of the STANAG 5066 protocol; however, this integration made the already slow communication even slower. To increase speed and enable communication within a single-frequency, multi-node network, HFTRP, a derivative of WTRP, was developed. The protocol has two parts: a message design for the management tokens exchanged by communicating nodes, and the algorithms used to create, maintain, and repair the ring of nodes in the network. The scope of this thesis is to devise and implement a faster ring setup and growing procedure, and to find optimum values for the HFTRP tuning parameters.
