21 |
Copulas for High Dimensions: Models, Estimation, Inference, and Applications. Oh, Dong Hwan. January 2014.
The dissertation consists of four chapters that concern topics on copulas for high dimensions. Chapter 1 proposes a new general model for high-dimensional joint distributions of asset returns that utilizes high-frequency data and copulas. The dependence between returns is decomposed into linear and nonlinear components, which enables the use of high-frequency data to accurately measure and forecast linear dependence, and the use of a new class of copulas designed to capture nonlinear dependence among the resulting linearly uncorrelated residuals. Estimation of the new class of copulas is conducted using a composite likelihood, making the model feasible even for hundreds of variables. A realistic simulation study verifies that multistage estimation with composite likelihood results in a small loss in efficiency and a large gain in computation speed.

Chapter 2, which is co-authored with Professor Andrew Patton, presents new models for the dependence structure, or copula, of economic variables based on a factor structure. The proposed models are particularly attractive for high-dimensional applications involving fifty or more variables. This class of models generally lacks a closed-form density, but analytical results for the implied tail dependence can be obtained using extreme value theory, and estimation via a simulation-based method using rank statistics is simple and fast. We study the finite-sample properties of the estimation method for applications involving up to 100 variables, and apply the model to daily returns on all 100 constituents of the S&P 100 index. We find significant evidence of tail dependence, heterogeneous dependence, and asymmetric dependence, with dependence being stronger in crashes than in booms.

Chapter 3, which is co-authored with Professor Andrew Patton, considers the estimation of the parameters of a copula via a simulated method of moments type approach. This approach is attractive when the likelihood of the copula model is not known in closed form, or when the researcher has a set of dependence measures or other functionals of the copula that are of particular interest. The proposed approach naturally nests method of moments and generalized method of moments estimators. Drawing on results for simulation-based estimation and on recent work in empirical copula process theory, we show the consistency and asymptotic normality of the proposed estimator, and obtain a simple test of over-identifying restrictions as a goodness-of-fit test. The results apply to both i.i.d. and time series data. We analyze the finite-sample behavior of these estimators in an extensive simulation study.

Chapter 4, which is co-authored with Professor Andrew Patton, proposes a new class of copula-based dynamic models for high-dimensional conditional distributions, facilitating the estimation of a wide variety of measures of systemic risk. Our proposed models draw on successful ideas from the literature on modelling high-dimensional covariance matrices and on recent work on models for general time-varying distributions. Our use of copula-based models enables the estimation of the joint model in stages, greatly reducing the computational burden. We use the proposed new models to study a collection of daily credit default swap (CDS) spreads on 100 U.S. firms over the period 2006 to 2012. We find that while the probability of distress for individual firms has fallen greatly since the financial crisis of 2008-09, the joint probability of distress (a measure of systemic risk) is substantially higher now than in the pre-crisis period. / Dissertation
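To make the composite-likelihood idea in Chapter 1 more concrete, the sketch below estimates the correlation parameter of an equicorrelated Gaussian copula by maximizing a pairwise (composite) log-likelihood over adjacent pairs of series. The equicorrelation setup, the simulated data, and the function names are illustrative assumptions rather than the dissertation's actual copula specification; the point is that the pairwise objective grows linearly, not exponentially, in the number of variables.

```python
# Hypothetical sketch: pairwise composite likelihood for an equicorrelated
# Gaussian copula. The true model in the dissertation is richer; this only
# illustrates why composite likelihood scales to many variables.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def pairwise_gaussian_copula_nll(rho, U):
    """Negative pairwise log-likelihood over adjacent column pairs.

    U : (T, N) array of uniform pseudo-observations (e.g. ranks / (T+1)).
    """
    Z = norm.ppf(U)                      # map uniforms to normal scores
    nll = 0.0
    for j in range(U.shape[1] - 1):      # adjacent pairs only: O(N) terms
        z1, z2 = Z[:, j], Z[:, j + 1]
        # bivariate Gaussian copula log-density
        ll = (-0.5 * np.log(1 - rho**2)
              - (rho**2 * (z1**2 + z2**2) - 2 * rho * z1 * z2)
              / (2 * (1 - rho**2)))
        nll -= ll.sum()
    return nll

T, N = 500, 100
rng = np.random.default_rng(0)
true_rho = 0.4
cov = np.full((N, N), true_rho) + (1 - true_rho) * np.eye(N)
X = rng.multivariate_normal(np.zeros(N), cov, size=T)
U = (np.argsort(np.argsort(X, axis=0), axis=0) + 1) / (T + 1)  # pseudo-observations

res = minimize_scalar(pairwise_gaussian_copula_nll, args=(U,),
                      bounds=(-0.98, 0.98), method="bounded")
print("estimated rho:", round(res.x, 3))
```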
|
22 |
Assessing the contribution of GARCH-type models with realized measures to BM&FBovespa stocks allocation. Boff, Tainan de Bacco Freitas. January 2018.
In this work we perform an extensive backtesting study whose main goal is to assess the performance of global minimum variance (GMV) portfolios built on volatility forecasting models that make use of high-frequency (as opposed to daily) data. The study is based on a broad intradaily financial dataset comprising 41 assets listed on the BM&FBOVESPA from 2009 to 2017. We evaluate volatility forecasting models that are inspired by the ARCH literature but also include realized measures: the GARCH-X, the High-Frequency Based Volatility (HEAVY), and the Realized GARCH models. Their performance is benchmarked against portfolios built on the sample covariance matrix, covariance-matrix shrinkage methods, and the DCC-GARCH, as well as against the naive (equally weighted) portfolio and the Ibovespa index. Since the nature of this work is multivariate, and in order to make the estimation of large covariance matrices feasible, we resort to the Dynamic Conditional Correlation (DCC) specification. We use three rebalancing frequencies (daily, weekly, and monthly) and four different sets of constraints on portfolio weights. The performance assessment relies on economic measures such as annualized portfolio returns, annualized volatility, Sharpe ratio, maximum drawdown, Value at Risk, Expected Shortfall, and turnover; we also account for transaction costs. We conclude that, for our dataset, the use of intradaily returns (sampled every 5 and 10 minutes) does not enhance the performance of GMV portfolios.
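For readers less familiar with how a covariance forecast translates into portfolio weights, the sketch below computes unconstrained global minimum variance weights w = S^{-1} 1 / (1' S^{-1} 1) from a given covariance matrix. The covariance numbers are made up for illustration, and the weight-constrained portfolios studied in the thesis would instead be obtained from a quadratic-programming solver.

```python
# Hypothetical sketch: unconstrained GMV weights w = S^{-1} 1 / (1' S^{-1} 1)
# from a (here, made-up) one-step-ahead covariance forecast S.
import numpy as np

def gmv_weights(cov):
    """Global minimum variance weights for covariance matrix `cov`."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # S^{-1} 1 without explicit inversion
    return w / w.sum()

# Toy 3-asset covariance forecast (annualized variances/covariances).
S = np.array([[0.040, 0.010, 0.008],
              [0.010, 0.090, 0.012],
              [0.008, 0.012, 0.060]])

w = gmv_weights(S)
print("weights:", np.round(w, 3))
print("portfolio variance:", float(w @ S @ w))
```

Under the no-short-selling constraints considered in the thesis, the same objective would be minimized subject to nonnegative weights, which requires a numerical quadratic-programming routine rather than the closed form above.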
|
23 |
Essays in Financial Econometrics. Avdulaj, Krenar. January 2016.
Proper understanding of the dependence between assets is a crucial ingredient for a number of portfolio and risk management tasks. While the research in this area has been lively for decades, the recent financial crisis of 2007-2008 reminded us that we might not understand dependence properly. This crisis served as a catalyst for boosting the demand for models capturing dependence structures. Prompted by this urgent call, the literature is responding by moving to nonlinear dependence models resembling the dependence structures observed in the data. In my dissertation, I contribute to this surge with three papers in financial econometrics, focusing on nonlinear dependence in financial time series from different perspectives. I propose a new empirical model which allows capturing and forecasting the conditional time-varying joint distribution of the oil-stocks pair accurately. Employing a recently proposed conditional diversification benefits measure that considers higher-order moments and nonlinear dependence from tail events, I document decreasing benefits from diversification over the past ten years. The diversification benefits implied by my empirical model, moreover, vary strongly over time. These findings have important implications for asset allocation, as the benefits of...
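Because the essays revolve around nonlinear and tail dependence, a small worked example may help: the sketch below computes a naive empirical lower-tail-dependence coefficient, the probability that two series fall jointly below their q-quantiles, scaled by q, from rank-transformed data. The simulated Gaussian input (which has no asymptotic tail dependence by construction) and the 5% threshold are assumptions chosen purely to illustrate the calculation.

```python
# Hypothetical sketch: empirical lower-tail dependence lambda_L(q) ~
# P(U <= q, V <= q) / q for a small quantile level q, using pseudo-observations.
import numpy as np

def lower_tail_dependence(x, y, q=0.05):
    """Naive empirical lower-tail-dependence estimate at quantile level q."""
    n = len(x)
    u = (np.argsort(np.argsort(x)) + 1) / (n + 1)   # ranks mapped into (0, 1)
    v = (np.argsort(np.argsort(y)) + 1) / (n + 1)
    joint = np.mean((u <= q) & (v <= q))
    return joint / q

rng = np.random.default_rng(1)
# Toy data: two correlated return series, just to show the calculation.
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=5000)
print("lambda_L(0.05):", round(lower_tail_dependence(z[:, 0], z[:, 1]), 3))
```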
|
25 |
Verificação e análise dos fatos estilizados no mercado de ações brasileiro (Verification and analysis of the stylized facts in the Brazilian stock market). Nervis, Jonis Jecks [UNESP]. 17 December 2010.
Studies that provide a better understanding of the Brazilian capital market are a necessity for a country whose importance on the international stage grows by the day. Understanding the dynamics of stock market fluctuations is a scientific challenge made possible, in Brazil, by two important factors: the availability of high-frequency data on market prices and the use of computational methods. The objective of this research is to verify and analyze the main stylized facts observed in financial time series: volatility clustering, fat-tailed probability distributions, and the presence of long-range memory in the time series of absolute returns. To this end, intraday quotes were used and analyzed for the stocks of ten companies traded on the Bolsa de Valores, Mercadorias e Futuros, which together accounted for a 52.1% weight in the Bovespa Index as of 1 September 2009. Several stylized facts were found to be present in all stocks in the sample, and these behaviors were characterized by means of graphical displays and statistical measures. / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
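As a companion to the stylized facts listed above, the sketch below computes three of the usual diagnostics on a return series: excess kurtosis (fat tails), the autocorrelation of squared returns (volatility clustering), and the slowly decaying autocorrelation of absolute returns (long-range memory). The simulated input series is an assumption standing in for the intraday data used in the thesis.

```python
# Hypothetical sketch: quick checks of three stylized facts on a return series.
import numpy as np

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

rng = np.random.default_rng(42)
# Stand-in for intraday returns: a GARCH(1,1)-style simulation so that the
# diagnostics have something to detect.
n, omega, alpha, beta = 20_000, 1e-6, 0.08, 0.90
r = np.empty(n)
sigma2 = omega / (1 - alpha - beta)
for t in range(n):
    r[t] = np.sqrt(sigma2) * rng.standard_normal()
    sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2

kurt = np.mean((r - r.mean()) ** 4) / np.var(r) ** 2 - 3
print("excess kurtosis (fat tails):", round(kurt, 2))
print("autocorr of squared returns, lag 1:", round(autocorr(r**2, 1), 3))
print("autocorr of |returns| at lags 1, 50, 200:",
      [round(autocorr(np.abs(r), k), 3) for k in (1, 50, 200)])
```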
|
26 |
Essays on the microstructure of emerging commodities futures markets. Geraldo Costa Júnior. 04 September 2017.
Commodities futures trading went through an unparalleled structural transformation during the first decade of the 2000s, which ultimately had long-lasting impacts on volume and open interest levels, as well as on access to these markets and the inclusion of new participants. Benefiting from the new sets of high-frequency data made available by these transformations, this dissertation is composed of three papers that investigate different market microstructure aspects of the commodities futures markets at BM&F-Bovespa. The first paper analyzes the modelling and forecasting of realized volatility in the corn and live cattle markets. For this purpose, the heterogeneous autoregressive model (HAR-RV) proposed by Corsi (2009) was used, as well as its extensions adapted to include jump components (Andersen et al., 2007) and leverage components (Corsi and Reno, 2012). Across several comparison metrics, the results show that in-sample modelling of realized volatility is best performed when jump and leverage components are included in the model. Out-of-sample forecast results for the live cattle market show no statistically significant difference between the models; for the corn market, differences in forecast performance appear at the daily horizon only. The second paper delves into the relationship between volatility, volume, and bid-ask spread in the corn and live cattle markets. Considering that these are emerging agricultural markets, concentration measures were also included. A three-equation structural model was used to capture the relationship between volatility, volume, and bid-ask spread, and estimation was performed using the IV-GMM approach. Findings show that bid-ask spread levels are higher for live cattle markets than for corn markets. In addition, the bid-ask spread is negatively related to volume and positively related to volatility. The significance and magnitude of the responses depend on the level of liquidity in each market. Further, the results point out that concentration affects the corn and live cattle markets differently. The third paper examines the dynamic relationship between dealers' activity and market structure in the live cattle inter-dealer market at BM&F-Bovespa. First, a descriptive analysis of the live cattle inter-dealer market structure is carried out, followed by an investigation of the dynamics of dealers' activity and its determinants. Results indicate that the live cattle inter-dealer market is not competitive and that dealers' activity is positively related to market concentration, the quoted bid-ask spread, the number of active dealers, and the dealer's traded quantity.
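To make the HAR-RV framework referenced in the first paper concrete, the sketch below builds the daily, weekly, and monthly realized-volatility averages of Corsi (2009) and estimates the one-day-ahead regression by OLS. The simulated realized-volatility series and the plain OLS estimator are illustrative assumptions; the paper's extensions add jump and leverage terms on top of this baseline.

```python
# Hypothetical sketch: baseline HAR-RV regression
#   RV_{t+1} = b0 + bd*RV_t + bw*mean(RV_{t-4..t}) + bm*mean(RV_{t-21..t}) + e
import numpy as np

rng = np.random.default_rng(7)
T = 1500
rv = np.abs(rng.standard_normal(T)) * 1e-4 + 1e-4   # stand-in daily RV series

def trailing_mean(x, window):
    """Trailing mean over the last `window` observations, inclusive of today."""
    return np.array([x[max(0, t - window + 1):t + 1].mean() for t in range(len(x))])

rv_d = rv
rv_w = trailing_mean(rv, 5)     # weekly component (5 trading days)
rv_m = trailing_mean(rv, 22)    # monthly component (22 trading days)

# Align regressors dated t with the target dated t+1, dropping the burn-in period.
X = np.column_stack([np.ones(T - 23), rv_d[22:-1], rv_w[22:-1], rv_m[22:-1]])
y = rv[23:]

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("HAR-RV coefficients [const, daily, weekly, monthly]:", np.round(beta, 4))
```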
|
27 |
Nowcasting by the BSTS-U-MIDAS Model. Duan, Jun. 23 September 2015.
Using high-frequency data for forecasting or nowcasting, we have to deal with three major problems: the mixed-frequency problem, the high-dimensionality problem (fat regression, parameter proliferation), and the unbalanced-data problem (missing observations, ragged-edge data). We propose a BSTS-U-MIDAS model (Bayesian Structural Time Series-Unlimited-Mixed-Data Sampling model) to handle these problems. The model consists of four parts. First, a structural time series with regressors model (STM) is used to capture the dynamics of the target variable, and the regressors are chosen to boost forecast accuracy. Second, a MIDAS model is adopted to handle the mixed frequency of the regressors in the STM. Third, spike-and-slab regression is used to implement variable selection. Fourth, Bayesian model averaging (BMA) is used for nowcasting. We use this model to nowcast quarterly GDP for Canada, and find that it outperforms benchmark models, an ARIMA model and a boosting model, in terms of MAE (mean absolute error) and MAPE (mean absolute percentage error). / Graduate
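A minimal illustration of the unrestricted-MIDAS (U-MIDAS) step: each monthly observation within and just before the target quarter enters as its own regressor, and the resulting "fat" regression is here fit by ridge as a stand-in for the spike-and-slab prior. All data, lag counts, and the ridge penalty are assumptions for illustration, not the thesis's actual specification.

```python
# Hypothetical sketch: U-MIDAS design matrix (monthly indicator -> quarterly GDP
# growth), with ridge regularization standing in for spike-and-slab selection.
import numpy as np

rng = np.random.default_rng(3)
n_q = 80                       # number of quarters
monthly = rng.standard_normal(3 * n_q)          # a monthly indicator
gdp = 0.4 * monthly[2::3] + 0.2 * monthly[1::3] + 0.1 * rng.standard_normal(n_q)

n_lags = 6                     # include 6 monthly lags per quarter, unrestricted
rows, targets = [], []
for q in range(2, n_q):        # skip early quarters without enough history
    last_month = 3 * q + 2     # index of the last month of quarter q
    rows.append(monthly[last_month - n_lags + 1:last_month + 1][::-1])
    targets.append(gdp[q])
X = np.column_stack([np.ones(len(rows)), np.array(rows)])
y = np.array(targets)

lam = 1.0                      # ridge penalty (assumed)
P = lam * np.eye(X.shape[1]); P[0, 0] = 0.0     # do not penalize the intercept
beta = np.linalg.solve(X.T @ X + P, X.T @ y)
print("U-MIDAS coefficients (const, lag 0..5):", np.round(beta, 3))
```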
|
28 |
MODELLING TRADE DURATIONS WITH THE BIRNBAUM-SAUNDERS AUTOREGRESSIVE MODEL. Mayorov, Kirill. 10 1900.
In this thesis we study the Birnbaum-Saunders autoregressive conditional duration (BS-ACD) model. As opposed to the standard ACD model, formulated in terms of the conditional mean duration, the BS-ACD model specifies the time-varying model dynamics in terms of the conditional median duration. By means of Monte Carlo simulations, we examine the asymptotic behaviour of the maximum likelihood estimators. We then present a study of the numerical efficacy of some optimization algorithms in relation to the BS-ACD model. On the practical side, we fit the BS-ACD model to samples for six securities listed on the New York Stock Exchange. / Master of Science (MSc)
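For readers unfamiliar with the Birnbaum-Saunders (BS) distribution, the display below shows its conditional CDF parameterized so that the scale equals the median, which is what allows the BS-ACD to place the dynamics directly on the conditional median duration. The recursion shown for the median is a generic ACD-style example assumed for exposition and may differ from the exact specification studied in the thesis.

```latex
% Birnbaum-Saunders conditional CDF, with scale sigma_i equal to the conditional
% median duration (kappa is the shape parameter, Phi the standard normal CDF):
\[
  F(x_i \mid \mathcal{F}_{i-1})
    = \Phi\!\left[ \frac{1}{\kappa}
        \left( \sqrt{\tfrac{x_i}{\sigma_i}} - \sqrt{\tfrac{\sigma_i}{x_i}} \right) \right],
  \qquad \operatorname{median}(x_i \mid \mathcal{F}_{i-1}) = \sigma_i .
\]
% Illustrative ACD-style recursion for the conditional median (an assumption,
% for exposition only):
\[
  \ln \sigma_i = \omega + \alpha\,\frac{x_{i-1}}{\sigma_{i-1}} + \beta \ln \sigma_{i-1}.
\]
```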
|
29 |
Forecasting using high-frequency data: a comparison of asymmetric financial duration models. Zhang, Q., Cai, Charlie X., Keasey, K. January 2009.
The first purpose of this paper is to assess the short-run forecasting capabilities of two competing financial duration models. The forecast performance of the Autoregressive Conditional Multinomial–Autoregressive Conditional Duration (ACM-ACD) model is better than the Asymmetric Autoregressive Conditional Duration (AACD) model. However, the ACM-ACD model is more complex in terms of the computational setting and is more sensitive to starting values. The second purpose is to examine the effects of market microstructure on the forecasting performance of the two models. The results indicate that the forecast performance of the models generally decreases as the liquidity of the stock increases, with the exception of the most liquid stocks. Furthermore, a simple filter of the raw data improves the performance of both models. Finally, the results suggest that both models capture the characteristics of the micro data very well with a minimum sample length of 20 days.
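For context on what these duration models forecast, the sketch below simulates the basic ACD(1,1) recursion of Engle and Russell with exponential innovations and produces a one-step-ahead expected duration. The asymmetric AACD and ACM-ACD models compared in the paper extend this baseline, and all parameter values here are assumptions.

```python
# Hypothetical sketch: baseline ACD(1,1) with exponential errors,
#   x_i = psi_i * eps_i,  psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1},
# where psi_i is the conditional expected duration (the one-step forecast).
import numpy as np

rng = np.random.default_rng(11)
omega, alpha, beta = 0.1, 0.1, 0.85        # assumed parameters
n = 5000

x = np.empty(n)                            # trade durations
psi = np.empty(n)                          # conditional expected durations
psi[0] = omega / (1 - alpha - beta)        # unconditional mean as start value
x[0] = psi[0] * rng.exponential()

for i in range(1, n):
    psi[i] = omega + alpha * x[i - 1] + beta * psi[i - 1]
    x[i] = psi[i] * rng.exponential()

# One-step-ahead forecast of the next duration after the sample:
forecast = omega + alpha * x[-1] + beta * psi[-1]
print("mean duration:", round(x.mean(), 3),
      " next-duration forecast:", round(forecast, 3))
```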
|
30 |
A Multiscale Analysis of the Factors Controlling Nutrient Dynamics and Cyanobacteria Blooms in Lake Champlain. Isles, Peter D. F. 01 January 2016.
Cyanobacteria blooms have increased in Lake Champlain due to excessive nutrient loading, resulting in negative impacts on the local economy and environmental health. While climate warming is expected to promote increasingly severe cyanobacteria blooms globally, predicting the impacts of complex climate changes on individual lakes is complicated by the many physical, chemical, and biological processes which mediate nutrient dynamics and cyanobacteria growth across time and space. Furthermore, processes influencing bloom development operate on a variety of temporal scales (hourly, daily, seasonal, decadal, episodic), making it difficult to identify important factors controlling bloom development using traditional methods or coarse temporal resolution datasets. To resolve these inherent problems of scale, I use 4 years of high-frequency biological, hydrodynamic, and biogeochemical data from Missisquoi Bay, Lake Champlain; 23 years of lake-wide monitoring data; and integrated process-based climate-watershed-lake models driven by regional climate projections to answer the following research questions: 1) To what extent do external nutrient inputs or internal nutrient processing control nutrient concentrations and cyanobacteria blooms in Lake Champlain; 2) how do internal and external nutrient inputs interact with meteorological drivers to promote or suppress bloom development; and 3) how is climate change likely to impact these drivers and the risk of cyanobacteria blooms in the future? I find that cyanobacteria blooms are driven by specific combinations of meteorological and biogeochemical conditions in different areas of the lake, and that in the absence of strong management actions cyanobacteria blooms are likely to become more severe in the future due to climate change.
|