11 |
Projeção de preços de alumínio: modelo ótimo por meio de combinação de previsões / Aluminum price forecasting: optimal forecast combination. João Bosco Barroso de Castro, 15 June 2015
Commodities primárias, tais como metais, petróleo e agricultura, constituem matérias-primas fundamentais para a economia mundial. Dentre os metais, destaca-se o alumínio, usado em uma ampla gama de indústrias, e que detém o maior volume de contratos na London Metal Exchange (LME). Como o preço não está diretamente relacionado aos custos de produção, em momentos de volatilidade ou choques econômicos, o impacto financeiro na indústria global de alumínio é significativo. A previsão de preços do alumínio é fundamental, portanto, para a definição de política industrial, bem como para produtores e consumidores. Este trabalho propõe um modelo ótimo de previsões para preços de alumínio, por meio de combinações de previsões e de seleção de modelos através do Model Confidence Set (MCS), capaz de aumentar o poder preditivo em relação a métodos tradicionais. A abordagem adotada preenche uma lacuna na literatura de previsão de preços de alumínio. Foram ajustados cinco modelos individuais: AR(1), como benchmark, ARIMA, dois modelos ARIMAX e um modelo estrutural, utilizando a base de dados mensais de janeiro de 1999 a setembro de 2014. Para cada modelo individual, foram geradas 142 previsões fora da amostra, 12 meses à frente, por meio de uma janela móvel de 36 meses. Nove combinações de modelos foram desenvolvidas para cada ajuste dos modelos individuais, resultando em 60 previsões fora da amostra, 12 meses à frente. A avaliação de desempenho preditivo dos modelos foi realizada por meio do MCS para os últimos 60, 48 e 36 meses. Um total de 1.250 estimações foi realizado e 1.140 variáveis independentes e suas transformadas foram avaliadas. A combinação de previsões usando ARIMA e um ARMAX foi o único modelo que permaneceu no conjunto de modelos com melhor acurácia de previsão para 36, 48 e 60 meses a um nível descritivo do MCS de 0,10. Para os últimos 36 meses, o modelo combinado proposto apresentou resultados superiores em relação a todos os demais modelos. Duas covariáveis identificadas no modelo ARMAX, o preço futuro de três meses e os estoques mundiais, aumentaram a acurácia de previsão. A combinação ótima apresentou um intervalo de confiança pequeno, equivalente a 5% da média global da amostra completa analisada, fornecendo subsídio importante para a tomada de decisão na indústria global de alumínio. / Primary commodities, including metals, oil and agricultural products, are key raw materials for the global economy. Among metals, aluminum stands out for its wide use in several industrial applications and for holding the largest contract volume on the London Metal Exchange (LME). As the price is not directly related to production costs, during periods of volatility or economic shocks the financial impact on the global aluminum industry is significant. Aluminum price forecasting, therefore, is critical for industrial policy as well as for producers and consumers. This work proposes an optimal forecast model for aluminum prices, using forecast combination and the Model Confidence Set (MCS) for model selection, resulting in superior performance compared to traditional methods. The approach fills a gap in the literature on aluminum price forecasting. Five individual models were fitted: AR(1) as a benchmark, ARIMA, two ARIMAX models and a structural model, using monthly data from January 1999 to September 2014. For each individual model, 142 out-of-sample, 12-month-ahead forecasts were generated through a 36-month rolling window. Nine forecast combinations were developed for each individual model estimation, resulting in 60 out-of-sample, 12-month-ahead forecasts. Model predictive performance was assessed through the MCS for the latest 36, 48 and 60 months. A total of 1,250 estimations were performed and 1,140 independent variables and their transformations were assessed. The forecast combination using ARIMA and an ARMAX was the only model that remained in the set of best-performing models at the 0.10 MCS p-value in all three periods. For the latest 36 months, the proposed combination was the best model at the 0.10 MCS p-value. Two covariates identified for the ARMAX model, namely the 3-month forward price and global inventories, increased forecast accuracy. The optimal forecast combination generated a small confidence interval, equivalent to 5% of the average aluminum price for the entire sample, providing relevant support for decision makers in the global aluminum industry.
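The abstract does not state the weighting scheme behind the combination; one common choice consistent with the description is inverse-MSE (Bates-Granger) weighting, sketched here with assumed notation:

```latex
% Linear combination of k individual 12-month-ahead forecasts;
% inverse-MSE (Bates-Granger) weights are one common choice, and
% the thesis's actual weighting scheme may differ:
\hat{y}^{\,c}_{t+12} = \sum_{i=1}^{k} w_i \,\hat{y}^{(i)}_{t+12},
\qquad
w_i = \frac{1/\widehat{\mathrm{MSE}}_i}{\sum_{j=1}^{k} 1/\widehat{\mathrm{MSE}}_j}.
% The MCS then retains every model whose loss is statistically
% indistinguishable from the best at the stated 0.10 level.
```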
|
12 |
Analýza a předpověď ekonomických časových řad pomocí vybraných statistických metod / Analysis and forecasting of economic time series using selected statistical methods. Skopal, Martin, January 2019
In this master's thesis we focus on creating a fully automated algorithm for financial time series forecasting that applies a two-level combination procedure between two families of forecasting models, Box-Jenkins models and exponential state-space models, which are able to model both homoskedastic and heteroskedastic time series. For this purpose we designed a selection procedure for ARIMA models in the MATLAB environment. The resulting combined model is then applied to several financial time series and its performance is discussed.
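The abstract describes an AIC-based ARIMA selection procedure implemented in MATLAB. As a rough illustration of the idea only (not the thesis code), here is a Python sketch with statsmodels; the data and the order grid are illustrative assumptions:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = rng.standard_normal(300).cumsum()  # assumed stand-in financial series

# Grid-search ARIMA orders by AIC, mirroring the kind of selection
# procedure the thesis implements in MATLAB.
best_aic, best_order = np.inf, None
for p in range(4):
    for d in range(2):
        for q in range(4):
            try:
                res = ARIMA(y, order=(p, d, q)).fit()
            except Exception:
                continue  # skip non-convergent specifications
            if res.aic < best_aic:
                best_aic, best_order = res.aic, (p, d, q)

print("selected order:", best_order, "AIC:", round(best_aic, 1))
```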
|
13 |
Combinação de projeções de volatilidade baseadas em medidas de risco para dados em alta frequência / Volatility forecast combination using risk measures based on high frequency data. Araújo, Alcides Carlos de, 29 April 2016
Operações em alta frequência demonstraram crescimento nos últimos anos; em decorrência disso, surgiu a necessidade de estudar o mercado de ações brasileiro no contexto dos dados em alta frequência. Os estimadores da volatilidade dos preços de ações utilizando dados de negociações em alta frequência são os principais objetos de estudo. Conforme Aldridge (2010) e Vuorenmaa (2013), o HFT foi definido como a rápida realocação de capital feita de modo que as transações possam ocorrer em milésimos de segundo, por uso de algoritmos complexos que gerenciam envio de ordens, análise dos dados obtidos e tomada das melhores decisões de compra e venda. A principal fonte de informações para análise do HFT são os dados tick by tick, conhecidos como dados em alta frequência. Uma métrica oriunda da análise de dados em alta frequência e utilizada para gestão de riscos é a Volatilidade Percebida. Conforme Andersen et al. (2003), Pong et al. (2004), Koopman et al. (2005) e Corsi (2009), há um consenso na área de finanças de que as projeções da volatilidade utilizando essa métrica de risco são mais eficientes do que a estimativa da volatilidade por meio de modelos GARCH. Na gestão financeira, a projeção da volatilidade é uma ferramenta fundamental para provisionar reservas para possíveis perdas; devido à existência de vários métodos de projeção da volatilidade, torna-se necessário selecionar um modelo ou combinar diversas projeções. O principal desafio para combinar projeções é a escolha dos pesos: as diversas pesquisas da área têm foco no desenvolvimento de métodos para escolhê-los visando minimizar os erros de previsão. A literatura existente carece, no entanto, de uma proposição de método que considere o problema de eventual projeção de volatilidade abaixo do esperado. Buscando preencher essa lacuna, o objetivo principal desta tese é propor uma combinação dos estimadores da volatilidade dos preços de ações utilizando dados de negociações em alta frequência para o mercado brasileiro. Como principal ponto de inovação, propõe-se aqui, de forma inédita, a utilização de uma função baseada no Lower Partial Moment (LPM) para estimativa dos pesos da combinação das projeções. Ainda que a métrica LPM seja bastante conhecida na literatura, sua utilização para combinação de projeções ainda não havia sido analisada. Este trabalho apresenta contribuições ao estudo de combinações de projeções realizadas pelos modelos HAR, MIDAS, ARFIMA e Nearest Neighbor, além de propor dois novos métodos de combinação, denominados LPMFE (Lower Partial Moment Forecast Error) e DLPMFE (Discounted LPMFE). Os métodos demonstraram resultados promissores para casos em que a pretensão seja evitar perdas acima do esperado sem provisionamento excessivo do ponto de vista orçamentário. / High Frequency Trading (HFT) has grown significantly in recent years, raising the need for research on high frequency data in the Brazilian stock market. The volatility estimators of asset prices using high frequency data are the main objects of study. According to Aldridge (2010) and Vuorenmaa (2013), HFT is defined as the fast reallocation of trading capital such that transactions may occur within milliseconds, driven by complex algorithms that optimize the process of sending orders, analyze the data obtained and make the best buy or sell decisions. The principal information source for HFT analysis is tick-by-tick data, known as high frequency data. Realized Volatility is a risk measure derived from high frequency data analysis and used for risk management. According to Andersen et al. (2003), Pong et al. (2004), Koopman et al. (2005) and Corsi (2009), there is a consensus in the finance field that volatility forecasts using this risk measure produce better results than estimating volatility with GARCH models. Volatility forecasting is a key issue in financial management for provisioning capital resources against possible losses. However, because there are several volatility forecast methods, the need arises to choose a specific model or to combine the forecasts. The main challenge in combining forecasts is the choice of the weights; with the aim of minimizing forecast errors, much research in the field has focused on developing methods to choose the weights. Nevertheless, the literature lacks a method that considers the risk of under-forecasting volatility when protecting against losses. Aiming to fill this gap, the main goal of the thesis is to propose a combination of asset price volatility forecasts using high frequency data for the Brazilian stock market. As its main innovation, the thesis proposes, in an unprecedented way, the use of a function based on the Lower Partial Moment (LPM) to estimate the weights for the combination of volatility forecasts. Although the LPM measure is well known in the literature, its use for forecast combination had not yet been studied. The thesis contributes to the literature by studying combinations of forecasts made by the HAR, MIDAS, ARFIMA and Nearest Neighbor models, and by proposing two new combination methods, referred to as LPMFE (Lower Partial Moment Forecast Error) and DLPMFE (Discounted LPMFE). The methods have shown promising results when the aim is to avoid losses above the expected level without causing excess provisioning in the budget.
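The abstract names the LPM but does not define the LPMFE weights. One plausible formalization, consistent with the stated goal of penalizing volatility under-forecasts (the exact definition is in the thesis itself), is:

```latex
% Lower Partial Moment of order n of model i's forecast errors,
% penalizing only periods where realized volatility RV_t exceeds
% the forecast (under-forecasting); an assumed formalization:
\mathrm{LPM}_{n,i} = \frac{1}{T}\sum_{t=1}^{T}
  \big[\max\!\big(0,\; RV_t - \widehat{RV}_{i,t}\big)\big]^{n},
\qquad
w_i = \frac{1/\mathrm{LPM}_{n,i}}{\sum_{j=1}^{k} 1/\mathrm{LPM}_{n,j}}.
```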
|
14 |
Essays on forecasting and Bayesian model averaging. Eklund, Jana, January 2006
This thesis, which consists of four chapters, focuses on forecasting in a data-rich environment and related computational issues.

Chapter 1, “An embarrassment of riches: Forecasting using large panels”, explores the idea of combining forecasts from various indicator models by using Bayesian model averaging (BMA) and compares the predictive performance of BMA with that of factor models. The combination of these two methods is also implemented, together with a benchmark, a simple autoregressive model. The forecast comparison is conducted in a pseudo out-of-sample framework for three distinct datasets measured at different frequencies. These include monthly and quarterly US datasets consisting of more than 140 predictors, and a quarterly Swedish dataset with 77 possible predictors. The results show that none of the considered methods is uniformly superior and that no method consistently outperforms or underperforms a simple autoregressive process.

Chapter 2, “Forecast combination using predictive measures”, proposes using the out-of-sample predictive likelihood as the basis for BMA and forecast combination. In addition to its intuitive appeal, the use of the predictive likelihood relaxes the need to specify proper priors for the parameters of each model. We show that the forecast weights based on the predictive likelihood have desirable asymptotic properties, and that these weights have better small-sample properties than the traditional in-sample marginal likelihood when uninformative priors are used. In order to calculate the weights for the combined forecast, a number of observations, a hold-out sample, is needed. There is a trade-off involved in the size of the hold-out sample: the number of observations available for estimation is reduced, which might have a detrimental effect; on the other hand, as the hold-out sample size increases, the predictive measure becomes more stable, which should improve performance. When there is a true model in the model set, the predictive likelihood will select the true model asymptotically, but the convergence to the true model is slower than for the marginal likelihood. It is this slower convergence, coupled with protection against overfitting, that makes the predictive likelihood perform better when the true model is not in the model set.

Chapter 3, “Forecasting GDP with factor models and Bayesian forecast combination”, applies the predictive likelihood approach developed in the previous chapter to forecasting GDP growth. The analysis is performed on quarterly economic datasets from six countries: Canada, Germany, Great Britain, Italy, Japan and the United States. The forecast combination technique based on both in-sample and out-of-sample weights is compared to forecasts based on factor models. The traditional point forecast analysis is extended by considering confidence intervals. The results indicate that forecast combinations based on the predictive likelihood weights have better forecasting performance than the factor models and forecast combinations based on the traditional in-sample weights. In contrast to common findings, the predictive likelihood does improve upon an autoregressive process for longer horizons. The largest improvement over the in-sample weights occurs for small hold-out sample sizes, which provides protection against structural breaks at the end of the sample period.
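The predictive-likelihood weighting described in Chapter 2 admits a compact statement. A sketch with assumed notation (hold-out sample ỹ, candidate models M_1, …, M_m), not taken verbatim from the thesis:

```latex
% Predictive-likelihood weights over a hold-out sample \tilde{y}:
w_i = \frac{p(\tilde{y} \mid y, M_i)}
           {\sum_{j=1}^{m} p(\tilde{y} \mid y, M_j)},
\qquad
\hat{y}_{T+h} = \sum_{i=1}^{m} w_i \,
  \mathbb{E}\!\left[\,y_{T+h} \mid y, M_i\,\right].
```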
The potential benefits of model averaging as a tool for extracting the relevant information from a large set of predictor variables come at the cost of considerable computational complexity. To avoid evaluating all the models, several approaches have been developed to simulate from the posterior distributions. Markov chain Monte Carlo methods can be used to draw directly from the model posterior distributions. It is desirable that the chain moves well through the model space and takes draws from regions with high probability. Several computationally efficient sampling schemes, either one at a time or in blocks, have been proposed for speeding up convergence. There is a trade-off between local moves, which make use of the current parameter values to propose plausible values for model parameters, and more global transitions, which potentially allow faster exploration of the distribution of interest but may be much harder to implement efficiently. Local model moves enable the use of fast updating schemes, where it is unnecessary to completely re-estimate the new, slightly modified model to obtain an updated solution. The fourth and last chapter, “Computational efficiency in Bayesian model and variable selection”, investigates the possibility of increasing computational efficiency by using alternative algorithms to obtain estimates of model parameters as well as keeping track of their numerical accuracy. Various samplers that explore the model space are also presented and compared based on the output of the Markov chain. / Diss. Stockholm : Handelshögskolan, 2006
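A minimal sketch of the kind of local-move model-space sampler discussed here (MC³ with a BIC approximation to the marginal likelihood); the data, prior choices and settings are all illustrative assumptions, not the thesis's algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy data: n observations, p candidate predictors.
n, p = 120, 15
X = rng.standard_normal((n, p))
y = X[:, 0] - 0.5 * X[:, 3] + rng.standard_normal(n)

def log_marginal(gamma):
    """BIC approximation to the log marginal likelihood of the model
    indexed by the 0/1 inclusion vector gamma."""
    k = int(gamma.sum())
    if k == 0:
        rss = float(y @ y)
    else:
        Xg = X[:, gamma.astype(bool)]
        beta, *_ = np.linalg.lstsq(Xg, y, rcond=None)
        resid = y - Xg @ beta
        rss = float(resid @ resid)
    return -0.5 * (n * np.log(rss / n) + k * np.log(n))

# MC^3: propose flipping one variable in or out (a local move),
# accept with probability given by the posterior ratio.
gamma = np.zeros(p)
cur = log_marginal(gamma)
visits = np.zeros(p)
n_iter = 5000
for _ in range(n_iter):
    j = rng.integers(p)
    prop = gamma.copy()
    prop[j] = 1 - prop[j]
    new = log_marginal(prop)
    if np.log(rng.random()) < new - cur:
        gamma, cur = prop, new
    visits += gamma

print("posterior inclusion probabilities:", np.round(visits / n_iter, 2))
```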
|
15 |
Previsão de cargas elétricas a curto prazo por combinação de previsões via regressão simbólica / Short-term electric load forecasting by forecast combination via symbolic regression. Braga, Douglas de Oliveira Matos, 31 August 2017
O planejamento energético é base para as tomadas de decisões nas companhias de energia elétrica e, para isto, depende fortemente da disponibilidade de previsões acuradas para as cargas. Devido à inviabilidade de armazenamentos em larga escala e ao custo elevado de compras de energia a curto prazo, além da possibilidade de multas e sanções de órgãos governamentais, previsões em curto prazo são importantes para a otimização da alocação de recursos e da geração de energia.

Neste trabalho utilizamos nove métodos univariados de séries temporais para a previsão de cargas a curto prazo, com horizontes de 1 a 24 horas à frente. Buscando melhorar a acurácia das previsões, propomos um método de combinação de previsões através de Regressão Simbólica, que combina de forma não-linear as previsões obtidas pelos nove métodos de séries temporais utilizados. Diferente de outros métodos não-lineares de regressão, a Regressão Simbólica não precisa de uma especificação prévia da forma funcional.

O método proposto é aplicado em uma série real da cidade do Rio de Janeiro (RJ), que contém cargas horárias de 104 semanas dos anos de 1996 e 1997. Comparamos, através de critérios indicados na literatura, os resultados obtidos pelo método proposto com os resultados obtidos por métodos tradicionais de combinação de previsões e com o resultado de simulações de redes neurais artificiais aplicadas ao mesmo conjunto de dados. O método proposto obteve melhores resultados, que indicam que a não-linearidade pode ser um aspecto importante para a combinação de previsões no problema de previsão de carga a curto prazo. / Decision-making in energy companies relies heavily on the availability of accurate load forecasts. Because storing electricity on a large scale is not viable, the cost of short-term energy purchasing is high, and there are government fines and sanctions for failing to supply energy on demand, short-term load forecasts are important for the optimization of resource allocation and energy production.

In this work we used nine univariate time series methods for short-term load forecasting, with forecast horizons ranging from 1 to 24 hours ahead. In order to improve forecast accuracy, we propose a method of combining forecasts through Symbolic Regression, which combines in a non-linear way the forecasts obtained by the nine time series methods used. Unlike other non-linear regression methods, Symbolic Regression does not require a prior specification of the functional form.

We applied the proposed method to a real time series from the city of Rio de Janeiro (RJ), containing hourly loads for 104 weeks in the years 1996 and 1997. Using criteria indicated in the literature, we compared the results obtained by the proposed method with those obtained by traditional forecast combination methods and by artificial neural networks applied to the same dataset. The proposed method yielded better results, indicating that non-linearity may be an important aspect of forecast combination in short-term load forecasting.
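The abstract's key claim is that symbolic regression learns the functional form of the combination itself. Below is a minimal sketch of that idea, not the thesis's implementation: the synthetic data, the choice of the gplearn library, and all settings are assumptions.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor  # one possible SR implementation

rng = np.random.default_rng(42)

# Assumed setup: columns of F stand in for the nine individual methods'
# load forecasts on a training window; y is the observed hourly load.
T, m = 500, 9
y = 1000 + 200 * np.sin(np.arange(T) * 2 * np.pi / 24) + rng.normal(0, 30, T)
F = y[:, None] + rng.normal(0, 50, (T, m))

# Evolve a free-form (possibly non-linear) combination of the forecasts;
# no functional form is specified in advance, as the abstract emphasizes.
sr = SymbolicRegressor(population_size=1000, generations=20,
                       function_set=("add", "sub", "mul", "div"),
                       parsimony_coefficient=0.001, random_state=0)
sr.fit(F, y)
combined = sr.predict(F)  # in-sample combined forecast

print(sr._program)  # the evolved combination formula
print("RMSE:", np.sqrt(np.mean((y - combined) ** 2)))
```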
|
16 |
Metas para inflação, previsões fiscais e monetárias na UEMOA / Inflation targets, fiscal and monetary forecasts in WAEMU. Silva, Eudésio Eduím da, 05 June 2018
Esta tese tem como objetivo estudar os erros de previsão para inflação e os determinantes do erro de previsão fiscal na União Econômica e Monetária do Oeste Africano (UEMOA). No que concerne à previsão de inflação, são comparados dois períodos: aquele em que o Banco Central da UEMOA (BCEAO) utiliza o modelo de metas para inflação (entre 2011 e 2015) e o período anterior (entre 1997 e 2010). Para tal, foram estimadas as Raízes dos Erros Quadrados Médios (REQMs) de diferentes modelos econométricos e de suas combinações (similares àqueles utilizados na previsão da inflação da Zona Monetária do Euro). Os resultados mostram uma redução dos erros de previsão da inflação após a implementação do modelo de metas. Em relação aos determinantes do erro de previsão do saldo orçamentário na zona da União Econômica e Monetária da África Ocidental (UEMOA) no período entre 2000 e 2015, a análise preliminar dos dados mostra que a maioria dos países da UEMOA apresenta erros de previsão positivos, sugerindo uma postura prudente em relação à previsão do saldo orçamentário. Destarte, foram feitas estimações por meio de quatro métodos econométricos: mínimos quadrados ordinários em painel (POLS), mínimos quadrados ordinários com efeito fixo (FE-OLS), método generalizado de momentos em diferença (D-GMM) e método generalizado de momentos sistêmico (S-GMM). Os resultados mostram a relevância dos fatores econômicos na explicação do erro de previsão do saldo orçamentário, especialmente o erro de previsão do PIB. Por outro lado, a hipótese do efeito da crise do subprime de 2008 não foi confirmada na zona da UEMOA. Os fatores políticos, institucionais e de governança também não tiveram relevância na determinação do erro de previsão fiscal. / The main objective of this thesis is to study inflation forecast errors and the determinants of the fiscal forecast error in the West African Economic and Monetary Union (WAEMU). Concerning inflation forecasting, two periods are compared: the one in which the Central Bank of the WAEMU (BCEAO) uses the inflation targeting model (between 2011 and 2015) and the previous period (between 1997 and 2010). To this end, the root mean square errors (RMSE) of different econometric models and of their combinations (similar to those used in inflation forecasting for the euro area) were estimated. The results show a reduction of inflation forecast errors after the implementation of the targeting model. Regarding the determinants of the budget balance forecast error in the WAEMU area between 2000 and 2015, preliminary data analysis shows that most WAEMU countries have positive forecast errors, suggesting a cautious approach to forecasting the budget balance. Estimations were made using four econometric methods: pooled ordinary least squares (POLS), fixed-effects ordinary least squares (FE-OLS), difference generalized method of moments (D-GMM) and system generalized method of moments (S-GMM). The results show the relevance of economic factors in explaining the budget balance forecast error, especially the GDP forecast error. On the other hand, the hypothesis of an effect of the 2008 subprime crisis was not confirmed in the WAEMU zone. Political, institutional and governance factors were also not relevant in determining the fiscal forecast error.
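The forecast-error comparison rests on the standard root mean square error, the "Raízes dos Erros Quadrados Médios" of the Portuguese abstract; with assumed notation for inflation π and its forecast:

```latex
% Root mean square error of inflation forecasts over T periods:
\mathrm{RMSE} = \sqrt{\frac{1}{T}\sum_{t=1}^{T}
  \left(\pi_t - \hat{\pi}_t\right)^{2}}.
```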
|
18 |
Essays on the econometrics of macroeconomic survey data. Conflitti, Cristina, 11 September 2012
This thesis contains three essays covering different topics in the field of statistics and econometrics of survey data. Chapters one and two analyse two aspects of the Survey of Professional Forecasters (SPF hereafter) dataset. This survey provides extensive information on the macroeconomic expectations of professional forecasters and offers an opportunity to exploit a rich information set, but it poses a challenge on how to extract the relevant information in a proper way. The last chapter addresses the issue of analyzing the opinions on the euro reported in the Flash Eurobarometer dataset.

The first chapter, “Measuring Uncertainty and Disagreement in the European Survey of Professional Forecasters”, proposes a density forecast methodology based on the piecewise linear approximation of the individuals' forecasting histograms, to measure uncertainty and disagreement among professional forecasters. Since 1960, with the introduction of the SPF in the US, it has been clear that such surveys are a useful source of information for measuring disagreement and uncertainty without relying on macroeconomic or time series models. Direct measures of uncertainty are seldom available, whereas many surveys report point forecasts from a number of individual respondents. There has been a long tradition of using measures of the dispersion of individual respondents' point forecasts (disagreement or consensus) as proxies for uncertainty. Unlike other surveys, the SPF represents an exception: it directly asks for the point forecast, and for the probability distribution, in the form of a histogram, associated with the macro variables of interest. An important issue concerns how to approximate individual probability densities and obtain accurate individual results for disagreement and uncertainty before computing the aggregate measures. In contrast to Zarnowitz and Lambros (1987) and Giordani and Soderlind (2003), we overcome the problem associated with distributional assumptions of probability density forecasts by using a non-parametric approach that, instead of assuming a functional form for the individual probability law, approximates the histogram by a piecewise linear function. In addition, and unlike earlier works that focus on US data, we employ European data, considering gross domestic product (GDP), inflation and unemployment.

The second chapter, “Optimal Combination of Survey Forecasts”, is based on joint work with Christine De Mol and Domenico Giannone. It proposes an approach to optimally combine survey forecasts, exploiting the whole covariance structure among forecasters. There is a vast literature on forecast combination methods, advocating their usefulness both from the theoretical and empirical points of view (see e.g. the recent review by Timmermann (2006)). Surprisingly, it appears that simple methods tend to outperform more sophisticated ones, as shown for example by Genre et al. (2010) on the combination of the forecasts in the SPF conducted by the European Central Bank (ECB). The main conclusion of several studies is that the simple equal-weighted average constitutes a benchmark that is hard to improve upon. In contrast to a great part of the literature, which does not exploit the correlation among forecasters, we take into account the full covariance structure and determine the optimal weights for the combination of point forecasts as the minimizers of the mean squared forecast error (MSFE), under the constraint that these weights are nonnegative and sum to one. We compare our combination scheme with other methodologies in terms of forecasting performance. Results show that the proposed optimal combination scheme is an appropriate methodology to combine survey forecasts. The literature on point forecast combination has been widely developed, but there are fewer studies analyzing the combination of density forecasts. We extend our work by considering density forecast combination. Building on the main results presented in Hall and Mitchell (2007), we propose an iterative algorithm for computing the density weights which maximize the average logarithmic score over the sample period. The empirical application is made for the European GDP and inflation forecasts. Results suggest that the optimal weights obtained via the iterative algorithm outperform the equal-weighted density combinations used by the ECB.

The third chapter, “Opinion surveys on the euro: a multilevel multinomial logistic analysis”, outlines the multilevel aspects of public attitudes toward the euro. This work was motivated by the ongoing debate on whether the perception of the euro among European citizens, ten years after its introduction, was positive or negative. The aim of this work is, therefore, to disentangle the issue of public attitudes by considering both individual socio-demographic characteristics and macroeconomic features of each country, treating them as two separate levels in a single analysis. A hierarchical structure has the advantage of modelling within-country as well as between-country relations in a single analysis. The multilevel analysis allows for dependence between individuals within countries induced by unobserved heterogeneity between countries, i.e. we include in the estimation country-specific characteristics that are not directly observable. In this chapter we empirically investigate which individual characteristics and country specificities are most important in shaping the perception of the euro. Attitudes toward the euro vary across individuals and countries, and are driven by personal considerations based on the benefits and costs of using the single currency. Individual features, such as a high level of education or living in a metropolitan area, have a positive impact on the perception of the euro. Moreover, country-specific economic conditions can influence individual attitudes. / Doctorat en Sciences économiques et de gestion
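The weight optimization of the second chapter is explicit enough to sketch: minimize the MSFE of the combined forecast subject to nonnegative weights summing to one. A minimal illustration with synthetic data (the thesis's estimator may differ in detail):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Assumed data: T periods, k forecasters' point forecasts F, outcomes y.
T, k = 80, 5
y = rng.standard_normal(T).cumsum() * 0.1 + 2.0
F = y[:, None] + rng.standard_normal((T, k)) * np.array([0.3, 0.4, 0.5, 0.6, 0.8])

def msfe(w):
    # Mean squared forecast error of the combined forecast F @ w.
    e = y - F @ w
    return float(e @ e) / T

# Minimize MSFE subject to w >= 0 and sum(w) = 1, as described above.
w0 = np.full(k, 1.0 / k)
res = minimize(msfe, w0, method="SLSQP",
               bounds=[(0.0, None)] * k,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
w_opt = res.x

print("optimal weights:", np.round(w_opt, 3))
print("MSFE optimal vs equal:", round(msfe(w_opt), 4), round(msfe(w0), 4))
```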
|
19 |
Forecasting COVID-19 with Temporal Hierarchies and Ensemble Methods. Shandross, Li, 09 August 2023
Infectious disease forecasting efforts underwent rapid growth during the COVID-19 pandemic, providing guidance for pandemic response and insight into potential future trends. Yet despite their importance, short-term forecasting models often struggled to produce accurate real-time predictions of this complex and rapidly changing system. This gap in accuracy persisted throughout the pandemic and warrants the exploration and testing of new methods to glean fresh insights.
In this work, we examined the application of the temporal hierarchical forecasting (THieF) methodology to probabilistic forecasts of COVID-19 incident hospital admissions in the United States. THieF is an innovative forecasting technique that aggregates time-series data into a hierarchy made up of different temporal scales, produces forecasts at each level of the hierarchy, then reconciles those forecasts using optimized weighted forecast combination. While THieF's unique approach has shown substantial accuracy improvements in a diverse range of applications, such as operations management and emergency room admission predictions, this technique had not previously been applied to outbreak forecasting.
We generated candidate models formulated using the THieF methodology, which differed in their hierarchy schemes and data transformations, and ensembles of the THieF models, computed as a mean of predictive quantiles. The models were evaluated using the weighted interval score (WIS) as a measure of forecast skill, and the top-performing subset was compared to several benchmark models. These included simple ARIMA and seasonal ARIMA models, a naive baseline model, and an ensemble of operational incident-hospitalization models from the US COVID-19 Forecast Hub. The THieF models and THieF ensembles demonstrated improvements in WIS and mean absolute error (MAE), as well as competitive prediction interval coverage, over many benchmark models in both the validation and testing phases. The best THieF model generally ranked second out of nine total models during the testing evaluation. These accuracy improvements suggest the THieF methodology may serve as a useful addition to the infectious disease forecasting toolkit.
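The reconciliation step at the heart of THieF can be illustrated compactly. The sketch below uses assumed weekly admissions data, deliberately naive stand-in base forecasts, and structural-scaling WLS reconciliation (one published choice of weights for temporal hierarchies); the thesis's actual models, hierarchy schemes and weights may differ.

```python
import numpy as np

# Hypothetical weekly incident-admissions history (assumed data).
y = np.array([120, 135, 150, 160, 170, 165, 155, 140,
              130, 125, 135, 145, 160, 175, 180, 170], dtype=float)

h = 4  # forecast 4 weeks ahead

# Base forecasts at two temporal levels (naive stand-ins for real models):
weekly_fc = np.repeat(y[-4:].mean(), h)        # weekly level
agg_series = y.reshape(-1, 4).sum(axis=1)      # non-overlapping 4-week sums
agg_fc = np.array([agg_series[-2:].mean()])    # aggregate (4-weekly) level

# Summing matrix: first row maps the 4 weekly values to their 4-week
# total; the remaining rows keep the weekly values themselves.
S = np.vstack([np.ones((1, h)), np.eye(h)])
yhat = np.concatenate([agg_fc, weekly_fc])     # stacked base forecasts

# Structural-scaling WLS reconciliation: weight each level by the
# number of bottom-level periods it aggregates.
lam_inv = np.diag(1.0 / np.array([h, 1, 1, 1, 1]))
G = np.linalg.solve(S.T @ lam_inv @ S, S.T @ lam_inv)
b_tilde = G @ yhat        # reconciled weekly forecasts
y_tilde = S @ b_tilde     # coherent forecasts at all levels

print("reconciled weekly:", np.round(b_tilde, 1))
print("reconciled 4-week total:", round(float(y_tilde[0]), 1))
```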
|
20 |
Probabilistic solar power forecasting using partially linear additive quantile regression models: an application to South African data. Mpfumali, Phathutshedzo, 18 May 2019
MSc (Statistics) / Department of Statistics / This study discusses an application of partially linear additive quantile regression models in predicting medium-term global solar irradiance, using data from the Tellerie radiometric station in South Africa for the period August 2009 to April 2010. Variables are selected using the least absolute shrinkage and selection operator (Lasso) via hierarchical interactions, and the parameters of the developed models are estimated using Barrodale and Roberts's algorithm. The best models are selected based on the Akaike information criterion (AIC), Bayesian information criterion (BIC), adjusted R squared (AdjR2) and generalised cross validation (GCV). The accuracy of the forecasts is evaluated using the mean absolute error (MAE) and the root mean square error (RMSE). To improve the accuracy of the forecasts, a convex forecast combination algorithm is used, in which the average loss suffered by the models is based on the pinball loss function. A second forecast combination method, quantile regression averaging (QRA), is also used. The best set of forecasts is selected based on the prediction interval coverage probability (PICP), prediction interval normalised average width (PINAW) and prediction interval normalised average deviation (PINAD). The results show that QRA is the best model, since it produces more robust prediction intervals than the other models. The percentage improvement is calculated, and the results demonstrate that QRA over the GAM with interactions yields a small improvement, whereas QRA over the convex forecast combination model yields a higher percentage improvement. A major contribution of this dissertation is the inclusion of a non-linear trend variable and the extension of forecast combination models to include the QRA. / NRF
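The pinball loss and QRA that drive the model evaluation and combination here have standard definitions, sketched below with assumed notation (y the observed irradiance, q a quantile forecast, x_t the vector of individual forecasts):

```latex
% Pinball (quantile) loss for quantile level \tau \in (0,1):
L_{\tau}(y, q) =
  \begin{cases}
    \tau\,(y - q),       & y \ge q,\\
    (1 - \tau)\,(q - y), & y < q.
  \end{cases}
% Quantile regression averaging: the \tau-quantile forecast is a
% linear combination x_t^{\top}\hat{\beta}_{\tau} of the individual
% forecasts, with coefficients estimated by
\hat{\beta}_{\tau} = \arg\min_{\beta}\sum_{t=1}^{T}
  L_{\tau}\!\left(y_t,\; x_t^{\top}\beta\right).
```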
|