21

Portfolio management using computational intelligence approaches : forecasting and optimising the stock returns and stock volatilities with fuzzy logic, neural network and evolutionary algorithms

Skolpadungket, Prisadarng January 2013
Portfolio optimisation is subject to a number of constraints arising from practical matters and regulations. Closed-form mathematical solutions of portfolio optimisation problems usually cannot accommodate these constraints, while exhaustive search for the exact solution can take a prohibitive amount of computational time. Portfolio optimisation models are also usually impaired by estimation error, caused by the inability to predict the future accurately. A number of Multi-Objective Genetic Algorithms are proposed to solve the two-objective problem subject to cardinality, floor and round-lot constraints. Fuzzy logic is incorporated into the Vector Evaluated Genetic Algorithm (VEGA), but its solutions tend to cluster around a few points. Strength Pareto Evolutionary Algorithm 2 (SPEA2) gives solutions that are evenly distributed along the efficient frontier, while MOGA is more time-efficient. An Evolutionary Artificial Neural Network (EANN) is proposed that automatically evolves the ANN's initial values and structure (hidden nodes and layers). The EANN gives better stock return forecasts than Ordinary Least Squares estimation and Back-Propagation and Elman Recurrent ANNs. Adaptation algorithms based on fuzzy-logic-like rules are proposed for selecting a pair of forecasting models, choosing the best models for a given economic scenario; their predictive performance is better than that of the forecasting models they are compared with. MOGA and SPEA2 are then modified to include a third objective that handles model risk, and their performance is evaluated and tested. The results show that they perform better than the versions without the third objective.
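As an illustration only (not code from the thesis), the sketch below shows how the two objectives and the three constraint types named above — cardinality, floor and round-lot — might be evaluated for a candidate portfolio inside a multi-objective genetic algorithm; the function name and all parameter values are hypothetical.

```python
# A minimal sketch (not the thesis code) of the two objectives and the three
# constraint types mentioned above: cardinality (at most K assets), floor
# (minimum weight per held asset) and round-lot (weights in multiples of a lot).
# All parameter values here are illustrative assumptions.
import numpy as np

def evaluate_portfolio(weights, mu, cov, max_assets=10, floor=0.01, lot=0.005):
    """Return (expected_return, variance) if feasible, else None."""
    w = np.asarray(weights, dtype=float)
    held = w > 0
    if held.sum() > max_assets:                      # cardinality constraint
        return None
    if np.any(w[held] < floor):                      # floor constraint
        return None
    if not np.allclose(w / lot, np.round(w / lot)):  # round-lot constraint
        return None
    if not np.isclose(w.sum(), 1.0):                 # fully invested
        return None
    return float(w @ mu), float(w @ cov @ w)         # the two GA objectives

# Toy usage with three assets
mu = np.array([0.08, 0.12, 0.10])
cov = np.diag([0.04, 0.09, 0.05])
print(evaluate_portfolio([0.5, 0.3, 0.2], mu, cov))
```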
22

Previsão setorial do consumo de fontes energéticas para o Brasil: um estudo a partir da proposta de integração econometria+insumo-produto / Sectoral forecasting of energy consumption in Brazil: a study based on the proposed econometrics + input-output integration

Silva, Daniel Oliveira Paiva da 21 June 2010
The adequate availability of energy resources is an aspect to be considered in the recovery of Brazilian economic growth, a fact highlighted by the widespread use of these resources in the economy as well as by the electricity crisis of 2001 and the natural gas crises of 2004 and 2005. There is therefore a need for rational decision making, or more specifically for strategic management of the supply of energy sources, and forecasting models have proved to be an important tool to support such decisions. Forecasts were produced using the proposed integration of econometrics with input-output analysis, the latter in hybrid form for the year 2005. The econometric block was used to endogenise household consumption and private investment. To complete the information needed for the forecasts, three scenarios were constructed: Low, Reference and High. Integrating the two blocks through the linking strategy, together with the scenarios, produced forecasts indicating that the oil and natural gas, oil refining and coke, alcohol and electricity sectors will roughly double their levels of consumption in the Brazilian economy between 2006 and 2015 under the High scenario, given an average growth of 96%. For the Reference and Low scenarios, the results indicate growth of 45% and 13%, respectively. Comparing the forecasts for 2006 to 2008 with the data already available shows a reasonable level of forecast accuracy, especially for the Electricity sector.
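For illustration (not the author's model), the sketch below shows the input-output side of such an econometrics + input-output integration: sectoral gross output under a given final-demand scenario follows from the Leontief inverse. The two-sector coefficients and the scenario figures are made up.

```python
# A minimal sketch, with made-up numbers, of the input-output block used in an
# econometrics + input-output integration: sectoral output under a scenario's
# final demand follows from the Leontief inverse x = (I - A)^(-1) f.
import numpy as np

A = np.array([[0.10, 0.04],      # illustrative technical coefficients
              [0.20, 0.15]])     # (2 sectors only, for brevity)

def sector_output(final_demand):
    """Gross output by sector for a given final-demand scenario."""
    I = np.eye(A.shape[0])
    return np.linalg.solve(I - A, final_demand)

# Final demand under three hypothetical scenarios (Low, Reference, High)
for name, f in {"Low": [100, 80], "Reference": [115, 95], "High": [140, 120]}.items():
    print(name, sector_output(np.array(f, dtype=float)))
```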
23

Previsão do consumo de energia elétrica a curto prazo, usando combinações de métodos univariados / Short-term electricity consumption forecasting using combinations of univariate methods

Carneiro, Anna Cláudia Mancini da Silva 26 September 2014
Forecasting electricity demand is crucial for production planning in energy utilities. The focus of this study is short-term forecasting. Univariate time-series methods are applied to a real series of 104 weekly observations of electricity load in Rio de Janeiro in 1996 and 1997, and several combinations of the best-performing methods are tested. The combinations are made with the outperformance method, a simple linear combination with fixed weights. The results of the combinations are compared with those of artificial neural networks applied to the same problem and with those of an exponential smoothing method with double additive seasonality. Overall, the exponential smoothing method achieved the best results and is perhaps the most suitable and reliable for practical applications, although it needs improvements to ensure complete extraction of the information contained in the data.
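As a hedged illustration, the sketch below assumes the "outperformance" combination weights each model by the fraction of past periods in which it produced the smaller absolute error, then applies those fixed weights to new forecasts; the data and weights are invented, and the thesis's exact weighting rule may differ.

```python
# A minimal sketch, under the stated assumption that the "outperformance"
# combination weights each model by the fraction of past periods in which it
# produced the smaller absolute error (the weights then stay fixed).
import numpy as np

def outperformance_weights(errors_a, errors_b):
    wins_a = np.mean(np.abs(errors_a) < np.abs(errors_b))
    return wins_a, 1.0 - wins_a          # fixed weights for models A and B

def combine(forecast_a, forecast_b, w_a, w_b):
    return w_a * forecast_a + w_b * forecast_b

# Toy example with made-up in-sample errors and out-of-sample forecasts
err_a = np.array([1.0, -0.5, 2.0, 0.3])
err_b = np.array([0.8, -1.2, 1.5, 0.9])
w_a, w_b = outperformance_weights(err_a, err_b)
print(combine(105.0, 102.0, w_a, w_b))
```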
24

Aproximace prostorově distribuovaných hierarchicky strukturovaných dat / Approximation of spatially-distributed hierarchically organized data

Smejkalová, Veronika January 2018
Forecasts of waste production are important information for planning in waste management. The historical data often consist of short time series, so traditional forecasting approaches fail. This thesis proposes a mathematical model for forecasting future waste production based on spatially distributed, hierarchically structured data. The approach is based on regression analysis with a final balancing step that ensures consistency of the aggregated data values. Selection of the regression function, to give a high-quality description of the data trend, is part of the model; in addition, outliers, which occur frequently in the database, are removed. Emphasis is placed on decomposing the extensive model into subtasks, which leads to a simpler implementation. The output of this thesis is a tool tested in a case study on municipal waste production data in the Czech Republic.
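As an illustration of the balancing idea (not the thesis's actual scheme), the sketch below rescales lower-level forecasts proportionally so that they sum to the aggregate forecast; all figures are made up.

```python
# A minimal sketch of one simple balancing step for hierarchical forecasts:
# regional forecasts are rescaled proportionally so that they sum to the
# national forecast. The thesis's actual balancing scheme may differ.
import numpy as np

def proportional_balance(regional_forecasts, national_forecast):
    regional = np.asarray(regional_forecasts, dtype=float)
    return regional * (national_forecast / regional.sum())

# Toy example: regional municipal-waste forecasts (kt) vs. a national forecast
regions = [120.0, 95.0, 60.0]          # sums to 275
balanced = proportional_balance(regions, 280.0)
print(balanced, balanced.sum())        # rescaled values summing to 280
```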
26

[en] SPOT PRICE FORECASTING IN THE ELECTRICITY MARKET / [pt] PREVISÃO DO PREÇO SPOT NO MERCADO DE ENERGIA ELÉTRICA

LUCIO DE MEDEIROS 14 April 2004
[en] This thesis focuses on spot price forecasting and risk management in the Brazilian electricity industry. A new methodology is proposed, based on neuro-fuzzy systems and on the operation planning and dispatch programs of the Brazilian power system. The main advantage of the approach is that it yields short-term spot price distributions with less dispersion, and hence more information, than those obtained from the operation planning programs alone. Furthermore, because the forecasting system runs in less than one minute, it allows Monte Carlo simulations and scenario analyses. The main variables affecting the spot price in Brazil, such as natural inflow energy and stored energy, are included in the model, and variables without a well-defined historical series or with insufficient data for training, such as the expansion plan, interchange limits and demand, can also be included because of the intrinsic characteristics of the fuzzy models. Comparisons with neural network models are made. The state of the art in modelling for electricity markets and policy around the world is also presented, together with the main concepts of risk management in the electricity market.
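For illustration only, the sketch below shows the kind of Monte Carlo scenario analysis a fast price forecaster makes possible: sample the driving variables, evaluate a fitted model on each scenario, and inspect the resulting spot-price distribution. The linear placeholder model and all numbers are invented and stand in for the thesis's neuro-fuzzy system.

```python
# A minimal sketch of Monte Carlo scenario analysis over a fast price model:
# sample the drivers, evaluate the model per scenario, summarise the prices.
# The "model" below is an illustrative placeholder, not a neuro-fuzzy system.
import numpy as np

rng = np.random.default_rng(0)

def placeholder_price_model(inflow_energy, stored_energy):
    # Illustrative only: price falls as inflow and storage rise.
    return np.maximum(20.0, 300.0 - 1.5 * inflow_energy - 0.8 * stored_energy)

inflow = rng.normal(100.0, 25.0, size=10_000)    # hypothetical inflow scenarios
storage = rng.normal(150.0, 30.0, size=10_000)   # hypothetical storage scenarios
prices = placeholder_price_model(inflow, storage)

print("mean:", prices.mean(), "5%-95%:", np.percentile(prices, [5, 95]))
```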
27

[en] A COMPARATIVE STUDY OF THE FORECAST CAPABILITY OF VOLATILITY MODELS / [pt] ESTUDO COMPARATIVO DA CAPACIDADE PREDITIVA DE MODELOS DE ESTIMAÇÃO DE VOLATILIDADE

LUIS ANTONIO GUIMARAES BENEGAS 15 January 2002
[en] Risk is defined as the distribution of unexpected outcomes due to changes in the values of the variables that describe the market. Risk, however, is not an observable variable, and its measurement depends on the model used to evaluate it, so different models can produce significantly different risk forecasts. The main goal of this dissertation is a comparative study of the most widely used volatility models (the sample variance over the last k observations, exponential smoothing models and Bollerslev's GARCH(1,1)) with respect to their forecasting ability. The volatility forecasts are compared with the realised out-of-sample volatility for portfolios of stocks traded in the Brazilian market. Since realised volatility is not observable, the same procedure adopted by RiskMetrics for computing the optimal decay factor is used: the average return of each stock portfolio is assumed to be zero, so the one-step-ahead forecast of the return variance made at date t equals the expected value of the squared return at date t. The final objective is to determine, through backtesting techniques, which volatility forecasting model performs best on the comparison criteria relative to the computational effort required, and thus which offers the best cost-benefit trade-off for the Brazilian equity market.
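As a hedged illustration of the three forecasters being compared, the sketch below computes one-step-ahead variance forecasts from a rolling sample variance, RiskMetrics-style exponential smoothing and a GARCH(1,1) recursion, under the zero-mean assumption stated in the abstract; the parameter values are illustrative, not those estimated in the dissertation.

```python
# A minimal sketch of the three one-step-ahead variance forecasts compared
# above, under the zero-mean assumption (so the realised target at date t is
# the squared return). Parameter values are illustrative assumptions.
import numpy as np

def rolling_variance(returns, k=20):
    """Sample variance of the last k returns (zero mean assumed)."""
    return np.mean(np.asarray(returns[-k:]) ** 2)

def ewma_variance(returns, lam=0.94):
    """RiskMetrics-style exponential smoothing with decay factor lambda."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return var

def garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.90):
    """GARCH(1,1) recursion: sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}."""
    var = np.var(returns)
    for r in returns:
        var = omega + alpha * r ** 2 + beta * var
    return var

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.02, size=500)          # toy daily returns
print(rolling_variance(r), ewma_variance(r), garch11_variance(r))
```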
28

Understanding co-movements in macro and financial variables

D'Agostino, Antonello 09 January 2007
Over recent years, the growing availability of large datasets and improvements in computational speed have further fostered research in both macroeconomic modelling and forecasting. A primary focus of these research areas is to improve model performance by exploiting the informational content of several time series. Increasing the dimension of macro models is indeed crucial for a detailed structural understanding of the economic environment, as well as for accurate forecasting. As a consequence, a new generation of large-scale macro models, based on the micro-foundations of a fully specified dynamic stochastic general equilibrium set-up, has become one of the most flourishing areas of research in both central banks and academia. At the same time, there has been a revival of forecasting methods dealing with many predictors, such as factor models. The central idea of factor models is to exploit co-movements among variables through a parsimonious econometric structure: a few underlying common shocks, or factors, explain most of the co-variation among variables, while the unexplained component of series movements is due to purely idiosyncratic dynamics. The generality of this framework makes factor models suitable for describing a broad variety of models in macroeconomic and financial contexts. The revival of factor models in recent years stems from important developments by Stock and Watson (2002) and Forni, Hallin, Lippi and Reichlin (2000). These authors find the conditions under which certain data averages become collinear with the space spanned by the factors as the cross-section dimension becomes large. Moreover, their factor specifications allow the idiosyncratic dynamics to be mildly cross-correlated (the 'approximate factor structure' of Chamberlain and Rothschild, 1983), a situation empirically verified in many applications. These findings have relevant implications, the most important being that the use of a large number of series is no longer a dimensional constraint; on the contrary, it helps to identify the factor space. This new generation of factor models has been applied in several areas of macroeconomics and finance as well as in policy evaluation, and is consequently very likely to become a milestone in the literature on forecasting with many predictors. This thesis contributes to the empirical literature on factor models by proposing four original applications.

In the first chapter, the generalized dynamic factor model of Forni et al. (2002) is employed to explore the predictive content of asset returns in forecasting Consumer Price Index (CPI) inflation and the growth rate of Industrial Production (IP). The connection between stock markets and economic growth is well known: in the fundamental valuation of equity, the stock price equals the discounted stream of expected future dividends, and since future dividends are related to future growth, a revision of prices, and hence returns, should signal movements in the future growth path. Other important transmission channels, such as Tobin's q theory (Tobin, 1969), the wealth effect and capital market imperfections, have also been widely studied in this literature. I show that an aggregate index, such as the S&P500, could be misleading if used as a proxy for the informative content of the stock market as a whole.
Despite the conventional wisdom of treating such an index as a leading variable, only some of the assets included in the index lead the variables of interest. Its forecasting performance may therefore be poor, leading to sceptical conclusions about the effectiveness of asset prices in forecasting macroeconomic variables. The main idea of the first essay is therefore to analyse the lead-lag structure of the assets composing the S&P500. The classification into leading, lagging and coincident variables is achieved by means of the cross-correlation function cleaned of idiosyncratic noise and short-run fluctuations. I assume that asset returns follow a factor structure: they are the sum of a common part, driven by a few shocks common to all assets, and an idiosyncratic, asset-specific part. The correlation function, computed on the common part of the series, is not affected by asset-specific dynamics and should provide information only on the series driven by the same common factors. Once the leading series are identified, they are grouped within the economic sector they belong to. The predictive content of these aggregates for IP growth and CPI inflation is then explored and compared with the forecasting power of the S&P500 composite index. The forecasting exercise proceeds as follows. First, in an autoregressive (AR) model I choose the truncation lag that minimises the Mean Square Forecast Error (MSFE) over 11 years of out-of-sample simulations for 1, 6 and 12 steps ahead, for both IP growth and CPI inflation. Second, the S&P500 is added as an explanatory variable to the AR specification; repeating the simulation exercise yields only very small improvements in the MSFE statistics. Third, sector averages of the leading stock return series are added as additional explanatory variables in the benchmark regression. Remarkable improvements over the benchmark specification are achieved, especially at the one-year forecast horizon, and significant improvements are also obtained at shorter horizons when the leading series of the technology and energy sectors are used.

The second chapter disentangles the sources of aggregate risk and measures the extent of co-movements in five European stock markets. Based on the static factor model of Stock and Watson (2002), it proposes a new method for measuring the impact of international, national and industry-specific shocks. The process of European economic and monetary integration with the advent of the EMU has been a central issue for investors and policy makers, and the number of studies on the integration and linkages among European stock markets has increased enormously. Given their forward-looking nature, stock prices are considered a key variable for tracking developments in economic and financial markets. Measuring the extent of co-movements between European stock markets has therefore become, especially over recent years, one of the main concerns both for policy makers, who want to shape their policy responses appropriately, and for investors, who need to adapt their hedging strategies to the new political and economic environment. An optimal portfolio allocation strategy is based on a timely identification of the factors affecting asset returns.
Literature dating back to Solnik (1974) identifies national factors as the main contributors to the co-variation among stock returns, with industry factors playing a marginal role. The increasing financial and economic integration of recent years, fostered by the decline of trade barriers and greater policy coordination, should have strongly reduced the importance of national factors and increased the importance of global determinants, such as industry determinants. However, somewhat puzzlingly, recent studies have shown that country sources are still very important and generally more important than industry ones. This chapter tries to cast some light on these conflicting results. It proposes a more flexible econometric estimation strategy, better suited to disentangling and measuring the impact of global and country factors. Results point to a declining influence of national determinants and an increasing influence of industry ones, with international influences remaining the most important driving force of excess returns. These findings overturn the results in the literature and have important implications for strategic portfolio allocation policies, which need to be revisited and adapted to the changed financial and economic scenario.

The third chapter presents a new stylized fact that can be helpful for discriminating among alternative explanations of U.S. macroeconomic stability. The main finding is that the fall in time-series volatility is associated with a sizeable decline, of the order of 30% on average, in the predictive accuracy of several widely used forecasting models, including the factor models proposed by Stock and Watson (2002). This pattern is not limited to measures of inflation but extends to several indicators of real economic activity and interest rates. The generalized fall in predictive ability after the mid-1980s is particularly pronounced for forecast horizons beyond one quarter. Furthermore, this empirical regularity is not specific to a single method; it is a common feature of all models, including those used by public and private institutions. In particular, the forecasts for output and inflation of the Fed's Green Book and the Survey of Professional Forecasters (SPF) are significantly more accurate than a random walk only before 1985. After this date, in contrast, the hypothesis of equal predictive ability between naive random-walk forecasts and the predictions of those institutions cannot be rejected at any horizon, the only exception being the current quarter. The results of this chapter may also be of interest for the empirical literature on asymmetric information. Romer and Romer (2000), for instance, consider a sample ending in the early 1990s and find that the Fed produced more accurate forecasts of inflation and output than several commercial providers. The results here imply that the informational advantage of the Fed and of those private forecasters is in fact limited to the 1970s and the beginning of the 1980s. In contrast, during the last two decades no forecasting model is better than "tossing a coin" beyond the first-quarter horizon, implying that, on average, uninformed economic agents can effectively anticipate future macroeconomic developments. On the other hand, econometric models and economists' judgement remain quite helpful for forecasts over the very short horizon that is relevant for conjunctural analysis.
Moreover, the literature on forecasting methods, recently surveyed by Stock and Watson (2005), has devoted a great deal of attention to identifying the best model for predicting inflation and output; most studies, however, are based on full-sample periods. The main findings of the chapter reveal that most of the full-sample predictability of U.S. macroeconomic series arises from the years before 1985. Long time series appear to attach a far larger weight to the earlier sub-sample, which is characterized by higher volatility of inflation and output. The results also suggest that some caution should be used in evaluating the performance of alternative forecasting models on the basis of a pool of different sub-periods, as full-sample analyses are likely to miss parameter instability.

The fourth chapter performs a detailed forecast comparison between the static factor model of Stock and Watson (2002) (SW) and the dynamic factor model of Forni et al. (2005) (FHLR). It is not the first work to perform such an evaluation: Boivin and Ng (2005) focus on a very similar problem, while Stock and Watson (2005) compare the performance of a larger class of predictors. The SW and FHLR methods essentially differ in the computation of the forecast of the common component; in particular, they differ in the estimation of the factor space and in the way projections onto this space are performed. In SW, the factors are estimated by static Principal Components (PC) of the sample covariance matrix, and the forecast of the common component is simply the projection of the predicted variable on the factors. FHLR propose efficiency improvements in two directions. First, they estimate the common factors by Generalized Principal Components (GPC), in which observations are weighted according to their signal-to-noise ratio. Second, they impose the constraints implied by the dynamic factor structure when the variables of interest are projected on the common factors; specifically, they take into account the leading and lagging relations across series by means of principal components in the frequency domain, which allows an efficient aggregation of variables that may be out of phase. Whether these efficiency improvements help forecasting in finite samples is, however, an empirical question on which the literature has not yet reached a consensus. On the one hand, Stock and Watson (2005) show that both methods perform similarly (although they focus on the weighting of the idiosyncratic component rather than on the dynamic restrictions); on the other, Boivin and Ng (2005) show that SW's method largely outperforms FHLR's and, in particular, conjecture that the dynamic restrictions implied by the method are harmful for the forecast accuracy of the model. This chapter tries to shed some new light on these conflicting results. It focuses on the Industrial Production index (IP) and the Consumer Price Index (CPI) and bases the evaluation on a simulated out-of-sample forecasting exercise. The dataset, borrowed from Stock and Watson (2002), consists of 146 monthly series for the US economy spanning 1959 to 1999. In order to isolate and evaluate specific characteristics of the methods, a procedure is designed in which the two non-parametric approaches are nested in a common framework. In addition, for both versions of the factor model forecasts, the chapter studies the contribution of the idiosyncratic component to the forecast.
Other non-core aspects of the models are also investigated: robustness with respect to the choice of the number of factors and to variable transformations. Finally, the chapter examines the sub-sample performance of the factor-based forecasts. The purpose of this exercise is to design an experiment for assessing the contribution of the core characteristics of the different models to their forecasting performance and for discussing auxiliary issues; hopefully it may also serve as a guide for practitioners in the field. As in Stock and Watson (2005), the results show that the efficiency improvements due to the weighting of the idiosyncratic components do not lead to significantly more accurate forecasts, but, in contrast to Boivin and Ng (2005), the dynamic restrictions imposed by the procedure of Forni et al. (2005) are shown not to be harmful for predictability. The main conclusion is that the two methods have similar performance and produce highly collinear forecasts.
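As an illustration of the Stock-Watson-style static factor forecast discussed in the fourth chapter (not the author's code), the sketch below extracts principal-component factors from standardized predictors and projects the h-step-ahead target on them; the data are simulated, not the 146-series dataset used in the chapter.

```python
# A minimal sketch, using only numpy, of a static factor forecast: factors are
# the principal components of the standardized predictors, and the h-step-ahead
# target is projected on them. Data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(2)
T, N, r, h = 200, 50, 3, 1                     # periods, series, factors, horizon
F = rng.normal(size=(T, r))                    # latent factors
X = F @ rng.normal(size=(r, N)) + 0.5 * rng.normal(size=(T, N))
y = F[:, 0] + 0.1 * rng.normal(size=T)         # target driven by the first factor

Z = (X - X.mean(0)) / X.std(0)                 # standardize predictors
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
F_hat = Z @ Vt[:r].T                           # estimated factors (principal components)

# Regress y_{t+h} on the estimated factors at t, then forecast y_{T+h}
A = np.column_stack([np.ones(T - h), F_hat[:-h]])
beta, *_ = np.linalg.lstsq(A, y[h:], rcond=None)
forecast = np.concatenate(([1.0], F_hat[-1])) @ beta
print("h-step-ahead forecast:", forecast)
```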
