Understanding co-movements in macro and financial variables. D'Agostino, Antonello, 09 January 2007
Over recent years, the growing availability of large datasets and improvements in computational speed have further fostered research in both macroeconomic modeling and forecasting. A primary focus of these research areas is to improve model performance by exploiting the informational content of many time series. Increasing the dimension of macro models is indeed crucial for a detailed structural understanding of the economic environment, as well as for accurate forecasting. As a consequence, a new generation of large-scale macro models, based on the micro-foundations of a fully specified dynamic stochastic general equilibrium set-up, has become one of the most flourishing research areas in both central banks and academia. At the same time, there has been a revival of forecasting methods dealing with many predictors, such as factor models. The central idea of factor models is to exploit co-movements among variables through a parsimonious econometric structure: a few underlying common shocks, or factors, explain most of the co-variation among variables, while the unexplained component of each series reflects purely idiosyncratic dynamics. The generality of this framework makes factor models suitable for describing a broad variety of macroeconomic and financial settings. The recent revival of factor models stems from important developments achieved by Stock and Watson (2002) and Forni, Hallin, Lippi and Reichlin (2000). These authors derive the conditions under which certain data averages become collinear with the space spanned by the factors as the cross-section dimension becomes large. Moreover, their factor specifications allow the idiosyncratic dynamics to be mildly cross-correlated (an effect referred to as the 'approximate factor structure' by Chamberlain and Rothschild, 1983), a situation empirically verified in many applications. These findings have relevant implications. The most important is that a large number of series no longer represents a dimensional constraint; on the contrary, it helps to identify the factor space. This new generation of factor models has been applied in several areas of macroeconomics and finance, as well as for policy evaluation. It is consequently very likely to become a milestone in the literature on forecasting methods using many predictors. This thesis contributes to the empirical literature on factor models by proposing four original applications.

In the first chapter of this thesis, the generalized dynamic factor model of Forni et al. (2002) is employed to explore the predictive content of asset returns in forecasting Consumer Price Index (CPI) inflation and the growth rate of Industrial Production (IP). The connection between stock markets and economic growth is well known. In the fundamental valuation of equity, the stock price equals the discounted future stream of expected dividends. Since future dividends are related to future growth, a revision of prices, and hence returns, should signal movements in the future growth path. Other important transmission channels, such as Tobin's q theory (Tobin, 1969), the wealth effect and capital market imperfections, have also been widely studied in this literature. I show that an aggregate index, such as the S&P500, could be misleading if used as a proxy for the informative content of the stock market as a whole.
Despite the widespread wisdom of considering such an index a leading variable, only some of the assets included in the index lead the variables of interest. Its forecasting performance might therefore be poor, leading to sceptical conclusions about the effectiveness of asset prices in forecasting macroeconomic variables. The main idea of the first essay is therefore to analyze the lead-lag structure of the assets composing the S&P500. The classification into leading, lagging and coincident variables is achieved by means of the cross-correlation function cleaned of idiosyncratic noise and short-run fluctuations. I assume that asset returns follow a factor structure; that is, they are the sum of two parts: a common part driven by a few shocks common to all the assets, and an idiosyncratic, asset-specific part. The correlation function, computed on the common part of the series, is not affected by the assets' specific dynamics and should provide information only on series driven by the same common factors. Once the leading series are identified, they are grouped within the economic sector they belong to. The predictive content of these aggregates in forecasting IP growth and CPI inflation is then explored and compared with the forecasting power of the S&P500 composite index. The forecasting exercise is conducted as follows: first, in an autoregressive (AR) model I choose the truncation lag that minimizes the Mean Square Forecast Error (MSFE) over 11 years of out-of-sample simulations for 1, 6 and 12 steps ahead, both for the IP growth rate and for CPI inflation. Second, the S&P500 is added as an explanatory variable to the previous AR specification. I repeat the simulation exercise and find only very small improvements in the MSFE statistics. Third, sector averages of the leading stock-return series are added as additional explanatory variables in the benchmark regression. Remarkable improvements over the benchmark specification are achieved, especially at the one-year forecast horizon. Significant improvements are also achieved at shorter forecast horizons when the leading series of the technology and energy sectors are used.

The second chapter of this thesis disentangles the sources of aggregate risk and measures the extent of co-movements in five European stock markets. Based on the static factor model of Stock and Watson (2002), it proposes a new method for measuring the impact of international, national and industry-specific shocks. The process of European economic and monetary integration with the advent of the EMU has been a central issue for investors and policy makers. During these years, the number of studies on the integration and linkages among European stock markets has increased enormously. Given their forward-looking nature, stock prices are considered a key variable for assessing developments in economic and financial markets. Therefore, measuring the extent of co-movements between European stock markets has become, especially in recent years, one of the main concerns both for policy makers, who want to best shape their policy responses, and for investors, who need to adapt their hedging strategies to the new political and economic environment. An optimal portfolio allocation strategy is based on a timely identification of the factors affecting asset returns.
So far, a literature dating back to Solnik (1974) has identified national factors as the main contributors to the co-variation among stock returns, with industry factors playing a marginal role. The increasing financial and economic integration of recent years, fostered by the decline of trade barriers and greater policy coordination, should have strongly reduced the importance of national factors and increased the importance of global determinants, such as industry determinants. However, somewhat puzzlingly, recent studies have found that country sources are still very important and generally more important than industry ones. This chapter tries to shed some light on these conflicting results. It proposes a more flexible econometric estimation strategy, better suited to disentangling and measuring the impact of global and country factors. Results point to a declining influence of national determinants and to an increasing influence of industry ones, with international influences remaining the most important driving force of excess returns. These findings overturn the results in the literature and have important implications for strategic portfolio allocation policies, which need to be revisited and adapted to the changed financial and economic scenario.

The third chapter presents a new stylized fact which can be helpful for discriminating among alternative explanations of U.S. macroeconomic stability. The main finding is that the fall in time-series volatility is associated with a sizable decline, of the order of 30% on average, in the predictive accuracy of several widely used forecasting models, including the factor models proposed by Stock and Watson (2002). This pattern is not limited to measures of inflation but extends to several indicators of real economic activity and interest rates. The generalized fall in predictive ability after the mid-1980s is particularly pronounced for forecast horizons beyond one quarter. Furthermore, this empirical regularity is not specific to a single method; rather, it is a common feature of all models, including those used by public and private institutions. In particular, the forecasts for output and inflation of the Fed's Greenbook and the Survey of Professional Forecasters (SPF) are significantly more accurate than a random walk only before 1985. After this date, in contrast, the hypothesis of equal predictive ability between naive random-walk forecasts and the predictions of those institutions cannot be rejected at any horizon, the only exception being the current quarter. The results of this chapter may also be of interest for the empirical literature on asymmetric information. Romer and Romer (2000), for instance, consider a sample ending in the early 1990s and find that the Fed produced more accurate forecasts of inflation and output than several commercial providers. The results here imply that the informational advantage of the Fed and of those private forecasters is in fact limited to the 1970s and the beginning of the 1980s. In contrast, during the last two decades no forecasting model does better than "tossing a coin" beyond the first-quarter horizon, implying that on average uninformed economic agents can effectively anticipate future macroeconomic developments. On the other hand, econometric models and economists' judgement are quite helpful for forecasts over the very short horizon that is relevant for conjunctural analysis.
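The comparison with the naive benchmark in this chapter rests on testing the hypothesis of equal predictive accuracy between a forecasting model and a random walk. The snippet below is a minimal sketch of such a comparison in the spirit of the Diebold-Mariano statistic, assuming two series of h-step-ahead forecast errors are already available; the function name and the rectangular lag window are illustrative choices, not necessarily the test used in the thesis.

```python
import numpy as np

def dm_statistic(e_model, e_naive, h=1):
    """Diebold-Mariano-type statistic for equal squared-error accuracy.

    e_model, e_naive : arrays of h-step-ahead forecast errors from the
    competing forecast and from the naive random-walk benchmark.
    |stat| above roughly 1.96 rejects equal predictive ability at the 5% level;
    a negative value indicates that the model beats the random walk.
    """
    e_model = np.asarray(e_model, dtype=float)
    e_naive = np.asarray(e_naive, dtype=float)
    d = e_model**2 - e_naive**2                  # loss differential
    T = len(d)
    d_bar = d.mean()
    dev = d - d_bar
    # Long-run variance with a rectangular window of h-1 autocovariances,
    # the usual choice for h-step-ahead forecast errors.
    lrv = np.mean(dev**2)
    for k in range(1, h):
        lrv += 2.0 * np.mean(dev[k:] * dev[:-k])
    return d_bar / np.sqrt(lrv / T)
```

A markedly negative statistic before 1985 and an insignificant one afterwards would reproduce the pattern of vanishing predictability described above.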
Moreover, the literature on forecasting methods, recently surveyed by Stock and Watson (2005), has devoted a great deal of attention to identifying the best model for predicting inflation and output. The majority of studies, however, are based on full-sample periods. The main findings of the chapter reveal that most of the full-sample predictability of U.S. macroeconomic series arises from the years before 1985. Long time series appear to attach a far larger weight to the earlier sub-sample, which is characterized by a larger volatility of inflation and output. Results also suggest that some caution should be used when evaluating the performance of alternative forecasting models on the basis of a pool of different sub-periods, as full-sample analyses are likely to miss parameter instability.

The fourth chapter performs a detailed forecast comparison between the static factor model of Stock and Watson (2002) (SW) and the dynamic factor model of Forni et al. (2005) (FHLR). It is not the first work to perform such an evaluation: Boivin and Ng (2005) focus on a very similar problem, while Stock and Watson (2005) compare the performance of a larger class of predictors. The SW and FHLR methods essentially differ in the computation of the forecast of the common component; in particular, they differ in the estimation of the factor space and in the way projections onto this space are performed. In SW, the factors are estimated by static Principal Components (PC) of the sample covariance matrix, and the forecast of the common component is simply the projection of the predicted variable on the factors. FHLR propose efficiency improvements in two directions. First, they estimate the common factors by Generalized Principal Components (GPC), in which observations are weighted according to their signal-to-noise ratio. Second, they impose the constraints implied by the dynamic factor structure when the variables of interest are projected on the common factors. Specifically, they take into account the leading and lagging relations across series by means of principal components in the frequency domain, which allows for an efficient aggregation of variables that may be out of phase. Whether these efficiency improvements help forecasting in a finite sample is, however, an empirical question on which the literature has not yet reached a consensus. On the one hand, Stock and Watson (2005) show that both methods perform similarly (although they focus on the weighting of the idiosyncratic components and not on the dynamic restrictions), while Boivin and Ng (2005) show that SW's method largely outperforms FHLR's and, in particular, conjecture that the dynamic restrictions implied by the method are harmful for the forecast accuracy of the model. This chapter tries to shed some new light on these conflicting results. It focuses on the Industrial Production index (IP) and the Consumer Price Index (CPI) and bases the evaluation on a simulated out-of-sample forecasting exercise. The data set, borrowed from Stock and Watson (2002), consists of 146 monthly time series for the US economy spanning 1959 to 1999. In order to isolate and evaluate specific characteristics of the methods, a procedure is designed in which the two non-parametric approaches are nested in a common framework. In addition, for both versions of the factor-model forecasts, the chapter studies the contribution of the idiosyncratic component to the forecast.
Other non-core aspects of the models are also investigated: robustness with respect to the choice of the number of factors and to variable transformations. Finally, the chapter analyzes the sub-sample performance of the factor-based forecasts. The purpose of this exercise is to design an experiment for assessing the contribution of the core characteristics of the different models to forecasting performance and for discussing auxiliary issues; hopefully it may also serve as a guide for practitioners in the field. As in Stock and Watson (2005), results show that the efficiency improvements due to the weighting of the idiosyncratic components do not lead to significantly more accurate forecasts but, in contrast to Boivin and Ng (2005), the dynamic restrictions imposed by the procedure of Forni et al. (2005) are shown not to be harmful for predictability. The main conclusion is that the two methods have a similar performance and produce highly collinear forecasts.

Doctorate in Economic Sciences (economics orientation)
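The Stock and Watson (2002) approach that recurs throughout this thesis, factors estimated by static principal components and then used in a diffusion-index forecasting regression evaluated out of sample against an autoregressive benchmark, can be summarized in a minimal sketch. The code below is only an illustration under simplifying assumptions (a balanced, standardized panel, a fixed number of factors and lags); the function names are hypothetical and this is not the implementation used in the thesis.

```python
import numpy as np

def extract_factors(X, r):
    """Static principal-component factors in the spirit of Stock and Watson (2002).

    X : (T, N) balanced panel of predictors; each series is standardized inside.
    r : number of factors retained.
    Returns the (T, r) matrix of estimated factors.
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
    loadings = eigvec[:, ::-1][:, :r]        # eigenvectors of the r largest eigenvalues
    return Z @ loadings                      # principal components = factor estimates

def diffusion_index_forecast(y, F, h, p=4):
    """Forecast y at horizon h by projecting y_{t+h} on current factors and p lags of y."""
    T = len(y)
    rows, target = [], []
    for t in range(p - 1, T - h):
        rows.append(np.r_[1.0, F[t], y[t - p + 1:t + 1][::-1]])
        target.append(y[t + h])
    beta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(target), rcond=None)
    return np.r_[1.0, F[T - 1], y[T - p:T][::-1]] @ beta

def recursive_msfe(y, X, h=12, r=3, first_window=120):
    """Recursive out-of-sample MSFE of the factor forecast and of a pure AR benchmark."""
    err_factor, err_ar = [], []
    for T0 in range(first_window, len(y) - h):
        F = extract_factors(X[:T0], r)
        err_factor.append(y[T0 - 1 + h] - diffusion_index_forecast(y[:T0], F, h))
        # The AR benchmark is the same regression with no factors.
        err_ar.append(y[T0 - 1 + h] - diffusion_index_forecast(y[:T0], np.zeros((T0, 0)), h))
    return np.mean(np.square(err_factor)), np.mean(np.square(err_ar))
```

In practice the number of factors and the lag length would be chosen by information criteria, and FHLR-type refinements would replace the plain covariance eigen-decomposition with signal-to-noise weighting and frequency-domain aggregation; the sketch only fixes the common skeleton of the exercises described above.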
Dynamic semiparametric factor models. Borak, Szymon, 11 July 2008
High-dimensional regression problems which exhibit dynamic behavior occur frequently in many fields of science. The dynamics of the whole complex system is typically analyzed through the time propagation of a small number of factors, which are loaded with time-invariant functions of explanatory variables. In this thesis we consider a dynamic semiparametric factor model with nonparametric loading functions. We start with a short discussion of related statistical techniques and present the properties of the model. Real-data applications are then discussed, with particular focus on the dynamics of implied volatility and the resulting factor hedging of barrier options.
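The model decomposes a high-dimensional surface observed over time (for instance an implied volatility surface on a grid of moneyness and maturity) into a few time-varying factors weighted by time-invariant loading functions of the covariates. Below is a minimal alternating-least-squares sketch of this idea, assuming a fixed one-dimensional covariate grid and a plain polynomial basis in place of the splines or kernel smoothers typically used; the names and the estimation details are illustrative, not the thesis's implementation.

```python
import numpy as np

def fit_dsfm(Y, X, L=3, degree=3, n_iter=200):
    """Toy alternating-least-squares fit of Y_t(x_j) ~ sum_l z_{t,l} * m_l(x_j).

    Y : (T, J) observations of the surface on a fixed grid of J points.
    X : (J,)  covariate values of the grid (one-dimensional for simplicity).
    L : number of dynamic factors; degree : polynomial degree of the basis.
    Returns Z (T, L) factor series and a function m(x) evaluating the loadings.
    """
    Psi = np.vander(X, degree + 1, increasing=True)        # (J, K) basis matrix
    rng = np.random.default_rng(0)
    A = rng.standard_normal((L, Psi.shape[1]))             # basis coefficients of m_l
    for _ in range(n_iter):
        B = Psi @ A.T                                      # (J, L) loadings on the grid
        Z = np.linalg.lstsq(B, Y.T, rcond=None)[0].T       # (T, L) factors given loadings
        # Loadings given factors: Y ~ Z A Psi', a separable least-squares problem.
        A = np.linalg.lstsq(Z, Y, rcond=None)[0] @ Psi @ np.linalg.pinv(Psi.T @ Psi)

    def m(x):
        return np.vander(np.atleast_1d(x), degree + 1, increasing=True) @ A.T

    return Z, m
```

Note that factors and loading functions are only identified up to an invertible rotation, so additional normalizations are needed before the factor series can be interpreted or used for hedging.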
Essays on monetary policy, saving and investment. Lenza, Michèle, 04 June 2007
This thesis addresses three relevant macroeconomic issues: (i) why Central Banks behave so cautiously compared to optimal theoretical benchmarks, (ii) whether monetary variables add information about future Euro Area inflation to a large set of non-monetary variables, and (iii) why national saving and investment are so correlated in OECD countries in spite of the high degree of integration of international financial markets.

The process of innovation in economic theory and in the statistical analysis of data witnessed over the last thirty years has greatly enriched the toolbox available to macroeconomists. Two aspects of this process are particularly noteworthy for addressing the issues in this thesis: the development of macroeconomic dynamic stochastic general equilibrium models (see Woodford, 1999b for a historical perspective) and of techniques that make it possible to handle large data sets in a parsimonious and flexible manner (see Reichlin, 2002 for a historical perspective).

Dynamic stochastic general equilibrium (DSGE) models provide the appropriate tools to evaluate the macroeconomic consequences of policy changes. These models, by exploiting modern intertemporal general equilibrium theory, aggregate the optimal responses of individuals, as consumers and firms, and identify aggregate shocks and their propagation mechanisms through the restrictions imposed by optimizing individual behavior. Such a modelling strategy, by uncovering economic relationships that are invariant to changes in policy regime, provides a framework for analyzing the effects of economic policy that is robust to the Lucas critique (see Lucas, 1976). The early attempts to explain business cycles starting from microeconomic behavior suggested that economic policy should play no role, since business cycles reflected the efficient response of economic agents to exogenous sources of fluctuations (see the seminal paper by Kydland and Prescott, 1982 and, more recently, King and Rebelo, 1999). This view was challenged by several empirical studies showing that the adjustment mechanisms of variables at the heart of macroeconomic propagation, such as prices and wages, are not well represented by the efficient responses of individual agents in frictionless economies (see, for example, Kashyap, 1999; Cecchetti, 1986; Bils and Klenow, 2004; and Dhyne et al., 2004). Hence, macroeconomic models currently incorporate sources of nominal and real rigidity in the DSGE framework and allow the study of optimal policy reactions to inefficient fluctuations stemming from frictions in the macroeconomic propagation mechanisms.

Against this background, the first chapter of this thesis sets up a DSGE model in order to analyze optimal monetary policy in an economy with sectoral heterogeneity in the frequency of price adjustments. Price setters are divided into two groups: those subject to Calvo-type nominal rigidities and those able to change their prices in every period. Sectoral heterogeneity in price-setting behavior is a relevant feature of real economies (see, for example, Bils and Klenow, 2004 for the US and Dhyne et al., 2004 for the Euro Area); neglecting it would lead to an understatement of the heterogeneity in the transmission mechanisms of economy-wide shocks.
In this framework, Aoki (2001) shows that a Central Bank maximizing social welfare should stabilize only inflation in the sector where prices are sticky (hereafter, core inflation). Since complete stabilization is the only true objective of the policymaker in Aoki (2001) and is therefore not only desirable but also implementable, the equilibrium real interest rate in the economy is equal to the natural interest rate irrespective of the degree of heterogeneity that is assumed. This would lead to the conclusion that stabilizing core inflation rather than overall inflation does not imply any observable difference in the aggressiveness of policy behavior. While maintaining the assumption of sectoral heterogeneity in the frequency of price adjustments, this chapter adds non-negligible transaction frictions to the model economy in Aoki (2001). As a consequence, the social-welfare-maximizing monetary policymaker faces a trade-off among the stabilization of core inflation, the economy-wide output gap and the nominal interest rate, a feature that reflects the trade-offs between conflicting objectives faced by actual policymakers. The chapter shows that the existence of this trade-off makes the aggressiveness of the monetary policy reaction dependent on the degree of sectoral heterogeneity in the economy. In particular, in the presence of sectoral heterogeneity in price adjustments, Central Banks are much more likely to behave less aggressively than in an economy where all firms face nominal rigidities. Hence, the chapter concludes that the excessive caution in the conduct of monetary policy shown by actual Central Banks (see, for example, Rudebusch and Svensson, 1999 and Sack, 2000) might not represent sub-optimal behavior but, on the contrary, might be the optimal monetary policy response in the presence of relevant sectoral dispersion in the frequency of price adjustments.

DSGE models are also proving useful in empirical applications, and efforts have recently been made to incorporate large amounts of information in their framework (see Boivin and Giannoni, 2006). However, the typical DSGE model still relies on a handful of variables. Partly, this reflects the fact that, as the number of variables increases, specifying a plausible set of theoretical restrictions that identify aggregate shocks and their propagation mechanisms becomes cumbersome. On the other hand, several questions in macroeconomics require the study of a large number of variables. Two examples, related to the second and third chapters of this thesis, can help to understand why. First, policymakers analyze a large quantity of information to assess the current and future stance of their economies and, because of model uncertainty, do not rely on a single modelling framework. Consequently, macroeconomic policy can be better understood if the econometrician relies on a large set of variables without imposing too much a priori structure on the relationships governing their evolution (see, for example, Giannone et al., 2004 and Bernanke et al., 2005). Moreover, the process of integration of goods and financial markets implies that the source of aggregate shocks is increasingly global, requiring, in turn, the study of their propagation through cross-country links (see, among others, Forni and Reichlin, 2001 and Kose et al., 2003).
A priori, country-specific behavior cannot be ruled out, and many of the homogeneity assumptions typically embodied in open-economy macroeconomic models to keep them tractable are rejected by the data. Summing up, in order to deal with such issues we need modelling frameworks able to treat a large number of variables in a flexible manner, i.e. without pre-committing to too many a priori restrictions that are likely to be rejected by the data. The large extent of comovement among wide cross-sections of economic variables suggests the existence of a few common sources of fluctuations (Forni et al., 2000 and Stock and Watson, 2002) around which individual variables may display specific features: a shock to the world price of oil, for example, hits oil exporters and importers with different sign and intensity, and global technological advances can affect some countries before others (Giannone and Reichlin, 2004). Factor models mainly rely on the identifying assumption that the dynamics of each variable can be decomposed into two orthogonal components, common and idiosyncratic, and provide a parsimonious tool for analyzing aggregate shocks and their propagation mechanisms in a large cross-section of variables. In fact, while the idiosyncratic components are poorly cross-sectionally correlated, being driven by shocks specific to a variable or a group of variables or by measurement error, the common components capture the bulk of the cross-sectional correlation and are driven by a few shocks that affect, through variable-specific factor loadings, all the items in a panel of economic time series. Focusing on the latter components provides useful insights into the identity and propagation mechanisms of the aggregate shocks underlying a large number of variables. The second and third chapters of this thesis exploit this idea.

The second chapter deals with the issue of whether monetary variables help to forecast inflation measured by the Euro Area Harmonized Index of Consumer Prices (HICP). Policymakers form their views on the economic outlook by drawing on large amounts of potentially relevant information. Indeed, the monetary policy strategy of the European Central Bank acknowledges that many variables and models can be informative about future Euro Area inflation. A peculiarity of this strategy is that it assigns to monetary information the role of providing insights into the medium- to long-term evolution of prices, while a wide range of alternative non-monetary variables and models are employed to form a view on the short term and to cross-check the inference based on monetary information. However, neither the academic literature nor the practice of leading Central Banks other than the ECB assigns such a special role to monetary variables (see Gali et al., 2004 and references therein). Hence, the debate on whether money really provides relevant information for the inflation outlook in the Euro Area is still open. Specifically, this chapter addresses the issue of whether money provides useful information about future inflation beyond what is contained in a large set of non-monetary variables. It shows that a few aggregates of the data explain a large share of the fluctuations in a large cross-section of Euro Area variables.
This makes it possible to postulate a factor structure for the large panel of variables at hand and to aggregate it into a few synthetic indexes that still retain the salient features of the large cross-section. The database is split into two large blocks of variables: non-monetary (baseline) and monetary variables. Results show that the baseline variables provide a satisfactory predictive performance, improving on the best univariate benchmarks over the period 1997-2005 at all horizons between 6 and 36 months. Remarkably, monetary variables provide an appreciable improvement on the performance of the baseline variables at horizons above two years. However, the analysis of the evolution of the forecast errors reveals that most of the gains relative to the univariate non-forecastability benchmarks, with both baseline and monetary variables, are realized in the first part of the prediction sample, up to the end of 2002, which casts doubt on the current forecastability of inflation in the Euro Area.

The third chapter is based on joint work with Domenico Giannone and gives empirical foundation to the general equilibrium explanation of the Feldstein-Horioka puzzle. Feldstein and Horioka (1980) found that domestic saving and investment in OECD countries strongly comove, contrary to the idea that high capital mobility should allow countries to seek the highest returns in global financial markets and hence imply a correlation between national saving and investment closer to zero than to one. Moreover, capital mobility has strongly increased since the publication of Feldstein and Horioka's seminal paper, while the association between saving and investment does not seem to have decreased comparably. Through general equilibrium mechanisms, the presence of global shocks might rationalize the correlation between saving and investment. In fact, global shocks, affecting all countries, tend to create imbalances on global capital markets, causing offsetting movements in the global interest rate, and can generate the observed correlation across national saving and investment rates. However, previous empirical studies (see Ventura, 2003) that have controlled for the effects of global shocks in the context of saving-investment regressions failed to give empirical foundation to this explanation. We show that previous studies have neglected the fact that global shocks may propagate heterogeneously across countries, thereby failing to properly isolate the components of saving and investment that are affected by non-pervasive shocks. We propose a novel factor-augmented panel regression methodology that makes it possible to isolate idiosyncratic sources of fluctuations under the assumption of heterogeneous transmission mechanisms of global shocks. Remarkably, when our methodology is applied, the association between domestic saving and investment decreases considerably over time, consistent with the observed increase in international capital mobility. In particular, over the last 25 years the correlation between saving and investment disappears.

Doctorate in Economic Sciences (economics orientation)
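The factor-augmented regression idea described in the third chapter can be illustrated with a minimal sketch: extract common factors from the pooled panel of saving and investment rates, allow each country its own loadings, and compare the saving coefficient in a plain Feldstein-Horioka regression with the coefficient once the common components are controlled for. The code below is a stylized illustration under simplifying assumptions (balanced annual panels, factors estimated by principal components, country-by-country OLS); the names and the exact specification are hypothetical and do not reproduce the chapter's actual methodology.

```python
import numpy as np

def common_factors(panel, r):
    """Principal-component estimate of r common factors from a (T, N) panel."""
    Z = (panel - panel.mean(axis=0)) / panel.std(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
    return Z @ eigvec[:, ::-1][:, :r]                      # (T, r) factor estimates

def fh_coefficients(saving, invest, r=2):
    """Average saving-retention coefficient, raw and factor-augmented.

    saving, invest : (T, N) panels of national saving and investment rates
    (T years, N countries). Returns the cross-country average of the saving
    coefficient (i) in a plain regression of investment on saving and (ii) in a
    regression that also controls for common factors extracted from the pooled
    panel, with country-specific loadings.
    """
    T, N = saving.shape
    F = common_factors(np.hstack([saving, invest]), r)      # proxy for global shocks
    beta_raw, beta_aug = [], []
    for i in range(N):
        X_raw = np.column_stack([np.ones(T), saving[:, i]])
        X_aug = np.column_stack([np.ones(T), saving[:, i], F])
        beta_raw.append(np.linalg.lstsq(X_raw, invest[:, i], rcond=None)[0][1])
        beta_aug.append(np.linalg.lstsq(X_aug, invest[:, i], rcond=None)[0][1])
    return np.mean(beta_raw), np.mean(beta_aug)
```

If global shocks with heterogeneous country loadings drive much of the comovement, the factor-augmented coefficient should fall towards zero, which is the qualitative pattern the chapter documents for the most recent decades.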