1

Die kombinering van vooruitskattings : 'n toepassing op die vernaamste makro-ekonomiese veranderlikes [The combining of forecasts: an application to the major macroeconomic variables]

18 February 2014 (has links)
M.Com. (Econometrics) / The main purpose of this study is the combining of forecasts, with special reference to major macroeconomic series of South Africa. The study is based on econometric principles and makes use of three macroeconomic variables, forecast with four forecasting techniques. The macroeconomic variables selected are the consumer price index, consumer expenditure on durable and semi-durable products, and real M3 money supply. Forecasts of these variables have been generated by applying the Box-Jenkins ARIMA technique, Holt's two-parameter exponential smoothing, the regression approach and multiplicative decomposition. Subsequently, the results of each individual forecast are combined in order to determine whether forecasting errors can be minimized. Traditionally, forecasting involves the identification and application of the best forecasting model. However, in the search for this unique model, it often happens that important independent information contained in one of the other models is discarded. To prevent this from happening, researchers have investigated the idea of combining forecasts. A number of researchers used the results from different techniques as inputs into the combination of forecasts. In spite of the differences in their conclusions, three basic principles have been identified in the combination of forecasts, namely: (i) the considered forecasts should represent the widest range of forecasting techniques possible; (ii) inferior forecasts should be identified; and (iii) predictable errors should be modelled and incorporated into a new forecast series. Finally, a method of combining the selected forecasts needs to be chosen. The best way of selecting a method is probably by experimenting to find the best fit over the historical data. Having generated individual forecasts, these are combined by considering the specifications of the three combination methods. The first combination method is the combination of forecasts via weighted averages. The use of weighted averages to combine forecasts allows consideration of the relative accuracy of the individual methods and of the covariances of forecast errors among the methods. Secondly, the combination of exponential smoothing and Box-Jenkins is considered. Past errors of each of the original forecasts are used to determine the weights to attach to the two original forecasts in forming the combined forecasts. Finally, the regression approach is used to combine individual forecasts. Granger and Ramanathan (1984) have shown that weights can be obtained by regressing actual values of the variable of interest on the individual forecasts, without including a constant and with the restriction that the weights add up to one. The performance of the combinations relative to the individual forecasts has been tested, with the minimization of the mean square error as the efficiency criterion. The results of both the individual and the combined forecasting methods are acceptable. Although some of the methods prove to be more accurate than others, the conclusion can be made that reliable forecasts are generated by both individual and combined forecasting methods. It is up to the researcher to decide whether to use an individual or a combined method, since the difference, if any, in the root mean square percentage errors (RMSPE) is insignificantly small.
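To make the regression-based combination described in this abstract concrete, the sketch below illustrates Granger-Ramanathan style weights: actual values are regressed on the individual forecasts with no constant and with the weights constrained to sum to one, and the combined forecast is scored by RMSPE. The data, the two forecast series and the function names are purely illustrative and not taken from the thesis; Python with NumPy is assumed.

```python
import numpy as np

def granger_ramanathan_weights(y, forecasts):
    """Combination weights from regressing actuals on the individual
    forecasts with no constant and weights restricted to sum to one
    (Granger and Ramanathan, 1984). The restriction is imposed by
    substituting w_k = 1 - (w_1 + ... + w_{k-1})."""
    f_last = forecasts[:, -1]
    X = forecasts[:, :-1] - f_last[:, None]   # regressors after substitution
    z = y - f_last                            # transformed dependent variable
    w_head, *_ = np.linalg.lstsq(X, z, rcond=None)
    return np.append(w_head, 1.0 - w_head.sum())

def rmspe(y, f):
    """Root mean square percentage error, in percent."""
    return 100.0 * np.sqrt(np.mean(((y - f) / y) ** 2))

# Illustrative series: a trending variable and two noisy individual forecasts
rng = np.random.default_rng(0)
y = 100.0 + np.cumsum(rng.normal(0.5, 1.0, size=60))
forecasts = np.column_stack([y + rng.normal(0.0, 2.0, size=60),   # e.g. an ARIMA-type forecast
                             y + rng.normal(1.0, 1.5, size=60)])  # e.g. an exponential-smoothing forecast
w = granger_ramanathan_weights(y, forecasts)
combined = forecasts @ w
print("weights:", np.round(w, 3), " RMSPE:", round(rmspe(y, combined), 3))
```

The substitution trick keeps the example to ordinary least squares; a constrained solver would give the same weights.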
2

Model selection for time series forecasting models

Billah, Baki, 1965- January 2001 (has links)
Abstract not available
3

The combination of high and low frequency data in macroeconometric forecasts: the case of Hong Kong.

January 1999 (has links)
by Chan Ka Man. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (leaves 64-65). / Abstracts in English and Chinese. / ACKNOWLEDGMENTS --- p.iii / LIST OF TABLES --- p.iv / CHAPTER / Chapter I --- INTRODUCTION --- p.1 / Chapter II --- THE LITERATURE REVIEW --- p.4 / Chapter III --- METHODOLOGY / Forecast Pooling Technique / Modified Technique / Chapter IV --- MODEL SPECIFICATIONS --- p.16 / The Monthly Models / The Quarterly Model / Data Description / Chapter V --- THE COMBINED FORECAST --- p.32 / Pooling Forecast Technique in Case of Hong Kong / The Forecasts Results / Chapter VI --- CONCLUSION --- p.38 / TABLES --- p.40 / APPENDIX --- p.53 / BIBLIOGRAPHY --- p.64
4

Three essays on financial econometrics. / CUHK electronic theses & dissertations collection

January 2013 (has links)
This thesis consists of three essays on financial econometrics. The first two essays are about multivariate density forecast evaluations. The third essay is on a nonparametric Bayesian change-point VAR model. We develop a method for multivariate density forecast evaluations. The density forecast evaluation is based on checking the uniformity and independence conditions of the probability integral transformation (PIT) of the observed series in question. In the first essay, we propose a new method which is a location-adjusted version of Clements and Smith (2002) that corrects an asymmetry problem and increases testing power. In the second essay, we develop a data-driven smooth test for multivariate density forecast evaluation and show some evidence on its finite sample performance using Monte Carlo simulations. Previous to our study, most of the works are limited to the bivariate model, as higher dimensions are difficult to evaluate with the existing methods. We propose an efficient dimensional reduction approach to reduce the dimension of multivariate density evaluation to a univariate one. We perform various Monte Carlo simulations and two applications on financial asset returns which show that our test performs well. The last essay proposes a nonparametric extension to the existing Bayesian change-point model in a multivariate setting. The previous change-point model of Chib (1998) requires specification of the number of change points a priori. Hence a posterior model comparison is needed for different change-point models. We introduce the stick-breaking prior to the change-point process, which allows us to endogenize the number of change points into the estimation procedure. Hence, the number of change points is simultaneously determined with the other unknown parameters, and our model is therefore robust to model specification. We perform a Monte Carlo simulation of a bivariate vector autoregressive VAR(2) process which is subject to four structural breaks. Our model estimates the break locations with high accuracy and the posterior estimates of the 65 parameters are close to the true values. We apply our model to various hedge fund return processes and the detected change points coincide with market crashes. / Detailed summary in vernacular field only. / Ko, Iat Meng. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 176-194). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts also in Chinese.
/ Abstract --- p.i / Acknowledgement --- p.v / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Multivariate Density Forecast Evaluation: A Modified Approach --- p.7 / Chapter 2.1 --- Introduction --- p.7 / Chapter 2.2 --- Evaluating Density Forecasts --- p.13 / Chapter 2.3 --- Monte Carlo Simulations --- p.18 / Chapter 2.3.1 --- Bivariate normal distribution --- p.19 / Chapter 2.3.2 --- The Ramberg distribution --- p.21 / Chapter 2.3.3 --- Student's t and uniform distributions --- p.24 / Chapter 2.4 --- Empirical Applications --- p.24 / Chapter 2.4.1 --- AR model --- p.25 / Chapter 2.4.2 --- GARCH model --- p.27 / Chapter 2.5 --- Conclusion --- p.29 / Chapter 3 --- Multivariate Density Forecast Evaluation: Smooth Test Approach --- p.39 / Chapter 3.1 --- Introduction --- p.39 / Chapter 3.2 --- Exponential Transformation for Multi-dimension Reduction --- p.47 / Chapter 3.3 --- The Smooth Test --- p.56 / Chapter 3.4 --- The Data-Driven Smooth Test Statistic --- p.66 / Chapter 3.4.1 --- Selection of K --- p.66 / Chapter 3.4.2 --- Choosing p of the Portmanteau based test --- p.69 / Chapter 3.5 --- Monte Carlo Simulations --- p.70 / Chapter 3.5.1 --- Multivariate normal and Student's t distributions --- p.71 / Chapter 3.5.2 --- VAR(1) model --- p.74 / Chapter 3.5.3 --- Multivariate GARCH(1,1) Model --- p.78 / Chapter 3.6 --- Density Forecast Evaluation of the DCC-GARCH Model in Density Forecast of Spot-Future returns and International Equity Markets --- p.80 / Chapter 3.7 --- Conclusion --- p.87 / Chapter 4 --- Stick-Breaking Bayesian Change-Point VAR Model with Stochastic Search Variable Selection --- p.111 / Chapter 4.1 --- Introduction --- p.111 / Chapter 4.2 --- The Bayesian Change-Point VAR Model --- p.116 / Chapter 4.3 --- The Stick-breaking Process Prior --- p.120 / Chapter 4.4 --- Stochastic Search Variable Selection (SSVS) --- p.121 / Chapter 4.4.1 --- Priors on Φ_j = vec(Φ_j) --- p.122 / Chapter 4.4.2 --- Prior on Σ_j --- p.123 / Chapter 4.5 --- The Gibbs Sampler and a Monte Carlo Simulation --- p.123 / Chapter 4.5.1 --- The posteriors of Φ_j and Σ_j --- p.123 / Chapter 4.5.2 --- MCMC Inference for SB Change-Point Model: A Gibbs Sampler --- p.126 / Chapter 4.5.3 --- A Monte Carlo Experiment --- p.128 / Chapter 4.6 --- Application to Daily Hedge Fund Return --- p.130 / Chapter 4.6.1 --- Hedge Funds Composite Indices --- p.132 / Chapter 4.6.2 --- Single Strategy Hedge Funds Indices --- p.135 / Chapter 4.7 --- Conclusion --- p.138 / Chapter A --- Derivation and Proof --- p.166 / Chapter A.1 --- Derivation of the distribution of (Z₁ - EZ₁) x (Z₂ - EZ₂) --- p.166 / Chapter A.2 --- Derivation of limiting distribution of the smooth test statistic without parameter estimation uncertainty (θ = θ₀) --- p.168 / Chapter A.3 --- Proof of Theorem 2 --- p.170 / Chapter A.4 --- Proof of Theorem 3 --- p.172 / Chapter A.5 --- Proof of Theorem 4 --- p.174 / Chapter A.6 --- Proof of Theorem 5 --- p.175 / Bibliography --- p.176
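As a concrete illustration of the PIT-based evaluation idea described in this record, the sketch below computes the probability integral transform of a series under Gaussian density forecasts, checks uniformity with a Kolmogorov-Smirnov test, and checks independence with the lag-1 autocorrelation of the PITs. This is a generic univariate illustration under assumed Gaussian forecasts, not the thesis's multivariate smooth test; Python with NumPy and SciPy is assumed.

```python
import numpy as np
from scipy import stats

def pit_evaluation(y, mu, sigma):
    """PIT-based density forecast check: under correctly specified Gaussian
    forecasts N(mu_t, sigma_t^2), the PITs z_t = F_t(y_t) should be i.i.d.
    Uniform(0, 1). Returns the KS p-value (uniformity) and the lag-1
    autocorrelation of the PITs (a rough independence check)."""
    z = stats.norm.cdf(y, loc=mu, scale=sigma)     # PIT series
    ks_pvalue = stats.kstest(z, "uniform").pvalue  # uniformity
    zc = z - z.mean()
    rho1 = np.dot(zc[1:], zc[:-1]) / np.dot(zc, zc)  # independence (lag 1)
    return ks_pvalue, rho1

# Illustration with a correctly specified forecast: expect a high p-value
# and a lag-1 autocorrelation close to zero.
rng = np.random.default_rng(1)
mu = rng.normal(0.0, 1.0, size=500)       # conditional mean forecasts
y = mu + rng.normal(0.0, 1.0, size=500)   # realisations with true std = 1
print(pit_evaluation(y, mu, sigma=1.0))
```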
5

Modelling and forecasting in the presence of structural change in the linear regression model

Azam, Mohammad Nurul, 1957- January 2001 (has links)
Abstract not available
6

Essays on dynamic macroeconomics

Steinbach, Max Rudibert 04 1900 (has links)
Thesis (PhD)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: In the first essay of this thesis, a medium-scale DSGE model is developed and estimated for the South African economy. The model is found to outperform private sector economists in forecasting CPI inflation, GDP growth and the policy rate over certain horizons. In the second essay, the benchmark DSGE model is extended to include the yield on South African 10-year government bonds. The model is then used to decompose the 10-year yield spread into (1) the structural shocks that contributed to its evolution during the inflation targeting regime of the South African Reserve Bank, as well as (2) an expected yield and a term premium. In addition, it is found that changes in the South African term premium may predict future real economic activity. Finally, the need for DSGE models to take account of financial frictions became apparent during the recent global financial crisis. As a result, the final essay incorporates a stylised banking sector into the benchmark DSGE model described above. The optimal response of the South African Reserve Bank to financial shocks is then analysed within the context of this structural model.
7

Structural models for macroeconomics and forecasting

De Antonio Liedo, David 03 May 2010 (has links)
This thesis is composed of three independent papers that investigate central debates in empirical macroeconomic modeling.

Chapter 1, entitled “A Model for Real-Time Data Assessment with an Application to GDP Growth Rates”, provides a model for the data revisions of macroeconomic variables that distinguishes between rational expectation updates and noise corrections. Thus, the model encompasses the two polar views regarding the publication process of statistical agencies: noise versus news. Most of the previous studies that analyze data revisions are based on the classical noise and news regression approach introduced by Mankiw, Runkle and Shapiro (1984). The problem is that the statistical tests available do not formulate both extreme hypotheses as collectively exhaustive, as recognized by Aruoba (2008). That is, it would be possible to reject or accept both of them simultaneously. In turn, the model for the DPP presented here allows for the simultaneous presence of both noise and news. While the “regression approach” followed by Faust et al. (2005), along the lines of Mankiw et al. (1984), identifies noise in the preliminary figures, it is not possible for them to quantify it, as done by our model.

The second and third chapters acknowledge the possibility that macroeconomic data is measured with errors, but the approach followed to model the mismeasurement is extremely stylized and does not capture the complexity of the revision process that we describe in the first chapter.

Chapter 2, entitled “Revisiting the Success of the RBC model”, proposes the use of dynamic factor models as an alternative to the VAR based tools for the empirical validation of dynamic stochastic general equilibrium (DSGE) theories. Along the lines of Giannone et al. (2006), we use the state-space parameterisation of the factor models proposed by Forni et al. (2007) as a competitive benchmark that is able to capture weak statistical restrictions that DSGE models impose on the data. Our empirical illustration compares the out-of-sample forecasting performance of a simple RBC model augmented with a serially correlated noise component against several specifications belonging to classes of dynamic factor and VAR models. Although the performance of the RBC model is comparable to that of the reduced form models, a formal test of predictive accuracy reveals that the weak restrictions are more useful at forecasting than the strong behavioral assumptions imposed by the microfoundations in the model economy.

The last chapter, “What are Shocks Capturing in DSGE modeling”, contributes to current debates on the use and interpretation of larger DSGE models. A recent tendency in academic work and at central banks is to develop and estimate large DSGE models for policy analysis and forecasting. These models typically have many shocks (e.g. Smets and Wouters, 2003 and Adolfson, Laseen, Linde and Villani, 2005). On the other hand, empirical studies point out that few large shocks are sufficient to capture the covariance structure of macro data (Giannone, Reichlin and Sala, 2005, Uhlig, 2004). In this chapter, we propose to reconcile both views by considering an alternative DSGE estimation approach which models explicitly the statistical agency along the lines of Sargent (1989). This enables us to distinguish whether the exogenous shocks in DSGE modeling are structural or instead serve the purpose of fitting the data in the presence of misspecification and measurement problems. When applied to the original Smets and Wouters (2007) model, we find that the explanatory power of the structural shocks decreases at high frequencies. This allows us to back out a smoother measure of the natural output gap than that resulting from the original specification. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
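As a rough illustration of the classical noise-versus-news regressions that the first chapter builds on (not the encompassing model the chapter itself proposes), the sketch below regresses the revision on the preliminary figure (news hypothesis: slope zero) and on the final figure (noise hypothesis: slope zero) for simulated data. The data and variable names are made up for illustration; Python with NumPy and statsmodels is assumed.

```python
import numpy as np
import statsmodels.api as sm

def noise_news_regressions(prelim, final):
    """Classical 'noise vs. news' check in the spirit of Mankiw, Runkle and
    Shapiro (1984). Let r = final - prelim be the revision.
    News:  r is orthogonal to the preliminary figure (the first release is an
           efficient forecast, so revisions are unforecastable).
    Noise: r is orthogonal to the final figure (preliminary = final + error).
    Returns the slope p-values of the two orthogonality regressions."""
    r = final - prelim
    news = sm.OLS(r, sm.add_constant(prelim)).fit()
    noise = sm.OLS(r, sm.add_constant(final)).fit()
    return news.pvalues[1], noise.pvalues[1]

# Simulated example where preliminary data equal the final figure plus pure noise:
# the news hypothesis should be rejected, the noise hypothesis should not.
rng = np.random.default_rng(2)
final = rng.normal(2.0, 1.0, size=200)            # "true" growth rates
prelim = final + rng.normal(0.0, 0.5, size=200)   # noisy first release
print(noise_news_regressions(prelim, final))
```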
8

Essays on the economics of risk and uncertainty

Berger, Loïc 22 June 2012 (has links)
In the first chapter of this thesis, I use the smooth ambiguity model developed by Klibanoff, Marinacci, and Mukerji (2005) to define the concepts of ambiguity and uncertainty premia in a way analogous to what Pratt (1964) did in the risk theory literature. I show that these concepts may be useful to quantify the effect ambiguity has on the welfare of economic agents. I also define several other concepts such as the unambiguous probability equivalent or the ambiguous utility premium, provide local approximations of these different premia and show the link that exists between them when comparing different degrees of ambiguity aversion, not only in the small but also in the large.

In the second chapter, I analyze the effect of ambiguity on self-insurance and self-protection, which are tools used to deal with the uncertainty of facing a monetary loss when market insurance is not available (in the self-insurance model, the decision maker has the opportunity to exert an effort to reduce the size of the loss occurring in the bad state of the world, while in the self-protection, or prevention, model the effort reduces the probability of being in the bad state). In a short note, in the context of a two-period model, I first examine the links between risk aversion, prudence and self-insurance/self-protection activities under risk. Contrary to the results obtained in the static one-period model, I show that the impacts of prudence and of risk aversion go in the same direction and generate a higher level of prevention in the more usual situations. I also show that the results concerning self-insurance in a single-period framework may be easily extended to a two-period context. I then consider two-period self-insurance and self-protection models in the presence of ambiguity and analyze the effect of ambiguity aversion. I show that in most common situations, ambiguity prudence is a sufficient condition to observe an increase in the level of effort. I propose an interpretation of the model in the context of climate change, so that self-insurance and self-protection are respectively seen as adaptation and mitigation efforts a policy-maker should provide to deal with an uncertain catastrophic event, and I interpret the results obtained as an expression of the Precautionary Principle.

In the third chapter, I introduce the economic theory developed to deal with ambiguity in the context of medical decision-making. I show that, under diagnostic uncertainty, an increase in ambiguity aversion always leads a physician whose goal is to act in the best interest of his patient to choose a higher level of treatment. In the context of a dichotomous choice (treatment versus no treatment), this result implies that taking into account the attitude agents generally manifest towards ambiguity may induce a physician to change his decision by opting for treatment more often. I further show that under therapeutic uncertainty the opposite happens, i.e. an ambiguity averse physician may choose not to treat a patient who would have been treated under ambiguity neutrality. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
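To make the Pratt-style analogy concrete, the sketch below computes a certainty equivalent under the Klibanoff-Marinacci-Mukerji smooth ambiguity criterion and reads the ambiguity premium as the drop in the certainty equivalent relative to an ambiguity-neutral evaluation with the same utility function. The utility and ambiguity functions, the two probability models and the payoffs are illustrative assumptions, not the specifications used in the thesis; Python with NumPy is assumed.

```python
import numpy as np

def kmm_certainty_equivalent(payoffs, probs_by_model, model_weights,
                             u, u_inv, phi, phi_inv):
    """Certainty equivalent of a lottery under the smooth ambiguity model of
    Klibanoff, Marinacci and Mukerji (2005):
        W = E_mu[ phi( E_pi[ u(x) ] ) ],   c = u^{-1}( phi^{-1}( W ) ).
    probs_by_model : (m, n) matrix, one probability vector per model pi
    model_weights  : (m,) second-order prior mu over the models"""
    eu = probs_by_model @ u(payoffs)          # E_pi[u(x)] for each model
    W = model_weights @ phi(eu)               # smooth ambiguity evaluation
    return u_inv(phi_inv(W))

# Assumed functional forms: log utility, exponential ambiguity aversion.
u, u_inv = np.log, np.exp
phi = lambda v: -np.exp(-2.0 * v)             # ambiguity-averse phi
phi_inv = lambda w: -np.log(-w) / 2.0
phi_neutral, phi_neutral_inv = (lambda v: v), (lambda w: w)

payoffs = np.array([50.0, 150.0])
models = np.array([[0.7, 0.3],                # pessimistic probability model
                   [0.3, 0.7]])               # optimistic probability model
mu = np.array([0.5, 0.5])                     # second-order prior

c_averse = kmm_certainty_equivalent(payoffs, models, mu, u, u_inv, phi, phi_inv)
c_neutral = kmm_certainty_equivalent(payoffs, models, mu, u, u_inv,
                                     phi_neutral, phi_neutral_inv)
print("ambiguity premium:", round(c_neutral - c_averse, 2))  # positive under aversion
```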
9

Essays in dynamic macroeconometrics

Bańbura, Marta 26 June 2009 (has links)
The thesis contains four essays covering topics in the field of macroeconomic forecasting.

The first two chapters consider factor models in the context of real-time forecasting with many indicators. Using a large number of predictors offers an opportunity to exploit a rich information set and is also considered to be a more robust approach in the presence of instabilities. On the other hand, it poses the challenge of how to extract the relevant information in a parsimonious way. Recent research shows that factor models provide an answer to this problem. The fundamental assumption underlying those models is that most of the co-movement of the variables in a given dataset can be summarized by only a few latent variables, the factors. This assumption seems to be warranted in the case of macroeconomic and financial data. Important theoretical foundations for large factor models were laid by Forni, Hallin, Lippi and Reichlin (2000) and Stock and Watson (2002). Since then, different versions of factor models have been applied for forecasting, structural analysis or the construction of economic activity indicators. Recently, Giannone, Reichlin and Small (2008) have used a factor model to produce projections of U.S. GDP in the presence of a real-time data flow. They propose a framework that can cope with large datasets characterised by staggered and non-synchronous data releases (sometimes referred to as a “ragged edge”). This is relevant as, in practice, important indicators like GDP are released with a substantial delay and, in the meantime, more timely variables can be used to assess the current state of the economy.

The first chapter of the thesis, entitled “A look into the factor model black box: publication lags and the role of hard and soft data in forecasting GDP”, is based on joint work with Gerhard Rünstler and applies the framework of Giannone, Reichlin and Small (2008) to the case of the euro area. In particular, we are interested in the role of “soft” and “hard” data in the GDP forecast and how it is related to their timeliness. The soft data include surveys and financial indicators and reflect market expectations. They are usually promptly available. In contrast, the hard indicators on real activity measure directly certain components of GDP (e.g. industrial production) and are published with a significant delay. We propose several measures in order to assess the role of individual or groups of series in the forecast while taking into account their respective publication lags. We find that surveys and financial data contain important information beyond the monthly real activity measures for the GDP forecasts, once their timeliness is properly accounted for.

The second chapter, entitled “Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data”, is based on joint work with Michele Modugno. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone, Reichlin and Small (2008), we can handle datasets that are not only characterised by a “ragged edge”, but can include e.g. mixed frequency or short history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach which, apart from the flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters. Applied to small factor models by e.g. Geweke (1977), Sargent and Sims (1977) or Watson and Engle (1983), it has been shown by Doz, Giannone and Reichlin (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation in the case of a large cross-section, Doz, Giannone and Reichlin (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm (used for the small model by Watson and Engle, 1983). Our contribution is to modify the EM steps to the case of missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of euro area GDP on the basis of a large panel of monthly and quarterly data. In particular, we are able to examine the effect of quarterly variables and short history monthly series like the Purchasing Managers' surveys on the forecast.

The third chapter is entitled “Large Bayesian VARs” and is based on joint work with Domenico Giannone and Lucrezia Reichlin. It proposes an alternative approach to factor models for dealing with the curse of dimensionality, namely Bayesian shrinkage. We study Vector Autoregressions (VARs), which have the advantage over factor models in that they allow structural analysis in a natural way. We consider systems including more than 100 variables; this is the first application in the literature to estimate a VAR of this size. Apart from the forecast considerations, as argued above, the size of the information set can also be relevant for the structural analysis, see e.g. Bernanke, Boivin and Eliasz (2005), Giannone and Reichlin (2006) or Christiano, Eichenbaum and Evans (1999) for a discussion. In addition, many problems may require the study of the dynamics of many variables: many countries, sectors or regions. While we use standard priors as proposed by Litterman (1986), an important novelty of the work is that we set the overall tightness of the prior in relation to the model size. In this we follow the recommendation of De Mol, Giannone and Reichlin (2008), who study the case of Bayesian regressions. They show that with increasing size of the model one should shrink more to avoid overfitting, but when data are collinear one is still able to extract the relevant sample information. We apply this principle in the case of VARs. We compare the large model with smaller systems in terms of forecasting performance and structural analysis of the effect of a monetary policy shock. The results show that a standard Bayesian VAR model is an appropriate tool for large panels of data once the degree of shrinkage is set in relation to the model size.

The fourth chapter, entitled “Forecasting euro area inflation with wavelets: extracting information from real activity and money at different scales”, proposes a framework for exploiting relationships between variables at different frequency bands in the context of forecasting. This work is motivated by the ongoing debate on whether money provides a reliable signal for future price developments. The empirical evidence on the leading role of money for inflation in an out-of-sample forecast framework is not very strong, see e.g. Lenza (2006) or Fisher, Lenza, Pill and Reichlin (2008). At the same time, e.g. Gerlach (2003) or Assenmacher-Wesche and Gerlach (2007, 2008) argue that money and output could affect prices at different frequencies; however, their analysis is performed in-sample. In this chapter, it is investigated empirically which frequency bands, and for which variables, are the most relevant for the out-of-sample forecast of inflation when the information from prices, money and real activity is considered. To extract different frequency components from a series, a wavelet transform is applied. It provides a simple and intuitive framework for band-pass filtering and allows a decomposition of series into different frequency bands. Its application in a multivariate out-of-sample forecast is novel in the literature. The results indicate that, indeed, different scales of money, prices and GDP can be relevant for the inflation forecast. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
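The sketch below gives a toy version of the idea behind the second chapter: extract factors from a panel with an arbitrary pattern of missing observations by iterating between filling the gaps with the common component and re-estimating the factors. It uses a simplified principal-components/EM scheme rather than the thesis's state-space maximum likelihood estimator, and all data and names are illustrative; Python with NumPy is assumed.

```python
import numpy as np

def em_pca_factors(X, r, n_iter=50):
    """Extract r factors from a (T, n) panel X with np.nan marking missing
    entries, via a simple EM-style iteration: standardise on observed data,
    fill gaps with the rank-r common component, re-run the SVD, repeat.
    A simplified PCA/EM scheme, not the exact maximum likelihood procedure."""
    mask = np.isnan(X)
    mean = np.nanmean(X, axis=0)
    std = np.nanstd(X, axis=0)
    Z = np.where(mask, 0.0, (X - mean) / std)        # gaps start at the column mean
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        common = (U[:, :r] * s[:r]) @ Vt[:r]          # rank-r common component
        Z = np.where(mask, common, (X - mean) / std)  # refill only the gaps
    factors = U[:, :r] * s[:r]
    loadings = Vt[:r].T
    return factors, loadings, common * std + mean     # common component, original scale

# Toy panel: two latent factors, roughly 30% of entries missing at random
rng = np.random.default_rng(4)
T, n, r = 200, 40, 2
F_true = rng.normal(size=(T, r))
X = F_true @ rng.normal(size=(r, n)) + 0.3 * rng.normal(size=(T, n))
X[rng.random((T, n)) < 0.3] = np.nan
F_hat, L_hat, common_fit = em_pca_factors(X, r)
print(F_hat.shape, L_hat.shape)                        # (200, 2) (40, 2)
```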
10

Essays on monetary policy, saving and investment

Lenza, Michèle 04 June 2007 (has links)
This thesis addresses three relevant macroeconomic issues: (i) why Central Banks behave so cautiously compared to optimal theoretical benchmarks, (ii) whether monetary variables add information about future Euro Area inflation to a large amount of non-monetary variables, and (iii) why national saving and investment are so correlated in OECD countries in spite of the high degree of integration of international financial markets.

The process of innovation in the elaboration of economic theory and statistical analysis of the data witnessed in the last thirty years has greatly enriched the toolbox available to macroeconomists. Two aspects of such a process are particularly noteworthy for addressing the issues in this thesis: the development of macroeconomic dynamic stochastic general equilibrium models (see Woodford, 1999b for an historical perspective) and of techniques that make it possible to handle large data sets in a parsimonious and flexible manner (see Reichlin, 2002 for an historical perspective).

Dynamic stochastic general equilibrium (DSGE) models provide the appropriate tools to evaluate the macroeconomic consequences of policy changes. These models, by exploiting modern intertemporal general equilibrium theory, aggregate the optimal responses of individuals as consumers and firms in order to identify the aggregate shocks and their propagation mechanisms through the restrictions imposed by optimizing individual behavior. Such a modelling strategy, uncovering economic relationships invariant to a change in policy regimes, provides a framework to analyze the effects of economic policy that is robust to the Lucas critique (see Lucas, 1976). The early attempts at explaining business cycles by starting from microeconomic behavior suggested that economic policy should play no role, since business cycles reflected the efficient response of economic agents to exogenous sources of fluctuations (see the seminal paper by Kydland and Prescott, 1982 and, more recently, King and Rebelo, 1999). This view was challenged by several empirical studies showing that the adjustment mechanisms of variables at the heart of macroeconomic propagation mechanisms, like prices and wages, are not well represented by efficient responses of individual agents in frictionless economies (see, for example, Kashyap, 1999; Cecchetti, 1986; Bils and Klenow, 2004 and Dhyne et al. 2004). Hence, macroeconomic models currently incorporate some sources of nominal and real rigidities in the DSGE framework and allow the study of the optimal policy reactions to inefficient fluctuations stemming from frictions in macroeconomic propagation mechanisms.

Against this background, the first chapter of this thesis sets up a DSGE model in order to analyze optimal monetary policy in an economy with sectorial heterogeneity in the frequency of price adjustments. Price setters are divided into two groups: those subject to Calvo-type nominal rigidities and those able to change their prices in each period. Sectorial heterogeneity in price setting behavior is a relevant feature of real economies (see, for example, Bils and Klenow, 2004 for the US and Dhyne, 2004 for the Euro Area). Hence, neglecting it would lead to an understatement of the heterogeneity in the transmission mechanisms of economy-wide shocks. In this framework, Aoki (2001) shows that a Central Bank maximizing social welfare should stabilize only inflation in the sector where prices are sticky (hereafter, core inflation). Since complete stabilization is the only true objective of the policymaker in Aoki (2001) and, hence, is not only desirable but also implementable, the equilibrium real interest rate in the economy is equal to the natural interest rate irrespective of the degree of heterogeneity that is assumed. This would lead to the conclusion that stabilizing core inflation rather than overall inflation does not imply any observable difference in the aggressiveness of the policy behavior. While maintaining the assumption of sectorial heterogeneity in the frequency of price adjustments, this chapter adds non-negligible transaction frictions to the model economy in Aoki (2001). As a consequence, the social welfare maximizing monetary policymaker faces a trade-off among the stabilization of core inflation, the economy-wide output gap and the nominal interest rate. This feature reflects the trade-offs between conflicting objectives faced by actual policymakers. The chapter shows that the existence of this trade-off makes the aggressiveness of the monetary policy reaction dependent on the degree of sectorial heterogeneity in the economy. In particular, in the presence of sectorial heterogeneity in price adjustments, Central Banks are much more likely to behave less aggressively than in an economy where all firms face nominal rigidities. Hence, the chapter concludes that the excessive caution in the conduct of monetary policy shown by actual Central Banks (see, for example, Rudebusch and Svensson, 1999 and Sack, 2000) might not represent sub-optimal behavior but, on the contrary, might be the optimal monetary policy response in the presence of a relevant sectorial dispersion in the frequency of price adjustments.

DSGE models are proving useful also in empirical applications, and recently efforts have been made to incorporate large amounts of information in their framework (see Boivin and Giannoni, 2006). However, the typical DSGE model still relies on a handful of variables. Partly, this reflects the fact that, as the number of variables increases, the specification of a plausible set of theoretical restrictions identifying aggregate shocks and their propagation mechanisms becomes cumbersome. On the other hand, several questions in macroeconomics require the study of a large amount of variables. Among others, two examples related to the second and third chapters of this thesis can help to understand why. First, policymakers analyze a large quantity of information to assess the current and future stance of their economies and, because of model uncertainty, do not rely on a single modelling framework. Consequently, macroeconomic policy can be better understood if the econometrician relies on a large set of variables without imposing too much a priori structure on the relationships governing their evolution (see, for example, Giannone et al. 2004 and Bernanke et al. 2005). Moreover, the process of integration of goods and financial markets implies that the source of aggregate shocks is increasingly global, requiring, in turn, the study of their propagation through cross-country links (see, among others, Forni and Reichlin, 2001 and Kose et al. 2003). A priori, country-specific behavior cannot be ruled out, and many of the homogeneity assumptions that are typically embodied in open macroeconomic models for keeping them tractable are rejected by the data. Summing up, in order to deal with such issues, we need modelling frameworks able to treat a large amount of variables in a flexible manner, i.e. without committing to too many a priori restrictions that are likely to be rejected by the data. The large extent of comovement among wide cross sections of economic variables suggests the existence of few common sources of fluctuations (Forni et al. 2000 and Stock and Watson, 2002) around which individual variables may display specific features: a shock to the world price of oil, for example, hits oil exporters and importers with different sign and intensity, or global technological advances can affect some countries before others (Giannone and Reichlin, 2004). Factor models mainly rely on the identification assumption that the dynamics of each variable can be decomposed into two orthogonal components, a common and an idiosyncratic one, and provide a parsimonious tool allowing the analysis of the aggregate shocks and their propagation mechanisms in a large cross section of variables. In fact, while the idiosyncratic components are poorly cross-sectionally correlated, driven by shocks specific to a variable or a group of variables or by measurement error, the common components capture the bulk of cross-sectional correlation and are driven by few shocks that affect, through variable-specific factor loadings, all items in a panel of economic time series. Focusing on the latter components allows useful insights on the identity and propagation mechanisms of aggregate shocks underlying a large amount of variables. The second and third chapters of this thesis exploit this idea.

The second chapter deals with the issue of whether monetary variables help to forecast inflation in the Euro Area harmonized index of consumer prices (HICP). Policymakers form their views on the economic outlook by drawing on large amounts of potentially relevant information. Indeed, the monetary policy strategy of the European Central Bank acknowledges that many variables and models can be informative about future Euro Area inflation. A peculiarity of such a strategy is that it assigns to monetary information the role of providing insights into the medium to long term evolution of prices, while a wide range of alternative non-monetary variables and models are employed in order to form a view on the short term and to cross-check the inference based on monetary information. However, both the academic literature and the practice of the leading Central Banks other than the ECB do not assign such a special role to monetary variables (see Gali et al. 2004 and references therein). Hence, the debate on whether money really provides relevant information for the inflation outlook in the Euro Area is still open. Specifically, this chapter addresses the issue of whether money provides useful information about future inflation beyond what is contained in a large amount of non-monetary variables. It shows that a few aggregates of the data explain a large amount of the fluctuations in a large cross section of Euro Area variables. This makes it possible to postulate a factor structure for the large panel of variables at hand and to aggregate it into a few synthetic indexes that still retain the salient features of the large cross section. The database is split into two big blocks of variables: non-monetary (baseline) and monetary variables. Results show that baseline variables provide a satisfactory predictive performance, improving on the best univariate benchmarks in the period 1997-2005 at all horizons between 6 and 36 months. Remarkably, monetary variables provide a noticeable improvement on the performance of baseline variables at horizons above two years. However, the analysis of the evolution of the forecast errors reveals that most of the gains obtained relative to univariate benchmarks of non-forecastability with baseline and monetary variables are realized in the first part of the prediction sample, up to the end of 2002, which casts doubt on the current forecastability of inflation in the Euro Area.

The third chapter is based on joint work with Domenico Giannone and gives empirical foundation to the general equilibrium explanation of the Feldstein-Horioka puzzle. Feldstein and Horioka (1980) found that domestic saving and investment in OECD countries strongly comove, contrary to the idea that high capital mobility should allow countries to seek the highest returns in global financial markets and, hence, imply a correlation among national saving and investment closer to zero than one. Moreover, capital mobility has strongly increased since the publication of Feldstein and Horioka's seminal paper, while the association between saving and investment does not seem to have decreased comparably. Through general equilibrium mechanisms, the presence of global shocks might rationalize the correlation between saving and investment. In fact, global shocks, affecting all countries, tend to create imbalances in global capital markets, causing offsetting movements in the global interest rate, and can generate the observed correlation across national saving and investment rates. However, previous empirical studies (see Ventura, 2003) that have controlled for the effects of global shocks in the context of saving-investment regressions failed to give empirical foundation to this explanation. We show that previous studies have neglected the fact that global shocks may propagate heterogeneously across countries, failing to properly isolate components of saving and investment that are affected by non-pervasive shocks. We propose a novel factor-augmented panel regression methodology that allows us to isolate idiosyncratic sources of fluctuations under the assumption of heterogeneous transmission mechanisms of global shocks. Remarkably, by applying our methodology, the association between domestic saving and investment decreases considerably over time, consistent with the observed increase in international capital mobility. In particular, in the last 25 years the correlation between saving and investment disappears. / Doctorat en sciences économiques, Orientation économie / info:eu-repo/semantics/nonPublished
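As a stylised illustration of the third chapter's idea (strip out common global factors, then look at the saving-investment relation in the remaining idiosyncratic variation), the sketch below removes principal-component factors from simulated saving and investment panels and computes a pooled Feldstein-Horioka style slope on the residuals. The factor extraction and the simulated panels are illustrative assumptions, not the chapter's actual factor-augmented panel estimator; Python with NumPy is assumed.

```python
import numpy as np

def idiosyncratic_component(panel, r):
    """Remove the first r principal-component common factors from a (T, N)
    panel, leaving the idiosyncratic part. A stylised stand-in for the
    factor-augmented step described in the chapter."""
    Z = panel - panel.mean(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    common = (U[:, :r] * s[:r]) @ Vt[:r]
    return Z - common

# Hypothetical saving and investment rates for N countries, both driven by
# the same global shocks plus independent country-specific noise.
rng = np.random.default_rng(5)
T, N, r = 120, 24, 2
global_shocks = rng.normal(size=(T, r))
saving = global_shocks @ rng.normal(size=(r, N)) + rng.normal(size=(T, N))
invest = global_shocks @ rng.normal(size=(r, N)) + rng.normal(size=(T, N))

s_idio = idiosyncratic_component(saving, r)
i_idio = idiosyncratic_component(invest, r)
# Pooled Feldstein-Horioka style slope on the idiosyncratic components:
# with the global factors removed, the coefficient should be close to zero.
beta = (s_idio.ravel() @ i_idio.ravel()) / (s_idio.ravel() @ s_idio.ravel())
print("saving-retention coefficient on idiosyncratic components:", round(beta, 3))
```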
