101

Macroeconometrics with high-dimensional data

Zeugner, Stefan 12 September 2012 (has links)
CHAPTER 1:
The default g-priors predominant in Bayesian Model Averaging tend to over-concentrate posterior mass on a tiny set of models, a feature we denote the 'supermodel effect'. To address it, we propose a 'hyper-g' prior specification, whose data-dependent shrinkage adapts posterior model distributions to data quality. We demonstrate the asymptotic consistency of the hyper-g prior and its interpretation as a goodness-of-fit indicator. Moreover, we highlight the similarities between hyper-g and 'Empirical Bayes' priors, and introduce closed-form expressions essential to computational feasibility. The robustness of the hyper-g prior is demonstrated via simulation analysis and by comparing four vintages of economic growth data.

CHAPTER 2:
Ciccone and Jarocinski (2010) show that inference in Bayesian Model Averaging (BMA) can be highly sensitive to small data perturbations. In particular, they demonstrate that the importance attributed to potential growth determinants varies tremendously over different revisions of international income data. They conclude that 'agnostic' priors appear too sensitive for this strand of growth empirics. In response, we show that the instability they find owes much to a specific BMA set-up: first, comparing the same countries across data revisions improves robustness; second, much of the remaining variation can be reduced by applying an equally 'agnostic', but flexible, prior.

CHAPTER 3:
This chapter explores the link between the leverage of the US financial sector, of households and of non-financial businesses, and real activity. We document that leverage is negatively correlated with the future growth of real activity, and positively linked to the conditional volatility of future real activity and of equity returns.
The joint information in sectoral leverage series is more relevant for predicting future real activity than the information contained in any individual leverage series. Using in-sample regressions and out-of-sample forecasts, we show that the predictive power of leverage is roughly comparable to that of the macro and financial predictors commonly used by forecasters.
Leverage information, however, would not have made it possible to predict the 'Great Recession' of 2008-2009 any better than conventional macro/financial predictors.

CHAPTER 4:
Model averaging has proven popular for inference with many potential predictors in small samples. However, it is frequently criticized for a lack of robustness with respect to prediction and inference. This chapter explores the reasons for such robustness problems and proposes to address them by transforming the subset of potential 'control' predictors into principal components in suitable datasets. A simulation analysis shows that this approach yields robustness advantages over both standard model averaging and principal component-augmented regression. Moreover, we devise a prior framework that extends model averaging to uncertainty over the set of principal components, and show that it offers considerable improvements with respect to the robustness of estimates and inference about the importance of covariates. Finally, we empirically benchmark our approach against popular model averaging and PC-based techniques in evaluating financial indicators as alternatives to established macroeconomic predictors of real economic activity. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
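For reference, Chapter 1's starting point is the Zellner g-prior, and the hyper-g family replaces the fixed g with a prior on the shrinkage. The sketch below follows the standard form in Liang et al. (2008); the thesis's exact parameterization is an assumption here and may differ:

```latex
% Zellner g-prior on the coefficients of candidate model M_j
\beta_j \mid g, \sigma^2, M_j \;\sim\; \mathcal{N}\!\left(0,\; g\,\sigma^2\,(X_j' X_j)^{-1}\right)

% hyper-g: a Beta-type prior on g rather than a fixed value
p(g) \;=\; \frac{a-2}{2}\,(1+g)^{-a/2}, \qquad g > 0,\; a > 2
```

Under this prior the shrinkage factor $g/(1+g)$ follows a Beta$(1, a/2-1)$ distribution, which is what lets posterior model weights adapt to data quality rather than concentrating on a few 'supermodels'.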
102

Essays on macroeconometrics and short-term forecasting

Cicconi, Claudia 11 September 2012 (has links)
The thesis, entitled "Essays on macroeconometrics and short-term forecasting", is composed of three chapters. The first two chapters are on nowcasting, a topic that has received increasing attention among both practitioners and academics, especially in conjunction with and in the aftermath of the 2008-2009 economic crisis. At the heart of the two chapters is the idea of exploiting the information in data published at a higher frequency to obtain early estimates of the macroeconomic variable of interest. The models used to compute the nowcasts are dynamic models conceived to handle efficiently the characteristics of data used in a real-time context, such as the fact that, due to the different frequencies and the non-synchronicity of the releases, the time series in general have missing data at the end of the sample. While the first chapter uses a small model, a VAR, for nowcasting Italian GDP, the second makes use of a dynamic factor model, more suitable for handling medium-to-large data sets, to provide early estimates of employment in the euro area. The third chapter develops a topic only marginally touched on by the second chapter, namely the estimation of dynamic factor models on data characterized by block-structures.

The first chapter assesses the accuracy of Italian GDP nowcasts based on a small information set consisting of GDP itself, the industrial production index and the Economic Sentiment Indicator. The task is carried out by using real-time vintages of data in an out-of-sample exercise over rolling windows of data. Besides using real-time data, the real-time setting of the exercise is also guaranteed by updating the nowcasts according to the historical release calendar. The model used to compute the nowcasts is a mixed-frequency Vector Autoregressive (VAR) model, cast in state-space form and estimated by maximum likelihood. The results show that the model can provide quite accurate early estimates of Italian GDP growth rates, not only with respect to a naive benchmark but also with respect to a bridge model based on the same information set and to a mixed-frequency VAR with only GDP and the industrial production index.
The chapter also analyzes in some detail the role of the Economic Sentiment Indicator, and of soft information in general. The comparison of our mixed-frequency VAR with one including only GDP and the industrial production index clearly shows that using soft information helps to obtain more accurate early estimates. Evidence is also found that the advantage from using soft information goes beyond its timeliness.

In the second chapter we focus on nowcasting the quarterly national account employment of the euro area, making use of both country-specific and area-wide information. The relevance of anticipating Eurostat estimates of employment rests on the fact that, although it represents an important macroeconomic variable, euro area employment is measured at a relatively low frequency (quarterly) and published with a considerable delay (approximately two and a half months). Obtaining an early estimate of this variable is possible because several Member States publish employment data and employment-related statistics in advance of the Eurostat release of euro area employment. Data availability nevertheless represents a major limit, as country-level time series are in general non-homogeneous, have different starting periods and, in some cases, are very short. We construct a data set of monthly and quarterly time series consisting of both aggregate and country-level data on Quarterly National Account employment, employment expectations from business surveys, and Labour Force Survey employment and unemployment. In order to perform a real-time out-of-sample exercise simulating the (pseudo) real-time availability of the data, we construct an artificial calendar of data releases based on the effective calendar observed during the first quarter of 2012. The model used to compute the nowcasts is a dynamic factor model allowing for mixed-frequency data, missing data at the beginning of the sample, and the ragged edges typical of non-synchronous data releases. Our results show that using country-specific information as soon as it is available makes it possible to obtain reasonably accurate estimates of euro area employment about fifteen days before the end of the quarter.
We also look at the nowcasts of employment of the four largest Member States. We find that (with the exception of France) augmenting the dynamic factor model with country-specific factors provides better results than the model without country-specific factors.

The third chapter of the thesis deals with dynamic factor models on data characterized by local cross-correlation due to the presence of block-structures. The latter is modeled by introducing block-specific factors, i.e. factors that are specific to blocks of time series. We propose an algorithm to estimate the model by (quasi) maximum likelihood and use it to run Monte Carlo simulations evaluating the effects of modeling, or not, the block-structure on the estimates of the common factors. We find two main results: first, that in finite samples modeling the block-structure, besides being interesting per se, can help reduce model misspecification and yield more accurate estimates of the common factors; second, that imposing a wrong block-structure, or imposing a block-structure when none is present, does not have negative effects on the estimates of the common factors. These two results allow us to conclude that it is always advisable to model the block-structure, especially if the characteristics of the data suggest that there is one. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
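As an illustration of the mechanism both nowcasting chapters rely on, here is a minimal numpy sketch of a state-space model whose Kalman filter simply skips the missing observations that make up the ragged edge. The dimensions, coefficients and variable names are illustrative, not the models actually estimated in the thesis:

```python
import numpy as np

def kalman_filter_missing(y, A, C, Q, R, x0, P0):
    """Kalman filter that handles NaNs in y (ragged-edge data).

    y : (T, n) observations, NaN where a series is not yet released
    A : (k, k) state transition;  C : (n, k) loadings
    Q : (k, k) state noise cov;   R : (n, n) obs noise cov
    """
    T, n = y.shape
    x, P = x0.copy(), P0.copy()
    xs = np.zeros((T, len(x0)))
    for t in range(T):
        # predict
        x = A @ x
        P = A @ P @ A.T + Q
        # update using only the entries of y[t] that are actually observed
        obs = ~np.isnan(y[t])
        if obs.any():
            Ct, Rt = C[obs], R[np.ix_(obs, obs)]
            S = Ct @ P @ Ct.T + Rt
            K = P @ Ct.T @ np.linalg.solve(S, np.eye(obs.sum()))
            x = x + K @ (y[t, obs] - Ct @ x)
            P = P - K @ Ct @ P
        xs[t] = x
    return xs

# toy example: one common factor, three monthly indicators; the last row of
# y has NaNs, mimicking a real-time vintage where only one series is released
rng = np.random.default_rng(0)
T, k, n = 60, 1, 3
A = np.array([[0.8]]); C = rng.normal(size=(n, k))
Q = np.eye(k) * 0.5;   R = np.eye(n) * 0.2
f = np.zeros((T, k))
for t in range(1, T):
    f[t] = A @ f[t-1] + rng.normal(scale=np.sqrt(0.5), size=k)
y = f @ C.T + rng.normal(scale=np.sqrt(0.2), size=(T, n))
y[-1, 1:] = np.nan  # ragged edge at the end of the sample
states = kalman_filter_missing(y, A, C, Q, R, np.zeros(k), np.eye(k))
print(states[-3:])
```

The update step conditions only on the released series, which is how a state-space model turns an incomplete data vintage into a nowcast of the latent state.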
103

Essays on modelling and forecasting financial time series

Coroneo, Laura 28 August 2009 (has links)
This thesis is composed of three chapters which propose novel approaches to modelling and forecasting financial time series. The first chapter focuses on high-frequency financial returns and proposes a quantile regression approach to model their intraday seasonality and dynamics. The second chapter deals with the problem of forecasting the yield curve using large datasets of macroeconomic information, while the last chapter addresses the issue of modelling the term structure of interest rates.

The first chapter investigates the distribution of high-frequency financial returns, with special emphasis on intraday seasonality. Using quantile regression, I show how the probability law expands and contracts through the day for three years of stock returns sampled at 15-minute intervals. Returns are more dispersed and less concentrated around the median at the hours near the opening and the closing. I provide intraday value-at-risk assessments and show how they adapt to changes in dispersion over the day. The tests performed on the out-of-sample forecasts of the value at risk show that the model is able to provide good risk assessments and to outperform standard Gaussian and Student's t GARCH models.

The second chapter shows that macroeconomic indicators are helpful in forecasting the yield curve. I incorporate a large number of macroeconomic predictors within the Nelson and Siegel (1987) model for the yield curve, which can be cast in a common factor model representation. Rather than including macroeconomic variables as additional factors, I use them to extract the Nelson and Siegel factors. Estimation is performed by the EM algorithm and the Kalman filter using a data set composed of 17 yields and 118 macro variables. Results show that incorporating large macroeconomic information improves the accuracy of out-of-sample yield forecasts at medium and long horizons.

The third chapter statistically tests whether the Nelson and Siegel (1987) yield curve model is arbitrage-free. Theoretically, the Nelson-Siegel model does not ensure the absence of arbitrage opportunities; still, central banks and public wealth managers rely heavily on it. Using a non-parametric resampling technique and zero-coupon yield curve data from the US market, I find that the no-arbitrage parameters are not statistically different from those obtained from the Nelson and Siegel model at a 95 percent confidence level. I therefore conclude that the Nelson and Siegel yield curve model is compatible with arbitrage-freeness. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
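For reference, the Nelson and Siegel (1987) curve used in the second and third chapters writes the yield at maturity $\tau$ in terms of three factors, commonly interpreted as level, slope and curvature (this is the standard parameterization; whether $\lambda$ is fixed or estimated varies across applications):

```latex
y(\tau) \;=\; \beta_{1}
\;+\; \beta_{2}\,\frac{1 - e^{-\lambda\tau}}{\lambda\tau}
\;+\; \beta_{3}\left(\frac{1 - e^{-\lambda\tau}}{\lambda\tau} - e^{-\lambda\tau}\right)
```

In the second chapter the three betas are treated as latent factors extracted jointly from the yields and the macro panel; the third chapter asks whether the freely estimated parameters of this curve differ statistically from their no-arbitrage counterparts.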
104

Essays on systematic and unsystematic monetary and fiscal policies

Cimadomo, Jacopo 24 September 2008 (has links)
The active use of macroeconomic policies to smooth economic fluctuations and, as a consequence, the stance that policymakers should adopt over the business cycle remain controversial issues in the economic literature.
In the light of the dramatic experience of the early 1930s' Great Depression, Keynes (1936) argued that the market mechanism could not be relied upon to recover spontaneously from a slump, and advocated counter-cyclical public spending and monetary policy to stimulate demand. Although the Keynesian doctrine largely influenced policymaking during the two decades following World War II, from the start of the 1970s it began to be seriously challenged in several directions. The introduction of rational expectations within macroeconomic models implied that aggregate demand management could not stabilize the economy's responses to shocks (see in particular Sargent and Wallace (1975)). According to this view, rational agents foresee the effects of the implemented policies, and wage and price expectations are revised upwards accordingly. Therefore, real wages and money balances remain constant, and so does output. Within such a conceptual framework, only unexpected policy interventions would have some short-run effects upon the economy. The "real business cycle (RBC) theory", pioneered by Kydland and Prescott (1982), offered an alternative explanation of the nature of fluctuations in economic activity, viewed as reflecting the efficient responses of optimizing agents to exogenous sources of fluctuations outside the direct control of policymakers. The normative implication was that there should be no role for economic policy activism: fiscal and monetary policy should be acyclical. The latest generation of New Keynesian dynamic stochastic general equilibrium (DSGE) models builds on rigorous foundations in intertemporal optimizing behavior by consumers and firms inherited from the RBC literature, but incorporates some frictions in the adjustment of nominal and real quantities in response to macroeconomic shocks (see Woodford (2003)). In such a framework, not only may policy "surprises" have an impact on economic activity, but the way policymakers "systematically" respond to exogenous sources of fluctuation also plays a fundamental role, thereby rekindling interest in the use of counter-cyclical stabilization policies to fine-tune the business cycle.
Yet, despite impressive advances in economic theory and econometric techniques, there are no definitive answers on the systematic stance policymakers should follow or on the effects of macroeconomic policies upon the economy. Against this background, the present thesis attempts to inspect the interrelations between macroeconomic policies and economic activity from novel angles. Three contributions are proposed.

In the first Chapter, I show that relying on the information actually available to policymakers when budgetary decisions are taken is of fundamental importance for assessing the cyclical stance of governments. In the second, I explore whether the effectiveness of fiscal shocks in spurring economic activity has declined since the beginning of the 1970s. In the third, the impact of systematic monetary policies on U.S. industrial sectors is investigated.

In the existing literature, empirical assessments of the historical stance of policymakers over the economic cycle have mainly been drawn from the estimation of "reduced-form" policy reaction functions (see in particular Taylor (1993) and Galí and Perotti (2003)). Such rules typically relate a policy instrument (a reference short-term interest rate, or an indicator of discretionary fiscal policy) to a set of explanatory variables (notably inflation, the output gap and, as far as fiscal policy is concerned, the debt-GDP ratio). Although these policy rules can be seen as simple approximations of what would derive from an explicit optimization problem solved by a social planner (see Kollmann (2007)), they have received considerable attention because they proved to track the behavior of central banks and fiscal policymakers relatively well. Typically, revised data, i.e. the observations available to the econometrician when the study is carried out, are used in the estimation of such policy reaction functions. However, the data available in "real time" to policymakers may turn out to be remarkably different from what is observed ex post. Orphanides (2001), in an innovative and thought-provoking paper on U.S. monetary policy, challenged the way policy evaluation had been conducted until then by showing that unrealistic assumptions about the timeliness of data availability may yield misleading descriptions of historical policy.
In the spirit of Orphanides (2001), in the first Chapter of this thesis I reconsider how the intentional cyclical stance of fiscal authorities should be assessed. Importantly, in the framework of fiscal policy rules, not only are variables such as potential output and the output gap subject to measurement errors, but so is the main discretionary "operating instrument" in the hands of governments: the structural budget balance, i.e. the headline government balance net of the effects due to automatic stabilizers. In fact, the actual realization of planned fiscal measures may depend on several factors (such as the growth rate of GDP, the implementation lags that often follow the adoption of many policy measures, and others) outside the direct and full control of fiscal authorities. Hence, there might be sizeable differences between discretionary fiscal measures as planned in the past and what is observed ex post. Notably, this does not apply to monetary policy, since central bankers can control their operating interest rates with great accuracy.
When the historical behavior of fiscal authorities is analyzed from a real-time perspective, it emerges that the intentional stance has been counter-cyclical, especially during expansions, in the main OECD countries throughout the last thirteen years. This is at odds with findings based on revised data, which generally point to pro-cyclicality (see for example Gavin and Perotti (1997)). It is shown that empirical correlations among revision errors and other second-order moments make it possible to predict the size and the sign of the bias incurred in estimating the intentional stance of the policy when revised data are (mistakenly) used. In addition, formal tests, based on a refinement of Hansen (1999), do not reject the hypothesis that the intentional reaction of fiscal policy to the cycle is characterized by two regimes: one counter-cyclical, when output is above its potential level, and the other acyclical, in the opposite case. The use of revised data, on the contrary, does not make it possible to identify any threshold effect.
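For concreteness, fiscal reaction functions of the kind just described are commonly specified along the following lines (a stylized sketch in the spirit of Galí and Perotti (2003); the chapter's actual real-time specification may differ):

```latex
\Delta b_t \;=\; \alpha \;+\; \beta\,\tilde{y}_t \;+\; \gamma\, d_{t-1} \;+\; \varepsilon_t
```

where $\Delta b_t$ is the change in the structural budget balance (the discretionary instrument), $\tilde{y}_t$ the output gap and $d_{t-1}$ the lagged debt-GDP ratio. A positive $\beta$ signals a counter-cyclical intentional stance; the chapter's point is that estimates of $\beta$ can differ markedly depending on whether $\tilde{y}_t$ and $b_t$ are measured with real-time or revised data.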
The second and third Chapters of this thesis are devoted to exploring the impact of fiscal and monetary policies upon the economy.
Over the last years, two approaches have mainly been followed by practitioners for estimating the effects of macroeconomic policies on real activity. On the one hand, calibrated and estimated DSGE models make it possible to trace out the economy's responses to policy disturbances within an analytical framework derived from solid microeconomic foundations. On the other, vector autoregressive (VAR) models continue to be widely used, since they have proved to fit macro data particularly well, although they cannot fully serve to inspect structural interrelations among economic variables.
Yet the typical DSGE and VAR models are designed to handle a limited number of variables and are not suitable for addressing economic questions potentially involving a large amount of information. In a DSGE framework, in fact, identifying aggregate shocks and their propagation mechanism under a plausible set of theoretical restrictions becomes a thorny issue when many variables are considered. As for VARs, estimation problems may arise when models are specified with a large number of indicators (although recent contributions suggest that large-scale Bayesian VARs perform surprisingly well in forecasting; see in particular Banbura, Giannone and Reichlin (2007)). As a consequence, the growing popularity of factor models as effective econometric tools for summarizing large amounts of information in a parsimonious and flexible manner may be explained not only by their usefulness in deriving business cycle indicators and in forecasting (see for example Reichlin (2002) and D'Agostino and Giannone (2006)), but also, owing to recent developments, by their ability to evaluate the response of economic systems to identified structural shocks (see Giannone, Reichlin and Sala (2002) and Forni, Giannone, Lippi and Reichlin (2007)). In parallel, some attempts have been made to combine the rigor of DSGE models and the tractability of VARs with the advantages of factor analysis (see Boivin and Giannoni (2006) and Bernanke, Boivin and Eliasz (2005)).

The second Chapter of this thesis, based on joint work with Agnès Bénassy-Quéré, presents an original study combining factor and VAR analysis in an encompassing framework to investigate how "unexpected" and "unsystematic" variations in taxes and government spending feed through the economy in the home country and abroad. The domestic impact of fiscal shocks in Germany, the U.K. and the U.S., and cross-border fiscal spillovers from Germany to seven European economies, are analyzed. In addition, the time evolution of domestic and cross-border tax and spending multipliers is explored. Indeed, the way fiscal policy impacts on domestic and foreign economies depends on several factors, possibly changing over time. In particular, the presence of excess capacity, accommodating monetary policy, distortionary taxation and liquidity-constrained consumers plays a prominent role in determining how fiscal policies stimulate economic activity in the home country. The impact on foreign output crucially depends on the importance of trade links, on real exchange rates and, in a monetary union, on the sensitivity of foreign economies to the common interest rate. It is well documented that the last thirty years have witnessed frequent changes in the economic environment. For instance, in most OECD countries the monetary policy stance became less accommodating in the 1980s compared to the 1970s, and more accommodating again in the late 1990s and early 2000s. Moreover, financial markets have been heavily deregulated. Hence, fiscal policy might have lost (or gained) power as a stimulating tool in the hands of policymakers. Importantly, the issue of the cross-border transmission of fiscal policy decisions is of the utmost relevance in the framework of the European Monetary Union, which explains why the debate on fiscal policy coordination has received so much attention since the adoption of the single currency (see Ahearne, Sapir and Véron (2006) and European Commission (2006)). It is found that over the period 1971 to 2004 tax shocks were generally more effective in spurring domestic output than government spending shocks. Interestingly, the inclusion of common factors representing global economic phenomena yields smaller multipliers, reconciling, at least for the U.K., the evidence from large-scale macroeconomic models, which generally find feeble multipliers (see e.g. the European Commission's QUEST model), with that from prototypical structural VARs pointing to stronger effects of fiscal policy. When the estimation is performed recursively over samples of seventeen years of data, it emerges that GDP multipliers have dropped drastically from the early 1990s on, especially in Germany (tax shocks) and in the U.S. (both tax and government spending shocks). Moreover, the conduct of fiscal policy seems to have become less erratic, as documented by a lower variance of fiscal shocks over time, and this might help explain why business cycles have shown less volatility in the countries under examination.
Expansionary fiscal policies in Germany do not generally have beggar-thy-neighbor effects on other European countries. In particular, our results suggest that tax multipliers have been positive but vanishing for neighboring countries (France, Italy, the Netherlands, Belgium and Austria), and weak and mostly not significant for more remote ones (the U.K. and Spain). Cross-border government spending multipliers are found to be uniformly weak for all the subsamples considered.
Overall, these findings suggest that fiscal "surprises", in the form of unexpected reductions in taxation and expansions in government consumption and investment, have become progressively less successful in stimulating economic activity at the domestic level, indicating that, in the framework of the European Monetary Union, policymakers can only marginally rely on this discretionary instrument as a substitute for national monetary policies.

The objective of the third Chapter is to inspect the role of monetary policy in the U.S. business cycle. In particular, the effects of "systematic" monetary policies upon several industrial sectors are investigated. The focus is on the systematic, or endogenous, component of monetary policy (i.e. the one related to economic activity in a stable and predictable way), for three main reasons. First, endogenous monetary policies are likely to have sizeable real effects if agents' expectations are not perfectly rational and if there are some nominal and real frictions in the market. Second, as widely documented, the variability of the monetary instrument and of the main macro variables is only marginally explained by monetary "shocks", defined as unexpected and exogenous variations in monetary conditions. Third, monetary shocks can simply be interpreted as measurement errors (see Christiano, Eichenbaum and Evans (1998)). Hence, the systematic component of monetary policy is likely to have played a fundamental role in shaping business cycle fluctuations. The strategy for isolating the impact of systematic policies relies on a counterfactual experiment within a (calibrated or estimated) macroeconomic model. As a first step, a macroeconomic shock to which monetary policy is likely to respond is selected, and its effects upon the economy are simulated. Then, the impact of the same shock is evaluated under a "policy-inactive" scenario, assuming that the central bank does not respond to it. Finally, by comparing the responses of the variables of interest under the two scenarios, some evidence on the sensitivity of the economic system to the endogenous component of the policy can be drawn (see Bernanke, Gertler and Watson (1997)). This kind of exercise is first proposed within a stylized DSGE model, for which the analytical solution can be derived. However, as argued, large-scale multi-sector DSGE models can be solved only numerically, implying that the proposed experiment cannot be carried out. Moreover, the estimation of DSGE models becomes a thorny issue when many variables are incorporated (see Canova and Sala (2007)). For these reasons, a less "structural" but more tractable approach is followed, in which a minimal amount of identifying restrictions is imposed. In particular, a factor model econometric approach is adopted (see in particular Giannone, Reichlin and Sala (2002) and Forni, Giannone, Lippi and Reichlin (2007)). In this framework, I develop a technique to perform the counterfactual experiment needed to assess the impact of systematic monetary policies.
It is found that two- and three-digit SIC U.S. industries are characterized by very heterogeneous degrees of sensitivity to the endogenous component of the policy. Notably, the industries showing the strongest sensitivities are those producing durable goods and metallic materials; non-durable goods producers and the food, textile and lumber industries are the least affected. In addition, it is highlighted that industrial sectors adjusting prices relatively infrequently are the most "vulnerable" ones: firms in this group are likely to increase quantities, rather than prices, following a positive shock to the economy. Finally, it emerges that sectors characterized by greater recourse to external sources to finance investment, and sectors investing relatively more in new plant and machinery, are the most affected by endogenous monetary actions. / Doctorat en sciences économiques, Orientation économie / info:eu-repo/semantics/nonPublished
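The counterfactual logic described for the third Chapter can be illustrated with a deliberately toy backward-looking model: simulate a shock once with the policy rule active and once with the systematic response shut off, then compare the paths. All coefficients below are illustrative; the thesis performs this experiment within DSGE and factor models, not this sketch:

```python
import numpy as np

def irf(phi_pi, phi_y, horizon=20):
    """Output-gap response to a one-off demand shock in a toy
    backward-looking model: IS curve, Phillips curve, policy rule."""
    a_y, a_r = 0.8, 0.2    # output persistence, interest-rate sensitivity
    b_pi, b_y = 0.7, 0.1   # inflation persistence, Phillips-curve slope
    y = pi = r = 0.0
    path = []
    for t in range(horizon):
        shock = 1.0 if t == 0 else 0.0   # one-off demand shock at t = 0
        y = a_y * y - a_r * r + shock    # IS curve (reacts to last period's rate)
        pi = b_pi * pi + b_y * y         # Phillips curve
        r = phi_pi * pi + phi_y * y      # systematic (endogenous) policy response
        path.append(y)
    return np.array(path)

active = irf(phi_pi=1.5, phi_y=0.5)     # central bank responds systematically
inactive = irf(phi_pi=0.0, phi_y=0.0)   # "policy-inactive" counterfactual
print(np.round(inactive - active, 3))   # output attributable to systematic policy
```

The difference between the two paths is the contribution of the endogenous policy component, which is exactly the object the chapter measures sector by sector.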
105

Understanding co-movements in macro and financial variables

D'Agostino, Antonello 09 January 2007 (has links)
Over the last years, the growing availability of large datasets and improvements in computational speed have further fostered research in the fields of both macroeconomic modeling and forecasting analysis. A primary focus of these research areas is to improve model performance by exploiting the informational content of several time series. Increasing the dimension of macro models is indeed crucial for a detailed structural understanding of the economic environment, as well as for accurate forecasting analysis. As a consequence, a new generation of large-scale macro models, based on the micro-foundations of a fully specified dynamic stochastic general equilibrium set-up, has become one of the most flourishing research areas, both in central banks and in academia. At the same time, there has been a revival of forecasting methods dealing with many predictors, such as factor models. The central idea of factor models is to exploit co-movements among variables through a parsimonious econometric structure: a few underlying common shocks, or factors, explain most of the co-variation among variables, while the unexplained component of series movements is due to pure idiosyncratic dynamics. The generality of this framework makes factor models suitable for describing a broad variety of settings in both macroeconomic and financial contexts. The revival of factor models over recent years stems from important developments achieved by Stock and Watson (2002) and Forni, Hallin, Lippi and Reichlin (2000). These authors find the conditions under which some data averages become collinear to the space spanned by the factors when the cross-section dimension becomes large. Moreover, their factor specifications allow the idiosyncratic dynamics to be mildly cross-correlated (an effect referred to as the 'approximate factor structure' by Chamberlain and Rothschild, 1983), a situation empirically verified in many applications. These findings have relevant implications, the most important being that the use of a large number of series no longer represents a dimensional constraint; on the contrary, it helps to identify the factor space. This new generation of factor models has been applied in several areas of macroeconomics and finance, as well as for policy evaluation, and is consequently very likely to become a milestone in the literature on forecasting methods using many predictors. This thesis contributes to the empirical literature on factor models by proposing four original applications.

In the first chapter of this thesis, the generalized dynamic factor model of Forni et al. (2002) is employed to explore the predictive content of asset returns in forecasting Consumer Price Index (CPI) inflation and the growth rate of Industrial Production (IP). The connection between stock markets and economic growth is well known. In the fundamental valuation of equity, the stock price is equal to the discounted future streams of expected dividends. Since future dividends are related to future growth, a revision of prices, and hence returns, should signal movements in the future growth path. Other important transmission channels, such as Tobin's q theory (Tobin, 1969), the wealth effect and capital market imperfections, have also been widely studied in this literature. I show that an aggregate index, such as the S&P500, could be misleading if used as a proxy for the informative content of the stock market as a whole. Despite the widespread wisdom of considering such an index as a leading variable, only part of the assets included in its composition lead the variables of interest; its forecasting performance might therefore be poor, leading to sceptical conclusions about the effectiveness of asset prices in forecasting macroeconomic variables. The main idea of the first essay is therefore to analyze the lead-lag structure of the assets composing the S&P500. The classification into leading, lagging and coincident variables is achieved by means of the cross-correlation function cleaned of idiosyncratic noise and short-run fluctuations. I assume that asset returns follow a factor structure; that is, they are the sum of two parts: a common part driven by a few shocks common to all the assets, and an idiosyncratic part, which is asset specific. The cross-correlation function, computed on the common part of the series, is not affected by asset-specific dynamics and should provide information only on series driven by the same common factors. Once the leading series are identified, they are grouped within the economic sector they belong to. The predictive content that such aggregates have in forecasting IP growth and CPI inflation is then explored and compared with the forecasting power of the S&P500 composite index. The forecasting exercise is addressed in the following way: first, in an autoregressive (AR) model I choose the truncation lag that minimizes the Mean Square Forecast Error (MSFE) in 11 years of out-of-sample simulations for 1, 6 and 12 steps ahead, both for the IP growth rate and for CPI inflation. Second, the S&P500 is added as an explanatory variable to the previous AR specification; repeating the simulation exercise yields only very small improvements in the MSFE statistics. Third, averages of the leading stock return series, within their respective sectors, are added as additional explanatory variables in the benchmark regression. Remarkable improvements over the benchmark specification are achieved, especially at the one-year forecast horizon. Significant improvements are also achieved at the shorter forecast horizons when the leading series of the technology and energy sectors are used.

The second chapter of this thesis disentangles the sources of aggregate risk and measures the extent of co-movements in five European stock markets. Based on the static factor model of Stock and Watson (2002), it proposes a new method for measuring the impact of international, national and industry-specific shocks. The process of European economic and monetary integration with the advent of the EMU has been a central issue for investors and policy makers, and during these years the number of studies on the integration and linkages among European stock markets has increased enormously. Given their forward-looking nature, stock prices are considered a key variable for establishing developments in the economic and financial markets. Therefore, measuring the extent of co-movements between European stock markets has become, especially over the last years, one of the main concerns both for policy makers, who want to best shape their policy responses, and for investors, who need to adapt their hedging strategies to the new political and economic environment. An optimal portfolio allocation strategy is based on a timely identification of the factors affecting asset returns. So far, literature dating back to Solnik (1974) identifies national factors as the main contributors to the co-variation among stock returns, with industry factors playing a marginal role. The increasing financial and economic integration over the past years, fostered by the decline of trade barriers and greater policy coordination, should have strongly reduced the importance of national factors and increased that of global determinants, such as industry factors. However, somewhat puzzlingly, recent studies have shown that country sources are still very important, and generally more important than industry ones. This chapter tries to shed some light on these conflicting results. It proposes a more flexible econometric estimation strategy, suitable for disentangling and measuring the impact of global and country factors. Results point to a declining influence of national determinants and an increasing influence of industry ones, while international influences remain the most important driving force of excess returns. These findings overturn the results in the literature and have important implications for strategic portfolio allocation policies, which need to be revisited and adapted to the changed financial and economic scenario.

The third chapter presents a new stylized fact which can be helpful for discriminating among alternative explanations of U.S. macroeconomic stability. The main finding is that the fall in time series volatility is associated with a sizable decline, of the order of 30% on average, in the predictive accuracy of several widely used forecasting models, including the factor models proposed by Stock and Watson (2002). This pattern is not limited to measures of inflation but extends to several indicators of real economic activity and interest rates. The generalized fall in predictive ability after the mid-1980s is particularly pronounced at forecast horizons beyond one quarter, and it is not specific to a single method; rather, it is a common feature of all models, including those used by public and private institutions. In particular, the forecasts for output and inflation in the Fed's Greenbook and from the Survey of Professional Forecasters (SPF) are significantly more accurate than a random walk only before 1985. After this date, in contrast, the hypothesis of equal predictive ability between naive random walk forecasts and the predictions of those institutions is not rejected at any horizon, the only exception being the current quarter. The results of this chapter may also be of interest for the empirical literature on asymmetric information. Romer and Romer (2000), for instance, consider a sample ending in the early 1990s and find that the Fed produced more accurate forecasts of inflation and output than several commercial providers. The results here imply that this informational advantage of the Fed over private forecasters is in fact limited to the 1970s and the beginning of the 1980s. In contrast, during the last two decades no forecasting model does better than "tossing a coin" beyond the first-quarter horizon, thereby implying that on average uninformed economic agents can anticipate future macroeconomic developments as effectively as professional forecasters. On the other hand, econometric models and economists' judgement remain quite helpful for forecasts over the very short horizon that is relevant for conjunctural analysis. Moreover, the literature on forecasting methods, recently surveyed by Stock and Watson (2005), has devoted a great deal of attention to identifying the best model for predicting inflation and output; the majority of studies, however, are based on full-sample periods. The main findings of the chapter reveal that most of the full-sample predictability of U.S. macroeconomic series arises from the years before 1985: long time series appear to attach a far larger weight to the earlier sub-sample, which is characterized by larger volatility of inflation and output. The results also suggest that some caution should be used in evaluating the performance of alternative forecasting models on the basis of a pool of different sub-periods, as full-sample analyses are likely to miss parameter instability.

The fourth chapter performs a detailed forecast comparison between the static factor model of Stock and Watson (2002) (SW) and the dynamic factor model of Forni et al. (2005) (FHLR). It is not the first work to perform such an evaluation: Boivin and Ng (2005) focus on a very similar problem, while Stock and Watson (2005) compare the performance of a larger class of predictors. The SW and FHLR methods essentially differ in the computation of the forecast of the common component; in particular, they differ in the estimation of the factor space and in the way projections onto this space are performed. In SW, the factors are estimated by static Principal Components (PC) of the sample covariance matrix, and the forecast of the common component is simply the projection of the predicted variable on the factors. FHLR propose efficiency improvements in two directions. First, they estimate the common factors by Generalized Principal Components (GPC), in which observations are weighted according to their signal-to-noise ratio. Second, they impose the constraints implied by the dynamic factor structure when the variables of interest are projected on the common factors; specifically, they take into account the leading and lagging relations across series by means of principal components in the frequency domain, which allows for an efficient aggregation of variables that may be out of phase. Whether these efficiency improvements help forecasting in a finite sample is, however, an empirical question on which the literature has not yet reached a consensus. On the one hand, Stock and Watson (2005) show that both methods perform similarly (although they focus on the weighting of the idiosyncratic component and not on the dynamic restrictions), while Boivin and Ng (2005) show that SW's method largely outperforms FHLR's and, in particular, conjecture that the dynamic restrictions implied by the method are harmful for the forecast accuracy of the model. This chapter tries to shed some new light on these conflicting results. It focuses on the Industrial Production index (IP) and the Consumer Price Index (CPI) and bases the evaluation on a simulated out-of-sample forecasting exercise. The data set, borrowed from Stock and Watson (2002), consists of 146 monthly series for the US economy, spanning 1959 to 1999. In order to isolate and evaluate specific characteristics of the methods, a procedure is designed in which the two non-parametric approaches are nested in a common framework. In addition, for both versions of the factor model forecasts, the chapter studies the contribution of the idiosyncratic component to the forecast. Other non-core aspects of the models are also investigated: robustness with respect to the choice of the number of factors, and to variable transformations. Finally, the chapter examines the sub-sample performance of the factor-based forecasts. The purpose of this exercise is to design an experiment for assessing the contribution of the core characteristics of the different models to forecasting performance and for discussing auxiliary issues; hopefully it may also serve as a guide for practitioners in the field. As in Stock and Watson (2005), results show that the efficiency improvements due to the weighting of the idiosyncratic components do not lead to significantly more accurate forecasts, but, in contrast to Boivin and Ng (2005), it is shown that the dynamic restrictions imposed by the procedure of Forni et al. (2005) are not harmful for predictability. The main conclusion is that the two methods have similar performance and produce highly collinear forecasts. / Doctorat en sciences économiques, Orientation économie / info:eu-repo/semantics/nonPublished
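A minimal numpy sketch of the Stock and Watson (2002) ingredients compared in the fourth chapter: factors estimated as static principal components and a 'diffusion index' forecast obtained by projecting the target on the factors. Data and dimensions are toy; the chapter's actual exercise (GPC weighting, dynamic restrictions, 146 monthly series) is considerably richer:

```python
import numpy as np

def pc_factors(X, r):
    """Static principal-components factor estimates.

    X : (T, N) panel; r : number of factors.
    Returns F (T, r) factor estimates and L (N, r) loadings.
    """
    Xs = (X - X.mean(0)) / X.std(0)
    # eigen-decomposition of the sample covariance matrix
    vals, vecs = np.linalg.eigh(Xs.T @ Xs / len(Xs))
    L = vecs[:, ::-1][:, :r]   # loadings: top-r eigenvectors
    F = Xs @ L                 # factors: projections of the data
    return F, L

def diffusion_index_forecast(y, F, h=1):
    """h-step forecast: regress y_{t+h} on a constant and factors at t."""
    Z = np.column_stack([np.ones(len(F) - h), F[:-h]])
    beta, *_ = np.linalg.lstsq(Z, y[h:], rcond=None)
    return np.r_[1.0, F[-1]] @ beta   # forecast of y_{T+h}

# toy panel with one common factor driving 50 series and the target
rng = np.random.default_rng(1)
T, N = 200, 50
f = np.cumsum(rng.normal(size=T)) * 0.1
X = np.outer(f, rng.normal(size=N)) + rng.normal(size=(T, N))
y = f + 0.3 * rng.normal(size=T)
F, _ = pc_factors(X, r=1)
print(diffusion_index_forecast(y, F, h=1))
```

The FHLR alternative replaces the covariance matrix with a weighted (signal-to-noise) version and performs the projection in the frequency domain; the chapter's contribution is to nest both within one framework so the two differences can be evaluated separately.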
106

Essays in international economics and industrial organization

Galgau, Olivia 10 November 2006 (has links)
The aim of the thesis is to further explore the relationship between economic integration and firm mobility and investment, both from an empirical and from a theoretical perspective, with the objective of drawing conclusions on how government policy can be used to strengthen the positive impact of integration on investment. Such investment is crucial for moving and maintaining countries at the forefront of the technology frontier and for accelerating economic growth in a world of rapid technical change and high mobility of ideas, goods, services, capital and labor.
The first chapter brings together the literature on economic integration, firm mobility and investment. It contains two sections: one dedicated to the literature on FDI, and a second covering the literature on firm entry and exit, economic performance, and economic and business regulation.
In the second chapter I examine the relationship between the Single Market and FDI, both in an intra-EU context and from outside the EU. The empirical results show that the impact of the Single Market on FDI differs substantially from one country to another, a finding that may be due to the functioning of institutions.
The third chapter studies the relationship between the level of external trade protection put into place by a Regional Integration Agreement (RIA) and the option of a firm from outside the RIA block to serve the RIA market through FDI rather than exports. I find that the level of external trade protection put in place by the RIA depends on the RIA country's capacity to benefit from FDI spillovers, on the magnitude of the set-up costs of building a plant in the RIA, and on the amount of external trade protection erected with respect to the RIA by the country outside the regional block.
The fourth chapter studies how the firm entry and exit process is affected by product market reforms and regulations, and how it impacts macroeconomic performance. The results show that an increase in deregulation will lead to a rise in firm entry and exit, which in turn will affect macroeconomic performance as measured by output growth and labor productivity growth. The analysis at the sector level shows that results can differ substantially across industries, which implies that deregulation policies should be conducted at the sector level rather than at the global macroeconomic level. / Doctorat en sciences économiques, Orientation économie / info:eu-repo/semantics/nonPublished
107

Three essays in international economics

Malek Mansour, Jeoffrey H.G. 25 January 2006 (has links)
This thesis consists of a collection of research works dealing with various aspects of international economics. More precisely, we focus on three main themes: (i) the existence of a world business cycle and its implications; (ii) the likelihood of asymmetric shocks in the Euro Zone resulting from fluctuations in the euro exchange rate, owing to differences in sector specialization patterns, and some consequences of such shocks; and (iii) the relationship between trade openness and growth, and the influence of the sector specialization structure on that relationship.

Regarding the approach pursued to tackle these problems, we have chosen to remain strictly within the boundaries of empirical (macro)economics, that is, applied econometrics. Though we systematically provide theoretical models to back up our empirical approach, our only real concern is to look at the stories the data can (or cannot) tell us. As to the econometric methodology, we restrict ourselves to panel data analysis: the large spectrum of techniques available within the panel framework allows us to utilize, for each of the problems at hand, the most suitable approach (or what we believe to be such). / Doctorat en sciences économiques, Orientation économie / info:eu-repo/semantics/nonPublished
108

Essays in the empirical analysis of venture capital and entrepreneurship

Romain, Astrid 09 February 2007 (has links)
EXECUTIVE SUMMARY

This thesis aims at analysing some aspects of Venture Capital (VC) and high-tech entrepreneurship. The focus is both on the macroeconomic level, comparing venture capital from an international point of view, and on Technology-Based Small Firms (TBSF), at the company and founder level, in Belgium. The approach is mainly empirical.
This work is divided into two parts. The first part focuses on venture capital. First of all, we test the impact of VC on productivity. We then identify the determinants of VC and test their impact on the relative level of VC for a panel of countries.
The second part concerns technology-based small firms in Belgium. The objective is twofold. It first aims at creating a database on Belgian TBSF to better understand the importance of entrepreneurship; to this end, a national survey was developed and the statistical results were analysed. Secondly, it provides an analysis of the role of universities in the employment performance of TBSF.
A broad summary of each chapter is presented below.

PART 1: VENTURE CAPITAL

The Economic Impact of Venture Capital

The objective of this chapter is to evaluate the macroeconomic impact of venture capital. The main assumption is that VC can be considered as similar in several respects to business R&D performed by large firms. We test whether VC contributes to economic growth through two main channels. The first is innovation, characterized by the introduction of new products, processes or services on the market. The second is the development of an absorptive capacity. These hypotheses are tested quantitatively with a production function model for a panel data set of 16 OECD countries from 1990 to 2001. The results show that the accumulation of VC is a significant factor contributing directly to Multi-Factor Productivity (MFP) growth. The social rate of return to VC is significantly higher than the social rate of return to business or public R&D. VC also has an indirect impact on MFP in the sense that it improves the output elasticity of R&D: an increased VC intensity makes it easier to absorb the knowledge generated by universities and firms, and therefore improves aggregate economic performance.

Technological Opportunity, Entrepreneurial Environment and Venture Capital Development

The objective of this chapter is to identify the main determinants of venture capital. We develop a theoretical model in which three main types of factors affect the demand and supply of VC: macroeconomic conditions, technological opportunity, and the entrepreneurial environment. The model is evaluated with a panel dataset of 16 OECD countries over the period 1990-2000. The estimates show that VC intensity is pro-cyclical: it reacts positively and significantly to GDP growth. Interest rates affect VC intensity mainly because entrepreneurs create a demand for this type of funding. Indicators of technological opportunity, such as the stock of knowledge and the number of triadic patents, positively and significantly affect the relative level of VC. Labour market rigidities reduce the impact of the GDP growth rate and of the stock of knowledge, whereas a minimum level of entrepreneurship is required for the available stock of knowledge to have a positive effect on VC intensity.
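A stylized version of the production-function test described above, where the interaction term captures the absorptive-capacity channel through which VC raises the output elasticity of R&D. This is a sketch of the specification family, not the thesis's exact equation:

```latex
\Delta \ln \mathrm{MFP}_{it} \;=\; \alpha_i
\;+\; \rho_{R}\,\ln K^{R}_{it}
\;+\; \rho_{V}\,\ln K^{V}_{it}
\;+\; \rho_{RV}\,\ln K^{R}_{it}\,\ln K^{V}_{it}
\;+\; \varepsilon_{it}
```

Here $K^{R}$ and $K^{V}$ denote the business R&D and accumulated VC stocks; a positive $\rho_{V}$ corresponds to the direct channel, and a positive $\rho_{RV}$ to the improved output elasticity of R&D.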
PART 2: TECHNOLOGY-BASED SMALL FIRMS

Survey in Belgium

The first purpose of this chapter is to present the existing literature on the performance of companies. In order to get a quantitative insight into the entrepreneurial growth process, an original survey of TBSF in Belgium was launched in 2002; the second purpose is to describe the methodology of this national TBSF survey. The survey has two main merits. The first lies in the quality of the information: most national and international surveys have been developed at firm level, and only a few exist at founder level, whereas the TBSF database contains information both at the firm and at the entrepreneur level.
The second merit concerns the subjects covered. The TBSF survey tackles the financing of firms (availability of public funds, role of venture capitalists, availability of business angels, etc.), the framework conditions (e.g. the quality and availability of infrastructures and communication channels, the level of academic and public research, the patenting process, etc.) and, finally, the socio-cultural factors associated with the entrepreneurs and their environment (e.g. level of education, their parents' education, gender, etc.).

Statistical Evidence

The main characteristic of the companies in our sample is that employment and profits net of taxation do not follow the same trend: employment may decrease while results after taxes stay constant. Only a few companies enjoyed growth in both employment and results after taxes between 1998 and 2003.
On the financing front, our findings suggest that internal finance, in the form of personal funds as well as funds from family and friends, is the primary source of capital to start up a high-tech company in Belgium. Entrepreneurs rely on their own personal savings in 84 percent of the cases. Commercial bank loans are the secondary source of finance; this part of external financing (debt finance) exceeds the combined angel funds and venture capital funds (equity finance).
On the entrepreneur front, the preliminary results show that 80 percent of the entrepreneurs in this study have a university degree, while 42 percent hold postgraduate degrees (i.e. master's and doctorate). In terms of research activities, 88 percent of the entrepreneurs holding a Ph.D. or a post-doctorate collaborate with Belgian higher education institutes. Moreover, more than 90 percent of these entrepreneurs are working in a university spin-off.

The Contribution of Universities to Employment Growth

The objective of this chapter is to test whether universities play a role amongst the determinants of employment growth in Belgian TBSF. The empirical model is based on our original survey of 87 Belgian TBSF. The results suggest that both academic spin-offs and TBSF created on the basis of an idea originating from business R&D activities are associated with above-average employment growth. As most 'high-tech' entrepreneurs hold at least a university degree, there is no significant impact of the level of education. Nevertheless, these results must be taken with caution, as they are highly sensitive to the presence of outliers: young high-tech firms are by definition highly volatile and might therefore be difficult to understand.

CONCLUSION

In this last chapter, recommendations for policy-makers are drawn from the results of the thesis. The possible interventions of governments are classified according to whether they influence the demand or the supply of entrepreneurship and/or VC. We present some possible actions, such as direct intervention in VC funds, public-sector intervention concerning labour market rigidities, the pension system, patent and research policy, the level of entrepreneurial activity, bankruptcy legislation, entrepreneurial education, the development of university spin-offs, and the creation of a national database of TBSF. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
109

Econometric problems in macroeconomics and finance: measures of causality, volatility asymmetry and financial risk

Taamouti, Abderrahim January 2007 (has links)
Thesis digitized by the Direction des bibliothèques of the Université de Montréal.
110

Agricultural commodity markets and price dynamics: a re-examination through financialization

Fam, Papa Gueye 29 November 2016 (has links)
Faced with the instability of agricultural prices and its consequences, especially for developing countries, the first part of this thesis presents the determinants of food commodity prices, covering recent developments on the supply side, taking into account the consequences of global warming, and on the demand side, notably the role of biofuels. It also presents the ongoing financialization of economies and the doubts surrounding the role that speculation on futures markets, or the implementation of monetary policies, may play in the spot prices observed on physical agricultural commodity markets. Building on these reflections and on the literature, the second part proceeds with two empirical studies: the first focuses on the impact of speculation on financial futures markets on the prices of the (agricultural) underlying assets, while the second examines the role of money markets, approached through the central banker's capacity to stabilize short-term interest rates. On this basis, conclusions are drawn and avenues for future research are laid out, given the ongoing financialization of economies.
