21

Essays in dynamic macroeconometrics

Bańbura, Marta, 26 June 2009
The thesis contains four essays covering topics in the field of macroeconomic forecasting.

The first two chapters consider factor models in the context of real-time forecasting with many indicators. Using a large number of predictors offers an opportunity to exploit a rich information set and is also considered a more robust approach in the presence of instabilities. On the other hand, it poses the challenge of extracting the relevant information in a parsimonious way. Recent research shows that factor models provide an answer to this problem. The fundamental assumption underlying these models is that most of the co-movement of the variables in a given dataset can be summarized by only a few latent variables, the factors. This assumption seems to be warranted in the case of macroeconomic and financial data. Important theoretical foundations for large factor models were laid by Forni, Hallin, Lippi and Reichlin (2000) and Stock and Watson (2002). Since then, different versions of factor models have been applied to forecasting, structural analysis and the construction of economic activity indicators. Recently, Giannone, Reichlin and Small (2008) used a factor model to produce projections of U.S. GDP in the presence of a real-time data flow. They propose a framework that can cope with large datasets characterised by staggered and non-synchronous data releases (sometimes referred to as the "ragged edge"). This is relevant because, in practice, important indicators like GDP are released with a substantial delay and, in the meantime, more timely variables can be used to assess the current state of the economy.

The first chapter of the thesis, entitled "A look into the factor model black box: publication lags and the role of hard and soft data in forecasting GDP", is based on joint work with Gerhard Rünstler and applies the framework of Giannone, Reichlin and Small (2008) to the euro area.
In particular, we are interested in the role of "soft" and "hard" data in the GDP forecast and how it relates to their timeliness. The soft data include surveys and financial indicators and reflect market expectations; they are usually promptly available. In contrast, the hard indicators of real activity directly measure certain components of GDP (e.g. industrial production) and are published with a significant delay. We propose several measures to assess the role of individual series, or groups of series, in the forecast while taking into account their respective publication lags. We find that, once their timeliness is properly accounted for, surveys and financial data contain important information for the GDP forecasts beyond the monthly real activity measures.

The second chapter, entitled "Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data", is based on joint work with Michele Modugno. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone, Reichlin and Small (2008), we can handle datasets that are not only characterised by a "ragged edge" but also include, for example, mixed-frequency or short-history indicators. The latter is particularly relevant for the euro area and other young economies, for which many series have only recently begun to be compiled. We adopt the maximum likelihood approach which, apart from its flexibility with regard to the pattern of missing data, is also more efficient and allows restrictions to be imposed on the parameters. Applied to small factor models by e.g. Geweke (1977), Sargent and Sims (1977) and Watson and Engle (1983), it has been shown by Doz, Giannone and Reichlin (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections.
To circumvent the computational complexity of direct likelihood maximisation in the case of large cross-sections, Doz, Giannone and Reichlin (2006) propose the iterative Expectation-Maximisation (EM) algorithm (used for the small model by Watson and Engle, 1983). Our contribution is to adapt the EM steps to the case of missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision, and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of euro area GDP on the basis of a large panel of monthly and quarterly data. In particular, we are able to examine the effect on the forecast of quarterly variables and short-history monthly series like the Purchasing Managers' surveys.

The third chapter, entitled "Large Bayesian VARs", is based on joint work with Domenico Giannone and Lucrezia Reichlin. It proposes an alternative approach to factor models for dealing with the curse of dimensionality, namely Bayesian shrinkage. We study Vector Autoregressions (VARs), which have the advantage over factor models that they allow structural analysis in a natural way. We consider systems including more than 100 variables; this is the first application in the literature to estimate a VAR of this size. Apart from the forecast considerations argued above, the size of the information set can also be relevant for structural analysis, see e.g. Bernanke, Boivin and Eliasz (2005), Giannone and Reichlin (2006) or Christiano, Eichenbaum and Evans (1999) for a discussion. In addition, many problems require the study of the dynamics of many variables: many countries, sectors or regions.
While we use standard priors as proposed by Litterman (1986), an important novelty of the work is that we set the overall tightness of the prior in relation to the model size. In this we follow the recommendation of De Mol, Giannone and Reichlin (2008), who study the case of Bayesian regressions. They show that as the size of the model increases one should shrink more to avoid overfitting, but that when the data are collinear one is still able to extract the relevant sample information. We apply this principle to VARs. We compare the large model with smaller systems in terms of forecasting performance and the structural analysis of the effect of a monetary policy shock. The results show that a standard Bayesian VAR is an appropriate tool for large panels of data once the degree of shrinkage is set in relation to the model size.

The fourth chapter, entitled "Forecasting euro area inflation with wavelets: extracting information from real activity and money at different scales", proposes a framework for exploiting relationships between variables at different frequency bands in the context of forecasting. This work is motivated by the ongoing debate on whether money provides a reliable signal for future price developments. The empirical evidence on the leading role of money for inflation in an out-of-sample forecast framework is not very strong, see e.g. Lenza (2006) or Fisher, Lenza, Pill and Reichlin (2008). At the same time, e.g. Gerlach (2003) and Assenmacher-Wesche and Gerlach (2007, 2008) argue that money and output could affect prices at different frequencies; their analysis, however, is performed in-sample. This chapter investigates empirically which frequency bands, and of which variables, are most relevant for the out-of-sample forecast of inflation when information from prices, money and real activity is considered. To extract the different frequency components of a series, a wavelet transform is applied.
It provides a simple and intuitive framework for band-pass filtering and allows a decomposition of a series into different frequency bands. Its application to multivariate out-of-sample forecasting is novel in the literature. The results indicate that different scales of money, prices and GDP can indeed be relevant for the inflation forecast.

/ Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
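The "ragged edge" problem described above can be illustrated with a minimal sketch. The code below is not the thesis's maximum likelihood estimator; it implements the simpler EM-style imputation associated with Stock and Watson (2002), alternating principal-components factor extraction with re-imputation of the missing cells. All function names and the data in the usage are hypothetical.

```python
import numpy as np

def em_factor_imputation(X, n_factors=1, n_iter=50):
    """EM-style imputation for a large panel with missing data:
    alternate between extracting principal-components factors from
    the completed panel and re-imputing the missing entries from
    the factor fit (a sketch, not the likelihood-based estimator)."""
    X = np.asarray(X, dtype=float)
    mask = np.isnan(X)                       # True where data are missing
    mu, sd = np.nanmean(X, 0), np.nanstd(X, 0)
    Z = (X - mu) / sd                        # standardise each series
    Z[mask] = 0.0                            # initial guess: the mean
    for _ in range(n_iter):
        # factor extraction on the completed panel via SVD
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        F = U[:, :n_factors] * s[:n_factors]     # estimated factors
        L = Vt[:n_factors].T                     # estimated loadings
        Z[mask] = (F @ L.T)[mask]                # re-impute missing cells
    return Z * sd + mu, F

# Hypothetical usage: a one-factor panel with a ragged edge at the end.
rng = np.random.default_rng(0)
f = rng.standard_normal(200)
lam = rng.standard_normal(10)
X = np.outer(f, lam) + 0.1 * rng.standard_normal((200, 10))
X_ragged = X.copy()
X_ragged[-5:, :5] = np.nan          # the 5 slowest series are not yet released
X_filled, factors = em_factor_imputation(X_ragged, n_factors=1)
```

The filled-in panel can then feed any forecasting regression; the thesis instead handles missing data inside the likelihood, which also accommodates parameter restrictions.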
22

Macroeconometrics with high-dimensional data

Zeugner, Stefan, 12 September 2012
CHAPTER 1:

The default g-priors predominant in Bayesian Model Averaging tend to over-concentrate posterior mass on a tiny set of models, a feature we denote the 'supermodel effect'. To address it, we propose a 'hyper-g' prior specification, whose data-dependent shrinkage adapts posterior model distributions to data quality. We demonstrate the asymptotic consistency of the hyper-g prior and its interpretation as a goodness-of-fit indicator. Moreover, we highlight the similarities between hyper-g and 'Empirical Bayes' priors, and introduce closed-form expressions essential to computational feasibility. The robustness of the hyper-g prior is demonstrated via simulation analysis and by comparing four vintages of economic growth data.

CHAPTER 2:

Ciccone and Jarocinski (2010) show that inference in Bayesian Model Averaging (BMA) can be highly sensitive to small data perturbations. In particular, they demonstrate that the importance attributed to potential growth determinants varies tremendously across different revisions of international income data. They conclude that 'agnostic' priors appear too sensitive for this strand of growth empirics. In response, we show that the instability they find owes much to a specific BMA set-up. First, comparing the same countries across data revisions improves robustness. Second, much of the remaining variation can be reduced by applying an equally 'agnostic' but flexible prior.

CHAPTER 3:

This chapter explores the link between the leverage of the US financial sector, of households and of non-financial businesses, and real activity. We document that leverage is negatively correlated with the future growth of real activity, and positively linked to the conditional volatility of future real activity and of equity returns. The joint information in the sectoral leverage series is more relevant for predicting future real activity than the information contained in any individual leverage series.
Using in-sample regressions and out-of-sample forecasts, we show that the predictive power of leverage is roughly comparable to that of the macro and financial predictors commonly used by forecasters. Leverage information would not have made it possible to predict the 'Great Recession' of 2008-2009 any better than conventional macro/financial predictors.

CHAPTER 4:

Model averaging has proven popular for inference with many potential predictors in small samples. However, it is frequently criticized for a lack of robustness with respect to prediction and inference. This chapter explores the reasons for such robustness problems and proposes to address them by transforming the subset of potential 'control' predictors into principal components in suitable datasets. A simulation analysis shows that this approach yields robustness advantages over both standard model averaging and principal component-augmented regression. Moreover, we devise a prior framework that extends model averaging to uncertainty over the set of principal components and show that it offers considerable improvements in the robustness of estimates and of inference about the importance of covariates. Finally, we empirically benchmark our approach against popular model averaging and PC-based techniques in evaluating financial indicators as alternatives to established macroeconomic predictors of real economic activity.

/ Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
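As a hedged illustration of the closed-form expressions that make g-prior BMA computationally feasible, the sketch below enumerates all sub-models and weights them by the standard Zellner g-prior Bayes factor with a fixed g. The hyper-g prior proposed in Chapter 1 instead places a hyper-prior on g; that extension is omitted here, and all names and data are illustrative.

```python
import itertools
import numpy as np

def g_prior_log_bf(y, X, g):
    """Log Bayes factor of the model with regressors X against the
    intercept-only null model under Zellner's g-prior, which has the
    well-known closed form in terms of the model's R^2."""
    n, k = X.shape
    yc = y - y.mean()
    Xc = X - X.mean(0)
    beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    r2 = 1 - ((yc - Xc @ beta) ** 2).sum() / (yc ** 2).sum()
    return 0.5 * (n - 1 - k) * np.log1p(g) \
         - 0.5 * (n - 1) * np.log1p(g * (1 - r2))

def bma_posterior(y, X, g):
    """Posterior probability of every subset of columns of X under a
    uniform prior over models (feasible only for small X)."""
    p = X.shape[1]
    models = [m for r in range(p + 1)
              for m in itertools.combinations(range(p), r)]
    logbf = np.array([0.0 if not m else g_prior_log_bf(y, X[:, m], g)
                      for m in models])
    w = np.exp(logbf - logbf.max())          # stabilise before normalising
    return models, w / w.sum()

# Hypothetical usage with the 'unit information' choice g = n.
rng = np.random.default_rng(1)
n = 100
X = rng.standard_normal((n, 4))
y = 2 * X[:, 0] + 0.5 * rng.standard_normal(n)
models, probs = bma_posterior(y, X, g=n)
```

A fixed g of this kind is exactly the default whose "supermodel effect" Chapter 1 documents: when one model fits much better, the exponential weights concentrate almost all posterior mass on it.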
23

Essays on macroeconometrics and short-term forecasting

Cicconi, Claudia, 11 September 2012
The thesis, entitled "Essays on macroeconometrics and short-term forecasting", is composed of three chapters. The first two chapters are on nowcasting, a topic that has received increasing attention among both practitioners and academics, especially during and in the aftermath of the 2008-2009 economic crisis. At the heart of the two chapters is the idea of exploiting the information in data published at a higher frequency to obtain early estimates of the macroeconomic variable of interest. The models used to compute the nowcasts are dynamic models conceived to handle efficiently the characteristics of the data used in a real-time context, such as the fact that, due to the different frequencies and the non-synchronicity of the releases, the time series in general have missing data at the end of the sample. While the first chapter uses a small model, a VAR, for nowcasting Italian GDP, the second makes use of a dynamic factor model, more suitable for handling medium-large data sets, to provide early estimates of employment in the euro area. The third chapter develops a topic only marginally touched on by the second chapter, i.e. the estimation of dynamic factor models on data characterized by block structures.

The first chapter assesses the accuracy of Italian GDP nowcasts based on a small information set consisting of GDP itself, the industrial production index and the Economic Sentiment Indicator. The task is carried out using real-time vintages of data in an out-of-sample exercise over rolling windows of data. Besides using real-time data, the real-time setting of the exercise is also guaranteed by updating the nowcasts according to the historical release calendar. The model used to compute the nowcasts is a mixed-frequency Vector Autoregressive (VAR) model, cast in state-space form and estimated by maximum likelihood.
The results show that the model can provide quite accurate early estimates of Italian GDP growth rates, not only with respect to a naive benchmark but also with respect to a bridge model based on the same information set and a mixed-frequency VAR with only GDP and the industrial production index. The chapter also analyzes in some detail the role of the Economic Sentiment Indicator, and of soft information in general. The comparison of our mixed-frequency VAR with one including only GDP and the industrial production index clearly shows that using soft information helps obtain more accurate early estimates. Evidence is also found that the advantage from using soft information goes beyond its timeliness.

In the second chapter we focus on nowcasting the quarterly national account employment of the euro area, making use of both country-specific and area-wide information. The relevance of anticipating Eurostat estimates of employment rests on the fact that, although it is an important macroeconomic variable, euro area employment is measured at a relatively low frequency (quarterly) and published with a considerable delay (approximately two and a half months). Obtaining an early estimate of this variable is possible because several Member States publish employment data and employment-related statistics in advance of the Eurostat release of euro area employment. Data availability nevertheless represents a major limit, as country-level time series are in general non-homogeneous, have different starting periods and, in some cases, are very short. We construct a data set of monthly and quarterly time series consisting of both aggregate and country-level data on Quarterly National Account employment, employment expectations from business surveys, and Labour Force Survey employment and unemployment.
In order to perform a real-time out-of-sample exercise simulating the (pseudo) real-time availability of the data, we construct an artificial calendar of data releases based on the effective calendar observed during the first quarter of 2012. The model used to compute the nowcasts is a dynamic factor model allowing for mixed-frequency data, missing data at the beginning of the sample, and the ragged edges typical of non-synchronous data releases. Our results show that using country-specific information as soon as it is available makes it possible to obtain reasonably accurate estimates of euro area employment about fifteen days before the end of the quarter. We also look at the nowcasts of employment for the four largest Member States. We find that (with the exception of France) augmenting the dynamic factor model with country-specific factors provides better results than those obtained with the model without country-specific factors.

The third chapter of the thesis deals with dynamic factor models on data characterized by local cross-correlation due to the presence of block structures. The latter is modeled by introducing block-specific factors, i.e. factors that are specific to blocks of time series. We propose an algorithm to estimate the model by (quasi) maximum likelihood and use it to run Monte Carlo simulations evaluating the effect of modeling, or not, the block structure on the estimates of the common factors. We find two main results: first, in finite samples, modeling the block structure, besides being interesting per se, can help reduce model mis-specification and yield more accurate estimates of the common factors; second, imposing a wrong block structure, or imposing a block structure when none is present, does not have negative effects on the estimates of the common factors.
These two results allow us to conclude that it is always advisable to model the block structure, especially if the characteristics of the data suggest that there is one.

/ Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
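The device that lets a state-space model absorb ragged edges and mixed frequencies is simple: at dates where an observation is missing, the Kalman filter runs its prediction step but skips the update. The sketch below shows this for a generic linear state-space model; it skips the update whenever any element of the observation vector is missing (a full treatment would drop only the missing rows), and all names and data are hypothetical.

```python
import numpy as np

def kalman_filter_missing(y, Z, T, H, Q, a0, P0):
    """Kalman filter for y_t = Z a_t + e_t, a_t = T a_{t-1} + u_t.
    Whenever y_t is missing (NaN), only the prediction step runs --
    the standard device for ragged-edge / mixed-frequency panels.
    Simplification: a partially missing y_t is treated as fully
    missing; a full treatment keeps the observed rows of Z and y_t."""
    a, P = a0.copy(), P0.copy()
    filtered = []
    for yt in y:
        # prediction step
        a = T @ a
        P = T @ P @ T.T + Q
        if not np.isnan(yt).any():
            # update step, only when the observation is available
            v = yt - Z @ a                    # innovation
            F = Z @ P @ Z.T + H               # innovation variance
            K = P @ Z.T @ np.linalg.inv(F)    # Kalman gain
            a = a + K @ v
            P = P - K @ Z @ P
        filtered.append(a.copy())
    return np.array(filtered)

# Hypothetical usage: an AR(1) state observed with noise, with every
# 7th observation withheld to mimic a sparse release calendar.
rng = np.random.default_rng(2)
n, phi = 300, 0.9
a_true = np.zeros(n)
for t in range(1, n):
    a_true[t] = phi * a_true[t - 1] + rng.standard_normal()
y = (a_true + 0.3 * rng.standard_normal(n)).reshape(-1, 1)
y[::7] = np.nan
out = kalman_filter_missing(y, Z=np.array([[1.0]]), T=np.array([[phi]]),
                            H=np.array([[0.09]]), Q=np.array([[1.0]]),
                            a0=np.array([0.0]), P0=np.array([[1.0]]))
```

In the dynamic factor models of the first two chapters the state vector contains the factors, and quarterly series enter through loadings that aggregate the monthly states.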
24

Essays on modelling and forecasting financial time series

Coroneo, Laura, 28 August 2009
This thesis is composed of three chapters which propose novel approaches to modelling and forecasting financial time series. The first chapter focuses on high-frequency financial returns and proposes a quantile regression approach to model their intraday seasonality and dynamics. The second chapter deals with the problem of forecasting the yield curve using large datasets of macroeconomic information, while the last chapter addresses the issue of modelling the term structure of interest rates.

The first chapter investigates the distribution of high-frequency financial returns, with special emphasis on intraday seasonality. Using quantile regression, I show how the probability law expands and contracts over the day for three years of stock returns sampled at 15-minute intervals. Returns are more dispersed and less concentrated around the median in the hours near the opening and the closing. I provide intraday value-at-risk assessments and show how they adapt to changes in dispersion over the day. Tests performed on the out-of-sample forecasts of the value at risk show that the model is able to provide good risk assessments and to outperform standard Gaussian and Student's t GARCH models.

The second chapter shows that macroeconomic indicators are helpful in forecasting the yield curve. I incorporate a large number of macroeconomic predictors within the Nelson and Siegel (1987) model for the yield curve, which can be cast in a common factor model representation. Rather than including the macroeconomic variables as additional factors, I use them to extract the Nelson and Siegel factors. Estimation is performed by the EM algorithm and the Kalman filter using a data set composed of 17 yields and 118 macro variables.
The results show that incorporating large macroeconomic information sets improves the accuracy of out-of-sample yield forecasts at medium and long horizons.

The third chapter statistically tests whether the Nelson and Siegel (1987) yield curve model is arbitrage-free. Theoretically, the Nelson-Siegel model does not ensure the absence of arbitrage opportunities; still, central banks and public wealth managers rely heavily on it. Using a non-parametric resampling technique and zero-coupon yield curve data from the US market, I find that the no-arbitrage parameters are not statistically different from those obtained from the Nelson and Siegel model at the 95 percent confidence level. I therefore conclude that the Nelson and Siegel yield curve model is compatible with arbitrage-freeness.

/ Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
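The Nelson-Siegel factor structure mentioned above can be sketched in a few lines. The snippet below is not the thesis's EM/Kalman estimator: it is the simpler two-step approach in the spirit of Diebold and Li (2006), which fixes the decay parameter lambda and recovers the level, slope and curvature factors by cross-sectional OLS on the loadings. Names, the lambda value and the data are assumptions for illustration.

```python
import numpy as np

def nelson_siegel_loadings(maturities, lam=0.0609):
    """Nelson-Siegel loading matrix for level, slope and curvature.
    lam = 0.0609 is the value popularised by Diebold and Li (2006)
    for maturities measured in months."""
    m = np.asarray(maturities, dtype=float)
    x = lam * m
    slope = (1 - np.exp(-x)) / x           # slope loading
    curv = slope - np.exp(-x)              # curvature loading
    return np.column_stack([np.ones_like(m), slope, curv])

def fit_ns_factors(yields, maturities, lam=0.0609):
    """Cross-sectional OLS of observed yields on the NS loadings:
    one row of (level, slope, curvature) factors per date."""
    L = nelson_siegel_loadings(maturities, lam)
    beta, *_ = np.linalg.lstsq(L, np.asarray(yields).T, rcond=None)
    return beta.T

# Hypothetical usage: recover known factors from a noiseless curve.
mats = np.array([3, 6, 12, 24, 36, 60, 120], dtype=float)
true_factors = np.array([[5.0, -1.0, 0.5]])
y = true_factors @ nelson_siegel_loadings(mats).T
est = fit_ns_factors(y, mats)
```

Casting this regression in state-space form, as the chapter does, is what allows the macro panel to inform the factor estimates through the Kalman filter.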
25

Essays in the empirical analysis of venture capital and entrepreneurship

Romain, Astrid, 9 February 2007
EXECUTIVE SUMMARY

This thesis analyses several aspects of Venture Capital (VC) and high-tech entrepreneurship. The focus is both macroeconomic, comparing venture capital from an international point of view, and microeconomic, examining Technology-Based Small Firms (TBSF) at the company and founder level in Belgium. The approach is mainly empirical.

The work is divided into two parts. The first part focuses on venture capital. First of all, we test the impact of VC on productivity. We then identify the determinants of VC and test their impact on the relative level of VC for a panel of countries. The second part concerns technology-based small firms in Belgium. Its objective is twofold. It first aims at creating a database on Belgian TBSF to better understand the importance of entrepreneurship; to this end, a national survey was developed and the statistical results analysed. Secondly, it provides an analysis of the role of universities in the employment performance of TBSF. A broad summary of each chapter is presented below.

PART 1: VENTURE CAPITAL

The Economic Impact of Venture Capital

The objective of this chapter is to evaluate the macroeconomic impact of venture capital. The main assumption is that VC can be considered similar, in several respects, to business R&D performed by large firms. We test whether VC contributes to economic growth through two main channels. The first is innovation, characterized by the introduction of new products, processes or services on the market. The second is the development of an absorptive capacity. These hypotheses are tested quantitatively with a production function model for a panel data set of 16 OECD countries from 1990 to 2001. The results show that the accumulation of VC is a significant factor contributing directly to Multi-Factor Productivity (MFP) growth.
The social rate of return to VC is significantly higher than the social rate of return to business or public R&D. VC also has an indirect impact on MFP in the sense that it improves the output elasticity of R&D. An increased VC intensity makes it easier to absorb the knowledge generated by universities and firms, and therefore improves aggregate economic performance.

Technological Opportunity, Entrepreneurial Environment and Venture Capital Development

The objective of this chapter is to identify the main determinants of venture capital. We develop a theoretical model where three main types of factors affect the demand and supply of VC: macroeconomic conditions, technological opportunity, and the entrepreneurial environment. The model is evaluated with a panel dataset of 16 OECD countries over the period 1990-2000. The estimates show that VC intensity is pro-cyclical: it reacts positively and significantly to GDP growth. Interest rates affect VC intensity mainly because entrepreneurs create a demand for this type of funding. Indicators of technological opportunity, such as the stock of knowledge and the number of triadic patents, positively and significantly affect the relative level of VC. Labour market rigidities reduce the impact of the GDP growth rate and of the stock of knowledge, whereas a minimum level of entrepreneurship is required for the available stock of knowledge to have a positive effect on VC intensity.

PART 2: TECHNOLOGY-BASED SMALL FIRMS

Survey in Belgium

The first purpose of this chapter is to present the existing literature on the performance of companies. In order to get a quantitative insight into the entrepreneurial growth process, an original survey of TBSF in Belgium was launched in 2002. The second purpose is to describe the methodology of our national TBSF survey. This survey has two main merits. The first lies in the quality of the information.
Indeed, most national and international surveys have been developed at the firm level; only a few exist at the founder level. The TBSF database contains information at both the firm and the entrepreneur level. The second merit concerns the subjects covered. The TBSF survey tackles the financing of firms (availability of public funds, role of venture capitalists, availability of business angels, etc.), the framework conditions (e.g. the quality and availability of infrastructure and communication channels, the level of academic and public research, the patenting process) and, finally, the socio-cultural factors associated with the entrepreneurs and their environment (e.g. their level of education, their parents' education, gender).

Statistical Evidence

A main characteristic of the companies in our sample is that employment and profits net of taxation do not follow the same trend: employment may decrease while results after taxes stay constant. Only a few companies enjoyed growth in both employment and results after taxes between 1998 and 2003. On the financing front, our findings suggest that internal finance, in the form of personal funds as well as the funds of family and friends, is the primary source of capital to start up a high-tech company in Belgium. Entrepreneurs rely on their own personal savings in 84 percent of the cases. Commercial bank loans are the secondary source of finance; this form of external financing (debt finance) exceeds the combined angel and venture capital funds (equity finance). On the entrepreneur front, the preliminary results show that 80 percent of the entrepreneurs in this study have a university degree, while 42 percent hold postgraduate degrees (i.e. master's or doctorate). In terms of research activities, 88 percent of the entrepreneurs holding a Ph.D. or a post-doctorate collaborate with Belgian higher education institutes.
Moreover, more than 90 percent of these entrepreneurs work in a university spin-off.

The Contribution of Universities to Employment Growth

The objective of this chapter is to test whether universities play a role among the determinants of employment growth in Belgian TBSF. The empirical model is based on our original survey of 87 Belgian TBSF. The results suggest that both academic spin-offs and TBSF created on the basis of an idea originating from business R&D activities are associated with above-average growth in employees. As most high-tech entrepreneurs hold at least a university degree, the level of education has no significant impact. Nevertheless, these results must be taken with caution, as they are highly sensitive to the presence of outliers. Young high-tech firms are by definition highly volatile and may therefore be difficult to understand.

CONCLUSION

In this last chapter, recommendations for policy-makers are drawn from the results of the thesis. The possible interventions of governments are classified according to whether they influence the demand or the supply of entrepreneurship and/or VC. We present some possible actions, such as direct intervention in VC funds; public-sector interventions concerning labour market rigidities, the pension system, patent and research policy, the level of entrepreneurial activity, bankruptcy legislation, entrepreneurial education and the development of university spin-offs; and the creation of a national database of TBSF.

/ Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
