11

Essays on real-time econometrics and forecasting

Modugno, Michèle 14 September 2011
The thesis contains four essays covering topics in the field of real-time econometrics and forecasting.

The first Chapter, entitled “An area wide real time data base for the euro area” and coauthored with Domenico Giannone, Jerome Henry and Magda Lalik, describes the construction of a real-time database for the euro area covering more than 200 series regularly published in the European Central Bank Monthly Bulletin, as made available ahead of publication to the Governing Council members before their first meeting of the month.

Recent research has emphasised that data revisions can be large for certain indicators and can have a bearing on the decisions made, as well as affect the assessment of their relevance. It is therefore key to be able to reconstruct the historical environment of economic decisions at the time they were made by private agents and policy-makers, rather than using the data as they become available some years later. For this purpose, the information must be available in the form of all the different vintages of data as they were published in real time, the so-called "real-time data" that reflect the economic situation at a given point in time when models are estimated or policy decisions made.

We describe the database in detail and study the properties of the euro area real-time data flow and data revisions, also providing comparisons with the United States and Japan. We finally illustrate how such revisions can contribute to the uncertainty surrounding key macroeconomic ratios and the NAIRU.

The second Chapter, entitled “Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data”, is based on joint work with Marta Banbura. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone et al. (2008), we can handle datasets that are not only characterised by a "ragged edge", but can include, e.g., mixed-frequency or short-history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach, which, apart from its flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters. It has been shown by Doz et al. (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation for large cross-sections, Doz et al. (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm. Our contribution is to modify the EM steps to handle missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision, and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases.

We use this methodology for short-term forecasting and backdating of euro area GDP on the basis of a large panel of monthly and quarterly data.
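To fix ideas, the sketch below shows the EM logic for a factor model with missing entries in a deliberately stylized form: impute the missing cells, re-estimate factors and loadings by principal components, and re-impute with the fitted common component. The Chapter's actual estimator is the maximum-likelihood version with a Kalman-smoother E-step and serially correlated idiosyncratic terms, which this sketch omits; all names here are illustrative.

```python
import numpy as np

def em_factor_model(X, r, n_iter=50):
    """Stylized EM/PCA for X_t = Lambda f_t + e_t with missing entries (NaN).
    Alternates between re-estimating factors and loadings on a completed
    panel and re-imputing the missing cells with the fitted common
    component. Illustrative sketch, not the Chapter's ML estimator."""
    mask = ~np.isnan(X)
    Xf = np.where(mask, X, np.nanmean(X, axis=0))   # start from column means
    for _ in range(n_iter):
        mu = Xf.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xf - mu, full_matrices=False)
        F = U[:, :r] * s[:r]             # T x r factor estimates
        Lam = Vt[:r].T                   # N x r loadings
        common = F @ Lam.T + mu          # fitted common component
        Xf = np.where(mask, X, common)   # re-impute only the missing cells
    return F, Lam, Xf
```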
The third Chapter is entitled “Nowcasting Inflation Using High Frequency Data”. It proposes a methodology for nowcasting and forecasting inflation using data sampled at a higher than monthly frequency, focusing in particular on the energy component of inflation, given the availability of data such as the Weekly Oil Bulletin Price Statistics for the euro area, the Weekly Retail Gasoline and Diesel Prices for the US, and the daily spot and futures prices of crude oil.

Although nowcasting inflation is a novel idea, there is a rather long literature on nowcasting GDP: the use of higher-frequency indicators to nowcast or forecast lower-frequency variables started with monthly data for GDP, a quarterly variable released with a substantial delay (e.g. two months after the end of the reference quarter for euro area GDP).

The estimation adopts the methodology described in Chapter 2, modelling the data as a factor model at trading-day frequency with missing observations, cast in state-space representation. In contrast to other procedures, the proposed methodology models all the data within a unified single framework that allows one to produce forecasts of all the variables involved from a factor model which, by definition, does not suffer from overparametrisation. Moreover, this offers the possibility to disentangle the model-based "news" in each release and then to assess its impact on the forecast revision. The Chapter provides an illustrative example of this procedure, focusing on a specific month.

In order to assess the importance of using high-frequency data for forecasting inflation, this Chapter compares the forecast performance of univariate models, i.e. a random walk and an autoregressive process, with that of the model using weekly and daily data. The empirical evidence shows that exploiting high-frequency data on oil allows us to nowcast and forecast the energy component of inflation with about twice the precision of the proposed benchmarks, and that a similar improvement obtains even for total inflation.

The fourth Chapter, entitled “The forecasting power of international yield curve linkages” and coauthored with Kleopatra Nikolaou, investigates dependency patterns between the yield curves of Germany and the US by means of an out-of-sample forecast exercise.

The motivation for this Chapter stems from the fact that our knowledge to date of dependency patterns among the yield curves of different countries is limited. The empirical evidence in the yield curve literature documents strong contemporaneous interdependencies of yield curves across countries, in line with increased globalization and financial integration; it does not, however, investigate non-contemporaneous correlations. And yet clear indications in favour of such dependency patterns are recorded in studies focusing on specific interest rates, which look at the role of certain countries as global players (see Frankel et al. (2004), Chinn and Frankel (2005) and Wang et al. (2007)). Evidence from these studies suggests a leading role for the US. Moreover, dependency patterns recorded in the real business cycles of the US and the euro area (Giannone and Reichlin, 2007) can also rationalize such linkages, to the extent that output affects nominal interest rates.

We propose, estimate and forecast (out-of-sample) a novel dynamic factor model for the yield curve, in which dynamic information from foreign yield curves is introduced into domestic yield curve forecasts. This is the International Dependency Model (IDM).
We compare the yield curve forecasts under the IDM with those of a purely domestic model and of a model that allows for contemporaneous common global factors. These models serve as useful benchmarks: the domestic model bears direct modelling links with the IDM, as it can be seen as a nested version of it, while the global model bears less direct links in terms of modelling but, like the IDM, is an international model that serves to highlight the advantages of introducing international information into yield curve forecasts. However, the global model aims at identifying contemporaneous linkages between the yield curves of the two countries, whereas the IDM also allows for detecting dynamic dependency patterns.

Our results show that shocks appear to be diffused in a rather asymmetric manner across the two countries. Namely, we find a unidirectional causality effect that runs from the US to Germany. This effect is stronger over the last ten years, where out-of-sample forecasts for Germany that use US information are even more accurate than random walk forecasts. Our statistical results demonstrate a more independent role for the US. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
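As an illustration of the modelling idea behind the IDM, the sketch below extracts Diebold-Li level, slope and curvature factors from each country's yields and forecasts the German factors from their own lags augmented with lagged US factors. The fixed decay parameter, the single lag and the function names are assumptions of this sketch, not the Chapter's exact specification.

```python
import numpy as np

def nelson_siegel_loadings(maturities, lam=0.0609):
    """Nelson-Siegel loadings (level, slope, curvature), Diebold-Li style.
    Maturities in months, strictly positive."""
    m = np.asarray(maturities, float)
    x = lam * m
    slope = (1 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(m), slope, slope - np.exp(-x)])

def extract_factors(yields, maturities):
    """Cross-sectional OLS each period: y_t = B f_t + e_t."""
    B = nelson_siegel_loadings(maturities)
    return yields @ B @ np.linalg.inv(B.T @ B)   # T x 3 factor estimates

def idm_forecast(f_de, f_us):
    """One-step forecast of German factors from their own lags plus
    lagged US factors, the dependency idea of the IDM in stylized form."""
    Y = f_de[1:]
    X = np.hstack([f_de[:-1], f_us[:-1], np.ones((len(Y), 1))])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    x_T = np.hstack([f_de[-1], f_us[-1], 1.0])
    return x_T @ coef   # forecast of the three German factors
```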
12

Structural models for macroeconomics and forecasting

De Antonio Liedo, David 03 May 2010
This thesis is composed of three independent papers that investigate central debates in empirical macroeconomic modeling.

Chapter 1, entitled “A Model for Real-Time Data Assessment with an Application to GDP Growth Rates”, provides a model for the data revisions of macroeconomic variables that distinguishes between rational expectation updates and noise corrections. The model thus encompasses the two polar views regarding the publication process of statistical agencies: noise versus news. Most of the previous studies that analyze data revisions are based on the classical noise-and-news regression approach introduced by Mankiw, Runkle and Shapiro (1984). The problem is that the statistical tests available do not formulate both extreme hypotheses as collectively exhaustive, as recognized by Aruoba (2008); that is, it would be possible to reject or accept both of them simultaneously. In turn, the model for the DPP presented here allows for the simultaneous presence of both noise and news. While the “regression approach” followed by Faust et al. (2005), along the lines of Mankiw et al. (1984), identifies noise in the preliminary figures, it cannot quantify it, as our model does.

The second and third chapters acknowledge the possibility that macroeconomic data are measured with errors, but the approach followed to model the mismeasurement is extremely stylized and does not capture the complexity of the revision process described in the first chapter.

Chapter 2, entitled “Revisiting the Success of the RBC model”, proposes the use of dynamic factor models as an alternative to VAR-based tools for the empirical validation of dynamic stochastic general equilibrium (DSGE) theories. Along the lines of Giannone et al. (2006), we use the state-space parameterisation of the factor models proposed by Forni et al. (2007) as a competitive benchmark that is able to capture weak statistical restrictions that DSGE models impose on the data. Our empirical illustration compares the out-of-sample forecasting performance of a simple RBC model, augmented with a serially correlated noise component, against several specifications belonging to classes of dynamic factor and VAR models. Although the performance of the RBC model is comparable to that of the reduced-form models, a formal test of predictive accuracy reveals that the weak restrictions are more useful for forecasting than the strong behavioral assumptions imposed by the microfoundations in the model economy.

The last chapter, “What are Shocks Capturing in DSGE modeling”, contributes to current debates on the use and interpretation of larger DSGE models. A recent tendency in academic work and at central banks is to develop and estimate large DSGE models for policy analysis and forecasting. These models typically have many shocks (e.g. Smets and Wouters, 2003 and Adolfson, Laseen, Linde and Villani, 2005). On the other hand, empirical studies point out that a few large shocks are sufficient to capture the covariance structure of macro data (Giannone, Reichlin and Sala, 2005, Uhlig, 2004). In this Chapter, we propose to reconcile both views by considering an alternative DSGE estimation approach which explicitly models the statistical agency, along the lines of Sargent (1989). This enables us to distinguish whether the exogenous shocks in DSGE modeling are structural or instead serve the purpose of fitting the data in the presence of misspecification and measurement problems.
When applied to the original Smets and Wouters (2007) model, we find that the explanatory power of the structural shocks decreases at high frequencies. This allows us to back out a smoother measure of the natural output gap than that resulting from the original specification. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
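The classical regressions that the first chapter builds upon and criticizes are easy to state in code. The sketch below implements "news" and "noise" tests in the spirit of Mankiw, Runkle and Shapiro (1984); the chapter's point is precisely that these two hypotheses are not collectively exhaustive, whereas its own model lets both components coexist. A minimal sketch assuming aligned vectors of first-release and final values, not the chapter's model.

```python
import numpy as np

def news_noise_tests(prelim, final):
    """Classical 'news vs noise' regressions on data revisions.
    prelim, final: aligned vectors of first-release and revised values."""
    rev = final - prelim

    def ols_slope_t(x, y):
        X = np.column_stack([np.ones_like(x), x])
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ b
        s2 = resid @ resid / (len(y) - 2)
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        return b[1], b[1] / se

    # Pure news: revisions are unforecastable from the preliminary
    # figure, so the slope of rev on prelim should be zero.
    news_slope, news_t = ols_slope_t(prelim, rev)
    # Pure noise: the preliminary figure is final plus error, so the
    # revision should be uncorrelated with the final value.
    noise_slope, noise_t = ols_slope_t(final, rev)
    return {"news": (news_slope, news_t), "noise": (noise_slope, noise_t)}
```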
13

Amélioration des mesures de performance conditionnelles des fonds mutuels américains

Morel, Nandrasana Pascal 18 February 2021
This study is concerned with improving the conditional performance measures of Ferson and Schadt (1996) and Christopherson, Ferson and Glassman (1998), and pursues two main objectives. The first is to compare the significance of the alphas and timing coefficients obtained by conditioning on traditional economic variables, on the one hand, and on improved forecasting variables, on the other. For the latter, we use combined forecasts that aggregate fifteen individual out-of-sample forecasts of economic variables. The second objective is to analyse the proposed specifications to see whether alpha estimates are higher in recessions than in expansions. To this end, we use a sample of 1,104 US equity mutual funds over the period from January 1987 to December 2016. Across several conditioning variables, the results show that there are few funds for which the differences in alphas and timing coefficients are significant. Performance and timing are therefore similar whether traditional or improved conditioning variables are used. These results suggest that the potential of combined forecasts as conditioning variables in conditional performance measures is weak. Our results also show fund performance that is not statistically different between expansions and recessions. However, the timing coefficient declines in an economically important way in recessions, suggesting that timing is better in expansions. Even though our results are affected by survivorship bias and by a small number of recession observations, we must conclude that they do not validate our two research hypotheses: that the forecast-combination approach of Rapach, Strauss and Zhou (2010) yields improved conditioning variables, and that performance is positive in recessions and negative in expansions, as in Kacperczyk, Van Nieuwerburgh and Veldkamp (2014).
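For concreteness, the conditional performance regression underlying the study, in the spirit of Ferson and Schadt (1996) and Christopherson, Ferson and Glassman (1998), can be sketched as follows: both alpha and beta are allowed to move with lagged, demeaned instruments Z, and replacing the columns of Z with combined out-of-sample forecasts would give the improved conditioning variables the study tests. A minimal OLS sketch with illustrative names.

```python
import numpy as np

def conditional_alpha_regression(r_fund, r_mkt, Z):
    """Conditional alpha/beta regression sketch.
    r_fund, r_mkt: fund and market excess returns at t (length T).
    Z: (T, k) lagged, demeaned conditioning instruments (dated t-1)."""
    T, k = Z.shape
    X = np.column_stack([
        np.ones(T),         # constant: average conditional alpha
        Z,                  # time-varying alpha terms
        r_mkt,              # average conditional beta
        Z * r_mkt[:, None]  # time-varying beta terms
    ])
    coef, *_ = np.linalg.lstsq(X, r_fund, rcond=None)
    return {"alpha0": coef[0], "alpha_z": coef[1:1 + k],
            "beta0": coef[1 + k], "beta_z": coef[2 + k:]}
```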
14

Modèle factoriel dynamique contraint à régimes markoviens pour l'évaluation en temps réel du cycle économique

Vlavonou, Firmin 19 April 2018
This thesis is composed of three essays on dynamic factor models for real-time forecasting. The main objective is to provide frameworks for high-frequency business cycle analysis in the presence of data revisions. This is relevant for three reasons. First, business cycle forecasting is a central question in macroeconometrics. Second, policy-makers would benefit from having access to timely, high-frequency information about business conditions to inform their decisions.
Finally, decisions must frequently be made based on data that are subject to revision, and this data uncertainty should be incorporated into the decision-making process. After a review of the empirical business cycle literature and of models of business cycle turning points, we propose a rigorous framework for estimating monthly real US Gross Domestic Product (GDP). A recurring problem in this class of models is that estimates of monthly GDP are generally not consistent with the quarterly estimates, just as quarterly estimates are in turn not consistent with annual data. Our approach solves this problem and facilitates within-period interpretation. In the first essay (chapter 2), we develop and estimate a dynamic factor model treating monthly GDP as a latent variable. In contrast with existing approaches, the quarterly averages of our monthly estimates are exactly equal to the Bureau of Economic Analysis quarterly estimates. By construction, our monthly estimates have the advantage of being both timely and easy to interpret. The second essay (chapter 3) extends this framework by adding a Markov-switching model of business cycle regimes to the dynamic factor model. The model is now one with three levels, two of which have latent dependent variables. We pay particular attention to the sensitivity of the usual indicators at turning points: the industrial production index and manufacturing and trade sales transmit more information about business cycle shocks to the common component (monthly GDP) than does employment. Finally, in the third essay (chapter 4), we integrate data revisions into our Markov-switching dynamic factor model in order to evaluate the effects of the revision process on monthly estimates. It appears that data revisions have a significant impact on the co-movement of variables and on turning points, without compromising the asymmetric nature of the business cycle. Keywords: Dynamic Factor Model (DFM), high frequency, real time, Markov switching, unobservable components, revisions, co-movement, turning points, asymmetry, business cycle.
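The consistency property stressed above, that monthly estimates must average exactly to the published quarterly figures, can be illustrated with a crude benchmarking step. The thesis imposes the constraint inside a state-space model estimated with the Kalman smoother; the sketch below conveys only the flavour of the idea with a simple additive within-quarter correction, and its names are illustrative.

```python
import numpy as np

def benchmark_monthly_to_quarterly(indicator, gdp_q):
    """Force the quarterly averages of a monthly series to equal the
    published quarterly values. Assumes a whole number of quarters with
    months ordered in time; a crude stand-in for the thesis's
    state-space solution."""
    m = np.asarray(indicator, float).reshape(-1, 3)   # quarters x months
    target = np.asarray(gdp_q, float)
    adjusted = m + (target - m.mean(axis=1))[:, None]
    return adjusted.ravel()   # quarterly means now equal gdp_q exactly
```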
15

Three essays in international finance and macroeconomics

Nono, Simplice Aimé 24 April 2018
This thesis examines the effect of information on macroeconomic forecasting. Specifically, the emphasis is placed first on the impact of information frictions in an open economy on bilateral exchange rate forecasting, and then on the role of information from confidence survey data in forecasting real economic activity. Building on the new open economy macroeconomics (NOEM) paradigm, the first chapter incorporates information frictions and nominal rigidities into an open-economy dynamic stochastic general equilibrium (DSGE) model. It then presents a comparative analysis of the exchange rate forecasts obtained using the model with and without these information frictions.
While the first chapter develops a structural macroeconomic model of the DSGE type to analyze the effect of shock transmission under incomplete information on exchange rate dynamics between two economies, the second and third chapters use static and dynamic factor models with targeting to highlight the contribution of the information contained in confidence survey data (at either the national or international level) to forecasting real economic activity.

The first chapter, entitled The Forward Premium Puzzle: a Learning-based Explanation, is a contribution to the exchange rate forecasting literature. When interest rates are higher in one's home country than abroad, economic intuition suggests this signals that the home currency will depreciate in the future. However, empirical evidence has been found to be at odds with this intuition: this is the "forward premium puzzle". I propose a learning-based explanation for this puzzle. To do so, I embed an information problem in a two-country open-economy model with nominal rigidities. The information friction arises because economic agents do not directly observe whether shocks are transitory or permanent and must infer their nature using a filtering mechanism each period. We simulate the model with and without this informational friction and test whether the generated artificial data exhibit the symptoms of the forward premium puzzle. Our learning-based explanation is validated, as only the data generated with the active informational friction replicate the puzzle.

The second chapter, Using Confidence Data to Forecast the Canadian Business Cycle, uses dynamic factor models to highlight the contribution of the information contained in Canadian confidence survey data to forecasting the Canadian business cycle. It builds on the fact that confidence (or sentiment) is one of the key indicators of economic momentum, and assesses the contribution of confidence data in predicting Canadian economic slowdowns. A probit framework is applied to an indicator of the status of the Canadian business cycle produced by the OECD. Explanatory variables include all available Canadian data on sentiment (which come from four different surveys) as well as various macroeconomic and financial data. Sentiment data are introduced as individual variables, as simple averages (such as confidence indices), or as confidence factors extracted, via principal components, from a larger dataset in which all available sentiment data have been collected. Our findings indicate that the full potential of sentiment data for forecasting future business cycles in Canada is attained when all the data are used through factor models.

The third chapter, Forecasting with Many Predictors: How Useful are National and International Confidence Data?, is based on the fact that in a data-rich environment information may become redundant, so that a selection of forecasting determinants based on the quality of information is required. The chapter investigates whether, in such an environment, confidence data can constitute a major determinant of economic activity forecasting. To do so, a targeted dynamic factor model is used to evaluate the performance of national and international confidence survey data in predicting Canadian GDP growth.
We first examine the relationship between Canadian GDP and confidence, and assess whether Canadian and international (US) confidence data improve forecasting accuracy after controlling for classical predictors. We next consider disaggregated confidence survey data in a data-rich environment (containing more than a thousand macroeconomic and financial series) and assess their information content in excess of that contained in macroeconomic and financial variables. Throughout, we investigate the predictive power of confidence data by producing GDP forecasts with dynamic factor models in which the factors are derived with and without confidence data. We find that forecasting ability is consistently improved by considering information from national confidence data; by contrast, the international counterparts are helpful only when combined in the same set with national confidence data. Moreover, the most relevant gains in forecast performance come at short horizons (up to three quarters ahead).
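A factor-augmented probit of the kind used in the second essay can be sketched in a few lines: extract principal components from a standardized panel that includes the confidence series, then fit the probability of a recession signal a few periods ahead. The factor count, the horizon and the names are assumptions of this sketch, not the essay's specification.

```python
import numpy as np
import statsmodels.api as sm

def factor_probit(panel, recession, n_factors=3, horizon=3):
    """Factor-augmented probit for turning points.
    panel: (T, N) array of macro, financial and confidence series.
    recession: (T,) 0/1 indicator of the business cycle state."""
    X = (panel - panel.mean(0)) / panel.std(0)     # standardize the panel
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    F = U[:, :n_factors] * s[:n_factors]           # principal-component factors
    y = recession[horizon:]                        # lead the target by h periods
    Xr = sm.add_constant(F[:-horizon])
    return sm.Probit(y, Xr).fit(disp=0)            # .predict() gives probabilities
```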
16

Prévisions des importations des pays développés capitalistes en provenance du monde sous-développé

Kestens, Paul January 1968
Doctorat en sciences sociales, politiques et économiques / info:eu-repo/semantics/nonPublished
17

Essays on the economics of risk and uncertainty

Berger, Loïc 22 June 2012
In the first chapter of this thesis, I use the smooth ambiguity model developed by Klibanoff, Marinacci, and Mukerji (2005) to define the concepts of ambiguity and uncertainty premia in a way analogous to what Pratt (1964) did in the risk theory literature. I show that these concepts may be useful to quantify the effect ambiguity has on the welfare of economic agents. I also define several other concepts, such as the unambiguous probability equivalent or the ambiguous utility premium, provide local approximations of these different premia, and show the link that exists between them when comparing different degrees of ambiguity aversion, not only in the small but also in the large.

In the second chapter, I analyze the effect of ambiguity on self-insurance and self-protection, which are tools used to deal with the uncertainty of facing a monetary loss when market insurance is not available (in the self-insurance model, the decision maker can exert effort to reduce the size of the loss occurring in the bad state of the world, while in the self-protection, or prevention, model the effort reduces the probability of being in the bad state). In a short note, I first examine, in the context of a two-period model, the links between risk aversion, prudence and self-insurance/self-protection activities under risk. Contrary to the results obtained in the static one-period model, I show that the impacts of prudence and of risk aversion go in the same direction and generate a higher level of prevention in the more usual situations. I also show that the results concerning self-insurance in a single-period framework may be easily extended to a two-period context. I then consider two-period self-insurance and self-protection models in the presence of ambiguity and analyze the effect of ambiguity aversion. I show that in the most common situations, ambiguity prudence is a sufficient condition to observe an increase in the level of effort. I propose an interpretation of the model in the context of climate change, so that self-insurance and self-protection are respectively seen as adaptation and mitigation efforts a policy-maker should provide to deal with an uncertain catastrophic event, and I interpret the results obtained as an expression of the Precautionary Principle.

In the third chapter, I introduce the economic theory developed to deal with ambiguity into the context of medical decision-making. I show that, under diagnostic uncertainty, an increase in ambiguity aversion always leads a physician whose goal is to act in the best interest of his patient to choose a higher level of treatment. In the context of a dichotomous choice (treatment versus no treatment), this result implies that taking into account the attitude agents generally manifest towards ambiguity may induce a physician to change his decision by opting for treatment more often. I further show that under therapeutic uncertainty the opposite happens, i.e. an ambiguity-averse physician may eventually choose not to treat a patient who would have been treated under ambiguity neutrality. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
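The smooth ambiguity framework of the first chapter lends itself to a small numerical illustration. Below, the inner stage computes expected utility under each candidate probability model, the per-model certainty equivalents are then aggregated with a more concave function, and the gap between the ambiguity-neutral and ambiguity-averse certainty equivalents plays the role of the ambiguity premium discussed above. All parameter values are arbitrary and chosen only to make the premium visible.

```python
import numpy as np

def crra(c, g):       # CRRA utility with coefficient g
    return np.log(c) if g == 1 else c**(1 - g) / (1 - g)

def crra_inv(v, g):   # its inverse (certainty equivalent)
    return np.exp(v) if g == 1 else ((1 - g) * v)**(1 / (1 - g))

g, eta = 2.0, 10.0            # risk aversion, ambiguity aversion (eta > g)
models = [0.5, 0.7]           # two equally likely candidate probabilities
payoffs = np.array([50.0, 150.0])

# Inner stage: expected utility under each candidate model
eu = [p * crra(payoffs[1], g) + (1 - p) * crra(payoffs[0], g) for p in models]
ce_inner = [crra_inv(v, g) for v in eu]            # per-model certainty equivalents
# Outer stage: smooth aggregation with a more concave CRRA function
V = np.mean([crra(ce, eta) for ce in ce_inner])
ce_ambiguity = crra_inv(V, eta)
ce_neutral = crra_inv(np.mean(eu), g)              # ambiguity-neutral benchmark
print(ce_neutral - ce_ambiguity)                   # > 0: the ambiguity premium
```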
18

Essays in dynamic macroeconometrics

Bańbura, Marta 26 June 2009
The thesis contains four essays covering topics in the field of macroeconomic forecasting.

The first two chapters consider factor models in the context of real-time forecasting with many indicators. Using a large number of predictors offers an opportunity to exploit a rich information set and is also considered a more robust approach in the presence of instabilities. On the other hand, it poses the challenge of how to extract the relevant information in a parsimonious way. Recent research shows that factor models provide an answer to this problem. The fundamental assumption underlying those models is that most of the co-movement of the variables in a given dataset can be summarized by only a few latent variables, the factors. This assumption seems to be warranted in the case of macroeconomic and financial data. Important theoretical foundations for large factor models were laid by Forni, Hallin, Lippi and Reichlin (2000) and Stock and Watson (2002). Since then, different versions of factor models have been applied for forecasting, structural analysis or the construction of economic activity indicators. Recently, Giannone, Reichlin and Small (2008) have used a factor model to produce projections of US GDP in the presence of a real-time data flow. They propose a framework that can cope with large datasets characterised by staggered and non-synchronous data releases (sometimes referred to as a “ragged edge”). This is relevant as, in practice, important indicators like GDP are released with a substantial delay and, in the meantime, more timely variables can be used to assess the current state of the economy.

The first chapter of the thesis, entitled “A look into the factor model black box: publication lags and the role of hard and soft data in forecasting GDP”, is based on joint work with Gerhard Rünstler and applies the framework of Giannone, Reichlin and Small (2008) to the case of the euro area. In particular, we are interested in the role of “soft” and “hard” data in the GDP forecast and how it is related to their timeliness. The soft data include surveys and financial indicators and reflect market expectations; they are usually promptly available. In contrast, the hard indicators on real activity measure directly certain components of GDP (e.g. industrial production) and are published with a significant delay. We propose several measures to assess the role of individual series, or groups of series, in the forecast while taking into account their respective publication lags. We find that surveys and financial data contain important information beyond the monthly real activity measures for the GDP forecasts, once their timeliness is properly accounted for.

The second chapter, entitled “Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data”, is based on joint work with Michele Modugno. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone, Reichlin and Small (2008), we can handle datasets that are not only characterised by a “ragged edge”, but can include, e.g., mixed-frequency or short-history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach which, apart from its flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters.
Applied to small factor models by e.g. Geweke (1977), Sargent and Sims (1977) and Watson and Engle (1983), it has been shown by Doz, Giannone and Reichlin (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation for large cross-sections, Doz, Giannone and Reichlin (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm (used for the small model by Watson and Engle, 1983). Our contribution is to modify the EM steps to the case of missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision, and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of euro area GDP on the basis of a large panel of monthly and quarterly data. In particular, we are able to examine the effect on the forecast of quarterly variables and of short-history monthly series like the Purchasing Managers' surveys.

The third chapter is entitled “Large Bayesian VARs” and is based on joint work with Domenico Giannone and Lucrezia Reichlin. It proposes an alternative approach to factor models for dealing with the curse of dimensionality, namely Bayesian shrinkage. We study Vector Autoregressions (VARs), which have the advantage over factor models that they allow structural analysis in a natural way. We consider systems including more than 100 variables; this is the first application in the literature to estimate a VAR of this size. Apart from the forecast considerations argued above, the size of the information set can also be relevant for structural analysis, see e.g. Bernanke, Boivin and Eliasz (2005), Giannone and Reichlin (2006) or Christiano, Eichenbaum and Evans (1999) for a discussion. In addition, many problems may require the study of the dynamics of many variables: many countries, sectors or regions. While we use standard priors as proposed by Litterman (1986), an important novelty of the work is that we set the overall tightness of the prior in relation to the model size. In this we follow the recommendation of De Mol, Giannone and Reichlin (2008), who study the case of Bayesian regressions. They show that with increasing model size one should shrink more to avoid overfitting, but that when data are collinear one is still able to extract the relevant sample information. We apply this principle to the case of VARs. We compare the large model with smaller systems in terms of forecasting performance and of the structural analysis of the effect of a monetary policy shock. The results show that a standard Bayesian VAR model is an appropriate tool for large panels of data once the degree of shrinkage is set in relation to the model size.

The fourth chapter, entitled “Forecasting euro area inflation with wavelets: extracting information from real activity and money at different scales”, proposes a framework for exploiting relationships between variables at different frequency bands in the context of forecasting. This work is motivated by the ongoing debate on whether money provides a reliable signal for future price developments. The empirical evidence on the leading role of money for inflation in an out-of-sample forecast framework is not very strong, see e.g.
Lenza (2006) or Fisher, Lenza, Pill and Reichlin (2008). At the same time, e.g. Gerlach (2003) and Assenmacher-Wesche and Gerlach (2007, 2008) argue that money and output could affect prices at different frequencies; however, their analysis is performed in-sample. In this chapter, it is investigated empirically which frequency bands, and for which variables, are the most relevant for the out-of-sample forecast of inflation when the information from prices, money and real activity is considered. To extract the different frequency components of a series, a wavelet transform is applied. It provides a simple and intuitive framework for band-pass filtering and allows a decomposition of series into different frequency bands. Its application to multivariate out-of-sample forecasting is novel in the literature. The results indicate that, indeed, different scales of money, prices and GDP can be relevant for the inflation forecast. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
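The third chapter's key ingredient, Minnesota-style shrinkage with overall tightness tied to model size, reduces in its simplest conjugate form to a ridge-type posterior mean. The sketch below uses a single shrinkage scalar and a prior centred on zero, whereas Litterman's prior centres own first lags at one and scales by residual variances; it conveys the mechanism, not the chapter's implementation, and its names are illustrative.

```python
import numpy as np

def bvar_shrinkage_ridge(Y, p=2, lam=0.1):
    """Bayesian VAR(p) coefficients as a ridge posterior mean.
    lam controls overall tightness; the chapter's recommendation,
    following De Mol, Giannone and Reichlin (2008), is to shrink more
    (use a smaller lam) as the number of variables N grows."""
    T, N = Y.shape
    X = np.hstack([Y[p - i - 1:T - i - 1] for i in range(p)])  # lags 1..p
    X = np.hstack([X, np.ones((T - p, 1))])                    # constant
    Yt = Y[p:]
    k = X.shape[1]
    # Posterior mean (X'X + I/lam^2)^{-1} X'Y under a zero-centred prior
    B = np.linalg.solve(X.T @ X + np.eye(k) / lam**2, X.T @ Yt)
    return B   # (N*p + 1) x N coefficient matrix
```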
19

Essays on aggregation and cointegration of econometric models

Silvestrini, Andrea 02 June 2009
This dissertation can be broadly divided into two independent parts. The first three chapters analyse issues related to temporal and contemporaneous aggregation of econometric models. The fourth chapter contains an application of Bayesian techniques to investigate whether the post-transition fiscal policy of Poland is sustainable in the long run and consistent with an intertemporal budget constraint.

Chapter 1 surveys the econometric methodology of temporal aggregation for a wide range of univariate and multivariate time series models. A unified overview of temporal aggregation techniques for this broad class of processes is presented in the first part of the chapter and the main results are summarized. In each case, assuming the underlying process at the disaggregate frequency to be known, the aim is to find the appropriate model for the aggregated data. Additional topics concerning temporal aggregation of ARIMA-GARCH models (see Drost and Nijman, 1993) are discussed and several examples are presented. Systematic sampling schemes are also reviewed.

Multivariate models, which show interesting features under temporal aggregation (Breitung and Swanson, 2002, Marcellino, 1999, Hafner, 2008), are examined in the second part of the chapter. In particular, the focus is on temporal aggregation of VARMA models and on the related concept of spurious instantaneous causality, which is not a time series property invariant to temporal aggregation. On the other hand, as pointed out by Marcellino (1999), other important time series features, such as cointegration and the presence of unit roots, are invariant to temporal aggregation and are not induced by it. Some empirical applications based on macroeconomic and financial data illustrate all the techniques surveyed and the main results.

Chapter 2 is an attempt to monitor fiscal variables in the euro area, building an early warning signal indicator for assessing the development of public finances in the short run and exploiting the existence of monthly budgetary statistics from France, taken as an "example country". The application focuses on the cash State deficit, looking at components from the revenue and expenditure sides. For each component, monthly ARIMA models are estimated and then temporally aggregated to the annual frequency, as policy makers are interested in yearly predictions.

The short-run forecasting exercises carried out for the years 2002, 2003 and 2004 highlight the fact that the one-step-ahead predictions based on the temporally aggregated models generally outperform those delivered by standard monthly ARIMA modelling, as well as the official forecasts made available by the French government, for each of the eleven components and thus for the whole State deficit. More importantly, by the middle of the year, very accurate predictions for the current year are made available. The proposed method could be extremely useful, providing policy makers with a valuable indicator for assessing the development of public finances in the short run (a one-year horizon or less).

Chapter 3 deals with the issue of forecasting contemporaneous time series aggregates. The performance of "aggregate" and "disaggregate" predictors in forecasting contemporaneously aggregated vector ARMA (VARMA) processes is compared. An aggregate predictor is built by forecasting directly the aggregate process, as it results from contemporaneous aggregation of the data-generating vector process.
A disaggregate predictor is obtained by aggregating univariate forecasts for the individual components of the data-generating vector process. The econometric framework is broadly based on Lütkepohl (1987). The necessary and sufficient condition for the equality of the mean squared errors associated with the two competing methods in the bivariate VMA(1) case is provided. It is argued that the condition for the equality of predictors stated in Lütkepohl (1987), although necessary and sufficient for the equality of the predictors, is only sufficient (not necessary) for the equality of the mean squared errors. Furthermore, it is shown that the same forecasting accuracy for the two predictors can be achieved using specific assumptions on the parameters of the VMA(1) structure. Finally, an empirical application involving the problem of forecasting the Italian monetary aggregate M1, on the basis of annual time series ranging from 1948 to 1998, prior to the creation of the European Economic and Monetary Union (EMU), is presented to show the relevance of the topic. In the empirical application, the framework is further generalized to deal with heteroskedastic and cross-correlated innovations.

Chapter 4 deals with a cointegration analysis applied to the empirical investigation of fiscal sustainability. The focus is on a particular country: Poland. The choice of Poland is not random. First, the motivation stems from the fact that fiscal sustainability is a central topic for most of the economies of Eastern Europe. Second, Poland was one of the first countries to start the transition process to a market economy (in 1989), providing a relatively favorable institutional setting within which to study fiscal sustainability (see Green, Holmes and Kowalski, 2001). The emphasis is on the feasibility of a permanent deficit in the long run, i.e. whether a government can continue to operate under its current fiscal policy indefinitely.

The empirical analysis of debt stabilization comprises two steps. First, a Bayesian methodology is applied to conduct inference about the cointegrating relationship between budget revenues and (interest-inclusive) expenditures and to select the cointegrating rank. This task is complicated by the conceptual difficulty linked to the choice of the prior distributions for the parameters relevant to the economic problem under study (Villani, 2005). Second, Bayesian inference is applied to the estimation of the normalized cointegrating vector between budget revenues and expenditures. With a single cointegrating equation, some known results concerning the posterior density of the cointegrating vector may be used (see Bauwens, Lubrano and Richard, 1999). The priors used in the paper lead to straightforward posterior calculations which can be easily performed. Moreover, the posterior analysis leads to a careful assessment of the magnitude of the cointegrating vector. Finally, it is shown to what extent the likelihood of the data is important in revising the available prior information, relying on numerical integration techniques based on deterministic methods. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
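The aggregate-versus-disaggregate comparison of Chapter 3 can be illustrated by simulation. The sketch below generates a bivariate VMA(1) and compares the mean squared error of forecasting the contemporaneous aggregate directly against summing univariate forecasts of its components. For simplicity it fits AR(1) approximations rather than the exact univariate ARMA models implied by aggregation, so it illustrates the comparison rather than the chapter's analytical condition; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_vma1(T, Theta, burn=100):
    """z_t = u_t + Theta u_{t-1}, a bivariate VMA(1)."""
    u = rng.standard_normal((T + burn + 1, 2))
    z = u[1:] + u[:-1] @ Theta.T
    return z[burn:]

def ar1_forecast(x):
    """Fit a mean-zero AR(1) by OLS and return the one-step forecast."""
    phi = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
    return phi * x[-1]

Theta = np.array([[0.6, 0.3], [0.1, 0.4]])
err_agg, err_dis = [], []
for _ in range(500):
    z = simulate_vma1(301, Theta)
    hist, actual = z[:-1], z[-1]
    y = hist.sum(axis=1)                      # contemporaneous aggregate
    f_agg = ar1_forecast(y)                   # forecast the aggregate directly
    f_dis = sum(ar1_forecast(hist[:, j]) for j in range(2))  # sum of forecasts
    err_agg.append((actual.sum() - f_agg) ** 2)
    err_dis.append((actual.sum() - f_dis) ** 2)
print(np.mean(err_agg), np.mean(err_dis))     # compare the two MSEs
```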
20

Mise en oeuvre de techniques de modélisation récentes pour la prévision statistique et économique

Njimi, Hassane 05 September 2008
Implementation of recent modelling techniques for statistical and economic forecasting. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
