  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Essays on aggregation and cointegration of econometric models

Silvestrini, Andrea 02 June 2009
This dissertation can be broadly divided into two independent parts. The first three chapters analyse issues related to temporal and contemporaneous aggregation of econometric models. The fourth chapter applies Bayesian techniques to investigate whether the post-transition fiscal policy of Poland is sustainable in the long run and consistent with an intertemporal budget constraint.

Chapter 1 surveys the econometric methodology of temporal aggregation for a wide range of univariate and multivariate time series models. A unified overview of temporal aggregation techniques for this broad class of processes is presented in the first part of the chapter and the main results are summarized. In each case, assuming the underlying process at the disaggregate frequency is known, the aim is to find the appropriate model for the aggregated data. Additional topics concerning temporal aggregation of ARIMA-GARCH models (see Drost and Nijman, 1993) are discussed and several examples presented. Systematic sampling schemes are also reviewed.

Multivariate models, which show interesting features under temporal aggregation (Breitung and Swanson, 2002; Marcellino, 1999; Hafner, 2008), are examined in the second part of the chapter. In particular, the focus is on temporal aggregation of VARMA models and on the related concept of spurious instantaneous causality, which is not a time series property invariant to temporal aggregation.
On the other hand, as pointed out by Marcellino (1999), other important time series features, such as cointegration and the presence of unit roots, are invariant to temporal aggregation and are not induced by it. Some empirical applications based on macroeconomic and financial data illustrate all the techniques surveyed and the main results.

Chapter 2 is an attempt to monitor fiscal variables in the Euro area by building an early warning indicator for assessing the development of public finances in the short run, exploiting the existence of monthly budgetary statistics from France, taken as an example country. The application focuses on the cash State deficit, looking at components from the revenue and expenditure sides. For each component, monthly ARIMA models are estimated and then temporally aggregated to the annual frequency, since policy makers are interested in yearly predictions. The short-run forecasting exercises carried out for the years 2002, 2003 and 2004 highlight the fact that the one-step-ahead predictions based on the temporally aggregated models generally outperform those delivered by standard monthly ARIMA modeling, as well as the official forecasts made available by the French government, for each of the eleven components and thus for the whole State deficit. More importantly, very accurate predictions for the current year are available by the middle of the year. The proposed method could therefore be extremely useful, providing policy makers with a valuable indicator for assessing the development of public finances in the short run (a one-year horizon or less).

Chapter 3 deals with the issue of forecasting contemporaneous time series aggregates. The performance of "aggregate" and "disaggregate" predictors in forecasting contemporaneously aggregated vector ARMA (VARMA) processes is compared.
An aggregate predictor is built by directly forecasting the aggregate process, as it results from contemporaneous aggregation of the data-generating vector process. A disaggregate predictor is obtained by aggregating univariate forecasts for the individual components of the data-generating vector process.

The econometric framework is broadly based on Lütkepohl (1987). The necessary and sufficient condition for the equality of mean squared errors of the two competing methods in the bivariate VMA(1) case is provided. It is argued that the condition for the equality of predictors stated in Lütkepohl (1987), although necessary and sufficient for the equality of the predictors, is sufficient (but not necessary) for the equality of mean squared errors. Furthermore, it is shown that the same forecasting accuracy for the two predictors can be achieved under specific assumptions on the parameters of the VMA(1) structure.

Finally, an empirical application involving the forecasting of the Italian monetary aggregate M1, on the basis of annual time series from 1948 to 1998, prior to the creation of the European Economic and Monetary Union (EMU), is presented to show the relevance of the topic. In the empirical application, the framework is further generalized to deal with heteroskedastic and cross-correlated innovations.

Chapter 4 applies cointegration analysis to the empirical investigation of fiscal sustainability, focusing on a particular country: Poland. The choice of Poland is not arbitrary. First, fiscal sustainability is a central topic for most of the economies of Eastern Europe. Second, Poland was one of the first countries to begin the transition to a market economy (in 1989), providing a relatively favorable institutional setting within which to study fiscal sustainability (see Green, Holmes and Kowalski, 2001).
The emphasis is on the feasibility of a permanent deficit in the long run, that is, whether a government can continue to operate under its current fiscal policy indefinitely. The empirical analysis of debt stabilization proceeds in two steps.

First, a Bayesian methodology is applied to conduct inference about the cointegrating relationship between budget revenues and expenditures (inclusive of interest) and to select the cointegrating rank. This task is complicated by the conceptual difficulty of choosing prior distributions for the parameters relevant to the economic problem under study (Villani, 2005).

Second, Bayesian inference is applied to the estimation of the normalized cointegrating vector between budget revenues and expenditures. With a single cointegrating equation, some known results concerning the posterior density of the cointegrating vector may be used (see Bauwens, Lubrano and Richard, 1999). The priors used in the paper lead to straightforward posterior calculations. Moreover, the posterior analysis leads to a careful assessment of the magnitude of the cointegrating vector. Finally, it is shown to what extent the likelihood of the data is important in revising the available prior information, relying on numerical integration techniques based on deterministic methods. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
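The aggregate-versus-disaggregate comparison in Chapter 3 can be illustrated with a minimal numerical sketch. Everything below is our own construction (a simple VAR(1) data-generating process and crude univariate AR(1) approximations), not the thesis's VMA(1) framework or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative DGP: a bivariate VAR(1), standing in for the vector
# processes discussed in the abstract (parameters chosen arbitrarily).
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
T = 3000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(size=2)

z = y.sum(axis=1)          # contemporaneous aggregate
split = T // 2

def ar1_coef(x):
    """OLS slope of x_t on x_{t-1}: a crude univariate approximation."""
    return np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

# "Aggregate" predictor: model the aggregate series directly.
phi_z = ar1_coef(z[:split])
# "Disaggregate" predictor: sum univariate forecasts of the components.
phi_1 = ar1_coef(y[:split, 0])
phi_2 = ar1_coef(y[:split, 1])

# One-step-ahead forecasts over the hold-out half, then mean squared errors.
agg_fc = phi_z * z[split - 1:-1]
dis_fc = phi_1 * y[split - 1:-1, 0] + phi_2 * y[split - 1:-1, 1]
actual = z[split:]
mse_agg = np.mean((actual - agg_fc) ** 2)
mse_dis = np.mean((actual - dis_fc) ** 2)
print(f"MSE aggregate: {mse_agg:.3f}  MSE disaggregate: {mse_dis:.3f}")
```

In the thesis's VMA(1) setting the ranking of the two mean squared errors depends on the process parameters; in this toy both predictors are deliberately misspecified, so the comparison is purely illustrative.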
132

Mise en oeuvre de techniques de modélisation récentes pour la prévision statistique et économique / Implementation of recent modelling techniques for statistical and economic forecasting

Njimi, Hassane 05 September 2008
Implementation of recent modelling techniques for statistical and economic forecasting. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
133

Essais en économie dynamique appliquée / Essays in applied dynamic economics

Liégeois, Philippe January 2001
Doctorat en sciences sociales, politiques et économiques / info:eu-repo/semantics/nonPublished
134

Rationalität und Qualität von Wirtschaftsprognosen / Rationality and Quality of Economic Forecasts

Scheier, Johannes 28 April 2015
Economic forecasts are meant to reduce uncertainty about future economic developments and to support the planning processes of governments and firms. Empirical studies, however, generally attest to an unsatisfactory level of quality. In the search for causes, rational expectations formation has emerged as a central requirement on forecasters: obvious, systematic errors, such as regular overestimation, should be detected and eliminated over time. The first study of this dissertation criticizes the prevailing understanding of rationality as too far-reaching, with the result that rationality is denied to forecasters prematurely. Using a new empirical approach, it becomes clear that, viewed from a different angle, the forecasts can well be regarded as rational. The second essay shows that publicly available information in the form of survey results exists which, used appropriately, would improve the quality of business-cycle forecasts; the rationality of these forecasts is therefore strongly limited. The third paper analyses forecast revisions and their causes, showing that there is no relationship between the rationality and the quality of the forecast series examined. The fourth study presents the results of a forecasting simulation game designed to compare the forecasts of amateurs and experts; it turns out that their forecast errors coincide to a considerable degree.
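The systematic errors discussed here, such as persistent overestimation, are commonly examined with a Mincer-Zarnowitz regression. Below is a minimal sketch on fabricated data; none of the dissertation's series are used, and the upward bias of 0.5 is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated example: forecasts that systematically overestimate outcomes
# by 0.5 percentage points, the kind of error a rational forecaster
# should learn to remove over time.
actual = rng.normal(2.0, 1.0, size=200)              # e.g. growth rates, %
forecast = actual + 0.5 + rng.normal(0.0, 0.3, 200)  # biased upward

# Mincer-Zarnowitz regression: actual_t = a + b * forecast_t + u_t.
# Unbiasedness (one common notion of rationality) requires a = 0 and b = 1.
X = np.column_stack([np.ones_like(forecast), forecast])
(a, b), *_ = np.linalg.lstsq(X, actual, rcond=None)
print(f"intercept a = {a:.2f} (H0: 0), slope b = {b:.2f} (H0: 1)")
```

With the built-in bias, the fitted intercept falls below zero and the slope below one, so the joint hypothesis (a, b) = (0, 1) would be rejected on this fabricated sample.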
135

Avaliação em estudos de futuros de setores industriais na perspectiva da teoria ator-rede. Estudo de caso: Observatórios da Indústria do Sistema Federação da Indústria do Estado do Paraná (FIEP) / Evaluation in future studies of industrial sectors from the actor-network theory perspective. Case study: Industry Observatories of the Industry Federation System of Paraná State (FIEP)

Bolzani Júnior, Geraldo Morcel 28 March 2017
Given bibliometric findings that research on evaluation in Futures Studies is scarce, this thesis proposes an evaluation method for futures studies of industrial sectors grounded in elements of Actor-Network Theory. The literature review shows that, in the context of futures studies, the concept of ontology attributed to images and visions of the future makes it possible to propose the concept of translation of the future. Using this concept, and drawing on observation of the prospective projects of the Industry Observatories of the FIEP System, three fundamental steps of the future-translation process were defined: the translation of expected futures, where the production, consumption and discarding of expected futures are analysed; the translation of planned futures, where the futures of the industrial sectors to be articulated are planned; and the translation of evaluated futures, the object of this thesis. To carry out this last step, the methodological step taken was the design and realization of an expert panel focused on evaluating the prospective project for the energy sector of the state of Paraná, drawing on concepts from the sociology of expectations, strategic foresight and Actor-Network Theory. The methodological approaches of Actor-Network Theory, especially the assemblage method, underpinned the analysis of the panel results.

Applying the translation process to evaluation enriches it: evaluation looks not only at results and impacts but also at how the process under evaluation is constituted, and it expands to consider all phases of the prospective method. The general objective of the research was achieved in that it produced the FOURMI method (Fundamentos Ontológicos Utilizados em Redes de Mediadores Industriais, i.e. Ontological Foundations Used in Networks of Industrial Mediators) for translated evaluations in strategic foresight projects. The FOURMI method proposes three stages: external evaluation, to summon the actors that will constitute the actor-network; internal evaluation, to construct the formation and development processes of the actor-network; and relational evaluation, to assess the political consequences of what was accomplished. From the proposed method also emerge the concepts of maximum expression of the actors and of translated evaluation, the evaluation in which the actants in the future-translation process have their expression guaranteed and registered.
136

Predicting Workforce in Healthcare : Using Machine Learning Algorithms, Statistical Methods and Swedish Healthcare Data / Predicering av Arbetskraft inom Sjukvården genom Maskininlärning, Statistiska Metoder och Svenska Sjukvårdsstatistik

Diskay, Gabriel, Joelsson, Carl January 2023
This study examines the use of machine learning models to predict workforce trends in the Swedish healthcare sector. Using a linear regression model, a gradient boosting regressor and an exponential smoothing model, the research aims to provide insights to inform macroeconomic considerations and a deeper understanding of the Beveridge curve in the healthcare sector's context. Despite some challenges with the data, the goal is to improve the accuracy and efficiency of policy-making around the labor market. The results demonstrate the potential of machine learning for forecasting in an economic context, although inherent limitations and ethical considerations are taken into account.
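Two of the three model classes named in this abstract can be sketched with hand-rolled stand-ins on fabricated data. The series, train/test split and smoothing constant below are all invented; the study itself presumably used library implementations, and the gradient boosting model is omitted here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Fabricated monthly headcount series with a linear trend plus noise
# (not Swedish healthcare data; purely illustrative).
t = np.arange(120)
headcount = 1000 + 2.5 * t + rng.normal(0, 15, size=t.size)
train, test = headcount[:108], headcount[108:]

# Model 1: linear regression of headcount on time.
coef = np.polyfit(np.arange(108), train, deg=1)
lin_fc = np.polyval(coef, np.arange(108, 120))

# Model 2: simple exponential smoothing, hand-rolled; a flat forecast
# at the final smoothed level.
alpha, level = 0.3, train[0]
for x in train[1:]:
    level = alpha * x + (1 - alpha) * level
ses_fc = np.full(12, level)

mae_lin = np.mean(np.abs(test - lin_fc))
mae_ses = np.mean(np.abs(test - ses_fc))
print(f"MAE linear trend: {mae_lin:.1f}, MAE exp. smoothing: {mae_ses:.1f}")
```

On a strongly trending series like this one, the flat exponential-smoothing forecast tends to lag behind, which is one reason studies of this kind compare several model families rather than committing to one.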
137

Essays on monetary policy, saving and investment

Lenza, Michèle 04 June 2007
This thesis addresses three relevant macroeconomic issues: (i) why Central Banks behave so cautiously compared to optimal theoretical benchmarks, (ii) whether monetary variables add information about future Euro Area inflation to a large amount of non-monetary variables and (iii) why national saving and investment are so correlated in OECD countries in spite of the high degree of integration of international financial markets.

The process of innovation in the elaboration of economic theory and statistical analysis of the data witnessed in the last thirty years has greatly enriched the toolbox available to macroeconomists. Two aspects of this process are particularly noteworthy for addressing the issues in this thesis: the development of macroeconomic dynamic stochastic general equilibrium models (see Woodford, 1999b for an historical perspective) and of techniques that make it possible to handle large data sets in a parsimonious and flexible manner (see Reichlin, 2002 for an historical perspective).

Dynamic stochastic general equilibrium (DSGE) models provide the appropriate tools to evaluate the macroeconomic consequences of policy changes. These models, by exploiting modern intertemporal general equilibrium theory, aggregate the optimal responses of individuals as consumers and firms in order to identify the aggregate shocks and their propagation mechanisms through the restrictions imposed by optimizing individual behavior. Such a modelling strategy, uncovering economic relationships invariant to a change in policy regimes, provides a framework to analyze the effects of economic policy that is robust to the Lucas critique (see Lucas, 1976).
The early attempts to explain business cycles starting from microeconomic behavior suggested that economic policy should play no role, since business cycles reflected the efficient response of economic agents to exogenous sources of fluctuations (see the seminal paper by Kydland and Prescott, 1982 and, more recently, King and Rebelo, 1999). This view was challenged by several empirical studies showing that the adjustment mechanisms of variables at the heart of macroeconomic propagation mechanisms, like prices and wages, are not well represented by efficient responses of individual agents in frictionless economies (see, for example, Kashyap, 1999; Cecchetti, 1986; Bils and Klenow, 2004 and Dhyne et al. 2004). Hence, macroeconomic models currently incorporate some sources of nominal and real rigidities in the DSGE framework and allow the study of optimal policy reactions to inefficient fluctuations stemming from frictions in macroeconomic propagation mechanisms.

Against this background, the first chapter of this thesis sets up a DSGE model in order to analyze optimal monetary policy in an economy with sectorial heterogeneity in the frequency of price adjustments. Price setters are divided in two groups: those subject to Calvo-type nominal rigidities and those able to change their prices each period. Sectorial heterogeneity in price-setting behavior is a relevant feature of real economies (see, for example, Bils and Klenow, 2004 for the US and Dhyne, 2004 for the Euro Area); neglecting it would lead to an understatement of the heterogeneity in the transmission mechanisms of economy-wide shocks.
In this framework, Aoki (2001) shows that a Central Bank maximizing social welfare should stabilize only inflation in the sector where prices are sticky (hereafter, core inflation). Since complete stabilization is the only true objective of the policymaker in Aoki (2001) and, hence, is not only desirable but also implementable, the equilibrium real interest rate in the economy is equal to the natural interest rate irrespective of the degree of heterogeneity that is assumed. This would lead to the conclusion that stabilizing core inflation rather than overall inflation does not imply any observable difference in the aggressiveness of policy behavior. While maintaining the assumption of sectorial heterogeneity in the frequency of price adjustments, this chapter adds non-negligible transaction frictions to the model economy in Aoki (2001). As a consequence, the social-welfare-maximizing monetary policymaker faces a trade-off among the stabilization of core inflation, the economy-wide output gap and the nominal interest rate. This feature reflects the trade-offs between conflicting objectives faced by actual policymakers. The chapter shows that the existence of this trade-off makes the aggressiveness of the monetary policy reaction dependent on the degree of sectorial heterogeneity in the economy. In particular, in the presence of sectorial heterogeneity in price adjustments, Central Banks are much more likely to behave less aggressively than in an economy where all firms face nominal rigidities.
Hence, the chapter concludes that the excessive caution in the conduct of monetary policy shown by actual Central Banks (see, for example, Rudebusch and Svensson, 1999 and Sack, 2000) might not represent sub-optimal behavior but, on the contrary, might be the optimal monetary policy response in the presence of relevant sectorial dispersion in the frequency of price adjustments.

DSGE models are proving useful also in empirical applications, and efforts have recently been made to incorporate large amounts of information in their framework (see Boivin and Giannoni, 2006). However, the typical DSGE model still relies on a handful of variables. Partly, this reflects the fact that, as the number of variables increases, the specification of a plausible set of theoretical restrictions identifying aggregate shocks and their propagation mechanisms becomes cumbersome. On the other hand, several questions in macroeconomics require the study of a large amount of variables. Among others, two examples related to the second and third chapters of this thesis can help to understand why. First, policymakers analyze a large quantity of information to assess the current and future stance of their economies and, because of model uncertainty, do not rely on a single modelling framework. Consequently, macroeconomic policy can be better understood if the econometrician relies on a large set of variables without imposing too much a priori structure on the relationships governing their evolution (see, for example, Giannone et al. 2004 and Bernanke et al. 2005). Moreover, the integration of goods and financial markets implies that the sources of aggregate shocks are increasingly global, requiring, in turn, the study of their propagation through cross-country links (see, among others, Forni and Reichlin, 2001 and Kose et al. 2003).
A priori, country-specific behavior cannot be ruled out, and many of the homogeneity assumptions typically embodied in open macroeconomic models to keep them tractable are rejected by the data. Summing up, in order to deal with such issues, we need modelling frameworks able to treat a large amount of variables in a flexible manner, i.e. without pre-committing to too many a priori restrictions likely to be rejected by the data. The large extent of comovement among wide cross-sections of economic variables suggests the existence of a few common sources of fluctuations (Forni et al. 2000 and Stock and Watson, 2002) around which individual variables may display specific features: a shock to the world price of oil, for example, hits oil exporters and importers with different sign and intensity, and global technological advances can affect some countries before others (Giannone and Reichlin, 2004). Factor models mainly rely on the identification assumption that the dynamics of each variable can be decomposed into two orthogonal components, common and idiosyncratic, and provide a parsimonious tool for analyzing aggregate shocks and their propagation mechanisms in a large cross-section of variables. In fact, while the idiosyncratic components are poorly cross-sectionally correlated, driven by shocks specific to a variable or a group of variables or by measurement error, the common components capture the bulk of the cross-sectional correlation and are driven by a few shocks that affect, through variable-specific factor loadings, all items in a panel of economic time series. Focusing on the latter components yields useful insights into the identity and propagation mechanisms of the aggregate shocks underlying a large amount of variables.
The second and third chapters of this thesis exploit this idea.

The second chapter deals with the issue of whether monetary variables help to forecast inflation in the Euro Area harmonized index of consumer prices (HICP). Policymakers form their views on the economic outlook by drawing on large amounts of potentially relevant information. Indeed, the monetary policy strategy of the European Central Bank acknowledges that many variables and models can be informative about future Euro Area inflation. A peculiarity of this strategy is that it assigns to monetary information the role of providing insights into the medium-to-long-term evolution of prices, while a wide range of alternative non-monetary variables and models are employed to form a view on the short term and to cross-check the inference based on monetary information. However, neither the academic literature nor the practice of the leading Central Banks other than the ECB assigns such a special role to monetary variables (see Gali et al. 2004 and references therein). Hence, the debate on whether money really provides relevant information for the inflation outlook in the Euro Area is still open. Specifically, this chapter addresses the issue of whether money provides useful information about future inflation beyond what is contained in a large amount of non-monetary variables. It shows that a few aggregates of the data explain a large share of the fluctuations in a large cross-section of Euro Area variables. This makes it possible to postulate a factor structure for the large panel of variables at hand and to aggregate it into a few synthetic indexes that still retain the salient features of the large cross-section. The database is split into two big blocks of variables: non-monetary (baseline) and monetary variables.
Results show that the baseline variables provide a satisfactory predictive performance, improving on the best univariate benchmarks over the period 1997-2005 at all horizons between 6 and 36 months. Remarkably, monetary variables provide a sensible improvement on the performance of the baseline variables at horizons above two years. However, the analysis of the evolution of the forecast errors reveals that most of the gains obtained relative to univariate benchmarks of non-forecastability, with both baseline and monetary variables, are realized in the first part of the prediction sample, up to the end of 2002, which casts doubts on the current forecastability of inflation in the Euro Area.

The third chapter is based on joint work with Domenico Giannone and gives empirical foundation to the general equilibrium explanation of the Feldstein-Horioka puzzle. Feldstein and Horioka (1980) found that domestic saving and investment in OECD countries strongly comove, contrary to the idea that high capital mobility should allow countries to seek the highest returns in global financial markets and, hence, imply a correlation between national saving and investment closer to zero than to one. Moreover, capital mobility has strongly increased since the publication of Feldstein and Horioka's seminal paper, while the association between saving and investment does not seem to have comparably decreased. Through general equilibrium mechanisms, the presence of global shocks might rationalize the correlation between saving and investment. In fact, global shocks, affecting all countries, tend to create imbalances in global capital markets, causing offsetting movements in the global interest rate, and can generate the observed correlation across national saving and investment rates.
However, previous empirical studies (see Ventura, 2003) that have controlled for the effects of global shocks in the context of saving-investment regressions failed to give empirical foundation to this explanation. We show that previous studies have neglected the fact that global shocks may propagate heterogeneously across countries, failing to properly isolate components of saving and investment that are affected by non-pervasive shocks. We propose a novel factor-augmented panel regression methodology that makes it possible to isolate idiosyncratic sources of fluctuations under the assumption of heterogeneous transmission mechanisms of global shocks. Remarkably, applying our methodology, the association between domestic saving and investment decreases considerably over time, consistently with the observed increase in international capital mobility. In particular, over the last 25 years the correlation between saving and investment disappears. / Doctorat en sciences économiques, Orientation économie / info:eu-repo/semantics/nonPublished
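The common-versus-idiosyncratic decomposition this abstract describes can be sketched with static principal components on a fabricated panel. The single persistent factor, the loadings and the noise level below are invented; nothing here reproduces the thesis's estimations:

```python
import numpy as np

rng = np.random.default_rng(3)

# Fabricated panel: N series, each loading on one persistent common
# factor plus idiosyncratic noise.
T, N = 300, 30
factor = np.zeros(T)
for t in range(1, T):                      # AR(1) common factor
    factor[t] = 0.9 * factor[t - 1] + rng.normal()
loadings = rng.uniform(0.5, 1.5, size=N)   # variable-specific loadings
panel = np.outer(factor, loadings) + rng.normal(0.0, 0.3, size=(T, N))

# Static principal components: the first PC estimates the common component.
X = panel - panel.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
common = np.outer(U[:, 0] * S[0], Vt[0])   # rank-1 common component
idiosyncratic = X - common                 # weakly correlated residuals

share = S[0] ** 2 / (S ** 2).sum()
print(f"variance share captured by the first factor: {share:.2f}")
```

Because the panel is built with a single strong common factor, the first principal component absorbs most of the cross-sectional comovement, which is the behavior factor models exploit when isolating aggregate shocks.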
