About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Estimando o PIB mensal do Rio Grande do Sul : uma abordagem de espaço de estados

Baggio, Giovani January 2017 (has links)
Given the importance of a high-frequency measure of the GDP of Rio Grande do Sul, the state's main indicator of economic activity, this work pursues three objectives. The first is the estimation of a monthly series for the real GDP of Rio Grande do Sul between January 2002 and March 2017, since the official figures are compiled only at a quarterly frequency. To that end, a state-space model is used that allows the monthly GDP to be estimated and nowcast, using coincident series as the source of information for interpolating the quarterly GDP data, in line with Bernanke, Gertler and Watson (1997), Mönch and Uhlig (2005) and Issler and Notini (2016). The second objective is to compare the estimated series with an activity indicator computed by the Central Bank of Brazil for the state, the Regional Economic Activity Index (IBCR-RS), both in methodological terms and in the ability to anticipate quarterly GDP growth before its release (nowcasting). The third objective is to establish the chronology of the expansion and recession cycles of the state economy using the Bry and Boschan (1971) algorithm. After selecting the coincident series and estimating several interpolation models, the model that uses only industrial production as the auxiliary variable was chosen to generate the monthly GDP series, as it presented the best fit. The comparison of the interpolated monthly GDP with the IBCR-RS showed that, beyond the computational advantage of the method proposed here, imposing the discipline that the variations of the estimated monthly GDP must be exactly equal to those of the quarterly GDP makes the short- and long-run dynamics of the two series identical, which is not the case for the IBCR-RS. The chronology of turning points in economic activity pointed to three recessions in the state economy since January 2002: June 2003 to April 2005 (23 months, cumulative drop of 8.79%); April 2011 to April 2012 (13 months, cumulative drop of 9.47%); and June 2013 to November 2016 (42 months, cumulative drop of 10.41%), with the end of the last one identified only when the model's estimates for the second quarter of 2017 are included. Finally, the nowcasting exercise showed that the proposed method outperforms the IBCR-RS in anticipating the GDP result one quarter ahead, based on the mean absolute error (MAE) and mean squared error (MSE), measures commonly used for this purpose.
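As a rough illustration of the interpolation idea in this abstract (distributing quarterly GDP across months using a monthly indicator so the monthly values add back exactly to the quarterly total), here is a minimal proportional-disaggregation sketch. It is not the author's state-space model; the input numbers and names are hypothetical.

```python
import pandas as pd

# Hypothetical inputs: quarterly real GDP and a monthly industrial production
# index covering the same span (three months per quarter).
quarterly_gdp = pd.Series(
    [100.0, 103.5, 101.2, 104.8],
    index=pd.period_range("2016Q1", periods=4, freq="Q"),
)
monthly_ip = pd.Series(
    [98, 100, 102, 101, 104, 105, 99, 101, 103, 102, 106, 107],
    index=pd.period_range("2016-01", periods=12, freq="M"),
    dtype=float,
)

def disaggregate_proportionally(gdp_q, indicator_m):
    """Distribute each quarterly GDP value across its three months in
    proportion to the monthly indicator, so the monthly series sums back
    exactly to the quarterly figure (the 'discipline' the abstract mentions)."""
    monthly = {}
    for quarter, gdp_value in gdp_q.items():
        months = indicator_m[indicator_m.index.asfreq("Q") == quarter]
        shares = months / months.sum()
        for month, share in shares.items():
            monthly[month] = gdp_value * share
    return pd.Series(monthly).sort_index()

gdp_m = disaggregate_proportionally(quarterly_gdp, monthly_ip)
print(gdp_m)
# Check: the monthly values aggregate exactly to the quarterly totals.
print(gdp_m.groupby(gdp_m.index.asfreq("Q")).sum())
```

A state-space approach such as the one in the thesis replaces this mechanical allocation with a model estimated by the Kalman filter, but the aggregation constraint being enforced is the same.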
33

Essays in real-time forecasting

Liebermann, Joëlle 12 September 2012 (has links)
This thesis contains three essays in the field of real-time econometrics, and more particularly forecasting. The issue of using data as available in real time to forecasters, policymakers or financial markets is an important one which has only recently been taken on board in the empirical literature. Data available and used in real time are preliminary and differ from ex-post revised data, and given that data revisions may be quite substantial, the use of latest-available rather than real-time data can substantially affect empirical findings (see, among others, Croushore's (2011) survey). Furthermore, as variables are released on different dates and with varying publication lags, datasets are characterized by the so-called "ragged-edge" structure; in order not to disregard timely information, special econometric frameworks, such as the one developed by Giannone, Reichlin and Small (2008), must be used.

The first chapter, "The impact of macroeconomic news on bond yields: (in)stabilities over time and relative importance", studies the reaction of U.S. Treasury bond yields to real-time market-based news in the daily flow of macroeconomic releases, which provide most of the relevant information on their fundamentals, i.e. the state of the economy and inflation. We find that yields react systematically to a set of news consisting of the soft data, which have very short publication lags, and the most timely hard data, with the employment report being the most important release. However, sub-sample evidence reveals parameter instability in the absolute and relative size of the yields' response to news, as well as in its significance. In particular, the often-cited dominance of the employment report for markets has been evolving over time, as the size of the yields' reaction to it was steadily increasing. Moreover, over the recent crisis period there has been an overall switch in the relative importance of soft and hard data compared to the pre-crisis period, with the latter becoming more important even if less timely, and the scope of hard data to which markets react has increased and is more balanced, being less concentrated on the employment report. Markets have become more reactive to news over the recent crisis period, particularly to hard data. This is a consequence of the fact that in periods of high uncertainty (bad states), markets starve for information and attach a higher value to the marginal information content of these news releases.

The second and third chapters focus on the real-time ability of models to now- and forecast in a data-rich environment. They use an econometric framework that can deal with large panels that have a "ragged-edge" structure, and, to evaluate the models in real time, we constructed a database of vintages for US variables reproducing the exact information that was available to a real-time forecaster.

The second chapter, "Real-time nowcasting of GDP: a factor model versus professional forecasters", performs a fully real-time nowcasting (forecasting) exercise of US real GDP growth using Giannone, Reichlin and Small's (2008), henceforth GRS, dynamic factor model (DFM) framework, which can handle the large unbalanced datasets available in real time. We track the daily evolution of the model's nowcasting performance throughout the current and next quarter. Similarly to GRS's pseudo real-time results, we find that the precision of the nowcasts increases with information releases. Moreover, the Survey of Professional Forecasters does not carry additional information with respect to the model, suggesting that the often-cited superiority of the former, attributable to judgment, is weak over our sample. As one moves forward along the real-time data flow, the continuous updating of the model provides a more precise estimate of current-quarter GDP growth and the Survey of Professional Forecasters becomes stale. These results are robust to the recent recession period.

The last chapter, "Real-time forecasting in a data-rich environment", evaluates the ability of different models to forecast key real and nominal U.S. monthly macroeconomic variables in a data-rich environment and from the perspective of a real-time forecaster. Among the approaches used to forecast in a data-rich environment, we use pooling of bivariate forecasts, which is an indirect way to exploit a large cross-section, and direct pooling of information using a high-dimensional model (DFM and Bayesian VAR). Furthermore, forecast combination schemes are used to overcome the choice of model specification faced by the practitioner (e.g. which criteria to use to select the parametrization of the model), as we seek evidence on the performance of a model that is robust across specifications and combination schemes. Our findings show that predictability of the real variables is confined to the recent recession/crisis period. This is in line with the findings of D'Agostino and Giannone (2012) over an earlier period, namely that gains in relative performance of models using large datasets over univariate models are driven by downturn periods, which are characterized by higher comovements. These results are robust to the combination schemes or models used. A point worth mentioning is that for nowcasting GDP, exploiting cross-sectional information along the real-time data flow also helps over the end of the great moderation period. Since GDP is a quarterly aggregate proxying the state of the economy, monthly variables carry information content for it. But similarly to the findings for the monthly variables, predictability, as measured by the gains relative to the naive random walk model, is higher during crisis/recession periods than during tranquil times. Regarding inflation, results are stable across time, but predictability is mainly found at nowcasting and forecasting one month ahead, with the BVAR standing out at nowcasting. The results show that the forecasting gains at these short horizons stem mainly from exploiting timely information. They also show that direct pooling of information using a high-dimensional model (DFM or BVAR), which takes into account the cross-correlation between the variables and efficiently deals with the "ragged-edge" structure of the dataset, yields more accurate forecasts than the indirect pooling of bivariate forecasts/models. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
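To make the "ragged-edge" structure and the pseudo real-time vintages described above concrete, here is a minimal sketch that truncates each series of a monthly panel according to an assumed publication lag, reproducing what a forecaster would have seen on a given date. The series names, lags and data are hypothetical, not the thesis database.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
months = pd.period_range("2010-01", "2012-06", freq="M")

# Hypothetical monthly panel: a survey ("soft") series with no publication lag,
# industrial production with a one-month lag, employment with a two-month lag.
panel = pd.DataFrame(
    rng.normal(size=(len(months), 3)),
    index=months,
    columns=["survey", "industrial_production", "employment"],
)
publication_lag = {"survey": 0, "industrial_production": 1, "employment": 2}

def pseudo_vintage(panel, lags, as_of):
    """Return the panel as it would have looked on `as_of`: each column is cut
    off `lags[col]` months earlier, producing the ragged edge that nowcasting
    models such as the GRS dynamic factor model are designed to handle."""
    as_of = pd.Period(as_of, freq="M")
    vintage = panel.copy()
    for col, lag in lags.items():
        vintage.loc[vintage.index > as_of - lag, col] = np.nan
    return vintage

print(pseudo_vintage(panel, publication_lag, "2012-06").tail(4))
```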
34

Essays in Firm Dynamics, Ownership and Aggregate Effects / Essais sur la dynamique des entreprises, la propriété et les effets globaux

Luomaranta, Henri 09 September 2019 (has links)
The French abstract was not provided by the author. / Administrative registers maintained by statistical offices on vastly heterogeneous firms have much untapped potential to reveal details on the sources of productivity of firms and economies alike. It has been proposed that firm-level shocks can go a long way in explaining aggregate fluctuations. Based on novel monthly-frequency data, idiosyncratic shocks are able to explain a sizable share of Finnish economic fluctuations, providing support for the granular hypothesis. The global financial crisis of 2007-2008 has challenged the field of economic forecasting, and nowcasting has become an active field. This thesis shows that the information content of firm-level sales and truck traffic can be used for nowcasting GDP figures, using a specific mixture of machine learning algorithms. The agency problem lies at the heart of much of economic theory. Based on a unique dataset linking owners, CEOs and firms, and exploiting plausibly exogenous variation in the separation of ownership and control, agency costs seem to be an important determinant of firm productivity. Furthermore, the effect appears strongest in medium-sized firms. Enterprise group structures might have important implications for the voluminous literature on firm size, as a large share of SME employment can be attributed to affiliates of large business groups. Within-firm variation suggests that enterprise group affiliation has heterogeneous impacts depending on size, having a strong positive impact on the productivity of small firms and a negative impact on their growth. In terms of aggregate job creation, independent small firms are found to have contributed the most. The results in this thesis underline the benefits of paying attention to samples encompassing the total population of firms. Researchers should continue to explore the potential of rich administrative data sources at statistical offices and strive to strengthen ties with data producers.
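The "granular hypothesis" mentioned in this abstract is usually illustrated with a granular residual: a size-weighted sum of idiosyncratic firm shocks that is then related to aggregate growth. The sketch below simulates that logic on fabricated data; it is not the author's estimation on Finnish registers, and the parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_firms, n_months = 500, 120

# Fat-tailed (Pareto) firm sizes, so a handful of firms dominates aggregate
# sales -- the premise behind the granular hypothesis.
size = rng.pareto(a=1.5, size=n_firms) + 1.0
share = size / size.sum()

common = rng.normal(0.0, 0.01, size=n_months)            # aggregate shock
idio = rng.normal(0.0, 0.05, size=(n_months, n_firms))    # firm-level shocks
firm_growth = common[:, None] + idio

# Aggregate growth is the share-weighted sum of firm growth rates.
agg_growth = firm_growth @ share

# Granular residual: share-weighted idiosyncratic shocks (cross-sectional
# demeaning removes the common component in this stylized setup).
granular = (firm_growth - firm_growth.mean(axis=1, keepdims=True)) @ share

corr = np.corrcoef(agg_growth, granular)[0, 1]
print(f"R^2 of aggregate growth on the granular residual: {corr**2:.2f}")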
35

Essays on Empirical Macroeconomics

Caruso, Alberto 25 June 2020 (has links) (PDF)
The thesis contains four essays, covering topics in the fields of real-time macroeconometrics, forecasting and applied macroeconomics. In the first two chapters, I use recent techniques developed in the "nowcasting" literature to analyse and interpret the macroeconomic news flow. I use them either to assess current macroeconomic conditions, showing the importance of foreign indicators when dealing with small open economies, or to link macroeconomic news to asset prices, through a model that helps us interpret macroeconomic data and explains the linkages between macro variables and financial indicators. In the third chapter, I analyse the link between macroeconomic data in real time and the yield curve of interest rates, constructing a forecasting model which takes into account the peculiar characteristics of the macroeconomic data flow. In the last chapter, I present a Bayesian Vector Autoregression model built to analyse the last two crises in the Eurozone (2008-09 and 2011-12), identifying their unique characteristics with respect to historical regularities, an issue of great importance from a policy perspective. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
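A common building block for linking macroeconomic news to asset prices, as described in this abstract, is to measure news as a standardized surprise (actual release minus consensus expectation) and regress same-day changes in a financial indicator on it. The sketch below shows that regression on simulated data; it is not the model used in the thesis, and all figures are fabricated.

```python
import numpy as np

rng = np.random.default_rng(2)
n_releases = 200

# Hypothetical release-day data: consensus forecasts, actual outcomes and the
# same-day change in a 10-year yield (in basis points).
consensus = rng.normal(0.0, 1.0, n_releases)
actual = consensus + rng.normal(0.0, 0.5, n_releases)
surprise = actual - consensus
# Standardize so coefficients are comparable across indicators.
surprise_std = (surprise - surprise.mean()) / surprise.std()

true_beta = 3.0  # assumed response used only to simulate the data
yield_change = true_beta * surprise_std + rng.normal(0.0, 2.0, n_releases)

# OLS of yield changes on the standardized surprise (with an intercept).
X = np.column_stack([np.ones(n_releases), surprise_std])
beta_hat, *_ = np.linalg.lstsq(X, yield_change, rcond=None)
print(f"Estimated response to a one-std-dev surprise: {beta_hat[1]:.2f} bp")
```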
36

Explorations into Machine Learning Techniques for Precipitation Nowcasting

Nagarajan, Aditya 24 March 2017 (has links) (PDF)
Recent advances in cloud-based big-data technologies now make data-driven solutions feasible for an increasing number of scientific computing applications. One such data-driven approach is machine learning, where patterns in large data sets are brought to the surface by finding complex mathematical relationships within the data. Nowcasting, or short-term prediction of rainfall in a given region, is an important problem in meteorology. In this thesis we explore the nowcasting problem through a data-driven approach by formulating it as a machine learning problem. State-of-the-art nowcasting systems today are based on numerical models which describe the physical processes leading to precipitation, or on weather radar extrapolation techniques that predict future radar precipitation maps by advecting a sequence of past maps forward in time. These techniques, while they can perform well over very short prediction horizons (minutes) or very long horizons (hours to days), tend not to perform well over medium horizons (1-2 hours), due to the lack of input data at the necessary spatial and temporal scales for the numerical prediction methods, or due to the inability of radar extrapolation methods to predict storm growth and decay. Given that water must first concentrate in the atmosphere as water vapor before it can fall to the ground as rain, one goal of this thesis is to understand whether water vapor information can improve radar extrapolation techniques by providing the information needed to infer growth and decay. To do so, we use the GPS-Meteorology technique to measure the water vapor in the atmosphere and weather radar reflectivity to measure rainfall. By training a machine learning nowcasting algorithm using both variables and comparing its performance against a nowcasting algorithm trained on reflectivity alone, we draw conclusions as to the predictive power of adding water vapor information. Another goal of this thesis is to compare different machine learning techniques, viz. the random forest ensemble learning technique, which has shown success on a number of other weather prediction problems, and the current state-of-the-art machine learning technique for images and image sequences, the convolutional neural network (CNN). We compare these in terms of problem representation, training complexity, and nowcasting performance. A final goal is to compare the nowcasting performance of our machine learning techniques against published results for current state-of-the-art model-based nowcasting techniques.
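The reflectivity-only versus reflectivity-plus-water-vapor comparison described above can be sketched with a random forest on synthetic features. The snippet below is only an illustration of the experimental design, not the thesis's data or feature construction; the feature dimensions and signal strengths are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000

# Synthetic stand-ins: past radar reflectivity and GPS-derived water vapor,
# with future rainfall depending on both, so vapor carries extra signal.
reflectivity = rng.normal(size=(n, 4))   # last 4 reflectivity observations
water_vapor = rng.normal(size=(n, 2))    # last 2 water-vapor observations
rain_ahead = (reflectivity[:, -1] + 0.5 * water_vapor[:, -1]
              + 0.2 * rng.normal(size=n))

def score(features):
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, rain_ahead, test_size=0.25, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_tr, y_tr)
    return mean_squared_error(y_te, model.predict(X_te))

print(f"MSE, reflectivity only:    {score(reflectivity):.3f}")
print(f"MSE, reflectivity + vapor: {score(np.hstack([reflectivity, water_vapor])):.3f}")
```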
37

Essays on tail risk in macroeconomics and finance: measurement and forecasting

Ricci, Lorenzo 13 February 2017 (has links)
This thesis is composed of three chapters that propose novel approaches to tail risk in financial markets and to forecasting in finance and macroeconomics. The first part of this dissertation focuses on financial market correlations and introduces a simple measure of tail correlation, TailCoR, while the second contribution addresses the issue of identification of non-normal structural shocks in Vector Autoregressions, which is common in finance. The third part belongs to the vast literature on predictions of economic growth; the problem is tackled using a Bayesian Dynamic Factor model to predict Norwegian GDP.

Chapter I: TailCoR. The first chapter introduces a simple measure of tail correlation, TailCoR, which disentangles linear and non-linear correlation. The aim is to capture all features of financial market co-movement when extreme events (i.e. financial crises) occur. Indeed, tail correlations may arise because asset prices are either linearly correlated (i.e. the Pearson correlations are different from zero) or non-linearly correlated, meaning that asset prices are dependent at the tails of the distribution. Since it is based on quantiles, TailCoR has three main advantages: i) it is not based on asymptotic arguments, ii) it is very general, as it applies with no specific distributional assumption, and iii) it is simple to use. We show that TailCoR also disentangles easily between linear and non-linear correlations. The measure has been successfully tested on simulated data. Several extensions useful for practitioners are presented, such as downside and upside tail correlations. In our empirical analysis, we apply this measure to eight major US banks for the period 2003-2012. For comparison purposes, we compute the upper and lower exceedance correlations and the parametric and non-parametric tail dependence coefficients. On the overall sample, results show that both the linear and non-linear contributions are relevant. The results suggest that co-movement increases during the financial crisis because of both the linear and non-linear correlations. Furthermore, the increase of TailCoR at the end of 2012 is mostly driven by the non-linearity, reflecting the risks of tail events and their spillovers associated with the European sovereign debt crisis.

Chapter II: On the identification of non-normal shocks in structural VARs. The second chapter deals with the structural interpretation of the VAR using the statistical properties of the innovation terms. In general, financial markets are characterized by non-normal shocks. Under non-Gaussianity, we introduce a methodology based on the reduction of tail dependency to identify the non-normal structural shocks. Borrowing from statistics, the methodology can be summarized in two main steps: i) decorrelate the estimated residuals, and ii) rotate the uncorrelated residuals to obtain a vector of independent shocks using a tail dependency matrix. We do not label the shocks a priori, but ex post, after estimation, on the basis of economic judgement. Furthermore, we show through a Monte Carlo study how our approach allows us to identify all the shocks. The method tends to perform better when tail events are frequent; the frequency of the series and the degree of non-normality are therefore relevant to achieving accurate identification. Finally, we apply our method to two different VARs, both estimated on US data: i) a monthly trivariate model which studies the effects of oil market shocks, and ii) a VAR that focuses on the interaction between monetary policy and the stock market. In the first case, we validate the results obtained in the economic literature. In the second case, we cannot confirm the validity of an identification scheme based on a combination of short- and long-run restrictions which is used in part of the empirical literature.

Chapter III: Nowcasting Norway. The third chapter consists of predictions of Norwegian Mainland GDP. Policy institutions have to set their policies without knowledge of current economic conditions. We estimate a Bayesian dynamic factor model (BDFM) on a panel of macroeconomic variables (all followed by market operators) from 1990 until 2011. First, the BDFM is an extension of the dynamic factor model (DFM) to the Bayesian framework. The difference is that, compared with a DFM, the BDFM has richer dynamics, introduced in order to accommodate the dynamic heterogeneity of different variables. However, introducing more dynamics requires estimating a large number of parameters, which can easily lead to volatile predictions due to estimation uncertainty. This is why the model is estimated with Bayesian methods, which, by shrinking the factor model toward a simple naive prior model, are able to limit estimation uncertainty. The second aspect is the use of a small dataset. A common feature of the literature on DFMs is the use of large datasets; however, a strand of the literature has shown how, for the purpose of forecasting, DFMs can be estimated on a small number of appropriately selected variables. Finally, through a pseudo real-time exercise, we show that the BDFM performs well both in terms of point forecasts and in terms of density forecasts. Results indicate that our model outperforms standard univariate benchmark models, that it performs as well as the Bloomberg Survey, and that it outperforms the predictions published by the Norges Bank in its monetary policy report. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
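The abstract mentions lower and upper exceedance correlations as comparison benchmarks for TailCoR. The sketch below computes a lower exceedance correlation (the correlation of two return series conditional on both falling below a low quantile) on simulated data; it is a generic quantile-based tail co-movement illustration, not the TailCoR statistic itself.

```python
import numpy as np

def lower_exceedance_corr(x, y, q=0.05):
    """Correlation of two return series conditional on both falling below
    their q-th quantiles -- a simple quantile-based view of tail co-movement
    (not the TailCoR measure itself)."""
    x, y = np.asarray(x), np.asarray(y)
    mask = (x <= np.quantile(x, q)) & (y <= np.quantile(y, q))
    if mask.sum() < 3:
        return np.nan  # too few joint tail events to estimate a correlation
    return np.corrcoef(x[mask], y[mask])[0, 1]

rng = np.random.default_rng(4)
# Two simulated bank return series sharing a fat-tailed common component.
common = rng.standard_t(df=3, size=2500)
bank_a = 0.7 * common + rng.standard_t(df=3, size=2500)
bank_b = 0.7 * common + rng.standard_t(df=3, size=2500)

print(f"Full-sample correlation:         {np.corrcoef(bank_a, bank_b)[0, 1]:.2f}")
print(f"Lower 5% exceedance correlation: {lower_exceedance_corr(bank_a, bank_b):.2f}")
```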
38

Essays on macroeconometrics and short-term forecasting

Cicconi, Claudia 11 September 2012 (has links)
The thesis, entitled "Essays on macroeconometrics and short-term forecasting", is composed of three chapters. The first two chapters are on nowcasting, a topic that has received increasing attention among both practitioners and academics, especially in conjunction with and in the aftermath of the 2008-2009 economic crisis. At the heart of the two chapters is the idea of exploiting the information from data published at a higher frequency to obtain early estimates of the macroeconomic variable of interest. The models used to compute the nowcasts are dynamic models conceived for handling efficiently the characteristics of the data used in a real-time context, such as the fact that, due to the different frequencies and the non-synchronicity of the releases, the time series in general have missing data at the end of the sample. While the first chapter uses a small model, a VAR, for nowcasting Italian GDP, the second makes use of a dynamic factor model, more suitable for handling medium-to-large data sets, for providing early estimates of employment in the euro area. The third chapter develops a topic only marginally touched upon by the second chapter, i.e. the estimation of dynamic factor models on data characterized by block structures.

The first chapter assesses the accuracy of Italian GDP nowcasts based on a small information set consisting of GDP itself, the industrial production index and the Economic Sentiment Indicator. The task is carried out by using real-time vintages of data in an out-of-sample exercise over rolling windows of data. Besides using real-time data, the real-time setting of the exercise is also guaranteed by updating the nowcasts according to the historical release calendar. The model used to compute the nowcasts is a mixed-frequency Vector Autoregressive (VAR) model, cast in state-space form and estimated by maximum likelihood. The results show that the model can provide quite accurate early estimates of Italian GDP growth rates, not only with respect to a naive benchmark but also with respect to a bridge model based on the same information set and a mixed-frequency VAR with only GDP and the industrial production index. The chapter also analyzes in some detail the role of the Economic Sentiment Indicator, and of soft information in general. The comparison of our mixed-frequency VAR with one including only GDP and the industrial production index clearly shows that using soft information helps to obtain more accurate early estimates. Evidence is also found that the advantage from using soft information goes beyond its timeliness.

In the second chapter we focus on nowcasting the quarterly national accounts employment of the euro area, making use of both country-specific and area-wide information. The relevance of anticipating Eurostat estimates of employment rests on the fact that, despite being an important macroeconomic variable, euro area employment is measured at a relatively low frequency (quarterly) and published with a considerable delay (approximately two and a half months). Obtaining an early estimate of this variable is possible because several Member States publish employment data and employment-related statistics in advance of the Eurostat release of euro area employment. Data availability represents, nevertheless, a major limitation, as country-level time series are in general non-homogeneous, have different starting periods and, in some cases, are very short. We construct a data set of monthly and quarterly time series consisting of both aggregate and country-level data on Quarterly National Accounts employment, employment expectations from business surveys, and Labour Force Survey employment and unemployment. In order to perform a real-time out-of-sample exercise simulating the (pseudo) real-time availability of the data, we construct an artificial calendar of data releases based on the effective calendar observed during the first quarter of 2012. The model used to compute the nowcasts is a dynamic factor model allowing for mixed-frequency data, missing data at the beginning of the sample, and the ragged edges typical of non-synchronous data releases. Our results show that using country-specific information as soon as it is available allows us to obtain reasonably accurate estimates of euro area employment about fifteen days before the end of the quarter. We also look at the nowcasts of employment for the four largest Member States. We find that (with the exception of France) augmenting the dynamic factor model with country-specific factors provides better results than those obtained with the model without country-specific factors.

The third chapter of the thesis deals with dynamic factor models on data characterized by local cross-correlation due to the presence of block structures. The latter is modeled by introducing block-specific factors, i.e. factors that are specific to blocks of time series. We propose an algorithm to estimate the model by (quasi) maximum likelihood and use it to run Monte Carlo simulations evaluating the effects of modeling, or not modeling, the block structure on the estimates of the common factors. We find two main results: first, in finite samples, modeling the block structure, besides being interesting per se, can help reduce model mis-specification and yield more accurate estimates of the common factors; second, imposing a wrong block structure, or imposing a block structure when none is present, does not have negative effects on the estimates of the common factors. These two results allow us to conclude that it is always advisable to model the block structure, especially if the characteristics of the data suggest that there is one. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
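The block-structure idea in the third chapter can be pictured with a small simulation: a panel driven by one common factor plus block-specific factors, with the common factor then recovered by principal components. This is only an illustration of the data-generating structure, not the quasi-maximum-likelihood algorithm proposed in the thesis; all dimensions and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_per_block, n_blocks = 200, 20, 2
N = n_per_block * n_blocks

# One common factor plus one factor per block, plus idiosyncratic noise.
common = rng.normal(size=T)
block_factors = rng.normal(size=(T, n_blocks))
loadings_common = rng.normal(size=N)
X = np.outer(common, loadings_common)
for b in range(n_blocks):
    cols = slice(b * n_per_block, (b + 1) * n_per_block)
    X[:, cols] += np.outer(block_factors[:, b], rng.normal(size=n_per_block))
X += 0.5 * rng.normal(size=(T, N))

# Estimate the common factor as the first principal component of the
# standardized panel (a standard benchmark; the thesis instead models the
# block-specific factors explicitly).
Z = (X - X.mean(axis=0)) / X.std(axis=0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
pc1 = Z @ Vt[0]

corr = np.corrcoef(pc1, common)[0, 1]
print(f"Correlation between first PC and true common factor: {abs(corr):.2f}")
```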
