1

Essays on Aggregation and Cointegration of Econometric Models

Silvestrini, Andrea 02 June 2009
This dissertation can be broadly divided into two independent parts. The first three chapters analyse issues related to temporal and contemporaneous aggregation of econometric models. The fourth chapter contains an application of Bayesian techniques to investigate whether the post-transition fiscal policy of Poland is sustainable in the long run and consistent with an intertemporal budget constraint.

Chapter 1 surveys the econometric methodology of temporal aggregation for a wide range of univariate and multivariate time series models. A unified overview of temporal aggregation techniques for this broad class of processes is presented in the first part of the chapter and the main results are summarized. In each case, assuming the underlying process at the disaggregate frequency is known, the aim is to find the appropriate model for the aggregated data. Additional topics concerning temporal aggregation of ARIMA-GARCH models (see Drost and Nijman, 1993) are discussed and several examples are presented. Systematic sampling schemes are also reviewed. Multivariate models, which show interesting features under temporal aggregation (Breitung and Swanson, 2002; Marcellino, 1999; Hafner, 2008), are examined in the second part of the chapter. In particular, the focus is on temporal aggregation of VARMA models and on the related concept of spurious instantaneous causality, which is not a time series property invariant to temporal aggregation. On the other hand, as pointed out by Marcellino (1999), other important time series features, such as cointegration and the presence of unit roots, are invariant to temporal aggregation and are not induced by it. Some empirical applications based on macroeconomic and financial data illustrate the techniques surveyed and the main results.
Chapter 2 is an attempt to monitor fiscal variables in the Euro area, building an early warning signal indicator for assessing the development of public finances in the short-run and exploiting the existence of monthly budgetary statistics from France, taken as "example country". The application is conducted focusing on the cash State deficit, looking at components from the revenue and expenditure sides. For each component, monthly ARIMA models are estimated and then temporally aggregated to the annual frequency, as the policy makers are interested in yearly predictions. The short-run forecasting exercises carried out for years 2002, 2003 and 2004 highlight the fact that the one-step-ahead predictions based on the temporally aggregated models generally outperform those delivered by standard monthly ARIMA modeling, as well as the official forecasts made available by the French government, for each of the eleven components and thus for the whole State deficit. More importantly, by the middle of the year, very accurate predictions for the current year are made available. The proposed method could be extremely useful, providing policy makers with a valuable indicator when assessing the development of public finances in the short-run (one year horizon or even less). Chapter 3 deals with the issue of forecasting contemporaneous time series aggregates. The performance of "aggregate" and "disaggregate" predictors in forecasting contemporaneously aggregated vector ARMA (VARMA) processes is compared. An aggregate predictor is built by forecasting directly the aggregate process, as it results from contemporaneous aggregation of the data generating vector process. A disaggregate predictor is a predictor obtained from aggregation of univariate forecasts for the individual components of the data generating vector process. The econometric framework is broadly based on Lütkepohl (1987). 
The necessary and sufficient condition for the equality of mean squared errors associated with the two competing methods in the bivariate VMA(1) case is provided. It is argued that the condition of equality of predictors as stated in Lütkepohl (1987), although necessary and sufficient for the equality of the predictors, is sufficient (but not necessary) for the equality of mean squared errors. Furthermore, it is shown that the same forecasting accuracy for the two predictors can be achieved under specific assumptions on the parameters of the VMA(1) structure. Finally, an empirical application that involves forecasting the Italian monetary aggregate M1 on the basis of annual time series ranging from 1948 to 1998, prior to the creation of the European Economic and Monetary Union (EMU), is presented to show the relevance of the topic. In the empirical application, the framework is further generalized to deal with heteroskedastic and cross-correlated innovations. Chapter 4 presents a cointegration analysis applied to the empirical investigation of fiscal sustainability. The focus is on a particular country: Poland. The choice of Poland is not random. First, the motivation stems from the fact that fiscal sustainability is a central topic for most of the economies of Eastern Europe. Second, Poland was one of the first countries to start the transition to a market economy (in 1989), providing a relatively favorable institutional setting within which to study fiscal sustainability (see Green, Holmes and Kowalski, 2001). The emphasis is on the feasibility of a permanent deficit in the long run, that is, whether a government can continue to operate under its current fiscal policy indefinitely. The empirical analysis of debt stabilization consists of two steps.
First, a Bayesian methodology is applied to conduct inference about the cointegrating relationship between budget revenues and (interest-inclusive) expenditures and to select the cointegrating rank. This task is complicated by the conceptual difficulty of choosing prior distributions for the parameters relevant to the economic problem under study (Villani, 2005). Second, Bayesian inference is applied to the estimation of the normalized cointegrating vector between budget revenues and expenditures. With a single cointegrating equation, some known results concerning the posterior density of the cointegrating vector may be used (see Bauwens, Lubrano and Richard, 1999). The priors used lead to straightforward posterior calculations. Moreover, the posterior analysis leads to a careful assessment of the magnitude of the cointegrating vector. Finally, it is shown to what extent the likelihood of the data is important in revising the available prior information, relying on numerical integration techniques based on deterministic methods.
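The surveyed effect of temporal aggregation on a model's structure can be illustrated with a minimal simulation (a sketch under assumed parameter values, not drawn from the dissertation): summing a monthly MA(1) flow into annual totals leaves an MA(1) process, but with a much weaker first-order autocorrelation, since only one cross-boundary moving-average term survives aggregation.

```python
import numpy as np

def temporally_aggregate(x, m):
    """Aggregate a flow series by summing non-overlapping blocks of length m."""
    n = (len(x) // m) * m
    return x[:n].reshape(-1, m).sum(axis=1)

def sample_autocorr(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

rng = np.random.default_rng(0)
theta = 0.6                          # assumed MA(1) parameter
e = rng.standard_normal(120_000)
x = e[1:] + theta * e[:-1]           # "monthly" MA(1) flow

y = temporally_aggregate(x, 12)      # "annual" aggregate

print(round(sample_autocorr(x, 1), 2))   # close to theta / (1 + theta**2) ≈ 0.44
print(round(sample_autocorr(y, 1), 2))   # close to zero: aggregation dampens the MA structure
```

The annual series is still MA(1), but its lag-1 autocorrelation collapses toward zero, illustrating why the appropriate model for aggregated data must be derived rather than assumed.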
2

Essays on Macroeconomics in Mixed Frequency Estimations

Kim, Tae Bong January 2011
This dissertation asks whether frequency misspecification of a New Keynesian model results in temporal aggregation bias of the Calvo parameter. First, when a New Keynesian model is estimated at a quarterly frequency while the true data generating process is the same model at a monthly frequency, the Calvo parameter is biased upward and hence implies a longer average price duration. This suggests that estimating a New Keynesian model at a monthly frequency may yield different results. However, because macro time series come as mixed frequency datasets, recorded at both quarterly and monthly intervals, the estimation methodology is not straightforward. To accommodate mixed frequency datasets, this paper proposes a data augmentation method borrowed from the Bayesian estimation literature, extending the MCMC algorithm with "Rao-Blackwellization" of the posterior density. Compared to two alternative estimation methods in the context of Bayesian estimation of DSGE models, this augmentation method delivers lower root mean squared errors for the parameters of interest in the New Keynesian model. Lastly, a medium scale New Keynesian model is brought to the actual data, and the benchmark estimation, i.e. the data augmentation method, finds that the average price duration implied by the monthly model is 5 months while that implied by the quarterly model is 20.7 months.
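The link between the Calvo parameter and average price duration can be made concrete with a small hypothetical calculation (the parameter value below is illustrative, not taken from the dissertation). With reset probability 1 - θ per period, expected duration is 1/(1 - θ) periods, and a frequency-consistent quarterly parameter is the cube of the monthly one, so a quarterly estimate implying 20.7 months instead of roughly 6 is a symptom of aggregation bias:

```python
# Implied average price duration from the Calvo parameter theta:
# prices are re-optimized each period with probability 1 - theta.
def duration(theta):
    # expected number of periods a price remains fixed
    return 1.0 / (1.0 - theta)

theta_m = 0.8                      # hypothetical monthly Calvo parameter
print(duration(theta_m))           # 5.0 months

# Frequency-consistent quarterly parameter: a price survives a quarter
# iff it survives three consecutive months.
theta_q = theta_m ** 3
print(duration(theta_q) * 3)       # ≈ 6.1 months, far below a 20.7-month quarterly estimate
```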
3

The Effect Of Temporal Aggregation On Univariate Time Series Analysis

Sariaslan, Nazli 01 September 2010
Most time series are constructed by some kind of aggregation; temporal aggregation can be defined as aggregation over consecutive time periods. Temporal aggregation plays an important role in time series analysis since the choice of time unit clearly influences the type of model and the forecast results: a totally different time series model may be fitted to the same variable over different time periods. In this thesis, the effect of temporal aggregation on univariate time series models is studied by considering the modeling and forecasting procedure via a simulation study and an application based on a southern oscillation data set. The simulation study shows how the model, the mean square forecast error and the estimated parameters change when temporally aggregated data are used, for different orders of aggregation and sample sizes. Furthermore, the effect of temporal aggregation is also demonstrated on the southern oscillation data set for different orders of aggregation. It is observed that the effect of temporal aggregation should be taken into account in data analysis since temporal aggregation can give rise to misleading results and inferences.
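The claim that a different model fits the same variable over different time units can be sketched in a few lines (an illustrative simulation with assumed parameters, not the thesis's own study): systematically sampling an AR(1) at every m-th observation yields an AR(1) again, but with persistence φ^m, so the estimated parameter changes with the order of aggregation.

```python
import numpy as np

rng = np.random.default_rng(1)
phi, n, m = 0.9, 200_000, 3

# Simulate an AR(1): x_t = phi * x_{t-1} + e_t
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

# Systematic sampling ("stock" aggregation): keep every m-th observation
y = x[::m]

def ar1_hat(z):
    """Estimate the AR(1) coefficient from the lag-1 sample moments."""
    z = z - z.mean()
    return np.dot(z[:-1], z[1:]) / np.dot(z[:-1], z[:-1])

print(round(ar1_hat(x), 2))   # close to phi = 0.9
print(round(ar1_hat(y), 2))   # close to phi**m = 0.729: a different model fits the same variable
```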
4

The use of temporally aggregated data on detecting a structural change of a time series process

Lee, Bu Hyoung January 2016
A time series process can be influenced by an interruptive event which starts at a certain time point, so that a structural break in either mean or variance may occur before and after the event time. However, the traditional statistical tests for two independent samples, such as the t-test for a mean difference and the F-test for a variance difference, cannot be directly used to detect such structural breaks, because two independent random samples almost never exist within a single time series. As alternative methods, the likelihood ratio (LR) test for a mean change and the cumulative sum (CUSUM) of squares test for a variance change have been widely employed in the literature. Another point of interest is temporal aggregation of a time series. Most published time series data are temporally aggregated from the original observations of a small time unit to the cumulative records of a large time unit. However, it is known that temporal aggregation has substantial effects on process properties because it transforms a high frequency nonaggregate process into a low frequency aggregate process. In this research, we investigate the effects of temporal aggregation on the LR test and the CUSUM test through the ARIMA model transformation. First, we derive the proper transformation of ARIMA model orders and parameters when a time series is temporally aggregated. For the LR test for a mean change, the test statistic is associated with model parameters and errors, and both must be adjusted when an AR(p) process transforms under mth order temporal aggregation to an ARMA(P,Q) process. Using this property, we propose a modified LR test for aggregated series. Through Monte Carlo simulations and empirical examples, we show that aggregation shifts the null distribution of the modified LR test statistic to the left; hence, the test power increases as the order of aggregation increases.
For the CUSUM test for a variance change, we show that two aggregation terms appear in the test statistic and have negative effects on test results when an ARIMA(p,d,q) process transforms under mth order temporal aggregation to an ARIMA(P,d,Q) process. We then propose a modified CUSUM test that controls for these terms, which are interpreted as the aggregation effects. Through Monte Carlo simulations and empirical examples, the modified CUSUM test shows better performance and higher power for detecting a variance change in an aggregated time series than the original CUSUM test.
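As a rough illustration of the unmodified CUSUM-of-squares statistic discussed above (a sketch on simulated data with an assumed break, not the authors' modified test), the centered cumulative-sum path dips sharply around a variance break:

```python
import numpy as np

def cusum_of_squares(e):
    """Centered CUSUM-of-squares path: C_k = (sum_{t<=k} e_t^2) / (sum e_t^2) - k/n."""
    s = np.cumsum(e ** 2)
    n = len(e)
    k = np.arange(1, n + 1)
    return s / s[-1] - k / n

rng = np.random.default_rng(2)
# Standard deviation doubles (variance quadruples) halfway through the sample
e = np.concatenate([rng.standard_normal(500),
                    2.0 * rng.standard_normal(500)])

path = cusum_of_squares(e)
print(int(np.argmin(path)))   # the deepest dip falls near the break at t = 500
```

The most extreme point of the path both locates the break and, compared against boundary critical values, decides the test.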
5

Traffic data sampling for air pollution estimation at different urban scales

Schiper, Nicole 09 October 2017
Road traffic is a major source of air pollution in urban areas. Policy makers are pushing for different solutions, including new traffic management strategies that can directly lower pollutant emissions. To assess the performance of such strategies, the calculation of pollutant emissions should account for the spatial and temporal dynamics of traffic. The use of traditional on-road sensors (e.g. inductive loop detectors) for collecting real-time data is necessary but not sufficient because of their high implementation cost. A further disadvantage is that such technologies, for practical reasons, only provide local information. Methods are therefore needed to expand this local information to a larger spatial extent. These methods currently suffer from the following limitations: (i) the relationship between missing data and estimation accuracy cannot be easily determined, and (ii) calculations over a large area are computationally expensive, in particular when congestion and time evolution are considered. Given a dynamic traffic simulation coupled with an emission model, a novel approach to this problem is taken by applying statistical selection techniques that can identify the most relevant locations for estimating the network's vehicle emissions at various spatial and temporal scales.
This work explores the use of both naïve and smart statistical methods as tools for selecting the most relevant traffic and emission information on a network in order to determine totals at any scale. It also highlights some precautions to take when such a coupled traffic-emission method is used to quantify emissions due to traffic. Using the COPERT IV emission functions at various spatio-temporal scales induces a bias that depends on traffic conditions, relative to the original scale (driving cycles). This bias, observed in our simulations, has been quantified as a function of traffic indicators (mean speed). It has also been shown to have a double origin: the convexity of the emission functions and the covariance of the traffic variables.
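The convexity origin of the bias is Jensen's inequality: for a convex emission-speed curve, evaluating the curve at the mean speed understates the mean of the emissions. A minimal sketch with a hypothetical U-shaped curve (the functional form and coefficients below are illustrative, not the actual COPERT IV factors):

```python
import numpy as np

# Hypothetical convex emission factor (g/km) as a function of speed (km/h);
# COPERT-style curves are roughly U-shaped, here modeled as a/v + b + c*v^2.
def emission_factor(v, a=500.0, b=50.0, c=0.01):
    return a / v + b + c * v ** 2

speeds = np.array([20.0, 40.0, 60.0, 80.0])   # speeds observed on four links

at_mean_speed = emission_factor(speeds.mean())   # aggregate speeds first
mean_of_factors = emission_factor(speeds).mean() # compute per-link, then average

# Jensen's inequality: for convex e(v), e(mean v) <= mean e(v),
# so aggregating speeds before applying the curve understates emissions.
print(at_mean_speed < mean_of_factors)   # True
```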
6

Essays on Monetary Policy

Bayar, Omer 01 August 2010
Central banks use a series of relatively small interest rate changes in adjusting their monetary policy stance. This persistence in interest rate changes is well documented by empirical monetary policy reaction functions that feature a large estimated coefficient for the lagged interest rate. The two hypotheses that explain the size of this large estimated coefficient are monetary policy inertia and serially correlated macro shocks. In the first part of my dissertation, I show that the effect of inertia on the Federal Reserve’s monthly funds rate adjustment is only moderate, and smaller than suggested by previous studies. In the second part, I present evidence that the temporal aggregation of interest rates puts an upward bias on the size of the estimated coefficient for the lagged interest rate. The third part of my dissertation is inspired by recent developments in the housing market and the resulting effect on the overall economy. In this third essay, we show that high loan-to-value mortgage borrowing reduces the effectiveness of monetary policy.
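The partial-adjustment specification behind such reaction functions can be sketched with simulated data (a hypothetical illustration with assumed parameters, not the dissertation's estimation): with the target rate treated as observable, OLS of the rate on its own lag and the target recovers the inertia coefficient on the lagged interest rate.

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho = 50_000, 0.8               # rho: assumed degree of policy inertia

# Hypothetical target rate from a Taylor-type rule, taken as observable here
istar = rng.standard_normal(n)

# Partial adjustment: i_t = rho * i_{t-1} + (1 - rho) * istar_t + eps_t
eps = 0.1 * rng.standard_normal(n)
i = np.zeros(n)
for t in range(1, n):
    i[t] = rho * i[t - 1] + (1 - rho) * istar[t] + eps[t]

# OLS of i_t on (i_{t-1}, istar_t): the lagged-rate coefficient estimates rho
X = np.column_stack([i[:-1], istar[1:]])
beta = np.linalg.lstsq(X, i[1:], rcond=None)[0]
print(np.round(beta, 2))   # close to [0.8, 0.2]
```

The dissertation's point is that serially correlated shocks and temporal aggregation can inflate this lagged coefficient beyond the true degree of inertia; the sketch shows only the clean baseline in which OLS recovers it.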
7

ARIMA demand forecasting by aggregation

Rostami Tabar, Bahman 10 December 2013
Demand forecasting performance is subject to the uncertainty underlying the time series an organisation is dealing with. There are many approaches that may be used to reduce demand uncertainty and consequently improve forecasting (and inventory control) performance. An intuitively appealing such approach that is known to be effective is demand aggregation. One approach is to aggregate demand in lower-frequency 'time buckets'; such an approach is often referred to, in the academic literature, as temporal aggregation. Another approach discussed in the literature is cross-sectional aggregation, which involves aggregating different time series to obtain higher level forecasts. This research discusses whether it is appropriate to use the original (non-aggregated) data to generate a forecast, or whether one should rather aggregate the data first and then generate a forecast. This Ph.D. thesis reveals the conditions under which each approach leads to superior performance as judged by forecast accuracy.
Throughout this work, it is assumed that the underlying structure of the demand time series follows an AutoRegressive Integrated Moving Average (ARIMA) process. In the first part of our research, the effect of temporal aggregation on demand forecasting is analysed. It is assumed that the non-aggregate demand follows an autoregressive moving average process of order one, ARMA(1,1). Additionally, the associated special cases of a first-order autoregressive process, AR(1), and a moving average process of order one, MA(1), are also considered, and a Single Exponential Smoothing (SES) procedure is used to forecast demand. These demand processes are often encountered in practice and SES is one of the standard estimators used in industry. Theoretical Mean Squared Error expressions are derived for the aggregate and the non-aggregate demand in order to contrast the relevant forecasting performances. The theoretical analysis is validated by an extensive numerical investigation and experimentation with an empirical dataset. The results indicate that the performance improvements achieved through the aggregation approach are a function of the aggregation level, the smoothing constant used for SES and the process parameters. In the second part of our research, the effect of cross-sectional aggregation on demand forecasting is evaluated. More specifically, the relative effectiveness of the top-down (TD) and bottom-up (BU) approaches is compared for forecasting the aggregate and sub-aggregate demands. It is assumed that the sub-aggregate demand follows either an ARMA(1,1) or a non-stationary Integrated Moving Average process of order one, IMA(1,1), and an SES procedure is used to extrapolate future requirements. Such demand processes are often encountered in practice and, as discussed above, SES is one of the standard estimators used in industry (in addition to being the optimal estimator for an IMA(1,1) process).
Theoretical Mean Squared Errors are derived for the BU and TD approaches in order to contrast the relevant forecasting performances. The theoretical analysis is supported by an extensive numerical investigation at both the aggregate and sub-aggregate levels, in addition to empirically validating our findings on a real dataset from a European superstore. The results show that the superiority of each approach is a function of the series autocorrelation, the cross-correlation between series and the comparison level. Finally, for both parts of the research, valuable insights are offered to practitioners and an agenda for further research in this area is provided.
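The role of the smoothing constant can be sketched numerically (an illustrative simulation with assumed parameters, not the thesis's derivations): for an IMA(1,1) demand process with MA parameter θ, SES with α = 1 - θ is the optimal one-step estimator, and its one-step MSE approaches the innovation variance, while a mis-tuned α does worse.

```python
import numpy as np

def ses_forecasts(x, alpha):
    """One-step-ahead Single Exponential Smoothing forecasts f[t] for x[t]."""
    f = np.empty(len(x))
    f[0] = x[0]
    for t in range(1, len(x)):
        f[t] = alpha * x[t - 1] + (1 - alpha) * f[t - 1]
    return f

def one_step_mse(x, alpha):
    f = ses_forecasts(x, alpha)
    return np.mean((x[1:] - f[1:]) ** 2)

rng = np.random.default_rng(4)
n, theta = 100_000, 0.6
e = rng.standard_normal(n + 1)
# IMA(1,1) demand path: x_t = x_{t-1} + e_t - theta * e_{t-1}
x = np.cumsum(e[1:] - theta * e[:-1]) + 100.0

print(round(one_step_mse(x, 1 - theta), 2))   # near the innovation variance, 1.0
print(round(one_step_mse(x, 0.9), 2))         # larger: a mis-tuned smoothing constant
```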
9

Temporal aggregation and non-linearity of purchasing power parity: tests for Brazil and its trading partners

Simões, Oscar Rodrigues 12 August 2011
This dissertation has three main objectives and is based on real exchange rates between Brazil and 21 commercial counterparties for the period 1957-2010. The first objective is to verify the validity of Purchasing Power Parity through three linear unit root tests (ADF, PP and KPSS). For the majority of the cases, null hypotheses of unit roots could not be rejected or the results were inconclusive for monthly end-of-period data and linear models. For yearly end-of-period data, results were more favourable to stationarity, and the number of inconclusive results was reduced. The second objective is to investigate Taylor's (2001) conclusion that temporal aggregation overestimates the half-lives of real exchange rates. The tests confirm Taylor's findings: half-lives are overestimated by a range of 35% to 56% when the data are aggregated temporally by averaging, compared with half-lives computed from end-of-period data. The third objective is to verify whether real exchange rates exhibit non-linear mean reversion. Considering monthly data, the majority of the tests reject the null hypothesis of a unit root against the alternative of a globally stationary but non-linear process.
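Taylor's (2001) averaging bias can be reproduced in a small simulation (a sketch with an assumed persistence parameter, not the dissertation's data): for an AR(1) real exchange rate, the half-life is ln(0.5)/ln(ρ), and annual averages of the monthly series display more persistence than end-of-period annual sampling, inflating the implied half-life.

```python
import numpy as np

def half_life(rho):
    """Periods for a shock to decay by half under AR(1) persistence rho."""
    return np.log(0.5) / np.log(rho)

rng = np.random.default_rng(5)
n, rho = 240_000, 0.97     # hypothetical monthly persistence of the real rate

# Simulate the monthly AR(1) real exchange rate
e = rng.standard_normal(n)
q = np.empty(n)
q[0] = 0.0
for t in range(1, n):
    q[t] = rho * q[t - 1] + e[t]

def ar1_hat(z):
    z = z - z.mean()
    return np.dot(z[:-1], z[1:]) / np.dot(z[:-1], z[:-1])

q_end = q[11::12]                      # end-of-period annual sampling
q_avg = q.reshape(-1, 12).mean(axis=1) # annual averaging (Taylor's case)

hl_end = half_life(ar1_hat(q_end) ** (1 / 12))   # implied half-life in months
hl_avg = half_life(ar1_hat(q_avg) ** (1 / 12))
print(hl_end < hl_avg)   # True: averaging inflates the estimated half-life
```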
10

Essays on aggregation and cointegration of econometric models

Silvestrini, Andrea 02 June 2009 (has links)
This dissertation can be broadly divided into two independent parts. The first three chapters analyse issues related to temporal and contemporaneous aggregation of econometric models. The fourth chapter contains an application of Bayesian techniques to investigate whether the post transition fiscal policy of Poland is sustainable in the long run and consistent with an intertemporal budget constraint.<p><p><p>Chapter 1 surveys the econometric methodology of temporal aggregation for a wide range of univariate and multivariate time series models. <p><p><p>A unified overview of temporal aggregation techniques for this broad class of processes is presented in the first part of the chapter and the main results are summarized. In each case, assuming to know the underlying process at the disaggregate frequency, the aim is to find the appropriate model for the aggregated data. Additional topics concerning temporal aggregation of ARIMA-GARCH models (see Drost and Nijman, 1993) are discussed and several examples presented. Systematic sampling schemes are also reviewed.<p><p><p>Multivariate models, which show interesting features under temporal aggregation (Breitung and Swanson, 2002, Marcellino, 1999, Hafner, 2008), are examined in the second part of the chapter. In particular, the focus is on temporal aggregation of VARMA models and on the related concept of spurious instantaneous causality, which is not a time series property invariant to temporal aggregation. 
On the other hand, as pointed out by Marcellino (1999), other important time series features, such as cointegration and the presence of unit roots, are invariant to temporal aggregation and are not induced by it. Some empirical applications based on macroeconomic and financial data illustrate all the techniques surveyed and the main results.

Chapter 2 is an attempt to monitor fiscal variables in the Euro area, building an early warning indicator for assessing the development of public finances in the short run and exploiting the existence of monthly budgetary statistics from France, taken as an example country.

The application focuses on the cash State deficit, looking at components from the revenue and expenditure sides. For each component, monthly ARIMA models are estimated and then temporally aggregated to the annual frequency, as policy makers are interested in yearly predictions.

The short-run forecasting exercises carried out for the years 2002, 2003 and 2004 highlight the fact that the one-step-ahead predictions based on the temporally aggregated models generally outperform those delivered by standard monthly ARIMA modeling, as well as the official forecasts made available by the French government, for each of the eleven components and thus for the whole State deficit. More importantly, by the middle of the year, very accurate predictions for the current year become available.

The proposed method could be extremely useful, providing policy makers with a valuable indicator when assessing the development of public finances in the short run (a one-year horizon or even less).

Chapter 3 deals with the issue of forecasting contemporaneous time series aggregates. The performance of "aggregate" and "disaggregate" predictors in forecasting contemporaneously aggregated vector ARMA (VARMA) processes is compared.
An aggregate predictor is built by directly forecasting the aggregate process, as it results from contemporaneous aggregation of the data generating vector process. A disaggregate predictor is obtained by aggregating univariate forecasts for the individual components of the data generating vector process.

The econometric framework is broadly based on Lütkepohl (1987). The necessary and sufficient condition for the equality of mean squared errors associated with the two competing methods in the bivariate VMA(1) case is provided. It is argued that the condition for the equality of the predictors stated in Lütkepohl (1987), although necessary and sufficient for the equality of the predictors, is sufficient (but not necessary) for the equality of mean squared errors.

Furthermore, it is shown that the same forecasting accuracy for the two predictors can be achieved under specific assumptions on the parameters of the VMA(1) structure.

Finally, an empirical application involving the forecasting of the Italian monetary aggregate M1, on the basis of annual time series ranging from 1948 to 1998 (prior to the creation of the European Economic and Monetary Union, EMU), is presented to show the relevance of the topic. In the empirical application, the framework is further generalized to deal with heteroskedastic and cross-correlated innovations.

Chapter 4 deals with a cointegration analysis applied to the empirical investigation of fiscal sustainability. The focus is on a particular country: Poland. The choice of Poland is not random. First, the motivation stems from the fact that fiscal sustainability is a central topic for most of the economies of Eastern Europe. Second, Poland is one of the first countries to have started the transition process to a market economy (in 1989), providing a relatively favorable institutional setting within which to study fiscal sustainability (see Green, Holmes and Kowalski, 2001).
The emphasis is on the feasibility of a permanent deficit in the long run, that is, whether a government can continue to operate under its current fiscal policy indefinitely.

The empirical analysis of debt stabilization consists of two steps.

First, a Bayesian methodology is applied to conduct inference about the cointegrating relationship between budget revenues and expenditures (inclusive of interest) and to select the cointegrating rank. This task is complicated by the conceptual difficulty of choosing prior distributions for the parameters relevant to the economic problem under study (Villani, 2005).

Second, Bayesian inference is applied to the estimation of the normalized cointegrating vector between budget revenues and expenditures. With a single cointegrating equation, some known results concerning the posterior density of the cointegrating vector may be used (see Bauwens, Lubrano and Richard, 1999).

The priors used in the paper lead to straightforward posterior calculations which can be easily performed. Moreover, the posterior analysis leads to a careful assessment of the magnitude of the cointegrating vector. Finally, it is shown to what extent the likelihood of the data is important in revising the available prior information, relying on numerical integration techniques based on deterministic methods. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
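The second step — Bayesian inference on the normalized cointegrating vector — can be sketched in a deliberately simplified form. Assuming a single cointegrating relation (1, -beta) between expenditures and revenues, a conjugate normal prior on beta, and a known error variance (simplifications for the sketch, not the dissertation's exact priors or posterior), the posterior is available in closed form:

```python
# Hypothetical sketch: conjugate posterior for the cointegrating slope beta in
# expend_t = beta * revenue_t + u_t, with u_t ~ N(0, sigma2) and sigma2 known.
import numpy as np

rng = np.random.default_rng(3)
n = 60
revenue = np.cumsum(rng.normal(1.0, 0.5, n))    # simulated I(1) revenues
expend = 1.0 * revenue + rng.normal(0, 0.3, n)  # cointegrated, true beta = 1

sigma2 = 0.3 ** 2        # error variance, treated as known for the sketch
b0, v0 = 1.0, 0.25       # prior beta ~ N(1, 0.25): sustainability corresponds
                         # to a cointegrating slope close to one

# Standard normal-prior / known-variance update.
v_post = 1.0 / (1.0 / v0 + revenue @ revenue / sigma2)
b_post = v_post * (b0 / v0 + revenue @ expend / sigma2)
print(f"posterior: beta ~ N({b_post:.3f}, sd={np.sqrt(v_post):.4f})")
```

Comparing the posterior with the N(1, 0.25) prior shows how strongly the likelihood revises the prior information, which is the kind of assessment the chapter carries out with numerical integration in the full model.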
