111

Expectativas puras, preferência pela liquidez e modelos univariados "ARIMA" de Box & Jenkins projetam estruturas a termo de taxas de juros com eficiência?

Goulart, Lucio Allan 29 August 2005 (has links)
This study compared three ways of analysing the behavior of yield-to-maturity curves of long-term debt securities traded over a given period. The debts are represented by two public issuers of different origins (Brazil and the United States), with differentiated tenors in a Yield Time Structure (YTS) for each universe, all denominated in the same currency, the United States dollar. The aim was to determine which methodology shows the greater forecasting capacity for the time series of data measured at different points of the YTS. The time-series analyses of the yield behavior of the selected debts were based on the Pure Expectations Theory (PET), the Liquidity Premium Theory (LPT) and univariate Box & Jenkins ARIMA analysis. The results showed the low applicability of PET and LPT for the YTSs of the two universes analyzed. For ARIMA, there is reasonable short-term accuracy for the YTS of the United States at the 2-year measurement "knot". For the data referring to the YTS of Brazil, ARIMA modeling showed low forecasting capacity for all "knots" of the analyzed YTSs (2, 3, 4 and 5 years). For the tests of each proposed methodology, historical series of the yields of the selected bonds were analyzed using Excel spreadsheets and the Minitab software. The historical yield series were obtained from the Bloomberg L.P. electronic data system, with prior authorization. / Este estudo comparou três formas de análises comportamentais de curvas de juros, constituídas pela taxa até o vencimento, ou yield to maturity, de títulos de dívida de longo prazo, distribuídos em um determinado período de tempo. Tais curvas são resultantes de carteiras de títulos de dois perfis de emissores públicos de origem distinta (Brasil e Estados Unidos), com prazos diferenciados em uma Estrutura a Termo de Taxas de Juros (ETTJ), para cada universo. Todos os títulos são denominados na mesma moeda, o Dólar dos Estados Unidos. Foi buscada a definição da metodologia que apresentasse maior capacidade de previsão para as séries temporais de dados que constituíssem uma ETTJ em momentos diferenciados. A análise de séries temporais que melhor retratassem o comportamento da taxa ativa das ETTJ foi feita com base na Teoria das Expectativas Puras (TEP), Teoria de Preferência pela Liquidez (TPL) e Análise Univariada ARIMA de Box & Jenkins. Houve a verificação da baixa aplicabilidade do uso de TEP e TPL para as ETTJ dos dois universos analisados. Para o uso de ARIMA, houve uma aceitação razoável para o curto prazo na ETTJ dos Estados Unidos, no "nó" de medição de 2 anos. Para os dados referentes à ETTJ do Brasil, a modelagem ARIMA mostrou pouca previsibilidade para todos os "nós" das ETTJ analisadas (2, 3, 4 e 5 anos). Para os testes de cada metodologia proposta foram analisadas séries históricas das taxas ativas dos títulos selecionados, através do uso de planilhas Excel™ e por análise através do software Minitab™. Os dados referentes às séries históricas das taxas ativas dos títulos de dívida comentados foram obtidos no sistema Bloomberg L.P. de informações eletrônicas, com a devida autorização.
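As a minimal illustration of the univariate Box & Jenkins approach applied to one "knot" of a yield term structure, the sketch below fits an ARIMA model to a simulated series of 2-year yields and produces short-horizon interval forecasts. The series, the ARIMA(1,1,1) order and the weekly frequency are illustrative assumptions, not the specification used in the thesis.

```python
# Hypothetical sketch: univariate ARIMA forecast for the 2-year "knot" of a
# yield term structure. Data, order and horizon are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Placeholder weekly history of 2-year USD yields (in percent).
yields_2y = pd.Series(
    5.0 + np.cumsum(rng.normal(0.0, 0.05, 200)),
    index=pd.date_range("2001-01-05", periods=200, freq="W-FRI"),
    name="yield_2y",
)

model = ARIMA(yields_2y, order=(1, 1, 1))   # assumed ARIMA(p=1, d=1, q=1)
fitted = model.fit()

forecast = fitted.get_forecast(steps=8)      # eight weeks ahead
print(forecast.predicted_mean)
print(forecast.conf_int(alpha=0.05))         # 95% prediction interval
```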
112

Modelling irregularly spaced financial data: theory and practice of dynamic duration models

Hautsch, Nikolaus. January 1900 (has links)
Thesis (doctoral)--Universität, Konstanz. / Includes bibliographical references (p. [273]-283) and index.
113

The development of a sustainable and cost-effective sales and distribution model for FMCG products, specifically non-alcoholic beverages, in the emerging markets of the greater Durban area.

Brand, Trevor Stanley. January 2005 (has links)
ABI has a sophisticated and effective distribution fleet which delivers canned and bottled non-alcoholic beverages to 12,000 wholesale and retail outlets in the Durban Metropole and to 46,000 outlets nationally. Delivery is normally executed once per week, 48 hours after a separate order is taken by an account manager. In the more rural or "emerging market" areas, traditional retail outlets such as supermarkets and superettes are scarce and reliance is placed on spaza and house shops. Cash flow and storage space are limited. The sales and distribution calls are expensive relative to the size of the order that a spaza would place. Spaza shop owners rely on distributors or collect from wholesalers. These outlets often run out of stock. Sales revenue is thus not maximized. Outlet development is marginal. The writer embarked on a research project to develop a sustainable and cost-effective Sales and Distribution model in order to address these constraints in the Emerging Market territories of ABI Durban. Traditional theory turns to channel distribution as a means of effectively reaching an entire retail market; levels are thus added to the distribution channel. The research, however, showed that service levels are sometimes compromised. The model that was developed returns ABI to DSD (direct service delivery) via specially designed vehicles and combines the functions of "preseller" and "delivery merchandiser" on a dedicated route. Although a marginal increase in cost per case has been experienced, deliveries are made direct to store, at least twice per week. Sales growth on these routes has been in excess of 85%, while the total Umlazi area grows at 13%. Customer service levels, as surveyed, are exceptional. Although the model was specifically designed by ABI Durban for use in Durban, the concept has been adopted as a best practice and is being "rolled out" across the business. By the end of 2005, 10% of ABI's fleet nationally will function as MOTD (Merchandiser Order Taker Driver) routes. Additional vehicles have been ordered for delivery during the period July 2005 to September 2005 in order for this to be achieved. This model has assisted ABI in achieving its goal of maximizing DSD and lifting service levels to its customers (retailers). Revenue has increased significantly along with volume in these areas. Invariably MOTD acts as a significant barrier to competitor entry in those geographic areas where it is utilized. The Merchandiser Order Taker Driver (MOTD) model is successful and has potential for wider use, even in more developed markets. / Thesis (MBA)-University of KwaZulu-Natal, 2005.
114

An application of Box-Jenkins transfer function analysis to consumption-income relationship in South Africa / N.D. Moroke

Moroke, N.D. January 2005 (has links)
Using a simple linear regression model for estimation could give misleading results about the relationship between Yt and Xt. Possible problems include (1) feedback from the output series to the inputs, (2) omitted time-lagged input terms, (3) an autocorrelated disturbance series and (4) common autocorrelation patterns shared by Y and X that can produce spurious correlations. The primary aim of this study was therefore to use Box-Jenkins transfer function analysis to fit a model relating petroleum consumption to disposable income. The final transfer function model, Yt = C + [(1 − ω1B)/(1 − δ1B)] B^5 Xt + (1 − θ1B)at, described the data significantly well. Forecasts generated from this model show that petroleum consumption will reach a record of up to 4.8636 in 2014 if disposable income is augmented. There is 95% confidence that the forecast value of petroleum consumption will lie between 4.5276 and 5.1997 in 2014. / Thesis (M.Com. (Statistics))--North-West University, Mafikeng Campus, 2005.
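Box & Jenkins transfer function identification is not available as a single call in most statistical libraries; a closely related, simplified stand-in is a dynamic regression with a lagged input and ARMA errors, which can be estimated with SARIMAX. The sketch below follows that route on simulated data; the series names, the five-period input lag (mirroring the B^5 term) and the ARMA(1,1) error order are assumptions, not the model estimated in the thesis.

```python
# Simplified stand-in for a transfer function model: a dynamic regression of the
# output on a 5-period lag of the input with ARMA(1,1) errors, on simulated data.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
n = 150
income = pd.Series(100 + np.cumsum(rng.normal(0.5, 1.0, n)), name="income")
# Toy consumption series responding to income with a 5-period delay plus noise.
consumption = 0.04 * income.shift(5) + rng.normal(0.0, 0.5, n)

frame = pd.DataFrame({
    "petrol_consumption": consumption,
    "income_lag5": income.shift(5),          # the B^5 (five-period) input lag
}).dropna().reset_index(drop=True)

# Y_t = c + beta * X_{t-5} + ARMA(1,1) disturbance
model = SARIMAX(
    frame["petrol_consumption"],
    exog=frame["income_lag5"],
    order=(1, 0, 1),
    trend="c",
)
result = model.fit(disp=False)
print(result.summary())

# Five-step-ahead forecast, using the last observed lagged values as a
# placeholder for the future income path.
future_exog = frame["income_lag5"].iloc[-5:]
print(result.get_forecast(steps=5, exog=future_exog).predicted_mean)
```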
115

Análise de previsão de preços de ações de uma carteira otimizada, utilizando análise envoltória de dados, redes neurais artificiais e modelo de Box-Jenkins

Cechin, Rafaela Boeira 16 March 2018 (has links)
No description available.
116

Mercado preditivo: um método de previsão baseado no conhecimento coletivo / Prediction market: a forecasting method based on the collective knowledge

Ivan Roberto Ferraz 08 December 2015 (has links)
Mercado Preditivo (MP) é uma ferramenta que utiliza o mecanismo de preço de mercado para agregar informações dispersas em um grande grupo de pessoas, visando à geração de previsões sobre assuntos de interesse. Trata-se de um método de baixo custo, capaz de gerar previsões de forma contínua e que não exige amostras probabilísticas. Há diversas aplicações para esses mercados, sendo que uma das principais é o prognóstico de resultados eleitorais. Este estudo analisou evidências empíricas da eficácia de um Mercado Preditivo no Brasil, criado para fazer previsões sobre os resultados das eleições gerais do ano de 2014, sobre indicadores econômicos e sobre os resultados de jogos do Campeonato Brasileiro de futebol. A pesquisa teve dois grandes objetivos: i) desenvolver e avaliar o desempenho de um MP no contexto brasileiro, comparando suas previsões em relação a métodos alternativos; ii) explicar o que motiva as pessoas a participarem do MP, especialmente quando há pouca ou nenhuma interação entre os participantes e quando as transações são realizadas com uma moeda virtual. O estudo foi viabilizado por meio da criação da Bolsa de Previsões (BPrev), um MP online que funcionou por 61 dias, entre setembro e novembro de 2014, e que esteve aberto à participação de qualquer usuário da Internet no Brasil. Os 147 participantes registrados na BPrev efetuaram um total de 1.612 transações, sendo 760 no tema eleições, 270 em economia e 582 em futebol. Também foram utilizados dois questionários online para coletar dados demográficos e percepções dos usuários. O primeiro foi aplicado aos potenciais participantes antes do lançamento da BPrev (302 respostas válidas) e o segundo foi aplicado apenas aos usuários registrados, após dois meses de experiência de uso da ferramenta (71 respostas válidas). Com relação ao primeiro objetivo, os resultados sugerem que Mercados Preditivos são viáveis no contexto brasileiro. No tema eleições, o erro absoluto médio das previsões do MP na véspera do pleito foi de 3,33 pontos percentuais, enquanto o das pesquisas de opinião foi de 3,31. Considerando todo o período em que o MP esteve em operação, o desempenho dos dois métodos também foi parecido (erro absoluto médio de 4,20 pontos percentuais para o MP e de 4,09 para as pesquisas). Constatou-se também que os preços dos contratos não são um simples reflexo dos resultados das pesquisas, o que indica que o mercado é capaz de agregar informações de diferentes fontes. Há potencial para o uso de MPs em eleições brasileiras, principalmente como complemento às metodologias de previsão mais tradicionais. Todavia, algumas limitações da ferramenta e possíveis restrições legais podem dificultar sua adoção. No tema economia, os erros foram ligeiramente maiores do que os obtidos com métodos alternativos. Logo, um MP aberto ao público geral, como foi o caso da BPrev, mostrou-se mais indicado para previsões eleitorais do que para previsões econômicas. Já no tema futebol, as previsões do MP foram melhores do que o critério do acaso, mas não houve diferença significante em relação a outro método de previsão baseado na análise estatística de dados históricos. No que diz respeito ao segundo objetivo, a análise da participação no MP aponta que motivações intrínsecas são mais importantes para explicar o uso do que motivações extrínsecas. 
Em ordem decrescente de relevância, os principais fatores que influenciam a adoção inicial da ferramenta são: prazer percebido, aprendizado percebido, utilidade percebida, interesse pelo tema das previsões, facilidade de uso percebida, altruísmo percebido e recompensa percebida. Os indivíduos com melhor desempenho no mercado são mais propensos a continuar participando. Isso sugere que, com o passar do tempo, o nível médio de habilidade dos participantes tende a crescer, tornando as previsões do MP cada vez melhores. Os resultados também indicam que a prática de incluir questões de entretenimento para incentivar a participação em outros temas é pouco eficaz. Diante de todas as conclusões, o MP revelou-se como potencial técnica de previsão em variados campos de investigação. / Prediction Market (PM) is a tool that uses the market price mechanism to aggregate information dispersed across a large group of people in order to generate predictions about matters of interest. It is a low-cost method, able to generate forecasts continuously, and it does not require random samples. There are several applications for these markets, one of the main ones being the forecasting of election outcomes. This study analyzed empirical evidence on the effectiveness of Prediction Markets in Brazil, regarding forecasts of the outcomes of the 2014 general elections, of economic indicators and of the results of Brazilian Championship soccer games. The research had two main purposes: i) to develop and evaluate the performance of PMs in the Brazilian context, comparing their predictions with alternative methods; ii) to explain what motivates people to participate in PMs, especially when there is little or no interaction among participants and when trades are made with a virtual currency (play-money). The study was made feasible through the creation of a prediction exchange named Bolsa de Previsões (BPrev), an online marketplace that operated for 61 days, from September to November 2014, and was open to any Brazilian Internet user. The 147 participants enrolled in BPrev made a total of 1,612 trades: 760 on the election markets, 270 on economics and 582 on soccer. Two online surveys were also used to collect demographic data and users' perceptions. The first was applied to potential participants before the launch of BPrev (302 valid answers) and the second only to registered users after two months of experience with the tool (71 valid answers). Regarding the first purpose, the results suggest that Prediction Markets are feasible in the Brazilian context. On the election markets, the mean absolute error of the PM predictions on the eve of the elections was 3.33 percentage points, against 3.31 for the polls. Considering the whole period in which BPrev was running, the performance of the two methods was also similar (mean absolute error of 4.20 percentage points for the PM and 4.09 for the polls). Contract prices were also found not to be a simple reflection of poll results, indicating that the market is able to aggregate information from different sources. There is scope for the use of PMs in Brazilian elections, mainly as a complement to more traditional forecasting methodologies. Nevertheless, some limitations of the tool and possible legal restrictions may hinder its adoption. On the markets for economic indicators, the errors were slightly higher than those obtained with alternative methods. Therefore, a PM open to the general public, as was the case of BPrev, proved more suitable for electoral predictions than for economic ones. On the soccer markets, PM predictions were better than chance, but there was no significant difference relative to another forecasting method based on statistical analysis of historical data. As far as the second purpose is concerned, the analysis of participation in the PM indicates that intrinsic motivations are more important than extrinsic ones in explaining use. In descending order of relevance, the main factors influencing initial adoption of the tool are: perceived enjoyment, perceived learning, perceived usefulness, interest in the theme of the predictions, perceived ease of use, perceived altruism and perceived reward. Individuals with better performance in the market are more inclined to continue participating. This suggests that, over time, the average skill level of participants tends to increase, making PM forecasts better and better. The results also indicate that the practice of creating entertainment markets to encourage participation in other subjects is largely ineffective. Overall, the PM proved to be a promising forecasting technique across a variety of research fields.
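The accuracy comparison reported above rests on the mean absolute error of market prices and poll shares against realized outcomes; the toy sketch below shows that computation with placeholder numbers, not the BPrev or poll data.

```python
# Toy evaluation in the spirit of the study: mean absolute error (in percentage
# points) of market-implied shares and poll shares against realized outcomes.
# All numbers are illustrative placeholders, not data from BPrev or actual polls.
import numpy as np

outcomes      = np.array([52.0, 48.0, 40.0, 35.0])   # realized vote shares (%)
market_prices = np.array([54.0, 46.0, 43.0, 37.0])   # contract prices read as shares (%)
poll_shares   = np.array([55.0, 45.0, 44.0, 33.0])   # final pre-election poll (%)

mae_market = np.mean(np.abs(market_prices - outcomes))
mae_polls = np.mean(np.abs(poll_shares - outcomes))
print(f"MAE market: {mae_market:.2f} pp | MAE polls: {mae_polls:.2f} pp")
```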
117

Une étude de la relation entre la croissance et la conjoncture en Belgique

Peeterssen, A. van January 1969 (has links)
Doctorat en sciences sociales, politiques et économiques
118

Essays in real-time forecasting

Liebermann, Joëlle 12 September 2012 (has links)
This thesis contains three essays in the field of real-time econometrics, and more particularly forecasting.

The issue of using data as available in real time to forecasters, policymakers or financial markets is an important one which has only recently been taken on board in the empirical literature. Data available and used in real time are preliminary and differ from ex-post revised data, and given that data revisions may be quite substantial, the use of the latest available data instead of real-time data can substantially affect empirical findings (see, among others, Croushore's (2011) survey). Furthermore, as variables are released on different dates and with varying degrees of publication lag, in order not to disregard timely information, datasets are characterized by the so-called "ragged-edge" structure problem. Hence, special econometric frameworks, such as the one developed by Giannone, Reichlin and Small (2008), must be used.

The first Chapter, "The impact of macroeconomic news on bond yields: (in)stabilities over time and relative importance", studies the reaction of U.S. Treasury bond yields to real-time market-based news in the daily flow of macroeconomic releases, which provide most of the relevant information on their fundamentals, i.e. the state of the economy and inflation. We find that yields react systematically to a set of news consisting of the soft data, which have very short publication lags, and the most timely hard data, with the employment report being the most important release. However, sub-sample evidence reveals parameter instability in terms of the absolute and relative size of the yields' response to news, as well as its significance. In particular, the often-cited dominance of the employment report for markets has been evolving over time, as the size of the yields' reaction to it was steadily increasing. Moreover, over the recent crisis period there has been an overall switch in the relative importance of soft and hard data compared to the pre-crisis period, with the latter becoming more important even if less timely, and the scope of hard data to which markets react has increased and is more balanced, being less concentrated on the employment report. Markets have become more reactive to news over the recent crisis period, particularly to hard data. This is a consequence of the fact that in periods of high uncertainty (bad states), markets starve for information and attach a higher value to the marginal information content of these news releases.

The second and third Chapters focus on the real-time ability of models to nowcast and forecast in a data-rich environment. They use an econometric framework that can deal with large panels that have a "ragged-edge" structure and, to evaluate the models in real time, we constructed a database of vintages for US variables reproducing the exact information that was available to a real-time forecaster.

The second Chapter, "Real-time nowcasting of GDP: a factor model versus professional forecasters", performs a fully real-time nowcasting (forecasting) exercise of US real GDP growth using Giannone, Reichlin and Small's (2008), henceforth GRS, dynamic factor model (DFM) framework, which can handle large unbalanced datasets as available in real time. We track the daily evolution of the model's nowcasting performance throughout the current and next quarter. Similarly to GRS's pseudo real-time results, we find that the precision of the nowcasts increases with information releases. Moreover, the Survey of Professional Forecasters does not carry additional information with respect to the model, suggesting that the often-cited superiority of the former, attributable to judgment, is weak over our sample. As one moves forward along the real-time data flow, the continuous updating of the model provides a more precise estimate of current-quarter GDP growth and the Survey of Professional Forecasters becomes stale. These results are robust to the recent recession period.

The last Chapter, "Real-time forecasting in a data-rich environment", evaluates the ability of different models to forecast key real and nominal U.S. monthly macroeconomic variables in a data-rich environment and from the perspective of a real-time forecaster. Among the approaches used to forecast in a data-rich environment, we use the pooling of bi-variate forecasts, which is an indirect way to exploit a large cross-section, and the direct pooling of information using a high-dimensional model (DFM and Bayesian VAR). Furthermore, forecast combination schemes are used to overcome the choice of model specification faced by the practitioner (e.g. which criteria to use to select the parametrization of the model), as we seek evidence regarding the performance of a model that is robust across specifications/combination schemes. Our findings show that predictability of the real variables is confined to the recent recession/crisis period. This is in line with the findings of D'Agostino and Giannone (2012) over an earlier period, namely that gains in relative performance of models using large datasets over univariate models are driven by downturn periods, which are characterized by higher comovements. These results are robust to the combination schemes or models used. A point worth mentioning is that, for nowcasting GDP, exploiting cross-sectional information along the real-time data flow also helps over the end of the great moderation period. Since GDP is a quarterly aggregate proxying the state of the economy, monthly variables carry information content for it. But similarly to the findings for the monthly variables, predictability, as measured by the gains relative to the naive random walk model, is higher during the crisis/recession period than during tranquil times. Regarding inflation, results are stable across time, but predictability is mainly found at nowcasting and forecasting one month ahead, with the BVAR standing out at nowcasting. The results show that the forecasting gains at these short horizons stem mainly from exploiting timely information. The results also show that the direct pooling of information using a high-dimensional model (DFM or BVAR), which takes into account the cross-correlation between the variables and efficiently deals with the "ragged-edge" structure of the dataset, yields more accurate forecasts than the indirect pooling of bi-variate forecasts/models. / Doctorat en Sciences économiques et de gestion
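The nowcasting machinery described above can be sketched, in highly simplified form, with a small dynamic factor model whose Kalman filter tolerates the missing values at the ragged edge of a monthly panel. The sketch below uses a simulated panel and statsmodels' DynamicFactor; it is not the Giannone-Reichlin-Small implementation, and a bridge regression of GDP growth on the extracted factor would be needed to complete the nowcast.

```python
# Simplified sketch of factor extraction from a "ragged-edge" monthly panel.
# Simulated data; not the GRS framework used in the thesis.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.dynamic_factor import DynamicFactor

rng = np.random.default_rng(2)
n, k = 180, 6
common = np.cumsum(rng.normal(0.0, 1.0, n))            # latent common factor
panel = pd.DataFrame(
    {f"indicator_{i}": 0.8 * common + rng.normal(0.0, 1.0, n) for i in range(k)},
    index=pd.period_range("2000-01", periods=n, freq="M"),
)

# Ragged edge: the latest observations of some series are not yet released.
panel.iloc[-1, 2:] = np.nan
panel.iloc[-2, 4:] = np.nan

model = DynamicFactor(panel, k_factors=1, factor_order=1)
res = model.fit(disp=False)

# Smoothed common factor, available through the ragged edge; a bridge equation
# regressing quarterly GDP growth on this factor would deliver the nowcast.
factor = pd.Series(res.factors.smoothed[0], index=panel.index, name="factor")
print(factor.tail())
```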
119

Essays on real-time econometrics and forecasting

Modugno, Michèle 14 September 2011 (has links)
The thesis contains four essays covering topics in the field of real-time econometrics and forecasting.

The first Chapter, entitled "An area wide real time data base for the euro area" and co-authored with Domenico Giannone, Jerome Henry and Magda Lalik, describes how we constructed a real-time database for the euro area covering more than 200 series regularly published in the European Central Bank Monthly Bulletin, as made available ahead of publication to the Governing Council members before their first meeting of the month.

Recent research has emphasised that data revisions can be large for certain indicators and can have a bearing on the decisions made, as well as affect the assessment of their relevance. It is therefore key to be in a position to reconstruct the historical environment of economic decisions at the time they were made by private agents and policy-makers, rather than using the data as they become available some years later. For this purpose, it is necessary to have the information in the form of all the different vintages of data as they were published in real time, the so-called "real-time data" that reflect the economic situation at a given point in time when models are estimated or policy decisions made.

We describe the database in detail and study the properties of the euro area real-time data flow and data revisions, also providing comparisons with the United States and Japan. We finally illustrate how such revisions can contribute to the uncertainty surrounding key macroeconomic ratios and the NAIRU.

The second Chapter, entitled "Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data", is based on joint work with Marta Banbura. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone et al. (2008), we can handle datasets that are not only characterised by a 'ragged edge', but can also include, e.g., mixed-frequency or short-history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach which, apart from its flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters. It has been shown by Doz et al. (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation in the case of a large cross-section, Doz et al. (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm. Our contribution is to modify the EM steps to the case of missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases.

We use this methodology for short-term forecasting and backdating of euro area GDP on the basis of a large panel of monthly and quarterly data.

The third Chapter is entitled "Nowcasting Inflation Using High Frequency Data" and proposes a methodology for nowcasting and forecasting inflation using data with a sampling frequency higher than monthly. In particular, this Chapter focuses on the energy component of inflation, given the availability of data such as the Weekly Oil Bulletin Price Statistics for the euro area, the Weekly Retail Gasoline and Diesel Prices for the US, and the daily spot and futures prices of crude oil.

Although nowcasting inflation is a novel idea, there is a rather long literature focusing on nowcasting GDP. The use of higher-frequency indicators to nowcast/forecast lower-frequency indicators started with monthly data for GDP. GDP is a quarterly variable released with a substantial time delay (e.g. two months after the end of the reference quarter for euro area GDP).

The estimation adopts the methodology described in Chapter 2, modelling the data as a trading-day frequency factor model with missing observations in a state space representation. In contrast to other procedures, the proposed methodology models all the data within a unified single framework that allows one to produce forecasts of all the involved variables from a factor model which, by definition, does not suffer from overparametrisation. Moreover, this offers the possibility to disentangle the model-based "news" from each release and then to assess their impact on the forecast revision. The Chapter provides an illustrative example of this procedure, focusing on a specific month.

In order to assess the importance of using high-frequency data for forecasting inflation, this Chapter compares the forecast performance of univariate models, i.e. the random walk and the autoregressive process, with that of the model using weekly and daily data. The empirical evidence provided shows that exploiting high-frequency data on oil not only allows us to nowcast and forecast the energy component of inflation with a precision twice as good as the proposed benchmarks, but also yields a similar improvement for total inflation.

The fourth Chapter, entitled "The forecasting power of international yield curve linkages" and co-authored with Kleopatra Nikolaou, investigates dependency patterns between the yield curves of Germany and the US by means of an out-of-sample forecast exercise.

The motivation for this Chapter stems from the fact that our knowledge to date of dependency patterns among the yield curves of different countries is limited. Looking at the yield curve literature, the empirical evidence to date informs us of strong contemporaneous interdependencies of yield curves across countries, in line with increased globalization and financial integration. Nevertheless, this yield curve literature does not investigate non-contemporaneous correlations. And yet, clear indication in favour of such dependency patterns is recorded in studies focusing on specific interest rates, which look at the role of certain countries as global players (see Frankel et al. (2004), Chinn and Frankel (2005) and Wang et al. (2007)). Evidence from these studies suggests a leading role for the US. Moreover, dependency patterns recorded in the real business cycles between the US and the euro area (Giannone and Reichlin, 2007) can also rationalize such linkages, to the extent that output affects nominal interest rates.

We propose, estimate and forecast (out-of-sample) a novel dynamic factor model for the yield curve, where dynamic information from foreign yield curves is introduced into domestic yield curve forecasts. This is the International Dependency Model (IDM). We compare the yield curve forecast under the IDM with a purely domestic model and with a model that allows for contemporaneous common global factors. These models serve as useful comparisons. The domestic model bears direct modelling links with IDM, as it can be seen as a nested model of IDM. The global model bears less direct links in terms of modelling but, in line with IDM, it is also an international model that serves to highlight the advantages of introducing international information in yield curve forecasts. However, the global model aims to identify contemporaneous linkages in the yield curves of the two countries, whereas the IDM also allows for detecting dependency patterns.

Our results show that shocks appear to be diffused in a rather asymmetric manner across the two countries. Namely, we find a unidirectional causality effect that runs from the US to Germany. This effect is stronger in the last ten years, where out-of-sample forecasts of Germany using US information are even more accurate than the random walk forecasts. Our statistical results demonstrate a more independent role for the US. / Doctorat en Sciences économiques et de gestion
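The core idea of estimating a factor model under an arbitrary pattern of missing data can be conveyed with a much simpler EM-flavoured iteration: alternate between imputing the missing cells from the current factor estimate and re-estimating the factor by principal components. The sketch below does exactly that on a simulated panel; it is a crude illustration, not the maximum likelihood estimator developed in the chapter.

```python
# Crude EM-flavoured illustration: iterative PCA imputation of a panel with an
# arbitrary pattern of missing data. Simulated data; not the ML estimator of the chapter.
import numpy as np

rng = np.random.default_rng(3)
n, k, r = 200, 10, 1                        # periods, series, number of factors
factor = np.cumsum(rng.normal(0.0, 1.0, n))
loadings = rng.normal(1.0, 0.3, k)
X = np.outer(factor, loadings) + rng.normal(0.0, 1.0, (n, k))

mask = rng.random((n, k)) < 0.15            # ~15% of cells missing, arbitrary pattern
X_obs = np.where(mask, np.nan, X)

# Initialise missing cells with column means, then iterate.
X_fill = np.where(mask, np.nanmean(X_obs, axis=0), X_obs)
for _ in range(100):
    mu = X_fill.mean(axis=0)
    U, S, Vt = np.linalg.svd(X_fill - mu, full_matrices=False)
    F = U[:, :r] * S[:r]                    # factor estimate (PCA step)
    L = Vt[:r, :]                           # loading estimate
    X_hat = F @ L + mu                      # common-component fit
    X_new = np.where(mask, X_hat, X_obs)    # re-impute missing cells only
    if np.max(np.abs(X_new - X_fill)) < 1e-8:
        break
    X_fill = X_new

# The sign of a PCA factor is indeterminate, so report the absolute correlation.
print("abs. correlation with true factor:",
      round(abs(np.corrcoef(F[:, 0], factor)[0, 1]), 3))
```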
120

Structural models for macroeconomics and forecasting

De Antonio Liedo, David 03 May 2010 (has links)
This Thesis is composed of three independent papers that investigate central debates in empirical macroeconomic modeling.

Chapter 1, entitled "A Model for Real-Time Data Assessment with an Application to GDP Growth Rates", provides a model for the data revisions of macroeconomic variables that distinguishes between rational expectation updates and noise corrections. Thus, the model encompasses the two polar views regarding the publication process of statistical agencies: noise versus news. Most of the previous studies that analyze data revisions are based on the classical noise and news regression approach introduced by Mankiw, Runkle and Shapiro (1984). The problem is that the statistical tests available do not formulate both extreme hypotheses as collectively exhaustive, as recognized by Aruoba (2008). That is, it would be possible to reject or accept both of them simultaneously. In turn, the model for the DPP presented here allows for the simultaneous presence of both noise and news. While the "regression approach" followed by Faust et al. (2005), along the lines of Mankiw et al. (1984), identifies noise in the preliminary figures, it is not possible for them to quantify it, as done by our model.

The second and third chapters acknowledge the possibility that macroeconomic data are measured with errors, but the approach followed to model the mismeasurement is extremely stylized and does not capture the complexity of the revision process described in the first chapter.

Chapter 2, entitled "Revisiting the Success of the RBC model", proposes the use of dynamic factor models as an alternative to the VAR-based tools for the empirical validation of dynamic stochastic general equilibrium (DSGE) theories. Along the lines of Giannone et al. (2006), we use the state-space parameterisation of the factor models proposed by Forni et al. (2007) as a competitive benchmark that is able to capture weak statistical restrictions that DSGE models impose on the data. Our empirical illustration compares the out-of-sample forecasting performance of a simple RBC model augmented with a serially correlated noise component against several specifications belonging to classes of dynamic factor and VAR models. Although the performance of the RBC model is comparable to that of the reduced-form models, a formal test of predictive accuracy reveals that the weak restrictions are more useful for forecasting than the strong behavioral assumptions imposed by the microfoundations in the model economy.

The last chapter, "What are Shocks Capturing in DSGE modeling", contributes to current debates on the use and interpretation of larger DSGE models. A recent tendency in academic work and at central banks is to develop and estimate large DSGE models for policy analysis and forecasting. These models typically have many shocks (e.g. Smets and Wouters, 2003 and Adolfson, Laseen, Linde and Villani, 2005). On the other hand, empirical studies point out that a few large shocks are sufficient to capture the covariance structure of macro data (Giannone, Reichlin and Sala, 2005, Uhlig, 2004). In this Chapter, we propose to reconcile both views by considering an alternative DSGE estimation approach which explicitly models the statistical agency along the lines of Sargent (1989). This enables us to distinguish whether the exogenous shocks in DSGE modeling are structural or instead serve the purpose of fitting the data in the presence of misspecification and measurement problems. When applied to the original Smets and Wouters (2007) model, we find that the explanatory power of the structural shocks decreases at high frequencies. This allows us to back out a smoother measure of the natural output gap than that resulting from the original specification. / Doctorat en Sciences économiques et de gestion
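The noise-versus-news distinction drawn in Chapter 1 can be illustrated with the classic Mankiw-Runkle-Shapiro style regressions on simulated revisions: under pure news the revision is unpredictable from the preliminary figure, while under pure noise it is correlated with it. The simulation below is illustrative only and is not the encompassing model proposed in the thesis.

```python
# Illustrative "noise vs. news" regressions on simulated data revisions,
# in the spirit of Mankiw, Runkle and Shapiro (1984). Not the thesis's model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500

# Pure news: the preliminary figure is an efficient forecast of the final one,
# so the revision (final - preliminary) is orthogonal to the preliminary figure.
prelim_news = rng.normal(2.0, 1.0, n)
final_news = prelim_news + rng.normal(0.0, 0.5, n)

# Pure noise: the preliminary figure is the final one plus measurement error,
# so the revision is (negatively) correlated with the preliminary figure.
final_noise = rng.normal(2.0, 1.0, n)
prelim_noise = final_noise + rng.normal(0.0, 0.5, n)

for label, prelim, final in [("news", prelim_news, final_news),
                             ("noise", prelim_noise, final_noise)]:
    revision = final - prelim
    ols = sm.OLS(revision, sm.add_constant(prelim)).fit()
    print(f"{label}: slope of revision on preliminary = {ols.params[1]:+.3f} "
          f"(p-value = {ols.pvalues[1]:.3f})")
```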
