401

Sequential Machine Learning Approaches for Portfolio Management

Chapados, Nicolas 11 1900 (has links)
No description available.
402

Essays in functional econometrics and financial markets

Tsafack-Teufack, Idriss 07 1900 (has links)
In this thesis, I exploit the functional data analysis framework and develop inference, prediction and forecasting analysis, with applications to topics in financial markets. The thesis is organized into three chapters.

The first chapter is a paper co-authored with Marine Carrasco. We consider a functional linear regression model with a functional predictor variable and a scalar response, and develop a theoretical comparison of the Functional Principal Component Analysis (FPCA) and Functional Partial Least Squares (FPLS) techniques. We derive the convergence rate of the mean squared error (MSE) of estimation for these methods and show that this rate is sharp. We also find that the regularization bias of FPLS is smaller than that of FPCA, while its estimation error tends to be larger. Additionally, we show that FPLS outperforms FPCA in terms of prediction accuracy with fewer components.

The second chapter considers a fully functional autoregressive model (FAR) to forecast the next day's return curve of the S&P 500. In contrast to the standard AR(1) model, where each observation is a scalar, here each daily return curve is a collection of 390 intraday points and is treated as a single observation. I conduct a comparative analysis of four big-data techniques: the functional Tikhonov method (FT), the functional Landweber-Fridman technique (FLF), functional spectral cut-off (FSC), and functional partial least squares (FPLS). The convergence rate, the asymptotic distribution and a test-based strategy to select the lag number are provided. Simulations and real data show that FPLS tends to outperform the other methods in terms of estimation accuracy, while all the considered methods display almost the same predictive performance.

The third chapter proposes to estimate the risk-neutral density (RND) for option pricing with a functional linear model. The benefit of this approach is that it directly exploits the fundamental arbitrage-free equation and avoids any additional density parametrization. The estimation leads to an inverse problem, and the functional Landweber-Fridman (FLF) technique is used to overcome it.
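As a rough illustration of the dimension-reduction idea behind the methods compared in this abstract, the sketch below fits a FAR(1) to daily return curves through an FPCA/spectral cut-off truncation. This is not the author's code: the simulated curves, the 390-point grid and the number of retained components are illustrative assumptions.

```python
# Minimal sketch (assumed data): functional AR(1) estimated in a truncated
# FPCA basis, in the spirit of the functional spectral cut-off approach.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_points = 500, 390                 # one intraday return curve per day
X = rng.standard_normal((n_days, n_points)) * 0.001   # placeholder curves

mean_curve = X.mean(axis=0)
Xc = X - mean_curve

# FPCA via SVD of the centred data; keep k leading components
k = 5
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:k].T                      # (n_days, k) principal scores

# FAR(1) in the truncated basis: regress tomorrow's scores on today's
A, *_ = np.linalg.lstsq(scores[:-1], scores[1:], rcond=None)

# One-day-ahead forecast of the whole next return curve
next_scores = scores[-1] @ A
forecast_curve = mean_curve + next_scores @ Vt[:k]
```

Regularized alternatives such as FPLS, Tikhonov or Landweber-Fridman would replace the hard truncation step with their own smoothing of the covariance operator.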
403

Essays on the prediction of bank failure: empirical validation of non-parametric models and study of the determinants of non-performing loans

Affes, Zeineb 05 March 2019 (has links)
The recent financial crisis that began in the United States in 2007 revealed the weaknesses of the international banking system, resulting in the collapse of many financial institutions in the United States and in an increase in the share of non-performing loans in the balance sheets of European banks. In this framework, we first propose to estimate and test the effectiveness of bank failure forecasting models. The objective is to establish an early warning system (EWS) of banking difficulties based on financial variables following the CAMEL typology (Capital adequacy, Asset quality, Management quality, Earnings ability, Liquidity).

In the first study, we compare the classification and prediction performance of canonical discriminant analysis (CDA) and logistic regression (LR), with and without classification costs, combining these two parametric models with the descriptive model of principal component analysis (PCA). The results show that LR and CDA can predict bank failure accurately. In addition, the PCA results show the importance of asset quality, capital adequacy and liquidity as indicators of a bank's financial condition. We also compare the performance of two non-parametric methods, classification and regression trees (CART) and the more recent multivariate adaptive regression splines (MARS) model, in failure prediction. A hybrid model combining K-means clustering and MARS is also tested. We seek to model the relationship between ten financial variables (CAMEL ratios) and the default of a US bank. The comparative approach highlights the superiority of the hybrid model in terms of classification. In addition, the results show that the capital adequacy variables are the most important for predicting the bankruptcy of a bank.

Finally, we study the determinants of non-performing loans of European Union banks over the period 2012-2015 by estimating a fixed-effects model on panel data. Based on data availability, we select a set of variables describing the macroeconomic situation of each bank's country along with bank-specific variables. The results show that public debt, loan loss provisions, net interest margin and return on equity positively affect non-performing loans, while bank size and capital adequacy (EQTA and CAR) have a negative impact on bad debts.
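A minimal sketch of the kind of cost-sensitive early-warning classifier described above, using scikit-learn. This is not the study's code: the file name, column names and the 10:1 cost ratio are illustrative assumptions.

```python
# Hypothetical CAMEL-ratio early-warning classifier with asymmetric
# misclassification costs (missing a failing bank is costlier).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("us_banks_camel.csv")          # assumed dataset of CAMEL ratios
features = ["capital_adequacy", "asset_quality", "management_quality",
            "earnings_ability", "liquidity"]
X, y = df[features], df["failed"]               # 1 = failed bank, 0 = survivor

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Weight the failure class more heavily to encode classification costs
clf = LogisticRegression(class_weight={0: 1, 1: 10}, max_iter=1000)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```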
404

The distinction between the formation and the performance of the contract: a contribution to the study of the contract over time

Van Haecke-Lepic, Sabine 07 December 2017 (has links)
While studying the distinction between the formation and the performance of a contract, a reflection on another alternative to the model of the instantaneous-performance contract imposed itself: the contract of duration. By enshrining a timeless model of contract, contract law has built itself on a chimera. Indeed, by denying the infiltration of time into the contract, the boundaries between formation and performance have cracked apart. Faced with this situation, expectations that the reform would bring clarification were high. However, the 2016 reform of contract law, although it codified the scattered contributions of case law, did not draw the substantive consequences that would have followed from recognizing that a contract may be incomplete at its formation. By continuing to ignore the impact of duration on contracts performed over time, the reform has worsened the fragmentation of concepts and prevented the general law of contract from evolving. The author therefore sought to embrace the whole of contractual reality by bringing out, alongside the exchange-contract model, the model of the contract of duration. Contract law has indeed been confronted with types of contract that struggle to integrate duration but which, forced to fit a single exchange model, distort its concepts. Proposing a contract of duration would thus reconcile, within contract law, the contractual culture of exchange with the contractual culture of cooperation, which arises in duration. The duration of the contract transforms the contract and emancipates its performance by allowing a certain incompleteness at the time of formation.
405

On Antarctic wind engineering

Sanz Rodrigo, Javier 18 March 2011 (has links)
Antarctic wind engineering deals with the effects of wind on the built environment. The assessment of wind-induced forces, wind resource and wind-driven snowdrifts are the main tasks for a wind engineer participating in the design of an Antarctic building. While conventional wind engineering techniques are generally applicable to the Antarctic environment, some aspects require further analysis because of the special characteristics of the Antarctic wind climate and its boundary layer meteorology.

The first issue in remote places like Antarctica is the lack of site wind measurements and of meteorological information in general. To compensate for this shortage of information, various meteorological databases have been surveyed. Global reanalyses, produced by the European Centre for Medium-Range Weather Forecasts (ECMWF), and RACMO/ANT mesoscale model simulations, produced by the Institute for Marine and Atmospheric Research of Utrecht University (IMAU), have been validated against independent observations from a network of 115 automatic weather stations. The resolution of these models, of some tens of kilometres, is sufficient to characterize the wind climate in areas of smooth topography such as the interior plateaus or the coastal ice shelves. In contrast, in escarpment and coastal areas, where the terrain becomes rugged and katabatic winds are further intensified in confluence zones, the models lack resolution and underestimate the wind velocity.

The Antarctic atmospheric boundary layer (ABL) is characterized by strong katabatic winds generated by surface temperature inversions over sloping terrain. This inversion is persistent in Antarctica owing to almost continuous cooling by longwave radiation, especially during the winter night. As a result, the ABL is stably stratified most of the time; only when the wind speed is high does it become near-neutrally stratified. This thesis also aims to make a critical review of the hypotheses underlying wind engineering models when extreme boundary layer situations are faced. It is shown that the classical approach of assuming a neutral log-law in the surface layer can hold for studies of wind loading under strong winds but can be of limited use when detailed assessments are pursued.

The Antarctic landscape, mostly composed of very long fetches of ice-covered terrain, makes it an optimum natural laboratory for the development of homogeneous boundary layers, which are a basic need for the formulation of ABL theories. Flux-profile measurements made at Halley Research Station on the Brunt Ice Shelf by the British Antarctic Survey (BAS) have been used to analyse boundary layer similarity with a view to formulating a one-dimensional ABL model. A 1D model of the neutral and stable boundary layer with a transport model for blowing snow has been implemented and verified against test cases from the literature. A validation on quasi-stationary homogeneous profiles at different levels of stability confirms that such 1D models can be used to classify wind profiles to be used as boundary conditions for detailed 3D computational wind engineering studies.

A summary of the wind engineering activities carried out during the design of the Antarctic research station is provided as contextual reference and point of departure of this thesis. An elevated building on top of sloping terrain and connected to an under-snow garage constitutes a challenging environment for building design. Building aerodynamics and snowdrift management were tested in the von Karman Institute L1B wind tunnel for different building geometries and ridge integrations. Not only for safety and cost reduction but also for the integration of renewable energies, important benefits in the design of a building can be achieved if wind engineering is considered from the conceptual phase of the integrated building design process.

Doctorate in Engineering Sciences
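The "neutral log-law" assumption discussed in this abstract can be written down in a few lines: u(z) = (u*/k) ln(z/z0). The sketch below evaluates this classical profile; the friction velocity and roughness length are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch of the neutral logarithmic wind profile in the surface layer.
import numpy as np

KAPPA = 0.4          # von Karman constant
u_star = 0.3         # friction velocity [m/s], assumed value
z0 = 1e-4            # aerodynamic roughness length of snow/ice [m], assumed

def log_law_wind_speed(z):
    """Mean wind speed at height z [m] for a neutrally stratified surface layer."""
    return (u_star / KAPPA) * np.log(z / z0)

heights = np.array([2.0, 10.0, 30.0])
print(dict(zip(heights, np.round(log_law_wind_speed(heights), 2))))
```

Stable stratification, which dominates the Antarctic ABL, would add a Monin-Obukhov correction term to this profile.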
406

Better representation learning for TPMS

Raza, Amir 10 1900 (has links)
With the increase in popularity of AI and machine learning, participation numbers have exploded at AI/ML conferences. The large number of submitted papers and the evolving nature of topics pose additional challenges for the peer-review systems that are crucial to our scientific communities. Some conferences have moved towards automating the assignment of reviewers to submissions, TPMS [1] being one such existing system. Currently, TPMS prepares content-based profiles of researchers and submitted papers in order to model the suitability of reviewer-submission pairs. In this work, we explore different approaches to self-supervised fine-tuning of BERT transformers on conference-paper data. We demonstrate some new approaches to augmentation views for self-supervision in natural language processing, which until now has focused mostly on problems in computer vision. We then use these individual paper representations to build an expertise model that learns to combine the representations of a reviewer's different published works and to predict their relevance for reviewing a submitted paper. Finally, we show that better individual paper representations and better expertise modeling lead to better performance on the reviewer-suitability prediction task.
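For orientation, the sketch below scores reviewer-submission affinity by mean-pooling BERT embeddings of a reviewer's papers and comparing them with the submission's embedding. It is not the TPMS implementation or the thesis code; the model choice, pooling scheme and example texts are assumptions.

```python
# Hypothetical reviewer-submission affinity score from BERT paper embeddings.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pooled token embeddings of one paper abstract, shape (1, 768)."""
    batch = tok(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**batch).last_hidden_state        # (1, seq_len, 768)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)

reviewer_papers = ["abstract of paper one ...", "abstract of paper two ..."]
submission = "abstract of the submitted paper ..."

# Simple expertise model: average the reviewer's paper embeddings
reviewer_profile = torch.cat([embed(p) for p in reviewer_papers]).mean(0, keepdim=True)
affinity = torch.cosine_similarity(reviewer_profile, embed(submission))
print(float(affinity))
```

The thesis replaces this plain averaging with a learned expertise model and fine-tunes the encoder with self-supervised augmentation views.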
407

Personality, reflective functioning and marital adjustment in a context of childhood trauma: associations with parenting practices and child attachment disorganization

Perron-Bouchard, Marie-Ève 24 April 2018 (has links)
This master's thesis examined the respective contributions of the severity of childhood trauma, personality traits, reflective functioning and marital adjustment of survivors to the sensitivity and negativity (i.e., hostility and intrusiveness) of their parenting practices and to the prediction of their child's attachment disorganization at 17 months. To this end, 100 mothers of low socio-economic status who had experienced maltreatment in childhood were recruited at the Rosemont-Maisonneuve hospital in Montreal using the Parental Bonding Instrument. Correlational analyses conducted on all variables revealed significant associations between maternal sensitivity and both marital adjustment and general reflective functioning (FR-G), as well as between negative maternal behaviours and the severity of childhood trauma, FR-G and child attachment disorganization. In addition, the variables correlated with attachment disorganization at 17 months were entered into a hierarchical logistic regression model to test their contribution to its prediction. The results indicate that a significant model predicting child attachment organization includes trauma-specific reflective functioning (FR-T) and maternal insensitivity and explains about 43% of its variance. According to this model, a 1-point increase in FR-T decreases the risk of disorganized attachment by a factor of 1.7, while a 1-point increase in the maternal insensitivity index increases that same risk by a factor of 1.4. These results help refine our understanding of sensitive and negative maternal practices in the context of a history of childhood trauma, as well as of child attachment disorganization, and they are discussed in light of the existing literature.
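To make the reported effect sizes concrete, the sketch below shows how odds ratios of this kind would be obtained from a logistic regression. It is not the study's analysis; the data file and column names are assumptions.

```python
# Hypothetical logistic regression of attachment disorganization on FR-T and
# maternal insensitivity, reported as odds ratios.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("dyads.csv")                      # assumed dataset of 100 dyads
X = sm.add_constant(df[["fr_t", "insensitivity"]])
y = df["disorganized"]                             # 1 = disorganized attachment

model = sm.Logit(y, X).fit()
odds_ratios = np.exp(model.params)
# An odds ratio of about 0.59 for fr_t would correspond to the reported
# 1.7-fold decrease in risk per additional FR-T point (1 / 1.7 ≈ 0.59).
print(odds_ratios)
```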
408

Essays on monetary policy, saving and investment

Lenza, Michèle 04 June 2007 (has links)
This thesis addresses three relevant macroeconomic issues: (i) why Central Banks behave so cautiously compared to optimal theoretical benchmarks, (ii) whether monetary variables add information about future Euro Area inflation to a large amount of non-monetary variables, and (iii) why national saving and investment are so correlated in OECD countries in spite of the high degree of integration of international financial markets.

The process of innovation in the elaboration of economic theory and in the statistical analysis of data witnessed over the last thirty years has greatly enriched the toolbox available to macroeconomists. Two aspects of this process are particularly noteworthy for addressing the issues in this thesis: the development of macroeconomic dynamic stochastic general equilibrium models (see Woodford, 1999b for a historical perspective) and of techniques that make it possible to handle large data sets in a parsimonious and flexible manner (see Reichlin, 2002 for a historical perspective).

Dynamic stochastic general equilibrium (DSGE) models provide the appropriate tools to evaluate the macroeconomic consequences of policy changes. These models, by exploiting modern intertemporal general equilibrium theory, aggregate the optimal responses of individuals as consumers and firms in order to identify aggregate shocks and their propagation mechanisms through the restrictions imposed by optimizing individual behavior. Such a modelling strategy, by uncovering economic relationships that are invariant to a change in policy regime, provides a framework for analyzing the effects of economic policy that is robust to the Lucas critique (see Lucas, 1976). Early attempts to explain business cycles starting from microeconomic behavior suggested that economic policy should play no role, since business cycles reflected the efficient response of economic agents to exogenous sources of fluctuations (see the seminal paper by Kydland and Prescott, 1982 and, more recently, King and Rebelo, 1999). This view was challenged by several empirical studies showing that the adjustment mechanisms of variables at the heart of macroeconomic propagation mechanisms, such as prices and wages, are not well represented by the efficient responses of individual agents in frictionless economies (see, for example, Kashyap, 1999; Cecchetti, 1986; Bils and Klenow, 2004 and Dhyne et al. 2004). Hence, macroeconomic models currently incorporate some sources of nominal and real rigidities in the DSGE framework and allow the study of optimal policy reactions to inefficient fluctuations stemming from frictions in macroeconomic propagation mechanisms.

Against this background, the first chapter of this thesis sets up a DSGE model in order to analyze optimal monetary policy in an economy with sectoral heterogeneity in the frequency of price adjustments. Price setters are divided into two groups: those subject to Calvo-type nominal rigidities and those able to change their prices in each period. Sectoral heterogeneity in price-setting behavior is a relevant feature of real economies (see, for example, Bils and Klenow, 2004 for the US and Dhyne, 2004 for the Euro Area). Hence, neglecting it would lead to an understatement of the heterogeneity in the transmission mechanisms of economy-wide shocks. In this framework, Aoki (2001) shows that a Central Bank maximizing social welfare should stabilize only inflation in the sector where prices are sticky (hereafter, core inflation). Since complete stabilization is the only true objective of the policymaker in Aoki (2001) and, hence, is not only desirable but also implementable, the equilibrium real interest rate in the economy is equal to the natural interest rate irrespective of the degree of heterogeneity that is assumed. This would lead one to conclude that stabilizing core inflation rather than overall inflation does not imply any observable difference in the aggressiveness of policy behavior. While maintaining the assumption of sectoral heterogeneity in the frequency of price adjustments, this chapter adds non-negligible transaction frictions to the model economy of Aoki (2001). As a consequence, the social-welfare-maximizing monetary policymaker faces a trade-off among the stabilization of core inflation, the economy-wide output gap and the nominal interest rate. This feature reflects the trade-offs between conflicting objectives faced by actual policymakers. The chapter shows that the existence of this trade-off makes the aggressiveness of the monetary policy reaction dependent on the degree of sectoral heterogeneity in the economy. In particular, in the presence of sectoral heterogeneity in price adjustments, Central Banks are much more likely to behave less aggressively than in an economy where all firms face nominal rigidities. Hence, the chapter concludes that the excessive caution in the conduct of monetary policy shown by actual Central Banks (see, for example, Rudebusch and Svensson, 1999 and Sack, 2000) might not represent sub-optimal behavior but, on the contrary, might be the optimal monetary policy response in the presence of relevant sectoral dispersion in the frequency of price adjustments.

DSGE models are also proving useful in empirical applications, and efforts have recently been made to incorporate large amounts of information into their framework (see Boivin and Giannoni, 2006). However, the typical DSGE model still relies on a handful of variables. Partly, this reflects the fact that, as the number of variables increases, the specification of a plausible set of theoretical restrictions identifying aggregate shocks and their propagation mechanisms becomes cumbersome. On the other hand, several questions in macroeconomics require the study of a large number of variables. Among others, two examples related to the second and third chapters of this thesis help to understand why. First, policymakers analyze a large quantity of information to assess the current and future stance of their economies and, because of model uncertainty, do not rely on a single modelling framework. Consequently, macroeconomic policy can be better understood if the econometrician relies on a large set of variables without imposing too much a priori structure on the relationships governing their evolution (see, for example, Giannone et al. 2004 and Bernanke et al. 2005). Moreover, the process of integration of goods and financial markets implies that the sources of aggregate shocks are increasingly global, requiring, in turn, the study of their propagation through cross-country links (see, among others, Forni and Reichlin, 2001 and Kose et al. 2003). A priori, country-specific behavior cannot be ruled out, and many of the homogeneity assumptions that are typically embodied in open-economy macroeconomic models to keep them tractable are rejected by the data. Summing up, in order to deal with such issues, we need modelling frameworks able to treat a large number of variables in a flexible manner, i.e. without pre-committing to too many a priori restrictions that are likely to be rejected by the data. The large extent of comovement among wide cross-sections of economic variables suggests the existence of a few common sources of fluctuations (Forni et al. 2000 and Stock and Watson, 2002) around which individual variables may display specific features: a shock to the world price of oil, for example, hits oil exporters and importers with different sign and intensity, and global technological advances can affect some countries before others (Giannone and Reichlin, 2004). Factor models rely mainly on the identification assumption that the dynamics of each variable can be decomposed into two orthogonal components - common and idiosyncratic - and provide a parsimonious tool for analyzing aggregate shocks and their propagation mechanisms in a large cross-section of variables. In fact, while the idiosyncratic components are poorly cross-sectionally correlated, being driven by shocks specific to a variable or a group of variables or by measurement error, the common components capture the bulk of the cross-sectional correlation and are driven by a few shocks that affect, through variable-specific factor loadings, all the items in a panel of economic time series. Focusing on the latter components yields useful insights into the identity and propagation mechanisms of the aggregate shocks underlying a large number of variables. The second and third chapters of this thesis exploit this idea.

The second chapter deals with the question of whether monetary variables help to forecast inflation in the Euro Area harmonized index of consumer prices (HICP). Policymakers form their views on the economic outlook by drawing on large amounts of potentially relevant information. Indeed, the monetary policy strategy of the European Central Bank acknowledges that many variables and models can be informative about future Euro Area inflation. A peculiarity of this strategy is that it assigns to monetary information the role of providing insights into the medium- to long-term evolution of prices, while a wide range of alternative non-monetary variables and models are employed to form a view on the short term and to cross-check the inference based on monetary information. However, both the academic literature and the practice of the leading Central Banks other than the ECB do not assign such a special role to monetary variables (see Gali et al. 2004 and references therein). Hence, the debate on whether money really provides relevant information for the inflation outlook in the Euro Area is still open. Specifically, this chapter addresses the question of whether money provides useful information about future inflation beyond what is contained in a large amount of non-monetary variables. It shows that a few aggregates of the data explain a large amount of the fluctuations in a large cross-section of Euro Area variables. This makes it possible to postulate a factor structure for the large panel of variables at hand and to aggregate it into a few synthetic indexes that still retain the salient features of the large cross-section. The database is split into two big blocks of variables: non-monetary (baseline) and monetary variables. The results show that baseline variables provide a satisfactory predictive performance, improving on the best univariate benchmarks over the period 1997-2005 at all horizons between 6 and 36 months. Remarkably, monetary variables provide an appreciable improvement on the performance of baseline variables at horizons above two years. However, the analysis of the evolution of the forecast errors reveals that most of the gains obtained relative to univariate non-forecastability benchmarks with baseline and monetary variables are realized in the first part of the prediction sample, up to the end of 2002, which casts doubt on the current forecastability of inflation in the Euro Area.

The third chapter is based on joint work with Domenico Giannone and gives an empirical foundation to the general equilibrium explanation of the Feldstein-Horioka puzzle. Feldstein and Horioka (1980) found that domestic saving and investment in OECD countries strongly comove, contrary to the idea that high capital mobility should allow countries to seek the highest returns in global financial markets and, hence, imply a correlation between national saving and investment closer to zero than to one. Moreover, capital mobility has strongly increased since the publication of Feldstein and Horioka's seminal paper, while the association between saving and investment does not seem to have decreased comparably. Through general equilibrium mechanisms, the presence of global shocks might rationalize the correlation between saving and investment. In fact, global shocks, affecting all countries, tend to create imbalances in global capital markets, causing offsetting movements in the global interest rate, and can generate the observed correlation across national saving and investment rates. However, previous empirical studies (see Ventura, 2003) that have controlled for the effects of global shocks in the context of saving-investment regressions failed to give an empirical foundation to this explanation. We show that previous studies have neglected the fact that global shocks may propagate heterogeneously across countries, thereby failing to properly isolate the components of saving and investment that are affected by non-pervasive shocks. We propose a novel factor-augmented panel regression methodology that makes it possible to isolate idiosyncratic sources of fluctuations under the assumption of heterogeneous transmission mechanisms of global shocks. Remarkably, when our methodology is applied, the association between domestic saving and investment decreases considerably over time, consistent with the observed increase in international capital mobility. In particular, over the last 25 years the correlation between saving and investment disappears.

Doctorate in Economic Sciences, Economics orientation
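The sketch below illustrates the flavour of a factor-augmented saving-investment regression: common factors extracted by PCA from the country panel proxy for global shocks and enter each country's regression with their own loadings, so the saving coefficient reflects the idiosyncratic association. It is not the chapter's estimator; the data file, factor count and tidy panel layout are illustrative assumptions.

```python
# Hypothetical factor-augmented saving-investment regressions by country.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import PCA

# Assumed columns: country, year, saving, investment (rates, no missing values)
panel = pd.read_csv("oecd_saving_investment.csv")
saving = panel.pivot(index="year", columns="country", values="saving")
invest = panel.pivot(index="year", columns="country", values="investment")

# Global shocks proxied by the first two principal components of the panel
factors = PCA(n_components=2).fit_transform(
    np.hstack([saving.values, invest.values]))

betas = {}
for country in saving.columns:
    X = sm.add_constant(np.column_stack([saving[country].values, factors]))
    res = sm.OLS(invest[country].values, X).fit()
    betas[country] = res.params[1]   # saving coefficient net of global shocks

print(pd.Series(betas).round(2))
```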
