181

Intégration de l'expérience des livreurs aux problèmes de tournées de véhicules

Chanca, Etienne 18 October 2019
Ce mémoire introduit une nouvelle variante des problèmes de tournées de véhicules visant à prendre en considération la connaissance et l'expérience des livreurs dans le processus d'affectation aux clients. Ce nouveau type de problème de tournées de véhicules basé sur l'expérience (EB-VRP) est motivé par le gain potentiel que pourrait apporter la familiarité des livreurs avec leur zone de travail, non seulement en termes de coûts de transport, mais également de satisfaction des livreurs. Cette approche pourrait également avoir pour corollaire une consommation réduite de carburant ainsi qu'une affectation non biaisée des livreurs aux clients. Pour traiter ce nouveau problème, une méthode de représentation de la connaissance construite à partir d'un historique de livraison est proposée. La résolution est ensuite effectuée par une approche bi-objectif combinée à une heuristique à grand voisinage. Des résultats numériques issus de données réelles viennent corroborer la pertinence de cette méthodologie. / This thesis introduces a new variant of the vehicle routing problem that takes the drivers' knowledge and experience into account in the driver-customer assignment process. This new experience-based problem (EB-VRP) is motivated by the potential gains from the fit between drivers and their working areas, both in terms of global transportation costs and of driver satisfaction. This approach could also lead to lower fuel consumption and an unbiased driver-customer assignment. A new methodology is introduced to address this problem, featuring a way to model the drivers' knowledge from a delivery history. A bi-objective approach combined with a large neighborhood search heuristic is used as the solution method. Numerical results from real data support the relevance of our methodology.
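The abstract does not spell out how driver experience enters the model. As a purely hypothetical illustration, familiarity could be scored from the delivery history and traded off against travel cost in a weighted bi-objective evaluation; every name below is an assumption for the sketch.

```python
from collections import Counter, defaultdict

def familiarity(history):
    """history: iterable of (driver, zone) pairs from past deliveries.
    Returns each driver's share of past deliveries per zone."""
    counts = defaultdict(Counter)
    for driver, zone in history:
        counts[driver][zone] += 1
    return {d: {z: n / sum(c.values()) for z, n in c.items()}
            for d, c in counts.items()}

def assignment_cost(routes, travel_cost, fam, weight=0.5):
    """Bi-objective score: travel cost plus an unfamiliarity penalty.
    routes: {driver: [zones]}; weight trades off the two objectives."""
    total = 0.0
    for driver, zones in routes.items():
        for z in zones:
            total += travel_cost[driver][z] \
                     + weight * (1.0 - fam.get(driver, {}).get(z, 0.0))
    return total
```

A large neighborhood search heuristic, as in the thesis, would then repeatedly destroy and repair `routes` while tracking such a score.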
182

Understanding co-movements in macro and financial variables

D'Agostino, Antonello 09 January 2007
Over recent years, the growing availability of large datasets and improvements in computational speed have further fostered research in the fields of both macroeconomic modelling and forecasting. A primary focus of these research areas is to improve model performance by exploiting the informational content of several time series. Increasing the dimension of macro models is indeed crucial for a detailed structural understanding of the economic environment, as well as for accurate forecasting. As a consequence, a new generation of large-scale macro models, based on the micro-foundations of a fully specified dynamic stochastic general equilibrium set-up, has become one of the most flourishing research areas in both central banks and academia. At the same time, there has been a revival of forecasting methods dealing with many predictors, such as factor models. The central idea of factor models is to exploit co-movements among variables through a parsimonious econometric structure: a few underlying common shocks, or factors, explain most of the co-variation among variables, while the unexplained component of series movements is due to purely idiosyncratic dynamics. The generality of their framework allows factor models to describe a broad variety of models in macroeconomic and financial contexts. The revival of factor models in recent years stems from important developments achieved by Stock and Watson (2002) and Forni, Hallin, Lippi and Reichlin (2000). These authors find the conditions under which some data averages become collinear with the space spanned by the factors when the cross-section dimension becomes large. Moreover, their factor specifications allow the idiosyncratic dynamics to be mildly cross-correlated (an effect referred to as the 'approximate factor structure' by Chamberlain and Rothschild, 1983), a situation empirically verified in many applications. These findings have relevant implications, the most important being that the use of a large number of series no longer represents a dimensional constraint; on the contrary, it helps to identify the factor space. This new generation of factor models has been applied in several areas of macroeconomics and finance as well as in policy evaluation, and is consequently very likely to become a milestone in the literature on forecasting methods using many predictors. This thesis contributes to the empirical literature on factor models by proposing four original applications.

In the first chapter of this thesis, the generalized dynamic factor model of Forni et al. (2002) is employed to explore the predictive content of asset returns in forecasting Consumer Price Index (CPI) inflation and the growth rate of Industrial Production (IP). The connection between stock markets and economic growth is well known. In the fundamental valuation of equity, the stock price equals the discounted future stream of expected dividends. Since future dividends are related to future growth, a revision of prices, and hence returns, should signal movements in the future growth path. Other important transmission channels, such as Tobin's q theory (Tobin, 1969), the wealth effect and capital market imperfections, have also been widely studied in this literature. I show that an aggregate index, such as the S&P500, could be misleading if used as a proxy for the informative content of the stock market as a whole. Despite the widespread wisdom of considering such an index a leading variable, only part of the assets included in its composition has a leading behaviour with respect to the variables of interest. Its forecasting performance might therefore be poor, leading to sceptical conclusions about the effectiveness of asset prices in forecasting macroeconomic variables. The main idea of the first essay is therefore to analyze the lead-lag structure of the assets composing the S&P500. The classification into leading, lagging and coincident variables is achieved by means of the cross-correlation function cleaned of idiosyncratic noise and short-run fluctuations. I assume that asset returns follow a factor structure; that is, they are the sum of two parts: a common part driven by a few shocks common to all assets, and an idiosyncratic part, which is asset-specific. The correlation function, computed on the common part of the series, is not affected by the assets' specific dynamics and should provide information only on the series driven by the same common factors. Once the leading series are identified, they are grouped within the economic sector they belong to. The predictive content of these aggregates for IP growth and CPI inflation is then explored and compared with the forecasting power of the S&P500 composite index. The forecasting exercise is carried out as follows: first, in an autoregressive (AR) model I choose the truncation lag that minimizes the Mean Square Forecast Error (MSFE) in 11 years of out-of-sample simulations for 1, 6 and 12 steps ahead, both for IP growth and for CPI inflation. Second, the S&P500 is added as an explanatory variable to the previous AR specification; repeating the simulation exercise yields only very small improvements in the MSFE statistics. Third, averages of the leading stock return series within each sector are added as additional explanatory variables in the benchmark regression. Remarkable improvements over the benchmark specification are achieved, especially at the one-year forecast horizon. Significant improvements are also achieved at shorter horizons when the leading series of the technology and energy sectors are used.

The second chapter of this thesis disentangles the sources of aggregate risk and measures the extent of co-movement in five European stock markets. Based on the static factor model of Stock and Watson (2002), it proposes a new method for measuring the impact of international, national and industry-specific shocks. The process of European economic and monetary integration, with the advent of EMU, has been a central issue for investors and policy makers, and during these years the number of studies on the integration of and linkages among European stock markets has increased enormously. Given their forward-looking nature, stock prices are considered a key variable for assessing developments in the economy and financial markets. Measuring the extent of co-movement between European stock markets has therefore become, especially in recent years, one of the main concerns both for policy makers, who want to shape their policy responses appropriately, and for investors, who need to adapt their hedging strategies to the new political and economic environment. An optimal portfolio allocation strategy is based on a timely identification of the factors affecting asset returns. So far, the literature dating back to Solnik (1974) identifies national factors as the main contributors to co-variation among stock returns, with industry factors playing a marginal role. The increasing financial and economic integration of recent years, fostered by the decline of trade barriers and greater policy coordination, should have strongly reduced the importance of national factors and increased the importance of global determinants, such as industry determinants. However, somewhat puzzlingly, recent studies have found that country sources remain very important, and generally more important than industry ones. This chapter tries to cast some light on these conflicting results. It proposes a more flexible econometric estimation strategy, better suited to disentangling and measuring the impact of global and country factors. Results point to a declining influence of national determinants and an increasing influence of industry ones; international influences remain the most important driving force of excess returns. These findings overturn the results in the literature and have important implications for strategic portfolio allocation policies, which need to be revisited and adapted to the changed financial and economic scenario.

The third chapter presents a new stylized fact which can be helpful for discriminating among alternative explanations of U.S. macroeconomic stability. The main finding is that the fall in time series volatility is associated with a sizable decline, of the order of 30% on average, in the predictive accuracy of several widely used forecasting models, including the factor models proposed by Stock and Watson (2002). This pattern is not limited to measures of inflation but extends to several indicators of real economic activity and interest rates. The generalized fall in predictive ability after the mid-1980s is particularly pronounced for forecast horizons beyond one quarter. Furthermore, this empirical regularity is not specific to a single method; rather, it is a common feature of all models, including those used by public and private institutions. In particular, the forecasts for output and inflation of the Fed's Greenbook and the Survey of Professional Forecasters (SPF) are significantly more accurate than a random walk only before 1985. After this date, in contrast, the hypothesis of equal predictive ability between naive random walk forecasts and the predictions of those institutions is not rejected at any horizon, the only exception being the current quarter. The results of this chapter may also be of interest for the empirical literature on asymmetric information. Romer and Romer (2000), for instance, consider a sample ending in the early 1990s and find that the Fed produced more accurate forecasts of inflation and output than several commercial providers. The results here imply that the informational advantage of the Fed and those private forecasters is in fact limited to the 1970s and the beginning of the 1980s. In contrast, during the last two decades no forecasting model does better than "tossing a coin" beyond the first-quarter horizon, thereby implying that on average uninformed economic agents can effectively anticipate future macroeconomic developments. On the other hand, econometric models and economists' judgement remain quite helpful for forecasts over the very short horizon relevant for conjunctural analysis. Moreover, the literature on forecasting methods, recently surveyed by Stock and Watson (2005), has devoted a great deal of attention to identifying the best model for predicting inflation and output; the majority of studies, however, are based on full-sample periods. The main findings of the chapter reveal that most of the full-sample predictability of U.S. macroeconomic series arises from the years before 1985: long time series appear to attach a far larger weight to the earlier sub-sample, which is characterized by greater volatility of inflation and output. The results also suggest that some caution is warranted when evaluating the performance of alternative forecasting models on a pool of different sub-periods, as full-sample analyses are likely to miss parameter instability.

The fourth chapter performs a detailed forecast comparison between the static factor model of Stock and Watson (2002) (SW) and the dynamic factor model of Forni et al. (2005) (FHLR). This is not the first work to perform such an evaluation: Boivin and Ng (2005) focus on a very similar problem, while Stock and Watson (2005) compare the performance of a larger class of predictors. The SW and FHLR methods essentially differ in the computation of the forecast of the common component; in particular, they differ in the estimation of the factor space and in the way projections onto this space are performed. In SW, the factors are estimated by static Principal Components (PC) of the sample covariance matrix, and the forecast of the common component is simply the projection of the predicted variable on the factors. FHLR propose efficiency improvements in two directions. First, they estimate the common factors by Generalized Principal Components (GPC), in which observations are weighted according to their signal-to-noise ratio. Second, they impose the constraints implied by the dynamic factor structure when the variables of interest are projected on the common factors. Specifically, they take into account the leading and lagging relations across series by means of principal components in the frequency domain, which allows for an efficient aggregation of variables that may be out of phase. Whether these efficiency improvements help forecasting in a finite sample is, however, an empirical question on which the literature has not yet reached a consensus. On the one hand, Stock and Watson (2005) show that both methods perform similarly (although they focus on the weighting of the idiosyncratic component and not on the dynamic restrictions), while Boivin and Ng (2005) show that SW's method largely outperforms FHLR's and, in particular, conjecture that the dynamic restrictions implied by the method are harmful for the forecast accuracy of the model. This chapter tries to shed new light on these conflicting results. It focuses on the Industrial Production index (IP) and the Consumer Price Index (CPI) and bases the evaluation on a simulated out-of-sample forecasting exercise. The dataset, borrowed from Stock and Watson (2002), consists of 146 monthly series for the US economy, spanning 1959 to 1999. In order to isolate and evaluate specific characteristics of the methods, a procedure is designed in which the two non-parametric approaches are nested in a common framework. In addition, for both versions of the factor model forecasts, the chapter studies the contribution of the idiosyncratic component to the forecast. Other non-core aspects of the models are also investigated: robustness with respect to the choice of the number of factors and to variable transformations. Finally, the chapter analyses the sub-sample performance of the factor-based forecasts. The purpose of this exercise is to design an experiment for assessing the contribution of the core characteristics of the different models to forecasting performance, while discussing auxiliary issues; hopefully it may also serve as a guide for practitioners in the field. As in Stock and Watson (2005), the results show that the efficiency improvements due to the weighting of the idiosyncratic components do not lead to significantly more accurate forecasts; but, in contrast to Boivin and Ng (2005), it is shown that the dynamic restrictions imposed by the procedure of Forni et al. (2005) are not harmful for predictability. The main conclusion is that the two methods perform similarly and produce highly collinear forecasts. / Doctorat en sciences économiques, Orientation économie / info:eu-repo/semantics/nonPublished
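Several of the chapters rest on the static principal-components factor machinery of Stock and Watson (2002). As a rough, self-contained sketch of that machinery (the lag choice and standardization below are illustrative assumptions, not the thesis's exact specification):

```python
import numpy as np

def pc_factors(X, r):
    """Estimate r static factors from a T x N standardized panel X via SVD."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return np.sqrt(X.shape[0]) * U[:, :r]            # T x r factor estimates

def factor_forecast(y, X, r=3, h=1):
    """h-step forecast of y: regress y_{t+h} on a constant and factors F_t."""
    F = pc_factors((X - X.mean(0)) / X.std(0), r)
    Z = np.column_stack([np.ones(len(F)), F])
    beta, *_ = np.linalg.lstsq(Z[:-h], y[h:], rcond=None)
    return Z[-1] @ beta                               # point forecast of y_{T+h}
```

The FHLR alternative discussed in the fourth chapter replaces the static principal components with generalized, frequency-domain ones; that refinement is not reproduced here.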
183

Three essays in international economics

Malek Mansour, Jeoffrey H.G. 25 January 2006
This thesis consists of a collection of research works dealing with various aspects of international economics. More precisely, we focus on three main themes: (i) the existence of a world business cycle and its implications; (ii) the likelihood of asymmetric shocks in the euro zone resulting from fluctuations in the euro exchange rate, given differences in sector specialization patterns, and some consequences of such shocks; and (iii) the relationship between trade openness and growth, and the influence of the sector specialization structure on that relationship.

Regarding the approach pursued to tackle these problems, we have chosen to remain strictly within the boundaries of empirical (macro)economics, that is, applied econometrics. Though we systematically provide theoretical models to back up our empirical approach, our real concern is to look at the stories the data can (or cannot) tell us. As to the econometric methodology, we restrict ourselves to panel data analysis: the large spectrum of techniques available within the panel framework allows us to use, for each of the problems at hand, the most suitable approach (or what we believe it to be). / Doctorat en sciences économiques, Orientation économie / info:eu-repo/semantics/nonPublished
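The abstract commits to panel econometrics without naming particular estimators. As a generic illustration of that register (not the thesis's own specification), the within (fixed-effects) estimator at the heart of most panel analyses can be sketched as:

```python
import numpy as np

def within_estimator(y, X, ids):
    """Fixed-effects OLS: demean y and X within each unit, then regress.
    y: outcomes, X: regressors, ids: cross-sectional unit of each row."""
    yd, Xd = y.astype(float).copy(), X.astype(float).copy()
    for i in np.unique(ids):
        m = ids == i
        yd[m] -= yd[m].mean()
        Xd[m] -= Xd[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return beta
```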
184

Allocation stratégique des transferts gouvernementaux au Mexique entre 1997 et 2000 : une analyse de durée

Msaid, Youcef 16 April 2018
In 1997, Mexico's PRI government launched PROGRESA, a financial assistance programme for poor families living in rural areas. Because of budgetary and organizational constraints, not all eligible localities could be incorporated before the presidential election of July 2000; a marginality index was supposed to determine the order of incorporation. After estimating a duration model, we find that the PRI vote share and the turnout rate in the 1997 legislative election have a negative and significant effect on the time elapsed before a locality's incorporation. The magnitude of this effect remains fairly small, however: a one-standard-deviation increase in the PRI vote share reduces the average duration by less than 3%, while a one-standard-deviation increase in turnout reduces it by less than 4%. Taking the eligibility criteria as given, the presence of a strong opposition in the federal Congress over the 1997-2000 period seems to have allowed an equitable expansion of the programme.
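The abstract reports a duration analysis but not its parametric form. A minimal sketch of one standard choice, a Weibull proportional-hazards model fitted by maximum likelihood, is given below; the variable names and the specification itself are illustrative assumptions, not the thesis's own.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical Weibull proportional-hazards duration model.
# t: years until a locality's incorporation; d: 1 if incorporated in-sample
# (0 if censored); X: covariates, e.g. PRI vote share and turnout in 1997.
# A positive coefficient raises the incorporation hazard, shortening the wait.

def neg_loglik(params, t, d, X):
    log_p, log_lam, beta = params[0], params[1], params[2:]
    p, lam = np.exp(log_p), np.exp(log_lam)
    xb = X @ beta
    # event term (log hazard) minus integrated hazard for every observation
    ll = d * (log_p + log_lam + (p - 1) * np.log(t) + xb) - lam * t**p * np.exp(xb)
    return -ll.sum()

def fit_duration(t, d, X):
    x0 = np.zeros(2 + X.shape[1])
    res = minimize(neg_loglik, x0, args=(t, d, X), method="BFGS")
    return res.x  # (log shape, log scale, covariate coefficients)
```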
185

Essays on demographic changes, health and economic development

Housseini, Bouba 20 April 2018
Dans un contexte de changements démographiques, ma thèse de doctorat vise à clarifier deux questions principales : i) comment évaluer le progrès des nations lorsque les populations diffèrent en taille, longévité et répartition des revenus ? et ii) quels sont les effets de la fécondité et de la mortalité sur la croissance économique en Afrique subsaharienne ? La première partie (chapitres 1 et 2) élucide la manière dont les changements en taille de la population, en longévité et en répartition des revenus pourraient être socialement évalués, tandis que la seconde partie (chapitre 3) fournit un cadre de politique publique et des éclairages sur les moyens de réaliser un dividende démographique dans le contexte de l'Afrique subsaharienne. J'adopte deux approches différentes pour aborder ces questions. La première partie utilise une méthode welfariste qui développe et applique (sur l'Afrique subsaharienne) des fonctions et critères d'évaluation sociale intertemporelle adaptés aux populations de taille et de durée de vie variables. La deuxième partie utilise une approche économétrique qui développe et estime un modèle d'équations simultanées des déterminants de la mortalité, de la fécondité et de la performance économique en utilisant des données de panel des pays de l'Afrique subsaharienne. Le chapitre 1 explore les principes axiomatiques et welfaristes d'évaluation du bien-être social dans un cadre intertemporel. Il apporte des réponses à certaines des limites des méthodes existantes dans la littérature, en proposant en particulier une fonction d'évaluation sociale qui échappe à la conclusion répugnante temporelle, qui est neutre vis-à-vis de la fragmentation des vies et qui satisfait la cohérence temporelle de niveau critique. Pour ce faire, nous caractérisons une fonction d'utilité intertemporelle de niveau critique qui évalue la vie de manière périodique. Pour pallier les controverses sur l'actualisation ou non des utilités à travers le temps, deux versions de la fonction sont développées, l'une avec actualisation et l'autre sans. Le chapitre 2 met l'accent sur la manière d'évaluer le progrès des nations lorsque les populations diffèrent en taille, en longévité et en répartition des revenus. Le cadre d'analyse est ensuite appliqué au contexte démographique (particulier) de l'Afrique subsaharienne. Les résultats indiquent que la contribution de la taille de la population au bien-être social dépend des considérations éthiques concernant le choix d'un niveau critique au-delà duquel une vie est considérée comme digne d'être vécue (ou améliorant le bien-être social). La durée de vie n'a pas d'effet significatif sur le bien-être social avant la transition démographique. L'explosion démographique observée au cours du dernier siècle en Afrique subsaharienne a détérioré le bien-être social pour des valeurs de niveau critique supérieures à 180 $ par année, soit environ la moitié du seuil bien connu de pauvreté d'un dollar par jour. Cela corrobore l'idée souvent émise selon laquelle le ralentissement de la croissance démographique en Afrique subsaharienne n'élèverait pas seulement le niveau de vie moyen, mais augmenterait également le bien-être social en général.
Le chapitre 3 développe et estime un modèle économétrique des déterminants conjoints de la fécondité, de la mortalité et de la performance économique en Afrique subsaharienne afin d'identifier les actions de politique publique permettant d'accélérer la transition démographique dans la région et, par conséquent, de réaliser son corollaire, le dividende démographique. L'analyse s'appuie sur un modèle économétrique d'équations simultanées utilisant des données de panel multi-pays pour la période 1960-2010. Pour faire face au problème d'endogénéité, nous adoptons la méthode des variables instrumentales en exploitant différentes sources de variations exogènes du revenu par tête, de la fécondité et de la mortalité. Les résultats montrent que chaque année supplémentaire d'espérance de vie à la naissance implique une croissance du revenu par tête de 13,1 %. En outre, un doublement du revenu par tête entraîne une augmentation de la longévité de 6,3 ans. Toutefois, les relations entre la fécondité, d'une part, et le revenu par tête et l'espérance de vie à la naissance, d'autre part, ressortent ambiguës, certainement en raison de la dépendance des économies de l'Afrique subsaharienne aux ressources naturelles et au commerce international. Nos résultats soulignent la nécessité de promouvoir la transformation structurelle des économies de l'Afrique subsaharienne afin d'accélérer la transition démographique dans la région et de réaliser un dividende démographique. / In a context of demographic changes, my PhD thesis aims to clarify two main questions: i) how can the progress of nations be evaluated when populations differ in size, longevity and income distribution? and ii) what are the effects of fertility and mortality on economic growth in Sub-Saharan Africa (SSA)? The first part (chapters 1 and 2) elucidates how changes in population size, longevity and income distribution can be socially evaluated, while the second part (chapter 3) provides a public policy framework and insights on how the demographic dividend can be captured in the Sub-Saharan African context. I adopt two different approaches to analyse these questions. The first part uses a welfarist method that develops and applies (to SSA) intertemporal social evaluation functions and criteria suited to populations of variable size and longevity. The second part uses an econometric approach that develops and estimates a simultaneous equations model of the determinants of mortality, fertility and economic performance using country-level panel data from SSA. Chapter 1 explores the use of axiomatic and welfarist principles to assess social welfare in an intertemporal framework. It attempts to overcome some of the limits of existing methods in the literature, in particular by avoiding a temporal repugnant conclusion, by neither penalizing nor favoring life fragmentation, and by satisfying critical-level temporal consistency. It does this by characterizing a critical-level lifetime utility function that values life periodically. To address some of the controversies about discounting utilities across time, two alternative versions of the function are developed, one with discounting and one without. Chapter 2 focuses on how the progress of nations can be evaluated when populations differ in size, longevity and income distribution. The framework is applied to the (particular) demographic context of SSA.
The findings indicate that the contribution of population size to social welfare depends on ethical considerations regarding the choice of a critical level above which a life is considered to be worth living (or social welfare improving). Length of life does not have a significant effect on social welfare prior to the demographic transition. SSA's demographic explosion over the last century has worsened social welfare for critical-level values greater than $180 per year, i.e. roughly half the well-known dollar-a-day poverty line. This supports the often-heard view that slowing down demographic growth in SSA may not only increase average living standards but also raise overall social welfare. Chapter 3 develops and estimates an econometric model of the joint determinants of fertility, mortality and economic performance in SSA, in order to identify public policy actions that would accelerate the demographic transition in the region and consequently achieve its corollary, the demographic dividend. The analysis builds on a simultaneous equations econometric model using multi-country panel data for the period 1960-2010. To deal with endogeneity, we use the instrumental variable approach, exploiting different sources of exogenous variation in per capita income, fertility and mortality. The results show that each additional year of life expectancy at birth implies a growth of per capita income of 13.1%. Also, a doubling of per capita income leads to a rise in longevity of 6.3 years. However, the relationships between fertility and both per capita income and life expectancy at birth appear to be ambiguous, probably owing to the dependence of SSA economies on natural resources and international trade. Our findings point to the necessity of fostering the structural transformation of SSA economies in order to accelerate the demographic transition in the region and to capture the demographic dividend.
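The abstract describes a critical-level lifetime utility function that values life period by period, with discounted and undiscounted variants. A standard critical-level utilitarian form consistent with that description (shown for illustration; the thesis's exact function may differ) is:

```latex
% Illustrative critical-level utilitarian evaluation; the thesis's exact
% period-wise function may differ from this standard form.
W \;=\; \sum_{i=1}^{N} \;\sum_{t \in L_i} \beta^{t}\,\bigl(u_{it} - c\bigr)
```

where $u_{it}$ is individual $i$'s utility in period $t$, $L_i$ the set of periods in which $i$ is alive, $c$ the critical level (the \$180-per-year threshold discussed above is one candidate value), and $\beta \le 1$ the discount factor, with $\beta = 1$ giving the undiscounted version. Lives contribute positively to $W$ only when period utility exceeds the critical level, which is why the welfare verdict on population growth hinges on the choice of $c$.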
186

Wages and the bargaining regimes in corporatist countries: a series of empirical essays

Rusinek, Michael 17 June 2009
In the first chapter, a harmonised linked employer-employee dataset is used to study the impact of firm-level agreements on the wage structure in the manufacturing sector in Belgium, Denmark and Spain. To our knowledge, this is one of the first cross-country studies examining the impact of firm-level bargaining on the wage structure in European countries. We find that firm-level agreements have a positive effect both on wage levels and on wage dispersion in Belgium and Denmark. In Spain, firm-level agreements also increase wage levels but reduce wage dispersion. Our interpretation is that in Belgium and Denmark, where firm-level bargaining has greatly expanded since the 1980s at the initiative of employers and governments, firm-level bargaining is mainly used to adapt pay to the specific needs of the firm. In Spain, the structure of collective bargaining has not changed very much since the Franco period, when firm agreements were used as a tool for worker mobilisation and political struggle. Firm-level bargaining in Spain is therefore still mainly used by trade unions to reduce wage dispersion.

In the second chapter, we analyse the impact of the bargaining level and of the degree of centralisation of wage bargaining on rent-sharing in Belgium. To the best of our knowledge, this is the first study to consider both dimensions of collective bargaining simultaneously, and one of the first to look at the impact of wage bargaining institutions on rent-sharing in European countries. This question is important because, if wage bargaining decentralisation strengthens the link between wages and firm-specific profits, it may prevent an efficient allocation of labour across firms, increase wage inequality, lead to smaller employment adjustments, and affect the division of surplus between capital and labour (Bryson et al. 2006). Controlling for the endogeneity of profits, for heterogeneity among workers and firms, and for differences in characteristics between bargaining regimes, we find that wages depend substantially more on firm-specific profits in decentralised than in centralised industries, irrespective of the presence of a formal firm collective agreement. In addition, the impact of a formal firm collective agreement on the wage-profit elasticity depends on the degree of centralisation of the industry. In centralised industries, profits influence wages only when a firm collective agreement is present; this is not surprising, since industry agreements do not take firm-specific characteristics into account. Within decentralised industries, firms share their profits with their workers even if they are not covered by a formal firm collective agreement, probably because, in those industries, workers covered only by an industry agreement (i.e. not by a formal firm agreement) receive wage supplements paid unilaterally by their employer. The fact that those workers also benefit from rent-sharing implies that pay-setting does not need to be collective to generate rent-sharing, in line with the Anglo-American literature showing that rent-sharing is not a particularity of the unionised sector.

In the first two chapters, we have shown that, in Belgium, firm-level bargaining is used by firms to adapt pay to the specific characteristics of the firm, including the firm's profits. In the third and final chapter, it is shown that firm-level bargaining also allows wages to adapt to the local environment the company faces. This aspect is of particular importance in the debate about a potential regionalisation of wage bargaining in Belgium. The debate is, however, not specific to Belgium: the potential failure of national industry agreements to take into account the productivity levels of the least productive regions has been considered one of the causes of regional unemployment in European countries (Davies and Hallet, 2001; OECD, 2006). Two kinds of solution are generally proposed. The first, encouraged by the European Commission and the OECD, consists in decentralising wage bargaining towards the firm level (Davies and Hallet, 2001; OECD, 2006). The second, the regionalisation of wage bargaining, is frequently mentioned in Belgium or in Italy, where regional unemployment differentials are high. In this chapter we show that, in Belgium, regional wage differentials and regional productivity differentials within joint committees are positively correlated. Moreover, this relation is stronger (i) for joint committees where firm-level bargaining is relatively frequent and (ii) for joint committees already sub-divided along local lines. We conclude that the present Belgian wage bargaining system, which combines interprofessional, industry and firm bargaining, already includes the mechanisms that allow regional productivity to be taken into account in wage formation. It is therefore not necessary to further regionalise wage bargaining in Belgium. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
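Chapter two's rent-sharing regressions control for the endogeneity of profits. A stylized two-stage least squares sketch is shown below; the instrument matrix `Z` and the controls are placeholders, since the abstract does not list the chapter's actual instruments.

```python
import numpy as np

# Stylized 2SLS for a rent-sharing equation: log wage on profit per head
# (endogenous) plus exogenous controls. Identification requires at least as
# many instrument columns in Z as endogenous regressors in X_endog.

def tsls(y, X_endog, X_exog, Z):
    n = len(y)
    W = np.column_stack([np.ones(n), X_exog])    # constant + controls
    instruments = np.column_stack([W, Z])        # exogenous vars instrument themselves
    regressors = np.column_stack([W, X_endog])
    # first stage: project every regressor on the full instrument set
    P, *_ = np.linalg.lstsq(instruments, regressors, rcond=None)
    fitted = instruments @ P
    # second stage: OLS of y on the fitted regressors
    beta, *_ = np.linalg.lstsq(fitted, y, rcond=None)
    return beta  # last entries: wage-profit elasticities of the endogenous vars
```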
187

Essays on econometric modelling of temporal networks / Essais sur la modélisation économétrique des réseaux temporels

Iacopini, Matteo 05 July 2018
La théorie des graphes a longtemps été étudiée en mathématiques et en probabilité en tant qu'outil pour décrire la dépendance entre les nœuds. Cependant, ce n'est que récemment qu'elle a été mise en œuvre sur des données, donnant naissance à l'analyse statistique des réseaux réels. La topologie des réseaux économiques et financiers est remarquablement complexe : elle n'est généralement pas observée et nécessite ainsi des procédures inférentielles adéquates pour son estimation ; d'ailleurs, non seulement les nœuds, mais la structure de la dépendance elle-même évolue dans le temps. Des outils statistiques et économétriques pour modéliser la dynamique de changement de la structure du réseau font défaut, malgré le besoin croissant qui s'en fait sentir dans plusieurs domaines de recherche. En même temps, avec le début de l'ère des « Big data », la taille des ensembles de données disponibles devient de plus en plus élevée et leur structure interne de plus en plus complexe, entravant dans plusieurs cas les processus inférentiels traditionnels. Cette thèse a pour but de contribuer à ce nouveau champ de la littérature, qui associe probabilités, économie, physique et sociologie, en proposant de nouvelles méthodologies statistiques et économétriques pour l'étude de l'évolution temporelle des structures en réseau de moyenne et haute dimension. / Graph theory has long been studied in mathematics and probability as a tool for describing dependence between nodes. However, only recently has it been applied to data, giving birth to the statistical analysis of real networks. The topology of economic and financial networks is remarkably complex: it is generally unobserved, thus requiring adequate inferential procedures for its estimation; moreover, not only the nodes but the structure of dependence itself evolves over time. Statistical and econometric tools for modelling the dynamics of change of the network structure are lacking, despite their increasing importance in several fields of research. At the same time, with the beginning of the "Big data" era, the size of available datasets is becoming increasingly large and their internal structure is growing in complexity, hampering traditional inferential processes in many cases. This thesis aims to contribute to this newborn field of literature, which joins probability, economics, physics and sociology, by proposing novel statistical and econometric methodologies for the study of the temporal evolution of network structures of medium-high dimension.
188

Investissement direct étranger et tourisme international / Foreign direct investment and international tourism

Bourdarias-Pham, Vân 06 June 2016
Cette étude porte sur l'investissement direct étranger et le tourisme international. Il s'agit d'une étude simultanée de la demande touristique internationale, à la fois en termes d'arrivées et de recettes ; ces éléments n'ont fait l'objet que de peu de travaux antérieurs, en raison de la spécificité du tourisme et des lacunes des données statistiques. Ce travail comporte deux parties. La première partie est divisée en deux chapitres. Le premier chapitre présente une analyse économique des IDE, y compris dans le secteur du tourisme et du tourisme international. Dans le deuxième chapitre, les principaux déterminants de l'IDE et du secteur touristique sont étudiés. La deuxième partie concerne les applications économétriques et le classement typologique des déterminants des IDE ; elle comporte deux chapitres. Dans le premier chapitre, les données statistiques, la méthodologie concernant les statistiques descriptives et les modèles économétriques sont étudiés afin de démontrer le lien d'interdépendance et d'interaction. Le deuxième chapitre est consacré à l'analyse des tests des pays concernés. L'association des résultats des tests économétriques avec une étude monographique de chaque pays permet d'établir un classement des déterminants des IDE à destination touristique. / This work focuses on foreign direct investment (FDI) and international tourism. It is a simultaneous study of international tourism demand in terms of both arrivals and receipts; these elements have been the subject of little earlier work, owing to the specificity of tourism and the shortcomings of the statistical data. The work consists of two parts. The first is divided into two chapters. The first chapter presents an economic analysis of FDI, including the tourism sector and international tourism. In the second chapter, the main determinants of FDI and of the tourism sector are studied. The second part concerns the econometric applications and the typological classification of the determinants of FDI; it comprises two chapters. In the first chapter, the statistical data, the methodology for the descriptive statistics and the econometric models are studied in order to demonstrate the interdependence and interaction between FDI and tourism. The second chapter is devoted to the analysis of the tests for the countries concerned. Combining the results of the econometric tests with a monograph study of each country makes it possible to establish a ranking of the determinants of FDI in tourist destinations.
189

Quantile-based inference and estimation of heavy-tailed distributions

Dominicy, Yves 18 April 2014
This thesis is divided into four chapters. The first two chapters introduce a parametric quantile-based estimation method for univariate heavy-tailed distributions and for elliptical distributions, respectively. For those interested in estimating the tail index without imposing a parametric form on the entire distribution function, but only on the tail behaviour, chapter three proposes a multivariate Hill estimator for elliptical distributions. In the first three chapters we assume an independent and identically distributed setting; as a first step towards a dependent setting, the last chapter proves, using quantiles, the asymptotic normality of marginal sample quantiles for stationary processes under the S-mixing condition.

The first chapter introduces a quantile- and simulation-based estimation method, which we call the Method of Simulated Quantiles, or simply MSQ. Since it is based on quantiles, it is a moment-free approach; and since it is based on simulations, we do not need closed-form expressions of any function that represents the probability law of the process. It is thus useful when the probability density function has no closed form and/or moments do not exist. The method is based on a vector of functions of quantiles. The principle consists in matching functions of theoretical quantiles, which depend on the parameters of the assumed probability law, with those of empirical quantiles, which depend on the data. Since the theoretical functions of quantiles may not have a closed-form expression, we rely on simulations.

The second chapter deals with the estimation of the parameters of elliptical distributions by means of a multivariate extension of MSQ, proposing inference for vast-dimensional elliptical distributions. Estimation is based on quantiles, which always exist regardless of the thickness of the tails, and testing is based on the geometry of the elliptical family. The multivariate extension of MSQ faces the difficulty of constructing a function of quantiles that is informative about the covariation parameters. We show that the interquartile range of a projection of pairwise random variables onto the 45-degree line is very informative about the covariation.

The third chapter constructs a multivariate tail index estimator. In the univariate case, the most popular estimator of the tail exponent is the Hill estimator, introduced by Bruce Hill in 1975. The aim of this chapter is to propose an estimator of the tail index in a multivariate context; more precisely, in the case of regularly varying elliptical distributions. Since, for univariate random variables, our estimator boils down to the Hill estimator, we name it after Bruce Hill. Our estimator is based on the distance between an elliptical probability contour and the exceedance observations.

Finally, the fourth chapter investigates the asymptotic behaviour of the marginal sample quantiles for p-dimensional stationary processes, obtaining the asymptotic normality of the empirical quantile vector. We assume that the processes are S-mixing, a recently introduced and widely applicable notion of dependence. A remarkable property of S-mixing is that it does not require any higher-order moment assumptions to be verified; since we are interested in quantiles and in processes that are likely heavy-tailed, this is of particular interest. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
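The abstract gives only the principle of MSQ: match functions of theoretical quantiles, obtained by simulation, with their empirical counterparts. A minimal sketch follows, using a location-scale Student-t law as a stand-in heavy-tailed distribution; the quantile levels, distance and starting values are illustrative choices, not taken from the thesis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

LEVELS = np.array([0.05, 0.25, 0.50, 0.75, 0.95])  # illustrative quantile levels

def msq_objective(params, data, n_sim=100_000):
    """Squared distance between simulated and empirical quantiles."""
    df, loc, scale = params
    if df <= 0.0 or scale <= 0.0:
        return np.inf
    rng = np.random.default_rng(0)  # common random numbers across evaluations
    sims = loc + scale * student_t.rvs(df, size=n_sim, random_state=rng)
    gap = np.quantile(sims, LEVELS) - np.quantile(data, LEVELS)
    return float(gap @ gap)

def fit_msq(data):
    res = minimize(msq_objective, x0=np.array([5.0, 0.0, 1.0]),
                   args=(data,), method="Nelder-Mead")
    return res.x  # estimated (degrees of freedom, location, scale)
```

No moments or closed-form densities are needed at any point, which is the appeal of the method for heavy-tailed laws.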
190

Essays on the econometrics of macroeconomic survey data

Conflitti, Cristina 11 September 2012
This thesis contains three essays covering different topics in the statistics and econometrics of survey data. Chapters one and two analyse two aspects of the Survey of Professional Forecasters (SPF) dataset. This survey provides a wealth of information on the macroeconomic expectations of professional forecasters and offers an opportunity to exploit a rich information set, but it poses the challenge of how to extract the relevant information properly. The last chapter addresses the issue of analyzing the opinions on the euro reported in the Flash Eurobarometer dataset.

The first chapter, Measuring Uncertainty and Disagreement in the European Survey of Professional Forecasters, proposes a density forecast methodology based on a piecewise linear approximation of the individual forecasters' histograms to measure the uncertainty and disagreement of professional forecasters. Since 1960, with the introduction of the SPF in the US, it has been clear that such surveys are a useful source of information for measuring disagreement and uncertainty without relying on macroeconomic or time series models. Direct measures of uncertainty are seldom available, whereas many surveys report point forecasts from a number of individual respondents, and there has been a long tradition of using measures of the dispersion of individual respondents' point forecasts (disagreement or consensus) as proxies for uncertainty. The SPF is an exception: it directly asks for the point forecast and for the probability distribution, in the form of a histogram, associated with the macro variables of interest. An important issue is how to approximate the individual probability densities and obtain accurate individual results for disagreement and uncertainty before computing the aggregate measures. In contrast to Zarnowitz and Lambros (1987) and Giordani and Soderlind (2003), we overcome the problem associated with distributional assumptions on the probability density forecasts by using a non-parametric approach that, instead of assuming a functional form for the individual probability law, approximates the histogram by a piecewise linear function. In addition, and unlike earlier works that focus on US data, we employ European data, considering gross domestic product (GDP), inflation and unemployment.

The second chapter, Optimal Combination of Survey Forecasts, is based on joint work with Christine De Mol and Domenico Giannone. It proposes an approach to optimally combine survey forecasts, exploiting the whole covariance structure among forecasters. There is a vast literature on forecast combination methods advocating their usefulness from both the theoretical and the empirical point of view (see e.g. the recent review by Timmermann (2006)). Surprisingly, simple methods tend to outperform more sophisticated ones, as shown for example by Genre et al. (2010) on the combination of the forecasts in the SPF conducted by the European Central Bank (ECB). The main conclusion of several studies is that the simple equal-weighted average constitutes a benchmark that is hard to improve upon. In contrast to a large part of the literature, which does not exploit the correlation among forecasters, we take into account the full covariance structure and determine the optimal weights for the combination of point forecasts as the minimizers of the mean squared forecast error (MSFE), under the constraint that these weights are nonnegative and sum to one. We compare our combination scheme with other methodologies in terms of forecasting performance; results show that the proposed optimal combination scheme is an appropriate methodology for combining survey forecasts. The literature on point forecast combination is well developed, but there are fewer studies analyzing the combination of density forecasts. We therefore extend our work to density forecast combination: starting from the main results in Hall and Mitchell (2007), we propose an iterative algorithm for computing the density weights which maximize the average logarithmic score over the sample period. The empirical application covers European GDP and inflation forecasts. Results suggest that the optimal weights obtained via the iterative algorithm outperform the equal-weighted density combinations used by the ECB.

The third chapter, entitled Opinion surveys on the euro: a multilevel multinomial logistic analysis, outlines the multilevel aspects of public attitudes toward the euro. This work was motivated by the ongoing debate on whether the perception of the euro among European citizens, ten years after its introduction, was positive or negative. The aim is therefore to disentangle public attitudes by considering both individual socio-demographic characteristics and the macroeconomic features of each country, treating them as two separate levels in a single analysis. A hierarchical structure has the advantage of modelling within-country as well as between-country relations in a single analysis. The multilevel analysis allows for dependence between individuals within countries induced by unobserved heterogeneity between countries; that is, we include in the estimation country-specific characteristics that are not directly observable. In this chapter we empirically investigate which individual characteristics and country specificities matter most for the perception of the euro. Attitudes toward the euro vary across individuals and countries and are driven by personal considerations based on the benefits and costs of using the single currency. Individual features, such as a high level of education or living in a metropolitan area, have a positive impact on the perception of the euro. Moreover, country-specific economic conditions can influence individuals' attitudes. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
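The optimal point-forecast combination of chapter two, minimizing the MSFE over weights constrained to be nonnegative and to sum to one, is a small quadratic program. A minimal sketch under illustrative input conventions (a T x N matrix of past forecast errors; the chapter's estimation details are not given in the abstract):

```python
import numpy as np
from scipy.optimize import minimize

def optimal_weights(E):
    """MSFE-minimizing combination weights, nonnegative and summing to one.
    E: T x N matrix of past forecast errors of the N forecasters."""
    Sigma = E.T @ E / len(E)                     # (uncentred) error covariance
    n = Sigma.shape[0]
    res = minimize(lambda w: w @ Sigma @ w,      # expected squared combined error
                   np.full(n, 1.0 / n),          # start from the equal-weight benchmark
                   bounds=[(0.0, None)] * n,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
                   method="SLSQP")
    return res.x

# combined forecast: optimal_weights(past_errors) @ current_point_forecasts
```

With a diagonal Sigma and equal error variances the solution collapses to the equal-weighted average, the benchmark the chapter compares against.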
