271

Proposition d'un modèle de prévision spatio-temporel à court terme de l'ensoleillement global, à partir de trois sites en Guadeloupe / Proposal of a spatio-temporal forecasting model at short time for global solar radiation from three sites in Guadeloupe

Andre, Maina 28 October 2015 (has links)
Currently in Guadeloupe, 5.92% of electricity demand is covered by photovoltaics and 3.14% by wind power, i.e. 9.06% for their combined production, according to the 2015 report of the OREC (Regional Energy and Climate Observatory). According to the regional energy plan, the combined production of photovoltaic and wind energy should represent 14% of the electricity mix in 2020 and 18% in 2030. Reaching 14% of the electricity mix within the next five years will require, among other things, improving forecasting so that these energy sources can develop at a sustained pace. This research provides new performance results for short-term forecasting of global solar radiation and a finer knowledge of the resource at three stations in Guadeloupe.

The study is based on an analysis and a forecast model of global solar radiation that include spatial and temporal parameters. The literature shows that a large number of sites is generally used for a spatio-temporal analysis, which would require us to deploy multiple sensors over the whole territory; the cost of such a system would be considerable. Our approach here consists in carrying out a spatio-temporal analysis on only three stations. With few stations and non-uniform distances between them, we therefore sought to develop a short-term forecast model of global solar radiation in spite of these constraints, which do not fit the classical approach. The model is based on a VAR (Vector Autoregressive) methodology including spatial and temporal parameters. A variable-selection strategy is developed to select the predictors (stations) that are useful for forecasting at a given location. This iterative strategy is, on the one hand, closer to reality and, on the other hand, faster from an algorithmic point of view. Prior to the development of the model, a study of the spatio-temporal variability of global solar radiation allowed us to quantify and characterise finely the dynamic interactions between the three stations. Compared with models from the literature, our forecast model shows good performance, with relative RMSE values ranging from 17.48% to 23.79% for forecast horizons from 5 min to 1 hour. The methodologies developed could eventually offer a way to provide guarantees to the network operator. If accurate forecasting solutions became widespread in the future, this would make it possible to open the market beyond the 30% threshold currently imposed.
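The VAR-based approach named in the abstract can be illustrated with a minimal sketch: a VAR(1) fitted by least squares on three stations' irradiance series, iterated forward over short horizons, and scored with the relative RMSE quoted above. This is only an illustration of the general technique, not the thesis's model; the synthetic data, variable names and the 5-minute step are assumptions.

```python
import numpy as np

def fit_var1(X):
    """Least-squares fit of a VAR(1), x_t = A x_{t-1} + e_t.
    X has shape (T, k): T time steps, one column per station."""
    Y, Z = X[1:], X[:-1]
    C, *_ = np.linalg.lstsq(Z, Y, rcond=None)   # Z @ C ~ Y, so A = C.T
    return C.T

def forecast_var1(A, x_last, steps):
    """Iterate the fitted VAR(1) forward 'steps' steps from the last observation."""
    preds, x = [], x_last
    for _ in range(steps):
        x = A @ x
        preds.append(x)
    return np.array(preds)

def relative_rmse(y_true, y_pred):
    """Relative RMSE in percent, the skill measure quoted in the abstract."""
    return 100.0 * np.sqrt(np.mean((y_true - y_pred) ** 2)) / np.mean(y_true)

# Illustrative run on synthetic data: 3 stations, 5-minute sampling (assumed).
rng = np.random.default_rng(0)
X = rng.uniform(0.3, 0.9, size=(1000, 3))       # stand-in for clear-sky indices
A = fit_var1(X[:900])
pred = forecast_var1(A, X[899], steps=12)       # forecasts up to 1 hour ahead
print(relative_rmse(X[900:912], pred))
```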
272

Essays on the econometrics of macroeconomic survey data

Conflitti, Cristina 11 September 2012 (has links)
This thesis contains three essays covering different topics in the field of statistics and econometrics of survey data. Chapters one and two analyse two aspects of the Survey of Professional Forecasters (SPF hereafter) dataset. This survey provides a large amount of information on the macroeconomic expectations of professional forecasters and offers an opportunity to exploit a rich information set, but it poses the challenge of how to extract the relevant information in a proper way. The last chapter addresses the issue of analyzing the opinions on the euro reported in the Flash Eurobarometer dataset.

The first chapter, Measuring Uncertainty and Disagreement in the European Survey of Professional Forecasters, proposes a density forecast methodology based on the piecewise linear approximation of the individual forecasting histograms to measure the uncertainty and disagreement of professional forecasters. Since the introduction of the SPF in the US in 1960, it has been clear that such surveys are a useful source of information for measuring disagreement and uncertainty without relying on macroeconomic or time series models. Direct measures of uncertainty are seldom available, whereas many surveys report point forecasts from a number of individual respondents, and there has been a long tradition of using measures of the dispersion of individual respondents' point forecasts (disagreement or consensus) as proxies for uncertainty. Unlike other surveys, the SPF is an exception: it directly asks for the point forecast and for the probability distribution, in the form of a histogram, associated with the macro variables of interest. An important issue is how to approximate individual probability densities and obtain accurate individual results for disagreement and uncertainty before computing the aggregate measures. In contrast to Zarnowitz and Lambros (1987) and Giordani and Soderlind (2003), we overcome the problem associated with distributional assumptions on probability density forecasts by using a nonparametric approach that, instead of assuming a functional form for the individual probability law, approximates the histogram by a piecewise linear function. In addition, and unlike earlier works that focus on US data, we employ European data, considering gross domestic product (GDP), inflation and unemployment.

The second chapter, Optimal Combination of Survey Forecasts, is based on joint work with Christine De Mol and Domenico Giannone. It proposes an approach to optimally combine survey forecasts by exploiting the whole covariance structure among forecasters. There is a vast literature on forecast combination methods, advocating their usefulness from both the theoretical and empirical points of view (see e.g. the recent review by Timmermann (2006)). Surprisingly, simple methods tend to outperform more sophisticated ones, as shown for example by Genre et al. (2010) on the combination of forecasts in the SPF conducted by the European Central Bank (ECB). The main conclusion of several studies is that the simple equal-weighted average constitutes a benchmark that is hard to improve upon. In contrast to a large part of the literature, which does not exploit the correlation among forecasters, we take into account the full covariance structure and determine the optimal weights for the combination of point forecasts as the minimizers of the mean squared forecast error (MSFE), under the constraint that these weights are nonnegative and sum to one. We compare our combination scheme with other methodologies in terms of forecasting performance. Results show that the proposed optimal combination scheme is an appropriate methodology for combining survey forecasts. The literature on point forecast combination is well developed, but there are fewer studies analyzing density forecast combination, so we extend our work to the combination of density forecasts. Starting from the main results of Hall and Mitchell (2007), we propose an iterative algorithm for computing the density weights which maximize the average logarithmic score over the sample period. The empirical application considers European GDP and inflation forecasts. Results suggest that the optimal weights obtained via the iterative algorithm outperform the equal-weighted density combinations used by the ECB.

The third chapter, entitled Opinion surveys on the euro: a multilevel multinomial logistic analysis, outlines the multilevel aspects of public attitudes toward the euro. This work was motivated by the ongoing debate on whether the perception of the euro among European citizens, ten years after its introduction, was positive or negative. The aim is therefore to disentangle public attitudes by considering both individual socio-demographic characteristics and macroeconomic features of each country, treating them as two separate levels in a single analysis. A hierarchical structure has the advantage of modelling within-country as well as between-country relations in a single analysis. The multilevel analysis allows for dependence between individuals within countries induced by unobserved heterogeneity between countries, i.e. we include in the estimation country-specific characteristics that are not directly observable. We empirically investigate which individual characteristics and country specificities matter most for the perception of the euro. Attitudes toward the euro vary across individuals and countries, and are driven by personal considerations based on the benefits and costs of using the single currency. Individual features, such as a high level of education or living in a metropolitan area, have a positive impact on the perception of the euro. Moreover, country-specific economic conditions can influence individuals' attitudes. / Doctorat en Sciences économiques et de gestion
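As a rough illustration of the point-forecast combination described above (weights that minimize the MSFE subject to being nonnegative and summing to one), the sketch below uses scipy's constrained optimizer on synthetic forecasts. It is not the authors' code; the equal-weight starting point simply mirrors the benchmark mentioned in the abstract, and all data and names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_weights(F, y):
    """Combination weights minimizing the in-sample MSFE of F @ w,
    subject to w >= 0 and sum(w) = 1.
    F: (T, n) matrix of individual point forecasts, y: (T,) realizations."""
    n = F.shape[1]

    def msfe(w):
        return np.mean((y - F @ w) ** 2)

    constraints = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    bounds = [(0.0, 1.0)] * n
    w0 = np.full(n, 1.0 / n)                     # equal-weight benchmark as starting point
    res = minimize(msfe, w0, bounds=bounds, constraints=constraints)
    return res.x

# Synthetic example: three forecasters with different error variances.
rng = np.random.default_rng(1)
y = rng.normal(size=200)
F = y[:, None] + rng.normal(scale=[0.5, 1.0, 1.5], size=(200, 3))
w = optimal_weights(F, y)
print(w, np.mean((y - F @ w) ** 2), np.mean((y - F.mean(axis=1)) ** 2))
```

Exploiting the covariance among forecasters through the MSFE objective is what distinguishes such a scheme from the equal-weighted average benchmark.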
273

Essays in real-time forecasting

Liebermann, Joëlle 12 September 2012 (has links)
This thesis contains three essays in the field of real-time econometrics, and more particularly forecasting. The issue of using data as available in real time to forecasters, policymakers or financial markets is an important one which has only recently been taken on board in the empirical literature. Data available and used in real time are preliminary and differ from ex-post revised data, and given that data revisions may be quite substantial, using the latest available data instead of real-time data can substantially affect empirical findings (see, among others, Croushore's (2011) survey). Furthermore, as variables are released on different dates and with varying publication lags, datasets that do not disregard timely information are characterized by the so-called "ragged-edge" structure; hence special econometric frameworks, such as the one developed by Giannone, Reichlin and Small (2008), must be used.

The first chapter, "The impact of macroeconomic news on bond yields: (in)stabilities over time and relative importance", studies the reaction of U.S. Treasury bond yields to real-time market-based news in the daily flow of macroeconomic releases, which provide most of the relevant information on their fundamentals, i.e. the state of the economy and inflation. We find that yields react systematically to a set of news consisting of the soft data, which have very short publication lags, and the most timely hard data, with the employment report being the most important release. However, sub-sample evidence reveals parameter instability in the absolute and relative size, as well as the significance, of the yields' response to news. In particular, the often-cited dominance of the employment report for markets has been evolving over time, as the size of the yields' reaction to it was steadily increasing. Moreover, over the recent crisis period there has been an overall switch in the relative importance of soft and hard data compared to the pre-crisis period, with the latter becoming more important even if less timely, and the scope of hard data to which markets react has increased and is more balanced, being less concentrated on the employment report. Markets have become more reactive to news over the recent crisis period, particularly to hard data. This is a consequence of the fact that in periods of high uncertainty (bad states), markets starve for information and attach a higher value to the marginal information content of these news releases.

The second and third chapters focus on the real-time ability of models to nowcast and forecast in a data-rich environment. They use an econometric framework that can deal with large panels that have a "ragged-edge" structure, and to evaluate the models in real time we constructed a database of vintages for US variables reproducing the exact information that was available to a real-time forecaster.

The second chapter, "Real-time nowcasting of GDP: a factor model versus professional forecasters", performs a fully real-time nowcasting (forecasting) exercise of US real GDP growth using Giannone, Reichlin and Small's (2008), henceforth GRS, dynamic factor model (DFM) framework, which can handle large unbalanced datasets as available in real time. We track the daily evolution of the model's nowcasting performance throughout the current and next quarter. Similarly to GRS's pseudo-real-time results, we find that the precision of the nowcasts increases with information releases. Moreover, the Survey of Professional Forecasters does not carry additional information with respect to the model, suggesting that the often-cited superiority of the former, attributable to judgment, is weak over our sample. As one moves forward along the real-time data flow, the continuous updating of the model provides a more precise estimate of current-quarter GDP growth and the Survey of Professional Forecasters becomes stale. These results are robust to the recent recession period.

The last chapter, "Real-time forecasting in a data-rich environment", evaluates the ability of different models to forecast key real and nominal U.S. monthly macroeconomic variables in a data-rich environment and from the perspective of a real-time forecaster. Among the approaches used to forecast in a data-rich environment, we use pooling of bivariate forecasts, which is an indirect way to exploit a large cross-section, and the direct pooling of information using a high-dimensional model (DFM and Bayesian VAR). Furthermore, forecast combination schemes are used to overcome the choice of model specification faced by the practitioner (e.g. which criteria to use to select the parametrization of the model), as we seek evidence regarding the performance of a model that is robust across specifications/combination schemes. Our findings show that predictability of the real variables is confined to the recent recession/crisis period. This is in line with the findings of D'Agostino and Giannone (2012) over an earlier period, that gains in relative performance of models using large datasets over univariate models are driven by downturn periods, which are characterized by higher comovements. These results are robust to the combination schemes or models used. A point worth mentioning is that for nowcasting GDP, exploiting cross-sectional information along the real-time data flow also helps over the end of the Great Moderation period. Since GDP is a quarterly aggregate proxying the state of the economy, monthly variables carry information content for it. But similarly to the findings for the monthly variables, predictability, as measured by the gains relative to the naive random walk model, is higher during the crisis/recession period than during tranquil times. Regarding inflation, results are stable across time, but predictability is mainly found at nowcasting and forecasting one month ahead, with the BVAR standing out at nowcasting. The results show that the forecasting gains at these short horizons stem mainly from exploiting timely information. The results also show that direct pooling of information using a high-dimensional model (DFM or BVAR), which takes into account the cross-correlation between the variables and efficiently deals with the "ragged-edge" structure of the dataset, yields more accurate forecasts than the indirect pooling of bivariate forecasts/models. / Doctorat en Sciences économiques et de gestion
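The following sketch is only a stylized stand-in for the nowcasting setup described above: it extracts static principal-component factors from a balanced panel and bridges them to quarterly GDP growth by regression. The Kalman-filter DFM of Giannone, Reichlin and Small (2008) actually used in the chapter handles ragged-edge, unbalanced panels; the panel, sample sizes and variable names here are assumptions.

```python
import numpy as np

def pca_factors(X, r):
    """Extract r static principal-component factors from a standardized panel X (T, n).
    This is only a stand-in for a Kalman-filter dynamic factor model."""
    Xs = (X - X.mean(0)) / X.std(0)
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    return U[:, :r] * S[:r]                      # factors, shape (T, r)

def nowcast_gdp(factors, gdp):
    """Bridge regression of quarterly GDP growth on the factors, then project."""
    Z = np.column_stack([np.ones(len(factors)), factors])
    beta, *_ = np.linalg.lstsq(Z[:-1], gdp, rcond=None)   # fit on past quarters
    return Z[-1] @ beta                                    # nowcast for the current quarter

# Synthetic example: 80 quarters of 30 indicators (monthly series aggregated to quarters).
rng = np.random.default_rng(2)
panel = rng.normal(size=(80, 30))
gdp = panel[:, :5].mean(1)[:-1] + 0.2 * rng.normal(size=79)  # GDP known up to last quarter
f = pca_factors(panel, r=2)
print(nowcast_gdp(f, gdp))
```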
274

Essays on real-time econometrics and forecasting

Modugno, Michèle 14 September 2011 (has links)
The thesis contains four essays covering topics in the field of real-time econometrics and forecasting.

The first chapter, entitled “An area wide real time data base for the euro area” and coauthored with Domenico Giannone, Jerome Henry and Magda Lalik, describes how we constructed a real-time database for the euro area covering more than 200 series regularly published in the European Central Bank Monthly Bulletin, as made available ahead of publication to the Governing Council members before their first meeting of the month.

Recent research has emphasised that data revisions can be large for certain indicators and can have a bearing on the decisions made, as well as affect the assessment of their relevance. It is therefore key to be in a position to reconstruct the historical environment of economic decisions at the time they were made by private agents and policy-makers, rather than using the data as they become available some years later. For this purpose, it is necessary to have the information in the form of all the different vintages of data as they were published in real time, the so-called "real-time data" that reflect the economic situation at a given point in time when models are estimated or policy decisions made.

We describe the database in detail and study the properties of the euro area real-time data flow and data revisions, also providing comparisons with the United States and Japan. We finally illustrate how such revisions can contribute to the uncertainty surrounding key macroeconomic ratios and the NAIRU.

The second chapter, entitled “Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data”, is based on joint work with Marta Banbura. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone et al. (2008), we can handle datasets that are not only characterised by a "ragged edge", but can also include e.g. mixed-frequency or short-history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach which, apart from its flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters. It has been shown by Doz et al. (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation in the case of a large cross-section, Doz et al. (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm. Our contribution is to modify the EM steps for the case of missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision, and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of euro area GDP on the basis of a large panel of monthly and quarterly data.

The third chapter is entitled “Nowcasting Inflation Using High Frequency Data” and proposes a methodology for nowcasting and forecasting inflation using data sampled at a higher than monthly frequency. In particular, this chapter focuses on the energy component of inflation, given the availability of data such as the Weekly Oil Bulletin Price Statistics for the euro area, the Weekly Retail Gasoline and Diesel Prices for the US, and the daily spot and futures prices of crude oil. Although nowcasting inflation is a novel idea, there is a rather long literature on nowcasting GDP: the use of higher-frequency indicators to nowcast or forecast lower-frequency indicators started with monthly data for GDP, a quarterly variable released with a substantial time delay (e.g. two months after the end of the reference quarter for euro area GDP). The estimation adopts the methodology described in Chapter 2, modelling the data as a trading-day-frequency factor model with missing observations in a state-space representation. In contrast to other procedures, the proposed methodology models all the data within a unified single framework that allows one to produce forecasts of all the variables involved from a factor model which, by definition, does not suffer from overparametrisation. Moreover, this offers the possibility to disentangle model-based "news" from each release and then to assess its impact on the forecast revision. The chapter provides an illustrative example of this procedure, focusing on a specific month. In order to assess the importance of using high-frequency data for forecasting inflation, this chapter compares the forecast performance of univariate models, i.e. a random walk and an autoregressive process, with that of the model using weekly and daily data. The empirical evidence shows that exploiting high-frequency data on oil not only allows us to nowcast and forecast the energy component of inflation with a precision about twice as good as the proposed benchmarks, but yields a similar improvement even for total inflation.

The fourth chapter, entitled “The forecasting power of international yield curve linkages” and coauthored with Kleopatra Nikolaou, investigates dependency patterns between the yield curves of Germany and the US by means of an out-of-sample forecast exercise. The motivation for this chapter stems from the fact that our knowledge to date of dependency patterns among the yield curves of different countries is limited. The yield curve literature provides empirical evidence of strong contemporaneous interdependencies of yield curves across countries, in line with increased globalization and financial integration, but it does not investigate non-contemporaneous correlations. Yet clear indications of such dependency patterns are recorded in studies focusing on specific interest rates, which look at the role of certain countries as global players (see Frankel et al. (2004), Chinn and Frankel (2005) and Wang et al. (2007)). Evidence from these studies suggests a leading role for the US. Moreover, dependency patterns recorded in the real business cycles of the US and the euro area (Giannone and Reichlin, 2007) can also rationalize such linkages, to the extent that output affects nominal interest rates. We propose, estimate and forecast (out of sample) a novel dynamic factor model for the yield curve, in which dynamic information from foreign yield curves is introduced into domestic yield curve forecasts. This is the International Dependency Model (IDM). We compare the yield curve forecast under the IDM with a purely domestic model and a model that allows for contemporaneous common global factors. These models serve as useful comparisons. The domestic model bears direct modelling links to the IDM, as it can be seen as a nested version of it. The global model bears less direct links in terms of modelling but, like the IDM, is an international model that serves to highlight the advantages of introducing international information into yield curve forecasts. However, the global model aims to identify contemporaneous linkages between the yield curves of the two countries, whereas the IDM also allows for detecting dependency patterns. Our results show that shocks appear to be diffused in a rather asymmetric manner across the two countries. Namely, we find a unidirectional causality effect that runs from the US to Germany. This effect is stronger in the last ten years, where out-of-sample forecasts for Germany using US information are even more accurate than random walk forecasts. Our statistical results demonstrate a more independent role for the US. / Doctorat en Sciences économiques et de gestion
275

Structural models for macroeconomics and forecasting

De Antonio Liedo, David 03 May 2010 (has links)
This thesis is composed of three independent papers that investigate central debates in empirical macroeconomic modeling.

Chapter 1, entitled “A Model for Real-Time Data Assessment with an Application to GDP Growth Rates”, provides a model for the data revisions of macroeconomic variables that distinguishes between rational expectation updates and noise corrections. Thus, the model encompasses the two polar views regarding the publication process of statistical agencies: noise versus news. Most of the previous studies that analyze data revisions are based on the classical noise and news regression approach introduced by Mankiw, Runkle and Shapiro (1984). The problem is that the statistical tests available do not formulate both extreme hypotheses as collectively exhaustive, as recognized by Aruoba (2008); that is, it would be possible to reject or accept both of them simultaneously. In turn, the model for the DPP presented here allows for the simultaneous presence of both noise and news. While the "regression approach" followed by Faust et al. (2005), along the lines of Mankiw et al. (1984), identifies noise in the preliminary figures, it cannot quantify it, as our model does.

The second and third chapters acknowledge the possibility that macroeconomic data are measured with error, but the approach followed to model the mismeasurement is extremely stylized and does not capture the complexity of the revision process described in the first chapter.

Chapter 2, entitled “Revisiting the Success of the RBC model”, proposes the use of dynamic factor models as an alternative to VAR-based tools for the empirical validation of dynamic stochastic general equilibrium (DSGE) theories. Along the lines of Giannone et al. (2006), we use the state-space parameterisation of the factor models proposed by Forni et al. (2007) as a competitive benchmark that is able to capture the weak statistical restrictions that DSGE models impose on the data. Our empirical illustration compares the out-of-sample forecasting performance of a simple RBC model augmented with a serially correlated noise component against several specifications belonging to classes of dynamic factor and VAR models. Although the performance of the RBC model is comparable to that of the reduced-form models, a formal test of predictive accuracy reveals that the weak restrictions are more useful for forecasting than the strong behavioral assumptions imposed by the microfoundations of the model economy.

The last chapter, “What are Shocks Capturing in DSGE modeling”, contributes to current debates on the use and interpretation of larger DSGE models. A recent tendency in academic work and at central banks is to develop and estimate large DSGE models for policy analysis and forecasting. These models typically have many shocks (e.g. Smets and Wouters, 2003 and Adolfson, Laseen, Linde and Villani, 2005). On the other hand, empirical studies point out that a few large shocks are sufficient to capture the covariance structure of macro data (Giannone, Reichlin and Sala, 2005, Uhlig, 2004). In this chapter, we propose to reconcile both views by considering an alternative DSGE estimation approach which models the statistical agency explicitly, along the lines of Sargent (1989). This enables us to distinguish whether the exogenous shocks in DSGE modeling are structural or instead serve the purpose of fitting the data in the presence of misspecification and measurement problems. When applied to the original Smets and Wouters (2007) model, we find that the explanatory power of the structural shocks decreases at high frequencies. This allows us to back out a smoother measure of the natural output gap than that resulting from the original specification. / Doctorat en Sciences économiques et de gestion
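The noise-versus-news distinction in Chapter 1 can be made concrete with the classical regression diagnostics the abstract contrasts itself with (Mankiw, Runkle and Shapiro, 1984): under the news hypothesis the revision is unpredictable from the preliminary figure, while under the noise hypothesis it is uncorrelated with the final figure. The sketch below runs those two polar tests on synthetic data; it illustrates the textbook approach the chapter generalizes, not the thesis's own model, and all data are made up.

```python
import numpy as np
import statsmodels.api as sm

def noise_news_tests(prelim, final):
    """Regress the revision on the preliminary and on the final figure.
    News hypothesis: slope on the preliminary release is zero (revision is pure news).
    Noise hypothesis: slope on the final value is zero (revision is pure noise)."""
    revision = final - prelim
    news = sm.OLS(revision, sm.add_constant(prelim)).fit()
    noise = sm.OLS(revision, sm.add_constant(final)).fit()
    return news, noise

# Synthetic GDP growth example where the preliminary release is the final value plus pure noise.
rng = np.random.default_rng(2)
final = rng.normal(0.5, 0.6, size=120)
prelim = final + rng.normal(scale=0.3, size=120)
news, noise = noise_news_tests(prelim, final)
print(news.params, noise.params)   # here the 'noise' slope should be close to zero
```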
276

Physical parameterisations for a high resolution operational numerical weather prediction model / Paramétrisations physiques pour un modèle opérationnel de prévision météorologique à haute résolution

Gerard, Luc 31 August 2001 (has links)
Operational weather prediction models numerically solve the equations of fluid mechanics by computing the evolution of fields (pressure, temperature, humidity, wind) defined as horizontal averages over the cells of a grid (and at different vertical levels).

Processes at scales smaller than the grid cell nevertheless play an essential role in the transfers and budgets of heat, moisture and momentum. Physical parameterisations aim to evaluate the source terms corresponding to these phenomena, which appear in the equations for the mean fields at the grid points.

When the cell size is reduced in order to represent the evolution of atmospheric phenomena more finely, some of the hypotheses used in these parameterisations lose their validity. The problem arises above all when the cell size drops below about ten kilometres, approaching the size of large convective cloud systems (storm systems, squall lines).

This work is part of the development of the fine-mesh model ARPÈGE ALADIN, used by a dozen countries to produce short-range forecasts (up to 48 hours).

We first describe the full set of physical parameterisations of the model. A detailed analysis of the current deep convection parameterisation follows, together with our own contribution to it, concerning the entrainment of horizontal momentum into the convective cloud. We highlight the main weak points or hypotheses that require large cell sizes, and identify directions for new developments. We then explore two of the aspects arising from this discussion: the use of prognostic variables for convective activity, and the treatment of differences between the immediate environment of the cloud and the values of the large-scale fields. This leads us to design and implement a prognostic scheme for deep convection. To this scheme should still be added a prognostic parameterisation of suspended condensed phases (currently under development by others) and a few other improvements that we propose. Validation and behaviour tests of the prognostic scheme were carried out with a limited-area model at different resolutions and with a global model; in the latter case the effect of the new scheme on the global budgets is also examined. These experiments shed further light on the behaviour of the convective scheme and on the problem of sharing between the deep convection scheme and the large-scale precipitation scheme.

The present study therefore reviews the current status of the model's different parameterisations and proposes practical solutions to improve the quality of the representation of convective phenomena. Finally, using cells smaller than 5 km requires relaxing the hydrostatic hypothesis in the large-scale equations, and we outline the additional refinements of the parameterisation that become possible in that case. / Doctorat en sciences appliquées
277

Determination of end user power load profiles by parallel evolutionary computing / Détermination de profils de consommation électrique par évolution artificielle parallèle

Krüger, Frédéric 17 February 2014 (has links)
Precise estimates of the energy demand on a power network are paramount for electricity distribution companies. Statistical tools such as load profiles offer estimates of acceptable quality. These load profiles are, however, usually not precise enough for network engineering at the local level, as they do not take into account factors such as the presence of electric heating or the type of housing. It is nevertheless possible to obtain accurate load profiles using no more than end-user energy consumption histories, 20kV feeder load measurements, a blind source separation and a genetic algorithm. Filtering and preliminary treatments performed on the data provide the blind source separation with adequate input. The particularly noisy source separation problem is solved by a genetic algorithm running fully in parallel on a GPGPU card. The power load profiles obtained match the initial expectations and demonstrate a considerable improvement in the accuracy of load-curve estimates for 20kV feeders as well as for medium-to-low-voltage substations.
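A toy version of the genetic-algorithm source separation described above might look like the sketch below: feeder load curves are modelled as known mixtures of unknown class profiles, and a small genetic algorithm (tournament selection, arithmetic crossover, Gaussian mutation, elitism) searches for nonnegative profiles minimizing the reconstruction error. It runs on the CPU and everything in it (feeder counts, mixing shares, operators) is an assumption made for illustration; the thesis's algorithm is fully parallelised on a GPGPU.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: 4 feeders observed over 48 half-hourly steps, mixing 3 customer classes.
# A[i, j] = share of class j energy on feeder i (assumed known from billing histories).
A = rng.dirichlet(np.ones(3), size=4)            # (4 feeders, 3 classes)
true_S = np.abs(rng.normal(1.0, 0.3, size=(3, 48)))
Y = A @ true_S + 0.05 * rng.normal(size=(4, 48)) # noisy feeder load curves

def fitness(S):
    """Reconstruction error of the feeder curves; lower is better."""
    return np.sum((Y - A @ S) ** 2)

def evolve(pop_size=200, generations=300, sigma=0.05):
    """Small genetic algorithm over candidate class profiles."""
    pop = np.abs(rng.normal(1.0, 0.5, size=(pop_size, 3, 48)))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        best = pop[np.argmin(scores)]
        # tournament selection of parents
        idx = rng.integers(pop_size, size=(pop_size, 2))
        winners = np.where(scores[idx[:, 0]] < scores[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # arithmetic crossover between shuffled pairs, then Gaussian mutation
        mates = parents[rng.permutation(pop_size)]
        alpha = rng.uniform(size=(pop_size, 1, 1))
        children = alpha * parents + (1 - alpha) * mates
        children += sigma * rng.normal(size=children.shape)
        children = np.clip(children, 0.0, None)  # load profiles stay nonnegative
        children[0] = best                       # keep the best individual (elitism)
        pop = children
    return pop[np.argmin([fitness(ind) for ind in pop])]

estimated_profiles = evolve()
```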
278

Extraction des utilisations typiques à partir de données hétérogènes en vue d'optimiser la maintenance d'une flotte de véhicules / Critical usages extraction from historical and heterogeneous data in order to optimize fleet maintenance

Ben Zakour, Asma 06 July 2012 (has links)
This work is part of an industrial project driven by the company 2MoRO Solutions. It aims to develop a high-value service enabling aircraft operators to optimise their maintenance actions. Given the large amount of data available about aircraft operation, we analyse the historical events recorded for each aircraft in order to extract maintenance forecasts. The results are used to integrate and consolidate maintenance tasks so as to minimise aircraft downtime and the risk of failure. The proposed method involves three steps: (i) streamlining the information so that it can be combined, (ii) organising the data for easy analysis, and (iii) extracting useful knowledge in the form of interesting sequences. [...]
279

Stochastic model of high-speed train dynamics for the prediction of long-time evolution of the track irregularities / Modèle stochastique de la dynamique des trains à grande vitesse pour la prévision de l'évolution à long terme des défauts de géométrie de la voie

Lestoille, Nicolas 16 October 2015 (has links)
Railway tracks are subjected to more and more constraints: the number of high-speed trains using the high-speed lines, their speed and their load keep increasing, which contributes to producing track geometry irregularities. In return, these irregularities influence the train dynamic responses and degrade comfort. To guarantee good comfort conditions in the train, railway companies perform maintenance operations on the track, which are very costly. Railway companies therefore have a strong interest in predicting the long-time evolution of the track irregularities for a given track portion, in order to anticipate the start of maintenance operations and thus reduce maintenance costs and improve running conditions.

In this thesis, the long-time evolution of a given track portion is analyzed through a vector-valued indicator of the train dynamics. For this given track portion, a local stochastic model of the track irregularities is constructed using a global stochastic model of the track irregularities and big data made up of experimental measurements of the track irregularities performed by a measuring train. This local stochastic model takes into account the variability of the track irregularities and allows for generating realizations of the track irregularities at each measurement time. After validating the computational model of the train dynamics, the train dynamic responses on the measured track portion are numerically simulated using the local stochastic model of the track irregularities. A vector-valued random dynamic indicator is defined to characterize the train dynamic responses on the given track portion; this indicator is constructed so as to take into account the model uncertainties in the train dynamics computational model. For the identification of the track irregularities stochastic model and the characterization of the model uncertainties, advanced stochastic methods such as the polynomial chaos expansion and the multivariate maximum likelihood are applied to non-Gaussian and non-stationary random fields. Finally, a stochastic predictive model is proposed for predicting the statistical quantities of the random dynamic indicator, which allows the need for track maintenance to be anticipated. This model is constructed using the results of the train dynamics simulation and consists of a non-stationary Kalman-filter type model with a non-Gaussian initial condition. The proposed model is validated using experimental data from the French high-speed railway network.
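As a very reduced illustration of the predictive step described above, the sketch below runs a scalar linear-Gaussian Kalman filter on a drifting indicator and produces a one-step-ahead forecast of it. The thesis's model is non-stationary with a non-Gaussian initial condition and its indicator is vector-valued, so this is only the textbook special case; all numbers and names are assumptions.

```python
import numpy as np

def kalman_step(m, P, y, a, q, r):
    """One predict/update step of a scalar Kalman filter with state equation
    x_{t+1} = a * x_t + w, w ~ N(0, q), and observation y_t = x_t + v, v ~ N(0, r).
    Returns the filtered mean and variance and a one-step-ahead forecast of the indicator."""
    m_pred, P_pred = a * m, a * a * P + q        # prediction
    K = P_pred / (P_pred + r)                    # Kalman gain
    m_filt = m_pred + K * (y - m_pred)           # update with the new measurement
    P_filt = (1.0 - K) * P_pred
    return m_filt, P_filt, a * m_filt

# Illustrative run on a slowly drifting indicator (e.g. degradation between maintenance operations).
rng = np.random.default_rng(4)
truth = np.cumsum(0.02 + 0.01 * rng.normal(size=50))
obs = truth + 0.05 * rng.normal(size=50)
m, P = obs[0], 1.0
for y in obs[1:]:
    m, P, forecast = kalman_step(m, P, y, a=1.0, q=1e-4, r=2.5e-3)
print(forecast)                                  # forecast of the indicator at the next inspection
```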
280

Mixed-Frequency Modeling and Economic Forecasting / De la modélisation multifréquentielle pour la prévision économique

Marsilli, Clément 06 May 2014 (has links)
Short-term macroeconomic forecasting is as complex as it is essential for the definition of economic and monetary policy. The economic downturns and recessions that many of the richest countries experienced in the wake of the global financial crisis, and were still enduring in early 2014, demonstrate how important but difficult it is to forecast macroeconomic fluctuations, even within a short time horizon. The research presented in this doctoral dissertation studies, analyses and develops models for economic growth forecasting. The set of information on which to build a predictive methodology is vast but also heterogeneous: time series from the real and financial economy do not share the same characteristics, either in sampling frequency or in predictive power, so short-term forecasting models must reconcile the use of mixed-frequency data with the parsimony needed for estimation. The first chapter is dedicated to time series econometrics in a mixed-frequency framework. The second chapter contains two empirical works that shed light on macro-financial linkages by assessing the leading role of daily financial volatility in macroeconomic prediction during the Great Recession. The third chapter extends the mixed-frequency model to a Bayesian framework and presents an empirical study using a stochastic-volatility-augmented mixed data sampling model. The fourth chapter focuses on variable selection techniques in mixed-frequency models for short-term forecasting: we address the selection issue by developing mixed-frequency-based dimension reduction techniques in a cross-validation procedure that allows automatic in-sample selection based on recent forecasting performance. Several methodologies are developed, their empirical merits are compared, and some stylized facts are sketched; the resulting model provides an objective variable selection with broad applicability.
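A common building block of the mixed data sampling (MIDAS-type) models discussed in this thesis is the exponential Almon lag polynomial, which compresses many high-frequency lags into one low-frequency regressor with only a couple of parameters. The sketch below is a generic illustration of that idea (e.g. aggregating daily financial volatility into a quarterly predictor), not the thesis's specification; the lag length and parameter values are assumptions.

```python
import numpy as np

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon lag polynomial used in MIDAS regressions; the weights
    are positive and normalized to sum to one."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_regressor(x_daily, n_lags, theta1, theta2):
    """Weighted sum of the last n_lags daily observations (e.g. financial volatility),
    to be used as a single regressor for a quarterly target such as GDP growth."""
    w = exp_almon_weights(n_lags, theta1, theta2)
    return w @ x_daily[-n_lags:][::-1]           # most recent observation gets weight w[0]

# Example: aggregate 60 trading days of volatility into one quarterly predictor.
rng = np.random.default_rng(5)
vol = np.abs(rng.normal(size=250))
z_q = midas_regressor(vol, n_lags=60, theta1=0.01, theta2=-0.005)
print(z_q)
```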
