About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Uncertainty Quantification and Assimilation for Efficient Coastal Ocean Forecasting

Siripatana, Adil 21 April 2019
Bayesian inference is commonly used to quantify and reduce modeling uncertainties in coastal ocean models by computing the posterior probability density function (pdf) of uncertain quantities of interest conditioned on available observations. The posterior can be computed either directly, using a Markov Chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation (DA) approach. The advantage of data assimilation schemes over MCMC-type methods is their ability to algorithmically accommodate a large number of uncertain quantities without a significant increase in computational requirements. However, this approach generally yields only approximate estimates, often owing to restrictive Gaussian prior and noise assumptions. This thesis aims to develop, implement and test novel efficient Bayesian inference techniques to quantify and reduce modeling and parameter uncertainties of coastal ocean models. Both state and parameter estimation are addressed within the framework of a state-of-the-art coastal ocean model, the Advanced Circulation (ADCIRC) model. The first part of the thesis proposes efficient Bayesian inference techniques for uncertainty quantification (UQ) and state-parameter estimation. Based on a realistic framework of observing system simulation experiments (OSSEs), an ensemble Kalman filter (EnKF) is first evaluated against a Polynomial Chaos (PC)-surrogate MCMC method under identical scenarios. After demonstrating the relevance of the EnKF for parameter estimation, an iterative EnKF is introduced and validated for the estimation of a spatially varying Manning's n coefficient field. A Karhunen-Loève (KL) expansion is also tested for dimensionality reduction and conditioning of the parameter search space. To further enhance the performance of PC-MCMC for estimating spatially varying parameters, a coordinate transformation of a Gaussian process with a parameterized prior covariance function is then incorporated into the Bayesian inference framework to account for the uncertainty in the covariance model hyperparameters. The second part of the thesis focuses on the use of UQ and DA with adaptive mesh models. We develop new approaches combining the EnKF with multiresolution analysis, and demonstrate a significant reduction in the cost of data assimilation compared to a traditional EnKF implemented on a non-adaptive mesh.
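For orientation, the core of the stochastic (perturbed-observation) EnKF used for such state-parameter estimation is the analysis update sketched below. This is a generic illustration rather than the ADCIRC-specific implementation, and all names are illustrative; parameters such as Manning's n are typically estimated by augmenting the state vector with them.

```python
import numpy as np

def enkf_update(X, y, H, R, rng=np.random.default_rng(0)):
    """Perturbed-observation EnKF analysis step (generic sketch).

    X : (n, N) ensemble of state vectors (augment with parameters to estimate them)
    y : (m,) observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation-error covariance
    """
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    HA = H @ A                                   # anomalies in observation space
    Pyy = HA @ HA.T / (N - 1) + R                # innovation covariance H P H^T + R
    Pxy = A @ HA.T / (N - 1)                     # cross covariance P H^T
    K = np.linalg.solve(Pyy, Pxy.T).T            # Kalman gain K = P H^T (H P H^T + R)^-1
    # Perturb observations so the analysis ensemble has the correct spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)                   # analysis ensemble
```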
52

Detection and localisation of pipe bursts in a district metered area using an online hydraulic model

Okeya, Olanrewaju Isaac January 2018
This thesis presents the development of a new methodology for near-real-time detection and localisation of pipe bursts in a Water Distribution System (WDS) at the District Metered Area (DMA) level. The methodology makes use of an online hydraulic model, coupled with a demand forecasting methodology and several statistical techniques, to process the hydraulic meter data (i.e., flows and pressures) coming from the field at regular time intervals (i.e., every 15 minutes). Once the detection part of the methodology identifies a potential burst occurrence in a system, it raises an alarm. This is followed by the application of the burst localisation methodology to approximately locate the event within the DMA. The online hydraulic model is based on a data assimilation methodology coupled with a short-term Water Demand Forecasting Model (WDFM) based on multi-linear regression. Three data assimilation methods were tested in the thesis: the iterative Kalman Filter method, the Ensemble Kalman Filter method and the Particle Filter method. The iterative Kalman Filter (i-KF) method was eventually chosen for the online hydraulic model as offering the best overall trade-off between water system state prediction accuracy and computational efficiency. The online hydraulic model created this way was coupled with the Statistical Process Control (SPC) technique and a newly developed burst detection metric based on the moving-average residuals between the predicted and observed hydraulic states (flows/pressures). Two new SPC-based charts, with an associated generic set of control rules for analysing burst detection metric values over consecutive time steps, were introduced to raise burst alarms in a reliable and timely fashion. The SPC rules and relevant thresholds were determined offline by appropriate statistical analysis of the residuals. This was followed by the development of the new methodology for online burst localisation. The methodology integrates the burst detection metric values obtained during the detection stage with a new sensitivity matrix developed offline and with hydraulic model runs used to simulate potential bursts, in order to identify the most likely burst location in the pipe network. A new data algorithm for estimating the 'normal' DMA demand and the burst flow during the burst period was developed and used for localisation. A further algorithm for statistical analysis of flow and pressure data was developed and used to determine the approximate burst area by producing a list of the top ten suspected burst location nodes. These novel methodologies for burst detection and localisation were applied to two real-life District Metered Areas in the United Kingdom (UK) with artificially generated flow and pressure observations and assumed bursts. The results obtained show that the developed methodology detects pipe bursts in a reliable and timely fashion, provides a good estimate of the burst flow, and locates the burst approximately within a DMA. The results also show the potential of the methodology for online burst detection and localisation to assist Water Companies (WCs) in conserving water and saving energy and money. It can also enhance UK WCs' profiles, improve customer satisfaction, improve operational efficiency and improve their OFWAT Service Incentive Mechanism (SIM) scores.
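A minimal sketch of the detection idea described above, in which a moving average of model-minus-observation residuals is checked against an SPC control limit; the window length, threshold width and offline noise estimate here are illustrative assumptions, not the thesis's calibrated rules.

```python
import numpy as np

def burst_alarm(observed, predicted, window=8, sigma=None, k=3.0):
    """Flag time steps where the moving-average residual breaches a control limit.

    observed, predicted : flow (or pressure) series at 15-minute steps
    window : moving-average length (illustrative; calibrated offline in the thesis)
    sigma  : residual std under normal operation (estimated offline if None)
    k      : control-limit width in standard deviations (classic SPC choice: 3)
    """
    residuals = np.asarray(observed, float) - np.asarray(predicted, float)
    if sigma is None:
        sigma = residuals.std(ddof=1)            # offline estimate from no-burst data
    kernel = np.ones(window) / window
    ma = np.convolve(residuals, kernel, mode="valid")   # moving-average metric
    # The mean of `window` independent samples has std sigma / sqrt(window)
    limit = k * sigma / np.sqrt(window)
    return ma > limit                            # True where a burst alarm is raised
```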
53

Numerical model error in data assimilation

Jenkins, Siân January 2015
In this thesis, we produce a rigorous and quantitative analysis of the errors introduced by finite difference schemes into strong constraint 4D-Variational (4D-Var) data assimilation. Strong constraint 4D-Var data assimilation is a method that solves a particular kind of inverse problem: given a set of observations and a numerical model for a physical system, together with a priori information on the initial condition, estimate an improved initial condition for the numerical model, known as the analysis vector. Many forms of error affect the accuracy of the analysis vector, and the method is derived under the assumption that the numerical model is perfect, when in reality this is not true. It is therefore important to assess whether this assumption is realistic and, if not, how the method should be modified to account for model error. Here we analyse how the errors introduced by finite difference schemes, used as the numerical model, affect the accuracy of the analysis vector. Initially the 1D linear advection equation is considered as our physical system, and all forms of error other than those introduced by finite difference schemes are removed. The error introduced by 'representative schemes' is considered in terms of numerical dissipation and numerical dispersion. A spectral approach is successfully implemented to analyse the impact on the analysis vector, examining the effects on unresolvable wavenumber components and on the l2-norm of the error. Subsequently, a similar and equally successful analysis is conducted when observation errors are reintroduced to the problem. We then explore how the results can be extended to weak constraint 4D-Var. The 2D linear advection equation is next considered as our physical system, demonstrating how the results from the 1D problem extend to 2D. The linearised shallow water equations extend the problem further, highlighting the difficulties associated with analysing a coupled system of PDEs.
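For reference, strong constraint 4D-Var computes the analysis vector as the minimiser of the standard cost function below, in which the numerical model appears only through the (assumed perfect) propagator; the notation is the conventional one rather than necessarily the thesis's:

```latex
J(x_0) = \tfrac{1}{2}\,(x_0 - x_b)^{\mathrm T} B^{-1} (x_0 - x_b)
       + \tfrac{1}{2}\sum_{i=0}^{N} \big(y_i - \mathcal{H}_i(\mathcal{M}_{0\to i}(x_0))\big)^{\mathrm T} R_i^{-1} \big(y_i - \mathcal{H}_i(\mathcal{M}_{0\to i}(x_0))\big)
```

Here x_b is the a priori (background) initial condition with error covariance B, y_i are the observations at time t_i with error covariances R_i, H_i is the observation operator, and M_{0→i} propagates the initial condition to time t_i. Model error enters as soon as M_{0→i} is a finite difference approximation rather than the exact dynamics; weak constraint 4D-Var relaxes the perfect-model assumption by adding model-error terms to J.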
54

Variational assimilation of multi-scale observations: fusion of heterogeneous data for the study of the micro- and macrophysical dynamics of precipitating systems

Mercier, Francois 05 July 2016
On the one hand, the instruments for measuring precipitation (rain gauges, radars, etc.) perform measurements of different natures and at different scales, and their data are hard to compare. On the other hand, the models describing the evolution of precipitation are complex and difficult to parameterize and validate. In this thesis, we use data assimilation to couple heterogeneous observations of precipitation with models, in order to study rainfall and its spatiotemporal variability at different scales: the macrophysical scale, concerned with rain cells, and the microphysical scale, concerned with the drop size distribution (DSD) of the drops that compose them. First, we develop an algorithm that retrieves rain maps from measurements of the rain-induced attenuation of waves coming from TV satellites. Our retrievals are validated against radar and rain gauge data for a case study in the south of France. Second, we retrieve, again by data assimilation, vertical profiles of DSD and vertical winds from measurements of raindrop fluxes at the ground (by disdrometers) and of Doppler spectra aloft (by radar). We use these retrievals in three case studies to investigate the physical phenomena acting on raindrops during their fall and to evaluate the parameterization of these phenomena in models.
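The first retrieval rests on the standard power-law relation between rain rate and the specific attenuation of microwave signals, here applied to satellite-TV links. The relation below is the generic one (the coefficients a and b depend on frequency, polarisation and the DSD); inverting the path-integrated attenuations back into a rain field is the problem the assimilation algorithm solves:

```latex
k = a\,R^{\,b} \ \ [\mathrm{dB\,km^{-1}}], \qquad A = \int_{\mathrm{link}} k(s)\,\mathrm{d}s \ \ [\mathrm{dB}]
```

where R is the rain rate in mm h⁻¹, k the specific attenuation along the slant path, and A the total rain-induced attenuation measured on the satellite link.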
55

Modeling marine zooplankton and micronekton

Conchon, Anna 20 June 2016
Zooplankton and micronekton are the first two animal levels of the marine food web. Although very different in size (200 μm to 2 mm for zooplankton, 2 to 20 cm for micronekton), these two diverse groups of species share a singular behaviour: diel vertical migration. These daily migrations between depth by day and the surface by night induce very significant fluxes of organic matter between the different depths of the ocean. The study of ocean biogeochemical cycles is of great importance for climate change research, and it is conducted in particular through the development of global models of ocean circulation and biogeochemistry; the logical next step in these developments is therefore the modelling of zooplankton and micronekton. The SEAPODYM family of models parsimoniously represents the trophic chain from zooplankton up to top predators using three models. This thesis presents the zooplankton biomass model SEAPODYM-LTL (for lower trophic level), together with an analysis of its sensitivity to forcings; a particularity of these models is that they are forced offline by fields of currents, temperature and primary production produced by other models. SEAPODYM-LTL is also compared with the PISCES (NPZD) model and shows similar performance in the case tested. To improve the predictions of SEAPODYM-MTL (mid-trophic level, i.e. the micronekton biomass model), a data assimilation methodology was set up to refine the parameterization, using active acoustic data (38 kHz) to constrain the model. This methodology was designed around a test case presented in this thesis. Extending the set of acoustic data assimilated into the model highlighted the need to better model the depths of SEAPODYM's vertical layers; this was done using the aforementioned acoustic dataset, and that study is also presented in this thesis.
56

Reconstruction of complex turbulent flows via data assimilation in large-eddy models

Chandramouli, Pranav 19 October 2018
Data assimilation as a tool for fluid mechanics has grown exponentially over the last few decades. The ability to combine accurate but partial measurements with a complete dynamical model is invaluable and has numerous applications in fields ranging from aerodynamics to geophysics and indoor ventilation. However, its utility remains limited by the restrictive requirements of data assimilation in terms of computing power, memory, and prior information. This thesis attempts to redress various limitations of the assimilation procedure in order to facilitate its wider use in fluid mechanics. A major roadblock for data assimilation is its computational cost, which is prohibitive for all but the simplest of flows. Following the lines of Joseph Smagorinsky, turbulence modelling through large-eddy simulation is incorporated into the assimilation procedure to significantly reduce the computing power and time required. The requirement for prior volumetric information is tackled using a novel reconstruction methodology developed and assessed in this thesis: the snapshot optimisation algorithm reconstructs 3D fields from 2D cross-planar observations by exploiting directional homogeneity. The method and its variants work well on synthetic and experimental datasets, providing accurate reconstructions. The reconstruction methodology also provides the means to estimate the background covariance matrix, which is essential for an efficient assimilation algorithm. All the ingredients are combined to successfully perform variational data assimilation of the turbulent wake flow around a cylinder at a transitional Reynolds number. The assimilation algorithm is validated on synthetic volumetric observations and assessed on 2D cross-planar observations emulating experimental data.
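A minimal sketch of the cross-planar reconstruction idea described above: for each off-plane position, pick the snapshot of one plane whose intersection line best matches the other plane's current data, trading time for space under directional homogeneity. Function and variable names are hypothetical; the thesis's actual snapshot optimisation algorithm may differ in detail.

```python
import numpy as np

def reconstruct_3d(xy_plane_now, xz_snapshots):
    """Sketch of snapshot-based 3D reconstruction from two orthogonal planes.

    xy_plane_now : (nx, ny) field observed in the x-y plane at the current time
    xz_snapshots : (T, nx, nz) time series of fields observed in the x-z plane

    For each y position, choose the x-z snapshot whose intersection line with
    the x-y plane (taken here at z index 0, an illustrative convention) best
    matches the x-y plane's x-line at that y; homogeneity in y makes this
    time-for-space substitution plausible.
    """
    nx, ny = xy_plane_now.shape
    T, _, nz = xz_snapshots.shape
    intersection_lines = xz_snapshots[:, :, 0]           # (T, nx) candidate lines
    volume = np.empty((nx, ny, nz))
    for j in range(ny):
        target = xy_plane_now[:, j]                      # observed x-line at this y
        errs = ((intersection_lines - target) ** 2).sum(axis=1)
        volume[:, j, :] = xz_snapshots[np.argmin(errs)]  # best-matching snapshot
    return volume
```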
57

Atmospheric dispersion and inverse modelling for the reconstruction of accidental sources of pollutants

Winiarek, Victor 04 March 2014
Uncontrolled releases of pollutants into the atmosphere may arise in various situations: accidents, for instance leaks or explosions at an industrial plant, or terrorist threats such as dirty bombs or biological bombs, especially in urban areas. In such situations, the authorities' objectives are several: predict the contaminated zones in the short term, in particular to evacuate the affected population; locate the source, so as to be able to act on it directly; and determine the areas polluted over the longer term, for instance by the deposition of persistent pollutants, and subject to residence or agricultural-use restrictions. To achieve these objectives, numerical models can be used to simulate the atmospheric dispersion of pollutants. After recalling the physical processes that govern the transport of pollutants in the atmosphere, we present the different numerical models available; the choice between them depends mainly on the scale of the study and on the level of detail (notably topographic) sought. We then present the general Bayesian framework of inverse modelling for source estimation, whose principle is an objective balance between the prior information and the new information brought by the observations and the numerical model. We show that the estimate of the source term and of its uncertainty depends strongly on the assumptions made about the statistics of the prior errors, and for this reason we propose several methods to estimate these statistics rigorously. These methods are applied to concrete cases, using either synthetic or real data: first, a semi-automatic algorithm is proposed for the operational monitoring of a fleet of nuclear power plants; a second case study is the reconstruction of the caesium-137 and iodine-131 source terms of the Fukushima Daiichi nuclear power plant accident. Concerning the localisation of an unknown source, two strategies can be considered: parametric and non-parametric methods. Parametric methods exploit the particular character of accidental situations, in which pollutant emissions are generally of limited extent; the source to be reconstructed is parameterised, and the inverse problem consists in estimating this reduced number of parameters. In non-parametric methods, no assumption is made about the nature of the source (point-like, localised, ...), and the system attempts to reconstruct a complete (four-dimensional) emission field. Several parametric and non-parametric methods are proposed and tested on real situations at the urban scale, with a CFD model accounting for the influence of buildings on the air flow; in these experiments, the proposed methods are able to localise the source to within a few metres, depending on the simulated situation and the inverse method used.
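Under the Gaussian assumptions of this framework, the balance between the a priori information and the observations takes the familiar quadratic form below (standard notation, shown for orientation rather than as the thesis's exact formulation):

```latex
J(\sigma) = \tfrac{1}{2}\,(\sigma - \sigma_b)^{\mathrm T} B^{-1} (\sigma - \sigma_b)
          + \tfrac{1}{2}\,(\mu - H\sigma)^{\mathrm T} R^{-1} (\mu - H\sigma)
```

where σ is the source term to be estimated, σ_b its prior with covariance B, μ the concentration observations with error covariance R, and H the source-receptor (Jacobian) matrix computed with the dispersion model. The strong dependence of the estimate on B and R is precisely why the thesis develops methods to estimate these error statistics rigorously.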
58

Data Assimilation in the Boussinesq Approximation for Mantle Convection

McQuarrie, Shane Alexander 01 July 2018
Many highly developed physical models poorly approximate actual physical systems due to natural random noise. For example, convection in the earth's mantle—a fundamental process for understanding the geochemical makeup of the earth's crust and the geologic history of the earth—exhibits chaotic behavior, so it is difficult to model accurately. In addition, it is impossible to directly measure temperature and fluid viscosity in the mantle, and any indirect measurements are not guaranteed to be highly accurate. Over the last 50 years, mathematicians have developed a rigorous framework for reconciling noisy observations with reasonable physical models, a technique called data assimilation. We apply data assimilation to the problem of mantle convection with the infinite-Prandtl Boussinesq approximation to the Navier-Stokes equations as the model, providing rigorous conditions that guarantee synchronization between the observational system and the model. We validate these rigorous results through numerical simulations powered by a flexible new Python package, Dedalus. This methodology, including the simulation and post-processing code, may be generalized to many other systems. The numerical simulations show that the rigorous synchronization conditions are not sharp; that is, synchronization may occur even when the conditions are not met. These simulations also cast some light on the true relationships between the system parameters that are required in order to achieve synchronization. To conclude, we conduct experiments for two closely related data assimilation problems to further demonstrate the limitations of the rigorous results and to test the flexibility of data assimilation for mantle-like systems.
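Synchronization results of this kind are usually formulated with a nudging (Newtonian relaxation) term that drives a copy of the model toward coarse observations of the true system. The sketch below shows the standard continuous-data-assimilation setup for infinite-Prandtl Boussinesq convection, with μ the relaxation parameter and I_h an interpolant of observations at resolution h; this is the generic formulation, not necessarily the thesis's exact system:

```latex
% True system (infinite-Prandtl Boussinesq):
0 = -\nabla p + \Delta u + \mathrm{Ra}\,T\,\hat{e}_z, \qquad \nabla\cdot u = 0,
\qquad \partial_t T + u\cdot\nabla T = \Delta T.

% Assimilating system, nudged toward the observed coarse temperature I_h(T):
0 = -\nabla \tilde p + \Delta \tilde u + \mathrm{Ra}\,\tilde T\,\hat{e}_z, \qquad \nabla\cdot \tilde u = 0,
\qquad \partial_t \tilde T + \tilde u\cdot\nabla \tilde T = \Delta \tilde T + \mu\big(I_h(T) - I_h(\tilde T)\big).
```

Rigorous synchronization conditions then take the form of lower bounds on μ and upper bounds on h in terms of Ra, which the numerical experiments in the thesis show are not sharp.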
59

Optimal interpolation schemes to constrain PM2.5 in regional modeling over the United States

Sousan, Sinan Dhia Jameel 01 July 2012
This thesis presents the use of data assimilation with optimal interpolation (OI) to develop atmospheric aerosol concentration estimates for the United States at high spatial and temporal resolutions. Concentration estimates are highly desirable for a wide range of applications, including visibility, climate, and human health. OI is a viable data assimilation method that can be used to improve Community Multiscale Air Quality (CMAQ) model fine particulate matter (PM2.5) estimates; PM2.5 is the mass of solid and liquid particles with diameters less than or equal to 2.5 μm suspended in the gas phase. OI was employed by combining model estimates with satellite and surface measurements. The satellite data assimilation combined 36 × 36 km aerosol concentrations from CMAQ with aerosol optical depth (AOD) measured by MODIS and AERONET over the continental United States for 2002. Posterior model concentrations generated by the OI algorithm were compared with surface PM2.5 measurements to evaluate a number of possible data assimilation parameters, including model error, observation error, and temporal averaging assumptions. Evaluation was conducted separately for six geographic U.S. regions in 2002. Variability in model error and MODIS biases limited the effectiveness of a single data assimilation system for the entire continental domain. The best combinations of four settings and three averaging schemes led to a domain-averaged improvement in fractional error from 1.2 to 0.97 at IMPROVE monitoring sites and from 0.99 to 0.89 at STN sites. For 38% of OI results, MODIS OI degraded the forward-model skill because of biases and outliers in MODIS AOD. The surface data assimilation combined 36 × 36 km aerosol concentrations from the CMAQ model with surface PM2.5 measurements over the continental United States for 2002. The model error covariance matrix was constructed using the observational method, while the observation error covariance matrix included a site representation term that scaled the observation error by land use (i.e., urban or rural locations); in theory, urban locations should have less influence on surrounding areas than rural sites, which can be controlled using the site representation error. The annual evaluations showed substantial improvements in model performance, with an increase in the correlation coefficient from 0.36 (prior) to 0.76 (posterior) and a decrease in fractional error from 0.43 (prior) to 0.15 (posterior). In addition, the normalized mean error decreased from 0.36 (prior) to 0.13 (posterior), and the RMSE decreased from 5.39 μg m⁻³ (prior) to 2.32 μg m⁻³ (posterior). OI decreased model bias for both large spatial areas and point locations, and could be extended to more advanced data assimilation methods. The current work will be applied to a five-year (2000-2004) CMAQ simulation aimed at improving aerosol model estimates; the posterior model concentrations will be used to inform exposure studies over the U.S. that relate aerosol exposure to mortality and morbidity rates. Future improvements to the OI techniques used in this study will include combining surface and satellite data to improve posterior model estimates. Satellite data have high spatial and temporal resolution in comparison with surface measurements, which are scarce but more accurate than model estimates, though satellite data are subject to noise that varies with the location and season of retrieval. The implementation of OI to combine satellite and surface datasets has the potential to improve posterior model estimates for locations that have no direct measurements.
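All of the assimilation experiments above use the standard OI analysis update; the sketch below is generic, with the thesis's specific covariance constructions (the observational method for B, land-use-scaled site representation errors in R) supplied as inputs rather than reproduced here.

```python
import numpy as np

def oi_update(xb, y, H, B, R):
    """Standard optimal interpolation analysis step.

    xb : (n,) background (prior) model concentrations
    y  : (m,) observations (surface PM2.5, or satellite AOD mapped to PM2.5)
    H  : (m, n) observation operator mapping model space to observation space
    B  : (n, n) model (background) error covariance
    R  : (m, m) observation error covariance, e.g. scaled by site representation
    """
    d = y - H @ xb                       # innovation
    S = H @ B @ H.T + R                  # innovation covariance
    K = np.linalg.solve(S, H @ B).T      # gain K = B H^T S^-1 (B, S symmetric)
    return xb + K @ d                    # posterior (analysis) concentrations
```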
60

Adaptive observation: limits of the forecasting and control of uncertainties

Oger, Niels 02 July 2015
Adaptive observation (AO) is a practice of numerical weather prediction (NWP) that seeks to determine, in a prognostic way, which set (or network) of additional observations to deploy and assimilate in the future in order to improve the forecasts. The objective is to increase forecast skill by adding observations where they will have the best (optimal) impact. Numerical AO methods provide objective but partial answers: they account both for the dynamical aspects of the atmosphere, through the adjoint model, and for the data assimilation system (DAS), the one most commonly used for AO being 4D-Var. These linear (adjoint-based) techniques rely, however, on a single deterministic realisation (or trajectory), and the uncertainty affecting that trajectory degrades the efficiency of the AO. The starting point of this work is to assess the impact of the uncertainty associated with the choice of trajectory on one technique in particular: the KFS. An ensemble of forecasts is used to study this sensitivity. Experiments in a simplified framework show that the deployment solutions can change depending on the chosen trajectory; taking this uncertainty into account is all the more necessary because the assimilation system used is not truly optimal, owing to simplifications in its implementation. A new adaptive observation method, called the Variance Reduction Field (VRF), was developed during this thesis. It determines, for each model grid point, the expected reduction in the variance of a score function quantifying forecast quality that would result from assimilating an additional pseudo-observation at that point. Two approaches to the VRF are proposed: the first is based on a deterministic simulation, and the second uses an ensemble of assimilations and forecasts. Both approaches were implemented and studied in the Lorenz 96 model. Computing the VRF from an ensemble is straightforward if the ensemble members are already available, and the adjoint model is not needed for the computation. Implementing the VRF in a large NWP system, such as an operational one, could not be carried out within this thesis; however, a feasibility study of building the VRF in the OOPS environment was conducted. A description of OOPS (2013 version) is first given in the manuscript, as this environment is a novelty in itself, followed by a discussion of the developments required to implement the VRF.
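A minimal sketch of the ensemble approach to the VRF as described above: for a scalar score S and a pseudo-observation of the state at grid point i with error variance r, linear-Gaussian theory gives an expected score-variance reduction of cov(S, x_i)² / (var(x_i) + r), which can be estimated directly from ensemble statistics without the adjoint model. Variable names and the observation-error value are illustrative.

```python
import numpy as np

def variance_reduction_field(X, S, r=0.1):
    """Ensemble estimate of the expected score-variance reduction per grid point.

    X : (n, N) forecast ensemble, n grid points, N members
    S : (N,) value of the forecast score function for each member
    r : error variance of the hypothetical pseudo-observation (illustrative)

    Assimilating one pseudo-observation of x_i reduces, to linear-Gaussian
    order, the variance of S by cov(S, x_i)^2 / (var(x_i) + r).
    """
    Xa = X - X.mean(axis=1, keepdims=True)     # ensemble anomalies
    Sa = S - S.mean()                          # score anomalies
    N = X.shape[1]
    cov_Sx = Xa @ Sa / (N - 1)                 # (n,) covariance of score with state
    var_x = (Xa ** 2).sum(axis=1) / (N - 1)    # (n,) ensemble variance per point
    return cov_Sx ** 2 / (var_x + r)           # expected variance reduction field
```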
