1

Bayesian inference for source determination in the atmospheric environment

Keats, William Andrew January 2009 (has links)
In the event of a hazardous release (chemical, biological, or radiological) in an urban environment, monitoring agencies must have the tools to locate and characterize the source of the emission in order to respond and minimize damage. Given a finite and noisy set of concentration measurements, determining the source location, strength and time of release is an ill-posed inverse problem. We treat this problem using Bayesian inference, a framework under which uncertainties in modelled and measured concentrations can be propagated, in a consistent, rigorous manner, toward a final probabilistic estimate for the source. The Bayesian methodology operates independently of the chosen dispersion model, meaning it can be applied equally well to problems in urban environments, at regional scales, or at global scales. Both Lagrangian stochastic (particle-tracking) and Eulerian (fixed-grid, finite-volume) dispersion models have been used successfully. Calculations are accomplished efficiently by using adjoint (backward) dispersion models, which reduces the computational effort required from calculating one [forward] plume per possible source configuration to calculating one [backward] plume per detector. Markov chain Monte Carlo (MCMC) is used to efficiently sample from the posterior distribution for the source parameters; both the Metropolis-Hastings and hybrid Hamiltonian algorithms are used. In this thesis, four applications falling under the rubric of source determination are addressed: dispersion in highly disturbed flow fields characteristic of built-up (urban) environments; dispersion of a nonconservative scalar over flat terrain in a statistically stationary and horizontally homogeneous (turbulent) wind field; optimal placement of an auxiliary detector using a decision-theoretic approach; and source apportionment of particulate matter (PM) using a chemical mass balance (CMB) receptor model. For the first application, the data sets used to validate the proposed methodology include a water-channel simulation of the near-field dispersion of contaminant plumes in a large array of building-like obstacles (Mock Urban Setting Trial) and a full-scale field experiment (Joint Urban 2003) in Oklahoma City. For the second and third applications, the background wind and terrain conditions are based on those encountered during the Project Prairie Grass field experiment; mean concentration and turbulent scalar flux data are synthesized using a Lagrangian stochastic model where necessary. In the fourth and final application, Bayesian source apportionment results are compared to the US Environmental Protection Agency's standard CMB model using a test case involving PM data from Fresno, California. For each of the applications addressed in this thesis, combining Bayesian inference with appropriate computational techniques results in a computationally efficient methodology for performing source determination.
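As an illustration of how the pieces above fit together, the following minimal Python sketch (not the thesis code) samples a source posterior with random-walk Metropolis-Hastings. It assumes a Gaussian measurement likelihood and a made-up stand-in for the precomputed adjoint (detector-to-source) plumes; all names and numbers are hypothetical.

```python
import numpy as np

# Hypothetical setup: for each of D detectors, adjoint_plume(d, x, y) gives the
# concentration that detector d would see per unit source strength at (x, y).
# One backward run per detector replaces one forward run per candidate source.
rng = np.random.default_rng(0)

def adjoint_plume(d, x, y):
    # Stand-in for a precomputed backward-dispersion field (an assumption, not
    # the thesis model): a smooth kernel centred on detector d's location.
    det_xy = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])[d]
    r2 = (x - det_xy[0])**2 + (y - det_xy[1])**2
    return np.exp(-r2 / 10.0)

measured = np.array([0.8, 0.3, 0.2])   # synthetic detector readings
sigma = 0.05                            # assumed measurement noise (std dev)

def log_posterior(theta):
    x, y, q = theta                     # source location (x, y) and strength q
    if q <= 0.0:                        # flat prior on location, positivity on q
        return -np.inf
    predicted = q * np.array([adjoint_plume(d, x, y) for d in range(3)])
    return -0.5 * np.sum((measured - predicted)**2) / sigma**2

# Random-walk Metropolis-Hastings over theta = (x, y, q)
theta = np.array([1.0, 1.0, 1.0])
lp = log_posterior(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(scale=[0.3, 0.3, 0.1])
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)

samples = np.array(samples[5000:])             # discard burn-in
print("posterior mean (x, y, q):", samples.mean(axis=0))
```

The adjoint structure is what keeps this cheap: the backward plumes are computed once per detector, after which every MCMC proposal costs only a few kernel evaluations.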
2

Applications of Adjoint Modelling in Chemical Composition: Studies of Tropospheric Ozone at Middle and High Northern Latitudes

Walker, Thomas 01 September 2014 (has links)
Ozone is integral to tropospheric chemistry, and understanding the processes controlling its distribution is important in climate and air pollution contexts. The GEOS-Chem global chemical transport model and its adjoint are used to interpret the impacts of midlatitude precursor emissions and atmospheric transport on the tropospheric ozone distribution at middle and high northern latitudes. In the Arctic, the model reproduces seasonal cycles of peroxyacetyl nitrate (PAN) and ozone measured at the surface, and observed ozone abundances in the summer free troposphere. Source attribution analysis suggests that local photochemical production, ≤ 0.25 ppbv/day, driven by PAN decomposition accounts for more than 50% of ozone in the summertime Arctic boundary layer. In the mid-troposphere, photochemical production accounts for 30–40% of ozone, while ozone transported from midlatitudes contributes 25–35%. Adjoint sensitivity studies link summertime ozone production to anthropogenic, biomass burning, soil, and lightning emissions between 50°N and 70°N. Over Alert, Nunavut, the sensitivity of mid-tropospheric ozone to lightning emissions sometimes exceeds that to anthropogenic emissions. Over the eastern U.S., numerous models overestimate ozone in the summertime boundary layer. An inversion analysis, using the GEOS-Chem four-dimensional variational data assimilation system, optimizes emissions of NOx and isoprene. Inversion results suggest the model bias cannot be explained by discrepancies in these precursor emissions. A separate inversion optimizes rates of key chemical reactions, including ozone deposition rates, which are parameterized and particularly uncertain. The inversion suggests a factor of 2–3 increase in deposition rates in the northeastern U.S., decreasing the ozone bias from 17.5 ppbv to 6.0 ppbv. This analysis, however, is sensitive to the model's boundary layer mixing scheme. Several inversion analyses are conducted to estimate lightning NOx emissions over North America in August 2006, using ozonesonde data. The high-resolution nested version of GEOS-Chem is used to better capture variability in the ozonesonde data. The analyses suggest North American lightning NOx totals between 0.076 and 0.204 Tg N. A major challenge is that the vertical distribution of the lightning source is not optimized, but the results suggest a bias in the vertical distribution. Reliably optimizing the three-dimensional distribution of lightning NOx emissions requires more information than the ozonesonde dataset contains.
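The adjoint sensitivities invoked above have a compact generic form. As a hedged sketch of the idea (not the specific GEOS-Chem formulation), for a scalar response J (e.g., ozone over a receptor region) and an emission field E, a single backward integration of the adjoint variables yields sensitivities to all emissions at once:

```latex
% Generic adjoint-sensitivity relations (illustrative, not GEOS-Chem specific).
% Forward model: dc/dt = F(c, E); response: J = J(c(T)).
\begin{aligned}
\frac{d\mathbf{c}}{dt} &= F(\mathbf{c}, \mathbf{E}), \qquad J = J(\mathbf{c}(T)),\\[4pt]
-\frac{d\boldsymbol{\lambda}}{dt} &=
  \left(\frac{\partial F}{\partial \mathbf{c}}\right)^{\!\top}\boldsymbol{\lambda},
  \qquad \boldsymbol{\lambda}(T) = \frac{\partial J}{\partial \mathbf{c}(T)},\\[4pt]
\frac{\partial J}{\partial \mathbf{E}} &=
  \int_{0}^{T} \left(\frac{\partial F}{\partial \mathbf{E}}\right)^{\!\top}
  \boldsymbol{\lambda}\, dt .
\end{aligned}
```

One backward (adjoint) integration evaluates sensitivities to every source simultaneously, which is why a single adjoint study can attribute ozone to anthropogenic, biomass burning, soil, and lightning emissions at once.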
3

MATLODE: A MATLAB ODE Solver and Sensitivity Analysis Toolbox

D'Augustine, Anthony Frank 04 May 2018 (has links)
Sensitivity analysis quantifies the effect that perturbations of a model's inputs have on its outputs. Key insights gained from sensitivity analysis include understanding how robust the model is to perturbations and identifying the most important model parameters. MATLODE is a tool for sensitivity analysis of models described by ordinary differential equations (ODEs). MATLODE implements two distinct approaches for sensitivity analysis: direct (via the tangent linear model) and adjoint. Within each approach, four families of numerical methods are implemented, namely explicit Runge-Kutta, implicit Runge-Kutta, Rosenbrock, and singly diagonally implicit Runge-Kutta. Each approach and family has its own strengths and weaknesses when applied to real-world problems. MATLODE offers a multitude of options that allow users to find the best approach for a wide range of initial value problems. In spite of the great importance of sensitivity analysis for models governed by differential equations, until this work there was no publicly available MATLAB ordinary differential equation sensitivity analysis toolbox. The two most popular sensitivity analysis packages, CVODES [8] and FATODE [10], are geared toward the high-performance modeling space; no native MATLAB toolbox was available. MATLODE fills this need and offers sensitivity analysis capabilities in MATLAB, one of the most popular programming languages in scientific communities such as chemistry, biology, ecology, and oceanography. We expect that MATLODE will prove to be a useful tool for these communities, helping facilitate their research and bridge the gap between theory and practice. / Master of Science
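To make the direct (tangent-linear) approach concrete without guessing at MATLODE's actual call signature, here is a generic Python sketch: the sensitivities S = dy/dp of an ODE solution with respect to its parameters are obtained by integrating the variational equations alongside the state. The logistic example and all names are illustrative assumptions, not part of MATLODE.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Tangent-linear (forward) sensitivity analysis for dy/dt = f(y, p):
# augment the state with S = dy/dp, which obeys dS/dt = (df/dy) S + df/dp.
# Example system: logistic growth dy/dt = p0*y*(1 - y/p1); an illustrative
# stand-in, not an example shipped with MATLODE.

def rhs_augmented(t, z, p0, p1):
    y = z[0]
    S = z[1:3]                      # S[j] = dy/dp_j
    f = p0 * y * (1.0 - y / p1)
    df_dy = p0 * (1.0 - 2.0 * y / p1)
    df_dp = np.array([y * (1.0 - y / p1),          # df/dp0
                      p0 * y**2 / p1**2])          # df/dp1
    return np.concatenate(([f], df_dy * S + df_dp))

p0, p1 = 0.5, 10.0
z0 = np.array([0.1, 0.0, 0.0])      # initial state; S(0) = 0 (y0 independent of p)
sol = solve_ivp(rhs_augmented, (0.0, 20.0), z0, args=(p0, p1), rtol=1e-8)

y_T, dy_dp0, dy_dp1 = sol.y[:, -1]
print(f"y(T) = {y_T:.4f}, dy/dp0 = {dy_dp0:.4f}, dy/dp1 = {dy_dp1:.4f}")
```

The tangent-linear approach scales with the number of parameters, while the adjoint approach scales with the number of outputs, which is why a toolbox like MATLODE provides both.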
4

Robust Nonlinear Model Predictive Control based on Constrained Saddle Point Optimization : Stability Analysis and Application to Type 1 Diabetes

Penet, Maxime 10 October 2013 (has links) (PDF)
This thesis deals with the design of a robust and safe control algorithm intended for an artificial pancreas. More precisely, we are interested in controlling the stabilizing part of a classical treatment. To meet this objective, the design of a robust nonlinear model predictive controller based on the solution of a saddle-point optimization problem is considered. To test the controller's performance in a realistic setting, numerical simulations on an FDA-validated testing platform are carried out. In a first part, we present an extension of the usual nonlinear model predictive controller designed to robustly control, in a sampled-data framework, systems described by nonlinear ordinary differential equations. This controller, which computes the best control input by considering the solution of a constrained saddle-point optimization problem, is called the saddle point model predictive controller (SPMPC). Using this controller, it is proved that the closed loop is ultimately bounded and, under some assumptions on the problem structure, input-to-state practically stable. We then turn to numerically solving the corresponding control problem, proposing an algorithm inspired by the augmented Lagrangian technique that makes use of adjoint models. In a second part, we consider the application of this controller to the problem of artificial blood glucose control. After a modeling phase, two models are retained: a simple one used to design the controller and a complex one used to simulate realistic virtual patients; the latter is needed to validate our control approach. To compute a good control input, the SPMPC controller needs the full state value, whereas the sensors can only provide the blood glucose value, so the design of an adequate observer is also addressed. Numerical simulations are then performed, and the results show the interest of the approach: for all virtual patients, no hypoglycemia event occurs, and the time spent in hyperglycemia is too short to induce harmful consequences. Finally, the interest of extending the SPMPC approach to the control of time-delay systems in a sampled-data framework is explored numerically.
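The constrained saddle-point problem at the core of SPMPC can be stated schematically. The formulation below is a hedged sketch of the general min-max MPC idea rather than the thesis's exact problem: at each sampling instant the controller selects the input minimizing the worst-case cost over admissible disturbances.

```latex
% Schematic saddle-point (min-max) MPC problem, illustrative only.
u^\star = \arg\min_{u \in \mathcal{U}} \; \max_{w \in \mathcal{W}} \;
  \int_{t}^{t+T} \ell\bigl(x(\tau), u(\tau), w(\tau)\bigr)\, d\tau
  + V_f\bigl(x(t+T)\bigr)
\quad \text{s.t.} \quad \dot{x} = f(x, u, w), \;\; x(t) = x_{\text{meas}},
\;\; g\bigl(x(\tau), u(\tau)\bigr) \le 0 .
```

Solving this min-max problem at every sampling instant is what motivates the augmented-Lagrangian algorithm with adjoint models mentioned in the abstract: the adjoint supplies gradients of the cost with respect to both the control and the disturbance sequences.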
5

Formalisation and automation of YAO, a code generator for variational data assimilation

Nardi, Luigi 08 March 2011 (has links)
Variational data assimilation (4D-Var) is a well-known technique used in geophysics, in particular in meteorology and oceanography. It consists in estimating the control parameters of a direct numerical model by minimizing a cost function that measures the misfit between the forecast values and actual observations. The minimization, which is based on a gradient method, requires computing the adjoint model (the product of the transposed Jacobian matrix with the derivative vector of the cost function at the observation points). Implementing 4D-Var involves complex programming issues, in particular concerning the adjoint model, the parallelization of the code, and efficient memory management. To address these difficulties and facilitate the implementation of 4D-Var applications, LOCEAN is developing the YAO framework. YAO represents a direct model as a computation flow graph called a modular graph: modules represent computation units, and edges between modules represent data transfers. Description directives allow a user to describe a direct model and then generate the modular graph associated with it. YAO contains two core algorithms: a forward propagation algorithm on the graph, which computes the output of the numerical model, and a backpropagation algorithm on the graph, which computes the adjoint model. The main advantage of the YAO framework is that the code for both the direct model and its adjoint is generated automatically once the user has designed the modular graph. Moreover, YAO supports a variety of scenarios for running different data assimilation sessions. This thesis presents computer science research on the YAO framework. We first formalized the existing YAO specifications in a more general way. We then proposed algorithms that automate several important tasks, such as the automatic generation of an "optimal" ordering of the computations and the automatic parallelization of the generated code on shared-memory architectures using OpenMP directives. This work lays the foundations that will, in the medium term, turn YAO into a general, operational platform for 4D-Var data assimilation, capable of handling large, real-world applications.
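The two core graph algorithms, forward propagation for the direct model and backpropagation for its adjoint, can be illustrated compactly. The toy modular graph below is an assumption made for exposition (YAO itself generates code from user directives); each module here has at most one input to keep the sketch short.

```python
import numpy as np

# Toy "modular graph": each module is (function, jacobian, input module ids).
# Forward propagation evaluates the direct model; backpropagation accumulates
# the adjoint (gradient of the output with respect to each module's value).
modules = {
    "a": (lambda: 2.0,          None,                []),       # input module
    "b": (lambda a: np.sin(a),  lambda a: np.cos(a), ["a"]),
    "c": (lambda b: b**2,       lambda b: 2.0 * b,   ["b"]),
}
order = ["a", "b", "c"]              # a topological ordering of the graph

def forward(modules, order):
    vals = {}
    for name in order:
        fn, _, deps = modules[name]
        vals[name] = fn(*(vals[d] for d in deps))
    return vals

def backward(modules, order, vals, output):
    # Reverse sweep: adj[m] = d(output)/d(m), seeded with 1 at the output.
    adj = {name: 0.0 for name in order}
    adj[output] = 1.0
    for name in reversed(order):
        fn, jac, deps = modules[name]
        if deps:                      # chain rule along the incoming edge
            adj[deps[0]] += jac(vals[deps[0]]) * adj[name]
    return adj

vals = forward(modules, order)
adj = backward(modules, order, vals, "c")
# d(c)/d(a) = 2*sin(a)*cos(a); check against the analytic value:
print(adj["a"], 2.0 * np.sin(2.0) * np.cos(2.0))
```

The key property, mirrored in YAO, is that once the graph and the per-module Jacobians are described, both sweeps (and hence both generated codes, direct and adjoint) follow mechanically.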
6

Variational data assimilation of remote sensing data into operational models of plant canopies and the agricultural landscape

Kpemlie, Emmanuel Kwashi 18 December 2009 (has links)
Knowledge of the microclimate and of evapotranspiration (the latent heat flux, which represents the actual water consumption of a crop) at the scale of agricultural fields is key to understanding crop development. Most methods for estimating evapotranspiration assume homogeneous surfaces or decoupled atmospheric variables over the modelling domain, without accounting for the feedback between surface and atmosphere or for the spatial variability of the agricultural landscape. To analyse these dependencies and predict the microclimate and land-surface fluxes, we developed a coupled atmospheric-boundary-layer and land-surface model that accounts for landscape heterogeneity using a tiled (patch) approach, in which the spatial variability of the surface enters the model through the proportions and characteristics of the main plant canopies composing the landscape. A variational data assimilation method, based on the adjoint of the model, was implemented to retrieve input parameters that are difficult to estimate spatially (soil moisture and aerodynamic roughness) from surface temperature observed by remote sensing. The developed approach is compared with simpler approaches that treat each surface type independently, highlighting the role of accounting for the spatial variability of the surface in simulating the microclimate and surface fluxes.
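Schematically, the variational method adjusts the uncertain parameters p (here soil moisture and aerodynamic roughness) by minimizing the misfit between modelled and remotely sensed surface temperature, with the adjoint model supplying the gradient. The functional below is a generic hedged form, not the thesis's exact cost:

```latex
% Generic variational parameter-estimation cost (illustrative form).
J(\mathbf{p}) = \frac{1}{2} \sum_{i=1}^{N}
  \frac{\bigl(T_s^{\mathrm{model}}(t_i; \mathbf{p}) - T_s^{\mathrm{obs}}(t_i)\bigr)^{2}}{\sigma_i^{2}},
\qquad
\nabla_{\mathbf{p}} J \ \text{evaluated via the adjoint model.}
```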
7

Assimilation of satellite radar data into a snowpack metamorphism model

Phan, Xuan Vu 21 March 2014 (has links)
Characterizing snowpack structure is an important issue for the management of water resources and the forecasting of avalanche risk. New high-resolution X-band Synthetic Aperture Radar (SAR) satellites provide image data at metre-scale resolution with daily revisits. In this work, an electromagnetic backscattering model applicable to dry snow is adapted to X-band and higher frequencies. A 3D-Var data assimilation algorithm is then implemented to constrain the snow metamorphism model SURFEX/Crocus using the satellite observations. Finally, the approach is evaluated using TerraSAR-X data acquired over the Argentière glacier in the Chamonix valley of the French Alps. This first comparison shows the high potential of assimilating X-band SAR data for characterizing the snowpack.
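The 3D-Var analysis step used here has a standard generic form: it balances a background state (the SURFEX/Crocus forecast) against the observations through their respective error covariances. The Python sketch below illustrates that generic step with small made-up matrices; the observation operator, values, and variable names are assumptions, not the thesis's.

```python
import numpy as np
from scipy.optimize import minimize

# Generic 3D-Var analysis: minimize
#   J(x) = 1/2 (x - xb)^T B^-1 (x - xb) + 1/2 (y - H x)^T R^-1 (y - H x)
# where x is a (toy) snowpack state, xb the model background, y the radar
# observations, and H a linear observation operator. All values are made up.
xb = np.array([0.25, 150.0])              # background: e.g. grain size, density
B = np.diag([0.05**2, 20.0**2])           # background error covariance (toy)
H = np.array([[4.0, 0.01]])               # toy linearized backscatter operator
y = np.array([2.6])                       # one observed backscatter value
R = np.array([[0.1**2]])                  # observation error covariance

Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)

def cost_and_grad(x):
    dxb = x - xb
    dy = H @ x - y
    J = 0.5 * dxb @ Binv @ dxb + 0.5 * dy @ Rinv @ dy
    grad = Binv @ dxb + H.T @ (Rinv @ dy)
    return J, grad

res = minimize(cost_and_grad, xb, jac=True, method="L-BFGS-B")
print("analysis state:", res.x)           # background pulled toward observation
```

In the thesis the observation operator is the (nonlinear) electromagnetic backscattering model rather than the toy matrix H used above.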
