131

Estimation de la dynamique à partir des structures observées dans une séquence d'images / Estimation of motion from observed objects in image sequences

Lepoittevin, Yann 03 December 2015
This thesis describes approaches for estimating motion from image sequences with data assimilation methods. Particular attention is given to including representations of the displayed objects in the estimation process, in order to spatially correlate the results. Both variational and sequential implementations are discussed. The variational method relies on an evolution equation, a background equation and an observation equation, which characterize the studied system and the observations. The motion estimate is obtained as the minimum of a cost function. In a first approach, the objects are modeled by their boundary curves. The image model describes both the evolution of the gray-level function and the displacement of the objects, so that the resulting motion field makes the position of the objects in the model match their observed position in the image acquisitions. Taking the objects into account significantly increases the computational cost but improves the motion estimate. In a second, computationally cheaper approach, the values of the background error covariance matrix are modified so as to correlate the image pixels. The sequential algorithm presented relies on the creation of an ensemble of state vectors and on localization methods. To model the objects, a new localization criterion based on the gray-level values of the pixels is defined. However, localization, if applied directly to the background error covariance matrix, makes the method unusable for large images. An alternative was therefore designed, which decomposes the global image domain into independent subdomains before estimating the motion; the representation of the objects intervenes in this decomposition of the global domain.
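
As a point of reference, the generic strong-constraint variational cost function that this kind of estimation minimizes can be written as follows; the notation (state X, background X_b, covariances B and R, evolution model M, observation operator H) is standard and assumed here, not taken from the thesis itself:

```latex
J(X) \;=\; \frac{1}{2}\,\bigl(X(t_0)-X_b\bigr)^{\top} B^{-1}\bigl(X(t_0)-X_b\bigr)
\;+\; \frac{1}{2}\int_{t_0}^{t_f}\bigl(Y(t)-\mathbb{H}[X(t)]\bigr)^{\top} R^{-1}\bigl(Y(t)-\mathbb{H}[X(t)]\bigr)\,dt,
\qquad \text{subject to}\quad \partial_t X = \mathbb{M}(X).
```

Modifying the entries of B, as in the second approach above, is precisely what introduces spatial correlation between pixels at low computational cost.
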
132

Apport des données polarimétriques radar pour un modèle atmosphérique à échelle convective / Interest of polarimetric radar observations for convective scale numerical weather prediction models

Augros, Clotilde 19 May 2016
This PhD explored the benefits of polarimetric radar variables (at centimeter wavelengths), which are sensitive to the microphysical properties of hydrometeors, for convective-scale numerical weather prediction models. In the first part of the PhD, a polarimetric radar forward operator, consistent with the bulk one-moment microphysical schemes typically used by operational convective-scale models, was designed. Comparisons between observed and simulated variables for all radar types (S, C and X band) were performed for two convective cases and helped validate the forward operator. Following these comparisons, quality controls were specified to limit the errors on the polarimetric variables before using them for assimilation. In the second part of the PhD, an assimilation method for polarimetric variables was designed and tested, based on the operational 1D+3D-Var method used to assimilate radar reflectivities in the AROME model. The Bayesian 1D retrieval of humidity profiles was adapted to include differential reflectivity and specific differential phase, in addition to reflectivity, within the observation vector. Different options of the retrieval method were tested and evaluated by comparison with radar and GPS observations. Assimilation experiments conducted on two convective cases then made it possible to evaluate the impact of the polarimetric observations on the analysed humidity fields as well as on the forecasts of reflectivity and precipitation accumulation.
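
For illustration, the 1D Bayesian retrieval underlying the 1D+3D-Var approach weights nearby model columns by how well their simulated radar variables fit the observed column. The sketch below assumes a diagonal observation-error covariance and hypothetical array shapes; it is a minimal rendering of the idea, not code from the thesis:

```python
import numpy as np

def bayesian_1d_retrieval(y_obs, x_cols, h_cols, r_diag):
    """Estimate a humidity profile as a Bayesian weighted average of
    model columns, weighted by their Gaussian likelihood.

    y_obs  : (m,)   observed column of radar variables (e.g. Z, ZDR, KDP)
    x_cols : (n, k) humidity profiles from n nearby model columns
    h_cols : (n, m) simulated radar variables for those columns
             (the polarimetric forward operator applied to each column)
    r_diag : (m,)   observation-error variances (diagonal R assumed)
    """
    innov = y_obs[None, :] - h_cols                   # (n, m) departures
    log_w = -0.5 * np.sum(innov**2 / r_diag, axis=1)  # log weights
    w = np.exp(log_w - log_w.max())                   # stabilise, then
    w /= w.sum()                                      # normalise
    return w @ x_cols                                 # retrieved profile

# Toy usage: 5 model columns, 3 observed variables, 10 humidity levels
rng = np.random.default_rng(0)
x = rng.random((5, 10)); h = rng.random((5, 3))
print(bayesian_1d_retrieval(rng.random(3), x, h, np.full(3, 0.1)))
```

Adding ZDR and KDP to the observation vector, as described above, simply extends y_obs and the corresponding rows of the forward-operator output.
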
133

Ensemblový Kalmanův filtr na prostorech velké a nekonečné dimenze / Ensemble Kalman filter on high and infinite dimensional spaces

Kasanický, Ivan January 2017
The ensemble Kalman filter (EnKF) is a recursive filter used in data assimilation to produce sequential estimates of the states of a hidden dynamical system. The evolution of the system is usually governed by a set of differential equations, so one concrete state of the system is, in fact, an element of an infinite-dimensional space. In the presented thesis we show that the EnKF is well defined on an infinite-dimensional separable Hilbert space if the data noise is a weak random variable with a covariance bounded from below. We also show that this condition is sufficient for 3DVAR and Bayesian filtering to be well posed. Additionally, we extend the already known fact that the EnKF converges to the Kalman filter in finite dimension, and prove that a similar statement holds even in infinite dimension. The EnKF suffers from the low-rank approximation of the state covariance, so covariance localization is required in...
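
As context for the finite-dimensional filter the thesis generalizes, a minimal stochastic EnKF analysis step looks like the following. This is the textbook formulation with assumed shapes, not code from the thesis:

```python
import numpy as np

def enkf_analysis(E, y, H, R, seed=0):
    """Stochastic EnKF analysis step (textbook form).

    E : (n, N) ensemble of N state vectors of dimension n
    y : (m,)   observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation-error covariance; it must be strictly
        positive definite, echoing the 'bounded from below' condition
    """
    rng = np.random.default_rng(seed)
    N = E.shape[1]
    A = E - E.mean(axis=1, keepdims=True)          # state anomalies
    HE = H @ E
    HA = HE - HE.mean(axis=1, keepdims=True)       # observed anomalies
    # Sample covariances: the low-rank approximation the abstract mentions
    P_HH = HA @ HA.T / (N - 1) + R
    P_xH = A @ HA.T / (N - 1)
    K = np.linalg.solve(P_HH, P_xH.T).T            # Kalman gain
    # One perturbed observation per ensemble member
    Y = y[:, None] + np.linalg.cholesky(R) @ rng.standard_normal((len(y), N))
    return E + K @ (Y - HE)                        # analysis ensemble

# Toy usage: 4-dimensional state, 20 members, 2 observations
rng = np.random.default_rng(1)
E = rng.standard_normal((4, 20))
H = np.array([[1., 0., 0., 0.], [0., 0., 1., 0.]])
print(enkf_analysis(E, np.array([0.5, -0.2]), H, 0.1 * np.eye(2)).shape)
```

The rank of the sample covariance is at most N - 1, which is why localization becomes necessary when n is large.
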
134

Využití nekonvenčních pozorování v asimilaci dat do numerického předpovědního modelu počasí ve vysokém rozlišení spojení se studiem pomalého podprostoru řešení modelu / Non-conventional data assimilation in high resolution numerical weather prediction model with study of the slow manifold of the model

Benáček, Patrik January 2019
Satellite instruments currently provide the largest source of information to today's data assimilation (DA) systems for numerical weather prediction (NWP). With the development of high-resolution models, the efficient use of observations at high density is essential to improve small-scale information in the weather forecast. However, a large amount of satellite radiances has to be removed from DA by horizontal data thinning due to uncorrelated observation error assumptions. Moreover, satellite radiances include systematic errors (biases) that may be even larger than the observation signal itself, and these must be properly removed prior to DA. Although the Variational Bias Correction (VarBC) scheme is widely used by global NWP centers, there are still open questions regarding its use in Limited-Area Models (LAMs). This thesis aims to tackle the observation error difficulties in assimilating polar satellite radiances in the meso-scale ALADIN system. Firstly, we evaluate spatial and inter-channel error correlations to enhance the positive effect of data thinning. Secondly, we study satellite radiance bias characteristics with the key aspects of the VarBC in LAMs, and we compare the different VarBC configurations with regard to forecast performance. This work is a step towards improving the...
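
For reference, VarBC augments the variational cost function with bias coefficients β that multiply a small set of predictors p_i (for example a constant, a layer thickness, or the scan angle), so the radiance bias is estimated jointly with the atmospheric state. This is the standard scheme in generic notation, not the specific ALADIN configuration studied in the thesis:

```latex
J(x,\beta) \;=\; \tfrac{1}{2}(x-x_b)^{\top}B^{-1}(x-x_b)
\;+\; \tfrac{1}{2}(\beta-\beta_b)^{\top}B_{\beta}^{-1}(\beta-\beta_b)
\;+\; \tfrac{1}{2}\,d^{\top}R^{-1}d,
\qquad d \;=\; y - H(x) - \textstyle\sum_i \beta_i\,p_i .
```

The open questions for LAMs mentioned above concern, among other things, how well β can be constrained when the domain sees far fewer observations than a global system.
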
135

Remotely Sensed Data Assimilation Technique to Develop Machine Learning Models for Use in Water Management

Zaman, Bushra 01 May 2010
Increasing population and water conflicts are making water management one of the most important issues of the present world. It has become absolutely necessary to find ways to manage water more efficiently. Technological advancement has introduced various techniques for data acquisition and analysis, and these tools can be used to address some of the critical issues that challenge water resource management. This research used learning machine techniques and information acquired through remote sensing to solve problems related to soil moisture estimation and crop identification on large spatial scales. In this dissertation, solutions were proposed in three problem areas that can be important in the decision-making process related to water management in irrigated systems. A data assimilation technique was used to build a learning machine model that generated soil moisture estimates commensurate with the scale of the data. The research was taken further by developing a multivariate machine learning algorithm to predict root zone soil moisture both in space and time. Further, a model was developed for supervised classification of multi-spectral reflectance data using a multi-class machine learning algorithm. The procedure was designed for classifying crops, but the model is data dependent and can be used with other datasets, and hence can be applied to other landcover classification problems. The dissertation compared the performance of relevance vector and support vector machines in estimating soil moisture. A multivariate relevance vector machine algorithm was tested in the spatio-temporal prediction of soil moisture, and the multi-class relevance vector machine model was used for classifying different crop types. It was concluded that the classification scheme may uncover important data patterns contributing greatly to knowledge bases and to scientific and medical research. The results of the soil moisture models give farmers and irrigators a rough idea of the moisture status of their fields and of their productivity. The models are part of a framework devised to provide tools that support irrigation system operational decisions. This information could help in the overall improvement of agricultural water management practices for large irrigation systems. Conclusions were reached based on the performance of these machines in estimating soil moisture using remotely sensed data, forecasting spatial and temporal variation of soil moisture, and classifying data. These solutions provide a new perspective on problem-solving techniques by introducing methods not previously attempted in this context.
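
scikit-learn has no relevance vector machine, so the sketch below uses support vector regression, one of the two machines compared above, on synthetic stand-ins for the remotely sensed inputs; the feature choices and data are hypothetical, intended only to show the shape of such a soil-moisture regression:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical per-pixel features from remote sensing (e.g. backscatter,
# NDVI, surface temperature) mapped to a soil-moisture target.
rng = np.random.default_rng(1)
X = rng.random((200, 3))                       # 200 pixels, 3 features
y = 0.3 * X[:, 0] + 0.1 * X[:, 1] + 0.05 * rng.standard_normal(200)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:150], y[:150])                    # train on 150 pixels
print(model.score(X[150:], y[150:]))           # R^2 on the held-out 50
```

A relevance vector machine would be trained the same way but additionally returns predictive variances, which is one reason the dissertation compares the two.
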
136

Vers une assimilation des données de déformation en volcanologie / Towards assimilation of deformation measurements in volcanology

Bato, Mary Grace 02 July 2018
Tracking magma emplacement at shallow depth as well as its migration towards the Earth's surface is crucial to forecast volcanic eruptions. With the recent advances in Interferometric Synthetic Aperture Radar (InSAR) imaging and the increasing number of continuous Global Navigation Satellite System (GNSS) networks recorded on volcanoes, it is now possible to provide a continuous and spatially extensive evolution of surface displacements during inter-eruptive periods. For basaltic volcanoes, these measurements combined with simple dynamical models can be exploited to characterise and constrain magma pressure building within one or several magma reservoirs, allowing better predictive information on the emplacement of magma at shallow depths. Data assimilation, a sequential time-forward process that combines models and observations (sometimes with a priori information based on error statistics) to predict the state of a dynamical system, has recently gained popularity in various fields of geoscience, such as ocean and weather forecasting, geomagnetism and natural resources exploration. In this dissertation, I present the very first application of data assimilation in volcanology, from synthetic tests to the analysis of real geodetic data.

The first part of this work focuses on the development of strategies to test the applicability and assess the potential of data assimilation, in particular the Ensemble Kalman Filter (EnKF), using a simple two-chamber dynamical model (Reverso et al., 2014) and artificial geodetic data. Synthetic tests are performed in order to: 1) track the magma pressure evolution at depth and reconstruct the synthetic ground surface displacements, as well as estimate non-evolving uncertain model parameters; 2) properly assimilate GNSS and InSAR data; 3) highlight the strengths and weaknesses of the EnKF in comparison with a Bayesian-based inversion technique (e.g. Markov chain Monte Carlo). Results show that the EnKF works well in the synthetic cases and that there is great potential in utilising data assimilation for real-time monitoring of volcanic unrest.

The second part applies the strategy developed through the synthetic tests in order to forecast the rupture of a magma chamber in real time, exploring the 2004-2011 inter-eruptive dataset at Grímsvötn volcano in Iceland. Here, we introduced the concept of a critical level based on the probability of eruption at each time step, estimated as the percentage of ensemble members that exceed their failure overpressure values, initially assigned following a given distribution. Our results show that when 25 +/- 1% of the ensemble members exceed the failure overpressure, an actual eruption is imminent. Furthermore, in this chapter we also extend the previous synthetic tests by enhancing the EnKF strategy of assimilating geodetic data in order to adapt to real-world problems, such as the limited amount of geodetic data available to monitor ice-covered active volcanoes. Common diagnostic tools in data assimilation are presented.

Finally, I demonstrate that, in addition to its interest for predicting volcanic eruptions, sequential assimilation of geodetic data on the basis of the EnKF shows a unique potential to give insights into the roots of the volcanic system. Using the two-reservoir dynamical model for Grímsvötn's plumbing system and assuming a fixed geometry and constant magma properties, we retrieve the temporal evolution of the basal magma inflow beneath Grímsvötn, which drops by up to 85% during the 10 months preceding the initiation of the Bárdarbunga rifting event. The loss of at least 0.016 km3 in the magma supply of Grímsvötn is interpreted as a consequence of magma accumulation beneath Bárdarbunga and the subsequent feeding of the Holuhraun eruption 41 km away.
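
The critical-level diagnostic described above reduces to counting ensemble members past their assigned failure threshold. A minimal sketch, with illustrative overpressure values and the 25% imminence level reported in the abstract:

```python
import numpy as np

def eruption_probability(dp_now, dp_fail):
    """Fraction of EnKF members whose modelled chamber overpressure
    exceeds their assigned failure overpressure.

    dp_now  : (N,) current overpressure of each ensemble member
    dp_fail : (N,) failure overpressure drawn per member at initialisation,
              following a given distribution
    """
    return float(np.mean(dp_now > dp_fail))

# Toy usage: eruption flagged imminent once ~25% of members exceed failure
rng = np.random.default_rng(2)
dp_fail = rng.normal(10.0, 1.0, 100)       # illustrative values (MPa)
dp_now = rng.normal(9.4, 0.8, 100)
p = eruption_probability(dp_now, dp_fail)
print(p, "imminent" if p >= 0.25 else "below critical level")
```
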
137

Wildfire Modeling with Data Assimilation

Johnston, Andrew 14 December 2022
Wildfire modeling is a complex, computationally costly endeavor, but with droughts worsening and fires burning across the western United States, obtaining accurate wildfire predictions is more important than ever. In this paper, we present a novel approach to wildfire modeling using data assimilation. We model wildfire spread with a modification of the partial differential equation model described by Mandel et al. in their 2008 paper. Specifically, we replace some constant parameter values with geospatial functions of fuel type. We combine deep learning and remote sensing to obtain real-time data for the model and employ the Nelder-Mead method to recover optimal model parameters with data assimilation. We demonstrate the efficacy of this approach on computer-generated fires, as well as real fire data from the 2021 Dixie Fire in California. On generated fires, this approach resulted in an average Jaccard index of 0.996 between the predicted and actual fire perimeters and an average Kulczynski measure of 0.997. On data from the Dixie Fire, the average Jaccard index achieved was 0.48, and the average Kulczynski measure was 0.66.
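
The parameter-recovery loop described above can be sketched as a derivative-free fit of simulated burn masks to observed ones. Here `simulate` is a hypothetical wrapper around the spread model (a trivial disk stands in for it), and the Jaccard index serves as the agreement score, as in the abstract:

```python
import numpy as np
from scipy.optimize import minimize

def jaccard(a, b):
    """Jaccard index of two boolean burn masks (intersection over union)."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def fit_parameters(theta0, simulate, observed):
    """Recover spread-model parameters by maximising perimeter agreement
    with Nelder-Mead; 'simulate' runs the fire model for parameters theta
    and returns a boolean burn mask."""
    loss = lambda theta: 1.0 - jaccard(simulate(theta), observed)
    return minimize(loss, theta0, method="Nelder-Mead").x

# Toy usage with a stand-in simulator: a disk whose radius is the parameter
yy, xx = np.mgrid[-50:50, -50:50]
simulate = lambda theta: xx**2 + yy**2 <= theta[0]**2
observed = simulate(np.array([20.0]))
print(fit_parameters(np.array([5.0]), simulate, observed))   # ~20
```

Nelder-Mead suits this setting because the mask-based loss is not differentiable in the model parameters.
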
138

Machine Learning for Improvement of Ocean Data Resolution for Weather Forecasting and Climatological Research

Huda, Md Nurul 18 October 2023
Severe weather events like hurricanes and tornadoes pose major risks globally, underscoring the critical need for accurate forecasts to mitigate impacts. While advanced computational capabilities and climate models have improved predictions, the lack of high-resolution initial conditions still limits forecast accuracy. The Atlantic's "Hurricane Alley" region sees most storms arise, and thus needs robust in-situ ocean data plus atmospheric profiles to enable precise hurricane tracking and intensity forecasts. Examining satellite datasets reveals that radio occultation (RO) provides the most accurate atmospheric measurements at 5-25 km altitude. However, below 5 km, accuracy remains insufficient over oceans compared with land areas. Recent benchmark studies, e.g. Patil and Iiyama (2022) and Wei and Guan (2022), proposed deep learning models for sea surface temperature (SST) prediction, with errors ranging from 0.35°C to 0.75°C in the Tohoku region and root-mean-square errors increasing from 0.27°C to 0.53°C over the China seas, respectively. The approach we have developed remains unparalleled in its domain as of this date. This research is divided into two parts and aims to develop a data-driven, satellite-informed machine learning system that combines high-quality but sparse in-situ ocean data with more readily available low-quality satellite data. In the first part of the work, a novel data-driven, satellite-informed machine learning algorithm was implemented that combines high-quality/low-coverage in-situ point ocean data (e.g. ARGO floats) and low-quality/high-coverage satellite ocean data (e.g. HYCOM, MODIS-Aqua, G-COM) and generated high-resolution data with an RMSE of 0.58°C over the Atlantic Ocean. In the second part of the work, a novel GNN algorithm was implemented for the Gulf of Mexico and shown to successfully capture the complex ocean interactions and mimic the paths of ARGO floats with an RMSE of 1.40°C. / Doctor of Philosophy / Severe storms like hurricanes and tornadoes are a major threat around the world. Accurate weather forecasts can help reduce their impacts. While climate models have improved predictions, the lack of detailed initial conditions still limits forecast accuracy. The Atlantic's "Hurricane Alley" sees many storms form, so good ocean and atmospheric data are needed for precise hurricane tracking and strength forecasts. Studying satellite data shows that radio occultation provides the most accurate measurements at 5-25 km altitude over oceans, but below 5 km accuracy remains insufficient compared with land. Recent research proposed using deep learning models for sea surface temperature prediction with low errors. Our approach remains unmatched in this area currently. This research has two parts. First, we developed a satellite-informed machine learning system combining limited high-quality ocean data with more available low-quality satellite data. This generated high-resolution Atlantic Ocean data with an error of 0.58°C. Second, we implemented a new algorithm on the Gulf of Mexico, successfully modeling complex ocean interactions and hurricane paths with an error of 1.40°C. Overall, this research advances hurricane forecasting by combining different data sources through innovative machine learning techniques. More accurate predictions can help better prepare communities in hurricane-prone regions.
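
One simple way to picture the fusion idea in the first part, combining sparse high-quality in-situ data with dense low-quality satellite data, is a regression from (location, satellite SST) to in-situ SST. This sketch uses a generic gradient-boosted regressor on invented values and is only an illustration of the concept, not the dissertation's actual algorithm:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-ins: dense but biased/noisy satellite SST, and sparse
# accurate in-situ SST at ARGO float locations (all values invented).
rng = np.random.default_rng(3)
lon = rng.uniform(-80, 0, 300)
lat = rng.uniform(0, 40, 300)
sst_true = 28.0 - 0.15 * lat                          # idealised SST field
sst_sat = sst_true + 0.8 + rng.normal(0, 1.0, 300)    # bias + noise

# Learn the mapping (location, satellite SST) -> in-situ SST
X = np.column_stack([lon, lat, sst_sat])
model = GradientBoostingRegressor().fit(X, sst_true)

# Apply it wherever satellite data exist, densifying the field
lon_g = rng.uniform(-80, 0, 5); lat_g = rng.uniform(0, 40, 5)
sat_g = 28.0 - 0.15 * lat_g + 0.8 + rng.normal(0, 1.0, 5)
grid = np.column_stack([lon_g, lat_g, sat_g])
rmse = np.sqrt(np.mean((model.predict(grid) - (28.0 - 0.15 * lat_g))**2))
print(rmse)   # should sit well below the raw satellite error
```
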
139

Relative Role of Uncertainty for Predictions of Future Southeastern U.S. Pine Carbon Cycling

Jersild, Annika Lee 06 July 2016
Predictions of how forest productivity and carbon sequestration will respond to climate change are essential for making forest management decisions and adapting to future climate. However, current predictions can include considerable uncertainty that is not well quantified. To address the need for better quantification of uncertainty, we calculated and compared ecosystem model parameter, ecosystem model process, climate model, and climate scenario uncertainty for predictions of Southeastern U.S. pine forest productivity. We applied data assimilation using Metropolis-Hastings Markov chain Monte Carlo to fuse diverse datasets with the Physiological Principles Predicting Growth model. The spatially and temporally diverse datasets allowed for novel constraints on ecosystem model parameters and for quantification of the uncertainty associated with parameterization and model structure (process). Overall, we found that parameter and process uncertainty is higher than climate model uncertainty. We determined that climate change will likely result in an increase in terrestrial carbon storage and that higher emission scenarios increase the uncertainty in our predictions. In addition, we found regional variations in biomass accumulation in response to changes in frost days, temperature, and vapor pressure deficit. Since the uncertainty associated with ecosystem model parameters and processes was larger than the uncertainty associated with climate predictions, our results indicate that better constraining parameters in ecosystem models and improving their mathematical structure can improve future predictions of forest productivity and carbon sequestration. / Master of Science
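
The Metropolis-Hastings sampler named above is generic enough to sketch in a few lines; this is the textbook random-walk variant on a toy posterior, not the thesis' actual assimilation code:

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings sampler.

    log_post : callable returning the log posterior of a parameter vector
    theta0   : (p,) starting parameter values
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Toy usage: posterior of a mean given Gaussian data with unit variance
data = np.random.default_rng(4).normal(2.0, 1.0, 50)
chain = metropolis_hastings(lambda t: -0.5 * np.sum((data - t[0])**2),
                            np.array([0.0]))
print(chain[1000:].mean())   # should sit near 2.0
```

In the thesis' setting, log_post would score ecosystem model parameters against the fused datasets, and the spread of the chain quantifies the parameter uncertainty being compared.
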
140

Efficient Computational Tools for Variational Data Assimilation and Information Content Estimation

Singh, Kumaresh 23 August 2010
The overall goals of this dissertation are to advance the field of chemical data assimilation and to develop efficient computational tools that let the atmospheric science community benefit from state-of-the-art assimilation methodologies. Data assimilation is the procedure of combining data from observations with model predictions to obtain a more accurate representation of the state of the atmosphere. As models become more complex, determining the relationships between pollutants and their sources and sinks becomes computationally more challenging. The construction of an adjoint model (capable of efficiently computing sensitivities of a few model outputs with respect to many input parameters) is a difficult, labor intensive, and error prone task. This work develops adjoint systems for two of the most widely used chemical transport models: Harvard's GEOS-Chem global model and the Environmental Protection Agency's CMAQ regional air quality model. Both the GEOS-Chem and CMAQ adjoint models are now used by the atmospheric science community to perform sensitivity analysis and data assimilation studies. Despite the continuous increase in capabilities, models remain imperfect, and models alone cannot provide accurate long term forecasts. Observations of the atmospheric composition are now routinely taken from sondes, ground stations, aircraft, satellites, and other platforms. This work develops three- and four-dimensional variational data assimilation capabilities for GEOS-Chem and CMAQ which allow the estimation of chemical states that best fit the observed reality. Most data assimilation systems to date use diagonal approximations of the background covariance matrix, which ignore error correlations and may lead to inaccurate estimates. This dissertation develops computationally efficient representations of covariance matrices that capture spatial error correlations in data assimilation. Not all observations used in data assimilation are of equal importance: erroneous and redundant observations not only affect the quality of an estimate but also add unnecessary computational expense to the assimilation system. This work proposes information-theoretic metrics to quantify the information content of observations used in assimilation. The four-dimensional variational approach to data assimilation provides accurate estimates but requires an adjoint construction and considerable computational resources. This work studies versions of the four-dimensional variational method (Quasi 4D-Var) that use approximate gradients and are less expensive to develop and run. Variational and Kalman filter approaches are both used in data assimilation, but their relative merits and disadvantages in the context of chemical data assimilation had not been assessed. This work provides a careful comparison on a chemical assimilation problem with real data sets. The assimilation experiments performed here demonstrate for the first time the benefit of using satellite data to improve estimates of tropospheric ozone. / Ph. D.
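
Two standard information-theoretic metrics of the kind mentioned above, degrees of freedom for signal and Shannon information content, can be computed directly in the linear-Gaussian case. The formulas below are the classical ones; the matrices are illustrative stand-ins, not values from the dissertation:

```python
import numpy as np

def information_metrics(B, R, H):
    """Observation information content for a linear-Gaussian analysis.

    B : (n, n) background error covariance
    R : (m, m) observation error covariance
    H : (m, n) observation operator
    """
    # Analysis error covariance: A = (B^-1 + H^T R^-1 H)^-1
    A = np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)
    # Degrees of freedom for signal: DFS = tr(I - A B^-1)
    dfs = np.trace(np.eye(B.shape[0]) - A @ np.linalg.inv(B))
    # Shannon information content: 0.5 * ln det(B A^-1)
    shannon = 0.5 * np.log(np.linalg.det(B) / np.linalg.det(A))
    return dfs, shannon

# Toy usage: 3 state variables observed by 2 instruments
B = np.diag([1.0, 2.0, 0.5]); R = 0.25 * np.eye(2)
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
print(information_metrics(B, R, H))
```

Observations with near-zero contribution to DFS are the redundant ones whose removal saves computational expense without degrading the estimate.
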
