About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Combined Use of Models and Measurements for Spatial Mapping of Concentrations and Deposition of Pollutants

Ambachtsheer, Pamela January 2004 (has links)
When modelling pollutants in the atmosphere, it is nearly impossible to get perfect results, as the chemical and mechanical processes that govern pollutant concentrations are complex. Results depend on the quality of the meteorological input as well as on the emissions inventory used to run the model, and models cannot currently take every process into consideration. Therefore, the model may produce results that are close to, or show the general trend of, the observed values, but are not perfect. Observations, on the other hand, suffer from the limited number of observation stations, so the resolution of the observational data is poor. Furthermore, the chemistry over large bodies of water differs from land chemistry, and in North America there are no stations located over the Great Lakes or the ocean, so the observed values cannot accurately cover these regions. We have therefore combined model output and observational data when studying ozone concentrations in northeastern North America. We did this by correcting model output at observational sites with local data, then interpolating those corrections across the model grid, using a kriging procedure, to produce results that have the resolution of model results with the local accuracy of the observed values. The corrected model output is much improved over either the model results or the observed values alone, both for sites that were used in the correction process and for sites that were omitted from it.
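The correct-then-interpolate scheme described above can be sketched as follows. This is a minimal illustration, not the thesis code: the station coordinates, correction values, simple-kriging formulation and fixed-length-scale Gaussian covariance are all assumptions made for the example.

```python
import numpy as np

def krige_corrections(station_xy, corrections, grid_xy, length_scale=1.0):
    """Interpolate station-level (observation minus model) corrections onto
    model grid points via simple kriging with a Gaussian covariance."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return np.exp(-(d / length_scale) ** 2)

    K = cov(station_xy, station_xy) + 1e-9 * np.eye(len(station_xy))
    weights = np.linalg.solve(K, corrections)   # simple-kriging weights
    return cov(grid_xy, station_xy) @ weights   # corrections on the grid

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
corr = np.array([2.0, -1.0, 0.5])               # obs-minus-model at stations
grid = np.array([[0.0, 0.0], [0.5, 0.5]])
print(krige_corrections(stations, corr, grid))  # reproduces 2.0 at station 0
```

A production version would estimate the covariance (variogram) from the correction data itself rather than fixing the length scale.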
22

Simulation-inversion of logs

Vandamme, Thibaud 12 November 2018 (has links)
The current geological formation evaluation process is built on a workflow using data from different sources, at different scales (microscopic to kilometric), acquired at very different times. The conventional process of formation evaluation relies on the dedicated interpretation of each of these sources of data and their reconciliation through a synthesis step, often based on statistical considerations (correlation, learning, up-scaling, etc.). One source of data, however, is of central importance: well logs. These physical measurements of different natures (nuclear, acoustic, electromagnetic, etc.) are acquired along the well bore using multiple probes. They are sensitive to the in situ properties of the rock on a scale intermediate between core data and well tests (from centimetres to several metres). Because of their depth of investigation, logs are particularly sensitive to mud-filtrate invasion, a phenomenon which occurs in the near-wellbore environment during drilling. Invasion is conventionally modelled in a rough way at log-interpretation time, as a simple piston effect. This simple model honours the volume balance but does not take the real physics of invasion into account, and thus deprives the logs of any dynamic content. Several attempts to simulate the complete history of invasion coupled with log data have been made by different laboratories, and a rich literature is available on this topic. The major pitfalls of these approaches are the under-determination of the inverse problems derived from such physical models, and the fact that logs are generally acquired over a time interval that is unsuitable for fully characterizing the development of the invasion.

We propose a different approach, which does not attempt to describe the physics of the flow but rather the radial equilibrium of the fluids in the invaded zone at the time the logs are acquired. We show that, by introducing a few additional petrophysical constraints, it is possible to efficiently invert the distribution of dynamic properties for each geological facies. The inversion takes into account radial invasion in the water zone as well as the vertical capillary equilibrium that characterizes the saturation profile in the reservoir for each facies. At each depth of the well, permeabilities, capillary pressures and cementation factors are thus obtained with their uncertainties, along with the petrophysical laws specific to each facies. The method has been applied to two real wells. For validation, the inversion results were compared with laboratory measurements on cores, and the inverted permeabilities were compared with mini-test pressure transients. The consistency of the results shows, on the one hand, that the basic hypotheses of the model are valid and, on the other hand, that the approach provides a reliable estimate of dynamic quantities at any scale for each reservoir facies, as soon as the log data are acquired. The proposed inversion overcomes a major limitation of previous attempts to predict dynamic properties from logs, by reframing the problem not as exact phenomenological modelling but globally, at the scale of a complete interpretation workflow. This allows data to be reconciled at a very early stage, facies of interest to be identified, and the real data needs to be qualified. The tool proves very powerful for qualifying and characterizing the petrophysical heterogeneities of formations, and thus helps address the problem of upscaling dynamic quantities.
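The thesis inverts permeabilities, capillary pressures and cementation factors per facies; the flavour of that kind of per-facies petrophysical inversion can be shown with a much simpler, hedged stand-in: a least-squares estimate of Archie's cementation exponent m from synthetic porosity and resistivity "logs" in a water-bearing zone (Sw = 1). This is not the thesis method, and all values are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
m_true, rw = 2.0, 0.05                # "true" cementation exponent, water resistivity
phi = rng.uniform(0.1, 0.3, size=50)  # synthetic porosity log samples
rt = rw * phi ** (-m_true) * np.exp(0.01 * rng.standard_normal(50))  # noisy Rt log

# Archie's law in a water zone (Sw = 1): Rt = Rw * phi**(-m)
# => log(Rt) = log(Rw) - m * log(phi): a linear least-squares problem.
A = np.column_stack([np.ones_like(phi), -np.log(phi)])
coef, *_ = np.linalg.lstsq(A, np.log(rt), rcond=None)
rw_est, m_est = np.exp(coef[0]), coef[1]
print(m_est)  # close to 2.0
```

In the same spirit, the full inversion solves for several parameters per facies at once, with uncertainties, under the radial-equilibrium and capillary constraints described above.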
23

Ionospheric modelling and data assimilation

Da Dalt, Federico January 2015 (has links)
A New Ionospheric Model (ANIMo), based upon the physics of production, loss, and vertical transport, has been developed. The model is driven by estimates of neutral composition, temperature and solar flux, and is applicable to the mid-latitude regions of the Earth under quiet and moderate geomagnetic conditions. The model was designed to exhibit specific features that were not easy to find together in other existing ionospheric models: ANIMo needed to be simple to use and interact with, relatively accurate, reliable, robust and computationally efficient. These requirements were mostly driven by the intention to use ANIMo in a Data Assimilation (DA) scheme. DA, or data ingestion, is a technique in which observations and model realizations, called background information, are combined to achieve an accuracy higher than that of either element taken separately. In this project ANIMo was developed to provide a robust and reliable background contribution. The observations are Global Positioning System (GPS) ionospheric measurements, collected from several networks of GPS ground-station receivers and available in online repositories. The research benefits from the Multi-Instrument Data Analysis System (MIDAS) [Mitchell and Spencer, 2003; Spencer and Mitchell, 2007], an established ionospheric tomography software package that produces three-dimensional reconstructions of the ionosphere from GPS measurements. Using ANIMo in support of MIDAS therefore has the potential to create a very stable set-up for monitoring and studying the ionosphere. In particular, the model is expected to compensate for some of the typical limitations of ionospheric tomography techniques described by Yeh and Raymund [1991] and Raymund et al. [1994], which are associated with the lack of data due to the uneven distribution of ground-based receivers and to limited viewing angles.

Even in regions of good receiver coverage there is a need to compensate for missing information on the vertical profile of ionisation. MIDAS and other tomography techniques introduce regularization factors that ensure a unique solution to the inversion. These issues could be addressed by aiding the inversion with external information provided by a physical model, such as ANIMo, through a data ingestion scheme; this ensures that the contribution is completely independent and that there is an effective accuracy improvement. Previously, the limitation in vertical resolution has been addressed by applying vertical orthonormal functions based upon empirical models in different ways [Fougere, 1995; Fremouw et al., 1992; Sutton and Na, 1994]. The advantage of a physical model such as ANIMo is that it can provide this information according to the current ionospheric conditions. During the project ANIMo was developed and incorporated into MIDAS. The result is A New Ionospheric Data Assimilation System (ANIDAS); its name reflects the implementation of ANIMo in MIDAS. Because ANIDAS is a data ingestion scheme, it has the potential to perform not only more accurate nowcasting but also forecasting: the ANIDAS output at the current time can be used to initialise ANIMo for the next time step and thereby trigger another assimilation cycle. In future, it is intended that ANIMo will form the basis of a new system to predict the electron density of the ionosphere – ionospheric forecasting.
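The background-plus-observations combination at the heart of such a data ingestion scheme can be sketched with a single optimal-interpolation-style update. This is a generic textbook illustration, not the ANIDAS algorithm; the state, covariances and observation operator below are invented for the example.

```python
import numpy as np

# Analysis update: x_a = x_b + B H^T (H B H^T + R)^-1 (y - H x_b),
# blending the model background x_b with observations y according to
# the background-error (B) and observation-error (R) covariances.
def analysis(x_b, B, H, y, R):
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain: trust in data vs model
    return x_b + K @ (y - H @ x_b)

x_b = np.array([10.0, 12.0, 11.0])  # background state (e.g. electron-density proxy)
B = 4.0 * np.eye(3)                 # background-error covariance
H = np.array([[1.0, 0.0, 0.0]])     # observation operator: only component 0 observed
y = np.array([14.0])                # observed value
R = np.array([[1.0]])               # observation-error covariance
x_a = analysis(x_b, B, H, y, R)
print(x_a)  # [13.2, 12.0, 11.0]: observed component pulled toward y by 4/(4+1)
```

With off-diagonal structure in B (here diagonal for simplicity), a single observation would also update the unobserved components, which is what lets a physical background fill in the vertical profile.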
24

Calibration of plant functional type parameters using the adJULES system

Raoult, Nina January 2017 (has links)
Land-surface models (LSMs) are crucial components of the Earth system models (ESMs) that are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. JULES is also extensively used offline as a land-surface impacts tool, forced with climatologies into the future. In this study, JULES is automatically differentiated with respect to JULES parameters using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameters by calibrating against observations. This thesis describes adJULES in a data assimilation framework and demonstrates its ability to improve the model-data fit using eddy-covariance measurements of gross primary productivity (GPP) and latent heat (LE) fluxes. The adJULES system is extended to have the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 85% of the sites used in the study, at both the calibration and evaluation stages. The new improved parameters for JULES are presented along with the associated uncertainties for each parameter. The results of the calibrations are compared to structural changes and used in a cluster analysis in order to challenge the PFT definitions in JULES. This thesis concludes with simple sensitivity studies which assess how the calibration of JULES has affected the sensitivity of the model to CO2-induced climate change.
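The adjoint-based calibration described above can be caricatured in a few lines: a cost function measures the model-data misfit, an analytic gradient (the role the adjoint plays in adJULES) is computed, and gradient descent searches for a local optimum. The exponential toy model, parameter values and step size below are all assumptions; the real system differentiates JULES itself with FastOpt software.

```python
import numpy as np

# Toy "model": f(p) = p0 * exp(-p1 * t); synthetic observations from truth (3.0, 1.5).
t = np.linspace(0.0, 2.0, 20)
obs = 3.0 * np.exp(-1.5 * t)

def cost_and_grad(p):
    model = p[0] * np.exp(-p[1] * t)
    r = model - obs                                  # misfit residuals
    J = 0.5 * np.sum(r ** 2)                         # quadratic misfit cost
    dJ0 = np.sum(r * np.exp(-p[1] * t))              # dJ/dp0 (analytic gradient,
    dJ1 = np.sum(r * (-p[0] * t * np.exp(-p[1] * t)))  # standing in for the adjoint)
    return J, np.array([dJ0, dJ1])

p = np.array([1.0, 1.0])                             # initial parameter guess
for _ in range(2000):
    J, g = cost_and_grad(p)
    p -= 0.02 * g                                    # steepest-descent step
print(p)  # approaches the true parameters [3.0, 1.5]
```

adJULES additionally propagates parameter uncertainties and calibrates over many flux-tower sites simultaneously, which this sketch omits.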
25

Contribution of Argo data to characterizing model errors and constraining data assimilation systems

Ninove, Floriane 17 November 2015 (has links)
The international Argo program has revolutionized observation of the global ocean. An array of more than 3000 autonomous profiling floats is in place, providing global measurements of temperature and salinity profiles over the first 2000 meters of the ocean. These measurements are assimilated into ocean models, together with satellite observations, to describe and forecast the ocean state. In this thesis we propose an analysis that characterizes the errors of a global ocean model by comparison with Argo data. Model errors are described through their amplitude, their regional and temporal variations, and their associated spatial scales. In particular, the spatial scales of both the ocean variability and the model errors are characterized, allowing the structure of the model errors to be related to that of the signal. Finally, techniques based on information content are tested, with the longer-term aim of better quantifying the impact of Argo observations on the Mercator Ocean data assimilation systems.
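One simple way to estimate a spatial error scale of the kind discussed above is to find the separation at which the empirical autocorrelation of an error field drops below 1/e. The 1-D synthetic field, moving-average correlation model and 1/e criterion here are assumptions for illustration, not the thesis's actual estimator.

```python
import numpy as np

# Build a synthetic correlated "model error" field: white noise smoothed
# with a moving average of width 10 (so the true scale is of that order).
rng = np.random.default_rng(1)
n, smooth = 4000, 10
white = rng.standard_normal(n + smooth)
err = np.convolve(white, np.ones(smooth) / smooth, mode="valid")

def decorrelation_lag(x):
    """Smallest lag at which the empirical autocorrelation falls below 1/e."""
    x = x - x.mean()
    for lag in range(1, len(x)):
        r = np.corrcoef(x[:-lag], x[lag:])[0, 1]
        if r < np.exp(-1.0):
            return lag
    return len(x)

print(decorrelation_lag(err))  # on the order of the smoothing window
```

Applied to model-minus-Argo differences binned by separation distance, the same idea yields geographic maps of error length scales.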
26

Impact of Assimilating Airborne Doppler Radar Winds on the Inner-Core Structure and Intensity of Hurricane Ike (2008)

Gordon, Ronald Walter 26 July 2011 (has links)
Accurate prediction of tropical cyclones (TCs) is vital for the protection of life and property in areas prone to their destructive forces. While significant improvements have been made in forecasting TC track, intensity remains a challenge. It is hypothesized that an accurate TC intensity forecast requires, among other things, an adequate initial description of the inner-core region. This in turn requires reliable observations of the TC inner-core area and effective data assimilation (DA) methods to ingest these data into Numerical Weather Prediction (NWP) models. These requirements are seldom met, given the relatively low resolution of operational global prediction models and the lack of routine observations assimilated in the TC inner core. This study tests the impact of assimilating inner-core Airborne Doppler Radar (ADR) winds on the initial structure and subsequent intensity forecast of Hurricane Ike (2008). The four-dimensional variational (4DVar) and three-dimensional variational (3DVar) methods are used to perform DA, while the Weather Research and Forecasting (WRF) model is used to perform forecasts. It is found that assimilating the data helps to initialize a more realistic inner-core structure with both DA methods, and that the resulting short-term and long-term intensity forecasts are more accurate than in cases with no DA. Furthermore, in some cases the impact of DA lasts up to 12 hours longer with 4DVar than with 3DVar, because the flow-dependent 4DVar method produces more dynamically balanced analysis increments than the static, isotropic increments of 3DVar. However, the difference between the two methods is minimal at long range: the analyses show that at longer forecast ranges the dynamics of Hurricane Ike were influenced more by outer environmental features than by the inner-core winds.
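What distinguishes 4DVar from 3DVar is that it fits observations distributed over a time window through the model dynamics. With linear dynamics and no background term, that reduces to one least-squares problem for the initial state, which can be shown in a few lines. The 2-state toy model and observing geometry below are invented for illustration; they are unrelated to WRF or the hurricane system.

```python
import numpy as np

# Minimal strong-constraint "4DVar": find the initial state x0 whose
# trajectory under x_{k+1} = M x_k best fits observations at times 0..4.
M = np.array([[0.9, 0.1], [0.0, 0.95]])  # toy linear forecast model
H = np.array([[1.0, 0.0]])               # observe first component only
x_true0 = np.array([1.0, 2.0])

obs, x = [], x_true0.copy()
for _ in range(5):
    obs.append(H @ x)                    # observation at this time
    x = M @ x                            # advance the model

# Stacking H M^k rows turns the whole window into one linear least-squares fit.
G = np.vstack([H @ np.linalg.matrix_power(M, k) for k in range(5)])
y = np.concatenate(obs)
x0_est, *_ = np.linalg.lstsq(G, y, rcond=None)
print(x0_est)  # recovers [1.0, 2.0]
```

Note that the unobserved second component is recovered only because the dynamics couple it to the observed one over the window; this flow dependence is the source of the more balanced 4DVar increments mentioned above.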
27

Dynamical aspects of atmospheric data assimilation in the tropics

Žagar, Nedjeljka January 2004 (has links)
A faithful depiction of the tropical atmosphere requires three-dimensional sets of observations. Despite the increasing number of observations presently available, these will hardly ever encompass the entire atmosphere and, in addition, observations have errors. Additional (background) information will always be required to complete the picture. Valuable added information comes from the physical laws governing the flow, usually mediated via a numerical weather prediction (NWP) model. These models are, however, never going to be error-free, which is why a reliable estimate of their errors poses a real challenge, since the whole truth will never be within our grasp. The present thesis addresses the question of improving the analysis procedures for NWP in the tropics. Improvements are sought by addressing the following issues: - the efficiency of the internal model adjustment, - the potential of reliable background-error information, as compared to observations, - the impact of new, space-borne line-of-sight wind measurements, and - the usefulness of multivariate relationships for data assimilation in the tropics. Most NWP assimilation schemes are effectively univariate near the equator. In this thesis, a multivariate formulation of variational data assimilation in the tropics has been developed. The proposed background-error model supports mass-wind coupling based on convectively coupled equatorial waves. The resulting assimilation model produces balanced analysis increments and thereby increases the efficiency of all types of observations. Idealized adjustment and multivariate analysis experiments highlight the importance of direct wind measurements in the tropics. In particular, the presented results confirm the superiority of wind observations over mass data, even when exact multivariate relationships are available from the background information. The internal model adjustment is also more efficient for wind observations than for mass data.

In accordance with these findings, new satellite wind observations are expected to contribute to the improvement of NWP and climate modelling in the tropics. Although incomplete, the new wind-field information has the potential to reduce uncertainties in the tropical dynamical fields if used together with the existing satellite mass-field measurements. The results obtained by applying the new background-error representation to the tropical short-range forecast errors of a state-of-the-art NWP model suggest that achieving useful tropical multivariate relationships may be feasible within an operational NWP environment.
29

Streamline Assisted Ensemble Kalman Filter - Formulation and Field Application

Devegowda, Deepak 2009 August 1900 (has links)
The goal of any data assimilation or history matching algorithm is to enable better reservoir management decisions through the construction of reliable reservoir performance models and the assessment of the underlying uncertainties. A considerable body of research and enhanced computational capabilities have led to increased application of robust and efficient history matching algorithms to condition reservoir models to dynamic data. Moreover, there has been a shift towards generating multiple plausible reservoir models, in recognition of the significance of the associated uncertainties; this provides for uncertainty analysis in reservoir performance forecasts, enabling better management decisions for reservoir development. Additionally, the increased deployment of permanent well sensors and downhole monitors has led to growing interest in maintaining 'live' models that are current and consistent with historical observations. One data assimilation approach that has gained popularity in the recent past is the Ensemble Kalman Filter (EnKF) (Evensen 2003), a Monte Carlo approach that generates a suite of plausible subsurface models conditioned to previously obtained measurements. One advantage of the EnKF is its ability to integrate different types of data at different scales, allowing a framework in which all available dynamic data are utilized, simultaneously or sequentially, to improve estimates of the reservoir model parameters. Of particular interest is the use of partitioning tracer data to infer the location and distribution of target unswept oil. Because it is difficult to separate the relative effects on the tracer response of spatial variations in fractional flow, fluid saturations, and partitioning coefficients, interpretation of partitioning tracer responses is particularly challenging in the presence of mobile oil saturations.
The purpose of this research is to improve the performance of the EnKF in parameter estimation for reservoir characterization studies without the use of a large ensemble size so as to keep the algorithm efficient and computationally inexpensive for large, field-scale models. To achieve this, we propose the use of streamline-derived information to mitigate problems associated with the use of the EnKF with small sample sizes and non-linear dynamics in non-Gaussian settings. Following this, we present the application of the EnKF for interpretation of partitioning tracer tests specifically to obtain improved estimates of the spatial distribution of target oil.
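The core EnKF analysis step referred to above (in the style of Evensen 2003) fits in a short sketch: covariances are estimated from the ensemble itself, and each member is updated against a perturbed observation. The state size, ensemble size and observation values are invented for illustration; a reservoir application would carry permeability fields and saturations instead of a 3-vector.

```python
import numpy as np

# One stochastic EnKF analysis step with an ensemble of 50 members.
rng = np.random.default_rng(2)
n_ens, n_state = 50, 3
ensemble = rng.standard_normal((n_ens, n_state)) + np.array([5.0, 0.0, 0.0])
H = np.array([[1.0, 0.0, 0.0]])        # observe the first state variable
y, r_var = np.array([7.0]), 0.5        # observation and its error variance

X = ensemble - ensemble.mean(axis=0)   # ensemble anomalies
P = X.T @ X / (n_ens - 1)              # sample background covariance
S = H @ P @ H.T + r_var * np.eye(1)    # innovation covariance
K = P @ H.T @ np.linalg.inv(S)         # Kalman gain estimated from the ensemble

for i in range(n_ens):
    y_pert = y + np.sqrt(r_var) * rng.standard_normal(1)  # perturbed observation
    ensemble[i] += K @ (y_pert - H @ ensemble[i])

print(ensemble[:, 0].mean())  # pulled from ~5 toward the observation 7
```

With small ensembles the sampled covariance P is noisy, which is exactly the problem the streamline-derived conditioning in this work aims to mitigate.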
30

Inverse modelling to forecast enclosure fire dynamics

Jahn, Wolfram January 2010 (has links)
Despite advances in the understanding of fire dynamics over the past decades, and despite the advances in computational capacity, our ability to predict the behaviour of fires in general, and building fires in particular, remains very limited. This thesis proposes and studies a method that uses measurements of the real event to steer and accelerate fire simulations. The technology aims at providing forecasts of the fire development with a positive lead time, i.e. the forecast of future events is ready before those events take place. A simplified fire spread model is implemented, and sensor data are assimilated into the model in order to estimate the parameters that characterize the spread model, thus recovering information lost through the approximations. The assimilation process is posed as an inverse problem, solved by minimizing a nonlinear cost function that measures the distance between the sensor data and the forward model. In order to accelerate the optimization procedure, the 'tangent linear model' is implemented, i.e. the forward model is linearized around the initial guess of the governing parameters to be estimated, thus approximating the cost function by a quadratic function. The methodology was tested first with a simple two-zone forward model, and then with a coarse-grid Computational Fluid Dynamics (CFD) fire model as the forward model. Observations for the inverse modelling were generated using a fine-grid CFD simulation in order to illustrate the methodology. A test case with observations from a real-scale fire test is presented at the end of this document. In the two-zone model approach the spread rate, entrainment coefficient and gas transport time are the governing invariant parameters that are estimated. The parameters were estimated correctly, and the temperature and height of the hot layer were reproduced satisfactorily. Moreover, the heat release rate and growth rate were estimated correctly with a positive lead time of up to 30 s.

The results showed that the simple mass and heat balances and the plume correlation of the zone model were enough to satisfactorily forecast the main features of the fire, and that positive lead times are possible. With the CFD forward model, the growth rate, fuel mass loss rate and other parameters of a fire were estimated by assimilating measurements from the fire into the model. It was shown that with a field-type forward model it is possible to estimate several different spread rates simultaneously. A coarse-grid CFD model with very short computation times was used to assimilate measurements, and it was shown that spatially resolved forecasts can be obtained in reasonable time when combined with observations from the fire. The assimilation of observations from a real-scale fire test into a coarse-grid CFD model showed that the estimation of a fire growth parameter is possible in complicated scenarios in reasonable time, and that the resulting forecasts at a localized level are acceptably accurate. The proposed methodology is still subject to ongoing research. The limited capability of the forward model to represent the true fire has to be addressed in more detail, and the additional information that has to be provided in order to run the simulations has to be investigated. When using a CFD-type forward model, in addition to the detailed geometry, it is necessary to establish the location of the fire origin and the potential fuel load before starting the assimilation cycle. While the fire origin can be located easily (as a first approximation, the location of the highest temperature reading can be used), the fuel load is potentially very variable and its exact distribution might be impractical to track continually.

It was shown, however, that for relatively small compartments the exact fuel distribution is not essential for producing an adequate forecast, and the fuel load could, for example, be established from a statistical analysis of typical compartment layouts.
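The tangent-linear acceleration described above amounts to replacing the nonlinear forward model by its linearization around the current guess, so that one Gauss-Newton least-squares step gives the parameter update. The toy t-squared growth model and its parameter value below are assumptions for the sketch; a fire model would only be approximately linear in its parameters, requiring a few such relinearized steps.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 30)

def f(p):
    """Toy 'fire growth' forward model: heat release ~ alpha * t**2."""
    return p * t ** 2

obs = f(0.047)        # synthetic sensor observations, true alpha = 0.047
p0 = 0.02             # initial guess of the growth parameter
jac = t ** 2          # tangent linear model: df/dp around p0

# Gauss-Newton step: minimize ||f(p0) + jac*(p - p0) - obs||^2 in closed form.
dp = np.dot(jac, obs - f(p0)) / np.dot(jac, jac)
p1 = p0 + dp
print(p1)  # 0.047: one step is exact here because f is linear in p
```

The quadratic approximation of the cost is what makes each assimilation cycle cheap enough to deliver a forecast with positive lead time.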
