  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Computational Tools for Chemical Data Assimilation with CMAQ

Gou, Tianyi 15 February 2010 (has links)
The Community Multiscale Air Quality (CMAQ) system is the Environmental Protection Agency's main modeling tool for atmospheric pollution studies. CMAQ-ADJ, the adjoint model of CMAQ, offers new analysis capabilities such as receptor-oriented sensitivity analysis and chemical data assimilation. This thesis presents the construction, validation, and properties of new adjoint modules in CMAQ, and illustrates their use in sensitivity analyses and data assimilation experiments. The new discrete adjoint module for advection is implemented with the aid of the automatic differentiation tool TAMC and is fully validated by comparing the adjoint sensitivities with finite difference values. In addition, adjoint sensitivities with respect to boundary conditions and boundary condition scaling factors are developed and validated in CMAQ. To investigate numerically the impact of the continuous and discrete advection adjoints on data assimilation, various four-dimensional variational (4D-Var) data assimilation experiments are carried out with the 1D advection PDE, and with CMAQ advection using synthetic and real observation data. The results show that the optimization procedure gives better estimates of the reference initial condition and converges faster when using gradients computed by the continuous adjoint approach. This counter-intuitive result is explained by the nonlinearity properties of the piecewise parabolic method (the numerical discretization of advection in CMAQ). Data assimilation experiments are carried out using real observation data: the simulation domain encompasses Texas and the simulation period is August 30 to September 1, 2006. Data assimilation is used to improve both initial and boundary conditions. These experiments further validate the tools developed in this thesis. / Master of Science
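The finite-difference validation described above can be sketched on a toy problem. The snippet below is a hedged illustration, not the CMAQ/TAMC code: the periodic first-order upwind discretization and all names are assumptions. It propagates a 1D advection state forward, computes the discrete adjoint gradient of a linear cost functional, and checks it against central finite differences:

```python
import numpy as np

def upwind_matrix(n, cfl=0.5):
    # periodic first-order upwind discretization of 1D advection
    A = np.eye(n) * (1.0 - cfl)
    for i in range(n):
        A[i, (i - 1) % n] = cfl
    return A

def forward(x0, A, steps):
    x = x0.copy()
    for _ in range(steps):
        x = A @ x
    return x

def adjoint_gradient(c, A, steps):
    # discrete adjoint: run the transposed model backward from the
    # cost-functional weights c, giving dJ/dx0 for J = c . x_final
    lam = c.copy()
    for _ in range(steps):
        lam = A.T @ lam
    return lam

n, steps = 20, 10
rng = np.random.default_rng(0)
A = upwind_matrix(n)
x0 = rng.standard_normal(n)
c = rng.standard_normal(n)
g_adj = adjoint_gradient(c, A, steps)

# central finite-difference check of a few gradient components
eps = 1e-6
for i in range(3):
    e = np.zeros(n)
    e[i] = eps
    g_fd = (c @ forward(x0 + e, A, steps) - c @ forward(x0 - e, A, steps)) / (2 * eps)
    assert abs(g_fd - g_adj[i]) < 1e-6
```

Because this toy advection operator is linear, the adjoint gradient matches the finite differences to rounding error; for a nonlinear model like the piecewise parabolic method the agreement would only hold to O(eps).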
42

Large-Scale Simulations Using First and Second Order Adjoints with Applications in Data Assimilation

Zhang, Lin 23 July 2007 (has links)
In large-scale air quality simulations we are interested in the factors that influence changes in pollutant concentrations, and in optimization methods that improve forecasts. The solutions to these problems can be achieved by incorporating adjoint models, which are efficient in computing the derivatives of a functional with respect to a large number of model parameters. In this research we employ first-order adjoints in air quality simulations. Moreover, we explore theoretically the computation of second-order adjoints for chemical transport models, and illustrate their feasibility in several aspects. We apply first-order adjoints to sensitivity analysis and data assimilation. Through sensitivity analysis, we can discover the area that has the largest influence on changes of ozone concentrations at a receptor. For data assimilation with optimization methods which use first-order adjoints, we assess their performance under different scenarios. The results indicate that the L-BFGS method is the most efficient. In contrast with first-order adjoints, second-order adjoints have not been used to date in air quality simulation. To explore their utility, we show the construction of second-order adjoints for chemical transport models and demonstrate several applications including sensitivity analysis, optimization, uncertainty quantification, and Hessian singular vectors. Since second-order adjoints provide second-order information in the form of Hessian-vector products instead of the entire Hessian matrix, it is possible to implement applications that require second-order derivatives even for large-scale models. Finally, we conclude that second-order adjoints for chemical transport models are computationally feasible and effective. / Master of Science
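The key point about second-order adjoints — that they deliver Hessian-vector products at roughly the cost of two gradient (first-order adjoint) evaluations, without ever forming the Hessian — can be illustrated on a quadratic cost. This is a generic sketch, not the thesis code; the quadratic cost and all names are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
M = rng.standard_normal((n, n))
H = M.T @ M + n * np.eye(n)        # SPD Hessian of an assumed quadratic cost
b = rng.standard_normal(n)

def gradient(x):
    # what a first-order adjoint would return for J(x) = 0.5 x'Hx + b'x
    return H @ x + b

def hessian_vector(x, v, eps=1e-5):
    # second-order information as a Hessian-vector product: two gradient
    # evaluations instead of forming and storing the n x n Hessian
    return (gradient(x + eps * v) - gradient(x - eps * v)) / (2.0 * eps)

x = rng.standard_normal(n)
v = rng.standard_normal(n)
Hv = hessian_vector(x, v)
assert np.allclose(Hv, H @ v, rtol=1e-6, atol=1e-4)
```

The finite difference of gradients here is a stand-in: a true second-order adjoint computes the same product exactly via an extra forward/backward sweep, but the memory argument is identical — only the product Hv is ever materialized, which is what makes uncertainty quantification and Hessian singular vectors tractable at scale.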
43

Data Assimilation Experiments Using An Indian Ocean General Circulation Model

Aneesh, C S 08 1900 (has links)
Today, ocean modeling is fast developing as a versatile tool for the study of the earth’s climate, local marine ecosystems, and coastal engineering applications. Though the field of ocean modeling began in the early 1950s, along with the development of climate models and primitive computers, even today state-of-the-art ocean models have their own limitations. Many issues still remain, such as the uncertainty in the parameterisation of essential processes that occur on spatial and temporal scales smaller than can be resolved in model calculations, the atmospheric forcing of the ocean, and the boundary and initial conditions. The advent of data assimilation into ocean modeling has heralded a new era in the field of ocean modeling and oceanic sciences. “Data assimilation” is a methodology in which observations are used to improve the forecasting skill of operational meteorological models. The present thesis mainly focuses on obtaining a four-dimensional realization (the spatial description coupled with the time evolution) of the oceanic flow that is simultaneously consistent with the observational evidence and with the dynamical equations of motion, and on providing initial conditions for predictions of oceanic circulation and tracer distribution. A good implementation of data assimilation requires the availability of a large number of good-quality observations of the oceanic fields, as both synoptic and in-situ data. With the technology of satellite oceanography and in-situ measurements advancing by leaps and bounds over the past two decades, good synoptic and in-situ observations of oceanic fields have been achieved. The current and expected explosion in remotely sensed and in-situ measured oceanographic data is ushering in a new age of ocean modeling and data assimilation. The thesis presents results of an analysis of the impact of data assimilation in an ocean general circulation model of the North Indian Ocean.
In this thesis we have studied the impact of assimilating temperature and salinity profiles from Argo floats and sea surface height anomalies from satellite altimeters in a sigma-coordinate Indian Ocean model. An ocean data assimilation system based on the Regional Ocean Modeling System (ROMS) for the Indian Ocean is used. The model is implemented, validated, and applied in a climatological simulation experiment to study the circulation in the Indian Ocean. The validated model is then used for the implementation of the data assimilation system for the Indian Ocean region. This dissertation presents qualitative and quantitative comparisons of the model simulations with and without assimilation of subsurface temperature and salinity profiles and sea surface height anomaly data for the Indian Ocean region. This is the first reported data assimilation study of Argo subsurface temperature and salinity profile data with ROMS in the Indian Ocean region.
44

Efficient formulation and implementation of ensemble based methods in data assimilation

Nino Ruiz, Elias David 11 January 2016 (has links)
Ensemble-based methods have gained widespread popularity in the field of data assimilation. An ensemble of model realizations encapsulates information about the error correlations driven by the physics and the dynamics of the numerical model. This information can be used to obtain improved estimates of the state of non-linear dynamical systems such as the atmosphere and/or the ocean. This work develops efficient ensemble-based methods for data assimilation. A major bottleneck in ensemble Kalman filter (EnKF) implementations is the solution of a linear system at each analysis step. To alleviate it, an EnKF implementation based on an iterative Sherman-Morrison formula is proposed. The rank deficiency of the ensemble covariance matrix is exploited in order to efficiently compute the analysis increments during the assimilation process. The computational effort of the proposed method is comparable to that of the best EnKF implementations found in the current literature. The stability of the new algorithm is proven theoretically based on the positive definiteness of the data error covariance matrix. In order to improve the background error covariance matrices in ensemble-based data assimilation, we explore the use of shrinkage covariance matrix estimators from ensembles. The resulting filter has attractive features in terms of both memory usage and computational complexity. Numerical results show that it performs better than traditional EnKF formulations. In geophysical applications the correlations between errors corresponding to distant model components decrease rapidly with distance. We propose a new and efficient implementation of the EnKF based on a modified Cholesky decomposition for inverse covariance matrix estimation. This approach exploits the conditional independence of background errors between distant model components with regard to a predefined radius of influence.
Consequently, sparse estimators of the inverse background error covariance matrix can be obtained. This implies huge memory savings during the assimilation process under realistic weather forecast scenarios. Rigorous error bounds for the resulting estimator in the context of data assimilation are proved theoretically. The conclusion is that the resulting estimator converges to the true inverse background error covariance matrix when the ensemble size is of the order of the logarithm of the number of model components. We explore high-performance implementations of the proposed EnKF algorithms. When the observational operator can be locally approximated for different regions of the domain, efficient parallel implementations of the EnKF formulations presented in this dissertation can be obtained. The parallel computation of the analysis increments is performed making use of domain decomposition: local analysis increments are computed on (possibly) different processors, and once all local analysis increments have been computed they are mapped back onto the global domain to recover the global analysis. Tests performed with an atmospheric general circulation model at T-63 resolution, varying the number of processors from 96 to 2,048, reveal that the assimilation time can be decreased multiple fold for all the proposed EnKF formulations. Ensemble-based methods can also be used to reformulate strong-constraint four-dimensional variational data assimilation so as to avoid the construction of adjoint models, which can be complicated for operational models. We propose a trust region approach based on ensembles in which the analysis increments are computed in the space of an ensemble of snapshots. The quality of the resulting increments in the ensemble space is compared against the gains in the full space, and decisions on whether to accept or reject solutions rely on trust region updating formulas.
Results based on an atmospheric general circulation model at T-42 resolution reveal that this methodology can improve the analysis accuracy. / Ph. D.
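The iterative Sherman-Morrison idea can be sketched as follows: the observation-space matrix to invert is R plus a sum of rank-one terms built from the ensemble perturbations, so its inverse can be accumulated by folding in one Sherman-Morrison update per member. This is a minimal illustration under the assumption of diagonal observation errors, not the dissertation's implementation:

```python
import numpy as np

def enkf_analysis_sm(X, y, Hop, r, rng):
    """Stochastic EnKF analysis step. The observation-space matrix
    W = R + S S^T/(N-1) is inverted by iterative rank-one
    Sherman-Morrison updates, one per ensemble member, instead of
    a direct dense solve."""
    n, N = X.shape
    xb = X.mean(axis=1, keepdims=True)
    Xp = X - xb                                  # state perturbations
    S = Hop @ Xp                                 # observed perturbations (m x N)
    m = S.shape[0]
    Winv = np.eye(m) / r                         # R^{-1}, with R = r*I assumed here
    for k in range(N):                           # fold in one rank-one term per member
        u = S[:, k:k + 1] / np.sqrt(N - 1)
        Wu = Winv @ u
        Winv -= (Wu @ Wu.T) / (1.0 + (u.T @ Wu).item())
    K = (Xp @ S.T / (N - 1)) @ Winv              # Kalman gain
    Yp = y[:, None] + np.sqrt(r) * rng.standard_normal((m, N))  # perturbed obs
    return X + K @ (Yp - Hop @ X), K

rng = np.random.default_rng(0)
n, N = 8, 5
X = rng.standard_normal((n, N))                  # background ensemble
Hop = np.eye(n)                                  # observe every component
y = rng.standard_normal(n)
Xa, K = enkf_analysis_sm(X, y, Hop, r=0.5, rng=rng)

# cross-check the iterative inverse against a direct solve
Xp = X - X.mean(axis=1, keepdims=True)
S = Hop @ Xp
K_direct = (Xp @ S.T / (N - 1)) @ np.linalg.inv(0.5 * np.eye(n) + S @ S.T / (N - 1))
assert np.allclose(K, K_direct)
```

The rank deficiency exploited in the thesis shows up here as the loop length: only N rank-one updates are needed regardless of how large the state dimension n is.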
45

LAND SURFACE-ATMOSPHERE INTERACTIONS IN REGIONAL MODELING OVER SOUTH AMERICA

Goncalves de Goncalves, Luis Gustavo January 2005 (has links)
Land surface processes play an important role when modeling weather and climate, and understanding and representing such processes in South America is a particular challenge because of the large variations in regional climate and in surface features such as vegetation and soil. Numerical models have been used to explore the climate and weather of continental South America, but without appropriate initialization of land surface conditions model simulations can rapidly diverge from reality. This initialization problem is exacerbated by the fact that conventional surface observations over South America are scarce and biased towards the urban centers and coastal areas. This dissertation explores issues related to the apt representation of land surface processes and their impacts in numerical simulations with a regional atmospheric model (specifically the Eta model) over South America. The impacts of vegetation heterogeneity on regional weather forecasts were first investigated. A South American Land Data Assimilation System (SALDAS) was then created, analogous to that currently used in North America, to estimate soil moisture fields for initializing regional atmospheric models. The land surface model (LSM) used in this SALDAS is the Simplified Simple Biosphere (SSiB). Precipitation fields are critical when calculating soil moisture and, because conventional surface observations are scarce in South America, some of the most important remotely sensed precipitation products were evaluated as potential precipitation forcing for the SALDAS. Spin-up states for SSiB were then compared with climatological estimates of land surface fields, and significant differences were found. Finally, an assessment was made of the value of SALDAS-derived soil moisture fields in Eta model forecasts. The primary result was that model performance is enhanced over the entire continent in up to 72-h forecasts using SALDAS surface fields.
46

Simulating the carbon cycling of croplands : model development, diagnosis, and regional application through data assimilation

Sus, Oliver January 2012 (has links)
In the year 2000, croplands covered about 12% of the Earth’s ice-free land surface. Through cropland management, humankind currently appropriates about 25% of terrestrial ecosystem productivity. Croplands are not only a key element of human food supply, but also hold potential for increased carbon (C) uptake when best-practice land management approaches are adopted. A detailed assessment of the impact of land use on terrestrial ecosystems can be achieved by modelling, but the simulation of crop C cycling itself is a relatively new discipline. Observational data on crop net ecosystem exchange (NEE) have become available only recently, and constitute an important tool for model development, diagnosis, and validation. Before crop functional types (CFTs) were introduced, however, large-scale biogeochemical models (BGCMs) lacked crop-specific patterns of phenology, C allocation, and land management. As a consequence, the influence of cropland C cycling on the seasonality and magnitude of biosphere-atmosphere C exchange is currently poorly known. To date, no regional assessment of crop C cycling and yield formation exists that specifically accounts for spatially and temporally varying patterns of sowing dates within models. In this thesis, I present such an assessment for the first time. In the first step (chapter 2), I built a crop C mass balance model (SPAc) that simulates crop development and C allocation as a response to ambient meteorological conditions. I compared model outputs against C flux and stock observations from six different sites in Europe, and found a high degree of agreement between simulated and measured fluxes (R2 = 0.83). However, the model tended to overestimate leaf area index (LAI) and underestimate final yield.
In a model comparison study (chapter 3), conducted in cooperation with further researchers, I found that SPAc best reproduces observed fluxes of C and water (owing to the model’s high temporal and process resolution), but is limited by its inability to simulate full crop rotations. I then conducted a detailed diagnosis of SPAc through the assimilation of C fluxes and biometry with the Ensemble Kalman Filter (EnKF, chapter 4), and identified potential model weaknesses in the C allocation fractions and plant hydraulics. Further, an overestimation of plant respiration and of seasonal leaf thickness variability was evident. Temporal parameter variability in response to C flux data assimilation (DA) is indicative of ecosystem processes that are resolved in the NEE data but not captured by the model’s structure. Through DA, I gained important quantitative insights into model shortcomings, and highlighted further needs for model improvement and future field studies. Finally, I developed a framework allowing for spatio-temporally resolved simulation of cropland C fluxes under observational constraints on land management and canopy greenness (chapter 5). MODIS (Moderate Resolution Imaging Spectroradiometer) data were assimilated into SPAc both variationally (for sowing date estimation) and sequentially (for improved model state estimation, using the EnKF). In doing so, I was able to accurately quantify the multiannual (2000-2006) regional C flux and biometry seasonality of maize-soybean crop rotations surrounding the Bondville Ameriflux eddy covariance (EC) site, averaged over 104 pixel locations within the wider area. Results show that MODIS-derived sowing dates and the assimilation of LAI data allow for highly accurate simulations of growing-season C cycling at locations for which ground-truth sowing dates are not available.
Through quantification of the spatial variability in biometry, NEE, and net biome productivity (NBP), I found that regional patterns of land management are important drivers of agricultural C cycling and major sources of uncertainty if not appropriately accounted for: observing C cycling at one single field with its individual sowing pattern is not sufficient to constrain large-scale agroecosystem behaviour. The framework developed here enables modellers to accurately simulate the current (i.e. last 10 years) C cycling of major agricultural regions and their contribution to atmospheric CO2 variability. Follow-up studies can provide crucial insights by testing and validating large-scale applications of biogeochemical models.
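The variational sowing-date estimation step can be caricatured as a scalar grid search: given noisy greenness observations (standing in for a MODIS composite), minimize the model-data misfit over candidate sowing dates. Everything here — the logistic green-up curve, its parameters, the 8-day sampling — is an invented toy, not SPAc:

```python
import numpy as np

rng = np.random.default_rng(11)

def greenness(doy, sow):
    # toy canopy greenness: logistic green-up centered ~40 days after sowing
    return 1.0 / (1.0 + np.exp(-0.15 * (doy - sow - 40.0)))

days = np.arange(0.0, 200.0, 8.0)       # ~8-day composites, MODIS-like cadence
sow_true = 120.0
y = greenness(days, sow_true) + 0.05 * rng.standard_normal(days.size)

# variational estimate: minimize the squared misfit over candidate sowing dates
candidates = np.arange(80.0, 160.0, 1.0)
costs = [np.sum((greenness(days, s) - y) ** 2) for s in candidates]
sow_est = candidates[int(np.argmin(costs))]
assert abs(sow_est - sow_true) < 10.0
```

The same cost function could equally be minimized by gradient descent; a grid over one scalar parameter keeps the sketch transparent.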
47

Data Assimilation for Spatial Temporal Simulations Using Localized Particle Filtering

Long, Yuan 15 December 2016 (has links)
As sensor data become more and more available, there is increasing interest in assimilating real-time sensor data into spatial temporal simulations to achieve more accurate simulation or prediction results. Particle Filters (PFs), also known as Sequential Monte Carlo methods, hold great promise in this area, as they use Bayesian inference and stochastic sampling techniques to recursively estimate the states of dynamic systems from given observations. However, PFs face major challenges in working effectively for complex spatial temporal simulations due to the high-dimensional state space of the simulation models, which typically cover large areas and have a large number of spatially dependent state variables. As the state space dimension increases, the number of particles must increase exponentially in order to converge to the true system state. The purpose of this dissertation work is to develop localized particle filtering to support PF-based data assimilation for large-scale spatial temporal simulations. We develop a spatially dependent particle-filtering framework that breaks the system state and observation data into sub-regions and then carries out localized particle filtering based on these spatial regions. The developed framework exploits the spatial locality of the system state and observation data, and employs the divide-and-conquer principle to reduce state dimension and data complexity. Within this framework, we propose a two-level automated spatial partitioning method to provide optimized and balanced spatial partitions with fewer boundary sensors. We also consider different types of data to effectively support data assimilation for spatial temporal simulations. These data include both hard data, which are measurements from physical devices, and soft data, which are information from messages, reports, and social networks. The developed framework and methods are applied to large-scale wildfire spread simulations and achieve improved results.
Furthermore, we compare the proposed framework to existing particle-filtering-based data assimilation frameworks and evaluate the performance of each.
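The per-region weighting and resampling at the core of localized particle filtering can be sketched in a few lines. This toy (multinomial rather than systematic resampling, Gaussian local likelihoods, two hand-picked regions — all assumptions, not the dissertation's framework) resamples each sub-region independently, so that a region is only influenced by its own sensors:

```python
import numpy as np

rng = np.random.default_rng(7)

def localized_pf_step(particles, obs, regions, obs_std):
    """One localized bootstrap-filter analysis: weights are computed and
    particles resampled independently per sub-region, reducing the
    effective dimension each filter instance must handle."""
    Np = particles.shape[0]
    out = particles.copy()
    for idx in regions:                       # idx: state indices of one region
        # local Gaussian likelihood of each particle given this region's sensors
        d = particles[:, idx] - obs[idx]
        logw = -0.5 * np.sum(d ** 2, axis=1) / obs_std ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        picks = rng.choice(Np, size=Np, p=w)  # multinomial resampling
        out[:, idx] = particles[picks][:, idx]
    return out

# toy setup: a 6-variable state split into 2 regions, truth at zero
Np, n = 200, 6
truth = np.zeros(n)
obs = truth + 0.1 * rng.standard_normal(n)    # noisy sensors
particles = rng.standard_normal((Np, n))      # wide prior ensemble
regions = [np.arange(0, 3), np.arange(3, 6)]
analysis = localized_pf_step(particles, obs, regions, obs_std=0.1)

# the analysis ensemble should concentrate around the observations
assert analysis.std(axis=0).max() < particles.std(axis=0).max()
assert np.abs(analysis.mean(axis=0) - obs).max() < 0.8
```

Gluing independently resampled regions back together can break cross-region correlations; handling that boundary effect is exactly what the partitioning method with fewer boundary sensors is aiming at.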
48

Leveraging the information content of process-based models using Differential Evolution and the Extended Kalman Filter

Howard, Lucas 01 January 2016 (has links)
Process-based models are used in a diverse array of fields, including environmental engineering, to provide supporting information to engineers, policymakers, and stakeholders. Recent advances in remote sensing and data storage technology have provided opportunities for improving the application of process-based models and visualizing data, but also present new challenges. The availability of larger quantities of data may allow models to be constructed and calibrated in a more thorough and precise manner, but depending on the type and volume of data, it is not always clear how to incorporate the information content of these data into a coherent modeling framework. In this context, using process-based models in new ways to provide decision support or to produce more complete and flexible predictive tools is a key task in the modern data-rich engineering world. In standard usage, models can be used for simulating specific scenarios; they can also be used as part of an automated design optimization algorithm to provide decision support, or in a data-assimilation framework to incorporate the information content of ongoing measurements. In that vein, this thesis presents and demonstrates extensions and refinements that leverage the best of what process-based models offer, using Differential Evolution (DE) and the Extended Kalman Filter (EKF). Coupling multi-objective optimization to a process-based model may provide valuable information, provided an objective function is constructed appropriately to reflect the multi-objective problem and constraints. That, in turn, requires weighting two or more competing objectives in the early stages of an analysis. The methodology proposed here relaxes that requirement by framing the model optimization as a sensitivity analysis. For demonstration, this is implemented using a surface water model (HEC-RAS), and the impact of floodplain access upstream and downstream of a fixed bridge on bridge scour is analyzed.
DE, an evolutionary global optimization algorithm, is wrapped around a calibrated HEC-RAS model. Multiple objective functions, representing different relative weightings of the two objectives, are used; the resulting rank orders of river reach locations by floodplain-access sensitivity are consistent across these functions. To extend the applicability of data assimilation methods, this thesis proposes relaxing the requirement that the model be calibrated before performing assimilation (provided the parameters are still within physically defensible ranges). The model is then dynamically calibrated to new state estimates, which depend on the behavior of the model. Feasibility is demonstrated using the EKF and a synthetic dataset of pendulum motion. The dynamic calibration method reduces the variance of prediction errors compared to measurement errors using an initially uncalibrated model, and produces estimates of calibration parameters that converge to the true values. The potential application of the dynamic calibration method to river sediment transport modeling is proposed in detail, including a method for automated calibration using the sediment grain size distribution as a calibration parameter.
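The dynamic calibration idea — augmenting the state with an uncertain parameter and letting the EKF pull it toward the truth — can be reproduced on a pendulum example. The discretization (explicit Euler), noise levels, and initial guesses below are illustrative assumptions, not the thesis setup:

```python
import numpy as np

rng = np.random.default_rng(3)
dt, steps = 0.01, 2000
a_true = 9.81                      # unknown parameter g/L of the pendulum
r = 1e-4                           # angle-measurement noise variance

def f(z):                          # one explicit-Euler step, state [theta, omega, a]
    th, om, a = z
    return np.array([th + dt * om, om - dt * a * np.sin(th), a])

def jac(z):                        # Jacobian of f for the EKF linearization
    th, om, a = z
    return np.array([[1.0, dt, 0.0],
                     [-dt * a * np.cos(th), 1.0, -dt * np.sin(th)],
                     [0.0, 0.0, 1.0]])

# synthetic truth and noisy angle measurements (same model, so no model error)
z = np.array([1.0, 0.0, a_true])
obs = []
for _ in range(steps):
    z = f(z)
    obs.append(z[0] + np.sqrt(r) * rng.standard_normal())

# EKF started with a deliberately wrong ("uncalibrated") parameter value
H = np.array([[1.0, 0.0, 0.0]])    # only the angle is observed
zf = np.array([1.0, 0.0, 5.0])
P = np.diag([0.1, 0.1, 25.0])
Q = np.diag([1e-8, 1e-8, 1e-8])
for yk in obs:
    F = jac(zf)
    zf = f(zf)                     # predict
    P = F @ P @ F.T + Q
    S = (H @ P @ H.T).item() + r   # innovation variance
    K = (P @ H.T) / S              # Kalman gain
    zf = zf + (K * (yk - zf[0])).ravel()
    P = (np.eye(3) - K @ H) @ P

assert abs(zf[2] - a_true) < 0.5   # the parameter is dynamically calibrated
```

Because the truth is generated by the same discrete model, the only error sources are the measurement noise and the wrong initial parameter, which isolates the dynamic-calibration behavior being demonstrated.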
49

Programové prostředí pro asimilační metody v radiační ochraně / Software environment for data assimilation in radiation protection

Majer, Peter January 2015 (has links)
In this work we apply data assimilation to the meteorological model WRF for a local domain. We use Bayesian statistics, namely the Sequential Monte Carlo method combined with particle filtering. Only surface wind data are considered. An application written in the Python programming language is also part of this work; it forms an interface with WRF, performs the data assimilation, and provides a set of charts as output. In the case of stable wind conditions, wind predictions of the assimilated WRF are significantly closer to measured data than predictions of the non-assimilated WRF. Under such conditions, the assimilated model can be used for more accurate short-term local weather predictions.
50

Statistical Analysis and Bayesian Methods for Fatigue Life Prediction and Inverse Problems in Linear Time Dependent PDEs with Uncertainties

Sawlan, Zaid A 10 November 2018 (has links)
This work employs statistical and Bayesian techniques to analyze mathematical forward models with several sources of uncertainty. The forward models usually arise from phenomenological and physical phenomena and are expressed through regression-based models or partial differential equations (PDEs) associated with uncertain parameters and input data. One of the critical challenges in real-world applications is to quantify the uncertainties of the unknown parameters using observations. To this end, methods based on the likelihood function and Bayesian techniques constitute the two main statistical inferential approaches considered here. Two problems are studied in this thesis: the first is the prediction of the fatigue life of metallic specimens; the second concerns inverse problems in linear PDEs. Both problems require the inference of unknown parameters from given measurements. We first estimate the parameters by means of the maximum likelihood approach. Next, we seek a more comprehensive Bayesian inference using analytical asymptotic approximations or computational techniques. In the fatigue life prediction, there are several plausible probabilistic stress-lifetime (S-N) models. These models are calibrated against uniaxial fatigue experiments. To generate accurate fatigue life predictions, competing S-N models are ranked according to several classical information-based measures, and a different set of predictive information criteria is then used to compare the candidate Bayesian models. Moreover, we propose a spatial stochastic model to generalize S-N models to fatigue crack initiation in general geometries. The model is based on a spatial Poisson process with an intensity function that combines the S-N curves with an averaged effective stress computed from the solution of the linear elasticity equations.
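For a Gaussian-scatter S-N model, the maximum likelihood step has a closed form: with log-life linear in log-stress, the ML estimates of the coefficients coincide with the least-squares solution, plus the ML variance of the residuals. The Basquin-type model and all numbers below are illustrative assumptions, not the thesis's calibrated models:

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic uniaxial fatigue data from an assumed Basquin-type S-N model:
# log N = b0 + b1 * log S + Gaussian scatter
b0_true, b1_true, sigma_true = 20.0, -3.0, 0.3
logS = rng.uniform(np.log(200.0), np.log(600.0), size=80)   # stress amplitudes
logN = b0_true + b1_true * logS + sigma_true * rng.standard_normal(80)

# maximum likelihood for this Gaussian regression model: OLS for (b0, b1),
# then the ML (biased, 1/n) variance estimate from the residuals
X = np.column_stack([np.ones_like(logS), logS])
beta, *_ = np.linalg.lstsq(X, logN, rcond=None)
resid = logN - X @ beta
sigma_ml = np.sqrt(np.mean(resid ** 2))

assert abs(beta[1] - b1_true) < 0.5        # fatigue exponent recovered
assert abs(sigma_ml - sigma_true) < 0.15   # scatter recovered
```

The fitted triple (b0, b1, sigma) is exactly what an information criterion such as AIC would consume when ranking competing S-N model forms, which is the model-selection step the abstract describes.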
