31 |
Applications of Adjoint Modelling in Chemical Composition: Studies of Tropospheric Ozone at Middle and High Northern Latitudes / Walker, Thomas / 01 September 2014
Ozone is integral to tropospheric chemistry, and understanding the processes controlling its distribution is important in climate and air pollution contexts. The GEOS-Chem global chemical transport model and its adjoint are used to interpret the impacts of midlatitude precursor emissions and atmospheric transport on the tropospheric ozone distribution at middle and high northern latitudes.
In the Arctic, the model reproduces the seasonal cycles of peroxyacetyl nitrate (PAN) and ozone measured at the surface, as well as observed ozone abundances in the summer free troposphere. Source attribution analysis suggests that local photochemical production driven by PAN decomposition, at rates of ≤ 0.25 ppbv/day, accounts for more than 50% of ozone in the summertime Arctic boundary layer. In the mid-troposphere, photochemical production accounts for 30-40% of ozone, while ozone transported from midlatitudes contributes 25-35%. Adjoint sensitivity studies link summertime ozone production to anthropogenic, biomass burning, soil, and lightning emissions between 50°N and 70°N. Over Alert, Nunavut, the sensitivity of mid-tropospheric ozone to lightning emissions sometimes exceeds that to anthropogenic emissions.
Over the eastern U.S., numerous models overestimate ozone in the summertime boundary layer. An inversion analysis, using the GEOS-Chem four-dimensional variational (4D-Var) data assimilation system, optimizes emissions of NOx and isoprene. The inversion results suggest that the model bias cannot be explained by discrepancies in these precursor emissions. A separate inversion optimizes the rates of key chemical reactions, along with ozone deposition rates, which are parameterized and particularly uncertain. This inversion suggests a factor of 2-3 increase in deposition rates in the northeastern U.S., decreasing the ozone bias from 17.5 ppbv to 6.0 ppbv. The analysis, however, is sensitive to the model boundary layer mixing scheme.
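To illustrate the general shape of such an inversion, the following minimal sketch optimizes scale factors on two emission categories against synthetic observations by minimizing a 4D-Var-style cost with L-BFGS. The toy linear operator standing in for the chemical transport model and its adjoint, and all values and names, are hypothetical rather than taken from the thesis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy linear "transport" operator mapping two emission scale factors to ten
# ozone observations. In a real system this product is a chemical transport
# model run, and multiplying by G.T corresponds to an adjoint model run.
G = rng.uniform(0.5, 2.0, size=(10, 2))

s_true = np.array([1.3, 0.7])                 # "true" scale factors (e.g. NOx, isoprene)
y = G @ s_true + rng.normal(0.0, 0.05, 10)    # synthetic observations with noise

s_b = np.ones(2)      # prior (background) scale factors
sigma_b = 0.5         # background-error standard deviation
sigma_o = 0.05        # observation-error standard deviation

def cost_and_grad(s):
    """4D-Var-style cost: observation mismatch plus background penalty."""
    r = G @ s - y
    J = 0.5 * np.sum(r**2) / sigma_o**2 + 0.5 * np.sum((s - s_b)**2) / sigma_b**2
    grad = G.T @ r / sigma_o**2 + (s - s_b) / sigma_b**2
    return J, grad

res = minimize(cost_and_grad, s_b, jac=True, method="L-BFGS-B")
print("optimized scale factors:", res.x)      # should land near s_true
```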
Several inversion analyses are conducted to estimate lightning NOx emissions over North America in August 2006 using ozonesonde data. The high-resolution nested version of GEOS-Chem is used to better capture variability in the ozonesonde data. The analyses suggest North American lightning NOx totals between 0.076 and 0.204 Tg N. A major limitation is that the vertical distribution of the lightning source is not optimized, although the results suggest that this vertical distribution is biased. Reliably optimizing the three-dimensional distribution of lightning NOx emissions requires more information than the ozonesonde dataset contains.
|
32 |
Constraining the carbon budgets of croplands with Earth observation data / Revill, Andrew / January 2016
Cropland management practices have traditionally focused on maximising the production of food, feed and fibre. However, croplands also provide valuable regulating ecosystem services, including carbon (C) storage in soil and biomass. Consequently, management affects the extent to which croplands act as sources or sinks of atmospheric carbon dioxide (CO2), and reliable information on cropland ecosystem C fluxes and yields is therefore essential for policy-makers concerned with climate change mitigation and food security. Eddy-covariance (EC) flux towers can provide observations of net ecosystem exchange (NEE) of CO2 within croplands; however, the tower sites are temporally and spatially sparse. Process-based crop models simulate the key biophysical mechanisms within cropland ecosystems, including the impacts of management, crop cultivar, soil and climate on crop C dynamics. The models are therefore a powerful tool for diagnosing and forecasting C fluxes and yield. However, crop model spatial upscaling is often limited by input data (including meteorological drivers and management), parameter uncertainty and model complexity. Earth observation (EO) sensors can provide regular estimates of crop condition over large extents, so EO data can be used within data assimilation (DA) schemes to parameterise and constrain models.

Research presented in this thesis explores the key challenges associated with crop model upscaling. First, fine-scale (20-50 m) EO-derived data from optical and radar sensors are assimilated into the Soil-Plant-Atmosphere crop (SPAc) model. Assimilating all EO data enhanced the simulation of daily C exchanges at multiple European crop sites; however, assimilating the radar EO data individually (rather than in combination with optical data) resulted in larger improvements in the simulated C fluxes. Second, the impacts of reduced model complexity and driver resolution on crop photosynthesis estimates are investigated. The simplified Aggregated Canopy Model (ACM), which estimates daily photosynthesis using coarse-scale (daily) drivers, was calibrated using the detailed SPAc model, which simulates leaf-to-canopy processes at half-hourly time-steps. The calibrated ACM photosynthesis agreed closely with SPAc and local EC estimates. Third, a model-data fusion framework was evaluated for multi-annual and regional-scale estimation of UK wheat yields. Aggregated model yield estimates were negatively biased when compared to official statistics; coarse-scale (1 km) EO data were therefore used to constrain the model simulation of canopy development, which successfully reduced the biases in the yield estimates. Fourth, the EO spatial and temporal resolution requirements for crop growth monitoring at UK field scales were investigated. Errors due to spatial resolution were quantified by sampling aggregated fine-scale EO data on a per-field basis, whereas the temporal resolution error analysis involved re-sampling model estimates to mimic the observational frequencies of current EO sensors and likely cloud cover. A minimum EO spatial resolution of around 165 m is required to resolve field-scale detail. Monitoring crop growth using EO sensors with a 26-day temporal resolution results in a mean error of 5%; however, accounting for likely cloud cover increases this error to 63%.
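As a schematic of how EO-derived data can constrain a crop model, the sketch below applies a scalar ensemble Kalman update to a toy logistic leaf-area-index (LAI) model using a single hypothetical EO-derived LAI observation. The model, ensemble sizes, and numbers are illustrative assumptions, not the SPAc model or the thesis's actual assimilation scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def crop_model_step(lai, growth_rate, lai_max=6.0):
    """Toy logistic canopy-growth model: one daily step of leaf area index."""
    return lai + growth_rate * lai * (1.0 - lai / lai_max)

# Ensemble of model states whose spread represents parameter/driver uncertainty.
n_ens = 50
lai_ens = rng.normal(2.0, 0.4, n_ens)
rates = rng.normal(0.08, 0.02, n_ens)

for day in range(10):                      # free model run up to the overpass date
    lai_ens = crop_model_step(lai_ens, rates)

# Hypothetical EO-derived LAI observation and its error variance.
lai_obs, obs_var = 3.1, 0.3**2

# Scalar stochastic ensemble Kalman update: weight model vs. observation by
# their respective variances, perturbing the observation for each member.
prior_var = lai_ens.var(ddof=1)
gain = prior_var / (prior_var + obs_var)
perturbed_obs = lai_obs + rng.normal(0.0, np.sqrt(obs_var), n_ens)
lai_ens = lai_ens + gain * (perturbed_obs - lai_ens)

print(f"analysis LAI: {lai_ens.mean():.2f} +/- {lai_ens.std(ddof=1):.2f}")
```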
|
33 |
Predicting Crop Yield Using Crop Models and High-Resolution Remote Sensing Technologies / Ziliani, Matteo Giuseppe / 01 1900
By 2050, food consumption and agricultural water use will increase as a result of a global population that is projected to reach 9 billion people. To address this food and water security challenge, there has been increased attention towards the concept of sustainable agriculture, which has the broad aim of securing food and water resources while preserving the environment for future generations. An element of this is the use of precision agriculture, which is designed to provide the right inputs, at the right time and in the right place. In order to optimize nutrient application, water intake, and the profitability of agricultural areas, it is necessary to improve our understanding and predictability of agricultural systems at high spatio-temporal scales.

The underlying goal of the research presented herein is to advance the monitoring of croplands and crop yield through high-resolution satellite data. In addressing this, we explore the utility of daily CubeSat imagery to produce the highest spatial resolution (3 m) estimates of leaf area index and crop water use ever retrieved from space, providing an enhanced capacity to generate new insights into precision agriculture. The novel insights on crop health and condition derived from CubeSat data are combined with the predictive ability of crop models, with the aim of improving crop yield predictions. To explore the latter, a sensitivity-analysis-linked Bayesian inference framework was developed, offering a tool for calibrating crop models while simultaneously quantifying the uncertainty in input parameters. The effect of integrating higher spatio-temporal resolution data in crop models was tested by developing an approach that assimilates CubeSat imagery into a crop model for early-season yield prediction at the within-field scale.

In addition to satellite data, the utility of even higher spatial resolution products from unmanned aerial vehicles is also examined in the last section of the thesis, where future research avenues are outlined. Here, an assessment of crop height is presented, which is linked to field biomass through the use of structure-from-motion techniques. These results offer further insights into small-scale field variability on an on-demand basis, and represent the cutting edge of precision agriculture advances.
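The following minimal sketch illustrates Bayesian calibration of a crop model parameter of the kind described above, using a random-walk Metropolis sampler on a toy logistic growth model. The model, the observed yield, and all settings are hypothetical and do not reproduce the thesis's sensitivity-analysis-linked framework.

```python
import numpy as np

rng = np.random.default_rng(2)

def yield_model(growth_rate, days=120):
    """Toy crop model: logistic biomass growth; returns a final yield proxy."""
    b = 0.1
    for _ in range(days):
        b += growth_rate * b * (1.0 - b / 10.0)
    return b

obs_yield, obs_sigma = 8.5, 0.5        # hypothetical observed yield and its error

def log_post(theta):
    """Log-posterior: flat prior on a plausible range times a Gaussian likelihood."""
    if not 0.0 < theta < 0.5:
        return -np.inf
    return -0.5 * ((yield_model(theta) - obs_yield) / obs_sigma) ** 2

theta = 0.05                           # initial guess for the growth-rate parameter
samples, lp = [], log_post(theta)
for _ in range(5000):                  # random-walk Metropolis sampler
    prop = theta + rng.normal(0.0, 0.01)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[1000:])        # discard burn-in
print(f"posterior growth rate: {post.mean():.4f} +/- {post.std():.4f}")
```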
|
34 |
Estimation of frictional parameters in afterslip areas by assimilating GPS data: Application to the 2003 Tokachi-oki earthquake / Kano, Masayuki / 24 March 2014
Kyoto University / 0048 / New-system doctoral course / Doctor of Science / Degree No. Kou 18081 / Doctor of Science No. 3959 / 新制||理||1571 (University Library) / 30939 / Kyoto University Graduate School of Science, Division of Earth and Planetary Sciences / (Chief examiner) Associate Professor Shin'ichi Miyazaki; Professor Yoichi Fukuda; Professor Kazuro Hirahara / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Science / Kyoto University / DGAM
|
35 |
Variational data assimilation for the shallow water equations with applications to tsunami wave prediction / Khan, Ramsha / January 2020
Accurate prediction of tsunami waves requires complete boundary and initial condition data, coupled with the appropriate mathematical model. However, the necessary data are often missing or inaccurate, and may not have sufficient resolution to capture the dynamics of such nonlinear waves accurately. In this thesis we demonstrate that variational data assimilation for the continuous shallow water equations (SWE) is a feasible approach for recovering both initial conditions and bathymetry data from sparse observations. Using a Sadourny finite-difference finite-volume discretisation for our numerical implementation, we show that convergence to true initial conditions can be achieved for sparse observations arranged in multiple configurations, for both isotropic and anisotropic initial conditions, and with realistic bathymetry data in two dimensions. We demonstrate that for the 1-D SWE, convergence to the exact bathymetry is improved by including in the data assimilation algorithm a low-pass filter designed to remove small-scale noise, and by using a larger number of observations. A necessary condition for a relative L2 error of less than 10% in the bathymetry reconstruction is that the amplitude of the initial conditions be less than 1% of the bathymetry height.

We perform Second Order Adjoint Sensitivity Analysis and Global Sensitivity Analysis to comprehensively assess the sensitivity of the surface wave to errors in the bathymetry and perturbations in the observations. By demonstrating low sensitivity of the surface wave to the reconstruction error, we found that reconstructing the bathymetry with a relative error of about 10% is sufficiently accurate for surface wave modelling in most cases. These idealised results with simplified 2-D and 1-D geometry are intended to be a first step towards more physically realistic settings, and can be used in tsunami modelling to (i) maximise the accuracy of tsunami prediction through sufficiently accurate reconstruction of the necessary data, (ii) attain a priori knowledge of how different bathymetry and initial conditions can affect the surface wave error, and (iii) provide insight on how these effects can be mitigated through optimal configuration of the observations. / Thesis / Candidate in Philosophy
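As a schematic of the variational approach, the sketch below recovers the initial condition of a toy 1-D advection model (a simple stand-in for the SWE) from sparse "gauge" records, with the gradient computed by a hand-coded adjoint sweep. The dynamics, station layout, and step sizes are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np

n, n_steps = 100, 50
obs_idx = np.arange(0, n, 10)                       # ten sparse "gauge" stations

def forward(u0):
    """Toy 1-D advection (unit-CFL upwind on a periodic domain, i.e. an exact
    shift per step); returns the wave sampled at the gauges at every step."""
    u, record = u0.copy(), []
    for _ in range(n_steps):
        u = np.roll(u, 1)
        record.append(u[obs_idx])
    return np.array(record)

u0_true = np.exp(-0.02 * (np.arange(n) - 30.0)**2)  # "true" initial wave
y = forward(u0_true)                                # synthetic gauge records

u0 = np.zeros(n)                                    # first guess: flat surface
for it in range(200):
    innov = forward(u0) - y                         # misfit at all gauges/times
    lam = np.zeros(n)                               # adjoint (costate) variable
    for k in reversed(range(n_steps)):
        lam[obs_idx] += innov[k]                    # H^T: inject misfit at gauges
        lam = np.roll(lam, -1)                      # transpose of the forward shift
    u0 -= 0.05 * lam                                # steepest-descent update

print("relative L2 error:",
      np.linalg.norm(u0 - u0_true) / np.linalg.norm(u0_true))
```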
|
36 |
Intelligent Planning and Assimilation of AUV-Obtained Measurements Within a ROMS-Based Ocean Modeling System / Davini, Benjamin J / 01 December 2010
Efforts to learn more about the oceans that surround us have increased dramatically as the technological ability to do so grows. Autonomous Underwater Vehicles (AUVs) are one such technological advance. They allow for rapid deployment and can gather data quickly in places and ways that traditional measurement systems (buoys, profilers, etc.) cannot. A ROMS-based data assimilation method was developed that intelligently plans for and integrates AUV measurements with the goal of minimizing model standard deviation. An algorithm developed for this system is first described; it optimizes AUV paths so as to improve the model by gathering data in high-interest locations. This algorithm and its effect on the ocean model are tested by comparing the results of missions generated with the algorithm against missions created by hand. The experiments demonstrate that the system succeeds in improving the ROMS ocean model; results comparing optimized and unoptimized missions are also shown.
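A minimal sketch of the planning idea, under the assumption that ensemble spread serves as the interest metric: a greedy planner repeatedly steers the AUV to the neighbouring grid cell with the highest model standard deviation. The grid, ensemble, and function names are hypothetical, not the system described in the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical ensemble of ocean-model fields on a 20x20 grid; the per-cell
# ensemble standard deviation acts as the "interest" (uncertainty) map.
spread = 0.5 + 2.0 * rng.random((20, 20))
ens = 15.0 + rng.normal(0.0, 1.0, (30, 20, 20)) * spread
std = ens.std(axis=0)

def plan_auv_path(std, start, n_waypoints=5):
    """Greedy planner: repeatedly move to the unvisited adjacent cell with the
    highest model standard deviation, where a measurement is most informative."""
    path, pos, visited = [start], start, {start}
    for _ in range(n_waypoints):
        i, j = pos
        neighbors = [(i + di, j + dj)
                     for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                     if 0 <= i + di < std.shape[0]
                     and 0 <= j + dj < std.shape[1]
                     and (i + di, j + dj) not in visited]
        if not neighbors:
            break
        pos = max(neighbors, key=lambda c: std[c])
        path.append(pos)
        visited.add(pos)
    return path

print(plan_auv_path(std, start=(10, 10)))
```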
|
37 |
ADVANCING SEQUENTIAL DATA ASSIMILATION METHODS FOR ENHANCED HYDROLOGIC FORECASTING IN SEMI-URBAN WATERSHEDS / Leach, James / January 2019
Accurate hydrologic forecasting is vital for proper water resource management. Practices that depend on these forecasts include power generation, reservoir management, agricultural water use, and flood early warning systems. Despite these needs, the models in common use are simplifications of the real world and are therefore imperfect. Forecasters face other challenges in addition to model uncertainty, including imperfect observations used for model calibration and validation, imperfect meteorological forecasts, and the difficulty of effectively communicating forecast results to decision-makers. Bayesian methods are commonly used to address some of these issues, and this thesis focuses on improving methods related to recursive Bayesian estimation, more commonly known as data assimilation.
Data assimilation is a means to optimally account for the uncertainties in observations, models, and forcing data. In the literature, data assimilation for urban hydrologic and flood forecasting is rare; the main study areas in this thesis are therefore urban and semi-urban watersheds, where improvements to data assimilation methods can enhance both hydrologic and flood forecasting. This work explored the use of alternative data products as a type of observation that can be assimilated to improve hydrologic forecasting in an urban watershed. The impact of impervious surfaces in urban and semi-urban watersheds on remotely sensed soil moisture assimilation was also evaluated. Lack of observations is another issue for data assimilation, particularly in semi- or fully-distributed models; to address this, an improved method was developed for updating locations that lack observations, using mutual information from information theory (see the sketch below). Finally, we explored extending data assimilation into the short-term forecast by using prior knowledge of how a model will respond to forecasted forcing data.
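One plausible construction of such an update, sketched below under a Gaussian assumption in which mutual information reduces to MI = -0.5 ln(1 - rho^2): a Kalman-style increment at a gauged site is transferred to an ungauged site with a weight derived from their mutual information. The weighting form and all numbers are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical ensembles of streamflow state at a gauged and an ungauged site.
n_ens = 100
gauged = rng.normal(50.0, 10.0, n_ens)
ungauged = 0.6 * gauged + rng.normal(0.0, 6.0, n_ens)   # partially correlated

# Under a Gaussian assumption mutual information follows from the correlation.
rho = np.corrcoef(gauged, ungauged)[0, 1]
mi = -0.5 * np.log(1.0 - rho**2)

obs, obs_var = 62.0, 4.0**2                  # observation at the gauged site
var_g = gauged.var(ddof=1)
gain_g = var_g / (var_g + obs_var)           # Kalman gain at the gauged site
cross = np.cov(gauged, ungauged, ddof=1)[0, 1]
gain_u = cross / (var_g + obs_var)           # cross-covariance gain

# Damp the transfer to the ungauged site by an MI-derived weight, so that
# weakly informative links pass on only a reduced increment (one possible choice).
weight = 1.0 - np.exp(-mi)
innovation = obs - gauged.mean()
print(f"MI = {mi:.3f}")
print(f"gauged increment:   {gain_g * innovation:+.2f}")
print(f"ungauged increment: {weight * gain_u * innovation:+.2f}")
```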
Results from this work show that alternative data products, such as those from the Snow Data Assimilation System or the Soil Moisture and Ocean Salinity mission, can be effective at improving hydrologic forecasting in urban watersheds. The experiments also identified a limiting imperviousness threshold for soil moisture assimilation in urban and semi-urban watersheds. Additionally, incorporating the mutual information between gauged and ungauged locations in a semi-distributed hydrologic model provided better state updates. Finally, extending data assimilation into the short-term forecast substantially improved the reliability of the forecasts. / Dissertation / Doctor of Philosophy (PhD) / The ability to accurately model hydrological systems is essential, as it allows for better planning and decision making in water resources management. The better we can forecast the hydrologic response to rain and snowmelt events, the better we can plan and manage our water resources. This includes better planning and usage of water for agricultural purposes, better planning and management of reservoirs for power generation, and better preparation for flood events. Unfortunately, the hydrologic models in common use are simplifications of the real world and are therefore imperfect. Additionally, our measurements of the physical system responses to atmospheric forcing can be prone to both systematic and random errors that need to be accounted for. To address these limitations, data assimilation can be used to improve hydrologic forecasts by optimally accounting for both model and observation uncertainties. The work in this thesis helps to further advance and improve data assimilation, with a focus on enhancing hydrologic forecasting in urban and semi-urban watersheds. The research presented herein can be used to provide better forecasts, which allow for better planning and decision making.
|
38 |
Adaptive Numerical Methods for Large Scale Simulations and Data Assimilation / Constantinescu, Emil Mihai / 07 July 2008
Numerical simulation is necessary to understand natural phenomena, make assessments and predictions in various research and engineering fields, develop new technologies, etc. New algorithms are needed to take advantage of the increasing computational resources and utilize the emerging hardware and software infrastructure with maximum efficiency.
Adaptive numerical discretization methods can accommodate problems with various physical, scale, and dynamic features by adjusting the resolution, order, and the type of method used to solve them. In applications that simulate real systems, the numerical accuracy of the solution is typically just one of the challenges. Measurements can be included in the simulation to constrain the numerical solution through a process called data assimilation in order to anchor the simulation in reality.
In this thesis we investigate adaptive discretization methods and data assimilation approaches for large-scale numerical simulations. We develop and investigate novel multirate and implicit-explicit methods that are appropriate for multiscale and multiphysics numerical discretizations. We construct and explore data assimilation approaches for (but not restricted to) atmospheric chemistry applications. We also present a generic approach for describing the structure of the uncertainty in initial conditions that can be applied to the most popular data assimilation methods.
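As a concrete illustration of the implicit-explicit idea, the sketch below takes first-order IMEX Euler steps on a toy split ODE, treating a stiff linear term implicitly and a non-stiff forcing explicitly. The equation and coefficients are illustrative only, not taken from the thesis.

```python
import numpy as np

lam = -1000.0                        # stiff linear coefficient (implicit part)
f = lambda t, y: np.cos(t)           # non-stiff forcing (explicit part)

def imex_euler(y0, t0, t1, n):
    """First-order IMEX Euler for y' = lam*y + f(t, y): take the stiff linear
    term implicitly and the non-stiff term explicitly, i.e.
        y_new = y + h*f(t, y) + h*lam*y_new
    which solves to y_new = (y + h*f(t, y)) / (1 - h*lam)."""
    h, t, y = (t1 - t0) / n, t0, y0
    for _ in range(n):
        y = (y + h * f(t, y)) / (1.0 - h * lam)
        t += h
    return y

# Stable even with steps far larger than the explicit stability limit h < 2/|lam|.
print(imex_euler(y0=1.0, t0=0.0, t1=1.0, n=50))
```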
We show that adaptive numerical methods can effectively address the discretization of large-scale problems. Data assimilation complements the adaptive numerical methods by correcting the numerical solution with real measurements. Test problems and large-scale numerical experiments validate the theoretical findings. Synergistic approaches that use adaptive numerical methods within a data assimilation framework need to be investigated in the future. / Ph. D.
|
39 |
Large-Scale Simulations Using First and Second Order Adjoints with Applications in Data Assimilation / Zhang, Lin / 23 July 2007
In large-scale air quality simulations we are interested in the factors that drive changes in pollutant concentrations, and in optimization methods that improve forecasts. Both problems can be addressed by incorporating adjoint models, which efficiently compute the derivatives of a functional with respect to a large number of model parameters. In this research we employ first order adjoints in air quality simulations. Moreover, we explore theoretically the computation of second order adjoints for chemical transport models, and illustrate their feasibility in several respects.
We apply first order adjoints to sensitivity analysis and data assimilation.
Through sensitivity analysis, we can identify the area that has the largest influence on changes of ozone concentrations at a receptor. For data assimilation, we assess the performance of optimization methods that use first order adjoints under different scenarios; the results indicate that the L-BFGS method is the most efficient.
Compared with first order adjoints, second order adjoints had not previously been used in air quality simulation. To explore their utility, we show the construction of second order adjoints for chemical transport models and demonstrate several applications, including sensitivity analysis, optimization, uncertainty quantification, and Hessian singular vectors. Since second order adjoints provide second order information in the form of Hessian-vector products rather than the entire Hessian matrix, applications that require second order derivatives become possible even for large-scale models. Finally, we conclude that second order adjoints for chemical transport models are computationally feasible and effective. / Master of Science
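The following sketch shows, for a toy linear forecast model, how second order information can be used matrix-free: the Hessian-vector product of a least-squares cost is obtained from one forward and one adjoint sweep and verified against a finite difference of the gradient. The operators and dimensions are hypothetical stand-ins for a chemical transport model and its adjoints.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy linear forecast model M and sparse observation operator H, with the
# least-squares cost J(x) = 0.5 * ||H M x - y||^2. The Hessian-vector product
# (M^T H^T H M) v is formed matrix-free from one forward and one adjoint
# sweep, without ever building the full n x n Hessian.
n = 40
M = np.eye(n) + 0.1 * rng.normal(size=(n, n))
H = np.zeros((5, n))
H[np.arange(5), np.arange(0, n, 8)] = 1.0    # observe every 8th component
y = rng.normal(size=5)

def grad(x):
    """First order adjoint: gradient of J at x."""
    return M.T @ (H.T @ (H @ (M @ x) - y))

def hess_vec(v):
    """Second order adjoint analogue: Hessian-vector product for quadratic J."""
    return M.T @ (H.T @ (H @ (M @ v)))

x, v, eps = rng.normal(size=n), rng.normal(size=n), 1e-6
fd = (grad(x + eps * v) - grad(x - eps * v)) / (2.0 * eps)
print("max |Hv - finite difference|:", np.abs(hess_vec(v) - fd).max())
```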
|
40 |
Combining Data-driven and Theory-guided Models in Ensemble Data Assimilation / Popov, Andrey Anatoliyevich / 23 August 2022
There once was a dream that data-driven models would replace their theory-guided counterparts. We have awoken from this dream. We now know that data cannot replace theory. Data-driven models still have their advantages, mainly in computational efficiency, but also in providing us with some special sauce that is unreachable by our current theories. This dissertation aims to provide a way in which both the accuracy of theory-guided models and the computational efficiency of data-driven models can be combined. This combination of theory-guided and data-driven allows us to draw on ideas from a much broader set of disciplines, and can help pave the way for robust and fast methods. / Doctor of Philosophy / As an illustrative example, take the problem of predicting the weather. Typically a supercomputer will run a model several times to generate predictions a few days into the future. Sensors such as those on satellites will then pick up observations at a few points on the globe that are not representative of the whole atmosphere. These observations are combined, "assimilated", with the computer model predictions to create a better representation of our current understanding of the state of the earth. This predict-assimilate cycle is repeated every day, and is called (sequential) data assimilation. The prediction step was traditionally performed by a computer model based on rigorous mathematics. With the advent of big data, many have wondered if models based purely on data would take over. This has not happened. This thesis is concerned with taking traditional mathematical models and running them alongside data-driven models in the prediction step, then building a theory in which both can be used in data assimilation at the same time, so that accuracy does not drop while computational cost decreases.
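One very simple way to realize such a combination, sketched below as an assumption-laden illustration rather than the dissertation's actual algorithm: part of a forecast ensemble is propagated with an expensive theory-guided model and the rest with a cheap data-driven surrogate fit to that model, so the combined ensemble carries both sources of information at a fraction of the full cost.

```python
import numpy as np

rng = np.random.default_rng(7)

def theory_model(x, dt=0.1):
    """Expensive theory-guided model: one step of a damped nonlinear pendulum."""
    pos, vel = x
    return np.array([pos + dt * vel, vel + dt * (-np.sin(pos) - 0.1 * vel)])

# Cheap data-driven surrogate: a linear map fit offline to input/output pairs.
X = rng.normal(0.0, 1.0, (500, 2))
Y = np.array([theory_model(x) for x in X])
A, *_ = np.linalg.lstsq(X, Y, rcond=None)     # least-squares "training"

def forecast_ensemble(ens, n_theory):
    """Propagate the first n_theory members with the expensive theory model and
    the rest with the cheap surrogate, mixing both model classes in one ensemble."""
    return np.array([theory_model(x) if i < n_theory else x @ A
                     for i, x in enumerate(ens)])

ens = rng.normal([1.0, 0.0], 0.1, (50, 2))    # initial forecast ensemble
fc = forecast_ensemble(ens, n_theory=10)      # 10 expensive runs, 40 cheap ones
print("forecast mean:", fc.mean(axis=0))
```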
|