61. Data Assimilation in the Boussinesq Approximation for Mantle Convection. McQuarrie, Shane Alexander, 01 July 2018.
Many highly developed physical models poorly approximate actual physical systems due to natural random noise. For example, convection in the earth's mantle—a fundamental process for understanding the geochemical makeup of the earth's crust and the geologic history of the earth—exhibits chaotic behavior, so it is difficult to model accurately. In addition, it is impossible to directly measure temperature and fluid viscosity in the mantle, and any indirect measurements are not guaranteed to be highly accurate. Over the last 50 years, mathematicians have developed a rigorous framework for reconciling noisy observations with reasonable physical models, a technique called data assimilation. We apply data assimilation to the problem of mantle convection with the infinite-Prandtl Boussinesq approximation to the Navier-Stokes equations as the model, providing rigorous conditions that guarantee synchronization between the observational system and the model. We validate these rigorous results through numerical simulations powered by a flexible new Python package, Dedalus. This methodology, including the simulation and post-processing code, may be generalized to many other systems. The numerical simulations show that the rigorous synchronization conditions are not sharp; that is, synchronization may occur even when the conditions are not met. These simulations also cast some light on the true relationships between the system parameters that are required in order to achieve synchronization. To conclude, we conduct experiments for two closely related data assimilation problems to further demonstrate the limitations of the rigorous results and to test the flexibility of data assimilation for mantle-like systems.
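The synchronization result described in this abstract belongs to the family of nudging (continuous data assimilation) schemes, in which the assimilating model is relaxed toward coarse observations of a reference solution and, under suitable conditions, converges to it. The sketch below illustrates the idea on the Lorenz 63 system rather than the infinite-Prandtl Boussinesq equations; the choice of system, the relaxation coefficient mu, and the decision to observe only the first component are illustrative assumptions, not the thesis's actual configuration or its Dedalus implementation.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz 63 system, standing in for the convection model."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def nudged_rhs(state, obs_x, mu=20.0):
    """Model tendencies plus a relaxation (nudging) term toward the observed x-component."""
    tendency = lorenz63(state)
    tendency[0] -= mu * (state[0] - obs_x)      # nudge only the observed component
    return tendency

dt, n_steps = 0.005, 20000
truth = np.array([1.0, 1.0, 1.0])               # reference trajectory generating "observations"
assim = np.array([-5.0, 7.0, 20.0])             # assimilating model, started far from the truth

for k in range(n_steps):
    truth = truth + dt * lorenz63(truth)                 # forward Euler kept for brevity
    assim = assim + dt * nudged_rhs(assim, truth[0])     # observe x only, relax toward it
    if k % 4000 == 0:
        print(f"step {k:6d}   synchronization error = {np.linalg.norm(truth - assim):.3e}")
```

In this toy setting the synchronization error decays by several orders of magnitude once the relaxation is strong enough, which mirrors the qualitative behavior the abstract describes for the convection model.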
62. Optimal Interpolation Schemes to Constrain PM2.5 in Regional Modeling over the United States. Sousan, Sinan Dhia Jameel, 01 July 2012.
This thesis presents the use of data assimilation with optimal interpolation (OI) to develop atmospheric aerosol concentration estimates for the United States at high spatial and temporal resolutions. Concentration estimates are highly desirable for a wide range of applications, including visibility, climate, and human health. OI is a viable data assimilation method that can be used to improve Community Multiscale Air Quality (CMAQ) model fine particulate matter (PM2.5) estimates. PM2.5 is the mass of solid and liquid particles with diameters less than or equal to 2.5 μm suspended in the gas phase. OI was employed by combining model estimates with satellite and surface measurements. The satellite data assimilation combined 36 x 36 km aerosol concentrations from CMAQ with aerosol optical depth (AOD) measured by MODIS and AERONET over the continental United States for 2002. Posterior model concentrations generated by the OI algorithm were compared with surface PM2.5 measurements to evaluate a number of possible data assimilation parameters, including model error, observation error, and temporal averaging assumptions. Evaluation was conducted separately for six geographic U.S. regions in 2002. Variability in model error and MODIS biases limited the effectiveness of a single data assimilation system for the entire continental domain. The best combinations of four settings and three averaging schemes led to a domain-averaged improvement in fractional error from 1.2 to 0.97 and from 0.99 to 0.89 at respective IMPROVE and STN monitoring sites. For 38% of OI results, MODIS OI degraded the forward model skill due to biases and outliers in MODIS AOD.
Surface data assimilation combined 36 × 36 km aerosol concentrations from the CMAQ model with surface PM2.5 measurements over the continental United States for 2002. The model error covariance matrix was constructed by using the observational method. The observation error covariance matrix included site representation that scaled the observation error by land use (i.e. urban or rural locations). In theory, urban locations should have less effect on surrounding areas than rural sites, which can be controlled using site representation error. The annual evaluations showed substantial improvements in model performance with increases in the correlation coefficient from 0.36 (prior) to 0.76 (posterior), and decreases in the fractional error from 0.43 (prior) to 0.15 (posterior). In addition, the normalized mean error decreased from 0.36 (prior) to 0.13 (posterior), and the RMSE decreased from 5.39 μg m-3 (prior) to 2.32 μg m-3 (posterior). OI decreased model bias for both large spatial areas and point locations, and could be extended to more advanced data assimilation methods.
The current work will be applied to a five year (2000-2004) CMAQ simulation aimed at improving aerosol model estimates. The posterior model concentrations will be used to inform exposure studies over the U.S. that relate aerosol exposure to mortality and morbidity rates. Future improvements for the OI techniques used in the current study will include combining both surface and satellite data to improve posterior model estimates. Satellite data have high spatial and temporal resolutions in comparison to surface measurements, which are scarce but more accurate than model estimates. The satellite data are subject to noise affected by location and season of retrieval. The implementation of OI to combine satellite and surface data sets has the potential to improve posterior model estimates for locations that have no direct measurements.
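Optimal interpolation produces the posterior (analysis) field as a covariance-weighted blend of the prior model field and the observations. The sketch below shows the standard OI/BLUE update on a small one-dimensional grid; the grid size, the exponential background covariance with a 30 km length scale, the error variances, and the monitor locations are illustrative assumptions rather than the CMAQ/MODIS settings used in the thesis.

```python
import numpy as np

n = 20                                   # number of model grid cells (illustrative)
x = np.linspace(0.0, 100.0, n)           # cell-center coordinates (km)
prior = 10.0 + 0.05 * x                  # prior PM2.5 field from the model (ug/m3)

# Background error covariance B: exponential decay with an assumed 30 km length scale
sigma_b, L = 3.0, 30.0
B = sigma_b**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / L)

# Two point observations (e.g., surface monitors); H selects the observed cells
obs_idx = np.array([4, 15])
y = np.array([14.0, 11.0])               # observed concentrations (ug/m3)
H = np.zeros((2, n))
H[np.arange(2), obs_idx] = 1.0
R = np.diag([1.0, 1.0])                  # observation error covariance

# OI / BLUE analysis: x_a = x_b + B H^T (H B H^T + R)^{-1} (y - H x_b)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
posterior = prior + K @ (y - H @ prior)

print("prior at monitors:    ", prior[obs_idx])
print("posterior at monitors:", posterior[obs_idx])
```

In the thesis the same algebra is applied to 36 × 36 km CMAQ fields with error statistics estimated from the data, and the observation error term is scaled by land-use-based site representation rather than fixed as it is here.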
63. Adaptive Observation: Limits of the Forecast and Monitoring of the Uncertainties. Oger, Niels, 02 July 2015.
The purpose of adaptive observation (AO) strategies is to design optimal observation networks in a prognostic way to provide guidance on how to deploy future observations. The overarching objective is to improve forecast skill. Most techniques focus on adding observations. Some AO techniques account for the dynamical aspects of the atmosphere using the adjoint model and for the data assimilation system (DAS), which is usually either a 3D-Var or a 4D-Var (i.e. solved by the minimization of a cost function). These techniques, however, rely on a single (linearisation) trajectory, which carries an uncertainty that affects the efficiency of the targeting. One issue is therefore to estimate how the uncertainty related to the trajectory affects one technique in particular: the KFS. An ensemble-based approach is used to assess the sensitivity to the trajectory within this deterministic approach (i.e. with the adjoint model).
Experiments in a toy model show that the trajectory uncertainties can lead to significantly differing deployments of observations when using a deterministic AO method (with the adjoint model and a variational DAS). This is especially true when knowledge of the variational DAS component is lacking. During this work a new tool for observation targeting, called the Variance Reduction Field (VRF), was developed. This technique computes the expected variance reduction of a forecast score function that quantifies forecast quality. The increase of forecast quality, that is, the reduction of the variance of that function, is linked to the location of an assimilated test probe, and each model grid point is tested as a potential location. The VRF has been implemented in a Lorenz 96 model using two approaches: the first is based on a deterministic simulation, and the second uses an ensemble data assimilation and prediction system. The ensemble approach can be implemented easily when an assimilation ensemble and a forecast ensemble are already available, and it does not require the adjoint model. The implementation of the VRF in a full-scale, operational NWP system has not been conducted during this work; however, a preliminary feasibility study was carried out on implementing the VRF within the OOPS environment (2013 version). After a description of the different components of OOPS, the elements required for the implementation of the VRF are described.
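For a single pseudo-observation of one grid point x_i with error variance r, the Kalman analysis reduces the variance of a scalar forecast score S by cov(S, x_i)^2 / (var(x_i) + r), a quantity that can be estimated for every grid point directly from ensemble statistics without an adjoint model. The sketch below computes such a field for a Lorenz 96 ensemble; the score function (the mean of the first ten variables), the ensemble size, the integration scheme, and the observation error variance are illustrative assumptions, and the actual VRF formulation in the thesis may differ in detail.

```python
import numpy as np

def lorenz96_rhs(x, forcing=8.0):
    """Lorenz 96 tendencies for a periodic 1-D array of state variables (last axis)."""
    return (np.roll(x, -1, axis=-1) - np.roll(x, 2, axis=-1)) * np.roll(x, 1, axis=-1) - x + forcing

def rk4_step(x, dt=0.05):
    """One fourth-order Runge-Kutta step of the Lorenz 96 model."""
    k1 = lorenz96_rhs(x)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2)
    k4 = lorenz96_rhs(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

rng = np.random.default_rng(0)
n_var, n_ens, r_obs = 40, 50, 1.0

# Spin up a forecast ensemble from perturbed initial conditions
ensemble = 8.0 + rng.normal(scale=1.0, size=(n_ens, n_var))
for _ in range(200):
    ensemble = rk4_step(ensemble)

# Scalar forecast score S (here assumed to be the mean of the first ten variables)
score = ensemble[:, :10].mean(axis=1)
anom_x = ensemble - ensemble.mean(axis=0)
anom_s = score - score.mean()

cov_sx = anom_x.T @ anom_s / (n_ens - 1)        # cov(S, x_i) for every grid point i
var_x = anom_x.var(axis=0, ddof=1)              # var(x_i)

# Expected reduction of var(S) from assimilating one pseudo-observation at point i
vrf = cov_sx**2 / (var_x + r_obs)
best = int(np.argmax(vrf))
print(f"most informative probe location: grid point {best}, expected var(S) reduction {vrf[best]:.3f}")
```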
64. Advancing Streamflow Forecasts Through the Application of a Physically Based Energy Balance Snowmelt Model with Data Assimilation and Cyberinfrastructure Resources. Gichamo, Tseganeh Zekiewos, 01 May 2019.
The Colorado Basin River Forecast Center (CBRFC) provides forecasts of streamflow for purposes such as flood warning and water supply. Much of the water in these basins comes from spring snowmelt, and the forecasters at CBRFC currently employ a suite of models that include a temperature-index snowmelt model. While the temperature-index snowmelt model works well for weather and land cover conditions that do not deviate from those historically observed, the changing climate and alterations in land use necessitate the use of models that do not depend on calibrations based on past data. This dissertation reports work done to overcome these limitations through using a snowmelt model based on physically invariant principles that depends less on calibration and can directly accommodate weather and land use changes. The first part of the work developed an ability to update the conditions represented in the model based on observations, a process referred to as data assimilation, and evaluated resulting improvements to the snowmelt driven streamflow forecasts. The second part of the research was the development of web services that enable automated and efficient access to and processing of input data to the hydrological models as well as parallel processing methods that speed up model executions. These tasks enable the more detailed models and data assimilation methods to be more efficiently used for streamflow forecasts.
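One of the cyberinfrastructure pieces mentioned here is the use of parallel processing to speed up model executions, for example running many forcing realizations or ensemble members concurrently. The sketch below shows a generic pattern for farming out independent model runs with Python's standard library; the placeholder worker (a toy melt calculation), the member count, and the worker count are illustrative assumptions standing in for the actual snowmelt model and services developed in the dissertation.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def run_member(args):
    """Placeholder for one model run: returns seasonal melt for one forcing realization."""
    seed, n_days = args
    rng = np.random.default_rng(seed)
    temperature = 2.0 + 8.0 * np.sin(np.linspace(0.0, np.pi, n_days)) + rng.normal(0.0, 2.0, n_days)
    melt = np.clip(temperature, 0.0, None) * 3.0      # toy stand-in for the real model, mm/day
    return melt.sum()

if __name__ == "__main__":
    tasks = [(seed, 180) for seed in range(32)]        # 32 independent runs, 180-day season
    with ProcessPoolExecutor(max_workers=8) as pool:   # runs execute in parallel processes
        totals = list(pool.map(run_member, tasks))
    print(f"ensemble mean seasonal melt: {np.mean(totals):.0f} mm "
          f"(spread {np.std(totals):.0f} mm) from {len(totals)} parallel runs")
```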
65. Numerical Patient-Specific Models of Atrial Fibrillation. Gerard, Antoine, 10 July 2019.
Atrial arrhythmias are a major pathology in cardiology, and their study is a large research topic. To study them, many mathematical models of action potential propagation in the atria have been developed. Most of these generic models can reproduce typical activation sequences of the atria. Such models may have an experimental or even clinical interest, for example in helping to locate arrhythmic foci or in analyzing treatment failures for these arrhythmias. Nevertheless, to achieve this goal, it is necessary to be able to adjust the model as well as possible, based on experimental or clinical data.
Data assimilation, a mathematical discipline in which we seek to optimally combine theory and observations, is then a good candidate for the customization of action potential propagation models. In this thesis, we propose to study different data assimilation methods -- sequential and variational -- in order to adjust action potential propagation models to electroanatomical data. More precisely, we are interested in two possible applications of data assimilation: state estimation and parameter estimation.
First, we study a state observer which is able to correct the simulated propagation front localization based on the observed front localization. This observer is then used to complete an activation map obtained during a clinical procedure. Then, this observer is combined with a reduced-order Kalman filter in order to estimate the conductivity parameters of the action potential propagation model. A study of the joint state-parameter estimation strategy is then carried out to see how the method behaves in the presence of modeling errors. The method is then tested on a clinically acquired dataset.
Next, we look at variational data assimilation methods that allow the estimation of spatially distributed parameters. Several minimization problems, allowing the estimation of a conductivity parameter distributed in space, are introduced and analyzed. We show that the discretization of these minimization problems, in order to obtain numerical methods of resolution, can be complex. A numerical method is then implemented for one of the studied minimization problems, and three one-dimensional test cases are analyzed. Finally, we demonstrate the existence of a minimum for one of the studied objective functions, based on functional analysis results from the literature.
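The variational route described in the last paragraph, estimating a spatially distributed conductivity by minimizing a misfit between simulated and observed activation, can be sketched on a deliberately simple surrogate: a one-dimensional cable in which the local front speed is taken proportional to the square root of the conductivity, so activation times follow by integrating the inverse speed. The surrogate forward model, the two-region piecewise-constant parameterization, and the noise level below are illustrative assumptions, not the propagation model or the cost functionals analyzed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

n_cells, dx = 100, 0.1                       # 1-D cable discretized into cells (mm)
edges = np.array([0, 50, 100])               # two regions with distinct conductivities

def activation_times(log_sigma):
    """Toy eikonal-like forward model: front speed assumed proportional to sqrt(conductivity)."""
    sigma = np.repeat(np.exp(log_sigma), np.diff(edges))   # piecewise-constant field
    speed = np.sqrt(sigma)                                  # assumed speed law
    return np.cumsum(dx / speed)                            # activation time at each cell

rng = np.random.default_rng(1)
true_log_sigma = np.log([1.0, 0.25])
obs = activation_times(true_log_sigma) + rng.normal(scale=0.05, size=n_cells)

def cost(log_sigma):
    """Least-squares misfit between the simulated and observed activation maps."""
    residual = activation_times(log_sigma) - obs
    return 0.5 * np.dot(residual, residual)

result = minimize(cost, x0=np.log([0.5, 0.5]), method="Nelder-Mead")
print("estimated conductivities:", np.exp(result.x))
print("true conductivities:     ", np.exp(true_log_sigma))
```

A derivative-free optimizer is used here for brevity; with a spatially distributed parameter the gradient of such a cost is usually obtained through an adjoint computation, which is where the discretization questions raised in the abstract become delicate.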
66. Ensemble Statistics and Error Covariance of a Rapidly Intensifying Hurricane. Rigney, Matthew C., 16 January 2010.
This thesis presents an investigation of ensemble Gaussianity, the effect of non-Gaussianity on covariance structures, storm-centered data assimilation techniques, and the relationship between commonly used data assimilation variables and the underlying dynamics for the case of Hurricane Humberto. Using an Ensemble Kalman Filter (EnKF), a comparison of data assimilation results in storm-centered and Eulerian coordinate systems is made. In addition, the extent of the non-Gaussianity of the model ensemble is investigated and quantified. The effect of this non-Gaussianity on covariance structures, which play an integral role in the EnKF data assimilation scheme, is then explored. Finally, the correlation structures calculated from a Weather Research and Forecasting (WRF) ensemble forecast of several state variables are investigated in order to better understand the dynamics of this rapidly intensifying cyclone.

Hurricane Humberto rapidly intensified in the northwestern Gulf of Mexico from a tropical disturbance to a strong category one hurricane with 90 mph winds in 24 hours. Numerical models did not capture the intensification of Humberto well. This could be due in large part to initial condition error, which can be addressed by data assimilation schemes. Because the EnKF scheme is a linear theory developed on the assumption of the normality of the ensemble distribution, non-Gaussianity in the ensemble distribution used could affect the EnKF update. It is shown that multiple state variables do indeed show significant non-Gaussianity through an inspection of statistical moments.

In addition, storm-centered data assimilation schemes present an alternative to traditional Eulerian schemes by emphasizing the centrality of the cyclone to the assimilation window. This allows for an update that is most effective in the vicinity of the storm center, which is of most concern in mesoscale events such as Humberto.

Finally, the effect of non-Gaussian distributions on covariance structures is examined through data transformations of normal distributions. Various standard transformations of two Gaussian distributions are made. Skewness, kurtosis, and correlation between the two distributions are computed before and after the transformations. It can be seen that there is a relationship between a change in skewness and kurtosis and the correlation between the distributions. These effects are then taken into consideration as the dynamics contributing to the rapid intensification of Humberto are explored through correlation structures.
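The moment-and-correlation experiment described in the last paragraph can be reproduced in a few lines: draw a correlated bivariate Gaussian sample, apply a standard nonlinear transformation to one variable, and compare skewness, kurtosis, and correlation before and after. The exponential (lognormal) transformation, the sample size, and the prior correlation of 0.8 below are illustrative assumptions, not the specific transformations examined in the thesis.

```python
import numpy as np
from scipy.stats import skew, kurtosis, pearsonr

rng = np.random.default_rng(42)
n, rho = 100_000, 0.8
cov = np.array([[1.0, rho], [rho, 1.0]])
sample = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
x, y = sample[:, 0], sample[:, 1]

y_trans = np.exp(y)                       # a standard nonlinear (lognormal) transformation of y

for label, a, b in [("Gaussian pair", x, y), ("after exp transform", x, y_trans)]:
    print(f"{label:22s} skew(y)={skew(b):6.3f}  excess kurtosis(y)={kurtosis(b):6.3f}  "
          f"corr(x, y)={pearsonr(a, b)[0]:6.3f}")
```

The transformed variable acquires nonzero skewness and excess kurtosis, and the linear correlation with the untransformed variable weakens, which is the kind of moment-correlation relationship the thesis examines.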
67. Data Assimilation for Parameter Estimation in Coastal Ocean Hydrodynamics Modeling. Mayo, Talea Lashea, 25 February 2014.
Coastal ocean models are used for a vast array of applications. These applications include modeling tidal and coastal flows, waves, and extreme events, such as tsunamis and hurricane storm surges. Tidal and coastal flows are the primary application of this work as they play a critical role in many practical research areas such as contaminant transport, navigation through intracoastal waterways, development of coastal structures (e.g. bridges, docks, and breakwaters), commercial fishing, and planning and execution of military operations in marine environments, in addition to recreational aquatic activities. Coastal ocean models are used to determine tidal amplitudes, time intervals between low and high tide, and the extent of the ebb and flow of tidal waters, often at specific locations of interest. However, modeling tidal flows can be quite complex, as factors such as the configuration of the coastline, water depth, ocean floor topography, and hydrographic and meteorological impacts can have significant effects and must all be considered.

Water levels and currents in the coastal ocean can be modeled by solving the shallow water equations. The shallow water equations contain many parameters, and the accurate estimation of both tides and storm surge is dependent on the accuracy of their specification. Of particular importance are the parameters used to define the bottom stress in the domain of interest [50]. These parameters are often heterogeneous across the seabed of the domain. Their values cannot be measured directly and relevant data can be expensive and difficult to obtain. The parameter values must often be inferred and the estimates are often inaccurate, or contain a high degree of uncertainty [28].

In addition, as is the case with many numerical models, coastal ocean models have various other sources of uncertainty, including the approximate physics, numerical discretization, and uncertain boundary and initial conditions. Quantifying and reducing these uncertainties is critical to providing more reliable and robust storm surge predictions. It is also important to reduce the resulting error in the forecast of the model state as much as possible.

The accuracy of coastal ocean models can be improved using data assimilation methods. In general, statistical data assimilation methods are used to estimate the state of a model given both the original model output and observed data. A major advantage of statistical data assimilation methods is that they can often be implemented non-intrusively, making them relatively straightforward to implement. They also provide estimates of the uncertainty in the predicted model state. Unfortunately, with the exception of the estimation of initial conditions, they do not contribute to the information contained in the model. The model error that results from uncertain parameters is reduced, but information about the parameters in particular remains unknown.

Thus, the other commonly used approach to reducing model error is parameter estimation. Historically, model parameters such as the bottom stress terms have been estimated using variational methods. Variational methods formulate a cost functional that penalizes the difference between the modeled and observed state, and then minimize this functional over the unknown parameters. Though variational methods are an effective approach to solving inverse problems, they can be computationally intensive and difficult to code as they generally require the development of an adjoint model. They also are not formulated to estimate parameters in real time, e.g. as a hurricane approaches landfall. The goal of this research is to estimate parameters defining the bottom stress terms using statistical data assimilation methods.

In this work, we use a novel approach to estimate the bottom stress terms in the shallow water equations, which we solve numerically using the Advanced Circulation (ADCIRC) model. In this model, a modified form of the 2-D shallow water equations is discretized in space by a continuous Galerkin finite element method, and in time by finite differencing. We use the Manning's n formulation to represent the bottom stress terms in the model, and estimate various fields of Manning's n coefficients by assimilating synthetic water elevation data using a square root Kalman filter. We estimate three types of fields defined on both an idealized inlet and a more realistic spatial domain. For the first field, a Manning's n coefficient is given a constant value over the entire domain. For the second, we let the Manning's n coefficient take two distinct values, letting one define the bottom stress in the deeper water of the domain and the other define the bottom stress in the shallower region. And finally, because bottom stress terms are generally spatially varying parameters, we consider the third field as a realization of a stochastic process. We represent a realization of the process using a Karhunen-Loève expansion, and then seek to estimate the coefficients of the expansion.

We perform several observation system simulation experiments, and find that we are able to accurately estimate the bottom stress terms in most of our test cases. Additionally, we are able to improve forecasts of the model state in every instance. The results of this study show that statistical data assimilation is a promising approach to parameter estimation.
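The third bottom-stress field treats Manning's n as a realization of a spatial stochastic process written as a truncated Karhunen-Loève expansion, so the filter only has to estimate a handful of expansion coefficients instead of a value at every node. A minimal sketch of constructing such an expansion on a one-dimensional transect is given below; the exponential covariance kernel, the correlation length, the variance, and the truncation level are illustrative assumptions, not the values used in the ADCIRC experiments.

```python
import numpy as np

n_nodes = 200
s = np.linspace(0.0, 50.0, n_nodes)             # along-transect coordinate (km)
mean_n, sigma, corr_len = 0.025, 0.005, 10.0    # assumed Manning's n mean, std dev, correlation length

# Covariance matrix of the Gaussian process with an exponential kernel
C = sigma**2 * np.exp(-np.abs(s[:, None] - s[None, :]) / corr_len)

# Karhunen-Loeve modes: eigenpairs of the covariance, sorted by decreasing eigenvalue
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_modes = 5                                      # truncation level; these coefficients are what a filter would estimate
rng = np.random.default_rng(3)
xi = rng.standard_normal(n_modes)                # one random draw of the KL coefficients

field = mean_n + eigvecs[:, :n_modes] @ (np.sqrt(eigvals[:n_modes]) * xi)
print("fraction of variance captured by %d modes: %.3f"
      % (n_modes, eigvals[:n_modes].sum() / eigvals.sum()))
print("Manning's n range over the transect: %.4f to %.4f" % (field.min(), field.max()))
```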
68. Understanding the Coupled Surface-Groundwater System from Event to Decadal Scale Using an Uncalibrated Hydrologic Model and Data Assimilation. Tao, Jing, January 2015.
In this dissertation, a Hydrologic Data Assimilation System (HDAS) relying on the Duke Coupled surface-groundwater Hydrology Model (DCHM) and various data assimilation techniques, including the EnKF (Ensemble Kalman Filter), the fixed-lag EnKS (Ensemble Kalman Smoother) and the Asynchronous EnKF (AEnKF), was developed to 1) investigate the hydrological predictability of precipitation-induced natural hazards (i.e. floods and landslides) in the Southern Appalachians in North Carolina, USA, and 2) characterize the seasonal (wet/dry) and inter-annual variability of surface-groundwater interactions with implications for water resource management in the Upper Zambezi River Basin (UZRB) in southern Africa. The overarching research objective is to improve hydrologic predictability of precipitation-induced natural hazards and water resources in regions of complex terrain. The underlying research hypothesis is that hydrologic response in mountainous regions is governed by surface-subsurface interaction mechanisms, specifically interflow in soil-mantled slopes, surface-groundwater interactions in recharge areas, and wetland dynamics in alluvial floodplains at low elevations. The research approach is to investigate the modes of uncertainty propagation from atmospheric forcing and hydrologic states to processes at multiple scales using a parsimonious uncalibrated hydrologic model (i.e. the DCHM), and Monte Carlo and data assimilation methods. In order to investigate the coupled surface-groundwater system and assess the predictability of precipitation-induced natural hazards (i.e. floods and landslides) in headwater basins, including the propagation of uncertainty in QPE/QPF (Quantitative Precipitation Estimates/Forecasts) to QFE/QFF (Quantitative Flood Estimates/Forecasts), the DCHM was implemented first at high spatial resolution (250 m) in the Southern Appalachian Mountains (SAM) in North Carolina, USA. The DCHM modeling system was implemented subsequently at coarse resolution (5 km) in the Upper Zambezi River Basin (UZRB) in southern Africa for decadal-scale simulations (i.e. water years from 2002 to 2012).

The research in the SAM showed that joint QPE-QFF distributions for flood response at the headwater catchment scale are highly non-linear with respect to the space-time structure of rainfall, exhibiting strong dependence on basin physiography, initial soil moisture conditions (transient basin storage capacity), the space-time organization of runoff generation and conveyance mechanisms, and in particular interflow dynamics. The errors associated with QPEs and QPFs were characterized using rainfall observations from a dense raingauge network in the Pigeon River Basin, resulting in a simple linear regression model for adjusting/improving QPEs. Deterministic QFEs simulated by the DCHM agree well with observations, with Nash-Sutcliffe (NS) coefficients of 0.8 to 0.9. Limitations with state-of-the-science operational QPF and the impact of even limited improvements in rainfall forcing were demonstrated through an experiment consisting of nudging satellite-like observations (i.e. adjusted QPEs) into operational QPE/QPF, which showed significant improvement in QFF performance, especially when the timing of the satellite overpass is such that it captures transient episodes of heavy rainfall during the event. The research further showed that the dynamics of subsurface hydrologic processes play an important role as a trigger mechanism of shallow landslides through soil moisture redistribution by interflow. Specifically, transient mass fluxes associated with the temporal-spatial dynamics of interflow govern the timing of shallow landslide initiation, and subsequent debris flow mobilization, independently of storm characteristics such as precipitation intensity and duration. Interflow response was shown to be dominant at high elevations in the presence of deep soils as well as in basins with large alluvial fans or unconsolidated debris flow deposits. In recharge areas and where subsurface flow is an important contribution to streamflow, subsurface-groundwater interactions determine initial hydrologic conditions (e.g. soil moisture states and water table position), which in turn govern the timing and magnitude of flood response at the event scale. More generally, surface-groundwater interactions are essential to capture low flows in the summer season, and generally during persistent dry weather and drought conditions. Future advances in QFF and landslide monitoring remain principally constrained by progress in QPE and QPF at the spatial resolution necessary to resolve rainfall-interflow dynamics in mountainous regions.

The predictability of QFE/QFF was further scrutinized in a complete operational environment during the Intense Observing Period (IOP) of the Integrated Precipitation and Hydrology Experiment (IPHEx-IOP), in order to investigate the predictability of floods (and flashfloods) in headwater catchments of various drainage sizes in the Southern Appalachians. With the DCHM, a variety of operational QPEs were used to produce hydrological hindcasts for the previous day, from which the final states were used as initial conditions in the hydrological forecast for the current day. Although the IPHEx operational testbed results were promising in terms of not having missed any of the flash flood events during the IOP, with large lead times of up to 6 hours, significant errors of overprediction or underprediction were identified that could be traced back to the QPFs and to subgrid-scale variability of radar QPEs. Furthermore, the added value of improving QFE/QFF through assimilating discharge observations into the DCHM was investigated for advancing flood forecasting skill in the operational mode. Both the flood hindcast and forecast results were significantly improved by assimilating the discharge observations into the DCHM using the EnKF, the fixed-lag EnKS and the Asynchronous EnKF (AEnKF). The results not only demonstrate the utility of discharge assimilation in operational forecasts, but also reveal the importance of initial water storage in the basin for issuing flood forecasts. Specifically, hindcast NSEs as high as 0.98, 0.71 and 0.99 at 15-min time-scales were attained for three headwater catchments in the inner mountain region, demonstrating that assimilation of discharge observations at the basin's outlet can reduce the errors and uncertainties in soil moisture. Success in operational flood forecasting at lead times of 6, 9, 12 and 15 hours was also achieved through discharge assimilation, with NSEs of 0.87, 0.78, 0.72 and 0.51, respectively. The discharge assimilation experiments indicate that the optimal assimilation time window not only depends on basin properties but also on the storm-specific space-time structure of rainfall within the basin, and therefore adaptive, context-aware configurations of the data assimilation system should prove useful to address the challenges of flood prediction in headwater basins.

A physical parameterization of wetland hydrology was incorporated in the DCHM for water resource assessment studies in the UZRB. The spatial distribution of wetlands was introduced in the model using probability occurrence maps generated by logistic regression models using MODIS reflectance-based indices as predictor variables. Continuous model simulations for the 2002-2012 period show that the DCHM with wetland parameterization was able to reproduce wetland hydrology processes adequately, including surface-groundwater interactions. The modelled regional terrestrial water storage anomaly (TWSA) captured very well the inter- and intra-annual variability of the system's water storage changes, in good agreement with NASA's GRACE (Gravity Recovery and Climate Experiment) TWSA observations. Specifically, the positive trend of TWSA documented by GRACE was simulated independently by the DCHM. Furthermore, it was determined that the positive TWSA trend results from cumulative water storage in the sandy soils of the Cuando-Luana sub-basin when shifts in storm tracks move rainfall to the western sector of the Angolan High Plateau.

Overall, the dissertation demonstrates the capability of the DCHM in predicting specific characteristics of hydrological response to extreme events and also the inter- and intra-annual variability of surface-groundwater interactions at a decadal scale. The DCHM, coupled with a slope stability module and a wetland module featuring a surface-groundwater interaction mechanism, not only has great potential in the context of developing a regional warning system for natural hazards (i.e. flash floods and landslides), but is also promising for investigating regional water budgets at the decadal scale. In addition, the DCHM-HDAS demonstrated the ability to reduce forecasting uncertainty and errors associated with forcing data and the model proper, thus significantly improving the predictability of natural hazards. The HDAS could also be used for regional water resource assessment, especially in poorly gauged regions (e.g. southern Africa), taking advantage of satellite observations.
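Forecast and hindcast skill in this work is reported with Nash-Sutcliffe efficiencies (NSEs), which compare the squared error of the simulated hydrograph against the variance of the observed one. The short sketch below shows the computation; the synthetic 15-minute discharge series is an illustrative assumption standing in for gauge data and a DCHM hindcast.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2); 1 indicates a perfect fit."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((simulated - observed) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Synthetic 15-min discharge series (m^3/s) standing in for gauge data and a model hindcast
t = np.arange(0, 96)
observed = 20.0 + 80.0 * np.exp(-0.5 * ((t - 40) / 10.0) ** 2)
simulated = 18.0 + 75.0 * np.exp(-0.5 * ((t - 43) / 11.0) ** 2)   # slightly delayed, weaker peak

print(f"NSE = {nash_sutcliffe(observed, simulated):.3f}")
```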
69. Diagnostics and Generalizations for Parametric State Estimation. Nearing, Grey Stephen, January 2013.
This dissertation comprises a collection of five distinct research projects which apply, evaluate and extend common methods for land surface data assimilation. The introduction of novel diagnostics and extensions of existing algorithms is motivated by an example, drawn from estimating agricultural productivity, in which current methods fail. We subsequently develop methods, based on Shannon's theory of communication, to quantify the contributions from all possible factors to the residual uncertainty in state estimates after data assimilation, and to measure the amount of information contained in observations which is lost due to erroneous assumptions in the assimilation algorithm. Additionally, we discuss an appropriate interpretation of Shannon information which allows us to measure the amount of information contained in a model, and use this interpretation to measure the amount of information introduced during data assimilation-based system identification. Finally, we propose a generalization of the ensemble Kalman filter designed to relax one of its primary assumptions, namely that the observation function is linear.
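The information-theoretic diagnostics referred to here have a simple closed form in the Gaussian case, where the differential entropy of a univariate distribution is 0.5*ln(2*pi*e*sigma^2), so the information supplied by an observation can be measured as the entropy drop from prior to posterior. The sketch below computes that drop for a scalar Bayesian (Kalman-type) update; the prior and observation error variances are illustrative assumptions, and the diagnostics developed in the dissertation are considerably more general than this toy calculation.

```python
import numpy as np

def gaussian_entropy(var):
    """Differential entropy (nats) of a univariate Gaussian with variance var."""
    return 0.5 * np.log(2.0 * np.pi * np.e * var)

prior_var, obs_var = 4.0, 1.0                               # assumed prior and observation error variances
posterior_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)     # scalar Kalman/Bayes variance update

info_gain = gaussian_entropy(prior_var) - gaussian_entropy(posterior_var)
print(f"posterior variance: {posterior_var:.3f}")
print(f"information provided by the observation: {info_gain:.3f} nats "
      f"({info_gain / np.log(2.0):.3f} bits)")
```

Under erroneous algorithmic assumptions the actual posterior differs from the assumed one, and the corresponding loss of observation information can be quantified with the same entropy-based bookkeeping.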
70. Data Assimilation in Systems with Strong Signal Features. Rosenthal, William Steven, January 2014.
Filtering problems in high dimensional geophysical applications often require spatially continuous models to interpolate spatially and temporally sparse data. Many applications in numerical weather and ocean state prediction are concerned with tracking and assessing the uncertainty in the position of large scale vorticity features, such as storm fronts, jet streams, and hurricanes. Quantifying the amplitude variance in these features is complicated by the fact that both height and lateral perturbations in the feature geometry are represented in the same covariance estimate. However, when there are sufficient observations to detect feature information like spatial gradients, the positions of these features can be used to further constrain the filter, as long as the statistical model (cost function) has provisions for both height perturbations and lateral displacements.

Several authors since the 1990s have proposed various formalisms for the simultaneous modeling of position and amplitude errors, and the typical approaches to computing the generalized solutions in these applications are variational or direct optimization. The ensemble Kalman filter is often employed in large scale nonlinear filtering problems, but because it is predicated on Gaussian statistics, its estimators suffer from analysis deflation or collapse, as well as the usual curse of dimensionality in high dimensional Monte Carlo simulations. Moreover, there is no theoretical guarantee of the performance of the ensemble Kalman filter with nonlinear models. Particle filters, which employ importance sampling to focus attention on the important regions of the likelihood, have shown promise in recent studies on the control of particle size. Consider an ensemble forecast of a system with prominent feature information. The correction of displacements in these features, by pushing them into better agreement with observations, is an application of importance sampling, and Monte Carlo methods, including particle filters, and possibly the ensemble Kalman filter as well, are well suited to applications of feature displacement correction.

In the present work, we show that the ensemble Kalman filter performs well in problems where large features are displaced both in amplitude and position, as long as it is used with a statistical model that includes both function height and local position displacement in the model state. In a toy model, we characterize the performance-degrading effect that untracked displacements have on filters when large features are present. We then employ tools from classical physics and fluid dynamics to statistically model displacements by area-preserving coordinate transformations. These maps preserve the area of contours in the displaced function, and using strain measures from continuum mechanics, we regularize the statistics on these maps to ensure they model smooth, feature-preserving displacements. The position correction techniques are incorporated into the statistical model, and this modified ensemble Kalman filter is tested on a system of vortices driven by a stochastically forced barotropic vorticity equation. We find that when the position correction term is included in the statistical model, the modified filter provides estimates which exhibit a substantial reduction in analysis error variance, using a much smaller ensemble than what is required when the position correction term is removed from the model.
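The abstract's central claim, that the ensemble Kalman filter handles large displaced features when the statistical model carries both an amplitude and a position variable, can be illustrated with a small augmented-state update. In the sketch below the state of each member is the pair (amplitude, center position) of a Gaussian bump observed at a few grid points, and a stochastic (perturbed-observation) EnKF analysis is applied in that feature space; the bump model, the grid, the ensemble configuration, and the observation network are illustrative assumptions, and the dissertation's area-preserving coordinate transformations are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(7)
x_grid = np.linspace(0.0, 100.0, 101)
width = 8.0

def bump(params):
    """Feature model: a Gaussian bump with state = (amplitude, center position)."""
    amp, center = params
    return amp * np.exp(-0.5 * ((x_grid - center) / width) ** 2)

# Truth and noisy observations at a few grid points
truth = np.array([10.0, 62.0])
obs_idx = np.array([20, 40, 60, 80])
obs_err = 0.5
y = bump(truth)[obs_idx] + rng.normal(scale=obs_err, size=obs_idx.size)

# Prior ensemble of (amplitude, position) states: displaced and too weak on average
n_ens = 100
ens = np.column_stack([rng.normal(7.0, 2.0, n_ens), rng.normal(50.0, 6.0, n_ens)])

# Stochastic (perturbed-observation) EnKF update carried out in the augmented feature space
h_ens = np.array([bump(p)[obs_idx] for p in ens])           # predicted observations per member
p_anom = ens - ens.mean(axis=0)
h_anom = h_ens - h_ens.mean(axis=0)
P_ph = p_anom.T @ h_anom / (n_ens - 1)                       # cross-covariance (2 x n_obs)
P_hh = h_anom.T @ h_anom / (n_ens - 1) + obs_err**2 * np.eye(obs_idx.size)
K = P_ph @ np.linalg.inv(P_hh)                               # ensemble Kalman gain
perturbed_y = y + rng.normal(scale=obs_err, size=(n_ens, obs_idx.size))
ens_post = ens + (perturbed_y - h_ens) @ K.T

print("truth (amplitude, position):", truth)
print("prior ensemble mean:        ", ens.mean(axis=0).round(2))
print("posterior ensemble mean:    ", ens_post.mean(axis=0).round(2))
```

Because the position variable is part of the state, the regression onto the observed bump values pulls the ensemble's center toward the true feature location as well as correcting its amplitude, which is the behavior the dissertation studies with far more general displacement models.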