11

O impacto do uso da técnica de assimilação de dados 3DVAR nos prognósticos do modelo WRF

Macedo, Luana Ribeiro January 2014 (has links)
The use of meteorological data assimilation is extremely important for correcting imprecisions in the observational data that compose the initial and boundary conditions of weather forecasting models. This work applies the 3DVAR data assimilation technique of the mesoscale WRF (Weather Research and Forecasting) model, with the main objective of analysing the impact of assimilating meteorological data from several sources (GTS, the Global Telecommunication System; automatic surface stations; and radar data) on WRF forecasts. To check the consistency of the assimilation in WRF, the analyses produced with and without data assimilation were compared. Once consistency was confirmed, the procedures required to generate forecasts with data assimilation were carried out for each case individually. The assimilation experiments were run for each data type separately and for all types together, allowing the impact of each data source on the forecast to be assessed. The results were compared spatially against the GFS (Global Forecast System) global model and against Tropical Rainfall Measuring Mission (TRMM) satellite data.
Accumulated precipitation was compared and validated spatially against the TRMM data: the January case showed an overestimation of the accumulated values in some regions and the April case an underestimation, attributable to the temporal sampling of the TRMM satellite, which probably did not coincide with the times at which precipitation occurred. When point rainfall totals were compared with automatic-station data, most of the experiments proved effective. In the January case study, the assimilated data also improved the simulated intensity and location of the convective cell. The temperature and wind variables were compared spatially with the GFS analyses. Simulated temperatures were at times higher and at times lower than GFS, but the results were nevertheless satisfactory, since the simulations captured higher temperatures before the passage of the frontal system and lower temperatures after it. The wind field showed a small discrepancy in magnitude in all simulations, but the wind direction was coherent, even reproducing the cyclone present in the April case. For the vertical profiles of temperature and dew-point temperature the impact of data assimilation was small, although both simulations represented the profiles well when compared with the observed profile. In summary, the study shows that, despite some inconsistencies, 3DVAR assimilation contributes significantly to WRF weather forecasts.
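The 3DVAR analysis described above minimizes a cost function that balances a background (first-guess) state against observations, weighted by their error covariances. A minimal sketch of that idea, with invented toy numbers rather than the thesis's WRF setup:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: 3-variable state, 2 observations (illustrative values only).
x_b = np.array([1.0, 2.0, 3.0])          # background (first guess)
B = np.diag([0.5, 0.5, 0.5])             # background-error covariance
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])          # observation operator
y = np.array([1.4, 2.5])                 # observations
R = np.diag([0.1, 0.1])                  # observation-error covariance

B_inv, R_inv = np.linalg.inv(B), np.linalg.inv(R)

def cost(x):
    # J(x) = (x-xb)' B^-1 (x-xb) + (Hx-y)' R^-1 (Hx-y)  (constant factor omitted)
    db = x - x_b
    do = H @ x - y
    return db @ B_inv @ db + do @ R_inv @ do

x_a = minimize(cost, x_b).x              # iteratively minimized analysis

# For a linear H the same analysis follows in closed form (Kalman/BLUE update):
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a_exact = x_b + K @ (y - H @ x_b)
```

The analysis is pulled from the background toward the observations in proportion to the relative error variances; the unobserved second variable is unchanged here because B is diagonal.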
12

Predictive modelling and uncertainty quantification of UK forest growth

Lonsdale, Jack Henry January 2015 (has links)
Forestry in the UK is dominated by coniferous plantations. Sitka spruce (Picea sitchensis) and Scots pine (Pinus sylvestris) are the most prevalent species and are mostly grown in single-age monoculture stands. The forest strategies for Scotland, England, and Wales all include efforts to achieve further afforestation, with the aim of providing a multi-functional forest with a broad range of benefits. Because of the time scales involved in forestry, accurate forecasts of stand productivity, along with clearly defined uncertainties, are essential to forest managers. These can be provided by a range of approaches to modelling forest growth. In this project, model comparison, Bayesian calibration, and data assimilation methods were all used to try to improve forecasts, and the understanding of their uncertainty, for the two most important conifers in UK forestry. Three different forest growth models were compared in simulating the growth of Scots pine: a yield table approach, the process-based 3PGN model, and a Stand Level Dynamic Growth (SLeDG) model. Predictions were compared graphically over the typical productivity range for Scots pine in the UK, and the strengths and weaknesses of each model were considered. All three produced similar growth trajectories. The greatest difference between models was in volume and biomass in unthinned stands, where the yield table predicted a much larger range than the other two models. Future advances in data availability and computing power should allow greater use of process-based models, but in the interim more flexible dynamic growth models may be more useful than static yield tables for providing predictions that extend to non-standard management prescriptions and estimates of early growth and yield. A Bayesian calibration of the SLeDG model was carried out for both Sitka spruce and Scots pine in the UK for the first time.
Bayesian calibration allows both model structure and parameters to be assessed simultaneously in a probabilistic framework, providing a model with which forecasts and their uncertainty can be better understood and quantified using posterior probability distributions. Two different structures for including local productivity in the model were compared with a Bayesian model comparison, and a complete calibration of the more probable model structure was then carried out. Example forecasts from the calibration were compatible with existing yield tables for both species. This method could be applied to other species or other model structures in the future. Finally, data assimilation was investigated as a way of reducing forecast uncertainty. Data assimilation assumes that neither observations nor models provide a perfect description of a system, but that combining them may provide the best estimate. SLeDG model predictions and LiDAR measurements for sub-compartments within Queen Elizabeth Forest Park were combined with an ensemble Kalman filter. Uncertainty in all of the state variables was reduced after the second data assimilation. However, errors in stand delineation and estimated stand yield class may have increased the observational uncertainty, reducing the efficacy of the method for reducing overall uncertainty.
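The ensemble Kalman filter used above combines model predictions and measurements in proportion to their uncertainties, with the forecast covariance estimated from an ensemble. A generic sketch of one (stochastic, perturbed-observation) analysis step, with invented numbers standing in for the SLeDG/LiDAR setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, y, H, R):
    """Stochastic (perturbed-observation) EnKF analysis step.
    X: (n, N) ensemble of state forecasts; y: (m,) observation;
    H: (m, n) observation operator; R: (m, m) observation-error covariance."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)       # ensemble anomalies
    Pf = A @ A.T / (N - 1)                      # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return X + K @ (Y - H @ X)                  # analysis ensemble

# Illustration: a 2-variable stand state (say, volume and top height),
# with only the first variable observed. All values are hypothetical.
N = 200
X = np.array([[300.0], [20.0]]) + np.array([[30.0], [2.0]]) * rng.standard_normal((2, N))
H = np.array([[1.0, 0.0]])
R = np.array([[25.0]])
Xa = enkf_update(X, np.array([350.0]), H, R)
```

After the update the ensemble mean of the observed variable moves toward the observation and the ensemble spread shrinks, which is the uncertainty reduction the abstract refers to.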
13

Short-Term Irradiance Forecasting Using an Irradiance Monitoring Network, Satellite Imagery, and Data Assimilation

Lorenzo, Antonio Tomas January 2017 (has links)
Solar and other renewable power sources are becoming an integral part of the electrical grid in the United States. In the Southwest US, solar and wind power plants already serve over 20% of the electrical load during the daytime on sunny days in the Spring. While solar power produces fewer emissions and has a lower carbon footprint than burning fossil fuels, solar power is only generated during the daytime and it is variable due to clouds blocking the sun. Electric utilities that are required to maintain a reliable electricity supply benefit from anticipating the schedule of power output from solar power plants. Forecasting the irradiance reaching the ground, the primary input to a solar power forecast, can help utilities understand and respond to the variability. This dissertation will explore techniques to forecast irradiance that make use of data from a network of sensors deployed throughout Tucson, AZ. The design and deployment of inexpensive sensors used in the network will be described. We will present a forecasting technique that uses data from the sensor network and outperforms a reference persistence forecast for one minute to two hours in the future. We will analyze the errors of this technique in depth and suggest ways to interpret these errors. Then, we will describe a data assimilation technique, optimal interpolation, that combines estimates of irradiance derived from satellite images with data from the sensor network to improve the satellite estimates. These improved satellite estimates form the base of future work that will explore generating forecasts while continuously assimilating new data.
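Optimal interpolation, the data assimilation technique mentioned above, corrects a background field (here, a satellite irradiance estimate) using point sensors weighted by distance-dependent background-error covariances. A one-dimensional sketch with an assumed exponential correlation model and made-up sensor values, not the dissertation's actual network configuration:

```python
import numpy as np

# Toy transect of satellite-derived clear-sky index values (background).
grid_x = np.linspace(0.0, 10.0, 21)        # positions in km
x_b = np.full(grid_x.size, 0.8)            # satellite estimate everywhere

# Two ground sensors on the transect (positions in km, measured values).
obs_x = np.array([3.0, 7.0])
y = np.array([0.6, 0.9])

# Background-error covariance from an exponential correlation model;
# the variance and the 2 km length scale are illustrative choices.
sigma_b2, L = 0.02, 2.0
def cov(xa, xb):
    return sigma_b2 * np.exp(-np.abs(xa[:, None] - xb[None, :]) / L)

B_go = cov(grid_x, obs_x)                  # grid-to-obs covariances
B_oo = cov(obs_x, obs_x)                   # obs-to-obs covariances
R = 0.005 * np.eye(2)                      # sensor error covariance

# OI analysis: x_a = x_b + B_go (B_oo + R)^-1 (y - H x_b)
H_xb = np.interp(obs_x, grid_x, x_b)       # background at sensor sites
x_a = x_b + B_go @ np.linalg.solve(B_oo + R, y - H_xb)
```

The corrected field dips toward 0.6 near x = 3 km and rises toward 0.9 near x = 7 km, decaying back to the satellite value with the assumed length scale; this is the sense in which the sensor network "improves the satellite estimates".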
14

Data and Dynamics Driven Approaches for Modelling and Forecasting the Red Sea Chlorophyll

Dreano, Denis 31 May 2017 (has links)
Phytoplankton is at the base of the marine food chain and therefore plays a fundamental role in the ocean ecosystem. However, the large-scale phytoplankton dynamics of the Red Sea are not yet well understood, mainly owing to the lack of historical in situ measurements. As a result, our knowledge in this area relies mostly on remotely sensed observations and large-scale numerical marine ecosystem models. Models are very useful for identifying the mechanisms driving variations in chlorophyll concentration and have practical applications for fisheries operations and the monitoring of harmful algal blooms. Modelling approaches can be divided into physics-driven (dynamical) approaches and data-driven (statistical) approaches. Dynamical models are based on a set of differential equations representing the transfer of energy and matter between different subsets of the biota, whereas statistical models identify relationships between variables from statistical relations within the available data. The goal of this thesis is to develop, implement, and test novel dynamical and statistical modelling approaches for studying and forecasting the variability of chlorophyll concentration in the Red Sea. These new models are evaluated in terms of their ability to efficiently forecast and explain the regional chlorophyll variability. We also propose innovative synergistic strategies for combining data-driven and physics-driven approaches to further enhance chlorophyll forecasting capability and efficiency.
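A data-driven (statistical) model of the kind contrasted with dynamical models above can be as simple as a lagged least-squares regression. The sketch below learns to predict next month's chlorophyll anomaly from the current anomaly and an SST-like predictor; the data and predictor choice are synthetic assumptions, not the thesis's actual Red Sea model:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 120                                     # months of training data
sst = rng.standard_normal(T)                # synthetic SST anomaly predictor
chl = np.zeros(T)
for t in range(1, T):                       # synthetic "truth" to learn from
    chl[t] = 0.7 * chl[t - 1] - 0.3 * sst[t - 1] + 0.1 * rng.standard_normal()

# Fit chl[t+1] ~ a * chl[t] + b * sst[t] by ordinary least squares.
X = np.column_stack([chl[:-1], sst[:-1]])
coef, *_ = np.linalg.lstsq(X, chl[1:], rcond=None)
# A one-step forecast is then simply: chl_next = np.array([chl[-1], sst[-1]]) @ coef
```

With enough data the fitted coefficients recover the generating values (about 0.7 and -0.3), which is the statistical model "identifying relationships between variables" that the abstract describes.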
15

Assimilation and Forecast Studies on Localized Heavy Rainfall Events Using a Cloud-Resolving 4-Dimensional Variational Data Assimilation System

Kawabata, Takuya 23 May 2014 (has links)
Kyoto University / 0048 / New system, doctorate by thesis / Doctor of Science / Otsu No. 12830 / Ronri-haku No. 1541 / New system||Sci||1578 (University Library) / 31368 / (Examination committee) Prof. Shigeo Yoden, Prof. Hirohiko Ishikawa, Prof. Hitoshi Mukougawa / Qualified under Article 4, Paragraph 2 of the Degree Regulations / Doctor of Science / Kyoto University / DGAM
16

REGULARIZATION OF THE BACKWARDS KURAMOTO-SIVASHINSKY EQUATION

Gustafsson, Jonathan January 2007 (has links)
We are interested in backward-in-time solution techniques for evolutionary PDE problems arising in fluid mechanics. In addition to their intrinsic interest, such techniques have applications in recently proposed retrograde data assimilation. As our model system we consider the terminal value problem for the Kuramoto-Sivashinsky equation in a 1D periodic domain. The Kuramoto-Sivashinsky equation, proposed as a model for interfacial and combustion phenomena, is often also adopted as a toy model for hydrodynamic turbulence because of its multiscale and chaotic dynamics. Such backward problems are typical examples of ill-posed problems, in which any disturbances are amplified exponentially during the backward march. Hence, regularization is required to solve such problems efficiently in practice. We consider regularization approaches in which the original ill-posed problem is approximated by a less ill-posed problem, achieved by adding a regularization term to the original equation. While such techniques are relatively well understood for linear problems, it is still unclear what effect they may have in the nonlinear setting. In addition to considering regularization terms with fixed magnitudes, we also explore a novel approach in which these magnitudes are adapted dynamically using simple concepts from control theory. / Thesis / Master of Science (MSc)
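The ill-posedness described above is easy to see in Fourier space. The sketch below uses the linear heat equation as a stand-in for the dissipative part of Kuramoto-Sivashinsky (an illustrative simplification, not the thesis's method): marching backward amplifies mode k by exp(k²t), so tiny noise explodes, and a crude spectral-truncation regularization tames it:

```python
import numpy as np

N, T = 128, 0.01
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers

u0 = np.sin(x) + 0.5 * np.sin(3 * x)       # "true" initial condition
# Forward heat equation u_t = u_xx: mode k decays by exp(-k^2 T).
uT = np.real(np.fft.ifft(np.exp(-k**2 * T) * np.fft.fft(u0)))
uT_noisy = uT + 1e-6 * np.random.default_rng(1).standard_normal(N)

def backward(u_end, k_cut):
    """March backward in time, zeroing modes with |k| > k_cut (regularization)."""
    amp = np.where(np.abs(k) <= k_cut, np.exp(k**2 * T), 0.0)
    return np.real(np.fft.ifft(amp * np.fft.fft(u_end)))

err_reg = np.abs(backward(uT_noisy, 10) - u0).max()    # small: noise filtered
err_raw = np.abs(backward(uT_noisy, N) - u0).max()     # huge: noise amplified
```

Even 1e-6 noise destroys the unregularized backward march, while truncating high wavenumbers recovers the initial condition almost exactly; the thesis's approach of adding a regularization term plays an analogous damping role for the nonlinear problem.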
17

Adjoint based solution and uncertainty quantification techniques for variational inverse problems

Hebbur Venkata Subba Rao, Vishwas 25 September 2015 (has links)
Variational inverse problems integrate computational simulations of physical phenomena with physical measurements in an informational feedback control system. Control parameters of the computational model are optimized such that the simulation results fit the physical measurements. The solution procedure is computationally expensive, since it involves running the simulation computer model (the forward model) and the associated adjoint model multiple times. In practice, our knowledge of the underlying physics is incomplete and hence the associated computer model is laden with model errors. Similarly, it is not possible to measure physical quantities exactly, and hence the measurements carry data errors. The errors in data and model adversely affect the inference solutions. This work develops methods to address the challenges posed by the computational costs and by the impact of data and model errors in solving variational inverse problems. The variational inverse problems of interest here are formulated as optimization problems constrained by partial differential equations (PDEs). The solution process requires multiple evaluations of the constraints, and therefore multiple solutions of the associated PDE. To alleviate the computational costs we develop a parallel-in-time discretization algorithm based on a nonlinear optimization approach. As in the parareal approach, the time interval is partitioned into subintervals, and local time integrations are carried out in parallel. Solution continuity equations across interval boundaries are added as constraints. All the computational steps - forward solutions, gradients, and Hessian-vector products - involve only ideally parallel computations and are therefore highly scalable. This work develops a systematic mathematical framework to compute the impact of data and model errors on the solution to variational inverse problems.
The computational algorithm makes use of first- and second-order adjoints and provides an a posteriori error estimate for a quantity of interest defined on the inverse solution (i.e., an aspect of the inverse solution). We illustrate the estimation algorithm on a shallow water model and on the Weather Research and Forecasting model. The presence of outliers in measurement data is common, and this negatively impacts the solution to variational inverse problems. The traditional approach, in which the inverse problem is formulated as a minimization problem in the $L_2$ norm, is especially sensitive to large data errors. To alleviate the impact of data outliers we propose to use robust norms such as the $L_1$ and Huber norms in data assimilation. This work develops a systematic mathematical framework to perform three- and four-dimensional variational data assimilation using the $L_1$ and Huber norms. The power of this approach is demonstrated by solving data assimilation problems in which measurements contain outliers. / Ph. D.
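The robustness argument above can be demonstrated on a toy variational fit. The sketch below recovers the initial state of a scalar decay model from observations containing one gross outlier, comparing a plain least-squares ($L_2$) fit with a Huber-loss fit; the model and numbers are invented for illustration, not taken from the dissertation:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy "4D-Var"-like fit: recover the initial state x0 of the linear model
# x_{t+1} = a * x_t from noisy observations, one of which is an outlier.
a, T = 0.9, 20
rng = np.random.default_rng(2)
x0_true = 5.0
traj = x0_true * a ** np.arange(T)
obs = traj + 0.05 * rng.standard_normal(T)
obs[10] += 4.0                              # gross outlier at one time

def residuals(p):
    return p[0] * a ** np.arange(T) - obs

fit_l2 = least_squares(residuals, [1.0]).x[0]                       # L2 norm
fit_huber = least_squares(residuals, [1.0], loss="huber", f_scale=0.1).x[0]
```

The Huber loss is quadratic for small residuals and linear for large ones, so the single bad observation pulls the $L_2$ estimate well away from 5.0 while the Huber estimate stays close, which is the behavior the abstract exploits.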
18

On the 3 M's of Epidemic Forecasting: Methods, Measures, and Metrics

Tabataba, Farzaneh Sadat 06 December 2017 (has links)
Over the past few decades, various computational and mathematical methodologies have been proposed for forecasting seasonal epidemics. In recent years, the deadly effects of major pandemics such as H1N1 influenza, Ebola, and Zika have compelled scientists to find new ways to improve the reliability and accuracy of epidemic forecasts. The improvement and variety of these prediction methods are undeniable. Nevertheless, many challenges remain unresolved on the path to forecasting outbreaks from surveillance data. Obtaining clean real-time data has always been an obstacle. Moreover, surveillance data are usually noisy, and handling the uncertainty of the observed data is a major issue for forecasting algorithms. Making correct modeling assumptions about the nature of an infectious disease is another dilemma. Oversimplified models can lead to inaccurate forecasts, whereas more complicated methods require additional computational resources and information; without these, the model may not converge to a unique optimum solution. Over the last decade there has been a significant effort towards better epidemic forecasting algorithms. However, the lack of standard, well-defined evaluation metrics impedes a fair judgment of the proposed methods. This dissertation is divided into two parts. In the first part, we present a Bayesian particle filter calibration framework integrated with an agent-based model to forecast the epidemic trend of diseases such as flu and Ebola. Our approach uses Bayesian statistics to estimate the underlying disease model parameters given the observed data and to handle the uncertainty in the reasoning. An individual-based model with different intervention strategies can involve a large number of unknown parameters that must be properly calibrated. As particle filters can collapse in very large-scale systems (the curse-of-dimensionality problem), achieving the optimum solution becomes more challenging.
Our proposed particle filter framework utilizes machine learning concepts to constrain the intractable search space. It incorporates a smart analyzer in the state dynamics unit that examines the predicted and observed data using machine learning techniques to guide the direction and amount of perturbation of each parameter in the search process. The second part of this dissertation focuses on providing standard measures for evaluating epidemic forecasts. We present an end-to-end framework that introduces epidemiologically relevant features (Epi-features), error measures, and a ranking schema as the main modules of the evaluation process. Lastly, we provide the evaluation framework as a software package named Epi-Evaluator and demonstrate the potential and capabilities of the framework by applying it to the output of different forecasting methods. / Ph. D. / Epidemics impose substantial costs on societies by deteriorating public health and disrupting economic trends. In recent years, the deadly effects of widespread pandemics such as H1N1, Ebola, and Zika have compelled scientists to find new ways to improve the reliability and accuracy of epidemic forecasts. Reliable prediction of future pandemics, together with efficient intervention plans for health care providers, could prevent or control disease propagation. Over the last decade there has been a significant effort towards better epidemic forecasting algorithms. The mission, however, is far from accomplished. Moreover, there has been no significant leap towards standard, well-defined evaluation metrics and criteria for a fair performance comparison between the proposed methods. This dissertation is divided into two parts. In the first part, we present a Bayesian particle filter calibration framework integrated with an agent-based model to forecast the epidemic trend of diseases such as flu and Ebola.
We model disease propagation via a large-scale agent-based model that simulates the disease spread across the contact network of people. The contact network consists of millions of nodes and is constructed from demographic information on individuals derived from census data. The agent-based model's configurations are mostly unknown parameters that must be properly calibrated. We present a Bayesian particle filter calibration approach to estimate the underlying disease model parameters given the observed data and to handle the uncertainty in the reasoning. As particle filters can collapse in very large-scale systems, achieving the optimum solution becomes more challenging. Our proposed particle filter framework utilizes machine learning concepts to constrain the intractable search space. It incorporates a smart analyzer unit that examines the predicted and observed data using machine learning techniques to guide the direction and amount of perturbation of each parameter in the search process. The second part of this dissertation focuses on providing standard measures for evaluating and comparing epidemic forecasts. We present a framework that introduces epidemiologically relevant features (Epi-features), error measures, and a ranking schema as the main modules of the evaluation process. Lastly, we provide the evaluation framework as a software package named Epi-Evaluator and demonstrate the potential and capabilities of the framework by applying it to the output of different forecasting methods.
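The core of the particle filter calibration above can be sketched on a toy compartmental model rather than the dissertation's million-node agent-based model: carry the unknown transmission rate in each particle's state, weight particles by how well their predicted incidence matches the (synthetic) surveillance data, and resample. All constants below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Discrete SIR step on population fractions; beta is the unknown parameter.
def sir_step(S, I, beta, gamma=0.2):
    new_inf = beta * S * I
    return S - new_inf, I + new_inf - gamma * I

# Synthetic "surveillance" data generated with the true beta.
beta_true, T = 0.5, 40
S, I = 0.99, 0.01
obs = []
for _ in range(T):
    S, I = sir_step(S, I, beta_true)
    obs.append(I + 0.002 * rng.standard_normal())

# Bootstrap particle filter over the augmented state (S, I, beta):
# beta is a static parameter carried by each particle and jittered
# slightly each step so the parameter ensemble does not collapse.
N = 2000
P = np.column_stack([np.full(N, 0.99), np.full(N, 0.01),
                     rng.uniform(0.1, 1.0, N)])        # columns: S, I, beta
for y in obs:
    P[:, 2] += 0.005 * rng.standard_normal(N)          # parameter jitter
    P[:, 0], P[:, 1] = sir_step(P[:, 0], P[:, 1], P[:, 2])
    w = np.exp(-0.5 * ((y - P[:, 1]) / 0.005) ** 2)    # Gaussian likelihood
    w += 1e-300                                        # guard against underflow
    w /= w.sum()
    P = P[rng.choice(N, N, p=w)]                       # multinomial resample

beta_est = P[:, 2].mean()        # posterior mean, near beta_true
```

The jitter step is a simple stand-in for the dissertation's machine-learning-guided perturbation of each parameter; both address the same collapse problem in static-parameter estimation.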
19

Large-scale snowpack estimation using ensemble data assimilation methodologies, satellite observations and synthetic datasets

Su, Hua 03 June 2010 (has links)
This work focuses on a series of studies that contribute to the development and testing of advanced large-scale snow data assimilation methodologies. Compared to existing snow data assimilation methods and strategies, which are limited in domain size and landscape coverage, in the number of satellite sensors used, and in the accuracy and reliability of the product, the present work covers a continental domain, compares single- and multi-sensor data assimilation, and explores uncertainties in parameters and model structure. In the first study, a continental-scale snow water equivalent (SWE) data assimilation experiment is presented, which incorporates Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover fraction (SCF) data into Community Land Model (CLM) estimates via the ensemble Kalman filter (EnKF). The greatest improvements of the EnKF approach are centered in the mountainous West, the northern Great Plains, and the west and east coast regions, with the magnitude of the corrections (compared to the use of the model only) greater than one standard deviation (calculated from the SWE climatology) in some areas. Relatively poor performance of the EnKF, however, is found in the boreal forest region. In the second study, snowpack-related parameter and model structure errors are explicitly considered through a group of synthetic EnKF simulations which integrate synthetic datasets with model estimates. The inclusion of a new parameter estimation scheme augments the EnKF performance, for example increasing the Nash-Sutcliffe efficiency of season-long SWE estimates from 0.22 (without parameter estimation) to 0.96. In this study, the model structure error is found to significantly impact the robustness of the parameter estimation. In the third study, a multi-sensor snow data assimilation system over North America was developed and evaluated.
It integrates both Gravity Recovery and Climate Experiment (GRACE) terrestrial water storage (TWS) and MODIS SCF information into CLM using the ensemble Kalman filter (EnKF) and smoother (EnKS). This GRACE/MODIS data assimilation run achieves significantly better performance than the MODIS-only run in the Saint Lawrence, Fraser, Mackenzie, Churchill & Nelson, and Yukon river basins. These improvements demonstrate the value of integrating complementary information for continental-scale snow estimation.
20

Dynamical aspects of atmospheric data assimilation in the tropics

Žagar, Nedjeljka January 2004 (has links)
A faithful depiction of the tropical atmosphere requires three-dimensional sets of observations. Despite the increasing number of observations presently available, these will hardly ever encompass the entire atmosphere and, in addition, observations have errors. Additional (background) information will always be required to complete the picture. Valuable added information comes from the physical laws governing the flow, usually mediated via a numerical weather prediction (NWP) model. These models, however, are never going to be error-free, which is why a reliable estimate of their errors poses a real challenge, since the whole truth will never be within our grasp.

The present thesis addresses the question of improving the analysis procedures for NWP in the tropics. Improvements are sought by addressing the following issues:

- the efficiency of the internal model adjustment,
- the potential of reliable background-error information, as compared to observations,
- the impact of new, space-borne line-of-sight wind measurements, and
- the usefulness of multivariate relationships for data assimilation in the tropics.

Most NWP assimilation schemes are effectively univariate near the equator. In this thesis, a multivariate formulation of variational data assimilation in the tropics has been developed. The proposed background-error model supports a mass-wind coupling based on convectively coupled equatorial waves. The resulting assimilation model produces balanced analysis increments and thereby increases the efficiency of all types of observations.

Idealized adjustment and multivariate analysis experiments highlight the importance of direct wind measurements in the tropics. In particular, the presented results confirm the superiority of wind observations over mass data, in spite of the exact multivariate relationships available from the background information. The internal model adjustment is also more efficient for wind observations than for mass data.

In accordance with these findings, new satellite wind observations are expected to contribute towards the improvement of NWP and climate modeling in the tropics. Although incomplete, the new wind-field information has the potential to reduce uncertainties in the tropical dynamical fields if used together with the existing satellite mass-field measurements.

The results obtained by applying the new background-error representation to the tropical short-range forecast errors of a state-of-the-art NWP model suggest that achieving useful tropical multivariate relationships may be feasible within an operational NWP environment.
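The difference between univariate and multivariate assimilation above comes down to cross-covariances in the background-error model: with a mass-wind coupling, a single height observation also produces a balanced wind increment. A two-variable sketch with invented numbers (not the thesis's equatorial-wave formulation):

```python
import numpy as np

# State: [h, u] (a mass variable and a wind variable).
# Background errors correlated with coefficient rho; values are illustrative.
sigma_h, sigma_u, rho = 10.0, 2.0, -0.8
B = np.array([[sigma_h**2,              rho * sigma_h * sigma_u],
              [rho * sigma_h * sigma_u, sigma_u**2             ]])
H = np.array([[1.0, 0.0]])                  # only h is observed
R = np.array([[4.0]])                       # height observation error

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
increment = K @ np.array([5.0])             # innovation of +5 in height
# increment[0] > 0: height pulled toward the observation;
# increment[1] < 0: balanced wind response via the cross-covariance.
# With rho = 0 the wind increment is exactly zero: the univariate case.
```

This is why an effectively univariate scheme wastes mass observations near the equator: without the coupling, they never update the wind field, whereas the multivariate background-error model lets every observation type project onto balanced increments in all variables.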
