51. Leveraging the information content of process-based models using Differential Evolution and the Extended Kalman Filter
Howard, Lucas, 01 January 2016
Process-based models are used in a diverse array of fields, including environmental engineering, where they provide supporting information to engineers, policymakers, and stakeholders. Recent advances in remote sensing and data storage technology have created opportunities for improving the application of process-based models and for visualizing data, but they also present new challenges. Larger quantities of data may allow models to be constructed and calibrated more thoroughly and precisely, but depending on the type and volume of data, it is not always clear how to incorporate the information content of these data into a coherent modeling framework. In this context, using process-based models in new ways to provide decision support or to produce more complete and flexible predictive tools is a key task in the modern, data-rich engineering world. In standard usage, models simulate specific scenarios; they can also be embedded in an automated design optimization algorithm to provide decision support, or in a data-assimilation framework to incorporate the information content of ongoing measurements. In that vein, this thesis presents and demonstrates extensions and refinements that leverage the best of what process-based models offer, using Differential Evolution (DE) and the Extended Kalman Filter (EKF).
Coupling multi-objective optimization to a process-based model can provide valuable information, provided the objective function is constructed to reflect the multi-objective problem and its constraints. That, in turn, typically requires weighting two or more competing objectives in the early stages of an analysis. The methodology proposed here relaxes that requirement by framing the model optimization as a sensitivity analysis. For demonstration, this is implemented using a surface water model (HEC-RAS), and the impact of floodplain access upstream and downstream of a fixed bridge on bridge scour is analyzed. DE, an evolutionary global optimization algorithm, is wrapped around a calibrated HEC-RAS model. Multiple objective functions, representing different relative weightings of the two objectives, are used; the resulting rank-orders of river reach locations by floodplain-access sensitivity are consistent across these functions.
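As an illustration of the wrapper pattern described above, the following minimal sketch uses SciPy's differential_evolution around a stand-in model function. Here `run_model` and the weighted objective are hypothetical placeholders for the calibrated HEC-RAS runs and the thesis's actual objective functions; only the overall structure (sweeping the weighting to obtain a sensitivity analysis) reflects the approach.

```python
from scipy.optimize import differential_evolution

def run_model(params):
    """Placeholder for a calibrated model run returning two competing
    objectives, e.g. (bridge scour depth, floodplain conveyance loss)."""
    scour = (params[0] - 1.5) ** 2 + 0.1 * params[1]
    conveyance_loss = (params[1] - 0.8) ** 2 + 0.05 * params[0]
    return scour, conveyance_loss

def make_objective(w):
    """Weighted sum of the two objectives; sweeping w over several values
    turns the optimisation into a sensitivity analysis of the weighting."""
    def objective(params):
        f1, f2 = run_model(params)
        return w * f1 + (1.0 - w) * f2
    return objective

bounds = [(0.0, 3.0), (0.0, 2.0)]   # physically defensible parameter ranges
for w in (0.2, 0.5, 0.8):           # several relative weightings
    result = differential_evolution(make_objective(w), bounds, seed=0)
    print(f"w = {w}: x* = {result.x}, f* = {result.fun:.4f}")
```

If the optima (or the rank-ordering they induce) agree across weightings, the conclusion is insensitive to the weighting choice, which is the point of the sensitivity framing.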
To extend the applicability of data assimilation methods, this thesis proposes relaxing the requirement that the model be calibrated before assimilation, provided its parameters remain within physically defensible ranges. The model is then dynamically calibrated against new state estimates, which themselves depend on the behavior of the model. Feasibility is demonstrated using the EKF and a synthetic dataset of pendulum motion. Starting from an uncalibrated model, the dynamic calibration method reduces the variance of the prediction errors below that of the measurement errors and yields calibration-parameter estimates that converge to the true values. A detailed application of the dynamic calibration method to river sediment transport modeling is proposed, including a method for automated calibration using the sediment grain size distribution as a calibration parameter.
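The dynamic calibration idea can be sketched with a standard state-augmentation EKF, assuming a simple pendulum whose length L is the unknown calibration parameter. The model, noise levels, and starting guess below are illustrative, not the thesis's exact setup.

```python
import numpy as np

g, dt = 9.81, 0.01

def f(x):
    """Forward model: state x = [theta, omega, L]. The pendulum length L is
    carried as a (nominally constant) augmented state so the filter can
    calibrate it while estimating the dynamic states."""
    theta, omega, L = x
    return np.array([theta + dt * omega,
                     omega - dt * (g / L) * np.sin(theta),
                     L])

def F_jac(x):
    """Jacobian of f, needed to propagate the EKF error covariance."""
    theta, omega, L = x
    return np.array([[1.0, dt, 0.0],
                     [-dt * (g / L) * np.cos(theta), 1.0,
                      dt * (g / L ** 2) * np.sin(theta)],
                     [0.0, 0.0, 1.0]])

H = np.array([[1.0, 0.0, 0.0]])    # only the angle theta is observed
Q = np.diag([1e-8, 1e-8, 1e-6])    # small process noise keeps L adjustable
R = np.array([[1e-4]])             # measurement noise variance (rad^2)

def ekf_step(x, P, y):
    x_pred = f(x)                                  # predict
    Fk = F_jac(x)
    P_pred = Fk @ P @ Fk.T + Q
    S = H @ P_pred @ H.T + R                       # update
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + (K @ (y - H @ x_pred)).ravel()
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(0)
x, P = np.array([0.2, 0.0, 1.5]), 0.1 * np.eye(3)  # uncalibrated: L = 1.5
truth = np.array([0.2, 0.0, 1.0])                  # true length L = 1.0
for _ in range(5000):
    truth = f(truth)
    y = np.array([truth[0] + rng.normal(0.0, 1e-2)])
    x, P = ekf_step(x, P, y)
print("estimated pendulum length:", round(x[2], 3))
```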
52. Software environment for data assimilation in radiation protection
Majer, Peter, January 2015
In this work we apply data assimilation to the meteorological model WRF on a local domain, using Bayesian statistics, specifically a Sequential Monte Carlo method combined with particle filtering. Only surface wind data are considered. An application written in the Python programming language is also part of this work; it interfaces with WRF, performs the data assimilation, and provides a set of charts as output. Under stable wind conditions, the wind predictions of the assimilated WRF are significantly closer to the measured data than those of the non-assimilated WRF, so in such conditions the assimilated model can be used for more accurate short-term local weather predictions.
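A minimal bootstrap particle filter conveys the sequential Monte Carlo scheme described above. The `propagate` function is a hypothetical stand-in for advancing perturbed WRF members, and the scalar wind state is a drastic simplification of the real setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(particles):
    """Hypothetical stand-in for advancing each ensemble member
    (e.g. a perturbed WRF run) by one assimilation cycle."""
    return particles + rng.normal(0.0, 0.2, particles.shape)

def assimilate(particles, obs, obs_std):
    """Weight particles by the likelihood of the observed surface wind,
    then resample systematically to avoid weight degeneracy."""
    log_w = -0.5 * ((particles - obs) / obs_std) ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.clip(np.searchsorted(np.cumsum(w), positions), 0, n - 1)
    return particles[idx]

particles = rng.normal(5.0, 2.0, 500)   # prior surface wind speed (m/s)
for obs in (4.2, 4.5, 4.4):             # one observation per cycle
    particles = propagate(particles)
    particles = assimilate(particles, obs, obs_std=0.5)
print("posterior mean wind (m/s):", round(particles.mean(), 2))
```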
53. Statistical Analysis and Bayesian Methods for Fatigue Life Prediction and Inverse Problems in Linear Time Dependent PDEs with Uncertainties
Sawlan, Zaid A., 10 November 2018
This work employs statistical and Bayesian techniques to analyze mathematical forward models with several sources of uncertainty. The forward models typically describe phenomenological and physical processes and are expressed through regression-based models or partial differential equations (PDEs) with uncertain parameters and input data. A critical challenge in real-world applications is to quantify the uncertainties of the unknown parameters using observations. To this end, methods based on the likelihood function and Bayesian techniques constitute the two main statistical inference approaches considered here.
Two problems are studied in this thesis: the prediction of the fatigue life of metallic specimens, and inverse problems in linear PDEs. Both require inferring unknown parameters from given measurements. We first estimate the parameters by the maximum likelihood approach; we then seek a more comprehensive Bayesian inference using analytical asymptotic approximations or computational techniques.
For fatigue life prediction, there are several plausible probabilistic stress-lifetime (S-N) models, which are calibrated against uniaxial fatigue experiments. To generate accurate fatigue life predictions, the competing S-N models are ranked according to several classical information-based measures, and a different set of predictive information criteria is then used to compare the candidate Bayesian models. Moreover, we propose a spatial stochastic model to generalize S-N models to fatigue crack initiation in general geometries. The model is based on a spatial Poisson process with an intensity function that combines the S-N curves with an averaged effective stress computed from the solution of the linear elasticity equations.
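The likelihood-based calibration of an S-N model can be sketched as follows, assuming a Basquin-type curve with lognormal life scatter, log N ~ Normal(a - b log S, sigma^2). The model form, data, and starting values are illustrative and not taken from the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# synthetic uniaxial fatigue data: stress amplitude S (MPa), cycles to failure N
S = np.array([300.0, 280.0, 260.0, 240.0, 220.0, 200.0])
N = np.array([5.1e4, 9.8e4, 2.2e5, 4.5e5, 1.1e6, 2.9e6])

def neg_log_likelihood(params):
    """Negative log-likelihood of log N ~ Normal(a - b*log S, sigma^2)."""
    a, b, log_sigma = params
    sigma = np.exp(log_sigma)              # parameterised to stay positive
    resid = np.log(N) - (a - b * np.log(S))
    return (len(N) * (0.5 * np.log(2.0 * np.pi) + log_sigma)
            + 0.5 * np.sum((resid / sigma) ** 2))

mle = minimize(neg_log_likelihood, x0=np.array([65.0, 9.0, 0.0]),
               method="Nelder-Mead")
a_hat, b_hat, sigma_hat = mle.x[0], mle.x[1], np.exp(mle.x[2])
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}, sigma = {sigma_hat:.3f}")
```

Information criteria such as AIC can then be computed from the maximised likelihood to rank competing S-N model forms, in the spirit of the comparison described above.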
54. Uncertainty Quantification and Assimilation for Efficient Coastal Ocean Forecasting
Siripatana, Adil, 21 April 2019
Bayesian inference is commonly used to quantify and reduce modeling uncertainties in coastal ocean models by computing the posterior probability density function (pdf) of uncertain quantities conditioned on the available observations. The posterior can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by processing the data sequentially, following a data assimilation (DA) approach. The advantage of data assimilation schemes over MCMC-type methods is their ability to accommodate a large number of uncertain quantities without a significant increase in computational requirements. However, this approach generally yields only approximate estimates, often because of restrictive Gaussian prior and noise assumptions.
This thesis aims to develop, implement, and test novel, efficient Bayesian inference techniques to quantify and reduce modeling and parameter uncertainties in coastal ocean models. Both state and parameter estimation are addressed within the framework of a state-of-the-art coastal ocean model, the Advanced Circulation (ADCIRC) model. The first part of the thesis proposes efficient Bayesian inference techniques for uncertainty quantification (UQ) and state-parameter estimation. Within a realistic framework of observing system simulation experiments (OSSEs), an ensemble Kalman filter (EnKF) is first evaluated against a polynomial chaos (PC)-surrogate MCMC method under identical scenarios. After demonstrating the relevance of the EnKF for parameter estimation, an iterative EnKF is introduced and validated for the estimation of a spatially varying Manning's n coefficient field. A Karhunen-Loève (KL) expansion is also tested for dimensionality reduction and conditioning of the parameter search space. To further enhance the performance of PC-MCMC for estimating spatially varying parameters, a coordinate transformation of a Gaussian process with a parameterized prior covariance function is then incorporated into the Bayesian inference framework, to account for uncertainty in the covariance model hyperparameters. The second part of the thesis focuses on the use of UQ and DA with adaptive-mesh models. New approaches combining the EnKF with multiresolution analysis are developed, demonstrating a significant reduction in the cost of data assimilation compared to a traditional EnKF implemented on a non-adaptive mesh.
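The core EnKF analysis step used for such state-parameter estimation can be sketched as below. The forecast ensemble, observation operator, and error covariances are illustrative stand-ins; in the thesis they would come from ADCIRC runs and the OSSE design.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_analysis(X, y, H, R):
    """Perturbed-observation EnKF analysis. X: (n_state, n_ens) forecast
    ensemble; y: (n_obs,) observations; H: (n_obs, n_state) observation
    operator; R: (n_obs, n_obs) observation-error covariance."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)       # ensemble anomalies
    Pf_Ht = A @ (H @ A).T / (n_ens - 1)         # Pf H^T from the ensemble
    K = Pf_Ht @ np.linalg.inv(H @ Pf_Ht + R)    # Kalman gain
    # perturbed observations keep the analysis spread statistically correct
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)

n_state, n_obs, n_ens = 50, 5, 30
X = rng.normal(0.0, 1.0, (n_state, n_ens))  # stand-in forecast ensemble
H = np.eye(n_state)[::10]                   # observe every 10th variable
R = 0.1 * np.eye(n_obs)
y = rng.normal(0.0, 1.0, n_obs)             # stand-in observations
Xa = enkf_analysis(X, y, H, R)
print("forecast spread:", X.std(axis=1).mean().round(3),
      "-> analysis spread:", Xa.std(axis=1).mean().round(3))
```

Parameter estimation fits the same template by augmenting the state vector with the uncertain parameters (e.g. Manning's n coefficients), so the gain updates them through their correlation with the observed states.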
55. Detection and localisation of pipe bursts in a district metered area using an online hydraulic model
Okeya, Olanrewaju Isaac, January 2018
This thesis presents research on a new methodology for near-real-time detection and localisation of pipe bursts in a Water Distribution System (WDS) at the District Metered Area (DMA) level. The methodology makes use of an online hydraulic model, coupled with a demand forecasting methodology and several statistical techniques, to process the hydraulic meter data (i.e., flows and pressures) coming from the field at regular time intervals (every 15 minutes). Once the detection part of the methodology identifies a potential burst occurrence in the system, it raises an alarm; the burst localisation methodology is then applied to approximately locate the event within the DMA. The online hydraulic model is based on a data assimilation methodology coupled with a short-term Water Demand Forecasting Model (WDFM) based on multi-linear regression. Three data assimilation methods were tested: the iterative Kalman Filter, the Ensemble Kalman Filter, and the Particle Filter. The iterative Kalman Filter (i-KF) was eventually chosen for the online hydraulic model as offering the best overall trade-off between water-system state prediction accuracy and computational efficiency. The online hydraulic model created this way was coupled with the Statistical Process Control (SPC) technique and a newly developed burst detection metric based on the moving-average residuals between the predicted and observed hydraulic states (flows/pressures). Two new SPC-based charts, with an associated generic set of control rules for analysing burst detection metric values over consecutive time steps, were introduced to raise burst alarms in a reliable and timely fashion; the SPC rules and relevant thresholds were determined offline by statistical analysis of the residuals. A new methodology for online burst localisation was then developed. It integrates the burst detection metric values obtained during the detection stage with a new sensitivity matrix, developed offline, and with hydraulic model runs simulating potential bursts, in order to identify the most likely burst location in the pipe network. A new data algorithm for estimating the 'normal' DMA demand and the burst flow during the burst period was developed and used for localisation, and a further algorithm for statistical analysis of flow and pressure data determines the approximate burst area by producing a list of the ten most suspected burst-location nodes. These novel detection and localisation methodologies were applied to two real-life DMAs in the United Kingdom (UK) with artificially generated flow and pressure observations and assumed bursts. The results show that the methodology detects pipe bursts in a reliable and timely fashion, provides a good estimate of the burst flow, and locates the burst approximately but accurately within a DMA. They also show the potential of the methodology to assist Water Companies (WCs) in conserving water and saving energy and money; it can further enhance a UK WC's profile, customer satisfaction, and operational efficiency, and improve its OFWAT Service Incentive Mechanism (SIM) scores.
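The detection metric and SPC-style alarm logic can be sketched as follows. The window length, control limit, and run rule below are illustrative assumptions, whereas the thesis derives them from offline statistical analysis of the residuals.

```python
import numpy as np

def burst_metric(predicted, observed, window=8):
    """Moving average of the residuals over the last `window` 15-minute
    time steps; a sustained positive value suggests unaccounted-for flow."""
    resid = observed - predicted
    kernel = np.ones(window) / window
    return np.convolve(resid, kernel, mode="valid")

def spc_alarm(metric, sigma, k=3.0, consecutive=3):
    """Raise an alarm when the metric exceeds the k-sigma control limit
    for several consecutive steps (one simple SPC-style run rule)."""
    run = 0
    for t, above in enumerate(metric > k * sigma):
        run = run + 1 if above else 0
        if run >= consecutive:
            return t
    return None

rng = np.random.default_rng(2)
pred = np.full(96, 10.0)                 # one day of 15-minute forecasts (L/s)
obs = pred + rng.normal(0.0, 0.3, 96)
obs[60:] += 1.2                          # synthetic burst starting at step 60
m = burst_metric(pred, obs)
print("alarm at step:", spc_alarm(m, sigma=0.3 / np.sqrt(8)))
```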
56. Numerical model error in data assimilation
Jenkins, Siân, January 2015
In this thesis, we produce a rigorous and quantitative analysis of the errors introduced by finite difference schemes into strong-constraint 4D-variational (4D-Var) data assimilation. Strong-constraint 4D-Var solves a particular kind of inverse problem: given a set of observations, a numerical model for a physical system, and a priori information on the initial condition, estimate an improved initial condition for the numerical model, known as the analysis vector. Many forms of error affect the accuracy of the analysis vector, and the method is derived under the assumption that the numerical model is perfect, which in reality is not true. It is therefore important to assess whether this assumption is realistic and, if not, how the method should be modified to account for model error. Here we analyse how the errors introduced by finite difference schemes used as the numerical model affect the accuracy of the analysis vector. Initially, the 1D linear advection equation is considered as the physical system, and all forms of error other than those introduced by the finite difference schemes are removed. The error introduced by 'representative schemes' is characterised in terms of numerical dissipation and numerical dispersion. A spectral approach is successfully implemented to analyse the impact on the analysis vector, examining the effects on unresolvable wavenumber components and on the l2-norm of the error. A similar, equally successful analysis is then conducted with observation errors reintroduced into the problem, and we explore how the results extend to weak-constraint 4D-Var. The 2D linear advection equation is then considered as the physical system, demonstrating how the results from the 1D problem extend to 2D. Finally, the linearised shallow water equations extend the problem further, highlighting the difficulties associated with analysing a coupled system of PDEs.
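The flavour of model error analysed here can be reproduced with a first-order upwind scheme for the 1D linear advection equation, whose Fourier (von Neumann) amplification factor makes the numerical dissipation explicit. The grid and CFL number below are illustrative, not those of the thesis.

```python
import numpy as np

c, nx = 1.0, 200
dx = 1.0 / nx
dt = 0.5 * dx / c                    # CFL number nu = c*dt/dx = 0.5
x = np.arange(nx) * dx
u = np.exp(-200 * (x - 0.3) ** 2)    # smooth initial condition

nu = c * dt / dx
for _ in range(200):
    u = u - nu * (u - np.roll(u, 1))   # periodic first-order upwind step

# For wavenumber k, one step multiplies the Fourier mode by
# g(k) = 1 - nu + nu*exp(-i*k*dx); |g| < 1 for 0 < nu < 1, so amplitudes
# decay: the scheme acts like artificial diffusion of size ~ c*dx*(1-nu)/2,
# exactly the kind of model error a perfect-model 4D-Var assumption ignores.
k = 2 * np.pi * 5
g = abs(1 - nu + nu * np.exp(-1j * k * dx))
print("per-step amplification of the k = 5 mode:", round(g, 6))
```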
57. Variational assimilation of multi-scale observations: fusion of heterogeneous data for the study of rainfall dynamics at macrophysical and microphysical scales
Mercier, Francois, 05 July 2016
On the one hand, the instruments designed to measure rainfall (rain gages, radars, etc.) perform measurements of different natures and at different scales, and their data are hard to compare. On the other hand, models simulating the evolution of rainfall are complex and difficult to parameterize and validate. In this thesis, we use data assimilation to couple heterogeneous observations of rainfall with models, in order to study rain and its spatiotemporal variability at different scales: the macrophysical scale, concerned with rain cells, and the microphysical scale, concerned with the drop size distribution (DSD) within them. First, we develop an algorithm that retrieves rain maps from measurements of the attenuation that rain imposes on waves from TV satellites. Our retrievals are validated against radar and rain gage data for a case study in the south of France. Second, we retrieve, again by data assimilation, vertical profiles of DSD and vertical winds from measurements of raindrop fluxes at the ground (by disdrometers) and of Doppler spectra aloft (by radar). We use these retrievals in three case studies to investigate the physical phenomena acting on raindrops during their fall and to evaluate the parameterization of these phenomena in models.
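The physical link exploited by the rain-map retrieval can be sketched with the standard power-law relation between specific attenuation and rain rate, k = aR^b. The Ku-band coefficients below are illustrative assumptions, and the thesis inverts many links jointly by variational assimilation rather than link by link as here.

```python
# Illustrative power-law inversion of rain-induced attenuation on a
# satellite-TV link; a and b are assumed Ku-band coefficients, not the
# values used in the thesis.
a, b = 0.0188, 1.31

def rain_rate_from_attenuation(total_att_dB, path_km):
    """Invert k = a * R**b for a path-averaged rain rate R (mm/h),
    given the total attenuation (dB) over a path of length path_km."""
    k = total_att_dB / path_km          # mean specific attenuation (dB/km)
    return (k / a) ** (1.0 / b)

print(rain_rate_from_attenuation(total_att_dB=3.0, path_km=10.0))
```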
58. Modeling marine zooplankton and micronekton
Conchon, Anna, 20 June 2016
Zooplankton and micronekton occupy the first animal trophic levels of the marine food web. Although very different in size (200 μm to 2 mm for zooplankton, 2 to 20 cm for micronekton), both groups of species share a distinctive behaviour: diel vertical migration, from depth by day to the surface at night. These daily migrations create major fluxes of organic matter between the deep ocean and the surface. The study of ocean biogeochemical cycles is of great importance for climate change research and is conducted largely through global ocean circulation and biogeochemical models; the logical next step in these developments is the modelling of zooplankton and micronekton. The SEAPODYM family parsimoniously models the trophic chain from zooplankton to top predators with three biomass models. This thesis introduces the zooplankton biomass model SEAPODYM-LTL (lower trophic level), together with an analysis of its sensitivity to forcings: a particularity of these models is that they are forced offline by current, temperature, and primary production fields produced by other models. SEAPODYM-LTL is also compared to the PISCES (NPZD) model and shows similar performance in the case tested. To improve the predictions of SEAPODYM-MTL (the mid-trophic-level biomass model), a data assimilation methodology was developed to refine the parameterization, using 38 kHz active acoustic data to enrich the model; the methodology was designed around a test case presented in this thesis. Extending the assimilated acoustic dataset highlighted the need to better model the depths of SEAPODYM's vertical layers. This was done using the same acoustic dataset, and the corresponding study is also presented in this thesis.
59. Turbulent complex flows reconstruction via data assimilation in large eddy models
Chandramouli, Pranav, 19 October 2018
Data assimilation as a tool for fluid mechanics has grown exponentially over the last few decades. The ability to combine accurate but partial measurements with a complete dynamical model is invaluable, with numerous applications in fields ranging from aerodynamics to geophysics and indoor ventilation. However, its utility remains limited by the restrictive requirements of data assimilation in terms of computing power, memory, and prior information. This thesis attempts to redress various limitations of the assimilation procedure in order to facilitate its wider use in fluid mechanics. A major roadblock for data assimilation is the computational cost, which is prohibitive for all but the simplest of flows. Following the lines of Joseph Smagorinsky, turbulence modelling through large-eddy simulation is incorporated into the assimilation procedure to significantly reduce the computing power and time required. The need for prior volumetric information is addressed using a novel reconstruction methodology developed and assessed in this thesis: the snapshot optimisation algorithm reconstructs 3D fields from 2D cross-planar observations by exploiting directional homogeneity. The method and its variants work well with synthetic and experimental datasets, providing accurate reconstructions. The reconstruction methodology also provides a means to estimate the background covariance matrix, which is essential for an efficient assimilation algorithm. All these ingredients are combined to successfully perform variational data assimilation of a turbulent wake flow around a cylinder at a transitional Reynolds number. The assimilation algorithm is validated against synthetic volumetric observations and assessed on 2D cross-planar observations emulating experimental data.
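The geometry of the reconstruction problem can be illustrated with a deliberately simplified separable estimate from two orthogonal planes. This rank-one toy is not the thesis's snapshot optimisation algorithm; it only sketches how cross-planar data can constrain a volume under strong homogeneity assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
nx, ny, nz = 16, 12, 10

# a synthetic separable "truth" makes the toy estimate exact
fx, fy, fz = rng.random(nx) + 1, rng.random(ny) + 1, rng.random(nz) + 1
u_true = fx[:, None, None] * fy[None, :, None] * fz[None, None, :]

j0, k0 = 4, 5                      # indices of the two observation planes
plane_xy = u_true[:, :, k0]        # observed x-y plane at z = z0
plane_xz = u_true[:, j0, :]        # observed x-z plane at y = y0

# separable estimate: u(x,y,z) ~ u(x,y,z0) * u(x,y0,z) / u(x,y0,z0)
denom = plane_xz[:, k0][:, None, None]
u_est = plane_xy[:, :, None] * plane_xz[:, None, :] / denom

print("max reconstruction error:", np.abs(u_est - u_true).max())
```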
60. Atmospheric dispersion and inverse modelling for the reconstruction of accidental sources of pollutants
Winiarek, Victor, 04 March 2014
Uncontrolled releases of pollutants into the atmosphere may be the consequence of various situations: accidents, for instance leaks or explosions at an industrial plant, or terrorist attacks such as dirty or biological bombs, especially in urban areas. In the event of such situations, the authorities' objectives are several: predict the contaminated zones in the short term, notably to evacuate the affected population; determine the source location so as to act on it directly; and assess the areas polluted over the longer term, for instance by deposition of persistent pollutants, which may be subject to residence or agricultural-use restrictions. To achieve these objectives, numerical models can be used to simulate the atmospheric dispersion of pollutants. After recalling the physical processes that govern the transport of pollutants in the atmosphere, we present the numerical models commonly used in this context; the choice between them depends mainly on the scale of the study and the level of detail (notably topographic) to be taken into account. We then present the general Bayesian framework of inverse modelling for source estimation, whose principle is an objective balance between prior information and the new information contained in the observations and the model. We show the strong dependence of the source term estimate and of its uncertainty on the assumptions made about the statistics of the prior errors, and for this reason we propose several methods to estimate these statistics rigorously. These methods are applied to concrete cases: first, a semi-automatic algorithm is proposed for the operational monitoring of a fleet of nuclear power plants; the second and third studies concern the reconstruction of the caesium-137 and iodine-131 source terms from the accident at the Fukushima Daiichi nuclear power plant. Concerning the localisation of an unknown source, two strategies can be considered: parametric and non-parametric methods. Parametric methods exploit the particular character of accidental situations, in which pollutant emissions are generally of limited extent; the source is parameterised and the inverse problem consists of estimating this reduced number of parameters. Non-parametric methods make no assumption about the nature of the source (point-like, localised, etc.) and attempt to reconstruct a full four-dimensional emission field. Several methods of both kinds are proposed and evaluated on real situations at the urban scale, with a CFD model accounting for the influence of buildings on the air flow; in these experiments, the proposed methods localise the source to within a few metres, depending on the simulated situation and the inverse modelling method used.
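The linear-Gaussian source inversion at the heart of such methods can be sketched as follows. The source-receptor matrix, covariances, and dimensions are illustrative stand-ins for dispersion-model outputs and real monitoring data.

```python
import numpy as np

rng = np.random.default_rng(4)
n_src, n_obs = 12, 40                   # emission time steps, observations

H = rng.random((n_obs, n_src))          # stand-in source-receptor matrix
sigma_true = np.zeros(n_src)
sigma_true[3:6] = 5.0                   # short release, limited in time
y = H @ sigma_true + rng.normal(0.0, 0.1, n_obs)

sigma_b = np.zeros(n_src)               # first guess: no release
B = 4.0 * np.eye(n_src)                 # prior (background) error covariance
R = 0.01 * np.eye(n_obs)                # observation error covariance

# posterior mean and covariance (the classic BLUE / linear 4D-Var analysis);
# the estimate and its uncertainty both depend directly on B and R, which is
# why estimating these statistics rigorously matters so much.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
sigma_a = sigma_b + K @ (y - H @ sigma_b)
P_a = (np.eye(n_src) - K @ H) @ B

print("estimated release:", np.round(sigma_a, 2))
print("posterior std:", np.round(np.sqrt(np.diag(P_a)), 3))
```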