
La décomposition en polynôme du chaos pour l'amélioration de l'assimilation de données ensembliste en hydraulique fluviale / Polynomial chaos expansion in fluvial hydraulics in Ensemble data assimilation framework

El Moçayd, Nabil 01 March 2017 (has links)
This work deals with the construction of a surrogate model for the shallow water equations in fluvial hydraulics using polynomial chaos expansion. The surrogate replaces the direct model in order to reduce the computational cost of ensemble methods for uncertainty quantification and data assimilation. The context of the study is flood forecasting and the management of water resources. The manuscript is composed of five parts, each divided into chapters. The first part presents a state of the art of uncertainty quantification and data assimilation in hydraulics, as well as the objectives of the thesis. We present the framework of flood forecasting, its stakes and the tools (numerical and observational) available to predict river dynamics. In particular, we present the future SWOT mission, which aims to measure water heights in rivers with global coverage at high resolution, and we highlight the contribution of these measurements and their complementarity with in-situ data. The second part presents the Saint-Venant (shallow water) equations, which describe river flows, with particular attention to their 1D representation, together with a numerical discretization of these equations as implemented in the Mascaret software. The last chapter of this part proposes some simplifications of the shallow water equations. The third part presents methods for uncertainty quantification and model reduction. We present the probabilistic framework for uncertainty quantification and sensitivity analysis, then propose to reduce the dimension of a stochastic problem when dealing with random fields. Polynomial chaos expansion methods are then presented, in particular the different strategies for computing the polynomial coefficients. This methodological part ends with a chapter devoted to ensemble data assimilation (especially the Ensemble Kalman filter) and the use of surrogate models in this framework. The fourth part is dedicated to the results. We first identify the sources of uncertainty in hydraulics that are then quantified and reduced. An article under review details the validation of a polynomial surrogate model for the shallow water equations in steady state when the uncertainty is mainly carried by the friction coefficients and the upstream inflow; the study is conducted on the Garonne river. It is shown that the statistical moments, the probability density function and the spatial covariance matrix of the water height are efficiently and accurately estimated with the surrogate, whose construction requires only a few tens of integrations of the direct model. The surrogate is then used to reduce the computational cost of the Ensemble Kalman filter in a synthetic SWOT-like data assimilation exercise, aimed at reconstructing the spatially distributed friction coefficients and the upstream inflow. We focus on the spatial representation of the data as seen by SWOT: global coverage of the network and spatial averaging over the observed pixels. We show in particular that, for a given computational budget (2,500 simulations of the direct model), the data assimilation analysis based on the polynomial surrogate outperforms the classical Ensemble Kalman filter. Finally, we study the construction of the surrogate in unsteady conditions. Assuming first that the uncertainty is carried by the friction coefficients, we assess the need to recompute the polynomial coefficients over time and over data assimilation cycles; only pointwise in-situ data are considered for this part of the work. Assuming then that the uncertainty is carried by the upstream inflow, which is a time-dependent vector, a Karhunen-Loève decomposition is used to reduce the uncertain space to its first three modes, which makes the data assimilation exercise tractable. Conclusions and perspectives are presented in the fifth part.
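As an illustration of the surrogate idea described in this abstract, the hedged sketch below fits a small polynomial chaos surrogate for a scalar water height as a function of two uncertain inputs (a friction coefficient and the upstream discharge) and then propagates uncertainty through it at negligible cost. The `water_height` function is a hypothetical stand-in for a direct hydraulic solve (e.g. one Mascaret run), and the ranges and polynomial degree are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from numpy.polynomial.legendre import legval
from itertools import product

# Placeholder for the direct hydraulic model: returns the water height at one
# station for a Strickler friction coefficient Ks and upstream discharge Q.
def water_height(ks, q):                      # hypothetical stand-in, not Saint-Venant
    return 10.0 + 0.02 * q / ks + 0.001 * q

# Uncertain inputs, assumed uniform: Ks in [20, 40], Q in [500, 3000] m3/s.
ks_lo, ks_hi, q_lo, q_hi = 20.0, 40.0, 500.0, 3000.0
to_unit = lambda x, lo, hi: 2.0 * (x - lo) / (hi - lo) - 1.0   # map to [-1, 1]

# Small experimental design: a few tens of direct-model runs.
rng = np.random.default_rng(0)
ks_s = rng.uniform(ks_lo, ks_hi, 50)
q_s = rng.uniform(q_lo, q_hi, 50)
y = np.array([water_height(k, q) for k, q in zip(ks_s, q_s)])

# Tensor-product Legendre basis up to total degree 3.
degs = [(i, j) for i, j in product(range(4), repeat=2) if i + j <= 3]
def basis(ks, q):
    xi1, xi2 = to_unit(ks, ks_lo, ks_hi), to_unit(q, q_lo, q_hi)
    cols = []
    for i, j in degs:
        ci = np.zeros(i + 1); ci[i] = 1.0
        cj = np.zeros(j + 1); cj[j] = 1.0
        cols.append(legval(xi1, ci) * legval(xi2, cj))
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(basis(ks_s, q_s), y, rcond=None)

# The surrogate is now evaluated instead of the direct model, e.g. to propagate
# uncertainty with a large Monte Carlo sample at negligible cost.
ks_mc = rng.uniform(ks_lo, ks_hi, 100_000)
q_mc = rng.uniform(q_lo, q_hi, 100_000)
h_mc = basis(ks_mc, q_mc) @ coef
print("surrogate mean / std of water height:", h_mc.mean(), h_mc.std())
```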

Adaptive control of deterministic and stochastic approximation errors in simulations of compressible flow / Contrôle adaptatif des erreurs d'approximation stochastique et déterministe dans la simulation des écoulements compressible

Van Langenhove, Jan Willem 25 October 2017 (has links)
The simulation of complex nonlinear engineering systems such as compressible fluid flows may be targeted to make the approximation of a specific (scalar) quantity of interest of the system more efficient and accurate. Putting aside modeling error and parametric uncertainty, this may be achieved by combining goal-oriented error estimates and adaptive anisotropic spatial mesh refinement. To this end, an elegant and efficient framework is that of (Riemannian) metric-based adaptation, where a goal-based a priori error estimate is used as the indicator for adaptivity. This thesis proposes a novel extension of this approach to the case of system approximations bearing a stochastic component. In this case, an optimisation problem leading to the best control of the distinct sources of error is formulated in the continuous framework of the Riemannian metric space. Algorithmic developments are also presented in order to quantify and adaptively adjust the error components in the deterministic and stochastic approximation spaces. The capability of the proposed method is tested on various problems, including a supersonic scramjet inlet subject to geometrical and operational parametric uncertainties. It is demonstrated to accurately capture discontinuous features of stochastic compressible flows impacting pressure-related quantities of interest, while balancing the computational budget and refinements in both spaces.
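To make the metric-based ingredient concrete, the hedged sketch below shows the classical Hessian-based construction of an anisotropic Riemannian metric at one mesh vertex; the thesis couples this kind of object with goal-oriented estimates and a stochastic approximation space, which this minimal example does not attempt to reproduce. The tolerance and size bounds are illustrative assumptions.

```python
import numpy as np

def hessian_metric(H, err_tol=1e-2, h_min=1e-4, h_max=1.0):
    """Turn a 2x2 Hessian of a sensor field at one mesh vertex into a
    Riemannian metric tensor prescribing anisotropic element sizes.
    A minimal sketch of the standard Hessian-based recipe only."""
    # Symmetrize and take the absolute value through the eigendecomposition.
    H = 0.5 * (H + H.T)
    lam, V = np.linalg.eigh(H)
    lam = np.abs(lam) / err_tol
    # Bound the implied local sizes h = 1/sqrt(lambda) to [h_min, h_max].
    lam = np.clip(lam, 1.0 / h_max**2, 1.0 / h_min**2)
    return V @ np.diag(lam) @ V.T

# Example: a sensor varying much faster in x than in y yields a metric that
# asks for small elements along x and stretched elements along y.
H = np.array([[400.0, 0.0], [0.0, 4.0]])
M = hessian_metric(H)
sizes = 1.0 / np.sqrt(np.linalg.eigvalsh(M))
print("prescribed element sizes along principal directions:", sizes)
```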

Towards multifidelity uncertainty quantification for multiobjective structural design / Vers une approche multi-fidèle de quantification de l'incertain pour l'optimisation multi-objectif

Lebon, Jérémy 12 December 2013 (has links)
This thesis aims at Multi-Objective Optimization under Uncertainty in structural design. At its core, we focus on adapting Polynomial Chaos Expansion (PCE) for the non-intrusive treatment of the uncertain part, which requires extensive training sets. We then face two issues: the high computational cost of an individual Finite Element simulation and its limited precision. From a numerical point of view, and in order to limit the computational expense of the PCE construction, we focus on sparse PCE schemes. We also develop a custom Latin Hypercube Sampling scheme taking into account the finite precision of the simulation. From the modeling point of view, we propose a multi-fidelity approach involving a hierarchy of models ranging from full-scale Finite Element simulations through physics-based reduced-order models up to response surfaces. Finally, we investigate multi-objective optimization of structures under uncertainty. We extend the PCE model of the design objectives to take the deterministic design variables into account. We illustrate our work with examples in sheet metal forming and in the optimal design of truss structures.
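As a concrete example of one building block mentioned above, the sketch below implements plain Latin Hypercube Sampling on the unit hypercube, which could supply training points for a sparse PCE fit; it is a generic textbook version, not the customized scheme accounting for finite simulation precision that the thesis develops.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Plain Latin Hypercube Sampling on the unit hypercube: each of the
    n_samples rows falls in a distinct stratum of every coordinate."""
    rng = np.random.default_rng(rng)
    # One point per stratum in each dimension, jittered within the stratum.
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):
        u[:, d] = u[rng.permutation(n_samples), d]   # shuffle strata per dimension
    return u

# 20 training points in 3 uncertain dimensions, e.g. to feed a sparse PCE fit.
x_train = latin_hypercube(20, 3, rng=1)
print(x_train.min(axis=0), x_train.max(axis=0))
```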

Surrogate Modeling for Uncertainty Quantification in Systems Characterized by Expensive and High-Dimensional Numerical Simulators

Rohit Tripathy (8734437) 24 April 2020 (has links)
Physical phenomena in nature are typically represented by complex systems of ordinary differential equations (ODEs) or partial differential equations (PDEs), modeling a wide range of spatio-temporal scales and multi-physics. The field of computational science has achieved indisputable success in advancing our understanding of the natural world, made possible through a combination of increasingly sophisticated mathematical models, numerical techniques and hardware resources. Furthermore, there has been a recent revolution in the data-driven sciences, spurred on by advances in the deep learning/stochastic optimization communities and the democratization of machine learning (ML) software.

With the ubiquity of computational models for the analysis and prediction of physical systems, there has arisen a need to rigorously characterize the effects of unknown variables in a system. Unfortunately, uncertainty quantification (UQ) tasks such as model calibration, uncertainty propagation, and optimization under uncertainty typically require several thousand evaluations of the underlying physical models. In order to deal with the high cost of the forward model, one typically resorts to the surrogate idea: replacing the true response surface with an approximation that is both accurate and computationally cheap. However, state-of-the-art numerical systems are often characterized by a very large number of stochastic parameters, of the order of hundreds or thousands. The high cost of individual evaluations of the forward model, coupled with the limited real-world computational budget one is constrained to work with, means that one is faced with the task of constructing a surrogate model for a system with high input dimensionality from small dataset sizes. In other words, one faces the curse of dimensionality.

In this dissertation, we propose multiple ways of overcoming the curse of dimensionality when constructing surrogate models for high-dimensional numerical simulators. The core idea binding all of our proposed approaches is simple: we try to discover special structure in the stochastic parameters which captures most of the variance of the output quantity of interest. Our strategies first identify such a low-rank structure, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the low-dimensional structure is small enough, learning the map from this reduced input space to the output is a much easier task than the original surrogate modeling task.
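The hedged sketch below illustrates the general strategy stated in the last paragraph: discovering a low-rank (here one-dimensional) structure in a high-dimensional stochastic input from gradient information, projecting onto it, and fitting a cheap map from the projection to the output. The simulator, its gradient and the dimensions are synthetic stand-ins, not the dissertation's models or its specific algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 200, 300                          # input dimension, number of model runs

# Hypothetical expensive simulator with hidden one-dimensional structure,
# f(x) = g(w.x); it stands in for a PDE solver with many stochastic inputs.
w_true = rng.standard_normal(D); w_true /= np.linalg.norm(w_true)
f  = lambda x: np.sin(x @ w_true)                    # outputs, shape (N,)
df = lambda x: np.cos(x @ w_true)[:, None] * w_true  # gradients, shape (N, D)

x = rng.standard_normal((N, D))
grads = df(x)

# Active-subspace style discovery of the low-rank input structure.
C = grads.T @ grads / N
eigval, eigvec = np.linalg.eigh(C)
W = eigvec[:, -1:]                       # dominant direction(s), here rank 1

# Project the high-dimensional input and fit a cheap 1D surrogate (cubic here).
z = x @ W
coeffs = np.polyfit(z.ravel(), f(x), deg=3)

# Check subspace recovery and surrogate accuracy on fresh test inputs.
x_test = rng.standard_normal((50, D))
err = np.abs(np.polyval(coeffs, (x_test @ W).ravel()) - f(x_test)).max()
print("subspace alignment:", abs(float(W.ravel() @ w_true)), " max surrogate error:", err)
```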

Efficient Computation of Accurate Seismic Fragility Functions Through Strategic Statistical Selection

Francisco J. Pena (5930132) 15 May 2019 (has links)
A fragility function quantifies the probability that a structural system reaches an undesirable limit state, conditioned on the occurrence of a hazard of prescribed intensity level. Multiple sources of uncertainty are present when estimating fragility functions, e.g., record-to-record variation, uncertain material and geometric properties, model assumptions, adopted methodologies, and scarce data to characterize the hazard. Advances in the last decades have provided considerable research on parameter selection, hazard characteristics and multiple methodologies for the computation of these functions. However, there is no clear guidance on the type of methodologies and data that ensure accurate fragility functions can be computed efficiently. Fragility functions are influenced by the selection of a methodology and the data to be analyzed. Each selection may lead to different levels of accuracy, due to either increased potential for bias or the rate of convergence of the fragility functions as more data is used. To overcome this difficulty, it is necessary to evaluate the level of agreement between different statistical models and the available data, as well as to exploit the information provided by each piece of available data. By doing this, it is possible to obtain more accurate fragility functions with less uncertainty while enabling faster and widespread analysis. In this dissertation, two methodologies are developed to address these challenges. The first methodology provides a way to quantify uncertainty and perform statistical model selection to compute seismic fragility functions. This outcome is achieved by implementing a hierarchical Bayesian inference framework in conjunction with a sequential Monte Carlo technique. Using a finite number of simulations, the stochastic map between the hazard level and the structural response is constructed using Bayesian inference. The Bayesian approach allows for the quantification of the epistemic uncertainty induced by the limited number of simulations. The most probable model is then selected using Bayesian model selection and validated through multiple metrics such as the Kolmogorov-Smirnov test. The second methodology proposes a sequential selection strategy to choose the earthquake with characteristics that yield the largest reduction in uncertainty. The quantification of uncertainty is thus exploited to consecutively select the ground motion simulations that expedite learning and provide unbiased fragility functions with fewer simulations. Lastly, some practices during the computation of fragility functions that result in undesirable bias are discussed. The methodologies are implemented on a widely studied twenty-story steel nonlinear benchmark building model and employ a set of realistic synthetic ground motions obtained from earthquake scenarios in California. Further analysis of this case study demonstrates the superior performance of a lognormal probability distribution compared to the other models considered. It is concluded by demonstrating that the methodologies developed in this dissertation can yield lower levels of uncertainty than traditional sampling techniques using the same number of simulations. The methodologies developed in this dissertation enable reliable and efficient structural assessment, by means of fragility functions, for civil infrastructure, especially for time-critical applications such as post-disaster evaluation. Additionally, this research empowers implementation by being transferable, facilitating such analysis at the community level and for other critical infrastructure systems (e.g., transportation, communication, energy, water, security) and their interdependencies.
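As a point of reference for the lognormal fragility model mentioned above, the sketch below shows a textbook maximum-likelihood fit of a lognormal fragility curve from binary exceedance data. It is a baseline illustration only, not the hierarchical Bayesian / sequential Monte Carlo framework developed in the dissertation; the intensity measure and the synthetic data are assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def fit_lognormal_fragility(im, exceeded):
    """Maximum-likelihood fit of a lognormal fragility curve
    P(limit state | IM) = Phi((ln IM - ln theta) / beta)
    from binary exceedance observations (textbook baseline)."""
    def neg_log_like(params):
        ln_theta, ln_beta = params
        p = norm.cdf((np.log(im) - ln_theta) / np.exp(ln_beta))
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -np.sum(exceeded * np.log(p) + (1 - exceeded) * np.log(1 - p))
    res = minimize(neg_log_like, x0=[np.log(np.median(im)), np.log(0.4)])
    return np.exp(res.x[0]), np.exp(res.x[1])   # median theta, dispersion beta

# Synthetic example: ground-motion intensities and observed exceedances.
rng = np.random.default_rng(2)
im = rng.lognormal(mean=0.0, sigma=0.5, size=200)      # e.g. Sa(T1) in g (assumed)
exceeded = (rng.random(200) < norm.cdf(np.log(im / 0.9) / 0.45)).astype(float)
theta, beta = fit_lognormal_fragility(im, exceeded)
print(f"fitted median = {theta:.2f} g, dispersion = {beta:.2f}")
```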

COMPUTATIONAL METHODS FOR RANDOM DIFFERENTIAL EQUATIONS: THEORY AND APPLICATIONS

Navarro Quiles, Ana 01 March 2018 (has links)
Ever since the early contributions by Isaac Newton, Gottfried Wilhelm Leibniz, and Jacob and Johann Bernoulli in the XVII century, difference and differential equations have uninterruptedly demonstrated their capability to model interesting complex problems in Engineering, Physics, Chemistry, Epidemiology, Economics, etc. From a practical standpoint, however, the application of difference or differential equations requires setting their inputs (coefficients, source term, initial and boundary conditions) using sampled data, which contain uncertainty stemming from measurement errors. In addition, there are random external factors which can affect the system under study. It is therefore more advisable to consider the input data as random variables or stochastic processes rather than as deterministic constants or functions, respectively. Under this consideration, random difference and differential equations appear.

This thesis solves, from a probabilistic point of view, different types of random difference and differential equations, applying fundamentally the Random Variable Transformation method. This technique is a useful tool for obtaining the probability density function of a random vector that results from mapping another random vector whose probability density function is known. The goal of this dissertation is the computation of the first probability density function of the solution stochastic process in different problems based on random difference or differential equations. The interest in determining the first probability density function is justified because this deterministic function characterizes the one-dimensional probabilistic information, such as the mean, variance, skewness, kurtosis, etc., of the solution of the corresponding random difference or differential equation. It also allows one to determine the probability of a certain event of interest that involves the solution. In addition, in some cases, the theoretical study is complemented by applications to modelling problems with real data, where the problem of estimating parametric statistical distributions of the inputs is addressed in the context of random difference and differential equations.

Navarro Quiles, A. (2018). Computational Methods for Random Differential Equations: Theory and Applications [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/98703
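For reference, the Random Variable Transformation technique invoked in the abstract above is, in its basic form, the standard change-of-variables rule for densities; for an invertible, differentiable mapping it reads:

```latex
% Random Variable Transformation (multivariate form):
% if Y = g(X) with g invertible and differentiable, then
f_{\mathbf{Y}}(\mathbf{y}) \;=\;
  f_{\mathbf{X}}\!\bigl(g^{-1}(\mathbf{y})\bigr)\,
  \bigl|\det J_{g^{-1}}(\mathbf{y})\bigr|
```

In the setting of the abstract, X would collect the random inputs of the difference or differential equation and Y the solution at a fixed time, so this rule yields the first probability density function of the solution stochastic process.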

EFFICIENT NUMERICAL METHODS FOR KINETIC EQUATIONS WITH HIGH DIMENSIONS AND UNCERTAINTIES

Yubo Wang (11792576) 19 December 2021 (has links)
In this thesis, we focus on two challenges arising in kinetic equations: high dimensions and uncertainties. To reduce the dimensions, we propose efficient methods for the linear Boltzmann and full Boltzmann equations based on dynamical low-rank frameworks. For the linear Boltzmann equation, we propose a method based on a macro-micro decomposition of the equation; the low-rank approximation is only used for the micro part of the solution. The time and spatial discretizations are chosen so that the overall scheme is second-order accurate (in both the fully kinetic and the limit regime) and asymptotic-preserving (AP). That is, in the diffusive regime, the scheme becomes a macroscopic solver for the limiting diffusion equation that automatically captures the low-rank structure of the solution. Moreover, the method can be implemented in a fully explicit way and is thus significantly more efficient than the previous state of the art. We demonstrate the accuracy and efficiency of the proposed low-rank method with a number of four-dimensional (two dimensions in physical space and two dimensions in velocity space) simulations. We further study the adaptivity of low-rank methods for the full Boltzmann equation, proposing a highly efficient adaptive low-rank method for the computation of steady-state solutions. The main novelties of this approach are the following. On the one hand, to the best of our knowledge, the dynamical low-rank integrator has not been applied to the full Boltzmann equation to date. The full collision operator is local in the spatial variable while the convection part is local in the velocity variable; this separated nature is well suited to low-rank methods. Compared with full-grid methods (finite difference, finite volume, ...), the dynamical low-rank method avoids the full computation of the collision operator in each spatial cell and can therefore achieve much better efficiency, especially for some low-rank flows (e.g. a normal shock wave). On the other hand, our adaptive low-rank method uses a novel dynamic thresholding strategy to adaptively control the computational rank, yielding better efficiency especially for steady-state solutions. We demonstrate the accuracy and efficiency of the proposed adaptive low-rank method with a number of 1D/2D Maxwell molecule benchmark tests.

For kinetic equations with uncertainties, we focus on non-intrusive sampling methods, which inherit good properties (AP, positivity preservation) from existing deterministic solvers. We propose a control variate multilevel Monte Carlo method for the kinetic BGK model of the Boltzmann equation subject to random inputs. The method combines a multilevel Monte Carlo technique with the computation of the optimal control variate multipliers derived from local or global variance minimization problems. A consistency and convergence analysis of the method, equipped with a second-order positivity-preserving and asymptotic-preserving scheme in space and time, is also performed. Various numerical examples confirm that the optimized multilevel Monte Carlo method outperforms the classical multilevel Monte Carlo method, especially for problems with discontinuities.
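To illustrate the control-variate ingredient named in the last paragraph, the sketch below uses a cheap coarse model as a control variate for an expensive fine model, with the multiplier chosen by the usual variance-minimization formula. It is a two-level toy version under synthetic stand-in models, not the asymptotic-preserving multilevel scheme of the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical fine and coarse solvers evaluated at the same random inputs z:
# the coarse model serves as a control variate for the fine one.
fine   = lambda z: np.sin(z) + 0.05 * z**2   # stand-in for an expensive kinetic solve
coarse = lambda z: np.sin(z)                 # stand-in for a cheap coarse solve

# Small pilot sample on the fine model, large sample on the coarse model.
z_pilot = rng.standard_normal(200)
z_large = rng.standard_normal(200_000)
f, c = fine(z_pilot), coarse(z_pilot)

# Optimal control-variate multiplier from the usual variance minimization.
lam = np.cov(f, c)[0, 1] / np.var(c, ddof=1)

# Control-variate estimator of E[fine]: pilot mean corrected by the cheaply
# estimated coarse-model mean.
estimate = f.mean() - lam * (c.mean() - coarse(z_large).mean())
print("plain MC:", f.mean(), "  control-variate MC:", estimate)
```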

Uncertainty Quantification in Particle Image Velocimetry

Sayantan Bhattacharya (7649012) 03 December 2019 (has links)
Particle Image Velocimetry (PIV) is a non-invasive measurement technique which resolves the flow velocity by taking instantaneous snapshots of tracer particle motion in the flow and using digital image cross-correlation to estimate the particle shift to subpixel accuracy. The measurement chain incorporates numerous parameters, such as the particle displacements, the particle image size, the flow shear rate, the out-of-plane motion for planar PIV, and image noise, to name a few; these parameters are interrelated and influence the final velocity estimate in a complicated way. In the last few decades, PIV has become widely popular by virtue of developments in both hardware capabilities and correlation algorithms, especially with the scope of 3-component (3C) and 3-dimensional (3D) velocity measurements using stereo-PIV and tomographic-PIV techniques, respectively. The velocity field measurement not only leads to other quantities of interest such as pressure, Reynolds stresses, vorticity or even diffusion coefficients, but also provides a reference field for validating numerical simulations of complex flows. However, such a comparison with CFD, or the applicability of the measurement to industrial design, requires one to quantify the uncertainty in the PIV-estimated velocity field. Even though the PIV community has put a strong impetus on minimizing the measurement error over the years, the problem of estimating the uncertainty in local instantaneous PIV velocity vectors has remained rather unnoticed. A typical norm had been to assign an uncertainty of 0.1 pixels for the whole field, irrespective of local flow features and any variation in measurement noise. The first article on this subject was published in 2012, and since then there has been a concentrated effort to address this gap. The current dissertation is motivated by this requirement and aims to compare the existing 2D PIV uncertainty methods, propose a new method to directly estimate the planar PIV uncertainty from the correlation plane, and subsequently propose the first comprehensive methods to quantify the measurement uncertainty in stereo-PIV and 3D Particle Tracking Velocimetry (PTV) measurements.

The uncertainty quantification in a PIV measurement is, however, non-trivial due to the presence of a multitude of error sources and their non-linear coupling through the measurement chain transfer function. In addition, advanced algorithms apply an iterative correction process to minimize the residual, which increases the complexity of the process; hence, a simple data-reduction equation for uncertainty propagation does not exist. Furthermore, the calibration or reconstruction process in a stereo or volumetric measurement makes the uncertainty estimation more challenging. Thus, current uncertainty quantification methods develop a posteriori models utilizing the evaluated displacement information and combine it with either image information, correlation plane information, or even calibration "disparity map" information to find the desired uncertainties in the velocity estimates.
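As background for the correlation-plane discussion above, the hedged sketch below estimates the displacement between two interrogation windows by cross-correlation with a three-point Gaussian sub-pixel fit, a standard planar PIV estimator. The window size, the synthetic test and the peak-fit details are illustrative assumptions, not the dissertation's uncertainty methods.

```python
import numpy as np
from scipy.signal import correlate

def piv_displacement(window_a, window_b):
    """Estimate the particle-image shift between two interrogation windows
    by cross-correlation with a three-point Gaussian sub-pixel peak fit."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    corr = correlate(b, a, mode="same")
    i, j = np.unravel_index(np.argmax(corr), corr.shape)

    def gauss_peak(cm, c0, cp):
        # Three-point Gaussian fit; values clipped to stay positive for the log.
        cm, c0, cp = (np.log(max(v, 1e-12)) for v in (cm, c0, cp))
        return (cm - cp) / (2.0 * cm - 4.0 * c0 + 2.0 * cp)

    dy = gauss_peak(corr[i - 1, j], corr[i, j], corr[i + 1, j])
    dx = gauss_peak(corr[i, j - 1], corr[i, j], corr[i, j + 1])
    centre = np.array(corr.shape) // 2
    return (i + dy - centre[0], j + dx - centre[1])   # shift in pixels

# Synthetic check: shift a random particle pattern by (3, -2) pixels.
rng = np.random.default_rng(4)
img = rng.random((32, 32))
shifted = np.roll(img, (3, -2), axis=(0, 1))
print(piv_displacement(img, shifted))   # approximately (3, -2), up to edge effects
```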

Structural and Dynamical Properties of Organic and Polymeric Systems using Molecular Dynamics Simulations

Lorena Alzate-Vargas (8088409) 06 December 2019 (has links)
The use of atomistic-level simulations like molecular dynamics is becoming a key part of the process of materials discovery, optimization and development, since such simulations can provide a complete description of a material and contribute to understanding the response of materials under certain conditions or to elucidating the mechanisms involved in materials behavior.

We discuss two cases in which molecular dynamics simulations are used to characterize and understand the behavior of materials: i) prediction of properties of small organic crystals, to be used in a multiscale modeling framework whose objective is to predict mechanically induced amorphization without experimental input other than the molecular structure; and ii) characterization of temperature-dependent spatio-temporal domains of high-mobility torsions in several bulk polymers, thin slabs and isolated chains; strikingly, we observe universality in the percolation of these domains across the glass transition.

However, as with any model, validation of the predicted results against appropriate experiments is a critical stage, especially if the predicted results are to be used in decision making. Various sources of uncertainty alter both modeling and experimental results and therefore the validation process. We present molecular dynamics simulations to assess the uncertainties associated with the prediction of several important properties of thermoplastic polymers, in which we independently quantify how the predictions are affected by several sources. Interestingly, we find that all sources of uncertainty studied influence the predictions, but their relative importance depends on the specific quantity of interest.

Inverse Uncertainty Quantification using deterministic sampling: An intercomparison between different IUQ methods

Andersson, Hjalmar January 2021 (has links)
In this thesis, two novel methods for Inverse Uncertainty Quantification (IUQ) are benchmarked against the more established methods of Monte Carlo sampling of output parameters (MC) and Maximum Likelihood Estimation (MLE). Inverse Uncertainty Quantification is the process of estimating the values of the input parameters of a simulation, and the uncertainty of that estimate, given a measurement of the output parameters. The two new methods are Deterministic Sampling (DS) and Weight Fixing (WF). Deterministic Sampling uses a set of sampled points chosen so that the set has the same statistics as the output; for each such point, the corresponding point in the input space is found, which makes it possible to calculate the statistics of the input. Weight Fixing uses random samples from the rough region around the input to set up a linear problem of finding the weights such that the output has the right statistics. The benchmarking of the four methods shows that both DS and WF are comparably accurate to both MC and MLE in most cases tested in this thesis. It was also found that DS and WF use approximately the same number of function calls as MLE, and all three methods use far fewer function calls to the simulation than MC. It was discovered that WF is not always able to find a solution, probably because the methods used for WF are not optimal for what they are supposed to do. Finding more suitable methods for WF is something that could be investigated further.
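As a minimal illustration of the Deterministic Sampling ingredient described above, the sketch below generates a small deterministic (sigma-point style) ensemble that exactly reproduces a prescribed mean and covariance. The point placement, equal weights and example numbers are generic assumptions, not the specific DS or WF implementations benchmarked in the thesis.

```python
import numpy as np

def sigma_points(mean, cov):
    """Deterministic ensemble of 2n equally weighted points that exactly
    reproduces a given n-dimensional mean and covariance."""
    mean = np.asarray(mean, dtype=float)
    n = mean.size
    L = np.linalg.cholesky(np.asarray(cov, dtype=float))
    pts = np.vstack([mean + np.sqrt(n) * L[:, i] for i in range(n)]
                    + [mean - np.sqrt(n) * L[:, i] for i in range(n)])
    return pts                                  # weights are all 1/(2n)

pts = sigma_points([1.0, 2.0], [[0.04, 0.01], [0.01, 0.09]])
print(pts.mean(axis=0))                         # recovers the prescribed mean
print(np.cov(pts, rowvar=False, bias=True))     # recovers the prescribed covariance
```

In an IUQ setting along the lines sketched in the abstract, such an ensemble would be built to match the measured output statistics, and the corresponding input points would then be sought so that input statistics can be computed from them.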
