41.
Parametric Uncertainty Analysis of Uranium Transport Surface Complexation Models
Parametric uncertainty analysis of surface complexation modeling (SCM) has been studied using linear and nonlinear analysis. A computational SCM was developed by Kohler et al. (1996) to simulate the breakthrough of uranium(VI) in a column of quartz. Calibrated parameters describing the reactions involved in the reactive-transport simulation have been found to fit experimental data well. Further uncertainty analysis has been conducted to determine the predictive capability of these models, and it was concluded that nonlinear analysis yields more accurate prediction interval coverage than linear analysis. An assumption made by both linear and nonlinear analysis is that the parameters follow a normal distribution. In a preliminary study, when Monte Carlo sampling from a uniform distribution over a known feasible parameter range, the model exhibited no predictive capability: due to high parameter sensitivity, few realizations reproduced the known data. This yields high confidence in the calibrated parameters but a poor understanding of the parametric distributions. This study first calibrates these parameters using a global optimization technique, a multi-start quasi-Newton BFGS method. Second, a Morris one-at-a-time (MOAT) analysis is used to screen parametric sensitivity; MOAT shows that all parameters exhibit nonlinear effects on the simulation. To approximate the simulated behavior of SCM parameters without assuming a normal distribution, this study employs a covariance-adaptive Markov chain Monte Carlo (MCMC) algorithm. Posterior distributions generated from the accepted parameter sets show that the parameters do not necessarily follow a normal distribution. Likelihood surfaces confirm the calibration of the models but show that the responses to parameters are complex; this complexity is due to a nonlinear model and high correlations between parameters. The posterior parameter distributions are then used to find prediction intervals about an experiment not used to calibrate the model. The predictive capability of adaptive MCMC is found to be better than that of linear and nonlinear analysis, showing a better understanding of parametric uncertainty than previous studies. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of
Master of Science. / Spring Semester, 2011. / November 18, 2010. / Groundwater contamination, Hydrology / Includes bibliographical references. / Ming Ye, Professor Directing Thesis; Robert van Engelen, Committee Member; Tomasz Plewa, Committee Member.
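The abstract does not give implementation details, but covariance-adaptive MCMC of the kind it describes is commonly realized as a Haario-style adaptive Metropolis sampler. A minimal sketch under that assumption follows; the log-posterior `log_post`, starting point, and tuning constants are hypothetical placeholders, not the thesis's actual code:

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_iter=50000, adapt_start=1000, eps=1e-8):
    """Haario-style adaptive Metropolis: the proposal covariance is
    re-estimated from the chain history as sampling proceeds."""
    d = len(x0)
    sd = 2.4**2 / d                      # standard dimension-dependent scaling
    cov = 0.1 * np.eye(d)                # initial proposal covariance
    chain = np.empty((n_iter, d))
    x, lp = np.asarray(x0, float), log_post(x0)
    for i in range(n_iter):
        prop = np.random.multivariate_normal(x, cov)
        lp_prop = log_post(prop)
        if np.log(np.random.rand()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
        if i >= adapt_start:             # adapt covariance to the chain so far
            cov = sd * (np.cov(chain[:i + 1].T) + eps * np.eye(d))
    return chain
```

The key idea is that the proposal covariance is learned from the chain itself, which helps when parameters are highly correlated, as the likelihood surfaces described above suggest.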
42.
Reduced Order Modeling of Reactive Transport in a Column Using Proper Orthogonal Decomposition
Estimating parameters for reactive contaminant transport models can be very computationally intensive. Typically this involves solving a forward problem many times, with many degrees of freedom that must be computed each time. We show that reduced order modeling (ROM) by proper orthogonal decomposition (POD) can be used to approximate the solution of the forward model using many fewer degrees of freedom. We provide background on the finite element method and reduced order modeling in one spatial dimension, and apply both methods to a system of linear uncoupled time-dependent equations simulating reactive transport in a column. By comparing the reduced order and finite element approximations, we demonstrate that the reduced model, while having many fewer degrees of freedom to compute, gives a good approximation of the high-dimensional (finite element) model. Our results indicate that one may substitute a reduced model for the high-dimensional model when solving the forward problem in parameter estimation, at a fraction of the degrees of freedom. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2011. / November 4, 2011. / column experiment, computational hydrology, parameter estimation, proper orthogonal decomposition, reactive transport, reduced order modeling / Includes bibliographical references. / Janet Peterson, Professor Directing Thesis; Ming Ye, Professor Co-Directing Thesis; Sachin Shanbhag, Committee Member.
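As context for the method named in the abstract, here is a compact sketch of how a POD basis is typically extracted from finite element snapshots via the singular value decomposition and used in a Galerkin projection; the matrices are illustrative stand-ins, not the thesis's code:

```python
import numpy as np

def pod_basis(snapshots, r):
    """snapshots: (n_dof, n_snap) array of FEM solutions at several times
    or parameter values. Returns the first r left singular vectors."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    print(f"r = {r} modes capture {energy[r - 1]:.4%} of snapshot energy")
    return U[:, :r]

def reduced_solve(A, b, Phi):
    """Galerkin projection of the linear system A u = b onto the POD basis:
    solve an r x r system instead of the n_dof x n_dof one."""
    Ar = Phi.T @ A @ Phi          # r x r reduced operator
    br = Phi.T @ b                # r-dimensional reduced load vector
    ur = np.linalg.solve(Ar, br)
    return Phi @ ur               # lift back to the full space
```

Solving the small reduced system in place of the full one is what yields the speed-up when the forward model must be run many times during parameter estimation.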
43.
Assessment of Parametric and Model Uncertainty in Groundwater Modeling
Groundwater systems are open and complex, rendering them prone to multiple conceptual interpretations and mathematical descriptions. When multiple models are acceptable based on available knowledge and data, model uncertainty arises. One way to assess model uncertainty is to postulate several alternative hydrologic models for a site and use model selection criteria to (1) rank these models, (2) eliminate some of them, and/or (3) weight and average predictive statistics generated by multiple models based on their model probabilities. This multimodel analysis has led to some debate among hydrogeologists about the merits and demerits of common model selection criteria such as AIC, AICc, BIC, and KIC. This dissertation contributes to the discussion by comparing the abilities of the two common Bayesian criteria (BIC and KIC) theoretically and numerically. The comparison results indicate that, using MCMC results as a reference, KIC yields more accurate approximations of model probability than does BIC. Although KIC reduces asymptotically to BIC, KIC provides consistently more reliable indications of model quality over a range of sample sizes. In multimodel analysis, the model-averaging predictive uncertainty is a weighted average of the predictive uncertainties of the individual models, so it is important to properly quantify each model's predictive uncertainty. Confidence intervals based on regression theory and credible intervals based on Bayesian theory are conceptually different ways to quantify predictive uncertainty, and both are widely used in groundwater modeling. This dissertation explores their differences and similarities theoretically and numerically. The comparison results indicate that, given Gaussian distributed observation errors, for linear or linearized nonlinear models, linear confidence and credible intervals are numerically identical when consistent prior parameter information is used. For nonlinear models, nonlinear confidence and credible intervals can be numerically identical if parameter confidence and credible regions based on the approximate likelihood method are used and intrinsic model nonlinearity is small, but they differ in practice due to numerical difficulties in calculating both kinds of interval. Model error is a more vital issue than the differences between confidence and credible intervals for individual models, underscoring the importance of considering alternative models. Model calibration results are the basis on which model selection criteria discriminate between models. However, how to incorporate calibration data errors into the calibration process is an unsettled problem. It has been seen that, due to improper use of the error probability structure in calibration, the model selection criteria can lead to an unrealistic situation in which one model receives overwhelmingly high averaging weight (even 100%), which cannot be justified by available data and knowledge. This dissertation finds that the errors reflected in calibration should include two parts: measurement errors and model errors. To account for the probability structure of the total errors, I propose an iterative calibration method with two stages of parameter estimation. The multimodel analysis based on the estimation results leads to more reasonable averaging weights and better averaging predictive performance, compared to those obtained by considering only measurement errors.
Traditionally, data-worth analyses have relied on a single conceptual-mathematical model with prescribed parameters. Yet this renders model predictions prone to statistical bias and underestimation of uncertainty, and thus affects groundwater management decisions. This dissertation proposes a multimodel approach to optimum data-worth analysis that is based on model averaging within a Bayesian framework. The developed multimodel Bayesian approach to data-worth analysis works well in a real geostatistical problem; in particular, the selection of the target for additional data collection based on the approach is validated against the data actually collected. The last part of the dissertation presents an efficient method of Bayesian uncertainty analysis. While Bayesian analysis is vital to quantifying predictive uncertainty in groundwater modeling, its application to multimodel uncertainty analysis has been hindered by the computational cost of numerous model executions and the difficulty of sampling from the complicated posterior probability density functions of model parameters. This dissertation develops a new method that improves the computational efficiency of Bayesian uncertainty analysis using a sparse-grid method. The developed sparse-grid-based method for Bayesian uncertainty analysis demonstrates superior accuracy and efficiency relative to classic importance sampling and an MCMC sampler when applied to a groundwater flow model. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Spring Semester, 2012. / March 29, 2012. / Bayesian model averaging, Data worth, Model selection criteria, Multimodel analysis, Uncertainty measure / Includes bibliographical references. / Ming Ye, Professor Directing Dissertation; Xufeng Niu, University Representative; Peter Beerli, Committee Member; Gary Curtis, Committee Member; Michael Navon, Committee Member; Tomasz Plewa, Committee Member.
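For orientation, the Bayesian criteria and averaging weights discussed above take a simple computable form. The sketch below uses the standard BIC formula and the usual exponential weighting; KIC adds a Fisher-information term that is omitted here, and the likelihood values in the example are hypothetical:

```python
import numpy as np

def bic(log_like, k, n):
    """Bayesian information criterion: -2 ln L + k ln n,
    for k estimated parameters and n observations."""
    return -2.0 * log_like + k * np.log(n)

def bma_weights(criteria):
    """Posterior model weights from IC values (smaller IC is better):
    w_i proportional to exp(-delta_i / 2), normalized to sum to 1."""
    ic = np.asarray(criteria, float)
    delta = ic - ic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Example: three alternative models with hypothetical maximized likelihoods
weights = bma_weights([bic(-120.5, 4, 50), bic(-118.0, 6, 50), bic(-119.2, 5, 50)])
print(weights)   # averaging weights for the three models
```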
44.
Integrating Two-Way Interaction Between Fluids and Rigid Bodies in the Real-Time Particle Systems Library
In the last 15 years, video games have become a dominant form of entertainment. The popularity of video games means children are spending more of their free time playing them, and the time spent on homework or studying often decreases to make room for it. In an effort to address this problem, researchers have begun creating educational video games, and some studies have shown a significant increase in learning from video games and other interactive instruction. Educational games can be used in conjunction with formal educational methods to improve retention among students. To facilitate the creation of games for science education, the RTPS library was created by Ian Johnson to simulate fluid dynamics in real time. This thesis extends the RTPS library to provide more realistic simulations. Rigid body dynamics have been added to the simulation framework, and a two-way coupling between the rigid bodies and fluids has been implemented. Another contribution to the library is the addition of fluid surface rendering, which produces a more realistic-looking simulation. Finally, a Qt interface was added to allow modification of simulation parameters in real time. Performing these simulations in real time requires a significant amount of computational power. Though processing power has grown consistently for many years, the demand for higher-performance desktops grew faster than CPUs could satisfy. In 2006, general-purpose GPU computing (GPGPU) became widely accessible with the introduction of the CUDA programming language, giving developers access to an enormous amount of processing power; some researchers reported speed-ups of up to 10 times over a CPU. With this power, one can perform simulations on a desktop computer that were previously feasible only on supercomputers. GPGPU technology is utilized in this thesis to enable real-time simulations. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2012. / September 4, 2012. / Fluid Dynamics, Fluid Rendering, GPGPU, Physics Simulation, Real-Time, SPH / Includes bibliographical references. / Gordon Erlebacher, Professor Directing Thesis; Tomasz Plewa, Committee Member; Sachin Shanbhag, Committee Member.
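The fluid solver underlying the RTPS library is smoothed particle hydrodynamics (SPH, per the keywords). As a minimal illustration of the particle formulation, here is the standard SPH density summation with the poly6 kernel; a real-time implementation would use a GPU neighbor grid rather than this O(N^2) loop, and the inputs are illustrative:

```python
import numpy as np

def sph_density(positions, masses, h):
    """Naive O(N^2) SPH density summation with the poly6 kernel.
    positions: (N, 3) particle coordinates; h: smoothing radius."""
    poly6 = 315.0 / (64.0 * np.pi * h**9)   # 3D poly6 normalization
    n = len(positions)
    rho = np.zeros(n)
    for i in range(n):
        r2 = np.sum((positions - positions[i])**2, axis=1)
        w = np.where(r2 < h * h, poly6 * (h * h - r2)**3, 0.0)
        rho[i] = np.sum(masses * w)          # sum kernel-weighted masses
    return rho
```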
45.
Sparse-Grid Methods for Several Types of Stochastic Differential Equations
This work focuses on developing and analyzing novel, efficient sparse-grid algorithms for solving several types of stochastic ordinary/partial differential equations and corresponding inverse problems, such as parameter identification. First, we consider linear parabolic partial differential equations with random diffusion coefficient, forcing term, and initial condition. Error analysis for a stochastic collocation method is carried out in a wider range of situations than in the previous literature, including input data that depend nonlinearly on the random variables and random variables that are correlated or even unbounded. We provide a rigorous convergence analysis and demonstrate the exponential decay of the interpolation error in the probability space for both semi-discrete and fully-discrete solutions. Second, we consider multi-dimensional backward stochastic differential equations driven by a vector of white noise. A sparse-grid scheme is proposed to discretize the target equation in the multi-dimensional time-space domain. In our scheme, the time discretization is conducted by a multi-step scheme; in the multi-dimensional spatial domain, the conditional mathematical expectations derived from the original equation are approximated using a sparse-grid Gauss-Hermite quadrature rule and adaptive hierarchical sparse-grid interpolation. Error estimates are rigorously proved for the proposed fully-discrete scheme for multi-dimensional BSDEs with certain types of simplified generator functions. Third, we investigate the propagation of input uncertainty through nonlocal diffusion models. Since stochastic local diffusion equations, e.g. heat equations, have already been well studied, we are interested in extending the existing numerical methods to nonlocal diffusion problems. In this work, we use the sparse-grid stochastic collocation method to solve nonlocal diffusion equations with colored noise, and the Monte Carlo method to solve those with white noise. Our numerical experiments show that the existing methods achieve the desired accuracy in the nonlocal setting. Moreover, in the white noise case, the nonlocal diffusion operator reduces the variance of the solution because it has a "smoothing" effect on the random field. Finally, the stochastic inverse problem is investigated. We propose a sparse-grid Bayesian algorithm to improve the efficiency of classic Bayesian methods. Using sparse-grid interpolation and integration, we construct a surrogate posterior probability density function and determine an appropriate alternative density that captures the main features of the true PPDF, improving the simulation efficiency in the framework of indirect sampling. By applying this method to a groundwater flow model, we demonstrate its better accuracy when compared to brute-force MCMC simulation results. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester, 2012. / June 22, 2012. / Bayesian analysis, inverse problem, nonlocal diffusion, sparse grid, stochastic differential equations, uncertainty quantification / Includes bibliographical references. / Max D. Gunzburger, Professor Directing Dissertation; Xiaoming Wang, University Representative; Janet Peterson, Committee Member; Xiaoqiang Wang, Committee Member; Ming Ye, Committee Member; Clayton Webster, Committee Member; John Burkardt, Committee Member.
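As a one-dimensional illustration of the stochastic collocation idea, the sketch below estimates the mean of a model output driven by a Gaussian random input using probabilists' Gauss-Hermite quadrature; sparse grids extend such one-dimensional rules to many random dimensions via the Smolyak construction. The `model` function here is a toy stand-in for a PDE solve:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # nodes/weights for exp(-x^2/2)

def collocation_mean(model, n_nodes=9):
    """Estimate E[model(Y)] for Y ~ N(0,1) by evaluating the model only
    at the Gauss-Hermite collocation nodes."""
    nodes, weights = hermegauss(n_nodes)
    weights = weights / np.sqrt(2.0 * np.pi)   # normalize against the N(0,1) density
    return sum(w * model(y) for y, w in zip(nodes, weights))

# Hypothetical output functional of a solution with random diffusivity 2 + sin(Y)
mean = collocation_mean(lambda y: 1.0 / (2.0 + np.sin(y)))
print(mean)
```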
46.
Solution of the Navier-Stokes Equations by the Finite Element Method Using Reduced Order Modeling
Reduced order models (ROMs) provide a low-dimensional alternative form of a system of differential equations, permitting faster computation of solutions. In this work, Poisson's equation in two dimensions, the heat equation in one dimension, and a nonlinear reaction-diffusion equation in one dimension are solved using the Galerkin formulation of the finite element method (FEM) in conjunction with Newton's method. Reduced order modeling by proper orthogonal decomposition (POD) is then used to accelerate the solution of the successive linear systems required by Newton's method; this demonstrates the viability of the method on simple problems. The Navier-Stokes (NS) equations are introduced and solved by FEM. ROMs using both POD and clustering by centroidal Voronoi tessellation (CVT) are then used to solve the NS equations, and the results are compared with the FEM solution. The specific NS problem we consider has inhomogeneous Dirichlet boundary conditions, and the treatment of the boundary conditions is explained. The resulting decreases in the computation time required to solve the various equations with the ROM methods are reported. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2012. / October 5, 2012. / Finite Element Methods, Navier-Stokes Equations, Nonlinear PDEs, Reduced Order Modeling / Includes bibliographical references. / Janet Peterson, Professor Directing Thesis; Tomasz Plewa, Committee Member; Sachin Shanbhag, Committee Member.
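CVT-based reduction replaces the SVD step of POD with a clustering of the snapshot set, the orthonormalized cluster representatives forming the reduced basis. A hedged sketch of that idea using Lloyd-style (k-means) iteration, with illustrative inputs and no claim to match the thesis's exact construction:

```python
import numpy as np

def cvt_basis(snapshots, k, n_iter=100, seed=0):
    """Cluster snapshot columns into k Voronoi cells (Lloyd/k-means);
    orthonormalized centroids serve as a reduced basis of dimension k."""
    rng = np.random.default_rng(seed)
    X = snapshots.T                                    # one row per snapshot
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None], axis=2)
        labels = d.argmin(axis=1)                      # nearest-centroid cells
        centroids = np.array([X[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])      # keep empty cells fixed
    Q, _ = np.linalg.qr(centroids.T)                   # orthonormalize columns
    return Q
```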
47.
Spherical Centroidal Voronoi Tessellation Based Unstructured Meshes for Multidomain Multiphysics Applications
This dissertation presents and investigates ideas for improving the creation of quality spherical centroidal Voronoi tessellations (SCVTs), which are to be used for multiphysics, multidomain applications. As an introduction, we discuss grid generation on the sphere in broad fashion. Next, we discuss the theory of CVTs in general, and specifically on the sphere. Subsequently we consider the iterative processes, such as Lloyd's algorithm, used to construct them. Following this, we describe a method for defining density functions via images, so that generator density can be shaped in an intuitive yet arbitrary manner, and then a method by which SCVTs can easily be adapted to conform to arbitrary sets of line segments, or shorelines. Then we present sample meshes used for various physical and nonphysical applications. Penultimately, we discuss two sample applications as proofs of concept: we adapt the shallow water model from the Model for Prediction Across Scales (MPAS) to use our grids for a more accurate border, and we discuss elliptic interface problems both with and without hanging nodes. Finally, we share a few concluding remarks. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Fall Semester, 2011. / November 3, 2011. / Includes bibliographical references. / Max Gunzburger, Professor Co-Directing Dissertation; Janet Peterson, Professor Co-Directing Dissertation; Kyle Gallivan, University Representative; Gordon Erlebacher, Committee Member; Xiaoqiang Wang, Committee Member; Todd Ringler, Committee Member.
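A bare-bones sketch of one Lloyd iteration on the unit sphere, the workhorse of SCVT construction: sample points are assigned to their nearest generator, and each generator moves to its cell centroid projected back onto the sphere. Uniform density is assumed here; the image-based density functions described above would enter as weights in the centroid computation:

```python
import numpy as np

def lloyd_sphere_step(generators, samples):
    """One Lloyd iteration for a spherical CVT.
    generators: (k, 3) unit vectors; samples: (m, 3) unit vectors drawn
    densely over the sphere (Monte Carlo integration of the cells)."""
    d = samples @ generators.T                 # cosine similarity; max = nearest
    labels = d.argmax(axis=1)
    new_gen = generators.copy()
    for j in range(len(generators)):
        cell = samples[labels == j]
        if len(cell):
            c = cell.mean(axis=0)              # Euclidean centroid of the cell
            new_gen[j] = c / np.linalg.norm(c) # project back onto the sphere
    return new_gen
```

Iterating this step to a fixed point yields generators that coincide with their cell centroids, i.e., a centroidal Voronoi tessellation.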
48.
Numerical Implementation of Continuum Dislocation Theory
This thesis aims at theoretical and computational modeling of continuum dislocation theory coupled with its internal elastic field. In this continuum description, the space-time evolution of the dislocation density is governed by a set of hyperbolic partial differential equations. These PDEs must be complemented by elastic equilibrium equations in order to obtain the velocity field that drives dislocation motion on slip planes. Simultaneously, the plastic eigenstrain tensor that serves as a known field in the equilibrium equations must be updated by the motion of dislocations according to Orowan's law. A stress-dislocation coupled process is therefore involved when a crystal undergoes elastoplastic deformation. The solutions of the equilibrium equations and the dislocation density evolution equations are tested on a few examples to ensure that appropriate computational schemes are selected for each. A coupled numerical scheme is proposed, in which the resolved shear stress and Orowan's law are the two passages connecting the two sets of PDEs. The numerical implementation of this scheme is illustrated by an example simulating the recovery process of a dislocated cubic crystal. The simulated result demonstrates the possibility of coupling macroscopic (stress) and microscopic (dislocation density tensor) physical quantities to obtain the crystal's mechanical response. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2011. / November 7, 2011. / crystal plasticity, dislocation, dislocation density tensor, dislocation evolution equation / Includes bibliographical references. / Anter El-Azab, Professor Directing Thesis; Tomasz Plewa, Committee Member; Xiaoqiang Wang, Committee Member.
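The evolution equations are hyperbolic, so transport-style discretizations apply. A toy one-dimensional sketch of first-order upwind advection of a dislocation density, with the plastic strain accumulated in the spirit of Orowan's law (plastic strain rate equal to density times Burgers vector magnitude times glide velocity); all fields, boundary conditions, and constants are illustrative, not the thesis's scheme:

```python
import numpy as np

def evolve_density(rho, v, b, dx, dt, n_steps):
    """First-order upwind transport of dislocation density rho(x) with
    glide velocity v(x), accumulating plastic strain via Orowan's law.
    Periodic boundaries via np.roll."""
    gamma = np.zeros_like(rho)
    for _ in range(n_steps):
        flux = v * rho
        drho = np.where(v > 0,
                        flux - np.roll(flux, 1),    # upwind from the left
                        np.roll(flux, -1) - flux)   # upwind from the right
        rho = rho - (dt / dx) * drho
        gamma += dt * rho * b * np.abs(v)           # Orowan update of plastic strain
    return rho, gamma
```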
49.
Practical Optimization Algorithms in the Data Assimilation of Large-Scale Systems with Non-Linear and Non-Smooth Observation Operators
This dissertation compares and contrasts large-scale optimization algorithms in the use of variational and sequential data assimilation on two novel problems chosen to highlight the challenges in non-linear and non-smooth data assimilation. The first problem explores the impact of a highly non-linear observation operator and highlights the importance of background information on the data assimilation problem. The second problem tackles large-scale data assimilation with a non-smooth observation operator. Together, these two cases show both the importance of choosing an appropriate data assimilation method and, when a variational or variationally-inspired method is chosen, the importance of choosing the right optimization algorithm for the problem at hand. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Spring Semester, 2012. / November 21, 2011. / All-sky infrared satellite, Cloudy IR, Inverse problem, Limited Memory Bundle Method, Non-differentiable, Quasi-Newton / Includes bibliographical references. / Ionel Michael Navon, Professor Directing Thesis; Guosheng Liu, University Representative; Max Gunzburger, Committee Member; Gordon Erlebacher, Committee Member; Milijia Zupanski, Committee Member; Napsu Karmitsa, Committee Member.
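For context, the variational cost function at the heart of such methods has a standard form; the sketch below minimizes a 3D-Var-style cost with SciPy's limited-memory BFGS. The observation operator, covariances, and data are toy stand-ins, and for the non-smooth case the dissertation considers bundle-type methods (e.g., the Limited Memory Bundle Method named in the keywords) rather than this smooth solver:

```python
import numpy as np
from scipy.optimize import minimize

def make_3dvar_cost(xb, B_inv, y, H, R_inv):
    """J(x) = 1/2 (x - xb)' B^-1 (x - xb) + 1/2 (y - H(x))' R^-1 (y - H(x)):
    background misfit plus observation misfit."""
    def cost(x):
        dxb = x - xb
        innov = y - H(x)
        return 0.5 * dxb @ B_inv @ dxb + 0.5 * innov @ R_inv @ innov
    return cost

n = 10
xb = np.zeros(n)                          # background (prior) state
truth = np.linspace(0.0, 1.0, n)
H = lambda x: np.tanh(x)                  # hypothetical nonlinear observation operator
y = H(truth) + 0.01 * np.random.randn(n)  # synthetic noisy observations
cost = make_3dvar_cost(xb, np.eye(n), y, H, 100.0 * np.eye(n))
res = minimize(cost, xb, method="L-BFGS-B")   # gradient by finite differences
print(res.x)                              # analysis state estimate
```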
50.
A Sender-Centric Approach to Spam and Phishing Control
The Internet email system, a popular online communication tool, has been increasingly misused by ill-willed users to carry out malicious activities, including spamming and phishing. Alarmingly, in recent years the nature of email-based malicious activities has evolved from being purely annoying (with the notorious example of spamming) to being criminal (with the notorious example of phishing). Despite more than a decade of anti-spam and anti-phishing research and development, both the sophistication and the volume of spam and phishing messages on the Internet have continuously risen. A key difficulty in controlling email-based malicious activities is that malicious actors have great operational flexibility, in terms of both the email delivery infrastructure and email content; moreover, existing anti-spam and anti-phishing measures allow for an arms race between malicious actors and the anti-spam and anti-phishing community. To effectively control email-based malicious activities such as spamming and phishing, we argue that we must limit (and ideally eliminate) the operational flexibility that malicious actors have enjoyed over the years. In this dissertation we develop and evaluate a sender-centric approach (SCA) to the problem of email-based malicious activities, so as to control spam and phishing emails on the Internet. SCA consists of three complementary components, which together greatly limit the operational flexibility of malicious actors in sending spam and phishing emails. The first two components focus on limiting the infrastructural flexibility of malicious actors in delivering emails, and the last component focuses on limiting their flexibility in manipulating the content of emails. In the first component of SCA, we develop a machine-learning based system to prevent malicious actors from utilizing compromised machines to send spam and phishing emails. Given that the vast majority of spam and phishing emails today are delivered via compromised machines, this system can greatly limit the infrastructural flexibility of malicious actors. Ideally, malicious actors should be forced to send spam and phishing messages from their own machines, so that blacklists and reputation-based systems can effectively block them; the machine-learning based system we develop in this dissertation is a critical step towards this goal. In recent years, malicious actors have also started to employ advanced techniques to hijack network prefixes when conducting email-based malicious activities, which makes the control and attribution of spam and phishing emails even harder. In the second component of SCA, we develop a practical approach to improving the security of the Internet inter-domain routing protocol, BGP. Given that the key difficulties in adopting any mechanism to secure inter-domain routing are the mechanism's overhead and its incremental deployment properties, our scheme is designed to have minimal overhead, and it can be incrementally deployed by individual networks to protect themselves (and their customer networks), giving individual networks an incentive to deploy it. In addition to the infrastructural flexibility in delivering spam and phishing emails, malicious actors have enormous flexibility in manipulating the format and content of email messages.
In particular, malicious actors can forge phishing messages to closely resemble legitimate messages in both format and content. However, although malicious actors have immense power over the format and content of phishing emails, they cannot completely hide how a message is delivered to its recipients. Based on this observation, in the last component of SCA we develop a system that identifies phishing emails based on sender-related information instead of the format or content of email messages. Together, the three complementary components of SCA greatly limit the operational flexibility and capability that malicious actors have enjoyed over the years in delivering spam and phishing emails, and we believe that SCA makes a significant contribution towards addressing the spam and phishing problem on the Internet. / A Dissertation submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Fall Semester, 2011. / November 4, 2011. / Flexibility, Phishing, Sender-centric, Spam / Includes bibliographical references. / Zhenhai Duan, Committee Member; Xufeng Niu, University Representative; Xin Yuan, Committee Member; Sudhir Aggarwal, Committee Member.
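As a hedged illustration of the first component's idea, sending machines can be classified from sender-level behavior rather than message content. The per-sender features below are invented for illustration; the dissertation's actual feature set and learning algorithm may differ:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-sender features: messages per hour, fraction of failed
# recipient addresses, whether the sender IP lies in a dynamic range, and
# mean inter-message interval in seconds.
X_train = np.array([
    [400.0, 0.35, 1,   0.8],   # compromised-machine-like sending behavior
    [  2.0, 0.01, 0, 900.0],   # legitimate mail server
    [350.0, 0.40, 1,   1.2],
    [  5.0, 0.02, 0, 600.0],
])
y_train = np.array([1, 0, 1, 0])   # 1 = likely compromised sender

clf = LogisticRegression().fit(X_train, y_train)
print(clf.predict_proba([[300.0, 0.30, 1, 2.0]]))   # score a new sender
```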