1 |
Adaptive Algorithms for Deterministic and Stochastic Differential Equations / Moon, Kyoung-Sook / January 2003
No description available.
|
2 |
Using Phase-Field Modeling With Adaptive Mesh Refinement To Study Elasto-Plastic Effects In Phase Transformations / Greenwood, Michael / 11 1900
This thesis details the development of a phase field model that allows the simulation of elasticity with diffuse interfaces, and the extension of a thin-interface analysis developed by previous authors to the study of non-dilute ideal alloys. These models are coupled with a new finite difference adaptive mesh algorithm to efficiently simulate a variety of physical systems. The finite difference adaptive mesh algorithm is shown to be, at worst, 4-5 times faster than an equivalent finite element method on a per-node basis. In addition to this increase in speed for the explicit solvers in the code, an iterative solver used to compute elastic fields is found to converge in O(N) time for a dynamically growing precipitate, where N is the number of nodes on the adaptive mesh. A previous phase field formulation is extended to make possible the study of non-ideal binary alloys with complex phase diagrams. A phase field model is also derived for a free energy that incorporates an elastic free energy, and is used to investigate the competitive development of solid state structures in which the kinetic transfer rate of atoms from the parent phase to the precipitate phase is large. This results in the growth of solid state dendrites. The morphological effects of competing surface anisotropy and anisotropy in the elastic modulus tensor are analyzed. It is shown that the transition from surface-energy-driven dendrites to elastically driven dendrites depends on the magnitudes of the surface energy anisotropy coefficient (E4) and the anisotropy of the elastic tensor (β), as well as on the supersaturation of the particle, and therefore on a specific Mullins-Sekerka onset radius. The transition point of this competitive process is predicted from these three controlling parameters. / Thesis / Doctor of Philosophy (PhD)
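A common way to drive adaptive mesh refinement for diffuse-interface models like the one above is to flag cells where the phase field varies rapidly, i.e. near the interface. The following is a minimal sketch of such a gradient-based flagging criterion, not code from the thesis; the function name, the threshold `tol`, and the tanh interface profile in the toy example are assumptions.

```python
import numpy as np

def flag_cells_for_refinement(phi, dx, tol=0.1):
    """Flag finite-difference cells whose phase-field gradient magnitude
    exceeds `tol`, i.e. cells near the diffuse interface.

    phi : 2-D array of phase-field values on a uniform patch
    dx  : grid spacing of the patch
    tol : refinement threshold on |grad(phi)| (assumed, problem dependent)
    """
    dphi_dy, dphi_dx = np.gradient(phi, dx)     # centred differences, one-sided at edges
    grad_mag = np.hypot(dphi_dx, dphi_dy)
    return grad_mag > tol                       # boolean mask: True -> refine

# Toy usage: a circular precipitate described by a tanh interface profile
n, L, W = 128, 1.0, 0.02                        # grid size, domain length, interface width
x = np.linspace(0.0, L, n)
X, Y = np.meshgrid(x, x)
r = np.hypot(X - 0.5, Y - 0.5)
phi = np.tanh((0.25 - r) / W)                   # +1 inside the precipitate, -1 outside
flags = flag_cells_for_refinement(phi, x[1] - x[0], tol=5.0)
print("cells flagged for refinement:", int(flags.sum()))
```

In an adaptive code the flagged cells would then be clustered into refined patches; here the mask alone is returned to keep the sketch short.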
|
3 |
Adjoint-based space-time adaptive solution algorithms for sensitivity analysis and inverse problems / Alexe, Mihai / 14 April 2011
Adaptivity in both space and time has become the norm for solving problems modeled by partial differential equations. The size of the discretized problem makes uniformly refined grids computationally prohibitive. Adaptive refinement of meshes and time steps makes it possible to capture the phenomena of interest while keeping the cost of a simulation tractable on current hardware. Many fields in science and engineering require the solution of inverse problems, where parameters for a given model are estimated based on available measurement information. In contrast to forward (regular) simulations, inverse problems have not benefited extensively from adaptive solver technology. Previous research in inverse problems has focused mainly on the continuous approach to calculating sensitivities, and has typically employed fixed time and space meshes in the solution process. Inverse problem solvers that make exclusive use of uniform or static meshes avoid complications such as the differentiation of mesh motion equations, or inconsistencies in the sensitivity equations between subdomains with different refinement levels. However, this comes at the cost of low computational efficiency. More efficient computations are possible through judicious use of adaptive mesh refinement, adaptive time steps, and the discrete adjoint method.
This dissertation develops a complete framework for fully discrete adjoint sensitivity analysis and inverse problem solutions, in the context of time-dependent, adaptive mesh, and adaptive step models. The discrete framework addresses all the necessary ingredients of a state-of-the-art adaptive inverse solution algorithm: adaptive mesh and time step refinement, solution grid transfer operators, a priori and a posteriori error analysis and estimation, and discrete adjoints for sensitivity analysis of flux-limited numerical algorithms. / Ph. D.
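The discrete adjoint idea at the core of such a framework can be illustrated on a scalar toy problem: differentiate through the time-stepping scheme itself rather than through the continuous equations. The sketch below is our own illustration, not the dissertation's flux-limited PDE setting; the decay model du/dt = -p*u, the cost function, and all names are assumptions. It computes dJ/dp with one reverse sweep and checks it against a finite-difference gradient.

```python
import numpy as np

def forward(p, u0=1.0, dt=0.01, nsteps=100):
    """Forward Euler for du/dt = -p*u; returns the full trajectory."""
    u = np.empty(nsteps + 1)
    u[0] = u0
    for k in range(nsteps):
        u[k + 1] = u[k] * (1.0 - p * dt)
    return u

def discrete_adjoint_gradient(p, u_obs, u0=1.0, dt=0.01, nsteps=100):
    """dJ/dp for J = 0.5*(u_N - u_obs)^2 via the fully discrete adjoint of the
    forward Euler scheme (one reverse sweep, costing about one forward run)."""
    u = forward(p, u0, dt, nsteps)
    lam = u[-1] - u_obs                     # adjoint at the final step, dJ/du_N
    dJdp = 0.0
    for k in reversed(range(nsteps)):
        dJdp += lam * (-dt * u[k])          # contribution of step k to dJ/dp
        lam *= (1.0 - p * dt)               # propagate the adjoint one step back
    return dJdp

# Sanity check against a finite-difference gradient
p, u_obs = 2.0, 0.2
g_adj = discrete_adjoint_gradient(p, u_obs)
eps = 1e-6
J = lambda q: 0.5 * (forward(q)[-1] - u_obs) ** 2
g_fd = (J(p + eps) - J(p - eps)) / (2 * eps)
print(g_adj, g_fd)                          # the two gradients agree closely
```

Because the adjoint is derived from the discrete scheme, the computed gradient is exact for the discretized problem, which is the consistency property the dissertation exploits on adaptive meshes.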
|
4 |
Galerkin Projections Between Finite Element Spaces / Thompson, Ross Anthony / 17 June 2015
Adaptive mesh refinement schemes are used to find accurate low-dimensional approximating spaces when solving elliptic PDEs with Galerkin finite element methods. For nonlinear PDEs, solving the nonlinear problem with Newton's method requires an initial guess of the solution on the refined space, which can be found by interpolating the solution from a previous refinement. Improving the accuracy with which the converged solution computed on a coarse mesh is represented on the refined mesh, for use as an initial guess there, may reduce the number of Newton iterations required for convergence. In this thesis, we present an algorithm to compute an orthogonal L^2 projection between two-dimensional finite element spaces constructed from a triangulation of the domain. Furthermore, we present numerical studies that investigate the efficiency of using this algorithm to solve various nonlinear elliptic boundary value problems. / Master of Science
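A one-dimensional analogue of the projection described above can be written in a few lines: assemble the mass matrix of the target space and a right-hand side that integrates the source function against the target basis. This is a sketch under assumptions (1-D piecewise-linear hat functions rather than the two-dimensional triangulations of the thesis; the meshes and names are ours), meant only to convey the structure of the computation.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def p1_l2_projection(u, nodes):
    """L^2 projection of the callable u onto continuous piecewise-linear (P1)
    functions on the 1-D mesh `nodes`. Returns the nodal coefficient vector."""
    h = np.diff(nodes)                              # element lengths
    n = len(nodes)

    # P1 mass matrix (tridiagonal): local matrix is (h/6) * [[2, 1], [1, 2]]
    main = np.zeros(n); off = np.zeros(n - 1)
    main[:-1] += h / 3.0; main[1:] += h / 3.0
    off[:] = h / 6.0
    M = diags([off, main, off], [-1, 0, 1], format="csc")

    # Right-hand side b_i = int(phi_i * u) via 2-point Gauss per element
    gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)       # reference points on [-1, 1]
    b = np.zeros(n)
    for e in range(n - 1):
        xl, xr = nodes[e], nodes[e + 1]
        xq = 0.5 * (xl + xr) + 0.5 * (xr - xl) * gp # mapped quadrature points
        w = 0.5 * (xr - xl)                         # weight times Jacobian
        phi_l = (xr - xq) / (xr - xl)               # left hat function
        phi_r = (xq - xl) / (xr - xl)               # right hat function
        uq = u(xq)
        b[e]     += w * np.sum(phi_l * uq)
        b[e + 1] += w * np.sum(phi_r * uq)

    return spsolve(M, b)

# Transfer a coarse-mesh FE solution to a refined mesh as a Newton initial guess
coarse = np.linspace(0.0, 1.0, 9)
fine = np.linspace(0.0, 1.0, 33)
u_coarse = np.sin(np.pi * coarse)                   # stand-in for a converged coarse solution
u_on_fine = p1_l2_projection(lambda x: np.interp(x, coarse, u_coarse), fine)
```

For nested meshes, as in this example, the coarse P1 function already lies in the fine space, so the projection reproduces it exactly; that property makes a convenient sanity check for an implementation.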
|
5 |
Efficient Execution Of AMR Computations On GPU Systems / Raghavan, Hari K / 11 1900
Adaptive Mesh Refinement (AMR) is a method which dynamically varies the spatio-temporal resolution of localized mesh regions in numerical simulations, based on the strength of the solution features. By discretizing localized regions of interest at high resolution into rectangular mesh units called patches, AMR provides a low cost of computation together with a high degree of accuracy. General purpose graphics processing units (GPGPUs), with their support for fine-grained parallelism, offer an attractive option for obtaining high performance for AMR applications. The data parallel computations of the finite difference schemes of AMR can be performed efficiently on GPGPUs. This research deals with the challenges in, and develops techniques for, efficient execution of AMR applications with uniform and non-uniform patches on GPUs.
In the first part of the thesis, we optimize an AMR model with uniform patches. We have developed strategies for continuous online visualization of time evolving data for AMR applications executed on GPUs. In-situ visualization plays an important role in analyzing the time evolving characteristics of the domain structures. Continuous visualization of the output data at various time steps enables better study of the underlying domain and of the model used to simulate it. We reorder the meshes for computation on the GPU based on the user's input about the subdomain to be visualized, which makes the data available for visualization at a faster rate. We then perform asynchronous executions of the visualization steps and fix-up operations on the coarse meshes on the CPUs while the GPU advances the solution. By performing experiments on Tesla S1070 and Fermi C2070 clusters, we found that our strategies result in up to 60% improvement in response time and 16% improvement in the rate of visualization of frames over the existing strategy of performing fix-ups and visualization at the end of the time steps.
The second part of the thesis deals with adaptive strategies for efficient execution of block structured AMR applications with non-uniform patches on GPUs. Most AMR approaches use patches of uniform sizes over regions of interest. Since this leads to over-refinement, some efforts have focused on forming patches of non-uniform dimensions, tuned to the geometry of a region of interest, to improve computational efficiency. While effective hybrid execution strategies exist for applications with uniform patches, our work considers efficient execution of non-uniform patches with different workloads. Our techniques include a geometric bin-packing method to load balance GPU computations and reduce thread idling (a simplified greedy variant is sketched below), adaptive determination of the amount of work to maximize asynchronism between CPU and GPU executions using a knapsack formulation, and scheduling of communications for multi-GPU executions. We test our strategies on synthetic inputs as well as on traces from real applications. Our experiments on Tesla S1070 and Fermi C2070 clusters with both single-GPU and multi-GPU executions show that our strategies result in up to 69% improvement in performance over existing strategies. Our bin-packing based load balancing gives performance gains of up to 39%, kernel optimizations give an improvement of up to 20%, and our strategies for adaptive asynchronism between CPU-GPU executions give performance improvements of up to 17% over default static asynchronous executions.
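The load-balancing step can be pictured with a simple greedy heuristic: sort patches by workload and always hand the next patch to the least-loaded GPU work queue. The sketch below is a longest-processing-time variant, not the geometric bin-packing algorithm of the thesis (which also accounts for patch shapes); the function name and example workloads are assumptions.

```python
import heapq

def balance_patches(patch_workloads, nbins):
    """Greedy longest-processing-time assignment of AMR patch workloads
    (e.g. cell counts) to `nbins` GPU work queues. Returns one list of patch
    indices per bin, roughly equalising the total work per bin."""
    heap = [(0, b, []) for b in range(nbins)]      # (load, bin id, patch indices)
    heapq.heapify(heap)
    order = sorted(range(len(patch_workloads)),
                   key=lambda i: patch_workloads[i], reverse=True)
    for i in order:
        load, b, items = heapq.heappop(heap)       # least-loaded bin so far
        items.append(i)
        heapq.heappush(heap, (load + patch_workloads[i], b, items))
    return [items for _, _, items in sorted(heap, key=lambda t: t[1])]

# Example: ten non-uniform patches distributed over 3 GPU work queues
workloads = [4096, 1024, 256, 8192, 512, 2048, 128, 1024, 4096, 256]
for b, patches in enumerate(balance_patches(workloads, 3)):
    print(f"bin {b}: patches {patches}, total work {sum(workloads[i] for i in patches)}")
```

Evening out the per-queue work in this way is what reduces thread idling when the patches have very different sizes.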
|
6 |
Convergence rates of adaptive algorithms for deterministic and stochastic differential equations / Moon, Kyoung-Sook / January 2001
No description available.
|
7 |
Uncertainty Quantification and Assimilation for Efficient Coastal Ocean Forecasting / Siripatana, Adil / 21 April 2019
Bayesian inference is commonly used to quantify and reduce modeling uncertainties in coastal ocean models by computing the posterior probability distribution function (pdf) of some uncertain quantities to be estimated, conditioned on available observations. The posterior can be computed either directly, using a Markov Chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation (DA) approach. The advantage of data assimilation schemes over MCMC-type methods arises from the ability to algorithmically accommodate a large number of uncertain quantities without a significant increase in the computational requirements. However, this approach generally yields only approximate estimates, often due to restrictive Gaussian prior and noise assumptions.
This thesis aims to develop, implement and test novel efficient Bayesian inference techniques to quantify and reduce modeling and parameter uncertainties of coastal ocean models. Both state and parameter estimation will be addressed within the framework of a state-of-the-art coastal ocean model, the Advanced Circulation (ADCIRC) model. The first part of the thesis proposes efficient Bayesian inference techniques for uncertainty quantification (UQ) and state-parameter estimation. Based on a realistic framework of observation system simulation experiments (OSSEs), an ensemble Kalman filter (EnKF) is first evaluated against a Polynomial Chaos (PC)-surrogate MCMC method under identical scenarios. After demonstrating the relevance of the EnKF for parameter estimation, an iterative EnKF is introduced and validated for the estimation of a spatially varying field of Manning's n coefficients. A Karhunen-Loève (KL) expansion is also tested for dimensionality reduction and conditioning of the parameter search space. To further enhance the performance of PC-MCMC for estimating spatially varying parameters, a coordinate transformation of a Gaussian process with a parameterized prior covariance function is next incorporated into the Bayesian inference framework to account for the uncertainty in covariance model hyperparameters. The second part of the thesis focuses on the use of UQ and DA on adaptive mesh models. We developed new approaches combining the EnKF with multiresolution analysis, and demonstrated a significant reduction in the cost of data assimilation compared to the traditional EnKF implemented on a non-adaptive mesh.
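For readers unfamiliar with the ensemble Kalman filter used above, the analysis (update) step of a stochastic EnKF with perturbed observations can be sketched in a few lines of NumPy. This is a generic textbook-style illustration, not the iterative EnKF or the ADCIRC coupling developed in the thesis; the function signature, the linear observation operator H, and the toy two-parameter example are assumptions.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng=np.random.default_rng(0)):
    """Stochastic EnKF analysis step with perturbed observations.

    X : (n, Ne) ensemble of state/parameter vectors (columns are members)
    y : (m,)    observation vector
    H : (m, n)  linear observation operator
    R : (m, m)  observation-error covariance
    Returns the updated (n, Ne) ensemble.
    """
    n, Ne = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    HA = H @ A
    Pyy = (HA @ HA.T) / (Ne - 1) + R                 # innovation covariance (sample)
    Pxy = (A @ HA.T) / (Ne - 1)                      # state-observation covariance
    K = Pxy @ np.linalg.solve(Pyy, np.eye(len(y)))   # Kalman gain
    # Perturb the observations so the updated ensemble keeps the right spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, Ne).T
    return X + K @ (Y - H @ X)

# Toy usage: estimate a 2-vector of parameters from one noisy linear observation
rng = np.random.default_rng(1)
Ne, truth = 50, np.array([1.0, 0.3])
H = np.array([[1.0, 2.0]])
R = np.array([[0.01]])
X = rng.normal([0.5, 0.5], 0.5, size=(Ne, 2)).T      # prior ensemble
y = H @ truth + rng.normal(0, 0.1, 1)
Xa = enkf_analysis(X, y, H, R)
print("posterior mean:", Xa.mean(axis=1))
```

The appeal for large coastal ocean models is visible even in this sketch: only an ensemble of model runs and a few matrix products are needed, with no derivatives of the forward model.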
|
8 |
A dimensionally split Cartesian cut cell method for Computational Fluid Dynamics / Gokhale, Nandan Bhushan / January 2019
We present a novel dimensionally split Cartesian cut cell method to compute inviscid, viscous and turbulent flows around rigid geometries. On a cut cell mesh, the existence of arbitrarily small boundary cells severely restricts the stable time step for an explicit numerical scheme. We solve this 'small cell problem' when computing solutions for hyperbolic conservation laws by combining wave speed and geometric information to develop a novel stabilised cut cell flux. The convergence and stability of the developed technique are proved for the one-dimensional linear advection equation, while its multi-dimensional numerical performance is investigated through the computation of solutions to a number of test problems for the linear advection and Euler equations. This work was recently published in the Journal of Computational Physics (Gokhale et al., 2018). Subsequently, we develop the method further to be able to compute solutions for the compressible Navier-Stokes equations. The method is globally second order accurate in the L1 norm, fully conservative, and allows the use of time steps determined by the regular grid spacing. We provide a full description of the three-dimensional implementation of the method and evaluate its numerical performance by computing solutions to a wide range of test problems ranging from the nearly incompressible to the highly compressible flow regimes. This work was recently published in the Journal of Computational Physics (Gokhale et al., 2018). It is the first presentation of a dimensionally split cut cell method for the compressible Navier-Stokes equations in the literature. Finally, we also present an extension of the cut cell method to solve high Reynolds number turbulent automotive flows using a wall-modelled Large Eddy Simulation (WMLES) approach. A full description is provided of the coupling between the (implicit) LES solution and an equilibrium wall function on the cut cell mesh. The combined methodology is used to compute results for the turbulent flow over a square cylinder, and for flow over the SAE Notchback and DrivAer reference automotive geometries. We intend to publish the promising results as part of a future publication, which would be the first assessment of a WMLES Cartesian cut cell approach for computing automotive flows to be presented in the literature.
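The 'small cell problem' mentioned above is easy to see from the explicit stability condition: the global time step is limited by the worst cell in the mesh. The minimal illustration below is ours, with an assumed function name and numbers; it shows the restriction only, not the stabilised cut cell flux developed in the thesis that removes it.

```python
import numpy as np

def stable_dt(cell_volumes, face_area, wave_speed, cfl=0.9):
    """Largest stable explicit time step on a cut cell mesh for linear
    advection: dt <= cfl * V_i / (a * A_i) must hold for every cell i."""
    return cfl * np.min(cell_volumes / (wave_speed * face_area))

# A regular 1-D mesh with spacing dx, except one cut cell of volume alpha*dx
dx, a, alpha = 0.01, 1.0, 1e-3
volumes = np.full(100, dx)
volumes[42] = alpha * dx                      # the "small cell"
print(stable_dt(volumes, 1.0, a))             # ~9e-6, throttled by the cut cell
print(stable_dt(np.full(100, dx), 1.0, a))    # ~9e-3 on the regular mesh
```

Without special treatment, the single tiny cell forces a time step three orders of magnitude smaller than the regular grid spacing allows, which is precisely the restriction the stabilised flux is designed to lift.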
|
9 |
Numerical simulations of instabilities in general relativity / Kunesch, Markus / January 2018
General relativity, one of the pillars of our understanding of the universe, has been a remarkably successful theory. It has stood the test of time for more than 100 years and has passed all experimental tests so far. Most recently, the LIGO collaboration made the first-ever direct detection of gravitational waves, confirming a long-standing prediction of general relativity. Despite this, several fundamental mathematical questions remain unanswered, many of which relate to the global existence and the stability of solutions to Einstein's equations. This thesis presents our efforts to use numerical relativity to investigate some of these questions. We present a complete picture of the end points of black ring instabilities in five dimensions. Fat rings collapse to Myers-Perry black holes. For intermediate rings, we discover a previously unknown instability that stretches the ring without changing its thickness and causes it to collapse to a Myers-Perry black hole. Most importantly, however, we find that for very thin rings, the Gregory-Laflamme instability dominates and causes the ring to break. This provides the first concrete evidence that in higher dimensions, the weak cosmic censorship conjecture may be violated even in asymptotically flat spacetimes. For Myers-Perry black holes, we investigate instabilities in five and six dimensions. In six dimensions, we demonstrate that both axisymmetric and non-axisymmetric instabilities can cause the black hole to pinch off, and we study the approach to the naked singularity in detail. Another question that has attracted intense interest recently is the instability of anti-de Sitter space. In this thesis, we explore how breaking spherical symmetry in gravitational collapse in anti-de Sitter space affects black hole formation. These findings were made possible by our new open source general relativity code, GRChombo, whose adaptive mesh capabilities allow accurate simulations of phenomena in which new length scales are produced dynamically. In this thesis, we describe GRChombo in detail, and analyse its performance on the latest supercomputers. Furthermore, we outline numerical advances that were necessary for simulating higher dimensional black holes stably and efficiently.
|