61.
Reduced Order Modeling Using the Wavelet-Galerkin Approximation of Differential Equations
Over the past few decades, increased interest in reduced order modeling has led to its application in areas such as real-time simulations and parameter studies, among many others. In the context of this work, reduced order modeling seeks to solve differential equations using substantially fewer degrees of freedom than a standard approach such as the finite element method. The finite element method is a Galerkin method which typically uses piecewise polynomial functions to approximate the solution of a differential equation. Wavelet functions have recently become a relevant topic in computational science due to their attractive properties, including differentiability and multi-resolution structure. This research combines a wavelet-Galerkin method with a reduced order approach to approximate the solution of a differential equation for a given set of parameters. This work focuses on showing that a reduced order approach in a wavelet-Galerkin setting is a viable option for determining a reduced order solution to a differential equation. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2013. / October 30, 2013. / Daubechies, Finite Element Method, Partial Differential Equation, Proper Orthogonal Decomposition, Reduced Order Modeling, Wavelet / Includes bibliographical references. / Janet Peterson, Professor Directing Thesis; Max Gunzburger, Committee Member; Ming Ye, Committee Member.
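The keywords above name proper orthogonal decomposition (POD), the standard tool for building the reduced basis in work of this kind. As a hedged sketch only (the dimensions, the random data, and the linear problem are all illustrative assumptions, not details from the thesis), a POD-Galerkin reduction of a linear system might look like:

```python
import numpy as np

# Illustrative POD-Galerkin reduction: build a reduced basis from
# solution snapshots, then project a linear system A u = f onto it.
rng = np.random.default_rng(0)
n, n_snap, r = 200, 20, 5                 # full dim, snapshot count, reduced dim

S = rng.standard_normal((n, n_snap))      # snapshot matrix (columns = solutions)
U, _, _ = np.linalg.svd(S, full_matrices=False)
Phi = U[:, :r]                            # POD basis: first r left singular vectors

A = np.eye(n) + 0.01 * rng.standard_normal((n, n))   # toy full-order operator
f = rng.standard_normal(n)

A_r = Phi.T @ A @ Phi                     # r x r reduced operator
f_r = Phi.T @ f
u_r = np.linalg.solve(A_r, f_r)           # solve in the reduced space
u_approx = Phi @ u_r                      # lift back to the full space

print(u_approx.shape)                     # (200,)
```

In practice the snapshots would come from full-order solves at sampled parameter values, and the same projection idea applies whether the full-order basis is piecewise polynomial or wavelet.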
62.
Toward Connecting Core-Collapse Supernova Theory with Observations
We study the evolution of the collapsing core of a 15 solar mass blue supergiant supernova progenitor from the moment shortly after core bounce until 1.5 seconds later. We present a sample of two- and three-dimensional hydrodynamic models parameterized to match the explosion energetics of supernova SN 1987A. We focus on the characteristics of the flow inside the gain region and the interplay between hydrodynamics, self-gravity, and neutrino heating, taking into account uncertainty in the nuclear equation of state. We characterize the evolution and structure of the flow behind the shock in terms of the accretion flow dynamics, shock perturbations, energy transport and neutrino heating effects, and convective and turbulent motions. We also analyze information provided by particle tracers embedded in the flow. Our models are computed with a high-resolution, finite volume, shock-capturing hydrodynamic code. The code includes source terms due to neutrino-matter interactions from a light-bulb neutrino scheme that prescribes the luminosities and energies of the neutrinos emerging from the core of the proto-neutron star. The proto-neutron star is excised from the computational domain, and its contraction is modeled by a time-dependent inner boundary condition. We find the spatial dimensionality of the models to be an important contributing factor in the explosion process. Compared to two-dimensional simulations, our three-dimensional models require lower neutrino luminosities to produce equally energetic explosions. We estimate that the convective engine in our models is 4% more efficient in three dimensions than in two dimensions. We propose that this is due to differences in the morphology of convection between two- and three-dimensional models.
Specifically, the greater efficiency of the convective engine found in three-dimensional simulations might be due to the larger surface-to-volume ratio of convective plumes, which aids in distributing energy deposited by neutrinos. We do not find evidence of the standing accretion shock instability in our models. Instead, we identify a relatively long phase of quasi-steady convection below the shock, driven by neutrino heating. During this phase, analysis of the energy transport in the post-shock region reveals characteristics closely resembling those of penetrative convection. We find that the flow structure grows from small scales and organizes into large convective plumes on the scale of the gain region. We use tracer particles to study the flow properties, and find substantial differences in the residency times of fluid elements in the gain region between two-dimensional and three-dimensional models. These differences appear to originate at the base of the gain region and are due to differences in the structure of convection. We also identify differences in how the energy of fluid elements evolves, how they are heated by neutrinos, and how they become gravitationally unbound. In particular, at the time when the explosion commences, we find that the unbound material has relatively long residency times in two-dimensional models, while in three dimensions a significant fraction of the explosion energy is carried by particles with relatively short residency times. We conduct a series of numerical experiments in which we methodically decrease the angular resolution in our three-dimensional models. We observe that the explosion energy decreases dramatically once the resolution is inadequate to capture the morphology of convection on large scales. Thus, we demonstrate that it is possible to connect successful, energetic three-dimensional models with unsuccessful three-dimensional models simply by decreasing numerical resolution, and thus the amount of resolved physics.
This example shows that the role of dimensionality is secondary to correctly accounting for the basic physics of the explosion. The relatively low spatial resolution of current three-dimensional models allows for only rudimentary insights into the role of turbulence in driving the explosion. However, and contrary to some recent reports, we do not find evidence for turbulence being a key factor in reviving the stalled supernova shock. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Spring Semester, 2014. / April 15, 2014. / Convection, Hydrodynamics, Instabilities, Shock Waves, Supernovae / Includes bibliographical references. / Tomasz Plewa, Professor Directing Dissertation; Mark Sussman, University Representative; Anke Meyer-Baese, Committee Member; Gordon Erlebacher, Committee Member; Ionel M. Navon, Committee Member.
63.
Binary White Dwarf Mergers: Weak Evidence for Prompt Detonations in High-Resolution Adaptive Mesh Simulations
The origins of thermonuclear supernovae remain poorly understood--a troubling fact, given their importance in astrophysics and cosmology. A leading theory posits that these events arise from the merger of white dwarfs in a close binary system. In this study we examine the possibility of prompt ignition, in which a runaway fusion reaction is initiated in the early stages of the merger. We present a set of three-dimensional white dwarf merger simulations performed with a high-resolution adaptive mesh refinement hydrocode. We consider three binary systems of different mass ratios composed of carbon/oxygen white dwarfs with total mass exceeding the Chandrasekhar mass. We additionally explore the effects of mesh resolution on important simulation parameters. We find that two distinct behaviors emerge depending on the progenitor mass ratio. For systems with components of differing masses, a boundary layer forms around the accretor. For systems of nearly equal mass, the merger product displays deep entrainment of each star into the other. We closely monitor thermonuclear burning that begins when sufficiently dense material is shocked during the early stages of the merger process. Analysis of ignition times leads us to conclude that for binary systems with components of unequal mass whose combined mass is close to the Chandrasekhar limit, there is a negligible chance of prompt ignition. Simulations of similar systems with a combined mass of 2 solar masses suggest that prompt ignition may be possible, but this requires further study at higher resolution. The system with components of nearly equal mass does not seem likely to undergo prompt ignition, and higher-resolution simulations are unlikely to change this conclusion. We additionally find that white dwarf merger simulations require high resolution.
Insufficient resolution can qualitatively change simulation outcomes, either by smoothing important fluctuations in density and temperature, or by altering the dynamics of the system such that additional physics processes, such as gravity, are incorrectly represented. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Spring Semester, 2014. / April 14, 2014. / Binaries: Close, Hydrodynamics: Instabilities, Stars: Accretion, White Dwarfs, Supernovae: General / Includes bibliographical references. / Tomasz Plewa, Professor Directing Thesis; Mark Sussman, Committee Member; Gordon Erlebacher, Committee Member.
64.
Bayesian Neural Networks in Data-Intensive High Energy Physics Applications
This dissertation studies a graphics processing unit (GPU) implementation of Bayesian neural networks (BNNs) using large training data sets. The goal is to create a program that maps phenomenological Minimal Supersymmetric Standard Model (pMSSM) parameters to their predictions. This would allow for a more robust method of studying the Minimal Supersymmetric Standard Model, which is of much interest at the CERN Large Hadron Collider (LHC) experiment. A systematic study of the speedup achieved by the GPU application compared to a central processing unit (CPU) implementation is presented. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Spring Semester, 2014. / April 1, 2014. / Bayesian Neural Networks, GPU, pMSSM, Scientific Computing / Includes bibliographical references. / Anke Meyer-Baese, Professor Directing Dissertation; Harrison Prosper, Professor Directing Dissertation; Jorge Piekarewicz, University Representative; Sachin Shanbhag, Committee Member; Peter Beerli, Committee Member.
65.
Improvements in Metadynamics Simulations: The Essential Energy Space Random Walk and the Wang-Landau Recursion
Metadynamics is a popular tool for exploring free energy landscapes, and it has been used to elucidate various chemical and biochemical processes. The height of the updating Gaussian function is very important for proper convergence to the target free energy surface. Both higher and lower Gaussian heights have advantages and disadvantages, so a balance is required. This thesis presents the implementation of the Wang-Landau recursion scheme in metadynamics simulations to adjust the height of the unit Gaussian function. Compared with classical fixed Gaussian heights, this dynamically adjustable method is demonstrated to yield better-converged free energy surfaces more efficiently. In addition, in combination with the realization of an energy space random walk, the Wang-Landau recursion scheme can readily be used to deal with the pseudoergodicity problem in molecular dynamics simulations. This thesis shows that the scheme efficiently and robustly obtains a biased free energy function. / A Thesis Submitted to the School of Computational Science in Partial Fulfillment of
the Requirements for the Degree of Master of Science. / Summer Semester, 2008. / June 20, 2008. / Essential Energy Space Random Walk, Metadynamics Simulations, Wang-Landau Method / Includes bibliographical references. / Wei Yang, Professor Directing Thesis; Gordon Erlebacher, Committee Member; Janet Peterson, Committee Member.
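The height-adjustment idea described above can be sketched in a toy one-dimensional setting. This is a hedged illustration only: the double-well potential, the bin layout, the Metropolis sampler, and the 80% flatness criterion are all assumptions of the sketch, not details from the thesis. Gaussians are deposited along a collective variable, and the deposition height is halved, Wang-Landau style, whenever the visit histogram becomes roughly flat.

```python
import numpy as np

# Toy metadynamics on a 1-D collective variable with a Wang-Landau-style
# recursion on the Gaussian deposition height (illustrative parameters).
rng = np.random.default_rng(1)

def potential(x):                  # toy double-well free energy landscape
    return (x**2 - 1.0)**2

edges = np.linspace(-2.0, 2.0, 41)
centers = 0.5 * (edges[:-1] + edges[1:])
bias = np.zeros_like(centers)      # accumulated metadynamics bias
hist = np.zeros_like(centers)      # visit histogram for the flatness check
height, width, beta = 0.5, 0.2, 2.0

x = 0.0
for step in range(20000):
    # Metropolis move on the biased potential
    x_new = x + rng.normal(scale=0.1)
    if -2.0 < x_new < 2.0:
        b = lambda y: bias[np.searchsorted(edges, y) - 1]
        dE = (potential(x_new) + b(x_new)) - (potential(x) + b(x))
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            x = x_new
    i = np.searchsorted(edges, x) - 1
    hist[i] += 1
    # deposit a Gaussian of the current height at the current position
    bias += height * np.exp(-(centers - x)**2 / (2 * width**2))
    # Wang-Landau recursion: halve the height once sampling is roughly flat
    if hist.min() > 0.8 * hist.mean():
        height *= 0.5
        hist[:] = 0

# the negative of the bias approximates the free energy up to a constant
print(bias.shape)
```

The point of the recursion is that early, tall Gaussians explore quickly, while later, shorter ones refine the estimate, which is the balance the abstract describes.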
66.
A GIS-Based Model for Estimating Nitrate Fate and Transport from Septic Systems in Surficial Aquifers
Estimating groundwater nitrate fate and transport is an important task in water resources and environmental management because excess nitrate loads may have negative impacts on human and environmental health. This work discusses the development of a simplified nitrate transport model and its implementation as a geographic information system (GIS)-based screening tool, whose purpose is to estimate nitrate loads to surface water bodies from onsite wastewater-treatment systems (OWTS). Key features of this project are the reduced data demands due to the use of a simplified model, as well as ease of use compared to traditional groundwater flow and transport models, achieved by embedding the model within a GIS. The conceptual model consists of a simplified groundwater flow model in the surficial aquifer and a simplified transport model that uses an analytical solution to the advection-dispersion equation to determine nitrate fate and transport. Denitrification is modeled using first-order decay in the analytical solution, with the decay constant obtained from the literature and/or site-specific data. The groundwater flow model uses readily available topographic data to approximate the hydraulic gradient, which is then used to calculate the seepage velocity magnitude and direction. The flow model is evaluated by comparing its results to a previous numerical modeling study of the U.S. Naval Air Station, Jacksonville (NAS) performed by the USGS. The results show that for areas in the vicinity of the NAS, the model is capable of predicting groundwater travel times from a source to a surface water body to within ±20 years of the USGS model, 75% of the time. The transport model uses an analytical solution based on the one by Domenico and Robbins (1985), the results of which are then further processed so that they may be applied to more general, real-world scenarios.
The solution, as well as the processing steps, is tested using artificially constructed scenarios, each meant to evaluate a certain aspect of the solution. For comparison purposes, each scenario is solved using a well-known numerical contaminant transport model. The results show that the analytical solution provides a reasonable approximation to the numerical result. However, it generally underestimates the concentration distribution to varying degrees depending on the choice of parameters, especially along the plume centerline. These results are in agreement with previous studies (Srinivasan et al., 2007; West et al., 2007). The adaptation of the analytical solution to more realistic scenarios results in an adequate approximation to the numerically calculated plume, except in areas near the advection front, where the model produces a plume whose shape differs noticeably from the numerical solution. Load calculations are carried out using a mass balance approach in which the system is considered to be at steady state. The steady-state condition allows for a load estimate obtained by subtracting the mass removal rate due to denitrification from the input mass rate. The input mass rate is calculated by taking into account advection and dispersion, while the mass removal rate due to denitrification is calculated from the definition of a first-order reaction. Comparison with the synthetic scenarios of the transport model shows that for the test cases, when decay rates are low, the model agrees well with the load calculation from the numerical model. As decay rates increase and the plume becomes shorter, the input load is overestimated by about 9% in the test cases, and the mass removed due to denitrification is underestimated by 30% in the worst case. These results are likely due to the underestimation of concentration values by the analytical solution of the transport model.
/ A Thesis Submitted to the Department of Scientific Computing in Partial Fulfillment of the Requirements for the Degree of Master of Science. / Fall Semester, 2010. / October 22, 2010. / Simplified model, Denitrification, OWTS, GIS, Septic tank, Contaminant transport, Nitrate contamination / Includes bibliographical references. / Ming Ye, Professor Directing Thesis; Janet Peterson, Committee Member; Sachin Shanbhag, Committee Member; James Wilgenbusch, Committee Member.
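The interplay the abstract describes between advection, dispersion, and first-order decay can be seen in a one-dimensional steady-state centerline expression of the kind used in Domenico-type solutions. This is a hedged sketch: the formula is the standard centerline decay term for this family of solutions, and every parameter value below is illustrative, not taken from the thesis.

```python
import math

# Steady-state centerline concentration of a plume undergoing advection,
# longitudinal dispersion, and first-order decay (e.g. denitrification).
# All parameter values are illustrative.

def centerline_conc(x, c0, v, alpha_x, lam):
    """Concentration at distance x downgradient of the source.

    c0      source concentration (mg/L)
    v       seepage velocity (m/d)
    alpha_x longitudinal dispersivity (m)
    lam     first-order decay constant (1/d)
    """
    return c0 * math.exp(
        x / (2 * alpha_x) * (1 - math.sqrt(1 + 4 * lam * alpha_x / v))
    )

c0, v, alpha_x, lam = 40.0, 0.1, 5.0, 0.008
for x in (0.0, 25.0, 50.0):
    print(x, round(centerline_conc(x, c0, v, alpha_x, lam), 2))
```

With lam = 0 the exponent vanishes and the centerline concentration stays at c0, which matches the observation in the abstract that disagreements with the numerical model grow as decay rates increase.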
67.
Flocking Implementation for the Blender Game Engine
In this thesis, we discuss the development of a new Boids system that simulates flocking behavior inside the Blender Game Engine and within the framework of the Real-Time Particles System (RTPS) library developed by Ian Johnson. The collective behavior of Boids is characterized as an emergent behavior caused by following three steering behaviors: separation, alignment, and cohesion. The implementation leverages OpenCL to maintain the portability of Blender across different graphics cards and operating systems. Benchmarks of the RTPS-FLOCK system show that our implementation speeds up Blender's original Boids implementation (which only runs outside the game engine) by more than an order of magnitude. We demonstrate our Boids system in three ways. First, we illustrate how the symmetry of the steering behavior is maintained in time. Second, we consider the behavior of a "swarm of bees" approaching their hive. And third, we simulate the motion of a "crowd" constrained to a two-dimensional plane. / A Thesis Submitted to the Department of Scientific Computing in Partial Fulfillment of the Requirements for the Degree of Master of Science. / Summer Semester, 2011. / June 24, 2011. / RTPS, Blender, boids, flocking / Includes bibliographical references. / Gordon Erlebacher, Professor Directing Thesis; Ming Ye, Committee Member; Xiaoqiang Wang, Committee Member.
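The three steering behaviors named above can be sketched compactly. This is a hedged, serial illustration of the classic Boids rules only; the neighbor radius, weights, and time step are assumptions of the sketch, and the thesis's RTPS/OpenCL implementation is a far more elaborate, parallel treatment.

```python
import numpy as np

# Minimal Boids update: separation, alignment, cohesion on a small flock.
rng = np.random.default_rng(2)
n, radius, dt = 50, 1.0, 0.05
pos = rng.uniform(-2, 2, (n, 3))        # agent positions
vel = rng.normal(0, 0.1, (n, 3))        # agent velocities

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(n):
        d = pos - pos[i]
        dist = np.linalg.norm(d, axis=1)
        near = (dist < radius) & (dist > 0)   # neighbors, excluding self
        if not near.any():
            continue
        cohesion = pos[near].mean(axis=0) - pos[i]        # steer toward local center
        alignment = vel[near].mean(axis=0) - vel[i]       # match neighbors' velocity
        separation = -(d[near] / dist[near, None]**2).sum(axis=0)  # push away, ~1/r
        new_vel[i] += dt * (1.0 * cohesion + 0.5 * alignment + 0.3 * separation)
    return pos + dt * new_vel, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)
print(pos.shape, vel.shape)
```

Each rule is a per-agent reduction over neighbors, which is why the behavior maps so naturally onto a data-parallel OpenCL kernel.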
68.
Supervised Aggregation of Classifiers Using Artificial Prediction Markets
Prediction markets have been demonstrated to be accurate predictors of the outcomes of future events. They have been successfully used to predict the outcomes of sporting events, political elections, and even business decisions. Their prediction accuracy has even outperformed that of other prediction methods such as polling. In an attempt to reproduce their predictive capability, a machine learning model of prediction markets is developed herein for classification. This model is a novel classifier aggregation technique that generalizes linear aggregation techniques. This prediction market aggregation technique is shown to outperform or match Random Forest on both artificial and real data sets. The notion of specialization is also developed and explored herein. This leads to a new kind of classifier referred to as a specialized classifier. These specialized classifiers are shown to improve the accuracy of prediction market aggregation, in some cases to perfection. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2009. / November 5, 2009. / Machine Learning, Aggregation, Random Forest / Includes bibliographical references. / Adrian Barbu, Professor Directing Thesis; Anke Meyer-Baese, Committee Member; Tomasz Plewa, Committee Member.
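The linear aggregation that the market model generalizes is simply a weighted average of the individual classifiers' class-probability estimates (a linear opinion pool). As a hedged sketch with made-up numbers, not code or data from the thesis:

```python
import numpy as np

# Linear opinion pool: the aggregate class probability is a weighted
# average of each classifier's probability estimate (uniform weights here).
rng = np.random.default_rng(3)
n_classifiers, n_classes = 4, 3

# each row: one classifier's probability estimate over the classes
probs = rng.dirichlet(np.ones(n_classes), size=n_classifiers)
weights = np.full(n_classifiers, 1.0 / n_classifiers)

aggregate = weights @ probs               # weighted average, still a distribution
prediction = int(np.argmax(aggregate))    # predicted class label
print(aggregate.round(3), prediction)
```

In the market framing, the weights correspond to participants' budgets; the generalization in the thesis lets participants bet nonlinearly on outcomes, and specialization lets a participant abstain outside its region of competence.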
69.
Effects of Vertical Mixing Closures on North Atlantic Overflow Simulations
We explore the effect of various vertical mixing closures on resolving the physical process known as overflow, in which cold, dense water flows out of an ocean basin. This process is responsible for the majority of the ocean's dense water transport, and it also creates many of the dense water currents that form part of what is known as the Ocean Conveyor Belt. Two of the main places this happens are in the North Atlantic, in the Denmark Strait and the Faroe Bank Channel. To simulate this process, two ocean models are used: the Parallel Ocean Program (POP) and the hybrid-coordinate Parallel Ocean Program (HyPOP). Using these models, differences are observed among three main vertical mixing schemes: constant, Richardson number, and KPP. Though not included in this thesis, the research also explores three different vertical gridding schemes: z-grid, sigma-coordinate, and isopycnal grids. The goal is to determine which combination gives the most acceptable results for resolving the overflow process. This is motivated by the large role this process plays in the ocean, as well as by the difficulty of modeling it. If an ocean model cannot accurately simulate overflow, then a large portion of the modeled ocean will be incorrect, and one cannot hope to get reasonable results from long simulations. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2009. / November 6, 2009. / Overflow, Ocean Modeling, Vertical Mixing, Viscosity, Diffusion / Includes bibliographical references. / Max Gunzburger, Professor Directing Thesis; Gordon Erlebacher, Committee Member; Janet Peterson, Committee Member.
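A Richardson-number closure of the kind compared above makes the vertical viscosity a decreasing function of the gradient Richardson number Ri = N²/(du/dz)², so that strongly sheared, weakly stratified water (as in an overflow plume) mixes vigorously while the stratified interior does not. The sketch below uses a Pacanowski-Philander-type form with illustrative coefficients; the specific scheme and constants used in the thesis are not given here, so treat everything numeric as an assumption.

```python
# Richardson-number-dependent vertical viscosity (illustrative coefficients).

def richardson_number(N2, dudz):
    """Ri = N^2 / (du/dz)^2: stratification vs. vertical shear."""
    return N2 / max(dudz**2, 1e-12)

def vertical_viscosity(Ri, nu0=1e-2, nu_b=1e-4, alpha=5.0):
    """Large viscosity for small/negative Ri, background value for large Ri."""
    if Ri < 0:                      # statically unstable: mix strongly
        return nu0
    return nu0 / (1.0 + alpha * Ri)**2 + nu_b

# overflow plume: strong shear, modest stratification -> low Ri -> strong mixing
print(vertical_viscosity(richardson_number(1e-5, 1e-2)))   # Ri = 0.1
# quiescent interior: high Ri -> viscosity collapses toward the background
print(vertical_viscosity(richardson_number(1e-4, 1e-3)))   # Ri = 100
```

The contrast between the two printed values is the whole point of the closure: the same formula delivers orders-of-magnitude different mixing in the plume and in the interior.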
70.
Parallel Grid Generation and Multi-Resolution Methods for Climate Modeling Applications
Spherical centroidal Voronoi tessellations (SCVTs) are used in many applications in a variety of fields, one being climate modeling. They are a natural choice for spatial discretizations on the surface of the Earth. New modeling techniques have recently been developed that allow the simulation of ocean and atmosphere dynamics on arbitrarily unstructured meshes, including SCVTs. Creating ultra-high-resolution SCVTs can be computationally expensive. A newly developed algorithm couples current algorithms for the generation of SCVTs with existing computational geometry techniques to enable the parallel computation of SCVTs and spherical Delaunay triangulations. Using this new algorithm, computing spherical Delaunay triangulations shows a speedup on the order of 4000 over other well-known algorithms when using 42 processors. As mentioned previously, newly developed numerical models allow the simulation of ocean and atmosphere systems on arbitrary Voronoi meshes, providing a multi-resolution modeling framework. A multi-resolution grid allows modelers to provide areas of interest with higher resolution in the hope of increasing accuracy. However, one method of providing higher resolution lowers the resolution in other areas of the mesh, which could potentially increase error. To determine the effect of multi-resolution meshes on numerical simulations in the shallow-water context, a standard set of shallow-water test cases is explored using the Model for Prediction Across Scales (MPAS), a new modeling framework jointly developed by the Los Alamos National Laboratory and the National Center for Atmospheric Research. An alternative approach to multi-resolution modeling is Adaptive Mesh Refinement (AMR). AMR typically uses information about the simulation to determine optimal locations for degrees of freedom; however, standard AMR techniques are not well suited for SCVT meshes.
In an effort to address this issue, a framework is developed to allow AMR simulations on SCVT meshes within MPAS. The research contained in this dissertation ties together a newly developed parallel SCVT generator with a numerical method for use on arbitrary Voronoi meshes. Simulations are performed within the shallow-water context. New algorithms and frameworks are described and benchmarked. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester, 2011. / June 14, 2011. / spherical centroidal voronoi tessellation, grid generation, high performance computing, spherical delaunay triangulation, adaptive mesh refinement, shallow-water equations, ocean modeling / Includes bibliographical references. / Max Gunzburger, Professor Directing Thesis; Doron Nof, University Representative; Janet Peterson, Committee Member; Gordon Erlebacher, Committee Member; Michael Navon, Committee Member; John Burkardt, Committee Member; Todd Ringler, Committee Member.
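The iteration underlying SCVT construction is a Lloyd-type fixed point: assign points on the sphere to their nearest generator, move each generator to the centroid of its cell, and project back to the sphere. The sketch below is a hedged, Monte Carlo serial version with uniform density; the parallel generator in the dissertation uses true Voronoi cells, a user-specified density function, and domain decomposition, none of which appear here.

```python
import numpy as np

# Monte Carlo Lloyd iteration on the unit sphere (uniform density).
rng = np.random.default_rng(4)

def random_sphere_points(m):
    p = rng.normal(size=(m, 3))
    return p / np.linalg.norm(p, axis=1, keepdims=True)

gens = random_sphere_points(12)          # generator points
samples = random_sphere_points(20000)    # Monte Carlo quadrature points

for _ in range(30):                      # Lloyd iterations
    # nearest generator: max dot product = min great-circle distance
    owner = np.argmax(samples @ gens.T, axis=1)
    for k in range(len(gens)):
        cell = samples[owner == k]
        if len(cell):
            c = cell.mean(axis=0)
            gens[k] = c / np.linalg.norm(c)   # project centroid to the sphere

print(np.allclose(np.linalg.norm(gens, axis=1), 1.0))
```

At convergence each generator coincides with the (constrained) centroid of its Voronoi cell, which is the defining property of an SCVT; the expensive step at scale is the nearest-generator assignment, which is what the parallel algorithm accelerates.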