31

Real-Time Particle Systems in the Blender Game Engine

Unknown Date (has links)
Advances in computational power have led to many developments in science and entertainment. Powerful simulations which required expensive supercomputers can now be carried out on a consumer personal computer, and many children and young adults spend countless hours playing sophisticated computer games. The focus of this research is the development of tools which can help bring the entertaining and appealing traits of video games to scientific education. Video game developers use many tools and programming languages to build their games, for example the Blender 3D content creation suite. Blender includes a Game Engine that can be used to design and develop sophisticated interactive experiences. One important tool in computer graphics and animation is the particle system, which makes simulated effects such as fire, smoke and fluids possible. The particle system available in Blender is unfortunately not available in the Blender Game Engine because it is not fast enough to run in real-time. One of the main factors contributing to the rise in computational power and the increasing sophistication of video games is the Graphics Processing Unit (GPU). Many consumer personal computers are equipped with powerful GPUs which can be harnessed for general-purpose computation. This thesis presents a particle system library that is accelerated by the GPU using the OpenCL programming language. The library is integrated into the Blender Game Engine, providing an interactive platform for exploring fluid dynamics and creating video games with realistic water effects. The primary system implemented in this research is a fluid simulator using the Smoothed Particle Hydrodynamics (SPH) technique for simulating incompressible fluids such as water. The library created for this thesis can simulate water using SPH at 40 fps with upwards of 100,000 particles on an NVIDIA GTX480 GPU. The fluid system has interactive features such as object collision and the ability to add and remove particles dynamically. These features as well as physical properties of the simulation can be controlled intuitively from the user interface of Blender. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2011. / August 24, 2011. / Game Design, GPU, OpenCL, SPH / Includes bibliographical references. / Gordon Erlebacher, Professor Directing Thesis; Tomasz Plewa, Committee Member; Anter El-Azab, Committee Member.
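
As an illustration of the SPH density/pressure step the abstract refers to, here is a minimal NumPy sketch; it is not the thesis's OpenCL library, and the poly6 kernel, equation of state, and all parameter values are assumptions chosen for readability:

    import numpy as np

    def sph_density_pressure(pos, mass, h, rho0=1000.0, k=3.0):
        # Poly6-kernel density and a simple linear equation of state.
        # O(N^2) reference version; a real-time GPU code would use a spatial hash.
        r2 = np.sum((pos[:, None, :] - pos[None, :, :])**2, axis=-1)
        poly6 = 315.0 / (64.0 * np.pi * h**9)
        w = np.where(r2 < h * h, poly6 * (h * h - r2)**3, 0.0)
        rho = mass * w.sum(axis=1)      # density at each particle
        p = k * (rho - rho0)            # pressure from density deviation
        return rho, p

    rng = np.random.default_rng(0)
    pos = rng.random((500, 3)) * 0.1    # 500 particles in a 10 cm box
    rho, p = sph_density_pressure(pos, mass=0.02, h=0.02)
    print(rho.mean(), p.mean())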
32

Construction of Delaunay Triangulations on the Sphere: A Parallel Approach

Unknown Date (has links)
This thesis explores possible improvements in the construction of Delaunay triangulations on the sphere by designing and implementing a parallel alternative to the software package STRIPACK. First, it gives an introduction to Delaunay triangulations on the plane and presents current methods available for their construction. Then, these concepts are mapped to the spherical case: the Spherical Delaunay Triangulation (SDT). To provide a better understanding of the design choices, this document includes a brief overview of parallel programming, followed by the details of the implementation of the SDT generation code. In addition, it provides examples of resulting SDTs as well as benchmarks to analyze its performance. This project was inspired by the concepts presented in Robert Renka's work and was implemented in C++ using MPI. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Spring Semester, 2011. / April 1, 2011. / Delaunay Triangulation, Spherical Delaunay Triangulation, Parallel Programming, Software Package / Includes bibliographical references. / Max Gunzburger, Professor Directing Thesis; Anke Meyer-Baese, Committee Member; Janet Peterson, Committee Member; Jim Wilgenbusch, Committee Member.
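
A useful property for readers unfamiliar with the spherical case: for points on the unit sphere, the facets of their 3D convex hull are exactly the spherical Delaunay triangles. A minimal serial sketch of that geometry using SciPy (the thesis's code is a parallel C++/MPI implementation; this is only an illustration):

    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(1)
    xyz = rng.normal(size=(1000, 3))
    xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)  # project onto the unit sphere

    # For points on a sphere, the convex hull facets are the SDT triangles.
    hull = ConvexHull(xyz)
    print(hull.simplices.shape)  # (n_triangles, 3); roughly 2n - 4 triangles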
33

Barrier Island Responses to Storms and Sea-Level Rise: Numerical Modeling and Uncertainty Analysis

Unknown Date (has links)
In response to a potentially increasing rate of sea-level rise, planners and engineers are making accommodations in their management plans for the protection of coastal infrastructure and natural resources. Dunes and barrier islands are important for coastal protection and restoration because they absorb storm energy and play an essential role in sediment transport. Most traditional coastal models do not simulate the joint evolution of dunes and barrier islands and do not explicitly address sea-level rise. A new model was developed in this study that represents basic barrier island processes under sea-level rise and links the dynamics of the different components of barrier islands. The model was used to evaluate near-future (100 years) responses of a semi-synthetic island, with the characteristics of Santa Rosa Island of Florida, USA, to five rates of sea-level rise. The new model is capable of providing considerable practical information about the effects of different sea-level rise scenarios on the test island. The modeling results show that different areas and components of the island respond differently to sea-level rise. Depending on the rate of sea-level rise and the overwash sediment supply, the evolution of dunes and barrier islands is important to habitat suitable for coastal birds or to backbarrier salt marshes. The modeling results are inherently uncertain due to unknown storm variability and sea-level rise scenarios. The storm uncertainty, characterized as parametric uncertainty, and its propagation to the modeling results were assessed using the Monte Carlo (MC) method for the synthetic barrier island. A total of 1000 realizations of storm magnitude, frequency, and track through a barrier island were generated and used for the MC simulation. To address the scenario uncertainty, five sea-level rise scenarios were considered, using the current rate and four additional rates that lead to sea-level rise of 0.5 m, 1.0 m, 1.5 m, and 2.0 m over the next 100 years. Parametric uncertainty in the simulated beach dune heights and backshore positions was assessed for the individual scenarios. For a given scenario, the parametric uncertainty grows as time increases. Across sea-level rise scenarios, the parametric uncertainty differs, being larger for more severe sea-level rise. The method of scenario averaging was used to quantify the scenario uncertainty; the scenario-averaged results lie between the results of the smallest and largest sea-level rise scenarios. The results of the uncertainty analysis provide guidelines for coastal management and the protection of coastal ecology. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2011. / November 3, 2011. / barrier island, morphology, sea-level rise, storm / Includes bibliographical references. / Ming Ye, Professor Directing Thesis; Dennis Slice, Committee Member; Tomasz Plewa, Committee Member.
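
The Monte Carlo uncertainty propagation described above can be sketched in a few lines; the storm statistics, recovery rate, and dune response below are hypothetical stand-ins, not the thesis's calibrated model:

    import numpy as np

    rng = np.random.default_rng(2)
    n_real, years = 1000, 100
    slr_scenarios = {"current": 0.2, "0.5m": 0.5, "1.0m": 1.0, "1.5m": 1.5, "2.0m": 2.0}

    def dune_height(slr_total, rng):
        # Toy surrogate: dune eroded by random storms, rebuilt between them.
        h = 3.0                                  # initial dune height (m)
        for _ in range(years):
            storms = rng.poisson(0.5)            # hypothetical storm frequency
            erosion = rng.gamma(2.0, 0.1, storms).sum()
            h = max(h - erosion + 0.05 - 0.3 * slr_total / years, 0.0)
        return h

    results = {}
    for name, slr in slr_scenarios.items():
        final = np.array([dune_height(slr, rng) for _ in range(n_real)])
        results[name] = final
        print(name, "year-100 dune height: %.2f +/- %.2f m" % (final.mean(), final.std()))

    # equal-weight scenario averaging across the five scenarios
    print("scenario average: %.2f m" % np.mean([f.mean() for f in results.values()]))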
34

Monte Carlo Simulation of Phonon Transport in Uranium Dioxide

Unknown Date (has links)
Heat is transferred in crystalline semiconductor materials via lattice vibrations. Lattice vibrations are treated with a wave-particle duality, just as photons are quantum mechanical representations of electromagnetic waves. The quanta of energy of these lattice waves are called phonons. The Boltzmann Transport Equation (BTE) has proved to be a powerful tool in modeling phonon heat conduction in crystalline solids. The BTE tracks the phonon number density function as it evolves according to the drift of all phonons and to the phonon-phonon interactions (or collisions). Unlike Fourier's law, which is limited to describing diffusive energy transport, the BTE can accurately predict energy transport in both the ballistic (virtually no collisions) and diffusive regimes. Motivated by the need to understand thermal transport in irradiated uranium dioxide at the mesoscale, this work investigates phonon transport in UO2 using Monte Carlo simulation. The simulation scheme aims to solve the Boltzmann transport equation for phonons within a relaxation time approximation. In this approximation the Boltzmann transport equation is simplified by assigning time scales to each scattering mechanism associated with phonon interactions. The Monte Carlo method is first benchmarked by comparing to similar models for silicon. Unlike most previous works solving this equation by the Monte Carlo method, the momentum and energy conservation laws for phonon-phonon interactions in UO2 are treated exactly; in doing so, the magnitudes of possible wave vectors and the frequency space are discretized, and a numerical routine is implemented which considers all possible phonon-phonon interactions and chooses those interactions which obey the conservation laws. The simulation scheme accounts for the acoustic and optical branches of the dispersion relationships of UO2. The six lowest-energy branches in the [001] direction are tracked within the Monte Carlo simulation. Because of their predicted low group velocities, the three remaining high-energy branches are simply treated as a reservoir of phonons at constant energy in k-space. These phonons contribute to the thermal conductivity only by scattering with the six lower-energy branches and not by their group velocities. Using periodic boundary conditions, this work presents results illustrating the diffusion limit of phonon transport in UO2 single crystals, and computes the thermal conductivity of the material in the diffusion limit based on the detailed phonon dynamics. The temperature effect on conductivity is predicted and the results are compared with experimental data available in the literature. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2011. / November 7, 2011. / Boltzmann, Carlo, Monte, Phonon, Thermal, Transport / Includes bibliographical references. / Anter El-Azab, Professor Directing Thesis; Tomasz Plewa, Committee Member; Xiaoqiang Wang, Committee Member.
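
The relaxation time approximation lends itself to a compact Monte Carlo loop: phonons drift ballistically, then scatter with probability 1 - exp(-dt/tau). The sketch below is a gray (single-branch) toy in 1D, not the six-branch UO2 model with exact conservation treated in the thesis, and all parameter values are assumptions:

    import numpy as np

    rng = np.random.default_rng(3)
    n, dt, steps = 20000, 1e-12, 200   # phonons, 1 ps time step
    v_g = 3000.0                       # hypothetical group velocity (m/s)
    tau = 10e-12                       # hypothetical relaxation time (s)
    L = 1e-6                           # periodic box length (m)

    x = rng.random(n) * L
    mu = rng.uniform(-1.0, 1.0, n)     # isotropic direction cosines

    p_scatter = 1.0 - np.exp(-dt / tau)
    for _ in range(steps):
        x = (x + v_g * mu * dt) % L                  # ballistic drift, periodic box
        hit = rng.random(n) < p_scatter              # relaxation-time scattering
        mu[hit] = rng.uniform(-1.0, 1.0, hit.sum())  # resample directions

    # In the diffusion limit, kinetic theory gives k = (1/3) C v_g^2 tau.
    print("mean free path (m):", v_g * tau)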
35

Numerical Methods for Deterministic and Stochastic Nonlocal Problems in Diffusion and Mechanics

Unknown Date (has links)
In this dissertation, the recently developed peridynamic nonlocal continuum model for solid mechanics is extensively studied; specifically, numerical methods for the deterministic and stochastic steady-state peridynamics models. In contrast to classical partial differential equation models, the peridynamic model is an integro-differential equation that does not involve spatial derivatives of the displacement field. As a result, the peridynamic model admits solutions having jump discontinuities, so it has been successfully applied to fracture problems. This dissertation consists of three major parts. The first part focuses on the one-dimensional steady-state peridynamics model. Based on a variational formulation, continuous and discontinuous Galerkin finite element methods are developed for the peridynamic model. Optimal convergence rates for different continuous and discontinuous manufactured solutions are obtained. A strategy for identifying the discontinuities of the solution is developed and implemented. The convergence of the peridynamics model to the classical elasticity model is studied. Some relevant nonlocal problems are also considered. In the second part, we focus on the two-dimensional steady-state peridynamics model. Based on the numerical strategies and results from the one-dimensional peridynamics model, we develop and implement the corresponding approaches for the two-dimensional case. Optimal convergence rates for different continuous and discontinuous manufactured solutions are obtained. In the third part, we study the stochastic peridynamics model. We focus on a version of the peridynamics model whose forcing terms are described by a finite-dimensional random vector, which is often called the finite-dimensional noise assumption. Monte Carlo methods and stochastic collocation methods with full tensor product and sparse grids based on this stochastic peridynamics model are implemented and compared. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester, 2012. / June 22, 2012. / DISCONTINUOUS GALERKIN METHODS, FINITE ELEMENT METHODS, INTEGRAL DIFFERENTIAL EQUATIONS, NONLOCAL DIFFUSION PROBLEM, PERIDYNAMICS, STOCHASTIC / Includes bibliographical references. / Max Gunzburger, Professor Directing Dissertation; Xiaoming Wang, University Representative; Janet Peterson, Committee Member; Xiaoqiang Wang, Committee Member; Ming Ye, Committee Member; John Burkardt, Committee Member.
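
For orientation, the 1D steady-state bond-based peridynamic equation (the integral over a horizon delta of c(u(x') - u(x)) dx' equals -b(x)) can be discretized by simple collocation. The sketch below uses a constant micromodulus, Riemann quadrature, and clamped nonlocal boundary layers; the dissertation itself uses continuous/discontinuous Galerkin finite elements, so this is only a minimal illustration under those stated assumptions:

    import numpy as np

    n, delta, L = 200, 0.05, 1.0
    h = L / n
    x = (np.arange(n) + 0.5) * h
    c = 1.0                                  # hypothetical constant micromodulus
    m = int(round(delta / h))                # nodes inside the horizon

    A = np.zeros((n, n))
    for i in range(n):
        for j in range(max(0, i - m), min(n, i + m + 1)):
            if i != j:
                A[i, j] += c * h             # quadrature weight on u(x_j)
                A[i, i] -= c * h             # and on -u(x_i)

    b = -np.ones(n)                          # uniform body force
    fixed = (x < delta) | (x > L - delta)    # nonlocal "boundary" layers
    A[fixed] = 0.0
    A[fixed, np.where(fixed)[0]] = 1.0       # u = 0 on the clamped layers
    b[fixed] = 0.0

    u = np.linalg.solve(A, b)
    print("max displacement:", u.max())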
36

Irradiation-Induced Composition Patterns and Segregation in Binary Solid Solutions

Unknown Date (has links)
A theoretical-computational model is developed to study irradiation-induced composition patterns and segregation in binary solid solutions under irradiation, motivated by the fact that such composition changes alter a wide range of metallurgical properties of structural alloys used in the nuclear industry. For a binary alloy system, the model is based on a coupled, nonlinear set of reaction-diffusion equations for six defect and atomic species, which include vacancies, three interstitial dumbbell configurations, and the two alloy elements. Two sets of boundary conditions have been considered: periodic boundary conditions, which are used to investigate composition patterning in bulk alloys under irradiation, and reaction boundary conditions to study radiation-induced segregation at surfaces. Reactions are considered to be either between defects, which is called recombination, or between defects and alloying elements, which results in a change in the interstitial dumbbell type. Long-range diffusion of all species is considered to happen by vacancy and interstitialcy mechanisms. As such, diffusion of the alloy elements is coupled to the diffusion of vacancies and interstitials. Defect generation is considered to be associated with collision cascade events that occur randomly in space and time. Each event brings about a change in the local concentration of all the species over the mesoscale material volume affected by the cascade. The stiffly-stable Gear's method has been implemented to solve the reaction-diffusion model numerically. Gear's method is a variant of the higher-order implicit linear multi-step methods, implemented in predictor-corrector fashion. The resulting model has been tested with a miscible CuAu solid solution. For this alloy, and in the absence of boundaries, steady-state composition patterns of several nanometers have been observed. Fourier-space properties of these patterns have been found to depend on irradiation-specific control parameters, temperature, and the initial state of the alloy. Linear stability analysis of the set of reaction-diffusion equations confirms the findings of the numerical simulations. In the presence of boundaries, radiation-induced segregation of alloying species has been observed in the boundary layer: enrichment of the faster-diffusing species and depletion of the slower-diffusing species. Radiation-induced segregation has also been found to depend upon irradiation-specific control parameters and temperature. The results show that the degree of segregation is spatially non-uniform and hence should be studied in higher dimensions. Proper formulation of the boundary conditions showed that segregation of the alloy elements to the boundary is coupled to the boundary motion. In both the patterning and segregation investigations, the irradiated sample has been found to recover its uniform state with time when irradiation is turned off. The inference drawn from this observation is that in miscible solid solutions irradiation-induced composition patterning and radiation-induced segregation are not realizable in the absence of irradiation. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester, 2012. / June 22, 2012. / Binary alloys, Composition patterning, Irradiation, Reaction-Diffusion, Segregation, Stiffness / Includes bibliographical references.
/ Anter El Azab, Professor Directing Thesis; Per Arne Rikvold, University Representative; Sachin Shanbhag, Committee Member; Gordon Erlebacher, Committee Member; Tomasz Plewa, Committee Member.
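
The stiff reaction-diffusion structure described above is why a Gear-type (BDF) integrator is needed. Below is a minimal method-of-lines sketch with two species (vacancies and interstitials), recombination, and a uniform source, integrated with SciPy's BDF solver; the coefficient values are hypothetical, and the actual model has six species and stochastic cascade source terms:

    import numpy as np
    from scipy.integrate import solve_ivp

    n, h = 100, 1e-8                              # grid points, spacing (m)
    Dv, Di, krec, G = 1e-14, 1e-12, 1e6, 1e-3     # hypothetical coefficients

    def rhs(t, y):
        cv, ci = y[:n], y[n:]
        lap = lambda c: (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / h**2  # periodic
        dcv = Dv * lap(cv) + G - krec * cv * ci   # vacancies
        dci = Di * lap(ci) + G - krec * cv * ci   # interstitials
        return np.concatenate([dcv, dci])

    y0 = np.full(2 * n, 1e-6)
    sol = solve_ivp(rhs, (0.0, 1.0), y0, method="BDF", rtol=1e-6)  # BDF = Gear family
    print("mean defect concentration:", sol.y[:, -1].mean())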
37

Generalized Procrustes Surface Analysis: A Landmark-Free Approach to Superimposition and Shape Analysis

Unknown Date (has links)
The tools and techniques used in shape analysis have constantly evolved, but their objective remains fixed: to quantify the differences in shape between two objects in a consistent and meaningful manner. The hand-measurements of calipers and protractors of the past have yielded to laser scanners and landmark-placement software, but the process still involves transforming an object's physical shape into a concise set of numerical data that can be readily analyzed by mathematical means [Rohlf 1993]. In this paper, we present a new method to perform this transformation by taking full advantage of today's high-power computers and high-resolution scanning technology. This method uses surface scans to calculate a shape-difference metric and perform superimposition rather than relying on carefully (and tediously) placed manual landmarks. This is accomplished by building upon and extending the Iterative Closest Point algorithm. We also examine some new ways this data may be used; we can, for example, calculate an averaged surface directly and visualize point-wise shape information over this surface. Finally, we demonstrate the use of this method on a set of primate skulls and compare the results of the new methodology with traditional geometric morphometric analysis. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2013. / October 11, 2013. / GPSA, Heat Maps, ICP, Morphometrics, Procrustes / Includes bibliographical references. / Dennis Slice, Professor Directing Thesis; Peter Beerli, Committee Member; Sachin Shanbhag, Committee Member.
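
The core of the landmark-free approach is the ICP loop: find closest points on the target surface, solve the Procrustes (least-squares rigid) alignment by SVD, repeat. A minimal sketch under those assumptions, with random point clouds standing in for surface scans (the thesis builds upon and extends ICP beyond this):

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid(P, Q):
        # Least-squares rotation/translation mapping P onto Q (Kabsch/Procrustes).
        mp, mq = P.mean(0), Q.mean(0)
        U, _, Vt = np.linalg.svd((P - mp).T @ (Q - mq))
        if np.linalg.det(U @ Vt) < 0:   # guard against reflections
            Vt[-1] *= -1
        R = (U @ Vt).T
        return R, mq - R @ mp

    def icp(src, dst, iters=30):
        tree, cur = cKDTree(dst), src.copy()
        for _ in range(iters):
            _, idx = tree.query(cur)            # closest-point correspondence
            R, t = best_rigid(cur, dst[idx])
            cur = cur @ R.T + t                 # superimpose
        return cur

    rng = np.random.default_rng(4)
    dst = rng.random((2000, 3))
    a = 0.3
    Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
    src = dst @ Rz.T + 0.05                     # rotated, shifted copy
    print("residual:", np.abs(icp(src, dst) - dst).mean())  # should approach 0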
38

The Solution of a Burgers' Equation Inverse Problem with Reduced-Order Modeling Proper Orthogonal Decomposition

Unknown Date (has links)
This thesis presents and evaluates methods for solving the 1D viscous Burgers' partial differential equation with finite difference, finite element, and proper orthogonal decomposition (POD) methods in the context of an optimal control inverse problem. Based on downstream observations, the initial conditions that optimize a lack-of-fit cost functional are reconstructed for a variety of different Reynolds numbers. For moderate Reynolds numbers, our POD method proves to be not only fast and accurate but also demonstrates a regularizing effect on the inverse problem. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Summer Semester, 2009. / May 20, 2009. / Reduced Order Modeling, Proper Orthogonal Decomposition, Inverse Problem, Partial Differential Equations, pde, Optimization, Optimal Control, Fluid Dynamics, Finite Difference, Finite Element / Includes bibliographical references. / Ionel M. Navon, Professor Directing Thesis; Max Gunzburger, Committee Member; Gordon Erlebacher, Committee Member.
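
A POD basis of the kind used here is obtained from the SVD of a snapshot matrix; reduced coordinates then live in a space of just a few modes. A minimal sketch with a synthetic travelling-wave stand-in for the Burgers' snapshots (the energy threshold and snapshot counts are arbitrary choices, not the thesis's settings):

    import numpy as np

    def pod_basis(snapshots, energy=0.999):
        # Columns of `snapshots` are solution states at successive times.
        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
        r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
        return mean, U[:, :r], r

    x = np.linspace(0, 1, 256)[:, None]
    t = np.linspace(0, 1, 80)[None, :]
    snaps = np.exp(-5 * t) / np.cosh(20 * (x - 0.2 - 0.5 * t))**2  # decaying wave
    mean, Phi, r = pod_basis(snaps)
    a = Phi.T @ (snaps - mean)          # reduced coordinates, shape (r, 80)
    err = np.abs(mean + Phi @ a - snaps).max()
    print("modes retained:", r, " reconstruction error:", err)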
39

Characterization of Metallocene-Catalyzed Polyethylenes from Rheological Measurements Using a Bayesian Formulation

Unknown Date (has links)
Long-chain branching strongly affects the rheological properties of polyethylenes. The branching structure - the density of branch points, the branch lengths, and the locations of the branches - is complicated; therefore, without a controlled branching structure it is almost impossible to study the effect of long-chain branching on rheological properties. Single-site catalysts now make it possible to prepare samples in which the molecular weight distribution is relatively narrow and quite reproducible. In addition, a particular type of single-site catalyst, the constrained geometry catalyst, makes it possible to introduce low and well-controlled levels of long-chain branching while keeping the molecular weight distribution narrow. Linear viscoelastic (LVE) properties contain a rich amount of data regarding the molecular structure of polymers. A computational algorithm that seeks to invert the linear viscoelastic spectrum of single-site metallocene-catalyzed polyethylenes is presented in this work. The algorithm uses a general linear rheological model of branched polymers as its underlying engine and is based on a Bayesian formulation that transforms the inverse problem into a sampling problem. Given experimental rheological data on unknown single-site metallocene-catalyzed polyethylenes, it is able to quantitatively describe the range of values of the weight-averaged molecular weight, Mw, and the average branching density, bm, consistent with the data. The algorithm uses a Markov-chain Monte Carlo method to simulate the sampling problem. If and when information about the molecular weight is available through supplementary experiments, such as chromatography or light scattering, it can easily be incorporated into the algorithm, as demonstrated. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Summer Semester, 2011. / June 3, 2011. / Bayesian, Polyethylenes, Metallocene / Includes bibliographical references. / Sachin Shanbhag, Professor Directing Thesis; Anter El-Azab, Committee Member; Peter Beerli, Committee Member.
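
The Bayesian inversion reduces to sampling a posterior over (Mw, bm) with a Markov-chain Monte Carlo random walk. The sketch below uses a deliberately toy forward model in place of the branched-polymer LVE engine, synthetic data, and flat box priors; every functional form and number here is an assumption for illustration only:

    import numpy as np

    rng = np.random.default_rng(5)

    def forward(Mw, bm, omega):
        # Toy stand-in for the LVE model: predicts a log-viscosity curve.
        return 3.4 * np.log(Mw) - np.log1p(1e-4 * omega * Mw * (1 + 5 * bm))

    omega = np.logspace(-2, 2, 20)
    data = forward(1.2e5, 0.3, omega) + rng.normal(0, 0.05, omega.size)

    def log_post(theta):
        Mw, bm = theta
        if not (1e4 < Mw < 1e6 and 0.0 <= bm < 2.0):  # flat priors on a box
            return -np.inf
        resid = data - forward(Mw, bm, omega)
        return -0.5 * np.sum(resid**2) / 0.05**2

    theta = np.array([2e5, 0.5])
    lp, chain = log_post(theta), []
    for _ in range(20000):                            # Metropolis random walk
        prop = theta + rng.normal(0.0, [2e3, 0.02])
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())

    chain = np.array(chain)[5000:]                    # discard burn-in
    print("Mw: %.3g +/- %.2g" % (chain[:, 0].mean(), chain[:, 0].std()))
    print("bm: %.3g +/- %.2g" % (chain[:, 1].mean(), chain[:, 1].std()))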
40

Edge-Weighted Centroidal Voronoi Tessellation Based Algorithms for Image Segmentation

Unknown Date (has links)
Centroidal Voronoi tessellations (CVTs) are special Voronoi tessellations whose generators are also the centers of mass (centroids) of the Voronoi regions with respect to a given density function. CVT-based algorithms have proved very useful in the context of image processing. However, when dealing with image segmentation problems, classic CVT algorithms are sensitive to noise. In order to overcome this limitation, we develop an edge-weighted centroidal Voronoi tessellation (EWCVT) model by introducing a new energy term related to the boundary length, which is called the "edge energy". The incorporation of the edge energy is equivalent to adding a certain form of compactness constraint in the physical space. With this compactness constraint, we can effectively control the smoothness of the clusters' boundaries. We provide numerical examples to demonstrate the effectiveness, efficiency, flexibility and robustness of EWCVT. Because of its simplicity and flexibility, we can easily embed other mechanisms with EWCVT to tackle more sophisticated problems. Two models based on EWCVT are developed and discussed. The first one is the "local variation and edge-weighted centroidal Voronoi tessellation" (LVEWCVT) model, obtained by encoding the information of local variation of colors. For classic CVTs or their generalizations (like EWCVT), pixels inside a cluster share the same centroid, so the set of centroids can be viewed as a piecewise constant function over the computational domain, and the resulting segments have to be roughly uniform with respect to their corresponding centroids. Inspired by this observation, we propose to calculate the centroids for each pixel separately and locally. This scheme greatly improves the algorithm's tolerance of within-cluster feature variations. Through extensive numerical examples and quantitative evaluations, we demonstrate the excellent performance of the LVEWCVT method compared with several state-of-the-art algorithms. The LVEWCVT model is especially suitable for detection of inhomogeneous targets with distinct color distributions and textures. Based on EWCVT, we build another model for "superpixels", which is in fact a "regularization" of highly inhomogeneous images. We call our superpixel algorithm "VCells", an abbreviation of "Voronoi cells". For a wide range of images, VCells is capable of generating roughly uniform sub-regions while nicely preserving local image boundaries. The under-segmentation error is effectively limited in a controllable manner. Moreover, VCells is very efficient: the computational cost is roughly linear in image size with a small constant coefficient. For megapixel-sized images, VCells is able to generate very dense superpixels in a matter of seconds. We demonstrate that VCells outperforms several state-of-the-art algorithms through extensive qualitative and quantitative results on a wide range of complex images. Another important contribution of this work is the "Detecting-Segment-Breaking" (DSB) algorithm, which can be used to guarantee the spatial connectedness of the segments generated by CVT-based algorithms. Since the metric is usually defined on the color space, the segments produced by CVT-based algorithms are not necessarily spatially connected. For some applications this feature is useful and conceptually meaningful, e.g., when the foreground objects are not spatially connected; but for other applications, like the superpixel problem, this "good" feature becomes unacceptable.
By simple "extracting-connected-component" and "relabeling" schemes, DSB successfully overcomes the above difficulty. Moreover, the computational cost of DSB is roughly linear in image size with a small constant coefficient. From the theoretical perspective, the innovative idea of EWCVT greatly enriches the methodology of CVTs. (The idea of EWCVT has already been used for variational curve smoothing and reconstruction problems.) For applications, this work shows the great power of EWCVT for image segmentation related problems. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester, 2011. / June 24, 2011. / Image Segmentation, Centroidal Voronoi Tessellation, Clusters, Kmeans, Computer Vision, Superpixels, Inhomogeneity, Edge Detection, Active Contours / Includes bibliographical references. / Xiaoqiang Wang, Professor Directing Dissertation; Xiaoming Wang, University Representative; Max Gunzburger, Committee Member; Janet Peterson, Committee Member; Anter El-Azab, Committee Member.
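
An EWCVT-style energy can be minimized with a k-means-like sweep in which each pixel's assignment cost is its color distance to a cluster centroid plus a weighted count of disagreeing neighbors (a discrete proxy for boundary length). A minimal sketch, with the edge weight, cluster count, and 4-neighborhood all chosen arbitrarily rather than taken from the dissertation:

    import numpy as np

    def ewcvt_segment(img, k=4, lam=0.05, iters=10, seed=0):
        # k-means in color space plus an edge energy on 4-neighbor disagreements.
        rng = np.random.default_rng(seed)
        h, w, _ = img.shape
        labels = rng.integers(0, k, (h, w))
        for _ in range(iters):
            cents = np.array([img[labels == j].mean(axis=0) if np.any(labels == j)
                              else rng.random(3) for j in range(k)])
            d = ((img[..., None, :] - cents)**2).sum(-1)     # color cost, (h, w, k)
            for j in range(k):
                same = (labels == j).astype(float)
                nb = np.zeros((h, w))
                nb[1:] += same[:-1]; nb[:-1] += same[1:]
                nb[:, 1:] += same[:, :-1]; nb[:, :-1] += same[:, 1:]
                d[..., j] += lam * (4 - nb)                  # penalize boundary length
            labels = d.argmin(-1)
        return labels

    img = np.random.default_rng(6).random((64, 64, 3)) * 0.3
    img[16:48, 16:48] += 0.6                                 # noisy bright square
    print(np.bincount(ewcvt_segment(img).ravel(), minlength=4))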
