41

Improving Inference in Population Genetics Using Statistics

Unknown Date (has links)
My studies at Florida State University focused on using computers and statistics to solve problems in population genetics. I have created models and algorithms that have the potential to improve the statistical analysis of population genetics. Population genetic data are often noisy and thus require the use of statistics in order to draw meaning from the data. This dissertation consists of three main projects. The first project involves the parallel evaluation of model inference on multi-locus data sets. Bayes factors are used for model selection, and we used thermodynamic integration to calculate these Bayes factors. To take advantage of parallel processing across a high-performance computer cluster, I developed a new method to split the Bayes factor calculation into independent units and combine them later. The next project, the Transition Probability Structured Coalescence [TSPC], involved the creation of a continuous approximation to the discrete migration process used in the structured coalescent that is commonly used to infer migration rates in biological populations. Previous methods required the simulation of these migration events, but there is little power to estimate the time and occurrence of these events; in my method, they are replaced with a one-dimensional numerical integration. The third project involved the development of a model for the inference of the time of speciation. Previous models used a fixed time to delineate a speciation event, treating speciation as a point process. Instead, this point process is replaced with a parameterized speciation model in which each lineage speciates according to a parameterized distribution. This is effectively a broader model that allows both very fast and very slow speciation, and it includes the previous model as a limiting case. These three projects, although rather independent of each other, improve the inference of population genetic models and thus allow better analyses of genetic data in fields such as phylogeography, conservation, and epidemiology. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Spring Semester, 2013. / March 26, 2013. / Includes bibliographical references. / Peter Beerli, Professor Directing Thesis; Anuj Srivastava, University Representative; Gordon Erlebacher, Committee Member; Alan Lemmon, Committee Member; Dennis Slice, Committee Member.
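
To make the parallelization idea concrete, here is a minimal sketch of thermodynamic integration for one model's log marginal likelihood, log Z = ∫₀¹ E_β[log L] dβ: each inverse temperature β is an independent unit of work (one MCMC run), and only scalar averages are combined by quadrature at the end; a log Bayes factor is then the difference of two such estimates. The toy Gaussian model, proposal scale, and temperature grid are illustrative assumptions, not the dissertation's population-genetic setup.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=1.0, size=50)  # toy observations (assumed model)

def log_likelihood(mu):
    # Gaussian likelihood with known unit variance
    return -0.5 * np.sum((data - mu) ** 2) - 0.5 * len(data) * np.log(2 * np.pi)

def mean_loglike_at(beta, n_steps=20000):
    """Metropolis sampler targeting prior(mu) * likelihood(mu)**beta;
    returns the average log likelihood along the tempered chain."""
    mu, total, kept = 0.0, 0.0, 0
    for step in range(n_steps):
        prop = mu + rng.normal(scale=0.5)
        # standard normal prior on mu, tempered likelihood
        log_a = (beta * (log_likelihood(prop) - log_likelihood(mu))
                 + 0.5 * (mu ** 2 - prop ** 2))
        if np.log(rng.uniform()) < log_a:
            mu = prop
        if step >= 2000:  # discard burn-in
            total += log_likelihood(mu)
            kept += 1
    return total / kept

# Each beta is an independent unit of work: on a cluster, every temperature
# would run on its own node and only the scalar averages would be combined.
betas = np.linspace(0.0, 1.0, 11)
means = [mean_loglike_at(b) for b in betas]

# trapezoidal quadrature along the temperature path
log_Z = sum(0.5 * (means[i] + means[i + 1]) * (betas[i + 1] - betas[i])
            for i in range(len(betas) - 1))
print(f"estimated log marginal likelihood: {log_Z:.2f}")
```
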
42

Peridynamic Modeling and Simulation of Polymer-Nanotube Composites

Unknown Date (has links)
In this document, we develop and demonstrate a framework for simulating the mechanics of polymer materials that are reinforced by carbon nanotubes. Our model utilizes peridynamic theory to describe the mechanical response of the polymer and polymer-nanotube interfaces. We benefit from the continuum formulation used in peridynamics because (1) it allows the polymer material to be coarse-grained to the scale of the reinforcing nanofibers, and (2) failure via nanotube pull-out and matrix tearing is possible based on energetic considerations alone (i.e., without special treatment). To reduce the degrees of freedom that must be simulated, the reinforcement effect of the nanotubes is represented by a mesoscale bead-spring model. This approach permits the arbitrary placement of reinforcement "strands" in the problem domain and motivates the need for irregular quadrature point distributions, which have not yet been explored in the peridynamic setting. We address this matter in detail and report on aspects of mesh sensitivity that we uncovered in peridynamic simulations. Using a manufactured solution, we study the effects of quadrature point placement on the accuracy of the solution scheme in one and two dimensions. We demonstrate that square grids and the generator points of a centroidal Voronoi tessellation (CVT) support solutions of similar accuracy, but CVT grids have desirable characteristics that may justify the additional computational cost required for their construction. Impact simulations provide evidence that CVT grids support fracture patterns that resemble those obtained on higher-resolution cubic Cartesian grids, at a reduced computational burden. With the efficacy of irregular meshing schemes established, we exercise our model by dynamically stretching a cylindrical specimen composed of the polymer-nanotube composite. We vary the number of reinforcements, the alignment of the filler, and the properties of the polymer-nanotube interface. Our results suggest that enhanced reinforcement requires an interfacial stiffness that exceeds that of the neat polymer. We confirm that the reinforcement is most effective when a nanofiber is aligned with the applied deformation, least effective when a nanofiber is aligned transverse to the applied deformation, and achieves intermediate values for other orientations. Sample configurations containing two fibers are also investigated. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Fall Semester, 2013. / November 6, 2013. / Composites, Multiscale, Nanotube, Nonlocal, Peridynamics, Polymer / Includes bibliographical references. / Sachin Shanbhag, Professor Directing Dissertation; Okenwa Okoli, University Representative; Gordon Erlebacher, Committee Member; Tomasz Plewa, Committee Member; William Oates, Committee Member.
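
As a rough illustration of the bond-based peridynamic formulation referenced above, the one-dimensional sketch below sums pairwise bond forces over each point's "family" within a horizon and advances the displacements explicitly. The micromodulus, horizon factor, loading, and grid are assumed values for demonstration, not the dissertation's calibrated composite model.

```python
import numpy as np

n, dx = 101, 0.01             # quadrature points and spacing on a unit bar
horizon = 3.015 * dx          # interaction radius (a common choice)
c = 1.0e4                     # micromodulus (assumed)
rho, dt = 1.0, 1.0e-4

X = np.arange(n) * dx         # reference positions
u = np.zeros(n)               # displacements
v = np.zeros(n)               # velocities

# precompute neighbor lists: the "family" of each point within the horizon
families = [np.nonzero((np.abs(X - X[i]) <= horizon) & (np.arange(n) != i))[0]
            for i in range(n)]

def internal_force(u):
    f = np.zeros(n)
    for i in range(n):
        j = families[i]
        xi = X[j] - X[i]                  # reference bond vectors
        eta = u[j] - u[i]                 # relative displacements
        stretch = (np.abs(xi + eta) - np.abs(xi)) / np.abs(xi)
        # bond force density, volume-weighted over the family
        f[i] = np.sum(c * stretch * np.sign(xi + eta)) * dx
    return f

# explicit time stepping while the right end is pulled at constant velocity
pull_rate = 0.1  # imposed end velocity (assumed)
for step in range(500):
    u[-1] = pull_rate * step * dt
    v[-1] = pull_rate
    a = internal_force(u) / rho
    v += dt * a
    u += dt * v
print(f"displacement at the free left end: {u[0]:.3e}")
```
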
43

The Integration of Artificial Neural Networks and Geometric Morphometrics to Classify Teeth from Carcharhinus Sp

Unknown Date (has links)
The advent of geometric morphometrics and the revitalization of artificial neural networks have created powerful new tools to classify morphological structures to groups. Although these two approaches have already been combined, there has been less attention on how such combinations perform relative to more traditional methods. Here we use geometric morphometric data and neural networks to identify from which species upper-jaw teeth from carcharhiniform sharks in the genus Carcharhinus originated, and these results are compared to more traditional classification methods. In addition to the methodological applications of this comparison, an ability to identify shark teeth would facilitate the incorporation of shark teeth's vast fossil record into evolutionary studies. Using geometric morphometric data originating from Naylor and Marcus (1994), we built two types of neural networks, multilayer perceptrons and radial basis function neural networks, to classify teeth from C. acronotus, C. leucas, C. limbatus, and C. plumbeus, and we also classified the teeth using linear discriminant analysis. All classification schemes were trained using the right upper-jaw teeth of 15 individuals. Among these three methods, the multilayer perceptron performed the best, followed by linear discriminant analysis, and then the radial basis function neural network. All three classification systems appear to be more accurate than previous efforts to classify Carcharhinus teeth using linear distances between landmarks and linear discriminant analysis. In all three classification systems, misclassified teeth tended to originate either near the symphysis or near the jaw angle, though an additional peak occurred between these two structures. To assess whether smaller training sets would lead to comparable accuracies, we used a multilayer perceptron to classify teeth from the same species, but now based on a training set of right upper-jaw teeth from only five individuals. Although not as accurate as the network based on 15 individuals, the network performed favorably. As a final test, we built a multilayer perceptron to classify teeth from C. altimus, C. obscurus, and C. plumbeus, which have more similar upper-jaw teeth than the original four species, based on training sets of five individuals. Again, the classification system performed better than a system that combines linear measurements and discriminant function analysis. Given the high accuracies for all three systems, it appears that the use of geometric morphometric data has a great impact on the accuracy of the classification system, whereas the exact method of classification tends to make less of a difference. These results may be applicable to other systems and other morphological structures. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2013. / November 8, 2013. / Artificial Neural Network, Carcharhinus, Classification, Geometric Morphometrics, Linear Discriminant Analysis, Teeth / Includes bibliographical references. / Dennis E. Slice, Professor Directing Thesis; Anke Meyer-Baese, Committee Member.
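
The comparison described here can be sketched with standard tools: scikit-learn's MLPClassifier (a multilayer perceptron) and LinearDiscriminantAnalysis fit to landmark-style coordinate data. The four-class synthetic "tooth shape" data below is an assumption standing in for the Naylor and Marcus (1994) geometric morphometric measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_per_class, n_landmarks = 60, 8
X, y = [], []
for species in range(4):                       # four stand-in species classes
    center = rng.normal(size=2 * n_landmarks)  # class-specific mean shape
    X.append(center + 0.3 * rng.normal(size=(n_per_class, 2 * n_landmarks)))
    y.append(np.full(n_per_class, species))
X, y = np.vstack(X), np.concatenate(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)

print(f"MLP accuracy: {mlp.score(X_te, y_te):.2f}")
print(f"LDA accuracy: {lda.score(X_te, y_te):.2f}")
```
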
44

Multi-GPU Solutions of Geophysical PDEs with Radial Basis Function-Generated Finite Differences

Unknown Date (has links)
Many numerical methods based on Radial Basis Functions (RBFs) are gaining popularity in the geosciences due to their competitive accuracy, functionality on unstructured meshes, and natural extension into higher dimensions. One method in particular, Radial Basis Function-generated Finite Differences (RBF-FD), is drawing attention due to its comparatively low computational complexity versus other RBF methods, high-order accuracy (6th to 10th order is common), and parallel nature. Similar to classical Finite Differences (FD), RBF-FD computes weighted differences of stencil node values to approximate derivatives at stencil centers. The method differs from classical FD in that the test functions used to calculate the differentiation weights are n-dimensional RBFs rather than one-dimensional polynomials. This allows for generalization to n-dimensional space on completely scattered node layouts. Although RBF-FD was first proposed nearly a decade ago, it is only now gaining a critical mass to compete against well-known modeling methods such as FD, Finite Volume, and Finite Element. To truly contend, RBF-FD must transition from single-threaded MATLAB environments to large-scale parallel architectures. Many HPC systems around the world have made the transition to Graphics Processing Unit (GPU) accelerators as a solution for added parallelism and higher throughput. Some systems offer significantly more GPUs than CPUs. As the problem size, N, grows larger, it behooves us to work on parallel architectures, be it CPUs or GPUs. In addition to demonstrating the ability to scale to hundreds or thousands of compute nodes, this work introduces parallelization strategies that span RBF-FD across multi-GPU clusters. The stability and accuracy of the parallel implementation are verified through the explicit solution of two PDEs. Additionally, a parallel implementation for implicit solutions is introduced as part of continued research efforts. This work establishes RBF-FD as a contender in the arena of distributed HPC numerical methods. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Fall Semester, 2013. / November 6, 2013. / High-order finite differencing, Multi-GPU computing, OpenCL, Parallel computing, Radial basis functions, RBF-FD / Includes bibliographical references. / Gordon Erlebacher, Professor Directing Dissertation; Mark Sussman, University Representative; Natasha Flyer, Committee Member; Dennis Slice, Committee Member; Ming Ye, Committee Member; Janet Peterson, Committee Member.
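
The weight computation at the heart of RBF-FD can be sketched in a few lines: solve a small linear system whose matrix holds RBF values between stencil nodes and whose right-hand side holds the derivative of the RBF evaluated at the stencil center. The Gaussian RBF, shape parameter, and one-dimensional scattered stencil below are illustrative assumptions (production codes typically add polynomial terms and work in higher dimensions).

```python
import numpy as np

eps = 2.0                                   # shape parameter (assumed)
phi = lambda r: np.exp(-(eps * r) ** 2)     # Gaussian RBF

def rbf_fd_weights(nodes, center):
    """Weights w such that f'(center) ~= sum_j w_j f(nodes_j)."""
    r = np.abs(nodes[:, None] - nodes[None, :])
    A = phi(r)                                        # RBF interpolation matrix
    d = center - nodes
    b = -2.0 * eps**2 * d * np.exp(-(eps * d) ** 2)   # d/dx of phi at the center
    return np.linalg.solve(A, b)

# scattered 1-D stencil around x = 0.3
nodes = np.array([0.21, 0.26, 0.30, 0.33, 0.39])
w = rbf_fd_weights(nodes, 0.30)

# verify on f(x) = sin(x): the exact derivative at 0.3 is cos(0.3)
approx = w @ np.sin(nodes)
print(f"approx {approx:.6f} vs exact {np.cos(0.3):.6f}")
```
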
45

Objective Front Detection from Ocean Color Data

Unknown Date (has links)
We outline a new approach to objectively locate and define mesoscale oceanic features from satellite-derived ocean color data. Modern edge detection algorithms are robust and accurate for most applications; oceanic satellite observations, however, introduce challenges that foil many differentiation-based algorithms. Clouds, discontinuities, noise, and the low variability of the pertinent data prove confounding. In this work the input data is first quantized using a centroidal Voronoi tessellation (CVT), removing noise and revealing the low-variability fronts of interest. Clouds are then removed by assuming the values of their surrounding neighbors, and the perimeters of the resulting cloudless regions localize the fronts to a small set. We then use the gradient of the quantized data as a compass to walk around each front and periodically select points to serve as knots for a Hermite spline. These Hermite splines yield an analytic representation of the fronts and provide practitioners with a convenient tool to calibrate their models. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2013. / November 18, 2013. / Edge Detection, Front Detection, Oceanography / Includes bibliographical references. / Gordon Erlebacher, Professor Co-Directing Thesis; Eric Chassignet, Professor Co-Directing Thesis; Ming Ye, Committee Member; Anke Meyer-Baese, Committee Member.
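
As a toy illustration of the quantization step, the sketch below runs a one-dimensional CVT (equivalently, Lloyd's iteration in value space) on a noisy synthetic scalar field; the boundaries between quantization levels then localize the front. The synthetic "ocean color" field and the number of generators are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(2)
# noisy two-water-mass field with a front at index 500
field = np.concatenate([rng.normal(0.2, 0.05, 500),
                        rng.normal(0.8, 0.05, 500)])

def cvt_quantize(values, k=4, iters=50):
    """Lloyd's iteration in value space: move each generator to the
    centroid of its Voronoi cell until the partition stabilizes."""
    gens = np.linspace(values.min(), values.max(), k)  # initial generators
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - gens[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                gens[j] = values[labels == j].mean()
    return gens[labels]

quantized = cvt_quantize(field)
fronts = np.nonzero(np.diff(quantized) != 0)[0]  # indices where the level jumps
print(f"{len(fronts)} quantization-level transitions localize the front")
```
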
46

Reduced Order Modeling Using the Wavelet-Galerkin Approximation of Differential Equations

Unknown Date (has links)
Over the past few decades an increased interest in reduced order modeling approaches has led to their application in areas such as real-time simulations and parameter studies, among many others. In the context of this work, reduced order modeling seeks to solve differential equations using substantially fewer degrees of freedom compared to a standard approach like the finite element method. The finite element method is a Galerkin method which typically uses piecewise polynomial functions to approximate the solution of a differential equation. Wavelet functions have recently become a relevant topic in computational science due to their attractive properties, including differentiability and multi-resolution structure. This research seeks to combine a wavelet-Galerkin method with a reduced order approach to approximate the solution to a differential equation with a given set of parameters. This work focuses on showing that using a reduced order approach in a wavelet-Galerkin setting is a viable option for determining a reduced order solution to a differential equation. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2013. / October 30, 2013. / Daubechies, Finite Element Method, Partial Differential Equation, Proper Orthogonal Decomposition, Reduced Order Modeling, Wavelet / Includes bibliographical references. / Janet Peterson, Professor Directing Thesis; Max Gunzburger, Committee Member; Ming Ye, Committee Member.
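
A minimal sketch of the reduced order machinery referenced in the keywords (proper orthogonal decomposition), assuming a generic parameterized linear system in place of the wavelet-Galerkin discretization: snapshots at training parameters are compressed to a POD basis by SVD, and the full system is Galerkin-projected onto that basis so an unseen parameter requires only a tiny solve.

```python
import numpy as np

n = 200
x = np.linspace(0, 1, n)
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D stiffness matrix
f = np.exp(x)                                         # fixed right-hand side

def full_solve(mu):
    # parameterized full-order model (a reaction-diffusion-like toy system)
    return np.linalg.solve(K + mu * np.eye(n), f)

# snapshots: full-order solutions at a few training parameter values
snapshots = np.column_stack([full_solve(mu) for mu in np.linspace(0.5, 2.0, 10)])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :3]                     # keep 3 POD modes (singular values decay fast)

# reduced solve at an unseen parameter: a 3x3 system instead of 200x200
mu_new = 1.3
A_r = V.T @ (K + mu_new * np.eye(n)) @ V
u_rom = V @ np.linalg.solve(A_r, V.T @ f)

u_full = full_solve(mu_new)
err = np.linalg.norm(u_rom - u_full) / np.linalg.norm(u_full)
print(f"relative ROM error with 3 modes: {err:.2e}")
```
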
47

Toward Connecting Core-Collapse Supernova Theory with Observations

Unknown Date (has links)
We study the evolution of the collapsing core of a 15 solar mass blue supergiant supernova progenitor from the moment shortly after core bounce until 1.5 seconds later. We present a sample of two- and three-dimensional hydrodynamic models parameterized to match the explosion energetics of supernova SN 1987A. We focus on the characteristics of the flow inside the gain region and the interplay between hydrodynamics, self-gravity, and neutrino heating, taking into account uncertainty in the nuclear equation of state. We characterize the evolution and structure of the flow behind the shock in terms of the accretion flow dynamics, shock perturbations, energy transport and neutrino heating effects, and convective and turbulent motions. We also analyze information provided by particle tracers embedded in the flow. Our models are computed with a high-resolution finite volume shock capturing hydrodynamic code. The code includes source terms due to neutrino-matter interactions from a light-bulb neutrino scheme that is used to prescribe the luminosities and energies of the neutrinos emerging from the core of the proto-neutron star. The proto-neutron star is excised from the computational domain, and its contraction is modeled by a time-dependent inner boundary condition. We find the spatial dimensionality of the models to be an important contributing factor in the explosion process. Compared to two-dimensional simulations, our three-dimensional models require lower neutrino luminosities to produce equally energetic explosions. We estimate that the convective engine in our models is 4% more efficient in three dimensions than in two dimensions. We propose that this is due to the difference in the morphology of convection between two- and three-dimensional models. Specifically, the greater efficiency of the convective engine found in three-dimensional simulations might be due to the larger surface-to-volume ratio of convective plumes, which aids in distributing energy deposited by neutrinos. We do not find evidence of the standing accretion shock instability in our models. Instead we identify a relatively long phase of quasi-steady convection below the shock, driven by neutrino heating. During this phase, the analysis of the energy transport in the post-shock region reveals characteristics closely resembling those of penetrative convection. We find that the flow structure grows from small scales and organizes into large, convective plumes on the scale of the gain region. We use tracer particles to study the flow properties, and find substantial differences in residency times of fluid elements in the gain region between two-dimensional and three-dimensional models. These appear to originate at the base of the gain region and are due to differences in the structure of convection. We also identify differences in the evolution of energy of the fluid elements, how they are heated by neutrinos, and how they become gravitationally unbound. In particular, at the time when the explosion commences, we find that the unbound material has relatively long residency times in two-dimensional models, while in three dimensions a significant fraction of the explosion energy is carried by particles with relatively short residency times. We conduct a series of numerical experiments in which we methodically decrease the angular resolution in our three-dimensional models. We observe that the explosion energy decreases dramatically once the resolution is inadequate to capture the morphology of convection on large scales. 
Thus, we demonstrated that it is possible to connect successful, energetic, three-dimensional models with unsuccessful three-dimensional models just by decreasing numerical resolution, and thus the amount of resolved physics. This example shows that the role of dimensionality is secondary to correctly accounting for the basic physics of the explosion. The relatively low spatial resolution of current three-dimensional models allows for only rudimentary insights into the role of turbulence in driving the explosion. However, and contrary to some recent reports, we do not find evidence for turbulence being a key factor in reviving the stalled supernova shock. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Spring Semester, 2014. / April 15, 2014. / Convection, Hydrodynamics, Instabilities, Shock Waves, Supernovae / Includes bibliographical references. / Tomasz Plewa, Professor Directing Dissertation; Mark Sussman, University Representative; Anke Meyer-Baese, Committee Member; Gordon Erlebacher, Committee Member; Ionel M. Navon, Committee Member.
48

Binary White Dwarf Mergers: Weak Evidence for Prompt Detonations in High-Resolution Adaptive Mesh Simulations

Unknown Date (has links)
The origins of thermonuclear supernovae remain poorly understood, a troubling fact given their importance in astrophysics and cosmology. A leading theory posits that these events arise from the merger of white dwarfs in a close binary system. In this study we examine the possibility of prompt ignition, in which a runaway fusion reaction is initiated in the early stages of the merger. We present a set of three-dimensional white dwarf merger simulations performed with the help of a high-resolution adaptive mesh refinement hydrocode. We consider three binary systems of different mass ratios composed of carbon/oxygen white dwarfs with total mass exceeding the Chandrasekhar mass. We additionally explore the effects of mesh resolution on important simulation parameters. We find that two distinct behaviors emerge depending on the progenitor mass ratio. For systems of components with differing masses, a boundary layer forms around the accretor. For systems of nearly equal mass, the merger product displays deep entrainment of each star into the other. We closely monitor thermonuclear burning that begins when sufficiently dense material is shocked during early stages of the merger process. Analysis of ignition times leads us to conclude that for binary systems with components of unequal mass whose combined mass is close to the Chandrasekhar limit, there is a negligible chance of prompt ignition. Simulations of similar systems with a combined mass of 2 solar masses suggest that prompt ignition may be possible, but further study at higher resolution is required. The system with components of nearly equal mass does not seem likely to undergo prompt ignition, and higher resolution simulations are unlikely to change this conclusion. We additionally find that white dwarf merger simulations require high resolution. Insufficient resolution can qualitatively change simulation outcomes, either by smoothing important fluctuations in density and temperature, or by altering the dynamics of the system such that additional physics processes, such as gravity, are incorrectly represented. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Spring Semester, 2014. / April 14, 2014. / Binaries: Close, Hydrodynamics: Instabilities, Stars: Accretion, White Dwarfs, Supernovae: General / Includes bibliographical references. / Tomasz Plewa, Professor Directing Thesis; Mark Sussman, Committee Member; Gordon Erlebacher, Committee Member.
49

Bayesian Neural Networks in Data-Intensive High Energy Physics Applications

Unknown Date (has links)
This dissertation studies a graphics processing unit (GPU) construction of Bayesian neural networks (BNNs) using large training data sets. The goal is to create a program for the mapping of phenomenological Minimal Supersymmetric Standard Model (pMSSM) parameters to their predictions. This would allow for a more robust method of studying the Minimal Supersymmetric Standard Model, which is of much interest at the Large Hadron Collider (LHC) experiment at CERN. A systematic study of the speedup achieved in the GPU application compared to a Central Processing Unit (CPU) implementation is presented. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Spring Semester, 2014. / April 1, 2014. / Bayesian Neural Networks, GPU, pMSSM, Scientific Computing / Includes bibliographical references. / Anke Meyer-Baese, Professor Directing Dissertation; Harrison Prosper, Professor Directing Dissertation; Jorge Piekarewicz, University Representative; Sachin Shanbhag, Committee Member; Peter Beerli, Committee Member.
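
To illustrate what a BNN prediction involves, the sketch below crudely samples the posterior over a tiny network's weights with random-walk Metropolis and averages predictions over the samples; the repeated forward passes over weight samples are exactly the workload a GPU implementation would parallelize. The two-parameter toy regression is an assumption standing in for the pMSSM parameter-to-prediction mapping.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(40, 2))   # toy stand-in for pMSSM parameters
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=40)

n_hidden = 5
n_w = 2 * n_hidden + n_hidden + n_hidden + 1  # weights + biases, flattened

def forward(w, X):
    W1 = w[:2 * n_hidden].reshape(2, n_hidden)
    b1 = w[2 * n_hidden:3 * n_hidden]
    W2 = w[3 * n_hidden:4 * n_hidden]
    return np.tanh(X @ W1 + b1) @ W2 + w[-1]

def log_post(w):
    # Gaussian likelihood (noise 0.05) with a standard normal weight prior
    resid = y - forward(w, X)
    return -0.5 * np.sum(resid ** 2) / 0.05 ** 2 - 0.5 * np.sum(w ** 2)

# random-walk Metropolis over network weights: a crude posterior sampler
w, samples = rng.normal(scale=0.1, size=n_w), []
lp = log_post(w)
for step in range(20000):
    prop = w + 0.02 * rng.normal(size=n_w)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        w, lp = prop, lp_prop
    if step % 100 == 0:
        samples.append(w.copy())

# BNN prediction: mean and spread over sampled networks, after burn-in
x_new = np.array([[0.3, -0.2]])
preds = [forward(s, x_new)[0] for s in samples[50:]]
print(f"prediction {np.mean(preds):.3f} +/- {np.std(preds):.3f}")
```
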
50

Improvements in Metadynamics Simulations: The Essential Energy Space Random Walk and the Wang-Landau Recursion

Unknown Date (has links)
Metadynamics is a popular tool to explore free energy landscapes, and it has been used to elucidate various chemical and biochemical processes. The height of the updating Gaussian function is very important for proper convergence to the target free energy surface. Both higher and lower Gaussian heights have advantages and disadvantages, so a balance is required. This thesis presents the implementation of the Wang-Landau recursion scheme in metadynamics simulations to adjust the height of the unit Gaussian function. Compared with classical fixed Gaussian heights, this dynamically adjustable method was demonstrated to yield better-converged free energy surfaces efficiently. In addition, through combination with the realization of an energy space random walk, the Wang-Landau recursion scheme can be readily used to deal with the pseudoergodicity problem in molecular dynamics simulations. Within this thesis, the use of this scheme is shown to efficiently and robustly obtain a biased free energy function. / A Thesis Submitted to the School of Computational Science in Partial Fulfillment of the Requirements for the Degree of Master of Science. / Summer Semester, 2008. / June 20, 2008. / Essential Energy Space Random Walk, Metadynamics Simulations, Wang-Landau Method / Includes bibliographical references. / Wei Yang, Professor Directing Thesis; Gordon Erlebacher, Committee Member; Janet Peterson, Committee Member.
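
A minimal sketch of the scheme described above, under assumed toy settings: metadynamics on a one-dimensional double-well collective variable, with the Gaussian hill height halved, Wang-Landau style, whenever the visit histogram becomes roughly flat. The potential, flatness threshold, and deposition schedule are illustrative assumptions, not the thesis's production settings; the accumulated bias approximates the negative free energy.

```python
import numpy as np

rng = np.random.default_rng(4)
V = lambda s: (s ** 2 - 1.0) ** 2          # double-well potential, kT = 1
grid = np.linspace(-1.5, 1.5, 150)
bias = np.zeros_like(grid)                 # accumulated metadynamics bias
hist = np.zeros_like(grid)                 # visit histogram for flatness checks
sigma, height = 0.1, 0.5                   # hill width and initial height

s = -1.0
for step in range(100000):
    # Metropolis move on the biased potential V(s) + bias(s)
    prop = np.clip(s + 0.1 * rng.normal(), grid[0], grid[-1])
    dU = (V(prop) + np.interp(prop, grid, bias)
          - V(s) - np.interp(s, grid, bias))
    if np.log(rng.uniform()) < -dU:
        s = prop
    hist[np.argmin(np.abs(grid - s))] += 1

    if step % 100 == 0:
        # deposit a Gaussian hill at the current collective variable value
        bias += height * np.exp(-(grid - s) ** 2 / (2 * sigma ** 2))
        # Wang-Landau recursion: once the histogram is roughly flat,
        # halve the hill height and reset the histogram
        if hist.min() > 0.8 * hist.mean():
            height *= 0.5
            hist[:] = 0.0

well = np.argmin(np.abs(grid + 1.0))
top = np.argmin(np.abs(grid))
print(f"final hill height {height:.4f}; barrier estimate "
      f"{bias[well] - bias[top]:.2f} (exact value 1.0)")
```
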
