51.
A Computational Method for Age-at-Death Estimation Based on the Pubic Symphysis
A significant component of forensic science is analyzing bones to assess the age at death of an individual. Forensic anthropologists often include the pubic symphysis in such studies. Subjective methods, such as the Suchey-Brooks method, are currently used to analyze the pubic symphysis. This thesis examines a more objective, quantitative method. The method analyzes 3D surface scans of the pubic symphysis and implements a thin plate spline algorithm that models the bending of a flat plane to approximately match the surface of the bone. The algorithm minimizes the bending energy required for this transformation. Results presented here show that there is a correlation between the minimum bending energy and the age at death of the individual. The method could be useful to medico-legal practitioners. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2012. / August 8, 2012. / Age estimation, Pubic Symphysis, Thin plate splines / Includes bibliographical references. / Dennis Slice, Professor Directing Thesis; John Burkardt, Committee Member; Ming Ye, Committee Member; Sachin Shanbhag, Committee Member.
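The core computation described above, fitting a thin plate spline and reading off its bending energy, can be sketched in a few lines. The following is a minimal, hypothetical numpy illustration using the 2-D kernel U(r) = r^2 log r^2 on landmark heights (the thesis itself works with full 3-D surface scans); the quantity w^T K w is proportional to the plate's bending energy.

```python
import numpy as np

def tps_bending_energy(xy, z):
    """Bending energy of a 2-D thin plate spline fitted to heights z
    at planar landmarks xy (n x 2). Returns w^T K w, proportional to
    the integral bending energy of the deformed plate (illustrative)."""
    n = len(xy)
    # Radial kernel U(r) = r^2 log r^2, taken as 0 at r = 0 by continuity.
    d2 = np.sum((xy[:, None, :] - xy[None, :, :])**2, axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d2 > 0, d2 * np.log(d2), 0.0)
    P = np.hstack([np.ones((n, 1)), xy])           # affine part
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    sol = np.linalg.solve(L, np.concatenate([z, np.zeros(3)]))
    w = sol[:n]                                    # non-affine weights
    return float(w @ K @ w)
```

An affine (tilted but flat) surface needs no bending, so its energy is numerically zero, while a curved surface yields a nonzero value.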
52.
Computational Modeling of Elastic Fields in Dislocation Dynamics
In the present work, we investigate the internal fields generated by the dislocation structures that form during the deformation of copper single crystals. In particular, we perform computational modeling of the statistical and morphological characteristics of the dislocation structures obtained by the dislocation dynamics simulation method and compare the results with X-ray microscopy measurements of the same data. This comparison is performed for both the dislocation structures and their internal elastic fields for the cases of homogeneous deformation and indentation of copper single crystals. A direct comparison between dislocation dynamics predictions and X-ray measurements plays a key role in demonstrating the fidelity of discrete dislocation dynamics as a predictive computational mechanics tool and in understanding the X-ray data. For the homogeneous deformation case, dislocation dynamics simulations were performed under periodic boundary conditions and the internal fields of dislocations were computed by solving an elastic boundary value problem for a many-dislocation system using the finite element method. The distribution and pair correlation functions of all internal elastic fields and the dislocation density were computed. For the internal stress field, the availability of such statistical information paves the way to the development of a density-based mobility law of dislocations in continuum dislocation dynamics models, by correlating the internal-stress statistics with dislocation velocity statistics. The statistical analysis of the lattice rotation and the dislocation density fields in the deformed crystal made possible the direct comparison with X-ray measurements of the same data. This comparison revealed important similarities and differences between the simulation results and the experimental data.
In the case of indentation, which represents a highly inhomogeneous deformation, a contact boundary value problem was solved in conjunction with a discrete-dislocation dynamics simulation model; the discrete dislocation dynamics simulation was thus enabled to handle finite domains under mixed traction/displacement boundary conditions. The load-displacement curves for indentation experiments were analyzed with regard to cross slip, indentation speed and indenter shape. The lattice distortion fields obtained by indentation simulations were directly compared with their experimental counterparts. Other indentation simulations were also carried out, giving insight into different aspects of micro-scale indentation deformation. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Fall Semester, 2012. / November 2, 2012. / Copper, Dislocation Dynamics, Indentation, Plasticity, Size effects / Includes bibliographical references. / Anter El-Azab, Professor Directing Thesis; Leon van Dommelen, University Representative; Gordon Erlebacher, Committee Member; Ming Ye, Committee Member; Xiaoqiang Wang, Committee Member.
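For intuition about the internal fields such simulations compute, the classical isotropic plane-strain stress field of a single straight edge dislocation can be evaluated directly from textbook formulas. The sketch below is illustrative only: the material constants are rough assumed values for copper, and the dissertation's FEM-based many-dislocation solver is far more general.

```python
import numpy as np

def edge_dislocation_stress(x, y, b=2.56e-10, mu=48e9, nu=0.34):
    """Plane-strain stress (Pa) of an edge dislocation along z with
    Burgers vector b along x, in an isotropic elastic medium.
    Defaults are rough copper values (assumed for illustration)."""
    D = mu * b / (2.0 * np.pi * (1.0 - nu))
    r2 = x**2 + y**2
    sxx = -D * y * (3 * x**2 + y**2) / r2**2
    syy = D * y * (x**2 - y**2) / r2**2
    sxy = D * x * (x**2 - y**2) / r2**2
    return sxx, syy, sxy
```

The field has the expected symmetries: sigma_xx is odd in y and sigma_xy is odd in x, so stresses of opposite sign appear above/below and left/right of the core.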
53.
Thermal Conductivity and Self-Generation of Magnetic Fields in Discontinuous Plasmas
Hydrodynamic instabilities are the driving force behind complex fluid processes that occur from everyday scenarios to the most extreme physical conditions of the universe. The Rayleigh-Taylor instability (RTI) develops when a heavy fluid is accelerated by a light fluid, resulting in sinking spikes, rising bubbles, and material mixing. Laser experiments have observed features of RTI that cannot be explained with pure hydrodynamic models. For this computational study we have implemented and verified extended physics modules for anisotropic thermal conduction and self-generated magnetic fields in the FLASH-based Proteus code using the Braginskii plasma theory. We have used this code to simulate RTI in a basic plasma physics context. We obtain results up to 35 nanoseconds (ns) at various resolutions and discuss convergence and computational challenges. We find that magnetic fields as high as 1-10 megagauss (MG) are generated near the fluid interface. Thermal conduction turns out to be essentially isotropic in these conditions, but plays the dominant role in the evolution of the system by smearing out small-scale structure and reducing the RT growth rate. This may account for the relatively featureless RT spikes seen in experiments. We do not, however, observe mass extensions in our simulations. Without thermal conductivity, the magnetic field has the effect of generating what appears to be an additional RT mode which results in new structure at later times, when compared to pure hydro models. Additional physics modules and 3-D simulations are needed to complete our Braginskii model of RTI. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Summer Semester, 2012. / June 29, 2012. / astrophysics, computational, magnetic, physics, plasma, thermal / Includes bibliographical references.
/ Tomasz Plewa, Professor Directing Thesis; Michael Ionel Navon, Committee Member; Mark Sussman, Committee Member.
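The self-generated (Biermann battery) field arises where density and temperature gradients are misaligned. A minimal grid sketch of the z-component of the source term dB/dt ~ (grad n x grad T)/n, with all physical constants lumped into one assumed coefficient (sign and units depend on conventions not fixed here):

```python
import numpy as np

def biermann_source(ne, Te, dx, dy):
    """z-component of a Biermann-battery-like source on a 2-D grid:
    coeff * (grad n x grad T)_z / n. The coefficient c*kB/e is set to 1
    (an assumption; real units depend on the chosen unit system)."""
    ckB_e = 1.0                               # assumed lumped constant
    dndy, dndx = np.gradient(ne, dy, dx)      # axis 0 is y, axis 1 is x
    dTdy, dTdx = np.gradient(Te, dy, dx)
    return ckB_e * (dndx * dTdy - dndy * dTdx) / ne
```

Parallel density and temperature gradients generate no field, while crossed gradients (for example, density varying in x and temperature in y) give a nonzero source, which is why the field appears near the perturbed fluid interface.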
54.
Sparse Motion Analysis
Motion segmentation is an essential pre-processing task in many computer vision problems. In this dissertation, the motion segmentation problem is studied and analyzed. First, we establish a framework for the accurate evaluation of the motion field produced by different algorithms. Based on this framework, we introduce a feature tracking algorithm based on RankBoost which automatically prunes bad trajectories. The algorithm is observed to outperform many feature trackers under different measures. Second, we develop three different motion segmentation algorithms. The first algorithm is based on spectral clustering. The affinity matrix is built from the angular information between different trajectories. We also propose a metric to select the best dimension of the lower-dimensional space onto which the trajectories are projected. The second algorithm is based on learning. Using training examples, it obtains a ranking function to evaluate and compare a number of motion segmentations generated by different algorithms and pick the best one. The third algorithm is based on energy minimization using the Swendsen-Wang cut algorithm and simulated annealing. It has a time complexity of $O(N^2)$, compared to at least $O(N^3)$ for the spectral clustering based algorithms; it also accommodates generic forms of energy functions. We evaluate all three algorithms as well as several other state-of-the-art methods on a standard benchmark and show competitive performance. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester, 2013. / June 13, 2013. / Computer Vision, Machine Learning, Motion Segmentation, Object Tracking / Includes bibliographical references. / Adrian Barbu, Professor Directing Thesis; Anke Meyer-Baese, Professor Co-Directing Thesis; Xiuwen Liu, University Representative; Dennis Slice, Committee Member; Xiaoqiang Wang, Committee Member.
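The first algorithm's idea, spectral clustering on an angle-based affinity between trajectories, can be sketched for the two-group case. The Gaussian angular affinity, the sigma value, and the Fiedler-vector bipartition below are illustrative assumptions, not the dissertation's exact construction.

```python
import numpy as np

def angular_spectral_bipartition(T, sigma=0.5):
    """Split trajectories (rows of T) into two motion groups via
    spectral clustering on a pairwise-angle affinity (minimal sketch)."""
    U = T / np.linalg.norm(T, axis=1, keepdims=True)
    theta = np.arccos(np.clip(U @ U.T, -1.0, 1.0))   # pairwise angles
    A = np.exp(-(theta / sigma)**2)                  # angular affinity
    np.fill_diagonal(A, 0.0)
    d = A.sum(axis=1)
    Dih = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(T)) - Dih @ A @ Dih               # normalized Laplacian
    _, vecs = np.linalg.eigh(L)                      # ascending eigenvalues
    fiedler = vecs[:, 1]                             # second-smallest eigenvector
    return (fiedler > 0).astype(int)
```

Trajectories moving in nearly the same direction form a tight affinity block, so the sign of the Fiedler vector separates the two motions.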
55.
Artificial Prediction Markets for Classification, Regression and Density Estimation
Prediction markets are forums of trade where contracts on the future outcomes of events are bought and sold. These contracts reward buyers based on correct predictions and thus give incentive to make accurate predictions. Prediction markets have successfully predicted the outcomes of sporting events, elections, scientific hypotheses, foreign affairs, and more, and have repeatedly demonstrated themselves to be more accurate than individual experts or polling [2]. Since prediction markets are aggregation mechanisms, they have garnered interest in the machine learning community. Artificial prediction markets have been successfully used to solve classification problems [34, 33]. This dissertation explores the underlying optimization problem in the classification market, as presented in [34, 33], proves that it is related to maximum log likelihood, relates the classification market to existing machine learning methods, and further extends the idea to regression and density estimation. In addition, the results of empirical experiments on a variety of UCI [25], LIAAD [49] and synthetic data are presented to demonstrate the probability accuracy, the prediction accuracy as compared to Random Forest [9] and Implicit Online Learning [32], and the loss function. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Spring Semester, 2013. / March 29, 2013. / Aggregation, Artificial Prediction Markets, Classification, Density estimation, Machine Learning, Regression / Includes bibliographical references. / Adrian Barbu, Professor Directing Thesis; Anke Meyer-Baese, Professor Co-Directing Thesis; Debajyoti Sinha, University Representative; Ming Ye, Committee Member; Xiaoqiang Wang, Committee Member.
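A toy version of the aggregation idea: each market participant "bets" its predicted class probability, the market price is the budget-weighted average, and budgets are updated by a payoff rule so that accurate participants gain influence. The multiplicative update below is an assumed stand-in for the betting functions analyzed in [34, 33], not their actual mechanism.

```python
import numpy as np

def market_aggregate(probs, budgets):
    """Budget-weighted market estimate of the class-1 probability."""
    return float(np.dot(budgets, probs) / budgets.sum())

def run_market(prob_history, outcomes, n_agents, eta=0.5):
    """Each round, agents predict P(class 1); budgets grow with the
    probability each agent assigned to the realized outcome, so
    accurate agents accumulate influence (assumed update rule)."""
    budgets = np.ones(n_agents)
    for probs, y in zip(prob_history, outcomes):
        p_realized = np.where(y == 1, probs, 1.0 - probs)
        budgets *= (1.0 - eta) + eta * p_realized   # multiplicative payoff
    return budgets / budgets.sum()                  # final budget shares
```

After a few rounds, an agent whose probabilities track the outcomes ends up with a larger budget share than an uninformative agent, which is the sense in which the market aggregates classifiers by performance.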
56.
Improving Inference in Population Genetics Using Statistics
My studies at Florida State University focused on using computers and statistics to solve problems in population genetics. I have created models and algorithms that have the potential to improve the statistical analysis of population genetics. Population genetic data is often noisy and thus requires the use of statistics in order to draw meaning from the data. This dissertation consists of three main projects. The first project involves the parallel evaluation and model inference on multi-locus data sets. Bayes factors are used for model selection. We used thermodynamic integration to calculate these Bayes factors. To take advantage of parallel processing across a high-performance computing cluster, I developed a new method to split the Bayes factor calculation into independent units and combine them later. The next project, the Transition Probability Structured Coalescence [TSPC], involved the creation of a continuous approximation to the discrete migration process of the structured coalescent, which is commonly used to infer migration rates in biological populations. Previous methods required the simulation of these migration events, but there is little power to estimate the time and occurrence of these events. In my method, they are replaced with a one-dimensional numerical integration. The third project involved the development of a model for the inference of the time of speciation. Previous models used a set time to delineate speciation, treating it as a point process. Instead, this point process is replaced with a parameterized speciation model where each lineage speciates according to a parameterized distribution. This is effectively a broader model that allows both very quick and slow speciation. It also includes the previous model as a limiting case.
These three projects, although rather independent of each other, improve the inference of population genetic models and thus allow better analyses of genetic data in fields such as phylogeography, conservation, and epidemiology. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Spring Semester, 2013. / March 26, 2013. / Includes bibliographical references. / Peter Beerli, Professor Directing Thesis; Anuj Srivastava, University Representative; Gordon Erlebacher, Committee Member; Alan Lemmon, Committee Member; Dennis Slice, Committee Member.
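The thermodynamic integration identity behind the Bayes factor calculation, log Z(1) = integral over beta in [0,1] of E_beta[log L] under the power posterior p_beta proportional to prior times L^beta, can be checked on a toy discrete model where the marginal likelihood is also computable directly. A minimal sketch using the trapezoidal rule over a beta grid (the parallelization across beta values mentioned above is omitted):

```python
import numpy as np

def log_marginal_ti(log_prior, log_lik, betas):
    """log marginal likelihood via thermodynamic integration on a
    discrete parameter grid: integrate E_beta[log L] over beta,
    with power posterior p_beta ~ prior * L^beta."""
    means = []
    for b in betas:
        logw = log_prior + b * log_lik
        w = np.exp(logw - logw.max())
        w /= w.sum()                        # normalized power posterior
        means.append(np.sum(w * log_lik))   # E_beta[log L]
    m = np.array(means)
    return float(np.sum(0.5 * (m[1:] + m[:-1]) * np.diff(betas)))
```

Because each beta's expectation only needs the (tempered) posterior at that temperature, the per-beta computations are independent, which is what makes this calculation easy to distribute across a cluster.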
57.
Peridynamic Modeling and Simulation of Polymer-Nanotube Composites
In this document, we develop and demonstrate a framework for simulating the mechanics of polymer materials that are reinforced by carbon nanotubes. Our model utilizes peridynamic theory to describe the mechanical response of the polymer and polymer-nanotube interfaces. We benefit from the continuum formulation used in peridynamics because (1) it allows the polymer material to be coarse-grained to the scale of the reinforcing nanofibers, and (2) failure via nanotube pull-out and matrix tearing are possible based on energetic considerations alone (i.e. without special treatment). To reduce the degrees of freedom that must be simulated, the reinforcement effect of the nanotubes is represented by a mesoscale bead-spring model. This approach permits the arbitrary placement of reinforcement ''strands'' in the problem domain and motivates the need for irregular quadrature point distributions, which have not yet been explored in the peridynamic setting. We address this matter in detail and report on aspects of mesh sensitivity that we uncovered in peridynamic simulations. Using a manufactured solution, we study the effects of quadrature point placement on the accuracy of the solution scheme in one and two dimensions. We demonstrate that square grids and the generator points of a centroidal Voronoi tessellation (CVT) support solutions of similar accuracy, but CVT grids have desirable characteristics that may justify the additional computational cost required for their construction. Impact simulations provide evidence that CVT grids support fracture patterns that resemble those obtained on higher resolution cubic Cartesian grids with a reduced computational burden. With the efficacy of irregular meshing schemes established, we exercise our model by dynamically stretching a cylindrical specimen composed of the polymer-nanotube composite. We vary the number of reinforcements, alignment of the filler, and the properties of the polymer-nanotube interface. 
Our results suggest that enhanced reinforcement requires an interfacial stiffness that exceeds that of the neat polymer. We confirm that the reinforcement is most effective when a nanofiber is aligned with the applied deformation, least effective when a nanofiber is aligned transverse to the applied deformation, and achieves intermediate values for other orientations. Sample configurations containing two fibers are also investigated. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Fall Semester, 2013. / November 6, 2013. / Composites, Multiscale, Nanotube, Nonlocal, Peridynamics, Polymer / Includes bibliographical references. / Sachin Shanbhag, Professor Directing Dissertation; Okenwa Okoli, University Representative; Gordon Erlebacher, Committee Member; Tomasz Plewa, Committee Member; William Oates, Committee Member.
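A one-dimensional, bond-based sketch of the peridynamic internal force evaluation with one-point quadrature per node; the micromodulus value and horizon are illustrative assumptions, and the dissertation's model (with interfaces, CVT quadrature, and two or three dimensions) is far more involved. A homogeneously stretched bar shows the expected behavior: zero net force density in the interior and the well-known peridynamic surface effect near the ends.

```python
import numpy as np

def pd_internal_force(x, u, delta, c):
    """Internal force density of a 1-D bond-based peridynamic bar on a
    uniform grid x with displacements u, horizon delta, micromodulus c,
    using one quadrature point per node (minimal sketch)."""
    dx = x[1] - x[0]                      # uniform spacing assumed
    f = np.zeros(len(x))
    for i in range(len(x)):
        for j in range(len(x)):
            xi = x[j] - x[i]              # bond vector
            if j == i or abs(xi) > delta:
                continue
            s = (u[j] - u[i]) / xi        # bond stretch (small strain)
            f[i] += c * s * np.sign(xi) * dx
    return f
```

For a uniform stretch, every interior node has a symmetric family of bonds whose pairwise forces cancel exactly, while nodes within one horizon of a free end see truncated bond families and therefore nonzero force density, the surface effect that irregular quadrature schemes must also contend with.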
58.
The Integration of Artificial Neural Networks and Geometric Morphometrics to Classify Teeth from Carcharhinus Sp
The advent of geometric morphometrics and the revitalization of artificial neural networks have created powerful new tools to classify morphological structures into groups. Although these two approaches have already been combined, there has been less attention to how such combinations perform relative to more traditional methods. Here we use geometric morphometric data and neural networks to identify from which species upper-jaw teeth from carcharhiniform sharks in the genus Carcharhinus originated, and these results are compared to more traditional classification methods. In addition to the methodological applications of this comparison, an ability to identify shark teeth would facilitate the incorporation of shark teeth's vast fossil record into evolutionary studies. Using geometric morphometric data originating from Naylor and Marcus (1994), we built two types of neural networks, multilayer perceptrons and radial basis function neural networks, to classify teeth from C. acronotus, C. leucas, C. limbatus, and C. plumbeus, and we also classified the teeth using linear discriminant analysis. All classification schemes were trained using the right upper-jaw teeth of 15 individuals. Among these three methods, the multilayer perceptron performed the best, followed by linear discriminant analysis, and then the radial basis function neural network. All three classification systems appear to be more accurate than previous efforts to classify Carcharhinus teeth using linear distances between landmarks and linear discriminant analysis. In all three classification systems, misclassified teeth tended to originate either near the symphysis or near the jaw angle, though an additional peak occurred between these two structures. To assess whether smaller training sets would lead to comparable accuracies, we used a multilayer perceptron to classify teeth from the same species but now based on a training set of right upper-jaw teeth from only five individuals.
Although not as accurate as the network based on 15 individuals, the network performed favorably. As a final test, we built a multilayer perceptron to classify teeth from C. altimus, C. obscurus, and C. plumbeus, which have more similar upper-jaw teeth than the original four species, based on training sets of five individuals. Again, the classification system performed better than a system that combines linear measurements and discriminant function analysis. Given the high accuracies for all three systems, it appears that the use of geometric morphometric data has a great impact on the accuracy of the classification system, whereas the exact method of classification tends to make less of a difference. These results may be applicable to other systems and other morphological structures. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2013. / November 8, 2013. / Artificial Neural Network, Carcharhinus, Classification, Geometric Morphometrics, Linear Discriminant Analysis, Teeth / Includes bibliographical references. / Dennis E. Slice, Professor Directing Thesis; Anke Meyer-Baese, Committee Member.
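As a baseline for the comparison above, linear discriminant analysis reduces to class means, a pooled within-class covariance, and one linear score per class. The minimal classifier below is a generic sketch (including an assumed small regularization term), not the study's exact pipeline or data.

```python
import numpy as np

class LDA:
    """Minimal linear discriminant analysis: shared covariance and
    class means; classify by the largest linear discriminant score."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.means = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.priors = np.array([np.mean(y == c) for c in self.classes])
        # pooled within-class covariance, lightly regularized (assumption)
        Xc = np.vstack([X[y == c] - m for c, m in zip(self.classes, self.means)])
        self.icov = np.linalg.inv(Xc.T @ Xc / len(X) + 1e-6 * np.eye(X.shape[1]))
        return self

    def predict(self, X):
        scores = (X @ self.icov @ self.means.T
                  - 0.5 * np.sum(self.means @ self.icov * self.means, axis=1)
                  + np.log(self.priors))
        return self.classes[np.argmax(scores, axis=1)]
```

Because the shared-covariance assumption makes the decision boundaries linear, LDA is a natural yardstick against the nonlinear boundaries a multilayer perceptron can learn.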
59.
Multi-GPU Solutions of Geophysical PDEs with Radial Basis Function-Generated Finite Differences
Many numerical methods based on Radial Basis Functions (RBFs) are gaining popularity in the geosciences due to their competitive accuracy, functionality on unstructured meshes, and natural extension into higher dimensions. One method in particular, the Radial Basis Function-generated Finite Differences (RBF-FD), is drawing attention due to its comparatively low computational complexity versus other RBF methods, high-order accuracy (6th to 10th order is common), and parallel nature. Similar to classical Finite Differences (FD), RBF-FD computes weighted differences of stencil node values to approximate derivatives at stencil centers. The method differs from classical FD in that the test functions used to calculate the differentiation weights are n-dimensional RBFs rather than one-dimensional polynomials. This allows for generalization to n-dimensional space on completely scattered node layouts. Although RBF-FD was first proposed nearly a decade ago, it is only now gaining a critical mass to compete against well-known competitors in modeling like FD, Finite Volume and Finite Element. To truly contend, RBF-FD must transition from single-threaded MATLAB environments to large-scale parallel architectures. Many HPC systems around the world have made the transition to Graphics Processing Unit (GPU) accelerators as a solution for added parallelism and higher throughput. Some systems offer significantly more GPUs than CPUs. As the problem size, N, grows larger, it behooves us to work on parallel architectures, be they CPUs or GPUs. In addition to demonstrating the ability to scale to hundreds or thousands of compute nodes, this work introduces parallelization strategies that span RBF-FD across multi-GPU clusters. The stability and accuracy of the parallel implementation is verified through the explicit solution of two PDEs. Additionally, a parallel implementation for implicit solutions is introduced as part of continued research efforts.
This work establishes RBF-FD as a contender in the arena of distributed HPC numerical methods. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Fall Semester, 2013. / November 6, 2013. / High-order finite differencing, Multi-GPU computing, OpenCL, Parallel computing, Radial basis functions, RBF-FD / Includes bibliographical references. / Gordon Erlebacher, Professor Directing Dissertation; Mark Sussman, University Representative; Natasha Flyer, Committee Member; Dennis Slice, Committee Member; Ming Ye, Committee Member; Janet Peterson, Committee Member.
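The kernel operation of RBF-FD, computing differentiation weights by solving an RBF interpolation system on a stencil, can be sketched in one dimension. The Gaussian RBF and shape parameter below are assumptions for illustration; production RBF-FD codes typically add polynomial augmentation and operate on scattered nodes in n dimensions.

```python
import numpy as np

def rbf_fd_weights(xc, xs, eps=2.0):
    """First-derivative RBF-FD weights at center xc for 1-D stencil
    nodes xs, using Gaussian RBFs phi(r) = exp(-(eps*r)^2).
    Bare sketch: no polynomial augmentation (an omission vs practice)."""
    A = np.exp(-(eps * (xs[:, None] - xs[None, :]))**2)  # interpolation matrix
    # right-hand side: d/dx phi(|x - xj|) evaluated at the center x = xc
    b = -2.0 * eps**2 * (xc - xs) * np.exp(-(eps * (xc - xs))**2)
    return np.linalg.solve(A, b)
```

Exactly as in classical FD, applying the weights to function values on the stencil approximates the derivative at the center; only the weight-generation step differs, which is why the method carries over to scattered nodes.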
60.
Objective Front Detection from Ocean Color Data
We outline a new approach to objectively locate and define mesoscale oceanic features from satellite-derived ocean color data. Modern edge detection algorithms are robust and accurate for most applications; oceanic satellite observations, however, introduce challenges that foil many differentiation-based algorithms. Clouds, discontinuities, noise, and the low variability of pertinent data prove confounding. In this work the input data is first quantized using a centroidal Voronoi tessellation (CVT), removing noise and revealing the low-variability fronts of interest. Clouds are then removed by assigning the values of their surrounding neighbors, and the perimeters of the resulting cloudless regions localize the fronts to a small set. We then use the gradient of the quantized data as a compass to walk along each front, periodically selecting points as knots for a Hermite spline. These Hermite splines yield an analytic representation of the fronts and provide practitioners with a convenient tool to calibrate their models. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2013. / November 18, 2013. / Edge Detection, Front Detection, Oceanography / Includes bibliographical references. / Gordon Erlebacher, Professor Co-Directing Thesis; Eric Chassignet, Professor Co-Directing Thesis; Ming Ye, Committee Member; Anke Meyer-Baese, Committee Member.
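The quantization step can be illustrated with Lloyd's algorithm in value space (a one-dimensional CVT), which snaps noisy pixel values to a few representative levels; the number of levels k, the initialization, and the iteration count here are assumptions for the sketch.

```python
import numpy as np

def cvt_quantize(values, k, iters=50):
    """Quantize a 1-D array of scalar values to k levels with Lloyd's
    algorithm: alternate nearest-generator assignment and moving each
    generator to the centroid (mean) of its cell."""
    gens = np.linspace(values.min(), values.max(), k)  # initial generators
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - gens[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                gens[j] = values[labels == j].mean()   # centroid update
    return gens[labels]
```

On data with a few underlying levels plus noise, the output collapses to those levels, which is what makes the subsequent front (boundary) extraction tractable.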