X-ray diffraction tomography (XDT) probes the material composition of objects, providing improved contrast between materials compared with conventional transmission-based computed tomography (CT). In this work, a small-angle approximation to Bragg's equation of diffraction is coupled with parallelized computing on Graphics Processing Units (GPUs) to accelerate XDT simulations. The approximation gives rise to a simple yet useful proportionality between momentum transfer, the radial distance of the diffracted signal with respect to the incoming beam's location, and the depth of the material, so that ray tracing may be parallelized. NVIDIA's OptiX ray-tracing engine, a parallelized pipeline for GPUs, is employed to perform XDT by tracing rays in a virtual space (x, y, z_v), where z_v is a virtual distance proportional to momentum transfer. The advantage of this approach is that ray tracing in this domain requires only 3D surface meshes, so the calculations need no voxels. The simulated XDT projections show high consistency with voxel models, with a normalized mean square difference of less than 0.66% and ray-tracing times two orders of magnitude shorter than previously reported voxel-based GPU ray-tracing results. Owing to the accelerated simulation time, XDT projections of objects with three spatial dimensions (a 4D tensor) are also reported, demonstrating the feasibility of large-scale, high-dimensional tensor tomography simulations.
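As a rough sketch of the small-angle relation referred to above (an illustration only, assuming the common momentum-transfer convention q = sin(θ)/λ and a scattering site located a distance z upstream of the detector plane; the thesis's exact constants and geometry may differ):

n\lambda = 2 d \sin\theta                            % Bragg's law
q = \sin\theta / \lambda \approx \theta / \lambda    % momentum transfer in the small-angle limit
r \approx z \tan(2\theta) \approx 2\theta z          % radial landing position of the diffracted photon
\Rightarrow \quad q \approx \frac{r}{2 \lambda z}

Under these assumptions q is proportional to r/z, i.e., the proportionality between momentum transfer, radial distance, and material depth stated in the abstract; this is consistent with a virtual coordinate z_v proportional to q, in which the diffraction geometry can be handled by straight-ray tracing in (x, y, z_v).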
Identifier | oai:union.ndltd.org:ucf.edu/oai:stars.library.ucf.edu:etd2020-1143 |
Date | 01 January 2020 |
Creators | Ulseth, Joseph |
Publisher | STARS |
Source Sets | University of Central Florida |
Language | English |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Electronic Theses and Dissertations, 2020- |