
Parallelized X-Ray Tracing with GPU Ray-Tracing Engine

X-ray diffraction tomography (XDT) probes the material composition of objects, providing improved contrast between materials compared to conventional transmission-based computed tomography (CT). In this work, a small-angle approximation to Bragg's equation of diffraction is coupled with parallelized computing on Graphics Processing Units (GPUs) to accelerate XDT simulations. The approximation yields a simple yet useful proportionality between momentum transfer, the radial distance of the diffracted signal from the incoming beam's location, and the depth of the material, so that ray tracing may be parallelized. NVIDIA's OptiX ray-tracing engine, a parallelized pipeline for GPUs, is employed to perform XDT by tracing rays in a virtual space, (x, y, z_v), where z_v is a virtual distance proportional to momentum transfer. The advantage of this approach is that ray tracing in this domain requires only 3D surface meshes, so calculations proceed without the need for voxels. The simulated XDT projections show high consistency with voxel models, with a normalized mean square difference of less than 0.66%, and ray-tracing times two orders of magnitude shorter than previously reported voxel-based GPU ray-tracing results. Owing to the accelerated simulation time, XDT projections of objects with three spatial dimensions (a 4D tensor) are also reported, demonstrating the feasibility of large-scale, high-dimensional tensor tomography simulations.
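The proportionality the abstract refers to can be illustrated with a short sketch. This is an assumed form of the geometry, not the thesis's exact derivation: Bragg's equation gives the scattering angle for a given momentum transfer, and for small angles the radial landing position of the diffracted signal on the detector becomes linear in both momentum transfer and depth. The function names, symbols, and the illustrative numbers below are all hypothetical.

```python
import math

# Assumed small-angle geometry (illustrative, not the thesis's derivation):
#   Bragg:  lambda = 2 d sin(theta);  momentum transfer q = 4*pi*sin(theta)/lambda
#   A ray diffracted through angle 2*theta at depth z (distance to detector)
#   lands at radius  r = z * tan(2*theta)  ~=  (lambda * q / (2*pi)) * z
# i.e. r is proportional to q * z, which is what permits the virtual
# coordinate z_v proportional to momentum transfer.

def detector_radius(q, z, wavelength):
    """Exact small-angle-free radius r on the detector for momentum
    transfer q (1/angstrom), depth z, and wavelength (angstrom)."""
    theta = math.asin(q * wavelength / (4 * math.pi))  # Bragg angle
    return z * math.tan(2 * theta)

def detector_radius_small_angle(q, z, wavelength):
    """Linearized form: r ~= (lambda * q / (2*pi)) * z."""
    return (wavelength * q / (2 * math.pi)) * z

# Illustrative numbers only (q in 1/angstrom, z in mm, wavelength in angstrom):
q, z, lam = 1.5, 100.0, 0.5
exact = detector_radius(q, z, lam)
approx = detector_radius_small_angle(q, z, lam)
```

For small diffraction angles the two forms agree to well under a percent, which is what makes the linear (x, y, z_v) parameterization usable for parallel ray tracing.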

Identifier: oai:union.ndltd.org:ucf.edu/oai:stars.library.ucf.edu:etd2020-1143
Date: 01 January 2020
Creators: Ulseth, Joseph
Publisher: STARS
Source Sets: University of Central Florida
Language: English
Detected Language: English
Type: text
Format: application/pdf
Source: Electronic Theses and Dissertations, 2020-
