The past decade has seen a transition of Graphics Processing Units (GPUs) from special-purpose graphics processors to general-purpose computational accelerators. This work investigates the use of the highly parallel GPU architecture to accelerate the Transmission Line Matrix (TLM) method in two and three dimensions. The design uses two GPU programming languages, Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL), to implement the TLM methods on NVIDIA GPUs. The GPU-accelerated two-dimensional shunt-node TLM method (2D-TLM) achieves a throughput of 340 million nodes per second (MNodes/sec), 25 times faster than a commercially available 2D-TLM solver. Initial attempts to adapt the three-dimensional Symmetrical Condensed Node (3D-SCN) TLM method yielded a peak performance of 47 MNodes/sec, a 7-times speed-up. Further refinement of the 3D-SCN TLM algorithm, combined with advanced GPU optimization strategies, raised performance to 530 MNodes/sec, a 120-times speed-up over a commercially available 3D-SCN TLM solver.
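For context, the per-node update of the lossless, stub-free 2-D shunt-node TLM method scatters the four incident port voltages independently at every node, which is what makes it a natural fit for the GPU. The sketch below is a minimal CUDA illustration of such a scatter kernel, assuming a structure-of-arrays layout with one array per port; the array names, memory layout, and launch configuration are assumptions for illustration, not the thesis' actual implementation.

```cuda
// Minimal sketch of a scatter kernel for the lossless, stub-free
// 2-D shunt-node TLM method. One thread handles one node of an
// nx-by-ny mesh; v1..v4 hold the incident port voltages (assumed names).
__global__ void tlm2dScatter(float *v1, float *v2, float *v3, float *v4,
                             int nx, int ny)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= nx || y >= ny) return;

    int idx = y * nx + x;

    // Incident port voltages at this node.
    float a = v1[idx], b = v2[idx], c = v3[idx], d = v4[idx];

    // Node voltage for the unstubbed shunt node: V = (V1 + V2 + V3 + V4) / 2.
    float v = 0.5f * (a + b + c + d);

    // Reflected voltages, Vk_r = V - Vk_i, written back in place.
    // A separate connect step would then exchange these pulses with
    // the neighbouring nodes before the next time step.
    v1[idx] = v - a;
    v2[idx] = v - b;
    v3[idx] = v - c;
    v4[idx] = v - d;
}
```

Because each node's scatter touches only its own four port values, the kernel has no inter-thread dependencies; the connect step, which moves pulses between adjacent nodes, is where memory-access patterns and the GPU optimization strategies mentioned above matter most.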
Identifier | oai:union.ndltd.org:uvic.ca/oai:dspace.library.uvic.ca:1828/2941
Date | 11 August 2010
Creators | Rossi, Filippo Vincenzo
Contributors | So, Poman Pok-Man
Source Sets | University of Victoria
Language | English
Detected Language | English
Type | Thesis
Rights | Available to the World Wide Web