
Linking Scheme code to data-parallel CUDA-C code

In Compute Unified Device Architecture (CUDA), programmers must manage the memory operations, synchronization, and utility functions of Central Processing Unit (CPU) programs that control and issue data-parallel, general-purpose programs running on a Graphics Processing Unit (GPU). NVIDIA Corporation developed the CUDA framework, providing a language extension of C called CUDA-C, to enable the development of data-parallel GPU programs that accelerate scientific and engineering applications. We extend the Gambit Scheme compiler with a foreign-function interface composed of Scheme and CUDA-C constructs, which links Scheme code to data-parallel CUDA-C code and supports high-performance parallel computation with reasonably low runtime overhead. We evaluate our implementation in Gambit with six test cases, each implemented in both Scheme and CUDA-C, and observe 0–35% overhead in the usual case. Our work enables Scheme programmers to write expressive programs that control and issue data-parallel programs running on GPUs, while also reducing hands-on memory management.
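The thesis's own Scheme and CUDA-C constructs for the extended Gambit compiler are not reproduced in this record. As a rough illustration of the kind of linkage the abstract describes, the sketch below wraps a hypothetical CUDA-C kernel behind Gambit's standard c-declare and c-lambda FFI forms. The kernel square, the host launcher sum_of_squares, and the Scheme binding gpu-sum-of-squares are assumptions made for illustration only, not the interface developed in the thesis; the embedded C block would have to be compiled with nvcc and linked against the CUDA runtime.

;; A minimal sketch, assuming Gambit's stock FFI rather than the
;; thesis's extended constructs.  The CPU-side Scheme code issues a
;; data-parallel kernel on the GPU through a host launcher.

(c-declare #<<c-declare-end

#include <stdlib.h>
#include <cuda_runtime.h>

/* Hypothetical data-parallel kernel: each thread squares its own index. */
__global__ void square(int n, float *out) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) out[i] = (float)i * (float)i;
}

/* Hypothetical host-side launcher callable from Scheme: allocates device
   memory, launches the kernel with 256 threads per block, copies the
   result back, and returns the sum of the squares (or -1.0 on error). */
double sum_of_squares(int n) {
  float *d_out;
  float *h_out = (float*)malloc(n * sizeof(float));
  double acc = 0.0;
  int i;
  if (h_out == NULL) return -1.0;
  if (cudaMalloc((void**)&d_out, n * sizeof(float)) != cudaSuccess) {
    free(h_out);
    return -1.0;
  }
  square<<<(n + 255) / 256, 256>>>(n, d_out);
  cudaDeviceSynchronize();
  cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
  for (i = 0; i < n; i++) acc += h_out[i];
  cudaFree(d_out);
  free(h_out);
  return acc;
}

c-declare-end
)

;; Expose the host launcher to Scheme through Gambit's c-lambda form.
(define gpu-sum-of-squares
  (c-lambda (int) double "sum_of_squares"))

;; Example call (on a machine with a CUDA-capable GPU):
;;   (gpu-sum-of-squares 1024)  =>  357389824.

In this sketch the memory management, synchronization, and kernel launch all live in hand-written CUDA-C inside the c-declare block; the interface described in the abstract aims to reduce exactly this kind of hands-on bookkeeping on the Scheme programmer's side.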

Identifier: oai:union.ndltd.org:USASK/oai:ecommons.usask.ca:10388/ETD-2013-12-1416
Date: 2013 December 1900
Contributors: Dutchyn, Christopher
Source Sets: University of Saskatchewan Library
Language: English
Detected Language: English
Type: text, thesis
