
Big Data causing Big (TLB) Problems: Taming Random Memory Accesses on the GPU

GPUs are increasingly adopted for large-scale database processing, where data accesses constitute the major part of the computation. If these accesses are irregular, as in hash table lookups or random sampling, GPU performance can suffer severely: scaling such accesses beyond 2 GB of data leads to a slowdown of an order of magnitude. This paper analyzes the source of the slowdown through extensive micro-benchmarking, attributing the root cause to the Translation Lookaside Buffer (TLB). Using the micro-benchmarks, we fully analyze the TLB hierarchy and structure on two different GPU architectures, identifying previously unpublished TLB sizes that can be used for efficient large-scale application tuning. Based on this knowledge, we propose a TLB-conscious approach to mitigate the slowdown for algorithms with irregular memory access. Applied to two fundamental database operations, random sampling and hash-based grouping, the approach dramatically reduces the slowdown and yields a performance increase of up to 13×.
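To illustrate the access pattern the abstract refers to, the following minimal CUDA sketch shows a data-dependent gather of the kind produced by hash table probing or random sampling. It is not the paper's code; the kernel name random_gather, the buffer sizes, and the constants are illustrative assumptions. Neighbouring threads read from unrelated pages of a large table, so once the table grows beyond the address range the GPU's TLB can cover (the abstract reports a sharp drop beyond 2 GB), most loads also miss in the TLB.

// Illustrative sketch only; sizes and names are assumptions, not the paper's benchmark.
#include <cstdint>
#include <cstdio>
#include <cuda_runtime.h>

__global__ void random_gather(const uint64_t *table, size_t table_len,
                              const uint32_t *indices, uint64_t *out,
                              size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) {
        // Data-dependent index: consecutive threads land on distant pages,
        // defeating both memory coalescing and TLB-entry reuse.
        size_t pos = indices[i] % table_len;
        out[i] = table[pos];
    }
}

int main() {
    const size_t table_len = 1ull << 28;   // 2 GiB of uint64_t values (illustrative size)
    const size_t n = 1ull << 24;           // number of random probes
    uint64_t *d_table, *d_out;
    uint32_t *d_idx;
    cudaMalloc(&d_table, table_len * sizeof(uint64_t));
    cudaMalloc(&d_idx, n * sizeof(uint32_t));
    cudaMalloc(&d_out, n * sizeof(uint64_t));
    // Filling the table and the index buffer is omitted; in a real workload the
    // indices would come from a hash function or a random-number generator.
    random_gather<<<(unsigned)((n + 255) / 256), 256>>>(d_table, table_len, d_idx, d_out, n);
    cudaDeviceSynchronize();
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(d_table);
    cudaFree(d_idx);
    cudaFree(d_out);
    return 0;
}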

Identifier: oai:union.ndltd.org:DRESDEN/oai:qucosa:de:qucosa:79459
Date: 13 June 2022
Creators: Karnagel, Tomas; Ben-Nun, Tal; Werner, Matthias; Habich, Dirk; Lehner, Wolfgang
Publisher: ACM
Source Sets: Hochschulschriftenserver (HSSS) der SLUB Dresden
Language: English
Detected Language: English
Type: info:eu-repo/semantics/acceptedVersion, doc-type:conferenceObject, info:eu-repo/semantics/conferenceObject, doc-type:Text
Rights: info:eu-repo/semantics/openAccess
Relation: 978-1-4503-5025-9, 6, 10.1145/3076113.3076115, info:eu-repo/grantAgreement/Deutsche Forschungsgemeinschaft/Exzellenzcluster/194636624//Zentrum für Perspektiven in der Elektronik Dresden/EXC 1056, info:eu-repo/grantAgreement/Deutsche Forschungsgemeinschaft/Schwerpunktprogramme/214420555//Software für Exascale Computing/SPP 1648