The ever-growing demand for more computing power forces hardware vendors to put an increasing number of multiprocessors into a single server system, which usually exhibits non-uniform memory access (NUMA). In-memory database systems running on NUMA platforms face issues such as increased latency and decreased bandwidth when accessing remote main memory. To cope with these NUMA-related issues, a DBMS has to allow flexible data partitioning and data placement at runtime.
In this demonstration, we present ERIS, our NUMA-aware in-memory storage engine. ERIS uses an adaptive partitioning approach that exploits the topology of the underlying NUMA platform and significantly reduces NUMA-related issues. We demonstrate throughput numbers and hardware performance counter evaluations of ERIS and a NUMA-unaware index for different workloads and configurations. All experiments are conducted on a standard server system as well as on a system consisting of 64 multiprocessors, 512 cores, and 8 TB of main memory.
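To illustrate the kind of NUMA-aware data placement the abstract refers to, the following minimal sketch in C uses libnuma to allocate one data partition per NUMA node and to bind the worker to the node that owns its partition, so accesses stay node-local. This is not ERIS code; the partition size and single-worker setup are illustrative assumptions (compile with -lnuma).

```c
/* Minimal sketch (not ERIS code): place each partition on a specific NUMA
 * node and run the owning worker on that node, avoiding remote-memory
 * latency and bandwidth penalties. Requires libnuma; link with -lnuma. */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PARTITION_BYTES (64UL * 1024 * 1024)   /* 64 MiB per partition (example size) */

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return EXIT_FAILURE;
    }

    int nodes = numa_max_node() + 1;            /* number of NUMA nodes */
    void **partitions = calloc(nodes, sizeof(void *));

    /* One partition per NUMA node: the backing memory is allocated on that node. */
    for (int node = 0; node < nodes; node++) {
        partitions[node] = numa_alloc_onnode(PARTITION_BYTES, node);
        if (partitions[node] == NULL) {
            fprintf(stderr, "allocation on node %d failed\n", node);
            return EXIT_FAILURE;
        }
        memset(partitions[node], 0, PARTITION_BYTES);  /* touch pages so they are backed locally */
    }

    /* The worker that owns partition 0 binds itself to node 0, so its
     * index lookups and scans operate on local memory only. */
    numa_run_on_node(0);
    /* ... process partition 0 here ... */

    for (int node = 0; node < nodes; node++)
        numa_free(partitions[node], PARTITION_BYTES);
    free(partitions);
    return EXIT_SUCCESS;
}
```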
Identifier | oai:union.ndltd.org:DRESDEN/oai:qucosa:de:qucosa:80393
Date | 12 August 2022
Creators | Kiefer, Tim; Kissinger, Thomas; Schlegel, Benjamin; Habich, Dirk; Molka, Daniel; Lehner, Wolfgang
Publisher | ACM
Source Sets | Hochschulschriftenserver (HSSS) der SLUB Dresden
Language | English
Detected Language | English
Type | info:eu-repo/semantics/acceptedVersion, doc-type:conferenceObject, info:eu-repo/semantics/conferenceObject, doc-type:Text
Rights | info:eu-repo/semantics/openAccess
Relation | 978-1-4503-2376-5, 10.1145/2588555.2594524, info:eu-repo/grantAgreement/Deutsche Forschungsgemeinschaft/Sonderforschungsbereich/164481002//Highly Adaptive Energy-Efficient Computing/HAEC