1 |
Hybride Datenbankarchitekturen am Beispiel der neuen SAP In-Memory-Technologie. Färber, Franz; Jäcksch, Bernhard; Lemke, Christian; Große, Philipp; Lehner, Wolfgang. 20 January 2023
The availability of new technologies such as multi-core processors, SSDs, and large main-memory capacities offers an opportunity to rethink the classical architectural approaches of database systems and to correct them in certain places. In this contribution, we present the high-level structure of the new main-memory-centric SAP technology as one approach to a commercial realization of modern architecture concepts. The central design criterion is a hybrid approach that aims to optimally support as many requirement variants as possible. After an introduction, the article walks through the most important architecture components and illustrates the basic structure of the system. For a "deep dive", two areas are discussed in detail in parts 3 and 4 of the article. On the one hand, the article takes up the aspect of physical optimization in the context of a main-memory-centric system and discusses compression and sort-order criteria that have no counterpart in the classical disk-centric approach. On the other hand, the support for planning applications is sketched, giving an insight into the specific support of one application domain ("business planning") and the principal extensions for complex operations that directly support the planning functionality built on top of them.
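As a rough illustration of the physical optimization discussed above, the following C++ sketch shows dictionary encoding of a column with a sorted dictionary, so that a range predicate can be evaluated directly on small integer codes. It is a minimal sketch under simplified assumptions; the data structures and names are illustrative and do not reflect SAP's actual implementation.

#include <algorithm>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Minimal sketch of dictionary encoding in a main-memory column store:
// distinct values live in a sorted dictionary, the column itself stores only
// small integer codes (value IDs); a sorted dictionary lets range predicates
// be evaluated on the codes without touching the strings.
struct DictionaryColumn {
    std::vector<std::string> dictionary;  // sorted distinct values
    std::vector<uint32_t> codes;          // one value ID per row
};

DictionaryColumn encode(const std::vector<std::string>& rows) {
    DictionaryColumn col;
    col.dictionary = rows;
    std::sort(col.dictionary.begin(), col.dictionary.end());
    col.dictionary.erase(std::unique(col.dictionary.begin(), col.dictionary.end()),
                         col.dictionary.end());
    for (const auto& v : rows) {
        auto it = std::lower_bound(col.dictionary.begin(), col.dictionary.end(), v);
        col.codes.push_back(static_cast<uint32_t>(it - col.dictionary.begin()));
    }
    return col;
}

int main() {
    DictionaryColumn col = encode({"Dresden", "Berlin", "Dresden", "Walldorf"});
    // The predicate value >= "Dresden" becomes a comparison on integer codes.
    uint32_t bound = static_cast<uint32_t>(
        std::lower_bound(col.dictionary.begin(), col.dictionary.end(),
                         std::string("Dresden")) - col.dictionary.begin());
    for (size_t row = 0; row < col.codes.size(); ++row)
        if (col.codes[row] >= bound) std::cout << "match at row " << row << "\n";
}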
|
2 |
Analytical Query Processing Based on Continuous Compression of Intermediates. Damme, Patrick. 02 October 2020
Nowadays, increasingly large amounts of data are being collected in numerous areas ranging from science to industry. To gain valuable insights from these data, the importance of Online Analytical Processing (OLAP) workloads is constantly growing. At the same time, the hardware landscape is continuously evolving. On the one hand, the increasing capacities of DRAM allow database systems to store their entire data in main memory. Furthermore, the performance of microprocessors has improved tremendously in recent years through the use of sophisticated hardware techniques, such as Single Instruction Multiple Data (SIMD) extensions, promising hitherto unknown processing speeds. On the other hand, the main memory bandwidth has not increased proportionately, such that data access is now the main bottleneck for efficient data processing.
To face these developments, in-memory column stores have emerged as a new database architecture. These systems store each attribute of a relation separately in memory as a contiguous sequence of values. It is state of the art to encode all values as integers and to apply lossless lightweight integer compression to reduce the data size. This offers several advantages, ranging from lower transfer times between RAM and CPU and better utilization of the cache hierarchy to fast direct processing of compressed data. However, compression also incurs a certain computational overhead. State-of-the-art systems focus on the compression of the base data. However, the intermediate results generated during the execution of complex analytical queries can exceed the base data in number and total size. Since, in in-memory systems, accessing intermediates is as expensive as accessing base data, intermediates should be handled just as efficiently.
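To give a concrete, hedged impression of what a lightweight integer compression algorithm looks like, the following C++ sketch implements byte-wise null suppression (a varint scheme): small integers are stored in fewer bytes, with seven payload bits per byte plus a continuation bit. It is a generic illustration, not one of the specific algorithms surveyed in this thesis.

#include <cstdint>
#include <iostream>
#include <vector>

// Compress a column of 32-bit integers into a byte stream: values below 128
// occupy a single byte, larger values spill into additional bytes.
std::vector<uint8_t> compress(const std::vector<uint32_t>& values) {
    std::vector<uint8_t> out;
    for (uint32_t v : values) {
        while (v >= 0x80) { out.push_back(static_cast<uint8_t>(v) | 0x80); v >>= 7; }
        out.push_back(static_cast<uint8_t>(v));
    }
    return out;
}

std::vector<uint32_t> decompress(const std::vector<uint8_t>& bytes) {
    std::vector<uint32_t> out;
    uint32_t v = 0; int shift = 0;
    for (uint8_t b : bytes) {
        v |= static_cast<uint32_t>(b & 0x7F) << shift;
        if (b & 0x80) { shift += 7; } else { out.push_back(v); v = 0; shift = 0; }
    }
    return out;
}

int main() {
    std::vector<uint32_t> column = {3, 17, 300, 70000, 5};
    auto packed = compress(column);
    std::cout << packed.size() << " bytes instead of " << column.size() * 4 << "\n";
    auto restored = decompress(packed);
    std::cout << (restored == column ? "round trip ok" : "mismatch") << "\n";
}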
While there are approaches that try to avoid intermediates whenever possible, we envision the orthogonal approach of efficiently representing intermediates using lightweight integer compression algorithms to reduce memory accesses. More precisely, our vision is a balanced query processing based on lightweight compression of intermediate results in in-memory column stores. That means all intermediates shall be represented using a suitable lightweight integer compression algorithm and processed by compression-enabled query operators to avoid a full decompression, whereby compression shall be used in a balanced way to ensure that its benefits outweigh its costs.
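A minimal sketch of what a compression-enabled operator can look like, assuming a simple run-length-encoded (RLE) intermediate: the aggregation consumes the runs directly and never materializes the decompressed values. Format and operator are illustrative simplifications, not the processing model developed in the thesis.

#include <cstdint>
#include <iostream>
#include <utility>
#include <vector>

// The intermediate is kept as (value, run length) pairs; the aggregation
// performs one multiply-add per run instead of one add per decompressed value.
using RleColumn = std::vector<std::pair<uint32_t, uint32_t>>;

uint64_t sum_compressed(const RleColumn& col) {
    uint64_t sum = 0;
    for (const auto& run : col)
        sum += static_cast<uint64_t>(run.first) * run.second;
    return sum;
}

int main() {
    // Represents the uncompressed sequence 7,7,7,7,2,2,9,9,9 as three runs.
    RleColumn intermediate = {{7, 4}, {2, 2}, {9, 3}};
    std::cout << "sum = " << sum_compressed(intermediate) << "\n";  // 7*4 + 2*2 + 9*3 = 59
}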
In this thesis, we address all important aspects of this vision. We provide an extensive overview of existing lightweight integer compression algorithms and conduct a systematic experimental survey of several of these algorithms to gain a deep understanding of their behavior. We propose a novel compression-enabled processing model for in-memory column stores that allows a continuous compression of intermediates. Additionally, we develop novel cost-based strategies for a compression-aware secondary query optimization to make effective use of our processing model. Our end-to-end evaluation using the well-known Star Schema Benchmark shows that our envisioned compression of intermediates can significantly improve both the memory footprint and the runtime of complex analytical queries.
1 Introduction
1.1 Contributions
1.2 Outline
2 Lightweight Integer Compression
2.1 Foundations
2.1.1 Disambiguation of Lightweight Integer Compression
2.1.2 Overview of Lightweight Integer Compression
2.1.3 State-of-the-Art in Lightweight Integer Compression
2.2 Experimental Survey
2.2.1 Related Work
2.2.2 Experimental Setup and Methodology
2.2.3 Evaluation of the Impact of the Data Characteristics
2.2.4 Evaluation of the Impact of the Hardware Characteristics
2.2.5 Evaluation of the Impact of the SIMD Extension
2.3 Summary and Discussion
3 Processing Compressed Intermediates
3.1 Processing Model for Compressed Intermediates
3.1.1 Related Work
3.1.2 Description of the Underlying Processing Model
3.1.3 Integration of Compression into Query Operators
3.1.4 Integration of Compression into the Overall Query Execution
3.1.5 Efficient Implementation
3.1.6 Evaluation
3.2 Direct Integer Morphing Algorithms
3.2.1 Related Work
3.2.2 Integer Morphing Algorithms
3.2.3 Example Algorithms
3.2.4 Evaluation
3.3 Summary and Discussion
4 Compression-Aware Query Optimization Strategies
4.1 Related Work
4.2 Compression-Aware Secondary Query Optimization
4.2.1 Compression-Level: Selecting a Suitable Algorithm
4.2.2 Operator-Level: Selecting Suitable Input/Output Formats
4.2.3 QEP-Level: Selecting Suitable Formats for All Involved Columns
4.3 Evaluation
4.3.1 Compression-Level: Selecting a Suitable Algorithm
4.3.2 Operator-Level: Selecting Suitable Input/Output Formats
4.3.3 Lessons Learned
4.4 Summary and Discussion
5 End-to-End Evaluation
5.1 Experimental Setup and Methodology
5.2 A Simple OLAP Query
5.3 Complex OLAP Queries: The Star Schema Benchmark
5.4 Summary and Discussion
6 Conclusion
6.1 Summary of this Thesis
6.2 Directions for Future Work
Bibliography
List of Figures
List of Tables
|
3 |
SIMD-MIMD cocktail in a hybrid memory glass: shaken, not stirred. Zarubin, Mikhail; Damme, Patrick; Krause, Alexander; Habich, Dirk; Lehner, Wolfgang. 23 November 2021
Hybrid memory systems consisting of DRAM and NVRAM offer a great opportunity for column-oriented data systems to persistently store and efficiently process columnar data entirely in main memory. While vectorization (SIMD) of query operators is state of the art for increasing single-thread performance, it has to be combined with thread-level parallelism (MIMD) to satisfy the growing need for higher performance and scalability. However, it is not well investigated how such a SIMD-MIMD interplay can be leveraged efficiently in hybrid memory systems. On the one hand, this paper delivers an extensive experimental evaluation of typical workloads on columnar data. We reveal that the choice of the best-performing SIMD variant differs greatly between the two memory types. Moreover, we show that the throughput of concurrent queries can be boosted (up to 2x) when various SIMD flavors are combined in a multi-threaded execution. On the other hand, to enable that optimization, we propose an adaptive SIMD-MIMD cocktail approach that incurs only a negligible runtime overhead.
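The following C++ sketch only illustrates the general mechanism behind such a cocktail: concurrent queries execute different flavors of the same scan kernel, selected per thread. The "unrolled" variant is a stand-in for a real SIMD code path, and the DRAM/NVRAM placement logic of the paper is omitted; all names here are illustrative assumptions.

#include <cstdint>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Two flavors of the same summation kernel; a real system would pick between
// scalar, SSE, AVX, etc. depending on where the scanned column resides.
uint64_t sum_scalar(const std::vector<uint32_t>& col) {
    return std::accumulate(col.begin(), col.end(), uint64_t{0});
}

uint64_t sum_unrolled(const std::vector<uint32_t>& col) {  // placeholder for a SIMD kernel
    uint64_t a = 0, b = 0;
    size_t i = 0;
    for (; i + 2 <= col.size(); i += 2) { a += col[i]; b += col[i + 1]; }
    for (; i < col.size(); ++i) a += col[i];
    return a + b;
}

int main() {
    std::vector<uint32_t> column(1 << 20, 1);
    uint64_t r1 = 0, r2 = 0;
    // Two concurrent "queries", each running its own flavor of the scan kernel.
    std::thread t1([&] { r1 = sum_scalar(column); });
    std::thread t2([&] { r2 = sum_unrolled(column); });
    t1.join(); t2.join();
    std::cout << r1 << " " << r2 << "\n";  // both 1048576
}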
|
4 |
Resiliency Mechanisms for In-Memory Column Stores. Kolditz, Till. 15 February 2019
The key objective of database systems is to reliably manage data, while high query throughput and low query latency are core requirements. To date, database research has mostly concentrated on the latter. However, due to the constant shrinking of transistor feature sizes, integrated circuits are becoming more and more unreliable, and transient hardware errors in the form of multi-bit flips are becoming more prominent. A recent study (2013) of a large high-performance cluster with around 8,500 nodes measured a failure rate of 40 FIT per DRAM device. For that system, this means that a single- or multi-bit flip occurs every 10 hours, which is unacceptably high for enterprise and HPC scenarios. Causes include cosmic rays, heat, and electrical crosstalk, the latter being actively exploited by the RowHammer attack. It has been shown that memory cells are more prone to bit flips than logic gates, and several surveys found multi-bit flip events in the main memory modules of today's data centers. Due to the shift towards in-memory data management systems, where all business-related data and query intermediate results are kept solely in fast main memory, such systems are in great danger of delivering corrupt results to their users. Hardware techniques cannot be scaled to compensate for the exponentially increasing error rates. In other domains, there is an increasing interest in software-based solutions to this problem, but the proposed methods come with huge runtime and/or storage overheads, which are unacceptable for in-memory data management systems.
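As a hedged illustration of the software-based detection approach this thesis builds on (AN coding, introduced in the following paragraph), the C++ sketch below multiplies every data word by a constant A and detects bit flips with a single modulo check. The constant and word widths are illustrative choices only, not one of the thesis's "golden As".

#include <cstdint>
#include <iostream>

// AN coding in a nutshell: a data word n is stored as the code word A*n.
// Any corruption that turns the code word into a non-multiple of A is
// detected by one modulo check; decoding is a division (or, faster, a
// multiplication with the modular multiplicative inverse of A).
constexpr uint64_t A = 641;  // illustrative constant, assumed for this sketch

uint64_t an_encode(uint32_t n) { return static_cast<uint64_t>(n) * A; }
bool     an_is_valid(uint64_t code) { return code % A == 0; }
uint32_t an_decode(uint64_t code) { return static_cast<uint32_t>(code / A); }

int main() {
    uint64_t code = an_encode(123456);
    std::cout << "valid: " << an_is_valid(code) << "\n";                 // 1
    uint64_t corrupted = code ^ (1ULL << 13);                            // simulate a bit flip
    std::cout << "corrupted valid: " << an_is_valid(corrupted) << "\n";  // 0: a single-bit flip never yields a multiple of an odd A
    std::cout << "decoded: " << an_decode(code) << "\n";                 // 123456
}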
In this thesis, we investigate how to integrate bit flip detection mechanisms into in-memory data management systems. To achieve this goal, we first build an understanding of bit flip detection techniques and select two error codes, AN codes and XOR checksums, suited to the requirements of in-memory data management systems. The most important requirement is the effectiveness of the codes in detecting bit flips. We meet this goal through AN codes, which exhibit better and adaptable error detection capabilities compared to those found in today's hardware. The second most important goal is efficiency in terms of coding latency. We meet this by introducing fundamental performance improvements to AN codes and by vectorizing the operations of both chosen codes. We integrate bit flip detection mechanisms into the lowest storage layer and the query processing layer in such a way that the rest of the data management system and the user can stay oblivious of any error detection. This covers both base columns and pointer-heavy index structures such as the ubiquitous B-Tree. Additionally, our approach allows adaptable, on-the-fly bit flip detection during query processing, with only very little impact on query latency. AN coding allows recoding intermediate results with virtually no performance penalty. We support our claims with exhaustive runtime and throughput measurements throughout the thesis and with an end-to-end evaluation using the Star Schema Benchmark. To the best of our knowledge, we are the first to present such holistic and fast bit flip detection in a large software infrastructure such as an in-memory data management system. Finally, most of the source code used to obtain the results in this thesis is open source and freely available.
1 INTRODUCTION
1.1 Contributions of this Thesis
1.2 Outline
2 PROBLEM DESCRIPTION AND RELATED WORK
2.1 Reliable Data Management on Reliable Hardware
2.2 The Shift Towards Unreliable Hardware
2.3 Hardware-Based Mitigation of Bit Flips
2.4 Data Management System Requirements
2.5 Software-Based Techniques For Handling Bit Flips
2.5.1 Operating System-Level Techniques
2.5.2 Compiler-Level Techniques
2.5.3 Application-Level Techniques
2.6 Summary and Conclusions
3 ANALYSIS OF CODING TECHNIQUES
3.1 Selection of Error Codes
3.1.1 Hamming Coding
3.1.2 XOR Checksums
3.1.3 AN Coding
3.1.4 Summary and Conclusions
3.2 Probabilities of Silent Data Corruption
3.2.1 Probabilities of Hamming Codes
3.2.2 Probabilities of XOR Checksums
3.2.3 Probabilities of AN Codes
3.2.4 Concrete Error Models
3.2.5 Summary and Conclusions
3.3 Throughput Considerations
3.3.1 Test Systems Descriptions
3.3.2 Vectorizing Hamming Coding
3.3.3 Vectorizing XOR Checksums
3.3.4 Vectorizing AN Coding
3.3.5 Summary and Conclusions
3.4 Comparison of Error Codes
3.4.1 Effectiveness
3.4.2 Efficiency
3.4.3 Runtime Adaptability
3.5 Performance Optimizations for AN Coding
3.5.1 The Modular Multiplicative Inverse
3.5.2 Faster Softening
3.5.3 Faster Error Detection
3.5.4 Comparison to Original AN Coding
3.5.5 The Multiplicative Inverse Anomaly
3.6 Summary
4 BIT FLIP DETECTING STORAGE
4.1 Column Store Architecture
4.1.1 Logical Data Types
4.1.2 Storage Model
4.1.3 Data Representation
4.1.4 Data Layout
4.1.5 Tree Index Structures
4.1.6 Summary
4.2 Hardened Data Storage
4.2.1 Hardened Physical Data Types
4.2.2 Hardened Lightweight Compression
4.2.3 Hardened Data Layout
4.2.4 UDI Operations
4.2.5 Summary and Conclusions
4.3 Hardened Tree Index Structures
4.3.1 B-Tree Verification Techniques
4.3.2 Justification For Further Techniques
4.3.3 The Error Detecting B-Tree
4.4 Summary
5 BIT FLIP DETECTING QUERY PROCESSING
5.1 Column Store Query Processing
5.2 Bit Flip Detection Opportunities
5.2.1 Early Onetime Detection
5.2.2 Late Onetime Detection
5.2.3 Continuous Detection
5.2.4 Miscellaneous Processing Aspects
5.2.5 Summary and Conclusions
5.3 Hardened Intermediate Results
5.3.1 Materialization of Hardened Intermediates
5.3.2 Hardened Bitmaps
5.4 Summary
6 END-TO-END EVALUATION
6.1 Prototype Implementation
6.1.1 AHEAD Architecture
6.1.2 Diversity of Physical Operators
6.1.3 One Concrete Operator Realization
6.1.4 Summary and Conclusions
6.2 Performance of Individual Operators
6.2.1 Selection on One Predicate
6.2.2 Selection on Two Predicates
6.2.3 Join Operators
6.2.4 Grouping and Aggregation
6.2.5 Delta Operator
6.2.6 Summary and Conclusions
6.3 Star Schema Benchmark Queries
6.3.1 Query Runtimes
6.3.2 Improvements Through Vectorization
6.3.3 Storage Overhead
6.3.4 Summary and Conclusions
6.4 Error Detecting B-Tree
6.4.1 Single Key Lookup
6.4.2 Key Value-Pair Insertion
6.5 Summary
7 SUMMARY AND CONCLUSIONS
7.1 Future Work
A APPENDIX
A.1 List of Golden As
A.2 More on Hamming Coding
A.2.1 Code examples
A.2.2 Vectorization
BIBLIOGRAPHY
LIST OF FIGURES
LIST OF TABLES
LIST OF LISTINGS
LIST OF ACRONYMS
LIST OF SYMBOLS
LIST OF DEFINITIONS
|
5 |
Efficient Transaction Processing in SAP HANA Database: The End of a Column Store Myth. Sikka, Vishal; Färber, Franz; Lehner, Wolfgang; Cha, Sang Kyun; Peh, Thomas; Bornhövd, Christof. 11 August 2022
The SAP HANA database is the core of SAP's new data management platform. The overall goal of the SAP HANA database is to provide a generic yet powerful system for different query scenarios, both transactional and analytical, on the same data representation within a highly scalable execution environment. Within this paper, we highlight the main features that differentiate the SAP HANA database from classical relational database engines. To that end, we outline the general architecture and design criteria of the SAP HANA database in a first step. In a second step, we challenge the common belief that column store data structures are only superior in analytical workloads and not well suited for transactional workloads. We outline the concept of record life cycle management, which uses different storage formats for the different stages of a record. We not only discuss the general concept but also dive into some of the details of how to efficiently propagate records through their life cycle and move database entries from write-optimized to read-optimized storage formats. In summary, the paper aims at illustrating how the SAP HANA database is able to work efficiently in analytical as well as transactional workload environments.
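A minimal C++ sketch of the record life cycle idea, under the simplifying assumption of just one write-optimized delta and one read-optimized main structure; SAP HANA's actual multi-stage design (including its dictionary-encoded main store) is considerably more involved, so this only illustrates the mechanism.

#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

// New records enter a small write-optimized delta; a periodic merge moves
// them into a larger read-optimized, sorted main structure. Reads consult both.
struct Column {
    std::vector<int64_t> main_sorted;   // read-optimized: sorted, scan-friendly
    std::vector<int64_t> delta;         // write-optimized: append-only

    void insert(int64_t v) { delta.push_back(v); }  // cheap OLTP-style insert

    void merge() {                       // propagate records to the read-optimized format
        main_sorted.insert(main_sorted.end(), delta.begin(), delta.end());
        std::sort(main_sorted.begin(), main_sorted.end());
        delta.clear();
    }

    bool contains(int64_t v) const {
        return std::binary_search(main_sorted.begin(), main_sorted.end(), v) ||
               std::find(delta.begin(), delta.end(), v) != delta.end();
    }
};

int main() {
    Column c;
    c.insert(42); c.insert(7);
    std::cout << c.contains(42) << "\n";  // 1, found in the delta
    c.merge();
    std::cout << c.contains(7) << "\n";   // 1, now found in the sorted main
}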
|
6 |
MorphStore — In-Memory Query Processing based on Morphing Compressed Intermediates LIVE. Habich, Dirk; Damme, Patrick; Ungethüm, Annett; Pietrzyk, Johannes; Krause, Alexander; Hildebrandt, Juliana; Lehner, Wolfgang. 15 September 2022
In this demo, we present MorphStore, an in-memory column store with a novel compression-aware query processing concept. Compression using lightweight integer compression algorithms already plays an important role in existing in-memory column stores, but mainly for base data. The continuous handling of compression from the base data to the intermediate results during query processing has already been discussed, but not investigated in detail, since the computational effort for compression as well as decompression is often assumed to exceed the benefit of reduced transfer cost between CPU and main memory. However, this argument increasingly loses its validity, as we show in our demo. Our novel compression-aware query processing concept is characterized by the fact that we are able to speed up query execution by morphing compressed intermediate results from one scheme to another in order to dynamically adapt to the changing data characteristics during query processing. Our morphing decisions are made using a cost-based approach.
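A hedged C++ sketch of the morphing mechanism: an uncompressed intermediate is converted to run-length encoding only if a naive size estimate says the target format pays off. MorphStore's real formats, compression-enabled operators, and cost model are far richer; all names and constants here are illustrative assumptions.

#include <cstdint>
#include <iostream>
#include <utility>
#include <vector>

using Uncompressed = std::vector<uint32_t>;
using Rle = std::vector<std::pair<uint32_t, uint32_t>>;  // (value, run length)

// Morph an uncompressed intermediate into RLE by collapsing equal neighbors.
Rle morph_to_rle(const Uncompressed& in) {
    Rle out;
    for (uint32_t v : in) {
        if (!out.empty() && out.back().first == v) ++out.back().second;
        else out.emplace_back(v, 1u);
    }
    return out;
}

int main() {
    Uncompressed intermediate = {5, 5, 5, 5, 5, 8, 8, 8, 8, 1};
    Rle rle = morph_to_rle(intermediate);
    // Naive cost estimate: 4 bytes per uncompressed value vs. 8 bytes per run.
    size_t uncompressed_bytes = intermediate.size() * 4;
    size_t rle_bytes = rle.size() * 8;
    bool keep_rle = rle_bytes < uncompressed_bytes;   // morph only if it pays off
    std::cout << uncompressed_bytes << " vs " << rle_bytes << " -> "
              << (keep_rle ? "morph to RLE" : "stay uncompressed") << "\n";
}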
|
7 |
Workload- and Data-based Automated Design for a Hybrid Row-Column Storage Model and Bloom Filter-Based Query Processing for Large-Scale DICOM Data Management. Nguyen, Cong-Danh. 04 May 2018
In the health care industry, ever-increasing medical image data, the development of imaging technologies, the long-term retention of medical data, and the increase of image resolution are causing a tremendous growth in data volume. In addition, the variety of acquisition devices and the differing preferences of physicians and other health-care professionals have led to a high variety in data. Although the DICOM (Digital Imaging and Communications in Medicine) standard is today widely adopted to store and transfer medical data, DICOM data still has the 3V characteristics of Big Data: high volume, high variety, and high velocity. Besides, there is a variety of workloads, including Online Transaction Processing (OLTP), Online Analytical Processing (OLAP), and mixed workloads. Existing systems have limitations in dealing with these characteristics of data and workloads. In this thesis, we propose new efficient methods for storing and querying DICOM data. We propose a hybrid storage model of row and column stores, called HYTORMO, together with data storage and query processing strategies. First, HYTORMO is designed and implemented to be deployed on a large-scale environment to make it possible to manage big medical data. Second, the data storage strategy combines the use of vertical partitioning and a hybrid store to create data storage configurations that can reduce storage space demand and increase workload performance. To achieve such a data storage configuration, one of two data storage design approaches can be applied: (1) expert-based design and (2) automated design. In the former approach, experts manually create data storage configurations by grouping attributes and selecting a suitable data layout for each column group. In the latter approach, we propose a hybrid automated design framework, called HADF. HADF depends on similarity measures (between attributes) that take into consideration the combined impact of both workload- and data-specific information to generate data storage configurations: Hybrid Similarity (a weighted combination of Attribute Access and Density Similarity measures) is used to group the attributes into column groups; Inter-Cluster Access Similarity is used to determine whether two column groups will be merged or not (to reduce the number of joins); and Intra-Cluster Access Similarity is applied to decide whether a column group will be stored in a row or a column store. Finally, we propose a suitable and efficient query processing strategy built on top of HYTORMO. It considers the use of both inner joins and left-outer joins to prevent the data loss that would occur if only inner joins were used between vertically partitioned tables. Furthermore, an Intersection Bloom filter is applied to remove irrelevant data from the input tables of join operations and thereby reduce network I/O cost. We provide experimental evaluations to validate the benefits of the proposed methods on real DICOM datasets. Experimental results show that the mixed use of both row and column stores outperforms a pure row store and a pure column store. The combined impact of both workload- and data-specific information helps HADF produce good data storage configurations. Moreover, the query processing strategy with the Intersection Bloom filter can improve the execution time of an experimental query by up to 50% compared to the case where no such filter is applied.
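A minimal C++ sketch of the Intersection Bloom filter idea, assuming two hash functions and an arbitrary filter size: each join input builds a Bloom filter over its join keys, the filters are ANDed, and rows whose keys miss the intersected filter are discarded before the join. Matching keys are never lost, since a key present in both inputs is set in both filters; only false positives can slip through.

#include <bitset>
#include <cstdint>
#include <functional>
#include <iostream>
#include <vector>

constexpr size_t BITS = 1 << 16;          // illustrative filter size
using Bloom = std::bitset<BITS>;

// Two simple, illustrative hash functions for the sketch.
size_t h1(uint64_t k) { return std::hash<uint64_t>{}(k) % BITS; }
size_t h2(uint64_t k) { return (k * 0x9E3779B97F4A7C15ULL) % BITS; }

void add(Bloom& b, uint64_t k) { b.set(h1(k)); b.set(h2(k)); }
bool may_contain(const Bloom& b, uint64_t k) { return b.test(h1(k)) && b.test(h2(k)); }

int main() {
    std::vector<uint64_t> left = {1, 2, 3, 4, 5, 6};
    std::vector<uint64_t> right = {4, 5, 6, 7, 8};
    Bloom bl, br;
    for (uint64_t k : left) add(bl, k);
    for (uint64_t k : right) add(br, k);
    Bloom intersection = bl & br;         // keys plausibly present in both inputs
    size_t kept = 0;
    for (uint64_t k : left)
        if (may_contain(intersection, k)) ++kept;  // only these rows feed the join
    std::cout << "left rows surviving the pre-filter: " << kept << "\n";  // about 3, plus false positives
}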
|