Search results for subject:"datenbanksystem"

1. Analyse der Zitierungshäufigkeit für die Datenbankkonferenz BTW
Köpcke, Hanna; Rahm, Erhard. 06 February 2019.
Citation counts are often used to assess the importance of scientific publications, journals, or entire research institutes. A frequently used indicator of a journal's importance is, for example, the "impact factor" [Amin & Mabe 2000]. Impact factors for numerous scientific journals are published annually by Thomson ISI in the Journal Citation Report (JCR). However, research results in the database area of computer science are published mainly at conferences, which are not covered by the JCR citation databases. In a recent evaluation [Rahm & Thor 2005], we showed that publications at the internationally leading database conferences SIGMOD and VLDB are cited considerably more often than publications in the leading journals TODS and the VLDB Journal. Such analyses have so far been missing for the German-speaking area. In this paper, we therefore present the results of a citation analysis for the BTW database conference series.
The BTW is the most important conference on databases and their applications in the German-speaking area and has been held every two years since 1985. Until 2001, the acronym "BTW" stood for "Datenbanksysteme in Büro, Technik und Wissenschaft" (database systems in office, engineering, and science); since the 10th BTW conference in Leipzig in 2003, the conference name has been "Datenbanksysteme für Business, Technologie und Web" (database systems for business, technology, and the web). Our analysis covers all publications that appeared at the BTW from 1985 to 2003. It is based on an integration and cleaning of data from the sources DBLP and Google Scholar.
The paper is organized as follows: Section 2 describes the data sources and how the analysis was carried out; Section 3 presents the results of the analysis.
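For context, the two-year impact factor published in the JCR is computed, for a journal J and year y, roughly as follows (the standard textbook definition, not a formula from this paper):

```latex
\mathrm{IF}_J(y) = \frac{\text{citations received in year } y \text{ to items published in } J \text{ in } y-1 \text{ and } y-2}{\text{number of citable items published in } J \text{ in } y-1 \text{ and } y-2}
```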

2. Simulation objektrelationaler Join-Verfahren in parallelen Datenbanksystemen
Bessonow, Lew. 20 October 2017.
In the area of parallel query processing in object-relational database systems (ORDBMS), it is still an open question which algorithms should be adopted from the relational and which from the object-oriented model. In this thesis, relational hash and merge join methods are compared with the object-oriented reference join; in addition, a dedicated data allocation scheme is proposed that supports parallel join processing. The performance of the methods is evaluated within a comprehensive simulation system for parallel databases ('SimPaD'). As an application example, the Bucky benchmark of Carey et al. is studied on a shared-disk architecture.
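For illustration, here is a minimal sketch of the classic relational hash join (build and probe phases) that the thesis compares against merge and reference joins; the relations and attribute names below are made up for the example, not taken from the thesis:

```python
def hash_join(build_side, probe_side, build_key, probe_key):
    """Classic in-memory hash join: build a hash table on the (ideally
    smaller) relation, then probe it with each tuple of the other one."""
    table = {}
    for row in build_side:                      # build phase
        table.setdefault(row[build_key], []).append(row)
    for row in probe_side:                      # probe phase
        for match in table.get(row[probe_key], []):
            yield {**match, **row}              # merge matching tuples

# Illustrative usage with dict-based tuples:
customers = [{"cid": 7, "name": "A"}, {"cid": 9, "name": "B"}]
orders = [{"oid": 1, "cid": 7}, {"oid": 2, "cid": 9}]
print(list(hash_join(customers, orders, "cid", "cid")))
```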

3. Flexibility in Data Management
Voigt, Hannes. 07 March 2014.
With the ongoing expansion of information technology, new fields of application requiring data management emerge virtually every day. In our knowledge culture, increasing amounts of data and a work force organized in more creativity-oriented ways also radically change traditional fields of application and question established assumptions about data management. For instance, investigative analytics and agile software development move towards a very agile and flexible handling of data. As the primary facilitators of data management, database systems have to reflect and support these developments. However, traditional database management technology, in particular relational database systems, is built on the assumption of relatively stable application domains. The need to model all data up front in a prescriptive database schema has earned relational database management systems the reputation among developers of being inflexible, dated, and cumbersome to work with. Nevertheless, relational systems still dominate the database market. They are a proven, standardized, and interoperable technology, well known in IT departments with a work force of experienced and trained developers and administrators.
This thesis aims at resolving the growing contradiction between the popularity and omnipresence of relational systems in companies and their increasingly bad reputation among developers. It adapts relational database technology towards more agility and flexibility. We envision a descriptive schema-comes-second relational database system, which is entity-oriented instead of schema-oriented; descriptive rather than prescriptive. The thesis provides four main contributions: (1) a flexible relational data model, which frees relational data management from having a prescriptive schema; (2) autonomous physical entity domains, which partition self-descriptive data according to their schema properties for better query performance; (3) a freely adjustable storage engine, which allows adapting the physical data layout to the properties of the data and of the workload; and (4) a self-managed indexing infrastructure, which autonomously collects and adapts index information in the presence of dynamic workloads and evolving schemas. The flexible relational data model is the thesis' central contribution. It describes the functional appearance of the descriptive schema-comes-second relational database system. The other three contributions improve components in the architecture of database management systems to increase the query performance and the manageability of descriptive schema-comes-second relational database systems. We are confident that these four contributions can help pave the way to a more flexible future for relational database management technology.
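As a rough illustration of the schema-comes-second idea (a hypothetical sketch, not the thesis' actual system), entities can be inserted without any prescriptive schema, and a descriptive schema is derived from the data afterwards:

```python
from collections import defaultdict

# Entities are inserted without any prescriptive schema ...
entities = [
    {"id": 1, "name": "Alice", "dept": "R&D"},
    {"id": 2, "name": "Bob", "salary": 50000},
]

# ... and a descriptive schema is derived afterwards from the
# attributes the data actually uses ("schema comes second").
def derive_schema(entities):
    schema = defaultdict(set)
    for entity in entities:
        for attr, value in entity.items():
            schema[attr].add(type(value).__name__)
    return dict(schema)

print(derive_schema(entities))
# e.g. {'id': {'int'}, 'name': {'str'}, 'dept': {'str'}, 'salary': {'int'}}
```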

4. Graph Processing in Main-Memory Column Stores
Paradies, Marcus. 29 May 2017.
More and more, both novel and traditional business applications leverage the advantages of a graph data model, such as its schema flexibility and its explicit representation of relationships between entities. As a consequence, companies are confronted with the challenge of storing, manipulating, and querying terabytes of graph data for enterprise-critical applications. Although these business applications operate on graph-structured data, they still require direct access to the relational data and typically rely on an RDBMS as the single source of truth and access.
Existing solutions that perform graph operations on business-critical data either use a combination of SQL and application logic or employ a graph data management system. For the first approach, relying solely on SQL results in poor execution performance, caused by the functional mismatch between typical graph operations and the relational algebra. To make matters worse, graph algorithms exhibit a tremendous variety in structure and functionality owing to their often domain-specific implementations, and can therefore hardly be integrated into a database management system other than through custom coding. Since the majority of these enterprise-critical applications run exclusively on relational DBMSs, employing a specialized system for storing and processing graph data is typically not sensible. Besides the maintenance overhead of keeping the systems in sync, combining graph and relational operations is hard to realize, as it requires data transfer across system boundaries.
Traversal operations are a basic ingredient of graph queries and algorithms and a fundamental component of any database management system that aims at storing, manipulating, and querying graph data. Well-established graph traversal algorithms are standalone implementations relying on optimized data structures. Integrating graph traversals as an operator into a database management system requires a tight integration into the existing database environment and the development of new components, such as a graph topology-aware optimizer with accompanying graph statistics, graph-specific secondary index structures to speed up traversals, and an accompanying graph query language.
In this thesis, we introduce and describe GRAPHITE, a hybrid graph-relational data management system. GRAPHITE is a performance-oriented graph data management system integrated into an RDBMS, allowing graph data and relational data to be processed seamlessly in the same system. We propose a columnar storage representation for graph data to leverage the existing and mature data management and query processing infrastructure of relational database management systems. At the core of GRAPHITE, we propose an execution engine based solely on set operations and graph traversals.
Our design is driven by the observation that different graph topologies expose different algorithmic requirements to the design of a graph traversal operator. We derive two graph traversal implementations targeting the most common graph topologies and demonstrate how graph-specific statistics can be leveraged to select the optimal physical traversal operator. To accelerate graph traversals, we devise a set of graph-specific, updateable secondary index structures to improve the performance of vertex neighborhood expansion. Finally, we introduce a domain-specific language with an intuitive programming model to extend graph traversals with custom application logic at runtime. We use the LLVM compiler framework to generate efficient code that tightly integrates the user-specified application logic with our highly optimized built-in graph traversal operators.
Our experimental evaluation shows that GRAPHITE can outperform native graph management systems by several orders of magnitude while providing all the features of an RDBMS, such as transaction support, backup and recovery, and security and user management. This makes it a promising alternative to specialized graph management systems, which lack many of these features and require expensive data replication and maintenance processes.
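To illustrate the flavor of such an execution engine, the following sketch combines a columnar, CSR-like adjacency representation with a level-synchronous traversal built from set operations; it is a simplified stand-in, not GRAPHITE's actual operator code:

```python
# Columnar, CSR-like adjacency: one offsets column and one targets column.
offsets = [0, 2, 4, 5, 5]   # vertex v's edges: targets[offsets[v]:offsets[v+1]]
targets = [1, 2, 2, 3, 3]   # edge target column

def traverse(start, hops):
    """Level-synchronous traversal: expand the frontier hop by hop,
    using set operations to deduplicate and prune visited vertices."""
    visited, frontier = {start}, {start}
    for _ in range(hops):
        nxt = set()
        for v in frontier:                        # neighborhood expansion
            nxt.update(targets[offsets[v]:offsets[v + 1]])
        frontier = nxt - visited                  # set difference prunes revisits
        visited |= frontier
        if not frontier:
            break
    return visited

print(traverse(0, 2))   # vertices reachable from 0 within 2 hops: {0, 1, 2, 3}
```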

5. Adaptive Energy-Control for In-Memory Database Systems
Kissinger, Thomas; Habich, Dirk; Lehner, Wolfgang. 30 May 2022.
The ever-increasing demand for scalable database systems is limited by their energy consumption, which is one of the major research challenges today. While existing approaches have mainly focused on transaction-oriented disk-based database systems, we investigate and optimize the energy consumption and performance of data-oriented scale-up in-memory database systems, which make heavy use of the main power consumers: processors and main memory. We give an in-depth energy analysis of a current mainstream server system and show that modern processors provide a rich set of energy-control features but lack the capability to control them appropriately, because application-specific knowledge is missing. We therefore propose the Energy-Control Loop (ECL), a DBMS-integrated approach for adaptive energy control in scale-up in-memory database systems that obeys a query latency limit as a soft constraint while actively optimizing the energy efficiency and performance of the DBMS. The ECL relies on adaptive workload-dependent energy profiles that are continuously maintained at runtime. In our evaluation, we observed energy savings ranging from 20% to 40% for a real-world load profile.
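A minimal sketch of the feedback idea behind such a control loop; the power-level knob, latency source, and thresholds below are hypothetical placeholders, not the paper's interface:

```python
import random
import time

def energy_control_loop(measure_latency_ms, set_power_level,
                        latency_limit_ms=10.0, min_level=1, max_level=10,
                        steps=100, interval_s=0.1):
    """Feedback loop: lower the power level (saving energy) while observed
    query latency stays well under the soft limit; raise it on violations."""
    level = max_level                      # start at full performance
    for _ in range(steps):
        set_power_level(level)
        latency = measure_latency_ms()     # e.g. moving average over recent queries
        if latency > latency_limit_ms and level < max_level:
            level += 1                     # latency limit violated: speed up
        elif latency < 0.8 * latency_limit_ms and level > min_level:
            level -= 1                     # ample headroom: save energy
        time.sleep(interval_s)
    return level

# Illustrative usage with a stubbed latency sensor and actuator:
final = energy_control_loop(lambda: random.uniform(5.0, 12.0), print, steps=5)
```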

6. AHEAD: Adaptable Data Hardening for On-the-Fly Hardware Error Detection during Database Query Processing
Kolditz, Till; Habich, Dirk; Lehner, Wolfgang; Werner, Matthias; de Bruijn, S. T. J. 13 June 2022.
It has long been known that hardware components are not perfect and that soft errors in the form of single bit flips happen all the time. Up to now, these single bit flips have mainly been addressed in hardware using general-purpose protection techniques. However, recent studies have shown that future hardware components will become less and less reliable overall and that multi-bit flips will occur regularly rather than exceptionally. Additionally, hardware aging effects will lead to error models that change at run-time. Scaling hardware-based protection techniques to cover changing multi-bit flips is possible, but introduces large performance, chip area, and power overheads, which will become unaffordable in the future. To tackle this, an emerging research direction employs protection techniques in higher software layers such as compilers or applications, where the available knowledge can be used to specialize and adapt the protection efficiently. In this paper, we therefore propose AHEAD, a novel adaptable approach for on-the-fly hardware error detection in database systems. AHEAD provides configurable error detection in an end-to-end fashion and reduces the overhead (storage and computation) compared to other techniques at this level. Our approach uses an arithmetic error coding technique, which, on the one hand, allows query processing to work entirely on hardened data and, on the other hand, enables on-the-fly detection during query processing of (i) errors that modify data stored in memory or transferred on an interconnect and (ii) errors induced during computations. Our exhaustive evaluation clearly shows the benefits of our AHEAD approach.
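A classic instance of arithmetic error coding is the family of AN codes, where every integer n is stored as n*A for a fixed constant A, so that any stored value that is not a multiple of A reveals an error. A minimal sketch (the constant A = 61 is purely illustrative; the paper discusses how to choose such parameters):

```python
A = 61  # illustrative odd code constant; single bit flips change a code
        # word by +/- 2^k, which is never a multiple of an odd A

def encode(n: int) -> int:
    return n * A                      # hardened representation

def decode(c: int) -> int:
    if c % A != 0:                    # on-the-fly error detection
        raise ValueError(f"bit flip detected in code word {c}")
    return c // A

# AN codes survive additions: encode(a) + encode(b) == encode(a + b),
# so query processing can work directly on hardened data.
assert decode(encode(20) + encode(22)) == 42

corrupted = encode(42) ^ (1 << 3)     # flip one bit in the code word
try:
    decode(corrupted)
except ValueError as e:
    print(e)                          # bit flip detected in code word 2570
```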

7. Diversity of Processing Units: An Attempt to Classify the Plethora of Modern Processing Units
Lehner, Wolfgang; Ungethüm, Annett; Habich, Dirk. 16 June 2023.
Recent hardware developments are providing a plethora of alternatives to well-known general-purpose processing units. This development reaches into all major directions, i.e., into high-speed and low-latency communication systems, novel memory components, and a zoo of different processing units in addition to traditional CPU-style processors. While all of these developments have a great impact on the design of database systems, we will try, in the context of this Kurz Erklärt, to categorize recent advances in processing units and comment on their impact on database systems.

8. Integer Compression in NVRAM-centric Data Stores: Comparative Experimental Analysis to DRAM
Zarubin, Mikhail; Damme, Patrick; Kissinger, Thomas; Habich, Dirk; Lehner, Wolfgang; Willhalm, Thomas. 01 September 2022.
Lightweight integer compression algorithms play an important role in in-memory database systems in tackling the growing gap between processor speed and main memory bandwidth. There is thus a large number of algorithms to choose from, each tailored to different data characteristics. As we show in this paper, the availability of byte-addressable non-volatile random-access memory (NVRAM), a novel type of main memory with specific characteristics, further increases the overall complexity in this domain. In particular, we provide a detailed evaluation of state-of-the-art lightweight integer compression schemes and database operations on NVRAM and compare it with DRAM. Furthermore, we reason about possible deployments of middle- and heavyweight approaches for better adaptation to NVRAM characteristics. Finally, we investigate a combined approach in which volatile and non-volatile memories are used in a cooperative fashion, as is likely to be the case for hybrid and NVRAM-centric database systems.
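As one simple representative of the lightweight family (for illustration only; the paper evaluates a broader set of schemes), null suppression via variable-byte coding stores each integer in as few bytes as possible:

```python
def varbyte_encode(values):
    """Null suppression: spend only as many 7-bit groups per integer as
    needed; the high bit of each byte marks whether more bytes follow."""
    out = bytearray()
    for v in values:
        while v >= 0x80:
            out.append((v & 0x7F) | 0x80)   # continuation bit set
            v >>= 7
        out.append(v)                       # last byte, high bit clear
    return bytes(out)

def varbyte_decode(data):
    values, cur, shift = [], 0, 0
    for b in data:
        cur |= (b & 0x7F) << shift
        if b & 0x80:
            shift += 7                      # more bytes follow
        else:
            values.append(cur)              # integer complete
            cur, shift = 0, 0
    return values

nums = [3, 127, 128, 300000]
assert varbyte_decode(varbyte_encode(nums)) == nums
```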