81

Developing Reusable and Reconfigurable Real-Time Software using Aspects and Components

Tešanović, Aleksandra January 2006 (has links)
Our main focus in this thesis is on providing guidelines, methods, and tools for design, configuration, and analysis of configurable and reusable real-time software, developed using a combination of aspect-oriented and component-based software development. Specifically, we define a reconfigurable real-time component model (RTCOM) that describes how a real-time component, supporting aspects and enforcing information hiding, can efficiently be designed and implemented. In this context, we outline design guidelines for development of real-time systems using components and aspects, thereby facilitating static configuration of the system, which is preferred for hard real-time systems. For soft real-time systems with high availability requirements we provide a method for dynamic system reconfiguration that is especially suited for resource-constrained real-time systems and ensures that components and aspects can be added, removed, or exchanged in a system at run-time. Satisfaction of real-time constraints is essential in the real-time domain and, for real-time systems built of aspects and components, analysis is ensured by: (i) a method for aspect-level worst-case execution time analysis; (ii) a method for formal verification of temporal properties of reconfigurable real-time components; and (iii) a method for maintaining quality of service, i.e., the specified level of performance, during normal system operation and after dynamic reconfiguration. We have implemented a tool set with which the designer can efficiently configure a real-time system to meet functional requirements and analyze it to ensure that non-functional requirements in terms of temporal constraints and available memory are satisfied. In this thesis we present a proof-of-concept implementation of a configurable embedded real-time database, called COMET. The implementation illustrates how our methods and tools can be applied, and demonstrates that the proposed solutions have a positive impact in facilitating efficient development of families of real-time systems.
82

Scalable and Highly Available Database Systems in the Cloud

Minhas, Umar Farooq January 2013 (has links)
Cloud computing allows users to tap into a massive pool of shared computing resources such as servers, storage, and networks. These resources are provided as a service, allowing users to “plug into the cloud” much like a utility grid. The promise of the cloud is to free users from the tedious and often complex task of managing and provisioning computing resources to run applications. At the same time, the cloud brings several additional benefits, including a pay-as-you-go cost model, easier deployment of applications, elastic scalability, high availability, and a more robust and secure infrastructure. One important class of applications that users are increasingly deploying in the cloud is database management systems. Database management systems differ from other types of applications in that they manage large amounts of state that is frequently updated and that must be kept consistent at all scales and in the presence of failures. This makes it difficult to provide scalability and high availability for database systems in the cloud. In this thesis, we show how we can exploit cloud technologies and relational database systems to provide a highly available and scalable database service in the cloud. The first part of the thesis presents RemusDB, a reliable, cost-effective high-availability solution implemented as a service provided by the virtualization platform. By exploiting the capabilities of virtualization, RemusDB can make any database system highly available with little or no code modification. In the second part of the thesis, we present two systems that aim to provide elastic scalability for database systems in the cloud using two very different approaches. The three systems presented in this thesis bring us closer to the goal of building a scalable and reliable transactional database service in the cloud.
84

Completion and Optimization of a Temporal Extension for PostgreSQL

Koroncziová, Dominika January 2016 (has links)
This thesis focuses on the implementation of temporal data support within the traditional relational environment of the PostgreSQL system. I build on Radek Jelínek's thesis and the extension he developed. I have analyzed the extension from functional, practical, and performance perspectives and, based on the results, designed and implemented changes to the original extension. The work also covers implementation details as well as performance comparisons between the new and the original extensions.
85

Materialized Views in the Presence of Reporting Functions

Lehner, Wolfgang, Habich, Dirk, Just, Michael 15 June 2022 (has links)
Materialized views are a well-known optimization strategy with the potential for massive improvements in query processing time, especially for aggregation queries over large tables. To realize this potential, the query optimizer has to know how and when to exploit materialized views. Reporting functions represent a novel technique for formulating sequence-oriented queries in SQL. They provide a column-wise ordering, partitioning, and windowing mechanism for aggregation functions and therefore extend the well-known way of grouping and applying simple aggregation functions. Current work has so far not considered reporting functions, even though they are frequently used in data warehouse environments. In this paper, we introduce materialized reporting function views and show how to rewrite queries with reporting functions, as well as aggregation queries, to this new kind of materialized view. We demonstrate the efficiency of our approach with a large number of experiments.
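The paper itself works at the level of SQL and the query optimizer; purely as an illustration of what a reporting function computes and how a materialized reporting-function view can answer a rewritten aggregation query, here is a minimal Python sketch. The (store, day, revenue) schema and all names are hypothetical, not from the paper.

```python
from itertools import groupby

# Toy rows (store, day, revenue) -- a hypothetical schema for illustration.
rows = [
    ("A", 1, 10.0), ("A", 2, 20.0), ("A", 3, 5.0),
    ("B", 1, 7.0),  ("B", 2, 3.0),
]

def running_sum_per_store(rows):
    """Models SUM(revenue) OVER (PARTITION BY store ORDER BY day):
    a partitioned, ordered, cumulative aggregate (a reporting function)."""
    out = []
    for store, grp in groupby(sorted(rows), key=lambda r: r[0]):
        acc = 0.0
        for _, day, rev in grp:
            acc += rev
            out.append((store, day, acc))
    return out

# A "materialized reporting-function view": computed once, probed later.
view = {(store, day): s for store, day, s in running_sum_per_store(rows)}

# A rewritten aggregation query: each store's total is the running sum at
# the partition's last row, so it can be answered from the view alone.
last_day = {"A": 3, "B": 2}
print({s: view[(s, d)] for s, d in last_day.items()})  # {'A': 35.0, 'B': 10.0}
```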
86

Cinderella - Adaptive Online Partitioning of Irregularly Structured Data

Herrmann, Kai, Voigt, Hannes, Lehner, Wolfgang 01 July 2021 (has links)
In an increasing number of use cases, databases face the challenge of managing irregularly structured data. Irregularly structured data is characterized by a quickly evolving variety of entities without a common set of attributes. These entities do not show enough regularity to be captured in a traditional database schema. A common solution is to centralize the diverse entities in a universal table. Usually, this leads to a very sparse table. Although today's techniques allow efficient storage of sparse universal tables, query efficiency is still a problem. Queries that reference only a subset of attributes have to read the whole universal table, including many irrelevant entities. One possible solution is to partition the table, which allows partitions of irrelevant entities to be pruned before they are touched. Creating and maintaining such a partitioning manually is very laborious or even infeasible due to the enormous complexity, so an autonomous solution is desirable. In this paper, we define the Online Partitioning Problem for irregularly structured data and present Cinderella, an autonomous online algorithm for horizontal partitioning of irregularly structured entities in universal tables. It is designed to keep its overhead low by incrementally assigning entities to partitions while they are touched anyway during modifications. The achieved partitioning allows queries that retrieve entities through only a subset of attributes to prune partitions of irrelevant entities easily. Cinderella increases the locality of queries and reduces query execution cost.
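As a rough sketch of the underlying idea (not Cinderella's actual assignment logic or cost model), the following Python fragment incrementally routes each inserted entity to the partition whose attribute signature it overlaps most, and prunes non-covering partitions at query time. The Jaccard heuristic and the threshold value are illustrative assumptions.

```python
class OnlinePartitioner:
    """Assigns each entity (a dict of attributes) to the horizontal
    partition whose attribute signature it overlaps most; creates a new
    partition when no existing signature is similar enough."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.partitions = []          # mutable pairs: [signature, entities]

    def insert(self, entity):
        attrs = set(entity)
        best, best_sim = None, 0.0
        for part in self.partitions:
            sim = len(attrs & part[0]) / len(attrs | part[0])  # Jaccard
            if sim > best_sim:
                best, best_sim = part, sim
        if best is None or best_sim < self.threshold:
            self.partitions.append([set(attrs), [entity]])
        else:
            best[0].update(attrs)     # grow the partition's signature
            best[1].append(entity)

    def query(self, needed):
        """Prune partitions whose signature cannot cover the query."""
        needed = set(needed)
        for sig, entities in self.partitions:
            if needed <= sig:         # only these partitions are scanned
                yield from (e for e in entities if needed <= e.keys())

p = OnlinePartitioner()
p.insert({"title": "t1", "isbn": "i1"})
p.insert({"title": "t2", "director": "d1"})   # dissimilar -> new partition
p.insert({"title": "t3", "isbn": "i2", "pages": 300})
print(list(p.query({"isbn"})))  # the director-only partition stays untouched
```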
87

Adaptive work placement for query processing on heterogeneous computing resources

Karnagel, Thomas, Habich, Dirk, Lehner, Wolfgang 10 November 2022 (has links)
The hardware landscape is currently changing from homogeneous multi-core systems towards heterogeneous systems with many different computing units, each with their own characteristics. This trend is a great opportunity for database systems to increase the overall performance if the heterogeneous resources can be utilized efficiently. To achieve this, the main challenge is to place the right work on the right computing unit. Current approaches tackling this placement for query processing assume that data cardinalities of intermediate results can be correctly estimated. However, this assumption does not hold for complex queries. To overcome this problem, we propose an adaptive placement approach that is independent of cardinality estimation of intermediate results. Our approach is incorporated in a novel adaptive placement sequence. Additionally, we implement our approach as an extensible virtualization layer to demonstrate the broad applicability with multiple database systems. In our evaluation, we clearly show that our approach significantly improves OLAP query processing on heterogeneous hardware, while being adaptive enough to react to changing cardinalities of intermediate query results.
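A minimal sketch of the placement idea, under the assumption that measured throughput stands in for cardinality estimates: keep a running tuples-per-second figure per (operator, device) pair and route each operator instance to the currently fastest device. The device list, exploration rate, and moving average below are illustrative, not the paper's placement sequence.

```python
import random
from collections import defaultdict

class AdaptivePlacer:
    """Cardinality-independent operator placement: learn from observed
    runtimes instead of predicting intermediate result sizes."""

    def __init__(self, devices=("cpu", "gpu"), alpha=0.3):
        self.devices = devices
        self.alpha = alpha
        # Neutral prior; the exploration step below still samples
        # devices that have not been measured yet.
        self.throughput = defaultdict(lambda: 1.0)

    def place(self, operator):
        # Occasionally explore, otherwise exploit the fastest device.
        if random.random() < 0.1:
            return random.choice(self.devices)
        return max(self.devices,
                   key=lambda d: self.throughput[(operator, d)])

    def feedback(self, operator, device, tuples, seconds):
        # Exponential moving average over measured tuples/second.
        observed = tuples / max(seconds, 1e-9)
        key = (operator, device)
        self.throughput[key] = ((1 - self.alpha) * self.throughput[key]
                                + self.alpha * observed)

placer = AdaptivePlacer()
dev = placer.place("hash_join")
# ... run the join on `dev` and measure it ...
placer.feedback("hash_join", dev, tuples=1_000_000, seconds=0.8)
```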
88

Online Bit Flip Detection for In-Memory B-Trees on Unreliable Hardware

Kolditz, Tim, Kissinger, Thomas, Schlegel, Benjamin, Habich, Dirk, Lehner, Wolfgang 25 August 2022 (has links)
Hardware vendors constantly decrease the feature sizes of integrated circuits to obtain better performance and energy efficiency. Due to cosmic rays, low voltage, or heat dissipation, hardware -- both processors and memory -- becomes more and more unreliable as the error rate increases. From a database perspective, bit flip errors in main memory will become a major challenge for modern in-memory database systems, which keep all their enterprise data in volatile, unreliable main memory. Although existing hardware error control techniques like ECC-DRAM are able to detect and correct memory errors, their detection and correction capabilities are limited. Moreover, hardware error correction faces major drawbacks in terms of acquisition costs, additional memory utilization, and latency. In this paper, we argue that slightly increasing data redundancy at the right places by incorporating context knowledge already increases error detection significantly. We use the B-Tree -- as a widespread index structure -- as an example and propose various techniques for online error detection, thus increasing its overall reliability. In our experiments, we found that our techniques can detect more errors in less time on commodity hardware compared to non-resilient B-Trees running in an ECC-DRAM environment. Our techniques can further be easily adapted for other data structures and are a first step towards resilient database systems that can cope with unreliable hardware.
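The paper proposes several lighter-weight, context-aware detection techniques; as a generic illustration of online detection through added redundancy, the sketch below guards an in-memory node with a CRC32 that is re-verified on every access. The node layout and names are hypothetical.

```python
import struct
import zlib

class CheckedNode:
    """Toy B-Tree node protected by a checksum over its serialized keys,
    verified on every read to detect silent bit flips."""

    def __init__(self, keys):
        self.keys = list(keys)
        self.crc = self._checksum()

    def _checksum(self):
        # Serialize the keys as signed 64-bit ints and CRC them.
        return zlib.crc32(struct.pack(f"{len(self.keys)}q", *self.keys))

    def search(self, key):
        if self._checksum() != self.crc:   # online bit flip detection
            raise RuntimeError("bit flip detected in node")
        return key in self.keys            # simplified node lookup

node = CheckedNode([3, 7, 42])
print(node.search(7))         # True
node.keys[1] ^= 1 << 20       # simulate a bit flip in main memory
try:
    node.search(7)
except RuntimeError as e:
    print(e)                  # bit flip detected in node
```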
89

Data Structure Engineering For Byte-Addressable Non-Volatile Memory

Oukid, Ismail, Lehner, Wolfgang 30 June 2022 (has links)
Storage Class Memory (SCM) is emerging as a viable alternative to traditional DRAM, alleviating its scalability limits, both in terms of capacity and energy consumption, while being non-volatile. Hence, SCM has the potential to become a universal memory, blurring well-known storage hierarchies. However, along with opportunities, SCM brings many challenges. In this tutorial we will dissect SCM challenges and provide an in-depth view of existing programming models that circumvent them, as well as novel data structures that stem from these models. We will also elaborate on fail-safety testing challenges -- an often overlooked, yet important topic. Finally, we will discuss SCM emulation techniques for end-to-end testing of SCM-based software components. In contrast to surveys investigating the use of SCM in database systems, this tutorial is designed as a programming guide for researchers and professionals interested in leveraging SCM in database systems.
90

High-Throughput BitPacking Compression

Lisa, Nusrat Jahan, Nguyen, Tuan Duy Anh, Habich, Dirk, Kumar, Akash, Lehner, Wolfgang 03 July 2023 (has links)
To efficiently support analytical applications from a data management perspective, in-memory column-store database systems are state of the art. In this kind of database system, lossless lightweight integer compression schemes are crucial to keep memory consumption as low as possible and to speed up query processing. In this specific compression domain, BitPacking is one of the most frequently applied compression schemes. However, (de)compression should not come with any additional cost at run time, but should be provided transparently without compromising the overall system performance. To achieve that, we focus on accelerating BitPacking using Field Programmable Gate Arrays (FPGAs), and we outline several FPGA designs for BitPacking in this paper. As we show in our evaluation, our designs execute the BitPacking compression scheme with high throughput.
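Independent of the FPGA designs, the compression scheme itself is easy to model in software. The following sketch packs unsigned integers that each fit in k bits LSB-first into a byte string and unpacks them again; the exact bit layout is an assumption for illustration, not the paper's hardware format.

```python
def bitpack(values, bits):
    """Pack unsigned ints that all fit in `bits` bits into contiguous bytes."""
    acc = used = 0
    out = bytearray()
    for v in values:
        assert 0 <= v < (1 << bits)
        acc |= v << used          # append the value above the bits in use
        used += bits
        while used >= 8:          # flush whole bytes
            out.append(acc & 0xFF)
            acc >>= 8
            used -= 8
    if used:                      # flush the last partial byte
        out.append(acc & 0xFF)
    return bytes(out)

def bitunpack(data, bits, count):
    """Recover `count` values of width `bits` from a packed byte string."""
    acc = used = i = 0
    values = []
    for _ in range(count):
        while used < bits:        # refill the bit buffer
            acc |= data[i] << used
            i += 1
            used += 8
        values.append(acc & ((1 << bits) - 1))
        acc >>= bits
        used -= bits
    return values

vals = [5, 19, 0, 31, 7]
packed = bitpack(vals, 5)          # 25 bits -> 4 bytes instead of 20
assert bitunpack(packed, 5, len(vals)) == vals
```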
