51

SCIT: A Schema Change Interpretation Tool for Dynamic-Schema Data Warehouses

Hai, Rihan Hai, Theodorou, Vasileios, Thiele, Maik, Lehner, Wolfgang 03 February 2023 (has links)
Data Warehouses (DW) have to continuously adapt to evolving business requirements, which implies structure modification (schema changes) and data migration requirements in the system design. However, it is challenging for designers to control the performance and cost overhead of different schema change implementations. In this paper, we demonstrate SCIT, a tool for DW designers to test and implement different logical design alternatives in a two-fold manner. As a main functionality, SCIT translates common DW schema modifications into directly executable SQL scripts for relational database systems, facilitating design and testing automation. At the same time, SCIT assesses changes and recommends alternative design decisions to help designers improve logical designs and avoid common dimensional modeling pitfalls and mistakes. This paper serves as a walk-through of the system features, showcasing the interaction with the tool’s user interface in order to easily and effectively modify DW schemata and enable schema change analysis.
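As a hedged illustration of the kind of translation SCIT performs, the following Python sketch renders a declarative dimension-table change as an executable SQL script. The function name, the change-description format, and the generated SQL are illustrative assumptions, not SCIT's actual interface.

```python
# Hypothetical sketch of translating a declarative schema change into SQL DDL;
# the change format and function are illustrative, not SCIT's actual API.

def render_schema_change(change: dict) -> str:
    """Render a simple dimension-table change as an executable SQL statement."""
    if change["kind"] == "add_attribute":
        return (f'ALTER TABLE {change["table"]} '
                f'ADD COLUMN {change["column"]} {change["type"]};')
    if change["kind"] == "drop_attribute":
        return f'ALTER TABLE {change["table"]} DROP COLUMN {change["column"]};'
    raise ValueError(f'unsupported change kind: {change["kind"]}')

# Adding a new attribute to a customer dimension yields a directly
# executable script for a relational database system.
print(render_schema_change({"kind": "add_attribute", "table": "dim_customer",
                            "column": "loyalty_tier", "type": "VARCHAR(20)"}))
# ALTER TABLE dim_customer ADD COLUMN loyalty_tier VARCHAR(20);
```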
52

SCINTRA: A Model for Quantifying Inconsistencies in Grid-Organized Sensor Database Systems

Schlesinger, Lutz, Lehner, Wolfgang 12 January 2023 (has links)
Sensor data sets are usually collected in a centralized sensor database system, or replicated and cached in a distributed system to speed up query evaluation. However, a high data refresh rate rules out traditional replication approaches with their strong consistency guarantees. Instead, we propose a combination of grid computing technology with sensor database systems. Each node holds cached data of other grid members. Since cached information may become stale quickly, access to outdated data may sometimes be acceptable, provided the user knows the degree of inconsistency incurred when unsynchronized data are combined. The contribution of this paper is the presentation and discussion of a model for describing inconsistencies in grid-organized sensor database systems.
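To make the notion of quantified inconsistency concrete, here is a minimal Python sketch of one plausible staleness measure for cached grid data: the expected number of sensor updates missed since a value was cached. The formula is an assumption for illustration, not the paper's actual model.

```python
# Illustrative staleness measure for a cached sensor reading; the formula is
# an assumption, not the SCINTRA model itself.
import time

def staleness_score(cached_at: float, update_rate_hz: float) -> float:
    """Expected number of sensor updates missed since the value was cached.

    0 means the cache is fresh; larger values indicate that queries combining
    this value with other data operate on increasingly outdated state.
    """
    return max(0.0, (time.time() - cached_at) * update_rate_hz)

# A reading cached 30 seconds ago from a 2 Hz sensor has missed ~60 updates.
print(staleness_score(cached_at=time.time() - 30, update_rate_hz=2.0))
```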
53

SynopSys: Large Graph Analytics in the SAP HANA Database Through Summarization

Rudolf, Michael, Paradies, Marcus, Bornhövd, Christof, Lehner, Wolfgang 19 September 2022 (has links)
Graph-structured data is ubiquitous and, with the advent of social networking platforms, has recently seen a significant increase in popularity amongst researchers. However, many business applications also deal with this kind of data and can therefore benefit greatly from graph processing functionality offered directly by the underlying database. This paper summarizes the current state of graph data processing capabilities in the SAP HANA database and describes our efforts to enable large graph analytics in the context of our research project SynopSys. With powerful graph pattern matching support at the core, we envision OLAP-like evaluation functionality exposed to the user in the form of easy-to-apply graph summarization templates. By combining these templates, the user can produce concise summaries of large graph-structured datasets. We also point out open questions and challenges that we plan to tackle in future development on our way towards large graph analytics.
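As a rough sketch of what a grouping-based summarization template might compute, the following Python snippet collapses nodes with equal labels into super-nodes and counts the edges between the resulting groups. This illustrates the general idea only; it is not SAP HANA's or SynopSys's actual interface.

```python
# Minimal attribute-based graph summarization: group nodes by label and
# count inter-group edges. Purely illustrative; not the SynopSys templates.
from collections import Counter

def summarize(nodes: dict[int, str], edges: list[tuple[int, int]]):
    """Collapse equally labeled nodes into super-nodes and count the
    edges between the resulting groups."""
    group_sizes = Counter(nodes.values())
    group_edges = Counter((nodes[u], nodes[v]) for u, v in edges)
    return group_sizes, group_edges

nodes = {1: "customer", 2: "customer", 3: "product", 4: "product"}
edges = [(1, 3), (1, 4), (2, 3)]
print(summarize(nodes, edges))
# (Counter({'customer': 2, 'product': 2}), Counter({('customer', 'product'): 3}))
```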
54

Návrh a implementace databázové aplikace pro portál Andromedia s využitím relační databázové technologie / Design and implementation of a database application for the Andromedia portal using relational database technology

Velický, Tomáš January 2014 (has links)
Design and implementation of a database application for the Andromedia portal using relational database technology. Diploma thesis. Author: Ing. Tomáš Velický. Supervisor: PhDr. Mgr. Jan Pokorný, Ph.D. Abstract: The aim of this diploma thesis is the design, implementation, and development of a database application for the Andromedia portal using relational database technology. The solution is designed as an information base for continuing professional education. The purpose of the databank is to gradually collect information on trends and analyses in the field of continuing education: ideas and practices, methods, teaching aids, case studies, tests, and other guidelines and sources of information. The author documents the results of the problem analysis and the solution created using project management methods. The new solution comprises the graphic design, the database design, and its description using the RF model and UML diagrams. The technologies used in the design of the databank are the PHP programming language and the MySQL relational database. The conclusion evaluates the success of the database application's implementation; the most interesting finding is that the application was used most heavily during the exam period, mostly by students obtaining suitable materials for self-study. Subsequently, it proposes steps to further improve the sustainability...
55

Energy-Efficient In-Memory Database Computing

Lehner, Wolfgang 27 June 2013 (has links) (PDF)
The efficient and flexible management of large datasets is one of the core requirements of modern business applications. Having access to consistent and up-to-date information is the foundation for operational, tactical, and strategic decision making. Within the last few years, the database community has sparked a large number of extremely innovative research projects to push the envelope in the context of modern database system architectures. In this paper, we outline requirements and influencing factors to identify some of the hot research topics in database management systems. We argue that, even after 30 years of active database research, the time is right to rethink some of the core architectural principles and come up with novel approaches to meet the requirements of the next decades in data management. The sheer number of diverse and novel (e.g., scientific) application areas, the existence of modern hardware capabilities, and the need for large data centers to become more energy-efficient will be the drivers for database research in the years to come.
56

Databases on Demand (DBoD)

Poley, Christoph 02 June 2009 (has links) (PDF)
In one of our recent issues we reported in detail on the DBoD project, covering the contents of DBoD, the user groups at the universities, technical details, and the link to the Database Information System (DBIS). DBoD is funded by the European Union under the European Regional Development Fund (EFRE). Half a year has passed since then, a period in which DBoD has developed from a project at the starting line into a recognized library service in production operation: a product from Saxony for all of Saxony. And precisely here lies the decisive advantage over existing solutions for operating CD/DVD-ROM databases.
57

EIT: Escalonador Inteligente de Transações

Holanda, Maristela Terto de 09 July 2007 (has links)
In order to guarantee database consistency, a database system should synchronize the operations of concurrent transactions. The database component responsible for this synchronization is the scheduler. A scheduler synchronizes operations belonging to different transactions by means of concurrency control protocols. Concurrency control protocols may exhibit different behaviors: in general, a scheduler's behavior can be classified as aggressive or conservative. This paper presents the Intelligent Transaction Scheduler (ITS), which is able to synchronize the execution of concurrent transactions in an adaptive manner. The scheduler adapts its behavior (aggressive or conservative) to the characteristics of the computing environment in which it is deployed, using an expert system based on fuzzy logic. The ITS can implement different correctness criteria, such as conventional (syntactic) serializability and semantic serializability. To evaluate the performance of the ITS against schedulers with exclusively aggressive or conservative behavior, it was applied in a dynamic environment, a Mobile Database Community (MDBC). An MDBC simulator was developed and many sets of tests were run. The experimental results presented herein demonstrate the efficiency of the ITS in synchronizing transactions in a dynamic environment.
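For intuition about adaptive behavior selection, the following Python sketch blends two environment indicators into an aggressive-or-conservative decision. The indicators, threshold, and blending rule are illustrative assumptions, not the ITS's actual fuzzy rule base.

```python
# Conceptual sketch of environment-driven scheduler behavior; the inputs and
# the blending rule are assumptions, not the ITS's actual fuzzy logic.

def choose_behavior(conflict_rate: float, abort_cost: float) -> str:
    """Pick a scheduling behavior from two fuzzy indicators in [0, 1].

    conflict_rate: observed fraction of transactions that conflict.
    abort_cost:    relative cost of aborting and restarting a transaction.
    """
    # Simple membership blend: how strongly the environment favors caution.
    caution = 0.5 * conflict_rate + 0.5 * abort_cost
    return "conservative" if caution > 0.5 else "aggressive"

# Few conflicts and cheap restarts favor optimistic (aggressive) scheduling;
# many conflicts and expensive restarts favor pessimistic (conservative) locking.
print(choose_behavior(conflict_rate=0.1, abort_cost=0.2))  # aggressive
print(choose_behavior(conflict_rate=0.7, abort_cost=0.8))  # conservative
```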
58

Analytical Query Processing Based on Continuous Compression of Intermediates

Damme, Patrick 02 October 2020 (has links)
Nowadays, increasingly large amounts of data are being collected in numerous areas ranging from science to industry. To gain valuable insights from these data, the importance of Online Analytical Processing (OLAP) workloads is constantly growing. At the same time, the hardware landscape is continuously evolving. On the one hand, the increasing capacities of DRAM allow database systems to store their entire data in main memory. Furthermore, the performance of microprocessors has improved tremendously in recent years through the use of sophisticated hardware techniques, such as Single Instruction Multiple Data (SIMD) extensions promising hitherto unknown processing speeds. On the other hand, main memory bandwidth has not increased proportionately, so that data access is now the main bottleneck for efficient data processing. To face these developments, in-memory column-stores have emerged as a new database architecture. These systems store each attribute of a relation separately in memory as a contiguous sequence of values. It is state-of-the-art to encode all values as integers and apply lossless lightweight integer compression to reduce the data size. This offers several advantages, ranging from lower transfer times between RAM and CPU over better utilization of the cache hierarchy to fast direct processing of compressed data. However, compression also incurs a certain computational overhead. State-of-the-art systems focus on the compression of base data. However, intermediate results generated during the execution of complex analytical queries can exceed the base data in number and total size. Since in in-memory systems accessing intermediates is as expensive as accessing base data, intermediates should be handled as efficiently as possible, too. While there are approaches that try to avoid intermediates whenever possible, we envision the orthogonal approach of efficiently representing intermediates using lightweight integer compression algorithms to reduce memory accesses. More precisely, our vision is a balanced query processing based on lightweight compression of intermediate results in in-memory column-stores. That is, all intermediates shall be represented using a suitable lightweight integer compression algorithm and processed by compression-enabled query operators to avoid a full decompression, whereby compression shall be used in a balanced way to ensure that its benefits outweigh its costs. In this thesis, we address all important aspects of this vision. We provide an extensive overview of existing lightweight integer compression algorithms and conduct a systematic experimental survey of several of these algorithms to gain a deep understanding of their behavior. We propose a novel compression-enabled processing model for in-memory column-stores allowing a continuous compression of intermediates. Additionally, we develop novel cost-based strategies for a compression-aware secondary query optimization to make effective use of our processing model.
Our end-to-end evaluation using the well-known Star Schema Benchmark shows that our envisioned compression of intermediates can significantly improve both the memory footprint and the runtime of complex analytical queries.

Contents:
1 Introduction
  1.1 Contributions
  1.2 Outline
2 Lightweight Integer Compression
  2.1 Foundations
    2.1.1 Disambiguation of Lightweight Integer Compression
    2.1.2 Overview of Lightweight Integer Compression
    2.1.3 State-of-the-Art in Lightweight Integer Compression
  2.2 Experimental Survey
    2.2.1 Related Work
    2.2.2 Experimental Setup and Methodology
    2.2.3 Evaluation of the Impact of the Data Characteristics
    2.2.4 Evaluation of the Impact of the Hardware Characteristics
    2.2.5 Evaluation of the Impact of the SIMD Extension
  2.3 Summary and Discussion
3 Processing Compressed Intermediates
  3.1 Processing Model for Compressed Intermediates
    3.1.1 Related Work
    3.1.2 Description of the Underlying Processing Model
    3.1.3 Integration of Compression into Query Operators
    3.1.4 Integration of Compression into the Overall Query Execution
    3.1.5 Efficient Implementation
    3.1.6 Evaluation
  3.2 Direct Integer Morphing Algorithms
    3.2.1 Related Work
    3.2.2 Integer Morphing Algorithms
    3.2.3 Example Algorithms
    3.2.4 Evaluation
  3.3 Summary and Discussion
4 Compression-Aware Query Optimization Strategies
  4.1 Related Work
  4.2 Compression-Aware Secondary Query Optimization
    4.2.1 Compression-Level: Selecting a Suitable Algorithm
    4.2.2 Operator-Level: Selecting Suitable Input/Output Formats
    4.2.3 QEP-Level: Selecting Suitable Formats for All Involved Columns
  4.3 Evaluation
    4.3.1 Compression-Level: Selecting a Suitable Algorithm
    4.3.2 Operator-Level: Selecting Suitable Input/Output Formats
    4.3.3 Lessons Learned
  4.4 Summary and Discussion
5 End-to-End Evaluation
  5.1 Experimental Setup and Methodology
  5.2 A Simple OLAP Query
  5.3 Complex OLAP Queries: The Star Schema Benchmark
  5.4 Summary and Discussion
6 Conclusion
  6.1 Summary of this Thesis
  6.2 Directions for Future Work
Bibliography
List of Figures
List of Tables
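To illustrate the family of techniques the thesis builds on, here is a minimal Python sketch of one lightweight integer compression scheme: frame-of-reference encoding with a fixed per-value bit width. It conveys the general idea only and is not the thesis's actual implementation.

```python
# Frame-of-reference (FOR) encoding, a lightweight integer compression scheme:
# store offsets from the minimum value, each fitting in the same small number
# of bits. Illustrative sketch, not the thesis's implementation.

def for_compress(values: list[int]) -> tuple[int, int, list[int]]:
    ref = min(values)
    offsets = [v - ref for v in values]
    bits = max(1, max(offsets).bit_length())  # width needed for bit-packing
    return ref, bits, offsets

def for_decompress(ref: int, bits: int, offsets: list[int]) -> list[int]:
    return [ref + o for o in offsets]

ref, bits, packed = for_compress([1000, 1003, 1001, 1007])
print(ref, bits, packed)                  # 1000 3 [0, 3, 1, 7]: 3 bits/value
print(for_decompress(ref, bits, packed))  # [1000, 1003, 1001, 1007]
```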
59

Databases on Demand (DBoD): Der Weg von einer CD/DVD-Datenbank zur Online-Ressource / Databases on Demand (DBoD): The way from a CD/DVD database to an online resource

Poley, Christoph 02 June 2009 (has links)
In one of our recent issues we reported in detail on the DBoD project, covering the contents of DBoD, the user groups at the universities, technical details, and the link to the Database Information System (DBIS). DBoD is funded by the European Union under the European Regional Development Fund (EFRE). Half a year has passed since then, a period in which DBoD has developed from a project at the starting line into a recognized library service in production operation: a product from Saxony for all of Saxony. And precisely here lies the decisive advantage over existing solutions for operating CD/DVD-ROM databases.
60

Přístup k objektovým datům databáze Oracle 10g z jazyka Java / Access to Oracle 10g Object Data from Java

Novák, Michal Unknown Date (has links)
This diploma thesis deals with the object extensions of the Oracle Database 10g system and describes how to access them from the Java environment.
