1

Query Support for Multi-Dimensional and Dynamic Databases

Apaydin, Tan 29 September 2008 (has links)
No description available.
2

[en] QEEF: AN EXTENSIBLE QUERY EXECUTION ENGINE / [pt] QEEF: UMA MÁQUINA DE EXECUÇÃO DE CONSULTAS

FAUSTO VERAS MARANHAO AYRES 30 June 2004 (has links)
[en] Query processing in traditional Database Management Systems (DBMSs) has been extensively studied in the literature and adopted in industry. Such success is due, in part, to the performance of their Query Execution Engines (QEEs) in supporting the traditional query execution model. The advent of new query scenarios, mainly driven by the web computational model, has motivated research on new execution models, such as adaptive and continuous execution, and on semistructured data models, such as XML, neither of which is natively supported by traditional query engines. This thesis proposes the development of an extensible QEE adapted to these new execution and data models. Moreover, it treats the execution model and the data model orthogonally, which allows query execution plans (QEPs) to be evaluated with fragments in different models. To achieve this goal, we use a software design approach based on the framework technique to produce the Query Execution Engine Framework (QEEF). The extensibility of the solution is expressed through an execution meta-model named QUEM (QUery Execution Meta-model), used to express different models in a meta-QEP. During query evaluation, the meta-QEP is pre-processed by the QEEF, producing a final QEP to be evaluated by the running QEE. As part of the validation of this proposal, the QEEF was instantiated for different execution and data models.
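The abstract stays at the architectural level; as a loose illustration of keeping the operator algebra orthogonal to the execution model that drives it, the sketch below uses invented class names (not QEEF's actual API) and evaluates the same plan fragment under a pluggable, pull-based execution model. A push-based or adaptive model could drive the identical operators instead.

```python
class Operator:
    """Model-agnostic operator: defines only how tuples are transformed."""
    def process(self, tuples):
        raise NotImplementedError

class Scan(Operator):
    def __init__(self, table):
        self.table = table
    def process(self, _):
        return iter(self.table)

class Select(Operator):
    def __init__(self, child, predicate):
        self.child, self.predicate = child, predicate
    def process(self, tuples):
        return (t for t in tuples if self.predicate(t))

class IteratorModel:
    """One pluggable execution model: lazy, pull-based plan evaluation."""
    def evaluate(self, op):
        if isinstance(op, Scan):
            return op.process(None)
        return op.process(self.evaluate(op.child))

plan = Select(Scan([{"x": 1}, {"x": 7}]), lambda t: t["x"] > 3)
rows = list(IteratorModel().evaluate(plan))   # [{'x': 7}]
```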
3

Anfragebearbeitung auf Mehrkern-Rechnerarchitekturen / Query Processing on Multi-Core Architectures

Huber, Frank 24 May 2012 (has links)
The upcoming generation of many-core architectures poses several new challenges for software development: software design and implementation have to move from sequential execution to highly parallel execution in order to take full advantage of the steadily growing number of cores on a single processor. In this thesis, we investigate such highly parallel program execution in the context of relational database management systems (RDBMSs). We consider the complete process of query processing and identify four problem areas that are crucial for efficient parallel query processing on many-core architectures: the hardware itself, the physical data model, query execution, and query optimization. We present a framework that covers all four areas. First, we give a detailed survey of computer hardware with a special focus on memory and processors, and based on this survey we propose a hardware model. The abstraction aims to simplify software development on many-core hardware without sacrificing functionality or performance. Building on the hardware model, we investigate physical data models and evaluate how the physical data model can support optimal query execution by providing efficient and parallelizable data structures. Additionally, we design a new index structure that exploits data-parallel execution through SIMD operations. The next layer within our framework is query execution, for which we present a new task-based query execution model that allows for a high degree of very lightweight parallelism. Finally, we cover query optimization, presenting several approaches for optimizing resource utilization from a query-local as well as a query-global point of view.
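As a rough illustration of the task concept described above (not the thesis's engine, which targets many-core hardware at a much lower level), the sketch below splits a scan-filter operator into independent tasks over data chunks and schedules them on a worker pool; in Python the GIL limits actual speedup, so only the scheduling structure is shown.

```python
from concurrent.futures import ThreadPoolExecutor

def filter_task(chunk, predicate):
    # One lightweight task: apply the predicate to its own chunk only.
    return [row for row in chunk if predicate(row)]

def parallel_filter(table, predicate, chunk_size=1024, workers=8):
    # Split the input into chunks and schedule one task per chunk.
    chunks = [table[i:i + chunk_size] for i in range(0, len(table), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(lambda c: filter_task(c, predicate), chunks))
    # Merge partial results; map() preserves the chunk order.
    return [row for part in partials for row in part]

# Example: keep all values greater than 100 from a single-column table.
rows = list(range(10_000))
hot = parallel_filter(rows, lambda v: v > 100)
```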
4

A C++ Distributed Database Select-Project-Join Query Processor on an HPC Cluster

Ceran, Erhan 01 May 2012 (has links) (PDF)
High-performance computer clusters have become popular as they are more scalable, affordable and reliable than their centralized counterparts. Database management systems are particularly suitable for distributed architectures; however, distributed DBMSs are still not widely used because of design difficulties. In this study, we aim to help overcome these difficulties by implementing a simulation testbed for a distributed query plan processor. The testbed runs on our departmental HPC cluster machine and is able to perform select, project and join operations. A data generation module has also been implemented, which preserves the foreign key and primary key constraints of the database schema. The testbed has the capability to measure, simulate and estimate the response time of a given query execution plan using specified communication network parameters. Extensive experimental work is performed to show the correctness of the produced results. The estimated execution time costs are also compared with the actual run-times obtained from the testbed to verify the proposed estimation functions. Thus, we make sure that these estimation functions can be used in distributed database query optimization and distributed database design tools.
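A hypothetical sketch of the kind of estimation functions such a testbed might verify; the formulas, parameter names, and constants below are illustrative assumptions, not taken from the thesis.

```python
def transfer_time(num_tuples, tuple_bytes, bandwidth_bps, latency_s, msg_bytes=64_000):
    """Time to ship an intermediate relation over the interconnect."""
    total_bytes = num_tuples * tuple_bytes
    messages = -(-total_bytes // msg_bytes)              # ceiling division
    return messages * latency_s + total_bytes * 8 / bandwidth_bps

def join_cardinality(card_r, card_s, distinct_join_keys):
    """Textbook equi-join estimate under uniformity: |R join S| ~ |R|*|S| / V(key)."""
    return card_r * card_s / max(distinct_join_keys, 1)

# Example: ship one million 100-byte tuples over a 1 Gbit/s link with 0.2 ms latency.
print(transfer_time(1_000_000, 100, 1e9, 0.0002))
```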
5

To share or not to share vector registers?

Pietrzyk, Johannes, Krause, Alexander, Habich, Dirk, Lehner, Wolfgang 04 June 2024 (has links)
Query execution techniques in database systems constantly adapt to novel hardware features to achieve high query performance, in particular for analytical queries. In recent years, vectorization based on the Single Instruction Multiple Data (SIMD) parallel paradigm has been established as a state-of-the-art approach to increase single-query performance. However, since concurrent analytical queries running in parallel often access the same columns and perform the same set of vectorized operations, data accesses and computations among different queries may be executed redundantly. Various techniques have already been proposed to avoid such redundancy, ranging from concurrent scans via the construction of materialized views to multiple-query optimization techniques. Continuing this line of research, we investigate in this paper the opportunity of sharing vector registers among concurrently running queries in analytical scenarios. In particular, our novel sharing approach relies on processing data elements of different queries together within a single vector register. As we show, sharing vector registers to optimize the execution of concurrent analytical queries can be very beneficial in single-threaded as well as multi-threaded environments. We demonstrate the feasibility and applicability of this novel work-sharing strategy and thus open up a wide spectrum of future research opportunities.
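The core idea, packing data elements that belong to different queries into the lanes of one register, can be illustrated conceptually. The sketch below uses NumPy arrays as a stand-in for real SIMD registers and intrinsics, with two made-up filter queries sharing an eight-lane register; it mirrors the spirit of the approach, not the authors' actual implementation.

```python
import numpy as np

LANES = 8                                   # e.g. eight 32-bit lanes of a 256-bit register
column = np.arange(32, dtype=np.int32)      # the column both queries scan (length divisible by 4)

# Query A filters value < 10, query B filters value < 25:
# lanes 0-3 are reserved for A, lanes 4-7 for B.
thresholds = np.array([10, 10, 10, 10, 25, 25, 25, 25], dtype=np.int32)

results_a, results_b = [], []
for i in range(0, len(column), 4):
    block = column[i:i + 4]                 # four values from the shared scan
    register = np.tile(block, 2)            # the same values occupy both register halves
    mask = register < thresholds            # one vectorized comparison answers both queries
    results_a.extend(block[mask[:4]])
    results_b.extend(block[mask[4:]])
```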
6

Evaluation of Generic GraphQL Servers for Accessing Legacy Databases

Ismail, Muhammad January 2022 (has links)
A few years back, REST APIs were considered the standard for web APIs; they now have a strong competitor. REST APIs provide some excellent features, such as stateless servers and structured access to resources. Over time, however, REST has not offered enough flexibility with respect to data access and changing client requirements. In 2015, GraphQL was introduced by Facebook; it overcomes these problems with REST and provides more flexibility and efficiency for client requirements, for example by removing over- and under-fetching. Changing existing APIs into GraphQL APIs requires considerable time and effort, so server implementation tools have been developed to reduce development cost and time. A few of these tools generate a GraphQL schema and server implementation automatically over a legacy database. This master thesis studies tools that automatically generate GraphQL server implementations over legacy databases and evaluates the performance of the generated GraphQL servers. First, we identify GraphQL server implementation tools, namely Hasura and PostGraphile, and compare the servers' performance using a benchmark methodology. Secondly, we run an experiment on a computer system and use performance metrics for the assessment. The results of our experiment show that PostGraphile has higher throughput and lower query execution time than Hasura. In most of the query templates from the benchmark, PostGraphile outperforms Hasura.
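A minimal sketch of the kind of measurement loop such an evaluation involves; the endpoint URL and the query string are placeholders rather than the thesis's actual benchmark workload.

```python
import statistics
import time
import requests  # third-party HTTP client, assumed available

ENDPOINT = "http://localhost:8080/v1/graphql"     # hypothetical Hasura endpoint
QUERY = "{ reviews(limit: 100) { rating } }"      # hypothetical benchmark query

def benchmark(runs=50):
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        response = requests.post(ENDPOINT, json={"query": QUERY})
        response.raise_for_status()
        latencies.append(time.perf_counter() - start)
    return {
        "mean_latency_s": statistics.mean(latencies),
        "throughput_qps": runs / sum(latencies),
    }
```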
7

A Query, a Minute: Evaluating Performance Isolation in Cloud Databases

Kiefer, Tim, Schön, Hendrik, Habich, Dirk, Lehner, Wolfgang 02 February 2023 (has links)
Several cloud providers offer relational databases as part of their portfolio. It is, however, not obvious how resource virtualization and sharing, which are inherent to cloud computing, influence the performance and predictability of these cloud databases. Cloud providers give little to no guarantees for consistent execution or isolation from other users. To evaluate the performance isolation capabilities of two commercial cloud databases, we ran a series of experiments over the course of a week (a query, a minute) and report the variations in query response times. As a baseline, we ran the same experiments on a dedicated server in our data center. The results show that, in the cloud, single outliers are up to 31 times slower than the average. Additionally, one can see a point in time after which the average performance of all executed queries improves by 38 %.
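The structure of the experiment can be sketched in a few lines; the snippet below is illustrative only, with run_query standing in for whatever fixed statement is issued against the cloud database.

```python
import statistics
import time

def query_a_minute(run_query, minutes=7 * 24 * 60):
    """Issue the same query once per minute and record its response time."""
    samples = []
    for _ in range(minutes):
        start = time.perf_counter()
        run_query()                            # e.g. execute one fixed SQL statement
        elapsed = time.perf_counter() - start
        samples.append(elapsed)
        time.sleep(max(0.0, 60 - elapsed))     # wait out the rest of the minute
    average = statistics.mean(samples)
    return average, max(samples) / average     # mean latency and worst outlier factor
```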
8

A Performance Comparison of Auto-Generated GraphQL Server Implementations / En jämförelse av automatiskt genererade GraphQL server implementationer

Larsson, Markus, Ångström, David January 2020 (has links)
As databases and traffic over the internet are growing larger by the day, the performance of sending information has become a matter of great importance. In past years, other software architectural styles such as REST have been used, as REST is a reliable framework that works well when one has a dependable internet connection. In 2015, the query language GraphQL was released to the public by Facebook as an alternative to REST. GraphQL improved data fetching by, for example, removing the possibility of under- and over-fetching: a client only gets the data it has requested, nothing more, nothing less. Creating a GraphQL schema and server implementation requires time, effort and knowledge, yet it is a prerequisite for running GraphQL over an existing legacy database. For this reason, multiple server implementation tools have been created by vendors to reduce development time by auto-generating a GraphQL schema and server implementation from an already existing database. This bachelor thesis picks, runs and compares benchmarks of two such server implementation tools, Hasura and PostGraphile, using a benchmark methodology based on technical difficulties (choke points). The results of our benchmark suggest that throughput is higher for Hasura than for PostGraphile, whilst query execution time and query response time are similar. PostGraphile is better at paging without offset as well as at ordering, but in all other cases Hasura outperforms PostGraphile or shows similar results. / Linköping GraphQL Benchmark (LinGBM)
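To make the over-fetching point concrete, here is a hedged illustration with an invented schema and endpoints: the REST call returns the whole resource, while the GraphQL query names exactly the fields the client needs.

```python
import requests  # third-party HTTP client, assumed available

# REST: the endpoint decides the response shape; the whole record comes back
# even if the client only needs the title and the author's name.
rest_response = requests.get("http://localhost:5000/api/books/42")

# GraphQL: the client names exactly the fields it needs, nothing more.
graphql_query = """
{
  book(id: 42) {
    title
    author { name }
  }
}
"""
gql_response = requests.post("http://localhost:5000/graphql",
                             json={"query": graphql_query})
```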
9

Sampling-based Techniques for Interactive Exploration of Large Datasets

Kamat, Niranjan Ganesh 18 September 2018 (has links)
No description available.
10

Real-time Business Intelligence through Compact and Efficient Query Processing Under Updates

Idris, Muhammad 05 March 2019 (has links) (PDF)
Responsive analytics are rapidly taking over traditional data analytics, which is dominated by the post-fact approaches of traditional data warehousing. Recent advancements in analytics demand placing analytical engines at the forefront of the system to react to updates occurring at high speed and to detect patterns, trends, and anomalies. These kinds of solutions find applications in financial systems, industrial control systems, business intelligence and online machine learning, among others. These applications are usually associated with Big Data and require the ability to react to constantly changing data in order to obtain timely insights and take proactive measures. Generally, these systems specify the analytical results, or their basic elements, in a query language, where the main task is then to maintain query results efficiently under frequent updates. The task of reacting to updates and analyzing changing data has been addressed in two ways in the literature: traditional business intelligence (BI) solutions focus on historical data analysis where the data is refreshed periodically and in batches, while stream processing solutions process streams of data from transient sources as flows of data items. Both kinds of systems share the niche of reacting to updates (known as dynamic evaluation); however, they differ in architecture, query languages, and processing mechanisms. In this thesis, we investigate the possibility of a reactive and unified framework to model queries that appear in both kinds of systems. In traditional BI solutions, evaluating queries under updates has been studied under the umbrella of incremental query evaluation, which is based on the relational incremental view maintenance model and mostly focuses on queries that feature equi-joins. Streaming systems, in contrast, generally follow automaton-based models to evaluate queries under updates, and they generally process queries that mostly feature comparisons of temporal attributes (e.g. timestamp attributes) along with comparisons of non-temporal attributes over streams of bounded sizes. Temporal comparisons constitute inequality constraints, while non-temporal comparisons can be either equality or inequality constraints; hence these systems mostly process inequality joins. As a starting point for our research, we postulate the thesis that queries in streaming systems can also be evaluated efficiently based on the paradigm of incremental evaluation, just as in BI systems, in a main-memory model. The efficiency of such a model is measured in terms of runtime memory footprint and update processing cost. The existing approaches to dynamic evaluation in both kinds of systems present a trade-off between memory footprint and update processing cost: systems that avoid materialization of query (sub)results incur high update latency, while systems that materialize (sub)results incur a high memory footprint. We overcome this trade-off by devising a practical dynamic evaluation algorithm for queries that appear in both kinds of systems and presenting a main-memory data representation that allows query (sub)results to be enumerated without materialization and can be maintained efficiently under updates.
We call this representation the Dynamic Constant Delay Linear Representation (DCLR). We devise DCLRs with the following properties: 1) they allow, without materialization, enumeration of query results with bounded delay (and with constant delay for a sub-class of queries); 2) they allow tuple lookup in query results with logarithmic delay (and with constant delay for conjunctive queries with equi-joins only); 3) they take space linear in the size of the database; 4) they can be maintained efficiently under updates. We first study DCLRs with the above properties for the class of acyclic conjunctive queries featuring equi-joins with projections and present a dynamic evaluation algorithm called the Dynamic Yannakakis (DYN) algorithm. Then, we generalize the DYN algorithm to the class of acyclic queries featuring multi-way Theta-joins with projections and call the result Generalized DYN (GDYN). The DCLRs for acyclic conjunctive queries, and the operation of DYN and GDYN over them, are based on a particular variant of join trees, called Generalized Join Trees (GJTs), which guarantee the above-described properties of DCLRs. We define GJTs and present algorithms to test a conjunctive query featuring Theta-joins for acyclicity and to generate GJTs for such queries. In particular, we extend the classical GYO algorithm from testing a conjunctive query with equalities for acyclicity to testing a conjunctive query featuring multi-way Theta-joins with projections for acyclicity, and we further extend it to generate GJTs for queries that are acyclic. GDYN is hence a unified framework based on DCLRs that enables processing of queries that appear in streaming systems as well as in BI systems in a unified main-memory model and addresses the space-time trade-off. We instantiate GDYN for the particular case where all Theta-joins involve only equalities and inequalities and call this instantiation IEDYN. We implement DYN and IEDYN as query compilers that generate executable programs in the Scala programming language, providing all the necessary data structures along with their maintenance and enumeration methods in a continuous stream processing model. We evaluate DYN and IEDYN against state-of-the-art BI and streaming systems on both industrial and synthetically generated benchmarks and show that DYN and IEDYN outperform the existing systems by over an order of magnitude in both memory footprint and update processing time. / Doctorate in Engineering Sciences and Technology / info:eu-repo/semantics/nonPublished
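As a point of reference for the acyclicity testing mentioned above, the sketch below implements the classical GYO reduction for equi-join hypergraphs, i.e. the starting point that the thesis then extends to multi-way Theta-joins; it is an illustration, not the thesis's generalized algorithm.

```python
from collections import Counter

def is_acyclic(hyperedges):
    """Classical GYO reduction: a conjunctive query with equi-joins is
    alpha-acyclic iff repeatedly (1) deleting attributes that occur in only
    one hyperedge and (2) deleting hyperedges contained in another hyperedge
    empties the hypergraph."""
    edges = [set(e) for e in hyperedges]
    changed = True
    while changed:
        changed = False
        counts = Counter(a for e in edges for a in e)
        for e in edges:
            lonely = {a for a in e if counts[a] == 1}
            if lonely:                       # rule 1: drop attributes unique to one edge
                e -= lonely
                changed = True
        survivors = []
        for i, e in enumerate(edges):
            if any(i != j and e <= other for j, other in enumerate(edges)):
                changed = True               # rule 2: e is covered by another edge, drop it
            else:
                survivors.append(e)
        edges = survivors
    return all(not e for e in edges)

# The triangle query R(a,b), S(b,c), T(c,a) is cyclic; a chain is acyclic.
print(is_acyclic([{"a", "b"}, {"b", "c"}, {"c", "a"}]))   # False
print(is_acyclic([{"a", "b"}, {"b", "c"}, {"c", "d"}]))   # True
```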
