261

INTERACTIVE VISUAL QUERYING AND ANALYSIS FOR URBAN TRAJECTORY DATA

AL-Dohuki, Shamal Mohammed Ameen 16 April 2019 (has links)
No description available.
262

Dynamic First Match: Reducing Resource Consumption of First Match Queries in MySQL NDB Cluster

Kumar, Hara January 2020 (has links)
Dynamic First Match is a learned heuristic that reduces the resource consumption of first-match queries in a multi-threaded, distributed relational database while having a minimal effect on latency. Traditional first-match range scans run in parallel across all data fragments simultaneously, which can return many redundant results. Dynamic First Match reduces this redundancy by learning to scan only a portion of the data fragments first, before scanning the remaining fragments with a pruned data set. Benchmark tests show that Dynamic First Match can reduce the resource consumption of first-match queries containing first-match range scans by over 40% while having a minimal effect on latency.
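As a rough sketch of the idea in the abstract above — scan a fraction of the fragments first and fall back to the rest only when no match is found — the following Python is purely illustrative; the Fragment API is hypothetical and stands in for MySQL NDB Cluster's distributed scan protocol:

```python
# Illustrative only: Fragment is a hypothetical stand-in for an NDB data
# fragment; the thesis integrates the heuristic into the distributed scan
# protocol, not an in-memory list.

class Fragment:
    def __init__(self, rows):
        self.rows = rows

    def scan(self, predicate):
        return (row for row in self.rows if predicate(row))

def dynamic_first_match(fragments, predicate, fraction=0.25):
    """Scan a learned fraction of the fragments first; only if no match
    is found there, scan the remaining fragments."""
    n_first = max(1, int(len(fragments) * fraction))
    for frag in fragments[:n_first]:            # phase 1: partial scan
        for row in frag.scan(predicate):
            return row                          # match found; the rest is skipped
    for frag in fragments[n_first:]:            # phase 2: remaining fragments
        for row in frag.scan(predicate):
            return row
    return None

# The match is found in phase 1, so six of the eight fragments are never scanned.
frags = [Fragment(range(i * 10, i * 10 + 10)) for i in range(8)]
print(dynamic_first_match(frags, lambda r: r % 7 == 3))  # 3
```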
263

Efficient exploitation of similar subexpressions for query processing

Zhou, Jingren, Larson, Per-Ake, Freytag, Johann Christoph, Lehner, Wolfgang 13 December 2022 (has links)
Complex queries often contain common or similar subexpressions, either within a single query or among multiple queries submitted as a batch. If so, query execution time can be improved by evaluating a common subexpression once and reusing the result in multiple places. However, current query optimizers do not recognize and exploit similar subexpressions, even within the same query. We present an efficient, scalable, and principled solution to this long-standing optimization problem. We introduce a light-weight and effective mechanism to detect potential sharing opportunities among expressions. Candidate covering subexpressions are constructed and optimization is resumed to determine which, if any, such subexpressions to include in the final query plan. The chosen subexpression(s) are computed only once and the results are reused to answer other parts of queries. Our solution automatically applies to optimization of query batches, nested queries, and maintenance of multiple materialized views. It is the first comprehensive solution covering all aspects of the problem: detection, construction, and cost-based optimization. Experiments on Microsoft SQL Server show significant performance improvements with minimal overhead.
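A toy version of the detection step — hashing operator subtrees so that structurally identical subexpressions land in the same bucket — might look as follows; the paper's actual mechanism (signatures plus cost-based selection resumed inside the optimizer) is considerably more involved:

```python
# Toy detection of candidate shared subexpressions by hashing operator
# subtrees; subtrees occurring more than once become candidates for
# evaluate-once/reuse, subject to cost-based selection in the real system.

from collections import defaultdict

def signature(node):
    """Structural signature of an operator subtree."""
    child_sigs = tuple(sorted(signature(c) for c in node.get("children", [])))
    return hash((node["op"], node.get("arg"), child_sigs))

def sharing_candidates(roots):
    groups = defaultdict(list)
    def visit(node):
        groups[signature(node)].append(node)
        for child in node.get("children", []):
            visit(child)
    for root in roots:
        visit(root)
    return [nodes for nodes in groups.values() if len(nodes) > 1]

q1 = {"op": "filter", "arg": "price > 10",
      "children": [{"op": "scan", "arg": "orders"}]}
q2 = {"op": "join", "children": [
         {"op": "filter", "arg": "price > 10",
          "children": [{"op": "scan", "arg": "orders"}]},
         {"op": "scan", "arg": "customers"}]}
print(len(sharing_candidates([q1, q2])))  # 2: the repeated filter and its scan
```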
264

Implementing and Evaluating MQLAIP: A Metabolism Query Language

Patel, Gajendra 18 November 2010 (has links)
No description available.
265

Actions with Conjunctive Queries: Projection, Conflict Detection and Verification

Koopmann, Patrick 20 June 2022 (has links)
Description Logic actions specify adaptations of description logic interpretations based on preconditions defined using a description logic. We consider DL actions in which preconditions can be specified using DL axioms as well as conjunctive queries, and combinations thereof. We investigate complexity bounds for the executability and the projection problem for these actions, which respectively ask whether an action can be executed on models of an interpretation, and which entailments are satisfied after an action has been executed on such a model. In addition, we consider a set of new reasoning tasks concerned with conflicts and interactions that may arise if two actions are executed at the same time. Since these problems have not been investigated before for Description Logic actions, we investigate the complexity of these tasks both for actions with and without conjunctive queries. Finally, we consider the verification problem for Golog programs formulated over our family of actions. Our complexity analysis considers several expressive DLs, and we provide tight complexity bounds for those for which the exact complexity of conjunctive query entailment is known.
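For readers unfamiliar with conjunctive-query preconditions, the following toy check shows what it means for a CQ to hold in a finite interpretation; the thesis itself reasons over arbitrary models of expressive DLs, where this question is much harder than a naive match:

```python
# What a conjunctive-query precondition means on a *finite* interpretation:
# does exists x, y . Teaches(x, y) and Course(y) hold?  (Naive homomorphism
# search; atom arguments are assumed to be variables in this toy.)

from itertools import product

def cq_holds(atoms, domain, interp):
    variables = sorted({v for _, args in atoms for v in args})
    for values in product(domain, repeat=len(variables)):
        binding = dict(zip(variables, values))
        if all(tuple(binding[v] for v in args) in interp[pred]
               for pred, args in atoms):
            return True
    return False

interp = {"Teaches": {("ann", "db")}, "Course": {("db",)}}
query = [("Teaches", ("x", "y")), ("Course", ("y",))]
print(cq_holds(query, {"ann", "db"}, interp))  # True
```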
266

Real-time Business Intelligence through Compact and Efficient Query Processing Under Updates

Idris, Muhammad 10 April 2019 (has links)
Responsive analytics are rapidly taking over the traditional data analytics dominated by post-fact approaches in traditional data warehousing. Recent advancements in analytics demand placing analytical engines at the forefront of the system, to react to updates occurring at high speed and to detect patterns, trends, and anomalies. Such solutions find applications in financial systems, industrial control systems, business intelligence, and online machine learning, among others. These applications are usually associated with Big Data and require the ability to react to constantly changing data in order to obtain timely insights and take proactive measures. Generally, these systems specify the analytical results, or their basic elements, in a query language, and the main task is then to maintain these results efficiently under frequent updates. The task of reacting to updates and analyzing changing data has been addressed in two ways in the literature: traditional business intelligence (BI) solutions focus on historical data analysis, where the data is refreshed periodically and in batches, while stream processing solutions process streams of data from transient sources as a flow (or set of flows) of data items. Both kinds of systems share the niche of reacting to updates (known as dynamic evaluation); however, they differ in architecture, query languages, and processing mechanisms. In this thesis, we investigate the possibility of a reactive and unified framework to model queries that appear in both kinds of systems. In traditional BI solutions, evaluating queries under updates has been studied under the umbrella of incremental evaluation, which is based on the relational incremental view maintenance model and mostly focuses on queries that feature equi-joins. Streaming systems, in contrast, generally follow automaton-based models to evaluate queries under updates, and they mostly process queries that feature comparisons of temporal attributes (e.g., timestamp attributes) along with comparisons of non-temporal attributes over streams of bounded size. Temporal comparisons constitute inequality constraints, while non-temporal comparisons can be either equality or inequality constraints; hence these systems mostly process inequality joins. As a starting point, we postulate the thesis that queries in streaming systems can also be evaluated efficiently based on the paradigm of incremental evaluation, just as in BI systems, in a main-memory model. The efficiency of such a model is measured in terms of runtime memory footprint and update processing cost. Existing approaches to dynamic evaluation in both kinds of systems present a trade-off between these two: systems that avoid materialization of query (sub)results incur high update latency, while systems that materialize (sub)results incur a high memory footprint. We are interested in building a model that addresses this trade-off. In particular, we overcome it by devising a practical dynamic evaluation algorithm for queries that appear in both kinds of systems, and we present a main-memory data representation that allows query (sub)results to be enumerated without materialization and that can be maintained efficiently under updates. We call this representation the Dynamic Constant Delay Linear Representation (DCLR).
We devise DCLRs with the following properties: 1) they allow, without materialization, enumeration of query results with bounded delay (and with constant delay for a subclass of queries); 2) they allow tuple lookup in query results with logarithmic delay (and with constant delay for conjunctive queries with equi-joins only); 3) they take space linear in the size of the database; 4) they can be maintained efficiently under updates. We first study DCLRs with these properties for the class of acyclic conjunctive queries featuring equi-joins with projections, and we present the corresponding dynamic evaluation algorithm. We then generalize this algorithm to the class of acyclic queries featuring multi-way theta-joins with projections. The dynamic algorithms over DCLRs are based on a particular variant of join trees, called Generalized Join Trees (GJTs), which guarantee the properties of DCLRs described above. We define GJTs and present algorithms to test a conjunctive query featuring theta-joins for acyclicity and to generate GJTs for such queries. To do this, we extend the classical GYO algorithm from testing a conjunctive query with equalities for acyclicity to testing a conjunctive query featuring multi-way theta-joins with projections, and we further extend it to generate GJTs for queries that are acyclic. We implemented our algorithms in a query compiler that takes SQL queries as input and generates executable Scala code – a trigger program that processes queries and maintains their results under updates. We tested our approach against state-of-the-art main-memory BI and CEP systems. Our evaluation shows that the DCLR-based approach is over an order of magnitude more efficient than existing systems in both memory footprint and update processing cost. We also show that enumerating query results without materialization from DCLRs is comparable to (and in some cases faster than) enumerating from materialized query results.
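The trade-off described above can be illustrated on the simplest case, a single equi-join: maintain only per-key indexes under updates (linear space) and enumerate join results on demand rather than materializing them. This sketch conveys only the flavor of the approach; the thesis's DCLRs are built on Generalized Join Trees and handle acyclic multi-way theta-joins with projections:

```python
# The flavor of the trade-off: maintain only per-key indexes under updates
# (space linear in the database) and enumerate join results on demand,
# never materializing them.

from collections import defaultdict

class JoinView:                       # R(a, k) join S(k, b) on k
    def __init__(self):
        self.r = defaultdict(set)     # k -> values of a
        self.s = defaultdict(set)     # k -> values of b

    def insert_r(self, a, k): self.r[k].add(a)   # O(1) per update
    def insert_s(self, k, b): self.s[k].add(b)   # O(1) per update

    def enumerate(self):
        """Yield join tuples one at a time; every key in the intersection
        produces at least one result, so the delay between consecutive
        outputs stays constant."""
        for k in self.r.keys() & self.s.keys():
            for a in self.r[k]:
                for b in self.s[k]:
                    yield (a, k, b)

v = JoinView()
v.insert_r("a1", 1); v.insert_r("a2", 1); v.insert_s(1, "b1")
print(sorted(v.enumerate()))  # [('a1', 1, 'b1'), ('a2', 1, 'b1')]
```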
267

Answering “Why Empty?” and “Why So Many?” queries in graph databases

Vasilyeva, Elena, Thiele, Maik, Bornhövd, Christof, Lehner, Wolfgang 04 July 2023 (has links)
Graph databases provide schema-flexible storage and support complex, expressive queries. However, this flexibility and expressiveness come at a cost: queries can return unexpectedly empty answers or too many answers, which are difficult to resolve manually. To address this, we introduce subgraph-based solutions to the questions “Why Empty?” and “Why So Many?” that identify which part of a graph query is responsible for the unexpected result. We also extend our solutions to account for the specifics of the underlying graph model and to increase efficiency, and we evaluate them experimentally in an in-memory column database.
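A brute-force caricature of the “Why Empty?” question: when a graph query returns nothing, test successively smaller subqueries to locate the edge that kills the result. The thesis's subgraph-based solutions inside an in-memory column database are far more efficient; this sketch only conveys the intuition:

```python
# Brute-force intuition for "Why Empty?": find the largest subquery that
# still returns answers; whatever was dropped is responsible for emptiness.
# This is exponential and purely didactic.

from itertools import combinations, product

graph = {("ann", "knows", "bob"), ("bob", "worksAt", "sap")}

def answers(query_edges):
    """All bindings of the query variables (terms starting with '?')."""
    variables = sorted({t for s, _, o in query_edges for t in (s, o)
                        if t.startswith("?")})
    nodes = {n for s, _, o in graph for n in (s, o)}
    results = []
    for values in product(nodes, repeat=len(variables)):
        binding = dict(zip(variables, values))
        subst = lambda t: binding.get(t, t)
        if all((subst(s), p, subst(o)) in graph for s, p, o in query_edges):
            results.append(binding)
    return results

query = [("?x", "knows", "?y"), ("?y", "worksAt", "ibm")]  # empty overall
for size in range(len(query) - 1, 0, -1):
    hits = [sub for sub in combinations(query, size) if answers(list(sub))]
    if hits:
        print("largest succeeding subquery:", hits[0])  # the dropped edge is to blame
        break
```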
268

Implementation of data flow query language on a handheld device

Evangelista, Mark A. 03 1900 (has links)
Approved for public release; distribution is unlimited / Handheld devices have evolved significantly from simple organizers to more powerful handheld computers capable of network connectivity, giving them the ability to send e-mail, browse the World Wide Web, and query remote databases. However, because of their design philosophy, handheld devices are limited in size, memory, and processing power compared to desktop computers. This thesis investigates the use of the Data Flow Query Language (DFQL) for querying local and remote databases from a handheld device. Creating Structured Query Language (SQL) queries can be a complex undertaking, and composing one on a handheld device with a small screen only adds to that complexity. With DFQL, however, the user can submit queries through an easy-to-use graphical user interface. Although handheld devices are now more powerful than early PCs, they still require applications with a small footprint, which constrains the software that can be developed. This thesis also investigates the best division of labor between the handheld device and remote servers. / Sergeant, United States Army
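The dataflow idea is easy to picture: the user wires operators into a graph, and the tool generates the SQL text. The operator names and translation below are illustrative and are not DFQL's actual operator set:

```python
# Illustrative dataflow-to-SQL translation; the operator vocabulary below
# is made up for this sketch.

def to_sql(node):
    op = node["op"]
    if op == "table":
        return node["name"]
    if op == "select":                 # filter rows of the input
        return f"(SELECT * FROM {to_sql(node['input'])} t WHERE {node['cond']})"
    if op == "project":                # keep only the chosen columns
        return f"(SELECT {', '.join(node['cols'])} FROM {to_sql(node['input'])} t)"
    raise ValueError(f"unknown operator: {op}")

flow = {"op": "project", "cols": ["name"],
        "input": {"op": "select", "cond": "gpa > 3.5",
                  "input": {"op": "table", "name": "students"}}}
print(to_sql(flow))
# (SELECT name FROM (SELECT * FROM students t WHERE gpa > 3.5) t)
```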
269

QEEF: AN EXTENSIBLE QUERY EXECUTION ENGINE

FAUSTO VERAS MARANHAO AYRES 30 June 2004 (has links)
Query processing in traditional Database Management Systems (DBMS) has been extensively studied in the literature and adopted in industry. Such success is, in part, due to the performance of their Query Execution Engines (QEE) in supporting the traditional query execution model. The advent of new query scenarios, mainly due to the web computational model, has motivated research on new execution models, such as adaptive and continuous execution, and on semistructured data models, such as XML, neither of which is natively supported by traditional query engines. This thesis proposes the development of an extensible QEE adapted to the new execution and data models. To achieve this goal, we use a software design approach based on the framework technique to produce the Query Execution Engine Framework (QEEF). Moreover, we address the question of the orthogonality between execution and data models, which allows query execution plans (QEPs) with fragments in different models to be executed. The extensibility of our solution is specified by an execution meta-model named QUEM (QUery Execution Meta-model), used to express different models in a meta-QEP. During query evaluation, the meta-QEP is pre-processed by the QEEF, producing a final QEP to be evaluated by the running QEE. As part of the validation of this proposal, the QEEF is instantiated for different execution and data models.
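The extensibility argument can be illustrated with the classic iterator (Volcano-style) operator interface, where a new execution or data model plugs in as one more operator class. QEEF's actual abstractions (meta-QEPs and the QUEM meta-model) go well beyond this sketch:

```python
# Volcano-style iterator operators: one shared interface, so a new
# execution or data model plugs in as one more Operator subclass.

class Operator:
    def __iter__(self):
        raise NotImplementedError

class Scan(Operator):
    def __init__(self, rows): self.rows = rows
    def __iter__(self): return iter(self.rows)

class Filter(Operator):
    def __init__(self, child, pred): self.child, self.pred = child, pred
    def __iter__(self):
        return (row for row in self.child if self.pred(row))

# Adding, say, a continuous/streaming source means writing one more
# subclass; the rest of the plan is untouched.
plan = Filter(Scan([{"x": 1}, {"x": 5}]), lambda r: r["x"] > 2)
print(list(plan))  # [{'x': 5}]
```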
270

OLAP query optimization and result visualization

Simonenko, Ekaterina 16 September 2011 (has links)
In this thesis, we explore different aspects of data warehousing and OLAP, the common point of our proposals being the functional model for data analysis. Our main objective is to use that model in studying three different but related aspects: query optimization through rewriting and cache management, query result visualization, and the mapping of a relational BCNF schema to a functional schema. Query optimization and cache management are crucial issues in query processing in general, and in data warehousing in particular, and query rewriting is one of the basic techniques for query optimization. We establish derivability conditions for analytic functional queries, using a partial pre-order over the set of queries. We then provide a sound and complete rewriting algorithm, as well as an optimized cache management strategy, both based on the underlying functional model. A second important aspect that we explore in the thesis is query result visualization. We show the importance of having the visualization reflect essential features of the dataset, such as functional dependencies.
We show that the connection existing between data and visualization is precisely the connection between their functional representations. We then define a framework whose objective is to establish such a connection for a given dataset and a set of visualizations. In addition to analyzing the visualization process, we use the functional data model as a guide for interactive visualization and define what we call a parametric visualization. A third important aspect of our work is experimentation with the results obtained in the thesis. To analyze the data contained in a Boyce-Codd Normal Form (BCNF) table, one can use the results obtained here, provided that the schema of the table can be mapped to a functional schema. We present such a mapping in this thesis. Once the relational schema has been transformed into a functional schema, we can take advantage of the query optimization and result visualization results presented in the thesis. We have used this transformation in the implementation of two prototypes in the context of two different projects.
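One concrete instance of the rewriting studied here: a query grouping by (year) is derivable from a cached result grouping by (year, month), since the cached query is finer in the pre-order over queries. The thesis's functional model decides such derivability in general; the sketch below shows only this single rollup case:

```python
# Answering SUM(sales) GROUP BY year from a cached result grouped by
# (year, month): the cached query is finer, so the coarser one is
# derivable from it.

from collections import defaultdict

cache = {(2010, 1): 40, (2010, 2): 60, (2011, 1): 10}  # SUM by (year, month)

def rollup_to_year(cached):
    totals = defaultdict(int)
    for (year, _month), subtotal in cached.items():
        totals[year] += subtotal    # SUM re-aggregates over the finer grain
    return dict(totals)

print(rollup_to_year(cache))        # {2010: 100, 2011: 10}
```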
