11.
Integrating Fuzzy Decisioning Models With Relational Database Constructs. Durham, Erin-Elizabeth A. (18 December 2014)
Human learning and classification are nebulous areas in computer science. Classic decisioning problems can be solved given enough time and computational power, but discrete algorithms cannot easily solve fuzzy problems. Fuzzy decisioning can resolve more real-world fuzzy problems, but existing algorithms are often slow and cumbersome, and unable to respond within a reasonable timeframe to anything other than predetermined, small-dataset problems. We have developed a database-integrated, highly scalable solution for training and using fuzzy decision models on large datasets. The Fuzzy Decision Tree algorithm integrates the Quinlan ID3 decision-tree algorithm with fuzzy set theory and fuzzy logic. In existing research on the microRNA prediction problem, Fuzzy Decision Tree outperformed other machine learning algorithms, including Random Forest, C4.5, SVM, and kNN. In this research, we propose that the effectiveness with which large-dataset fuzzy decisions can be resolved via the Fuzzy Decision Tree algorithm is significantly improved by using a relational database, rather than traditional storage objects, as the storage unit for the fuzzy ID3 objects. Furthermore, we demonstrate that pre-processing parts of the decisioning within the database layer can lead to much swifter membership determinations, especially on Big Data datasets. The proposed algorithm uses concepts inherent to databases: separated schemas, indexing, partitioning, pipe-and-filter transformations, pre-processed data, materialized and regular views, etc., to present a model with the potential to learn from itself. Further, this work presents a general application model for re-architecting Big Data applications to present decisioned results efficiently: lowering the volume of data handled by the application itself and significantly decreasing response wait times, while allowing the flexibility and permanence of a standard relational SQL database, supplying optimal user satisfaction in today's data analytics world. We experimentally demonstrate the effectiveness of our approach.
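A minimal sketch of the general idea of in-database fuzzy membership evaluation: membership-function parameters live in a relational table, and SQL computes the degrees of membership, so part of the decisioning work happens in the database layer. The SQLite schema, table names, and triangular membership functions below are illustrative assumptions, not the fuzzy ID3 schema described above.

```python
import sqlite3

# Sketch: membership-function parameters stored relationally, so the database
# itself computes fuzzy membership degrees. All names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL);
    CREATE TABLE fuzzy_sets (label TEXT, a REAL, b REAL, c REAL);  -- triangle corners
    INSERT INTO readings (value) VALUES (2.0), (5.0), (8.5);
    INSERT INTO fuzzy_sets VALUES ('low', 0, 2, 5), ('medium', 2, 5, 8), ('high', 5, 8, 10);
""")

# Degree of membership of each reading in each fuzzy set, computed in SQL:
# max(0, min((x - a) / (b - a), (c - x) / (c - b))) for a triangle (a, b, c).
rows = conn.execute("""
    SELECT r.id, f.label,
           MAX(0.0, MIN((r.value - f.a) / (f.b - f.a),
                        (f.c - r.value) / (f.c - f.b))) AS degree
    FROM readings r CROSS JOIN fuzzy_sets f
    ORDER BY r.id, f.label
""").fetchall()
for row in rows:
    print(row)
```

Because the parameters are ordinary rows, indexing and partitioning can apply to the membership computation itself, which is the kind of database-layer pre-processing the abstract describes.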
12.
Query Optimization for On-Demand Information Extraction Tasks over Text Databases. Farid, Mina H. (12 March 2012)
Many modern applications involve analyzing large amounts of data that come from unstructured text documents. In its original format, this data contains information that, if extracted, can give more insight and help in the decision-making process. The ability to answer structured SQL queries over unstructured data allows for more complex data analysis, and querying unstructured data can be accomplished with the help of information extraction (IE) techniques. The traditional approach is Extract-Transform-Load (ETL), which performs all possible extractions over the document corpus, stores the extracted relational results in a data warehouse, and then queries the extracted data. However, the ETL approach produces results that go out of date and causes an explosion in the number of possible relations and attributes to extract. Therefore, new approaches were developed to perform extraction on the fly; however, previous efforts relied on specialized extraction operators or particular IE algorithms, which limited the optimization opportunities for such queries.
In this work, we propose an online approach that integrates the engine of the database management system with IE systems using a new type of view called extraction views. Queries on text documents are evaluated using these extraction views, which are populated at query time with newly extracted data. Our approach enables the optimizer to apply all well-defined optimization techniques. The optimizer selects the best execution plan using a defined cost model that considers a user-defined balance between the cost and quality of extraction, and we explain the trade-off between the two factors. The main contribution is the ability to run on-demand information extraction that reflects the latest changes in the data while avoiding unnecessary extraction from irrelevant text documents.
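A minimal sketch of the extraction-view idea under simplified assumptions (SQLite, a toy regular-expression extractor, invented table names; not the authors' system): the view is populated lazily at query time, and only for the documents the query actually needs.

```python
import re
import sqlite3

# Toy on-demand extraction view: extraction runs lazily, only for relevant,
# not-yet-extracted documents. All names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE docs (doc_id INTEGER PRIMARY KEY, body TEXT);
    CREATE TABLE price_view (doc_id INTEGER, price REAL);  -- extraction view
    INSERT INTO docs VALUES (1, 'listed at $15.99 today'), (2, 'no offer here');
""")

def extract_prices(doc_id: int, body: str) -> None:
    """Run the (toy) extractor on one document and materialize its output."""
    for match in re.findall(r"\$(\d+(?:\.\d+)?)", body):
        conn.execute("INSERT INTO price_view VALUES (?, ?)", (doc_id, float(match)))

def query_prices(relevant_doc_ids):
    # Populate the view at query time, only where extractions are missing.
    extracted = {r[0] for r in conn.execute("SELECT DISTINCT doc_id FROM price_view")}
    for doc_id, body in conn.execute("SELECT doc_id, body FROM docs").fetchall():
        if doc_id in relevant_doc_ids and doc_id not in extracted:
            extract_prices(doc_id, body)
    return conn.execute(
        "SELECT doc_id, price FROM price_view WHERE doc_id IN (%s)"
        % ",".join("?" * len(relevant_doc_ids)), list(relevant_doc_ids)).fetchall()

print(query_prices({1}))  # the extractor runs only on document 1
```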
13.
Query Optimization in Dynamic Environments. El-Helw, Amr (January 2012)
Most modern applications deal with very large amounts of data, which is in itself a challenge. This challenge is complicated even more by the fact that, in many cases, the data is constantly changing and evolving. For instance, relational databases that handle the data of day-to-day transactional applications often have tables with very high data change rates. It is not uncommon even to have temporary or volatile tables that are created from scratch and completely dropped over the course of a single query workload.
This dissertation focuses on optimizing structured queries over dynamic and constantly changing data sets. Our work addresses this issue and some of the challenges related to it.
We address the issue of database statistics becoming stale and inaccurate due to constantly changing data. We introduce ways to automatically analyze the existing statistics and recommend and collect the necessary statistics to optimize a single query or a query workload.
We introduce a mechanism to automate the recommendation and collection of statistical views for a given query workload. We also compare two methods of using these statistical views in selectivity estimation. We evaluate our methods and techniques with experimental studies using prototypes that we built into commercial database systems.
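A minimal sketch of the staleness-detection step under simplified assumptions (hypothetical modification counters and a 10% threshold; not the commercial prototypes mentioned above): a table's statistics are flagged for re-collection once the fraction of rows modified since the last collection crosses the threshold.

```python
from dataclasses import dataclass

@dataclass
class TableStats:
    table: str
    rows_at_collection: int   # row count when statistics were last gathered
    modifications_since: int  # inserts + updates + deletes since then

def recommend_stats(stats_list, staleness_threshold=0.10):
    """Recommend statistics to recollect: a sketch of staleness detection.

    Statistics are considered stale once the modified fraction exceeds the
    threshold (10% here, a hypothetical default).
    """
    stale = []
    for s in stats_list:
        base = max(s.rows_at_collection, 1)
        if s.modifications_since / base > staleness_threshold:
            stale.append(s.table)
    return stale

catalog = [
    TableStats("orders", rows_at_collection=1_000_000, modifications_since=250_000),
    TableStats("nation", rows_at_collection=25, modifications_since=0),
]
print(recommend_stats(catalog))  # ['orders']
```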
14.
L'interaction au service de l'optimisation à grande échelle des entrepôts de données relationnels (Interaction in the Service of Large-Scale Optimization of Relational Data Warehouses). Kerkad, Amira (11 December 2013)
Database technology is an adequate environment for interaction, which may concern several components of the DBMS: (a) the data, (b) the queries, (c) the optimization techniques, and (d) the storage devices. At the data level, correlations between attributes are extremely common in real-world relational data and have been exploited to define materialized views and indexes. At the query level, interaction has been studied extensively as the multi-query optimization problem. Data warehouses, with their star-join queries, increase the rate of interaction. Query interaction has been used to select optimization techniques such as indexes, and it also contributes to the combined selection of multiple optimization techniques such as materialized views, indexes, data partitioning, and clustering. In existing studies, the interaction concerns only one component. In this thesis, we consider multi-component interaction with three optimization techniques, each concerning one component: query scheduling (query level), horizontal data partitioning (data level), and buffer management (storage level). Query scheduling (QS) defines an optimal execution order for queries so that some queries can benefit from already processed data. Horizontal data partitioning (HDP) divides the instances of each relation into disjoint subsets. Buffer management (BM) allocates and replaces data in the buffer pool to lower the cost of the workload. Usually, these problems are treated either in isolation or pairwise, such as BM and QS; however, they are similar and complementary. We give a deep formalization of the offline and online versions of these problems and propose advanced algorithms inspired by the natural behavior of bees. Our proposals are validated using a simulator and a real DBMS (Oracle) on the Star Schema Benchmark at large scale.
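To give a flavor of query scheduling, here is a minimal greedy sketch that orders a workload so each query overlaps as much as possible with the data fragments its predecessor touched; the workload map is hypothetical, and the thesis's bee-inspired algorithms are not reproduced here.

```python
def schedule_queries(workload):
    """Greedy query-scheduling sketch: maximize fragment overlap between
    consecutive queries so later queries can reuse buffered fragments.
    `workload` maps query name -> set of fragments it reads (hypothetical)."""
    remaining = dict(workload)
    # Start from the query touching the most fragments.
    current = max(remaining, key=lambda q: len(remaining[q]))
    order = [current]
    prev_frags = remaining.pop(current)
    while remaining:
        # Pick the query sharing the most fragments with the previous one.
        current = max(remaining, key=lambda q: len(remaining[q] & prev_frags))
        order.append(current)
        prev_frags = remaining.pop(current)
    return order

workload = {
    "Q1": {"sales_2012", "customers"},
    "Q2": {"sales_2013"},
    "Q3": {"sales_2012", "products"},
}
print(schedule_queries(workload))  # ['Q1', 'Q3', 'Q2']
```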
15.
Shared Complex Event Trend Aggregation. Rozet, Allison M. (07 May 2020)
Streaming analytics deploy Kleene pattern queries to detect and aggregate event trends against high-rate data streams. Despite increasing workloads, most state-of-the-art systems process each query independently, thus missing cost-saving sharing opportunities. Sharing complex event trend aggregation poses several technical challenges. First, the execution of nested and diverse Kleene patterns is difficult to share. Second, we must share aggregate computation without the exponential costs of constructing the event trends. Third, not all sharing opportunities are beneficial because sharing aggregation introduces overhead. We propose a novel framework, Muse (Multi-query Snapshot Execution), that shares aggregation queries with Kleene patterns while avoiding expensive trend construction. It adopts an online sharing strategy that eliminates re-computations for shared sub-patterns. To determine the beneficial sharing plan, we introduce a cost model to estimate the sharing benefit and design the Muse refinement algorithm to efficiently select robust sharing candidates from the search space. Finally, we explore optimization decisions to further improve performance. Our experiments over a wide range of scenarios demonstrate that Muse increases throughput by 4 orders of magnitude compared to state-of-the-art approaches with negligible memory requirements.
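The central trick of aggregating without trend construction can be sketched with dynamic programming: each matching event records how many trends end at it, so the total is a running sum rather than an enumeration of the exponentially many trends. The sketch below counts the trends of a single Kleene+ pattern over one stream, assuming skip-till-any-match semantics; Muse's shared multi-query machinery is not shown.

```python
def count_kleene_trends(events, predicate):
    """Count trends of a Kleene+ pattern without materializing them.

    Under skip-till-any-match semantics, a trend is any subsequence of
    matching events, so each matching event contributes 1 (the singleton
    trend) plus one extension of every trend ending at an earlier matching
    event. A running sum keeps this O(n) instead of O(2^n) enumeration.
    """
    total = 0
    ending_before = 0  # number of trends ending at earlier matching events
    for event in events:
        if predicate(event):
            ending_here = 1 + ending_before
            ending_before += ending_here
            total += ending_here
    return total

# Three matching events yield 2**3 - 1 = 7 trends, counted without enumeration.
stream = [{"type": "A"}, {"type": "B"}, {"type": "A"}, {"type": "A"}]
print(count_kleene_trends(stream, lambda e: e["type"] == "A"))  # 7
```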
16.
Accelerating SPARQL Queries and Analytics on RDF Data. Al-Harbi, Razen (09 November 2016)
The complexity of SPARQL queries and RDF applications poses great challenges for distributed RDF management systems. SPARQL workloads are dynamic and consist of queries with variable complexities. Hence, systems that use static partitioning suffer from communication overhead for workloads that generate excessive communication. Concurrently, RDF applications are becoming more sophisticated, mandating analytical operations that extend beyond SPARQL queries. Being primarily designed and optimized to execute SPARQL queries, which lack procedural capabilities, existing systems are not suitable for rich RDF analytics.
This dissertation tackles the problem of accelerating SPARQL queries and RDF analytics on distributed shared-nothing RDF systems. First, a distributed RDF engine, coined AdPart, is introduced. AdPart uses lightweight hash partitioning for sharding triples using their subject values, rendering its startup overhead very low. The locality-aware query optimizer of AdPart takes full advantage of the partitioning to (i) support the fully parallel processing of join patterns on subjects and (ii) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting, wherever possible. By exploiting hash-based locality, AdPart achieves better or comparable performance to systems that employ sophisticated partitioning schemes.
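A minimal sketch of subject-hash sharding under simplified assumptions (a fixed worker count, string subjects; not AdPart's actual implementation): hashing on the subject co-locates all triples of a subject on one worker, which is what makes subject-subject star joins run in parallel with no inter-worker communication.

```python
from collections import defaultdict
from zlib import crc32

def partition_triples(triples, num_workers):
    """Shard RDF triples by hashing the subject: triples sharing a subject
    land on the same worker, so star joins on that subject are local."""
    partitions = defaultdict(list)
    for s, p, o in triples:
        worker = crc32(s.encode()) % num_workers
        partitions[worker].append((s, p, o))
    return partitions

triples = [
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:alice", "ex:worksAt", "ex:acme"),
    ("ex:bob", "ex:knows", "ex:carol"),
]
for worker, part in sorted(partition_triples(triples, 4).items()):
    print(worker, part)  # both ex:alice triples land on the same worker
```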
To cope with workload dynamism, AdPart is extended to adapt dynamically to workload changes. AdPart monitors data access patterns and dynamically redistributes and replicates the instances of the most frequent patterns among workers. Consequently, the communication cost for future queries is drastically reduced or even eliminated. Experiments with synthetic and real data verify that AdPart starts faster than all existing systems and gracefully adapts to the query load.
Finally, to support and accelerate rich RDF analytical tasks, a vertex-centric RDF analytics framework is proposed. The framework, named SPARTex, bridges the gap between RDF and graph processing. To do so, SPARTex (i) implements a generic SPARQL operator as a vertex-centric program, coupled with an optimizer that generates efficient execution plans; (ii) allows SPARQL to invoke vertex-centric programs as stored procedures; and (iii) provides a unified in-memory data store that allows the persistence of intermediate results. Consequently, SPARTex can efficiently support RDF analytical tasks consisting of complex pipelines of operators.
17.
Constructing Accurate Synopses for Database Query Optimization and Re-optimization. Yu, Feng (01 May 2013)
Fast and accurate estimation for complex queries is profoundly beneficial for large databases with heavy workloads. The most widely adopted query optimizers use synopses to tune databases, both for optimization and for re-optimization. Chapters 1 through 3 focus on synopses for query optimization. We propose a statistical summary for a database, called CS2 (Correlated Sample Synopsis), that provides rapid and accurate result-size estimates for all queries with joins and arbitrary selections. Unlike state-of-the-art techniques, CS2 does not rely entirely on simple random samples; it consists mainly of correlated sample tuples that retain join relationships with less storage. We introduce a statistical technique, called the reverse sample, and design an innovative estimator, called the reverse estimator, to fully utilize correlated sample tuples for query estimation. We prove both theoretically and empirically that the reverse estimator is unbiased and accurate when used with CS2. Extensive experiments on multiple datasets show that CS2 is fast to construct and derives more accurate estimates than existing methods with the same space budget.
Chapter 4 focuses on synopses for the re-optimization of repetitive queries. Repetitive queries are those likely to be executed repeatedly in the future, such as queries used to generate periodic reports, perform routine maintenance, or summarize data for analysis. They can constitute a large part of the daily activity of a database system and deserve extra optimization effort. We propose collecting information about how tuples are joined in a query, called the query or join trace, during the query's execution, and using this trace to compute the selectivities of joins in all join orders for the query. We use existing operators, as well as new operators, to gather this information, and we show that the trace gathered from a query is sufficient to compute the exact selectivities of all plans for the query. To reduce the overhead of generating a trace, we propose a sampling scheme that generates only a sample of the trace. Experimental results show that accurate estimates of join selectivities can be obtained with only a small sample of the trace, making re-estimation of join selectivities for a repetitive query efficient and accurate.
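To illustrate the general flavor of correlated-sample join estimation (a generic sketch with hypothetical relations, not CS2's reverse estimator): sample one relation, follow the join into the other to keep only joining tuples, and scale the match count by the inverse sampling fraction.

```python
import random

def estimate_join_size(r_tuples, s_index, sample_fraction=0.1, seed=42):
    """Sketch of sample-based join size estimation: sample R, follow the join
    into S (keeping only joining S tuples, i.e., a correlated sample), and
    scale up by 1/sample_fraction. Unbiased for |R join S| in expectation."""
    rng = random.Random(seed)
    sample = [t for t in r_tuples if rng.random() < sample_fraction]
    matches = sum(len(s_index.get(t["fk"], ())) for t in sample)
    return matches / sample_fraction

# Hypothetical relations: R(fk) joins S(pk); S is indexed by its join key.
r_tuples = [{"fk": i % 100} for i in range(10_000)]
s_index = {k: [("s", k)] for k in range(100)}  # one S tuple per key
print(round(estimate_join_size(r_tuples, s_index)))  # close to 10,000
```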
18.
A Hybrid Cost Model for Evaluating Query Execution Plans. Wang, Ning (22 January 2024)
Query optimization aims to select a query execution plan from among all candidate plans for a given query. Query optimization in traditional relational database management systems (RDBMSs) relies on a cost model to estimate the cost of the alternative plans in the search space. The classic cost model (CCM) may lead the optimizer to choose plans with poor execution time due to inaccurate cardinality estimates and simplifying assumptions. A learned cost model (LCM) based on machine learning does not rely on such estimates and learns costs from runtime observations. While learned cost models have been shown to improve average performance, they do not guarantee consistently optimal performance, and the plans generated with the LCM do not necessarily outperform those generated with the CCM. This thesis proposes a hybrid approach that strikes a balance between the LCM and the CCM: the hybrid model uses the LCM when it is expected to be reliable in selecting a good plan and falls back to the CCM otherwise. The evaluation of the hybrid model demonstrates promising performance, indicating its potential for future applications.
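A bare-bones sketch of the fallback policy with an invented reliability signal (the thesis's actual criteria are not reproduced): cost a plan with the learned model only when the plan resembles what that model was trained on, and otherwise fall back to the classic model.

```python
def hybrid_cost(plan, lcm, ccm, training_coverage, min_coverage=0.8):
    """Hybrid cost model sketch: trust the learned cost model (LCM) only when
    it is expected to be reliable for this plan, else use the classic cost
    model (CCM). `training_coverage` estimates how well the plan's operator
    mix was represented in the LCM's training data (hypothetical signal)."""
    if training_coverage(plan) >= min_coverage:
        return lcm(plan)
    return ccm(plan)

# Toy models over a plan described as a dict (all names hypothetical).
lcm = lambda p: 0.9 * p["est_rows"]              # learned estimate
ccm = lambda p: p["est_rows"] * p["cpu_factor"]  # classic formula-based cost
coverage = lambda p: 0.95 if p["op"] == "hash_join" else 0.3

plan_a = {"op": "hash_join", "est_rows": 1000, "cpu_factor": 1.2}
plan_b = {"op": "exotic_op", "est_rows": 1000, "cpu_factor": 1.2}
print(hybrid_cost(plan_a, lcm, ccm, coverage))  # LCM used -> 900.0
print(hybrid_cost(plan_b, lcm, ccm, coverage))  # CCM fallback -> 1200.0
```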
19.
Binding Hash Technique for XML Query Optimization. Brant, Michael J. (20 July 2006)
No description available.
20.
Querying Graph Structured RDF Data. Qiao, Shi (27 January 2016)
No description available.