41 |
KEEPING TRACK OF NETWORK FLOWS: AN INEXPENSIVE AND FLEXIBLE SOLUTION. Fedyukin, Alexander V. January 2005 (has links)
No description available.
|
42 |
New techniques for efficiently discovering frequent patterns. Jin, Ruoming 01 August 2005 (has links)
No description available.
|
43 |
SEEDEEP: A System for Exploring and Querying Deep Web Data Sources. Wang, Fan 27 September 2010 (has links)
No description available.
|
44 |
Database Optimization and Evaluation: A case study in the chemical management domain. Akbary, Rocky January 2024 (has links)
Effective database management has become essential for modern organizations, especially for reducing costs while maintaining optimal performance. This project explores practical strategies to reduce response times, improve resource efficiency, and strengthen data integrity in a database for the chemical management sector. The techniques include normalization, data type optimization, and query optimization using indexes. Tools like EXPLAIN are used to understand the optimizer's logic in selecting scan types and how it can be influenced to make better decisions. mysqlslap is used for load testing to verify the effects of the changes, such as reduced latency, better memory management, and improved resource utilization.
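A minimal sketch of the kind of before/after check described in this abstract, not taken from the thesis itself: it uses SQLite's EXPLAIN QUERY PLAN as a self-contained stand-in for MySQL's EXPLAIN, and the table and column names (chemicals, cas_number) are invented for illustration.

```python
# Sketch: observe how adding an index changes the optimizer's scan choice.
# SQLite is used here only so the example runs without a server; the workflow
# mirrors the EXPLAIN-driven tuning the abstract describes for MySQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chemicals (id INTEGER PRIMARY KEY, cas_number TEXT, name TEXT)")
conn.executemany(
    "INSERT INTO chemicals (cas_number, name) VALUES (?, ?)",
    [(f"50-00-{i}", f"substance-{i}") for i in range(1000)],
)

query = "SELECT name FROM chemicals WHERE cas_number = ?"

def show_plan(label):
    # EXPLAIN QUERY PLAN reports whether the optimizer scans the table or uses an index.
    rows = conn.execute("EXPLAIN QUERY PLAN " + query, ("50-00-42",)).fetchall()
    print(label, [r[-1] for r in rows])

show_plan("before index:")   # expected: a full table scan of chemicals
conn.execute("CREATE INDEX idx_chem_cas ON chemicals (cas_number)")
show_plan("after index:")    # expected: a search using idx_chem_cas
```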
|
45 |
HyQoZ - Optimisation de requêtes hybrides basée sur des contrats SLA / HyQoZ – SLA-aware hybrid query optimization. Lopez-Enriquez, Carlos-Manuel 23 October 2014 (has links)
Today we are witnessing an explosion in the amount of widely distributed data produced by different devices (e.g. sensors, computing devices, networks, analysis processes) and exposed through so-called data services. In this context, the task is to evaluate queries termed hybrid because they combine aspects of classic, mobile, and continuous queries over static or mobile data services operating in push or pull mode. The objective of this thesis is to propose an approach for optimizing such hybrid queries based on multi-criteria preferences (i.e. SLA, Service Level Agreement). The principle is to combine data and computation services to build a query evaluator tailored to the SLA required by the user, while taking into account the QoS conditions of the services and of the network.
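A rough illustration, not drawn from the thesis, of what multi-criteria, SLA-driven plan selection can look like: candidate evaluation plans carry estimated QoS measures and are scored against SLA targets, with the least-violating plan chosen. All plan names, metrics, targets, and weights below are assumptions made for the example.

```python
# Hypothetical sketch of SLA-aware plan selection over estimated QoS measures.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    latency_ms: float    # estimated response time
    cost_units: float    # estimated monetary/compute cost
    staleness_s: float   # estimated data freshness lag

# SLA expressed as target values plus the relative importance of each criterion.
sla_targets = {"latency_ms": 200.0, "cost_units": 1.0, "staleness_s": 5.0}
sla_weights = {"latency_ms": 0.5, "cost_units": 0.2, "staleness_s": 0.3}

def sla_penalty(plan: Plan) -> float:
    # Penalize only violations: values at or below the target cost nothing.
    penalty = 0.0
    for metric, target in sla_targets.items():
        value = getattr(plan, metric)
        penalty += sla_weights[metric] * max(0.0, (value - target) / target)
    return penalty

candidates = [
    Plan("push-based, cached", latency_ms=120, cost_units=1.8, staleness_s=30),
    Plan("pull-based, fresh", latency_ms=450, cost_units=0.9, staleness_s=1),
    Plan("hybrid", latency_ms=220, cost_units=1.2, staleness_s=4),
]

best = min(candidates, key=sla_penalty)
print(best.name, round(sla_penalty(best), 3))
```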
|
46 |
Dynamic Optimization and Migration of Continuous Queries Over Data Streams. Zhu, Yali 23 August 2006 (has links)
"Continuous queries process real-time streaming data and output results in streams for a wide range of applications. Due to the fluctuating stream characteristics, a streaming database system needs to dynamically adapt query execution. This dissertation proposes novel solutions to continuous query adaptation in three core areas, namely dynamic query optimization, dynamic plan migration and partitioned query adaptation. Runtime query optimization needs to efficiently generate plans that satisfy both CPU and memory resource constraints. Existing work focus on minimizing intermediate query results, which decreases memory and CPU usages simultaneously. However, doing so cannot assure that both resource constraints are being satisfied, because memory and CPU can be either positively or negatively correlated. This part of the dissertation proposes efficient optimization strategies that utilize both types of correlations to search the entire query plan space in polynomial time when a typical exhaustive search would take at least exponential time. Extensive experimental evaluations have demonstrated the effectiveness of the proposed strategies. Dynamic plan migration is concerned with on-the-fly transition from one continuous plan to a semantically equivalent yet more efficient plan. It is a must to guarantee the continuation and repeatability of dynamic query optimization. However, this research area has been largely neglected in the current literature. The second part of this dissertation proposes migration strategies that dynamically migrate continuous queries while guaranteeing the integrity of the query results, meaning there are no missing, duplicate or incorrect results. The extensive experimental evaluations show that the proposed strategies vary significantly in terms of output rates and memory usages given distinct system configurations and stream workloads. Partitioned query processing is effective to process continuous queries with large stateful operators in a distributed system. Dynamic load redistribution is necessary to balance uneven workload across machines due to changing stream properties. However, existing solutions generally assume static query plans without runtime query optimization. This part of the dissertation evaluates the benefits of applying query optimization in partitioned query processing and shows dramatic performance improvement of more than 300%. Several load balancing strategies are then proposed to consider the heterogeneity of plan shapes across machines caused by dynamic query optimization. The effectiveness of the proposed strategies is analyzed through extensive experiments using a cluster."
|
47 |
VAMANA: A High Performance, Scalable and Cost Driven XPath Engine. Raghavan, Venkatesh 05 May 2004 (has links)
Many applications are migrating to, or beginning to make use of, native XML data. We anticipate that queries will emerge that emphasize the structural semantics of XML query languages like XPath and XQuery. This brings a need for an efficient query engine and database management system tailored for XML data, similar to traditional relational engines. While mapping large XML documents into relational database systems is possible, it poses difficulty in translating XML queries into the less powerful relational query language SQL and creates a data model mismatch between relational tables and semi-structured XML data. Hence native solutions for efficiently storing and querying XML data have recently been developed. However, most of these systems have thus far failed to demonstrate scalability to large document sizes, to provide robust support for the XPath query language, or to adequately address costing with respect to query optimization. In this thesis, we propose VAMANA, a novel cost-driven XPath engine that supports the scalable evaluation of ad-hoc XPath expressions. VAMANA makes use of the Multi-Axis Storage Structure (MASS), an efficient XML repository for storing and indexing large XML documents developed at WPI. VAMANA extensively uses indexes for query evaluation by considering index-only plans. To the best of our knowledge, it is the only XML query engine that supports an index-plan approach for large XML documents. Our index-oriented query plans allow queries to be evaluated while reading only a fraction of the data, as all tuples for a particular context node are clustered together. The pipelined query framework minimizes the cost of handling intermediate data during query processing. Unlike other native solutions, VAMANA provides support for all 13 XPath axes. Our schema-independent cost model provides dynamically calculated statistics that are then used for intelligent cost-based transformations, further improving performance. Our optimization strategy for improving execution time is affirmed through experimental studies on XMark benchmark data. VAMANA query execution is significantly faster than leading available XML query engines.
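To make the idea of an "index-only" plan concrete, here is a toy sketch that is not VAMANA's implementation: elements are indexed by tag name with (start, end) interval labels, so a descendant step can be answered purely from the index, without re-reading the document. The tags, labels, and data are invented.

```python
# Invented example: answering a //regions//item style descendant step from a
# tag index alone, using interval containment to test ancestry.

# tag -> sorted list of (start, end) interval labels, one per element occurrence
element_index = {
    "site": [(1, 100)],
    "regions": [(2, 60)],
    "item": [(5, 9), (10, 14), (70, 74)],
}

def descendants(ancestor_tag, descendant_tag):
    """Return interval labels of descendant_tag elements nested inside ancestor_tag."""
    result = []
    for a_start, a_end in element_index.get(ancestor_tag, []):
        for d_start, d_end in element_index.get(descendant_tag, []):
            if a_start < d_start and d_end < a_end:   # interval containment = ancestry
                result.append((d_start, d_end))
    return result

print(descendants("regions", "item"))   # [(5, 9), (10, 14)]
```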
|
48 |
Power-Performance Tradeoffs in Database Systems. Xu, Zichen 02 July 2009 (has links)
With the total energy consumption of computing systems increasing at a steep rate, much attention has been paid to the design of energy-efficient computing systems and applications. So far, database system design has focused on improving the performance of query processing. The objective of this study is to explore the potential for energy conservation in relational database management systems. The hypothesis is that by modifying the query optimizer in a database management system (DBMS) to take the energy cost of query plans into consideration, we can reduce the energy usage of database servers and control the tradeoff between energy consumption and system performance. In this thesis, we provide an in-depth anatomy of typical queries in various benchmarks and qualitatively analyze the energy profile of such queries. The results of extensive experiments show that power savings in the range of 11% to 22% can be achieved by equipping the DBMS with a simple query optimizer that selects query plans based on both estimated processing time and energy requirements. We advocate that more research effort be invested in the design and evaluation of power-aware DBMSs, in the hope of reaching higher levels of energy efficiency.
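A hypothetical sketch of the tradeoff the abstract describes, not the thesis's actual optimizer: each candidate plan is scored by a weighted combination of estimated processing time and estimated energy, so a single knob shifts the choice between performance and power. The plan names, estimates, and weighting scheme are assumptions for illustration.

```python
# Sketch: plan selection that blends estimated time and energy costs.
from dataclasses import dataclass

@dataclass
class PlanEstimate:
    name: str
    time_s: float     # estimated processing time
    energy_j: float   # estimated energy consumption

def pick_plan(plans, power_weight=0.3):
    # power_weight = 0 reproduces a purely performance-driven optimizer;
    # power_weight = 1 minimizes energy alone. Normalization keeps units comparable.
    t_max = max(p.time_s for p in plans)
    e_max = max(p.energy_j for p in plans)
    def score(p):
        return (1 - power_weight) * p.time_s / t_max + power_weight * p.energy_j / e_max
    return min(plans, key=score)

plans = [
    PlanEstimate("hash join, all cores", time_s=2.0, energy_j=90.0),
    PlanEstimate("index nested loops, one core", time_s=3.5, energy_j=55.0),
]
print(pick_plan(plans, power_weight=0.3).name)   # favors the faster plan
print(pick_plan(plans, power_weight=0.9).name)   # favors the lower-energy plan
```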
|
49 |
Optimization and Execution of Complex Scientific Queries. Fomkin, Ruslan January 2009 (has links)
Large volumes of data produced and shared within scientific communities are analyzed by many researchers to investigate different scientific theories. Currently the analyses are implemented in traditional programming languages such as C++. This is inefficient for research productivity, since it is difficult to write, understand, and modify such programs. Furthermore, programs should scale over large data volumes and analysis complexity, which further complicates code development. This thesis investigates the use of database technologies to implement scientific applications in which data are complex objects describing measurements of independent events and the analyses are selections of events made by applying conjunctions of complex numerical filters to each object separately. An example of such an application is the analysis of collision events produced by the ATLAS experiment for the presence of Higgs bosons. For efficient implementation of such an ATLAS application, a new data stream management system, SQISLE, is developed. In SQISLE, queries are specified over complex objects which are efficiently streamed from sources through the query engine. This streaming approach is compared with the conventional approach of loading events into a database before querying. Since the queries implementing scientific analyses are large and complex, novel techniques are developed for efficient query processing. To obtain efficient plans for such queries, SQISLE implements runtime query optimization strategies which, during query execution, collect runtime statistics for a query, reoptimize the query using the collected statistics, and dynamically switch optimization strategies. The cost-based optimization utilizes a novel cost model for aggregate functions over nested subqueries. To alleviate estimation errors in large queries, they are decomposed into conjunctions of subquery fragments over which runtime statistics are measured. Performance is further improved by query transformation, view materialization, and partial evaluation. ATLAS queries in SQISLE using these query processing techniques perform close to or better than hard-coded C++ implementations of the same analyses. Scientific data are often stored in Grids, which manage both storage and computational resources. This thesis also includes a framework, POQSEC, that utilizes Grid resources to scale scientific queries over large data volumes by parallelizing the queries and shipping the data management system itself, e.g. SQISLE, to Grid computational nodes for parallel query execution.
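An invented toy sketch of the runtime re-optimization loop the abstract describes, not SQISLE itself: filters in a conjunctive event selection are monitored while the stream is processed, and the evaluation order is periodically re-sorted by observed selectivity and cost. The event fields and thresholds are made-up placeholders loosely styled after a physics selection.

```python
# Sketch: monitor conjunctive filters at runtime and reorder them periodically.
import random

class MonitoredFilter:
    def __init__(self, name, predicate, cost=1.0):
        self.name, self.predicate, self.cost = name, predicate, cost
        self.seen, self.passed = 0, 0

    def __call__(self, event):
        self.seen += 1
        ok = self.predicate(event)
        self.passed += ok
        return ok

    def rank(self):
        # classic ordering heuristic: (selectivity - 1) / cost, most negative first,
        # i.e. cheap and selective filters should run earliest
        sel = self.passed / self.seen if self.seen else 0.5
        return (sel - 1.0) / self.cost

filters = [
    MonitoredFilter("missing_energy > 100", lambda e: e["met"] > 100, cost=1.0),
    MonitoredFilter("n_leptons >= 2", lambda e: e["leptons"] >= 2, cost=0.2),
]

random.seed(0)
stream = ({"met": random.uniform(0, 200), "leptons": random.randint(0, 4)} for _ in range(10000))

accepted = 0
for i, event in enumerate(stream):
    if all(f(event) for f in filters):   # conjunction short-circuits left to right
        accepted += 1
    if i % 1000 == 999:                  # periodic re-optimization point
        filters.sort(key=MonitoredFilter.rank)
print(accepted, [f.name for f in filters])
```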
|
50 |
Answering Object Queries over Knowledge Bases with Expressive Underlying Description Logics. Wu, Jiewen January 2013 (has links)
Many information sources can be viewed as collections of objects and descriptions about objects. The relationship between objects is often characterized by a set of constraints that semantically encode background knowledge of some domain. The most straightforward and fundamental way to access information in these repositories is to search for objects that satisfy certain selection criteria. This work considers a description logic (DL) based representation of such information sources and object queries, which allows for automated reasoning over the constraints accompanying objects. Formally, a knowledge base K = (T, A) captures constraints in the terminology (a TBox) T and objects with their descriptions in the assertions (an ABox) A, using some DL dialect L. In such a setting, object descriptions are L-concepts and object identifiers correspond to individual names occurring in K. Object queries then correspond to the well-known problem of instance retrieval in the underlying DL knowledge base K, which returns the identifiers of qualifying objects.
This work generalizes instance retrieval over knowledge bases to provide users with answers in which both identifiers and descriptions of qualifying objects are given. The proposed query paradigm, called assertion retrieval, is favoured over instance retrieval since it provides more informative answers to users. A more compelling reason is related to performance: assertion retrieval enables a transfer of basic relational database techniques, such as caching and query rewriting, in the context of an assertion retrieval algebra.
The main contributions of this work are two-fold: one concerns optimizing the fundamental reasoning task that underlies assertion retrieval, namely instance checking, and the other establishes a query compilation framework based on the assertion retrieval algebra. The former is necessary because an assertion retrieval query can entail a large volume of instance checking requests of the form K |= a:C, where "a" is an individual name and "C" is an L-concept. This work thus proposes a novel absorption technique, ABox absorption, to improve instance checking. ABox absorption handles knowledge bases whose underlying dialect L is expressive, for instance one that requires disjunctive knowledge. It works particularly well when knowledge bases contain a large number of concrete domain concepts for object descriptions.
This work further presents a query compilation framework based on the assertion retrieval algebra to make assertion retrieval more practical. In the framework, a suite of rewriting rules is provided to generate a variety of query plans, with a focus on plans that avoid reasoning w.r.t. the background knowledge bases when sufficient cached results of earlier requests exist. ABox absorption and the query compilation framework have been implemented in a prototypical system, dubbed CARE Assertion Retrieval Engine (CARE). CARE also defines a simple yet effective cost model to search for the best plan generated by query rewriting. Empirical studies of CARE have shown that the proposed techniques in this work make assertion retrieval a practical application over a variety of domains.
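A toy illustration, not CARE's actual machinery, of why caching matters here: each instance check K |= a:C stands in for an expensive reasoner call, so memoizing the results lets a repeated assertion-retrieval request be answered without touching the reasoner again. The reasoner stub, concept names, and ABox contents are invented for the example.

```python
# Sketch: memoized instance checking so repeated retrieval requests reuse
# earlier reasoning instead of re-invoking the (expensive) DL reasoner.
from functools import lru_cache

reasoner_calls = 0

def dl_reasoner_entails(individual: str, concept: str) -> bool:
    # Stand-in for a real DL reasoner's instance check; here just a toy lookup.
    global reasoner_calls
    reasoner_calls += 1
    toy_abox = {"alice": {"Employee", "Manager"}, "bob": {"Employee"}}
    return concept in toy_abox.get(individual, set())

@lru_cache(maxsize=None)
def instance_check(individual: str, concept: str) -> bool:
    return dl_reasoner_entails(individual, concept)

def retrieve(concept, individuals):
    # Retrieval restricted to identifiers; object descriptions are omitted in this toy.
    return [a for a in individuals if instance_check(a, concept)]

names = ["alice", "bob"]
print(retrieve("Employee", names))        # first query: two reasoner calls
print(retrieve("Employee", names))        # repeated query answered from cache
print("reasoner calls:", reasoner_calls)  # 2, not 4
```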
|