1 |
'n Ondersoek na en bydraes tot navraaghantering en -optimering deur databasisbestuurstelsels [An investigation into and contributions to query handling and optimisation by database management systems] / Muller, Leslie, January 2006
The problems associated with the effective design and use of databases are increasing. The
information contained in a database is becoming more complex and the size of the data is
causing space problems. Technology must continually develop to accommodate this growing
need. An investigation was conducted to find effective guidelines that could support queries
in general in terms of performance and productivity. Two database management systems were
researched to compare the theoretical aspects with the techniques implemented in practice.
Microsoft SQL Server and MySQL were chosen as the candidates and both were put under
close scrutiny. The systems were researched to uncover the methods employed by each to
manage queries. The query optimizer forms the basis for each of these systems and manages
the parsing and execution of any query. The methods employed by each system for storing
data were researched.
The ways in which each system manages table joins, uses indices and chooses an optimal execution
plan were researched. Adjusted algorithms were introduced for various index structures such as
B+-trees and hash indexes.
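A minimal illustrative sketch (not from the thesis) of the two index families named above: a hash index supports roughly constant-time equality lookups, while a B+-tree keeps keys sorted and therefore also supports range scans. The Python stand-ins below use a dictionary and a sorted list in place of real bucket arrays and linked leaf pages.

import bisect

class HashIndex:
    """Stand-in for a hash index: key -> row ids, average O(1) point lookups."""
    def __init__(self):
        self._buckets = {}

    def insert(self, key, row_id):
        self._buckets.setdefault(key, []).append(row_id)

    def lookup(self, key):
        return self._buckets.get(key, [])

class OrderedIndex:
    """Stand-in for a B+-tree: sorted keys allow both point lookups and range scans."""
    def __init__(self):
        self._keys = []   # kept sorted
        self._rows = []   # row ids aligned with _keys

    def insert(self, key, row_id):
        pos = bisect.bisect_left(self._keys, key)
        self._keys.insert(pos, key)
        self._rows.insert(pos, row_id)

    def range_scan(self, low, high):
        lo = bisect.bisect_left(self._keys, low)
        hi = bisect.bisect_right(self._keys, high)
        return self._rows[lo:hi]

if __name__ == "__main__":
    h, o = HashIndex(), OrderedIndex()
    for row_id, key in enumerate([42, 7, 19, 23, 7]):
        h.insert(key, row_id)
        o.insert(key, row_id)
    print(h.lookup(7))           # -> [1, 4]
    print(o.range_scan(10, 30))  # -> [2, 3] (keys 19 and 23)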
Guidelines were compiled that are independent of the database management systems and help
to optimize relational databases. Practical implementations of queries were used to acquire and
analyse the execution plan for both MySQL and SQL Server. This plan, along with a few other
variables such as execution time, is discussed for each system. A model is used for both
database management systems in this experiment. / Thesis (M.Sc. (Computer Science))--North-West University, Potchefstroom Campus, 2007.
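As a self-contained illustration of the workflow described in this abstract (acquiring and inspecting an execution plan), the sketch below uses Python's built-in sqlite3 module and EXPLAIN QUERY PLAN; the thesis itself examined MySQL and SQL Server, where the equivalent output comes from EXPLAIN (or EXPLAIN FORMAT=JSON) and SET SHOWPLAN_XML ON respectively. The table, index and column names are invented for the example.

import sqlite3

# sqlite3 is used only because it ships with Python; the thesis studied
# MySQL and SQL Server. Table, index and data are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

query = "SELECT total FROM orders WHERE customer_id = ?"
for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
    # Each row is one step of the chosen plan, e.g. whether the optimizer
    # searches idx_orders_customer or falls back to a full table scan.
    print(row)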
|
2 |
Evaluating the effect of cardinality estimates on two state-of-the-art query optimizers' selection of access method / Barksten, Martin, January 2016
This master's thesis concerns relational databases and their query optimizers' sensitivity to cardinality estimates, and the effect the quality of the estimate has on the number of different access methods used for the same relation. Two databases, PostgreSQL and MariaDB, were evaluated on a real-world dataset to provide realistic results. The evaluation was done via a tool implemented in Clojure, and tests were conducted on a query, and subsets of it, with varying sample sizes used when estimating cardinality. The results indicate that MariaDB's query optimizer is less sensitive to cardinality estimates and for all tests selects the same access methods, regardless of the quality of the cardinality estimate. This stands in contrast to PostgreSQL's query optimizer, which will vary between using an index or doing a full table scan depending on the estimated cardinality. Finally, it is also found that the predicate value used in the query affects the access method used. Both PostgreSQL and MariaDB are found to be sensitive to this property, with MariaDB having the largest number of different access methods used depending on the predicate value.
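A sketch of the kind of probe such an evaluation performs; the thesis tool was written in Clojure, so this is an independent Python illustration using psycopg2 against PostgreSQL, and the connection string, table and column names are placeholders. The same query is explained with different predicate values, and the first plan line shows which access method the planner picked.

import psycopg2  # assumes a reachable PostgreSQL instance

conn = psycopg2.connect("dbname=mydb user=me")  # placeholder DSN
cur = conn.cursor()

# Explain the same query for rare and frequent predicate values; the top
# plan line reveals the access method, e.g. 'Index Scan using ...' versus
# 'Seq Scan on ...'.
for value in (1, 500, 100000):
    cur.execute("EXPLAIN SELECT * FROM measurements WHERE sensor_id = %s", (value,))
    top_line = cur.fetchone()[0]
    print(f"sensor_id = {value}: {top_line}")

cur.close()
conn.close()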
|
3 |
Efficiently Approximating Query Optimizer Diagrams / Dey, Atreyee, 08 1900
Modern database systems use a query optimizer to identify the most efficient strategy, called “query execution plan”, to execute declarative SQL queries. The role of the query optimizer is especially critical for the complex decision-support queries featured in current data warehousing and data mining applications.
Given an SQL query template that is parametrized on the selectivities of the participating base relations and a choice of query optimizer, a plan diagram is a color-coded pictorial enumeration of the execution plan choices of the optimizer over the query parameter space. Complementary to the plan diagrams are cost and cardinality diagrams, which graphically plot the estimated execution costs and cardinalities, respectively, over the query parameter space. These diagrams are collectively known as optimizer diagrams. Optimizer diagrams have proved to be a powerful tool for the analysis and redesign of modern optimizers, and are gaining interest in diverse industrial and academic institutions. However, their utility is adversely impacted by the impractically large computational overheads incurred when standard brute-force approaches are used for producing fine-grained diagrams on high-dimensional query templates.
In this thesis, we investigate strategies for efficiently producing close approximations to complex optimizer diagrams. Our techniques are customized for different classes of optimizers, ranging from generic Class I optimizers, which provide only the optimal plan for a query, to Class II optimizers, which also support costing of sub-optimal plans, and Class III optimizers, which offer enumerated rank-ordered lists of plans in addition to both of the former features.
For approximating plan diagrams for Class I optimizers, we first present database-oblivious techniques based on classical random sampling in conjunction with a nearest-neighbor (NN) inference scheme. Next, we propose grid-sampling algorithms that incorporate database-specific knowledge such as (a) the structural differences between the operator trees of plans at the grid locations and (b) the parametric query optimization principle. These algorithms become more efficient when modified to exploit the sub-optimal plan costing feature available with Class II optimizers. The final algorithm, developed for Class III optimizers, assumes plan cost monotonicity and utilizes the rank-ordered lists of plans to efficiently generate completely accurate optimizer diagrams. Subsequently, we provide a relaxed variant, which trades quality of approximation for a reduction in diagram generation overhead. Our proposed algorithms are capable of terminating according to a user-given error bound on the plan diagram approximation.
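A toy sketch of the first of these ideas, random sampling combined with nearest-neighbor (NN) inference for a Class I optimizer; the optimal_plan oracle below is a stand-in for a real optimizer call, and the grid-sampling, PQO-based and Class II/III refinements described above are not shown.

import math
import random

def approximate_plan_diagram(optimal_plan, resolution=50, sample_frac=0.1, seed=0):
    """Toy sampling + NN inference over a 2-D selectivity grid.

    optimal_plan(x, y) -> plan label for selectivities x, y in (0, 1];
    in practice this is a call to the optimizer, here it is a stand-in.
    """
    rng = random.Random(seed)
    grid = [(i / resolution, j / resolution)
            for i in range(1, resolution + 1)
            for j in range(1, resolution + 1)]

    sampled = rng.sample(grid, int(sample_frac * len(grid)))
    known = {pt: optimal_plan(*pt) for pt in sampled}   # optimizer calls

    diagram = {}
    for pt in grid:
        if pt in known:
            diagram[pt] = known[pt]
        else:
            # NN inference: copy the plan of the closest sampled point.
            nearest = min(known, key=lambda s: math.dist(s, pt))
            diagram[pt] = known[nearest]
    return diagram

# Fake two-plan optimizer: the chosen plan depends on the total selectivity.
fake_optimizer = lambda x, y: "P1" if x + y < 1.0 else "P2"
diagram = approximate_plan_diagram(fake_optimizer)
print(diagram[(0.2, 0.2)], diagram[(0.9, 0.9)])  # likely P1 P2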
For approximating cost diagrams, our strategy is based on linear least-squares regression performed on a mathematical model of plan cost behavior over the parameter space, in conjunction with interpolation techniques. Game-theoretic and linear programming approaches have been employed to further reduce the error in the cost approximation.
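To make the cost-diagram strategy concrete, the sketch below fits an assumed stand-in model (cost linear in the two selectivities and their product) by ordinary least squares with NumPy; the thesis's actual cost model, and its game-theoretic and linear-programming refinements, are not reproduced here.

import numpy as np

# Synthetic (s1, s2, cost) samples; in practice these come from the optimizer.
rng = np.random.default_rng(0)
s1 = rng.uniform(0.01, 1.0, 200)
s2 = rng.uniform(0.01, 1.0, 200)
cost = 5.0 + 12.0 * s1 + 7.0 * s2 + 30.0 * s1 * s2 + rng.normal(0.0, 0.5, 200)

# Assumed model: cost ~ a + b*s1 + c*s2 + d*s1*s2, fitted by linear least squares.
X = np.column_stack([np.ones_like(s1), s1, s2, s1 * s2])
coeffs, *_ = np.linalg.lstsq(X, cost, rcond=None)

def predict_cost(x1, x2):
    """Interpolate the estimated cost at a non-sampled selectivity point."""
    return coeffs @ np.array([1.0, x1, x2, x1 * x2])

print(coeffs)                  # approximately [5, 12, 7, 30]
print(predict_cost(0.3, 0.6))  # estimated cost at an unsampled point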
For approximating cardinality diagrams, we propose a novel parametrized mathematical model, as a function of the selectivities, for characterizing query cardinality behavior. The complete cardinality model is constructed by clustering the data points according to their cardinality values and subsequently fitting the model, via linear least-squares regression, separately for each cluster. For non-sampled data points, the cardinality values are estimated by first determining the cluster they belong to and then interpolating the cardinality value according to that cluster's model.
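A compact sketch of the cluster-then-fit idea; the two-regime synthetic data, the use of scikit-learn's KMeans on log cardinalities, the linear model form and the nearest-sampled-neighbor cluster assignment are all assumptions made for illustration, not the thesis's exact construction.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
sel = rng.uniform(0.01, 1.0, (300, 2))        # sampled selectivity points
card = np.where(sel[:, 0] < 0.5,              # two artificial cardinality regimes
                1e3 * sel[:, 1],
                1e6 * sel[:, 0] * sel[:, 1]) * rng.normal(1.0, 0.02, 300)

# 1. Cluster the sampled points by their (log) cardinality values.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    np.log10(card).reshape(-1, 1))

# 2. Fit a separate linear model per cluster: card ~ a + b*s1 + c*s2 + d*s1*s2.
models = {}
for k in (0, 1):
    pts, y = sel[labels == k], card[labels == k]
    X = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1], pts[:, 0] * pts[:, 1]])
    models[k], *_ = np.linalg.lstsq(X, y, rcond=None)

# 3. For a non-sampled point, inherit the cluster of its nearest sampled
#    neighbor in selectivity space, then apply that cluster's model.
def estimate_cardinality(x1, x2):
    k = labels[np.argmin(np.linalg.norm(sel - np.array([x1, x2]), axis=1))]
    return models[k] @ np.array([1.0, x1, x2, x1 * x2])

print(estimate_cardinality(0.2, 0.4), estimate_cardinality(0.8, 0.9))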
Extensive experimentation with a representative set of TPC-H and TPC-DS-based query templates on industrial-strength optimizers indicates that our techniques are capable of delivering 90% accurate optimizer diagrams while incurring no more than 20% of the computational overheads of the exhaustive approach. In fact, for full-featured optimizers, we can guarantee zero-error optimizer diagrams, which usually require less than 10% overheads. Our results show that (a) the approximation is materially faithful to the features of the exact optimizer diagram, with the errors thinly spread across the picture and largely confined to the plan transition boundaries, and (b) the cost increase at non-sampled points due to the assignment of sub-optimal plans is also limited.
These approximation techniques have been implemented in the publicly available Picasso optimizer visualizer tool. We have also modified PostgreSQL's optimizer to incorporate the costing of sub-optimal plans and the enumeration of rank-ordered lists of plans. In addition, we have designed estimators for predicting the time overhead involved in approximating optimizer diagrams with respect to user-given error bounds.
In summary, this thesis demonstrates that accurate approximations to exact optimizer diagrams can indeed be obtained cheaply and consistently, with typical overheads being an order of magnitude lower than those of the brute-force approach. We hope that our results will encourage database vendors to incorporate the foreign-plan-costing and plan-rank-list features in their optimizer APIs.
|