1. Workload Adaptation in Autonomic Database Management Systems
Niu, Baoning. 30 January 2008.
Workload adaptation is a performance management process in which an autonomic database management system (DBMS) efficiently makes use of its resources by filtering or controlling the workload presented to it in order to meet its Service Level Objectives (SLOs). It is a challenge to adapt multiple workloads with complex resource requirements towards their performance goals while taking their business importance into account. This thesis studies approaches and techniques for workload adaptation.
First, we build a general framework for workload adaptation in autonomic DBMSs, composed of two processes: workload detection and workload control. These processes are in turn made up of four functional components: workload characterization, performance modeling, workload control, and system monitoring.
We then implement a query scheduler that performs workload adaptation in a DBMS, as a test bed to demonstrate the effectiveness of the framework. The query scheduler manages multiple classes of queries to meet their performance goals by allocating DBMS resources through admission control in the presence of workload fluctuation. The resource allocation plan is derived by maximizing an objective function that encapsulates the performance goals of all classes and their importance to the business. First-principle performance models are used to predict performance under a new resource allocation plan. Experiments with IBM® DB2® show the effectiveness of the framework.
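The thesis text above gives no code, but the core idea of admission control driven by an importance-weighted objective over model-predicted performance can be sketched roughly as follows. This is a minimal illustration with hypothetical class names, SLO targets, and a deliberately simple queueing-style response-time formula; it is not the scheduler or the performance model from the thesis.

```python
import itertools

# Hypothetical service classes: SLO target (s), business importance,
# arrival rate (queries/s), and mean service demand (s). Values are illustrative.
CLASSES = {
    "gold":   {"slo_resp": 2.0, "importance": 3.0, "arrival_rate": 4.0, "service_time": 0.20},
    "silver": {"slo_resp": 5.0, "importance": 1.0, "arrival_rate": 6.0, "service_time": 0.25},
}
TOTAL_SLOTS = 10  # admission-control slots to divide among the classes


def predicted_response_time(c, slots):
    """Very simple first-principle estimate of mean response time for a class
    that is granted `slots` concurrent query slots."""
    if slots == 0:
        return float("inf")
    capacity = slots / c["service_time"]                 # queries the slots can serve per second
    utilization = min(c["arrival_rate"] / capacity, 0.999)
    return c["service_time"] / (1.0 - utilization)       # service time inflated by queueing delay


def objective(allocation):
    """Weighted sum of per-class SLO attainment, scaled by business importance."""
    total = 0.0
    for name, slots in allocation.items():
        c = CLASSES[name]
        attainment = min(c["slo_resp"] / predicted_response_time(c, slots), 1.0)
        total += c["importance"] * attainment
    return total


def best_allocation():
    """Enumerate feasible slot allocations and keep the one maximizing the objective."""
    names = list(CLASSES)
    best, best_score = None, float("-inf")
    for split in itertools.product(range(TOTAL_SLOTS + 1), repeat=len(names)):
        if sum(split) != TOTAL_SLOTS:
            continue
        alloc = dict(zip(names, split))
        score = objective(alloc)
        if score > best_score:
            best, best_score = alloc, score
    return best, best_score


if __name__ == "__main__":
    alloc, score = best_allocation()
    print("chosen allocation:", alloc, "objective:", round(score, 3))
```

In a real scheduler the exhaustive search would be replaced by a cheaper optimization and the model calibrated against the DBMS, but the structure (predict, score, pick the best allocation) is the same.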
The effectiveness of workload adaptation depends on the accuracy of the performance prediction. Finally, we introduce a tracking (Kalman) filter to improve prediction accuracy. Experimental results show that the approach reduces both prediction errors and the number of unpredicted SLO violations.
Thesis (Ph.D., Computing), Queen's University, 2008.
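As a rough illustration of the tracking idea, a scalar Kalman filter can follow the drifting bias between model-predicted and observed performance and use it to correct subsequent predictions. The class below is a generic textbook filter with illustrative noise settings, not the filter design used in the thesis.

```python
class ScalarKalmanFilter:
    """Minimal 1-D Kalman filter tracking a slowly drifting quantity,
    e.g. the bias between model-predicted and observed response time."""

    def __init__(self, initial_estimate=0.0, initial_variance=1.0,
                 process_noise=0.01, measurement_noise=0.25):
        self.x = initial_estimate   # current state estimate (the tracked bias)
        self.p = initial_variance   # estimate variance
        self.q = process_noise      # how fast the true bias may drift
        self.r = measurement_noise  # noise in each observation

    def update(self, measurement):
        # Predict step: state assumed constant, uncertainty grows by q.
        self.p += self.q
        # Update step: blend prediction and measurement by the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (measurement - self.x)
        self.p *= (1.0 - k)
        return self.x


if __name__ == "__main__":
    kf = ScalarKalmanFilter()
    # Illustrative stream of (model prediction, observed value) pairs.
    samples = [(2.0, 2.4), (2.1, 2.6), (2.0, 2.5), (2.2, 2.8), (2.1, 2.7)]
    for predicted, observed in samples:
        bias = kf.update(observed - predicted)
        corrected = predicted + bias   # corrected prediction for the next allocation decision
        print(f"raw={predicted:.2f}  corrected={corrected:.2f}  tracked bias={bias:.2f}")
```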
2. Workload-based optimization of integration processes
Böhm, Matthias; Wloka, Uwe; Habich, Dirk; Lehner, Wolfgang. 03 July 2023.
The efficient execution of integration processes between distributed, heterogeneous data sources and applications is a challenging research area of data management. These integration processes are an abstraction for workflow-based integration tasks, as used in Enterprise Application Integration (EAI) servers and workflow management systems (WfMS). The major problem is significant workload changes during runtime. The performance of integration processes strongly depends on these dynamic workload characteristics, and hence workload-based optimization is important. However, existing approaches to workflow optimization address only rule-based optimization and disregard changing workload characteristics. To overcome the problem of inefficient process execution in the presence of workload shifts, we present an approach for the workload-based optimization of instance-based integration processes and show that significant execution time reductions are possible.
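As a loose sketch of what workload-based (rather than purely rule-based) optimization means here, the fragment below re-runs a cost-based plan choice only when the observed message rate drifts away from the rate the current plan was optimized for. The plan names, cost formulas, and drift threshold are invented for illustration and do not come from the paper.

```python
# Hypothetical plan alternatives for one integration process, each with a
# simple cost model parameterized by the observed message rate (msgs/s).
PLAN_COSTS = {
    "pipeline_per_message": lambda rate: 1.0 * rate,          # cheap at low rates
    "batch_then_process":   lambda rate: 20.0 + 0.3 * rate,   # pays off at high rates
}


def choose_plan(rate):
    """Cost-based selection: pick the plan with the lowest estimated cost
    for the currently observed workload."""
    return min(PLAN_COSTS, key=lambda p: PLAN_COSTS[p](rate))


def run(workload_rates, shift_threshold=0.3):
    """Re-optimize only when the observed rate drifts far enough from the
    rate the current plan was optimized for."""
    optimized_for = workload_rates[0]
    plan = choose_plan(optimized_for)
    for rate in workload_rates:
        drift = abs(rate - optimized_for) / max(optimized_for, 1e-9)
        if drift > shift_threshold:
            plan = choose_plan(rate)      # workload shift detected: re-optimize
            optimized_for = rate
        print(f"rate={rate:6.1f}  plan={plan}")


if __name__ == "__main__":
    # Illustrative workload: a low message rate that ramps up and drops again.
    run([5, 6, 7, 40, 45, 50, 8, 6])
```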
3. On-demand re-optimization of integration flows
Böhm, Matthias; Habich, Dirk; Lehner, Wolfgang. 04 July 2023.
Integration flows are used to propagate data between heterogeneous operational systems or to consolidate data into data warehouse infrastructures. In order to meet the increasing need for up-to-date information, many messages are exchanged over time. The efficiency of those integration flows is therefore crucial to handle the high load of messages and to reduce message latency. State-of-the-art strategies to address this performance bottleneck are based on incremental statistics maintenance and periodic cost-based re-optimization. This also achieves adaptation to unknown statistics and changing workload characteristics, which is important since integration flows are deployed for long time horizons. However, the major drawbacks of periodic re-optimization are many unnecessary re-optimization steps and missed optimization opportunities due to adaptation delays. In this paper, we therefore propose the novel concept of on-demand re-optimization. We exploit optimality conditions from the optimizer in order to (1) monitor optimality of the current plan, and (2) trigger directed re-optimization only if necessary. Furthermore, we introduce the PlanOptimalityTree as a compact representation of optimality conditions that enables efficient monitoring and exploitation of these conditions. As a result, and in contrast to existing work, re-optimization is triggered immediately but only if a new plan is certain to be found. Our experiments show that we achieve near-optimal re-optimization overhead and fast workload adaptation.
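The following sketch illustrates the on-demand idea in miniature: the current plan carries explicit conditions under which it remains optimal, incoming statistics are checked against them, and re-optimization is triggered only on a violation. The condition representation here is a flat list rather than the paper's PlanOptimalityTree, and the placeholder optimizer, plan strings, and selectivity names are assumptions made only for this example.

```python
from dataclasses import dataclass


@dataclass
class OptimalityCondition:
    """One condition under which the current plan stays optimal,
    e.g. 'selectivity_A must remain within [lower, upper]'."""
    statistic: str
    lower: float
    upper: float

    def holds(self, stats):
        return self.lower <= stats[self.statistic] <= self.upper


class OnDemandReoptimizer:
    """Monitor optimality conditions of the current plan and trigger
    re-optimization only when one of them is violated."""

    def __init__(self, plan, conditions):
        self.plan = plan
        self.conditions = conditions
        self.reoptimizations = 0

    def observe(self, stats):
        if all(c.holds(stats) for c in self.conditions):
            return self.plan                      # plan still covered by its conditions: do nothing
        self.reoptimizations += 1
        self.plan, self.conditions = self.reoptimize(stats)
        return self.plan

    def reoptimize(self, stats):
        # Placeholder optimizer: orders the more selective operator first and
        # derives the validity range under which that ordering stays optimal.
        if stats["selectivity_A"] < stats["selectivity_B"]:
            return "join(A,B)", [OptimalityCondition("selectivity_A", 0.0, stats["selectivity_B"])]
        return "join(B,A)", [OptimalityCondition("selectivity_B", 0.0, stats["selectivity_A"])]


if __name__ == "__main__":
    r = OnDemandReoptimizer(
        plan="join(A,B)",
        conditions=[OptimalityCondition("selectivity_A", 0.0, 0.4)],
    )
    for stats in [{"selectivity_A": 0.1, "selectivity_B": 0.5},
                  {"selectivity_A": 0.3, "selectivity_B": 0.5},
                  {"selectivity_A": 0.7, "selectivity_B": 0.5}]:
        print(r.observe(stats), "re-optimizations so far:", r.reoptimizations)
```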