  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

SQL Query Disassembler: An Approach to Managing the Execution of Large SQL Queries

Meng, Yabin 25 September 2007 (has links)
In this thesis, we present an approach to managing the execution of large queries that involves the decomposition of large queries into an equivalent set of smaller queries and then scheduling the smaller queries so that the work is accomplished with less impact on other queries. We describe a prototype implementation of our approach for IBM DB2™ and present a set of experiments to evaluate the effectiveness of the approach. / Thesis (Master, Computing) -- Queen's University, 2007-09-17 22:05:05.304
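The decomposition idea can be sketched in a few lines. The chunking by key range and the generated SQL text below are illustrative assumptions, not the thesis's actual DB2 prototype:

```python
def disassemble(table, key, lo, hi, chunk):
    """Split one large range scan into an equivalent set of smaller
    queries; the chunk ranges partition [lo, hi] exactly, so the
    combined result set is unchanged."""
    queries = []
    start = lo
    while start <= hi:
        end = min(start + chunk - 1, hi)
        queries.append(
            f"SELECT * FROM {table} WHERE {key} BETWEEN {start} AND {end}"
        )
        start = end + 1
    return queries
```

A scheduler can then interleave these sub-queries with other work, pausing between chunks when higher-priority queries need resources.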
2

AUTONOMIC WORKLOAD MANAGEMENT FOR DATABASE MANAGEMENT SYSTEMS

Zhang, Mingyi 07 May 2014 (has links)
In today’s database server environments, multiple types of workloads, such as on-line transaction processing, business intelligence and administrative utilities, can be present in a system simultaneously. Workloads may have different levels of business importance and distinct performance objectives. When the workloads execute concurrently on a database server, interference may occur and result in the workloads failing to meet the performance objectives and the database server suffering severe performance degradation. To evaluate and classify the existing workload management systems and techniques, we develop a taxonomy of workload management techniques. The taxonomy categorizes workload management techniques into multiple classes and illustrates a workload management process. We propose a general framework for autonomic workload management for database management systems (DBMSs) to dynamically monitor and control the flow of the workloads and help DBMSs achieve the performance objectives without human intervention. Our framework consists of multiple workload management techniques and performance monitor functions, and implements the monitor–analyze–plan–execute loop suggested in autonomic computing principles. When a performance issue arises, our framework provides the ability to dynamically detect the issue and to initiate and coordinate the workload management techniques. To detect severe performance degradation in database systems, we propose the use of indicators. We demonstrate a learning-based approach to identify a set of internal DBMS monitor metrics that best indicate the problem. We illustrate and validate our framework and approaches using a prototype system implemented on top of IBM DB2 Workload Manager. Our prototype system leverages the existing workload management facilities and implements a set of corresponding controllers to adapt to dynamic and mixed workloads while protecting DBMSs against severe performance degradation. 
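One pass of the monitor–analyze–plan–execute loop can be sketched as an admission controller. The metric name, threshold, and halve/increment policy are illustrative assumptions, not the framework's actual DB2 controllers:

```python
class AdmissionController:
    """Minimal MAPE-loop sketch: throttle admitted query concurrency
    when a DBMS health metric signals degradation, restore it slowly
    otherwise. Metric and constants are illustrative assumptions."""

    def __init__(self, threshold=0.8, max_concurrency=32):
        self.threshold = threshold
        self.concurrency = max_concurrency

    def analyze(self, metrics):
        # e.g. fraction of buffer-pool reads that miss; high = degraded
        return metrics["bufferpool_miss_ratio"] > self.threshold

    def plan(self, degraded):
        # halve admitted concurrency under stress, creep back up otherwise
        return max(1, self.concurrency // 2) if degraded else self.concurrency + 1

    def execute(self, new_limit):
        self.concurrency = new_limit

    def step(self, metrics):
        # one monitor -> analyze -> plan -> execute iteration
        self.execute(self.plan(self.analyze(metrics)))
        return self.concurrency
```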
/ Thesis (Ph.D, Computing) -- Queen's University, 2014-05-07 13:35:42.858
3

Workload Management for Data-Intensive Services

Lim, Harold Vinson Chao January 2013 (has links)
<p>Data-intensive web services are typically composed of three tiers: i) a display tier that interacts with users and serves rich content to them, ii) a storage tier that stores the user-generated or machine-generated data used to create this content, and iii) an analytics tier that runs data analysis tasks in order to create and optimize new content. Each tier has different workloads and requirements that result in a diverse set of systems being used in modern data-intensive web services.</p><p>Servers are provisioned dynamically in the display tier to ensure that interactive client requests are served as per the latency and throughput requirements. The challenge is not only deciding automatically how many servers to provision but also when to provision them, while ensuring stable system performance and high resource utilization. To address these challenges, we have developed a new control policy for provisioning resources dynamically in coarse-grained units (e.g., adding or removing servers or virtual machines in cloud platforms). Our new policy, called proportional thresholding, converts a user-specified performance target value into a target range in order to account for the relative effect of provisioning a server on the overall workload performance.</p><p>The storage tier is similar to the display tier in some respects, but poses the additional challenge of needing redistribution of stored data when new storage nodes are added or removed. Thus, there will be some delay before the effects of changing a resource allocation will appear. Moreover, redistributing data can cause some interference to the current workload because it uses resources that can otherwise be used for processing requests. We have developed a system, called Elastore, that addresses the new challenges found in the storage tier. 
Elastore not only coordinates resource allocation and data redistribution to preserve stability during dynamic resource provisioning, but it also finds the best tradeoff between workload interference and data redistribution time.</p><p>The workload in the analytics tier consists of data-parallel workflows that can either be run in a batch fashion or continuously as new data becomes available. Each workflow is composed of smaller units that have producer-consumer relationships based on data. These workflows are often generated from declarative specifications in languages like SQL, so there is a need for a cost-based optimizer that can generate an efficient execution plan for a given workflow. There are a number of challenges when building a cost-based optimizer for data-parallel workflows, which include characterizing the large execution plan space, developing cost models to estimate the execution costs, and efficiently searching for the best execution plan. We have built two cost-based optimizers: Stubby for batch data-parallel workflows running on MapReduce systems, and Cyclops for continuous data-parallel workflows where the choice of execution system is made part of the execution plan space.</p><p>We have conducted a comprehensive evaluation that shows the effectiveness of each tier's automated workload management solution.</p> / Dissertation
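The proportional-thresholding policy described for the display tier can be sketched as follows; the band-width constant and the use of utilization as the performance metric are illustrative assumptions, not the dissertation's exact formulation:

```python
def target_range(target, n_servers, width_at_one=0.2):
    """Widen the acceptable band around `target` in proportion to the
    effect one server has on overall performance: with more servers,
    each one matters less, so the band tightens."""
    delta = width_at_one / max(n_servers, 1)
    return (target - delta, target + delta)

def decide(measured, target, n_servers):
    """Coarse-grained provisioning decision against the target range."""
    lo, hi = target_range(target, n_servers)
    if measured > hi:
        return n_servers + 1   # under-provisioned: add a server
    if measured < lo and n_servers > 1:
        return n_servers - 1   # over-provisioned: remove one
    return n_servers           # within band: stay put
```

Using a range rather than a single target value avoids oscillation: with a point target, almost every measurement triggers a (coarse-grained, expensive) provisioning action.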
4

Self-Learning Prediction System for Optimisation of Workload Management in a Mainframe Operating System

Bensch, Michael, Brugger, Dominik, Rosenstiel, Wolfgang, Bogdan, Martin, Spruth, Wilhelm 06 November 2018 (has links)
We present a framework for extraction and prediction of online workload data from a workload manager of a mainframe operating system. To boost overall system performance, the prediction will be incorporated into the workload manager to take preventive action before a bottleneck develops. Model and feature selection automatically create a prediction model based on given training data, thereby keeping the system flexible. We tailor data extraction, preprocessing and training to this specific task, keeping in mind the nonstationarity of business processes. Using error measures suited to our task, we show that our approach is promising. To conclude, we discuss our first results and give an outlook on future work.
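A minimal stand-in for such a workload predictor is one-step-ahead exponential smoothing; the real framework selects models and features automatically, so this fixed model and its smoothing factor are assumptions for illustration only:

```python
def forecast(series, alpha=0.5):
    """One-step-ahead exponential smoothing of a workload series:
    the next value is predicted as a decaying weighted average of
    past observations, which tracks slow drift in nonstationary load."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level
```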
5

Dynamic resource balancing in virtualization clusters

Grafnetter, Michael January 2011 (has links)
The purpose of this thesis was to analyze the problem of resource load balancing in virtualization clusters. Another aim was to implement a pilot version of a resource load balancer for a VMware vSphere Standard-based virtualization cluster. The thesis also inspected available commercial and open source resource load balancers and examined their usability and effectiveness. While designing the custom solution, a modification of the greedy algorithm was chosen to determine which virtual machines should be migrated and to select their target hosts. Furthermore, experiments were conducted to determine some parameters for the algorithm. Finally, it was experimentally verified that the implemented solution can effectively balance virtualization server workloads by live-migrating the virtual machines running on these hosts.
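One step of a greedy balancer of this kind can be sketched as follows; the load model (a single scalar per VM) and the migration condition are illustrative assumptions, not the thesis's modified algorithm:

```python
def rebalance_step(hosts):
    """Greedy step: move the busiest host's largest VM to the
    least-loaded host, but only if the move narrows the load gap
    (it does exactly when the gap exceeds the VM's load).
    `hosts` maps host name -> list of per-VM loads."""
    load = {h: sum(vms) for h, vms in hosts.items()}
    src = max(load, key=load.get)
    dst = min(load, key=load.get)
    if src == dst or not hosts[src]:
        return None
    vm = max(hosts[src])
    if load[src] - load[dst] > vm:
        hosts[src].remove(vm)
        hosts[dst].append(vm)
        return (vm, src, dst)
    return None          # no migration improves the balance
```

Repeating the step until it returns `None` yields a locally balanced placement; a production balancer would also weigh live-migration cost, which this sketch ignores.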
6

Workload management in data warehouses using Teradata Active System Management architecture

Taimr, Jan January 2011 (has links)
This work focuses on the workload management of data warehouses built on Teradata technologies using the Active System Management architecture. Its objectives are to characterize and analyze the Active System Management architecture and the types of rules used in the workload management of Teradata data warehouses. These objectives were achieved through a survey of available resources and their subsequent analysis. Information obtained from the analysis was empirically verified on a particular instance of a data warehouse, and its synthesis is presented in this work. The contributions of this work lie in documenting a technology that is currently not well known or widespread in the Czech Republic, in identifying risks and drawbacks, and in presenting recommendations for workload management using Active System Management based on empirical tests. A repeatable, induction-based implementation procedure is proposed, and the maturity of the architecture for a production environment is evaluated. The work is divided into chapters dedicated to Teradata database technologies, workload management, Active System Management, and the implementation procedure. The first three chapters are theoretical, though they also contain practical information related to the theory; the final chapter is practical and presents a repeatable Active System Management implementation procedure.
7

Design and Implementation of a High Performance Network Processor with Dynamic Workload Management

Duggisetty, Padmaja 23 November 2015 (has links)
The Internet plays a crucial part in today's world. Be it personal communication, business transactions or social networking, the Internet is used everywhere, and hence the speed of the communication infrastructure plays an important role. As the number of users increases, network usage increases: network data rates ramped up from a few Mb/s to Gb/s in less than a decade. The network infrastructure therefore needed a major upgrade to support such high data rates. Technological advancements enabled communication links like optical fibres to support these high bandwidths, but the processing speed at the nodes remained constant. This created a need for specialised packet-processing devices to match the increasing line rates, which led to the emergence of network processors. Network processors are both programmable and flexible. To support the growing number of Internet applications, single-core network processors evolved into multi-core network processors with many cores on a single chip. This improved packet processing speeds and hence the performance of a network node. Multi-core network processors cater to the needs of high-bandwidth networks by exploiting the inherent packet-level parallelism in a network, but they still face intrinsic challenges such as load balancing. To maximise the throughput of these multi-core network processors, it is important to distribute the traffic evenly across all the cores. This thesis describes a multi-core network processor with dynamic workload management. A multi-core network processor that performs multiple applications is designed to act as a test bed for an effective workload management algorithm, which distributes the workload evenly across all the available cores and hence maximises the performance of the network processor.
Runtime statistics of all the cores are collected and updated at run time to aid in deciding which application each core should run, enabling an even distribution of the workload among the cores. When overloading of a core is detected, the applications assigned to the cores are reassigned. For testing purposes, we built a flexible and reusable platform on the NetFPGA 10G board, which uses an FPGA-based approach to prototyping network devices. The performance of the designed workload management algorithm is tested by measuring the throughput of the system for varying workloads.
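A statistics-driven reassignment can be sketched as a greedy placement over current core loads; the per-application cost model and the data structures are illustrative assumptions, not the thesis's hardware algorithm:

```python
def assign_apps(apps, core_load):
    """Greedily (re)assign applications, heaviest first, to whichever
    core is currently least loaded. `apps` maps application name to an
    estimated processing cost; `core_load` gives each core's current
    load from runtime statistics."""
    placement = {}
    load = list(core_load)
    for app, cost in sorted(apps.items(), key=lambda kv: -kv[1]):
        core = load.index(min(load))   # least-loaded core right now
        placement[app] = core
        load[core] += cost
    return placement
```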
8

System Identification in Automatic Database Memory Tuning

Burrell, Tiffany 25 March 2010 (has links)
Databases are very complex systems that require database system administrators to perform system tuning in order to achieve optimal performance. Memory tuning is vital to the performance of a database system because when the database workload exceeds its memory capacity, the results of the queries running on the system are delayed and can cause substantial user dissatisfaction. To address this problem, this thesis presents a platform modeled after a closed feedback control loop to control the level of multi-query processing. This platform provides two key assets. First, the system identification is acquired, which is one of the two crucial steps involved in developing a closed feedback loop. Second, the platform provides a means to experimentally study the database tuning problem and verify the effectiveness of research ideas related to database performance.
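One iteration of such a feedback loop might look like the proportional controller below; the gain, the relative-error formulation, and the choice of multiprogramming level as the actuator are illustrative assumptions, not the thesis's identified model:

```python
def control_mpl(mpl, response_time, target, gain=0.5):
    """One feedback iteration: if measured query response time exceeds
    the target, lower the multi-query processing level in proportion to
    the relative error; if response time is below target, raise it."""
    error = (target - response_time) / target
    new_mpl = mpl + gain * mpl * error
    return max(1, int(new_mpl))
```

System identification is what turns the guessed `gain` here into a measured one: by observing how response time reacts to controlled changes in the processing level, the loop can be tuned rather than hand-picked.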
9

Thermodynamic and Workload Optimization of Data Center Cooling Infrastructures

Gupta, Rohit January 2021 (has links)
The ever-growing demand for cyber-physical infrastructures has significantly affected worldwide energy consumption and environmental sustainability over the past two decades. Although the average heat load of the computing infrastructures has increased, the supportive capacity of cooling infrastructures requires further improvement. Consequently, energy-efficient cooling architectures, real-time load management, and waste heat utilization strategies have gained attention in the data center (DC) industry. In this dissertation, essential aspects of cooling system modularization, workload management, and waste-heat utilization were addressed. At first, benefits of several legacy and modular DCs were assessed from the viewpoint of the first and second laws of thermodynamics. A computational fluid dynamics simulation-informed thermodynamic energy-exergy formulation captured equipment-level inefficiencies for various cooling architectures and scenarios. Furthermore, underlying reasons and possible strategies to reduce dominant exergy loss components were suggested. Subsequently, strategies to manage cooling parameters and IT workload were developed for the DCs with rack-based and row-based cooling systems. The goal of these management schemes was to fulfill either single or multiple objectives such as energy, exergy, and computing efficiencies. Thermal models coupled to optimization problems revealed the non-trivial tradeoffs across various objective functions and operation parameters. Furthermore, the scalability of the proposed approach for a larger DC was demonstrated. Finally, a waste heat management strategy was developed for new-age infrastructures containing both air- and liquid-cooled servers, one of the critical issues in the DC industry. Exhaust hot water from liquid-cooled servers was used to drive an adsorption chiller, which in turn produced chilled water required for the air-handler units of the air-cooled system. 
This strategy significantly reduced the energy consumption of existing compression chillers. Furthermore, economic and environmental assessments were performed to discuss the feasibility of this solution for the DC community. The work also investigated the potential tradeoffs between waste heat recovery and computing efficiencies. / Thesis / Doctor of Philosophy (PhD)
10

A Bandwidth Market in an IP Network

Lusilao-Zodi, Guy-Alain 03 1900 (has links)
Thesis (MSc (Mathematical Sciences. Computer Science))--University of Stellenbosch, 2008. / Consider a path-oriented telecommunications network where calls arrive to each route in a Poisson process. Each call brings on average a fixed number of packets that are offered to the route. The packet inter-arrival times and the packet lengths are exponentially distributed. Each route can queue a finite number of packets while one packet is being transmitted. Each accepted packet/call generates an amount of revenue for the route manager. At specified time instants a route manager can acquire additional capacity ("interface capacity") in order to carry more calls, and/or additional buffer space in order to carry more packets, in which case the manager earns more revenue; alternatively, a route manager can earn additional revenue by selling surplus interface capacity and/or surplus buffer space to other route managers that (possibly temporarily) value it more highly. We present a method for efficiently computing the buying and selling prices of buffer space. Moreover, we propose a bandwidth reallocation scheme capable of improving the network's overall rate of earning revenue at both the call level and the packet level. Our reallocation scheme combines the Erlang price [4] and our proposed buffer space price (M/M/1/K prices) to reallocate interface capacity and buffer space among routes. The proposed scheme uses local rules to decide whether or not to adjust the interface capacity and/or the buffer space. Simulation results show that the reallocation scheme achieves good performance when applied to a fictitious network of 30 nodes and 46 links based on the geography of Europe.
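The M/M/1/K pricing idea rests on the standard blocking probability of a finite queue: an extra buffer slot is worth roughly the revenue recovered from packets it stops from being blocked. The blocking formula below is standard queueing theory; the pricing rule built on it is a hedged sketch, not the thesis's exact method:

```python
def blocking_prob(rho, k):
    """M/M/1/K probability that an arriving packet finds the buffer
    full: (1-rho) * rho^k / (1 - rho^(k+1)), with the rho = 1 limit
    equal to 1/(k+1)."""
    if rho == 1.0:
        return 1.0 / (k + 1)
    return (1 - rho) * rho**k / (1 - rho**(k + 1))

def buffer_slot_value(lam, mu, k, revenue_per_packet):
    """Illustrative buying price for one extra buffer slot: the extra
    revenue earned because fewer arriving packets are blocked when
    the queue grows from k to k+1 places."""
    rho = lam / mu
    gain = blocking_prob(rho, k) - blocking_prob(rho, k + 1)
    return lam * revenue_per_packet * gain
```

A route manager would buy a slot when another route offers it below this value, and sell when offered more, which is exactly the local-rule trading the abstract describes.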
