The skyrocketing electricity consumption of data centers around the globe has become a serious issue for cloud computing and the IT industry as a whole. Demand for data centers is rapidly increasing due to the widespread use of cloud services, and their growth also produces large carbon emissions that contribute to the global greenhouse effect. The US Environmental Protection Agency has reported that data centers account for a substantial portion of energy consumption in the US and worldwide. Part of this consumption comes from idle servers or from servers running at higher frequencies than their workloads require. Since Dynamic Voltage and Frequency Scaling (DVFS) is supported by many modern CPUs, strategically reducing the CPU frequency without degrading Quality of Service (QoS) is desirable. Our goal in this thesis is to calculate and apply the best CPU frequency for each running task, given the CPU configuration and the execution deadline, in combination with two commonly used scheduling approaches: the round robin and first fit algorithms. The effectiveness of our algorithms is evaluated both in a CloudSim/CloudReport simulation environment and on a real hypervisor-based computer system instrumented with a power meter. The open source CloudReport tool, built on the CloudSim simulator, is used to integrate our DVFS algorithm with the two scheduling algorithms and to illustrate the power savings achieved in different scenarios. In addition, electricity consumption is measured and compared using a Watts Up power meter.
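At the core of this approach is a deadline-driven frequency calculation: among the frequency steps the CPU configuration supports, choose the lowest one that still lets the task finish before its execution deadline. The Python sketch below illustrates this idea under the assumption that a task's remaining work is expressed in CPU cycles and the CPU exposes a discrete set of frequency steps; the function and variable names are illustrative and not taken from the thesis itself.

# Minimal sketch of deadline-driven DVFS frequency selection.
# Assumptions (illustrative, not the thesis's exact formulation):
#   - remaining work for a task is measured in CPU cycles,
#   - the CPU configuration is a discrete set of supported frequencies (Hz),
#   - QoS is preserved as long as the task finishes by its deadline.

def pick_lowest_frequency(remaining_cycles: float,
                          time_to_deadline_s: float,
                          supported_freqs_hz: list[float]) -> float:
    """Return the lowest supported frequency that still meets the deadline.

    If no supported frequency can meet the deadline, fall back to the
    maximum frequency so the task finishes as early as possible.
    """
    if time_to_deadline_s <= 0:
        return max(supported_freqs_hz)

    # Smallest frequency f such that remaining_cycles / f <= time_to_deadline_s.
    required_hz = remaining_cycles / time_to_deadline_s

    feasible = [f for f in supported_freqs_hz if f >= required_hz]
    return min(feasible) if feasible else max(supported_freqs_hz)


if __name__ == "__main__":
    # Example: 2.4e9 cycles of remaining work, 2 s until the deadline,
    # CPU supports 0.8, 1.2, 1.6, 2.0 and 2.4 GHz steps.
    freqs = [0.8e9, 1.2e9, 1.6e9, 2.0e9, 2.4e9]
    print(pick_lowest_frequency(2.4e9, 2.0, freqs))  # -> 1200000000.0 (1.2 GHz)

In this example the task needs at least 1.2 GHz to meet its deadline, so the sketch selects the 1.2 GHz step instead of running at the maximum 2.4 GHz, which is where the power saving comes from.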
Identifier | oai:union.ndltd.org:siu.edu/oai:opensiuc.lib.siu.edu:theses-2396
Date | 01 May 2014
Creators | Aldhahri, Eiman Ali
Publisher | OpenSIUC
Source Sets | Southern Illinois University Carbondale
Language | English
Type | text
Format | application/pdf
Source | Theses