1

Autonomic Cloud Resource Management

Tunc, Cihan January 2015 (has links)
The power consumption of data centers and cloud systems nearly tripled between 2007 and 2012. Traditional resource allocation methods are typically designed with high performance as the primary objective and provision for peak resource requirements. However, it has been shown that server utilization is only between 12% and 18%, while power consumption remains close to that at peak load. Hence, there is a pressing need for more sophisticated resource management approaches. State-of-the-art dynamic resource management schemes typically rely on only a single resource, such as the number of cores, core speed, memory, disk, or network. There is a lack of fundamental research on methods for the dynamic management of multiple resources and properties with the objective of allocating just enough resources to each workload to meet its quality-of-service requirements while optimizing power consumption. The main focus of this dissertation is to manage power and performance simultaneously for large cloud systems. The objective of this research is to develop a performance and power management framework and to investigate a general methodology for integrated autonomic cloud management. In this dissertation, we developed an autonomic management framework based on a novel data structure, AppFlow, used for modeling current and near-term future cloud application behavior. We developed the following capabilities for the performance and power management of cloud computing systems: 1) online modeling and characterization of cloud application behavior and resource requirements; 2) prediction of application behavior to proactively optimize its operation at runtime; 3) a holistic optimization methodology for performance and power using the number of cores, CPU frequency, and the amount of memory; and 4) autonomic cloud management that supports dynamic changes to VM configurations at runtime to simultaneously optimize multiple objectives, including performance, power, and availability. We validated our approach using the RUBiS benchmark (emulating eBay) on an IBM HS22 blade server. Our experimental results showed that our approach can reduce power consumption by up to 87% compared to a static resource allocation strategy, 72% compared to an adaptive frequency scaling strategy, and 66% compared to a multi-resource management strategy.
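To make the multi-resource idea concrete, here is a minimal sketch of choosing a (cores, CPU frequency, memory) configuration that meets a response-time target at the lowest estimated power. The toy power and performance models, candidate values, and SLA threshold are assumptions for illustration only, not the dissertation's AppFlow-based models.

```python
# Illustrative sketch: pick the lowest-power (cores, frequency, memory)
# configuration that still meets a response-time target. The models below
# are hypothetical placeholders, not the dissertation's actual models.
from itertools import product

CORES = [1, 2, 4, 8]            # candidate core counts (assumed)
FREQS_GHZ = [1.2, 1.8, 2.4]     # candidate CPU frequencies (assumed)
MEM_GB = [2, 4, 8]              # candidate memory allocations (assumed)

def estimated_power(cores, freq, mem):
    """Toy power model: grows with cores, frequency, and memory."""
    return 20 + 8 * cores * freq + 0.5 * mem

def estimated_response_time(cores, freq, mem, load_rps):
    """Toy performance model: more resources -> lower response time (ms)."""
    return load_rps / (cores * freq * 10) + 50 / mem

def pick_configuration(load_rps, sla_ms=200):
    """Return the lowest-power configuration that meets the SLA, if any."""
    feasible = [
        (estimated_power(c, f, m), (c, f, m))
        for c, f, m in product(CORES, FREQS_GHZ, MEM_GB)
        if estimated_response_time(c, f, m, load_rps) <= sla_ms
    ]
    # Fall back to the largest configuration when nothing meets the SLA.
    return min(feasible)[1] if feasible else (max(CORES), max(FREQS_GHZ), max(MEM_GB))

if __name__ == "__main__":
    # Re-evaluate as the predicted load changes, mimicking runtime reconfiguration.
    for predicted_load in [50, 500, 2000]:
        print(predicted_load, pick_configuration(predicted_load))
```

In this toy setting, rising predicted load pushes the selection toward larger configurations, while light load lets the controller fall back to the cheapest feasible one, which is the intuition behind allocating "just enough" resources.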
2

Resource Management in Large-scale Systems

Paya, Ashkan 01 January 2015 (has links)
The focus of this thesis is resource management in large-scale systems. Our primary concerns are energy management and practical principles for self-organization and self-management. The main contributions of our work are: 1. Models. We proposed several models for different aspects of resource management, e.g., energy-aware load balancing and application scaling for the cloud ecosystem, a hierarchical architecture model for self-organizing and self-manageable systems, and a new cloud delivery model based on an auction-driven self-organization approach. 2. Algorithms. We also proposed several algorithms for the models described above, such as coalition formation, combinatorial auctions, and a clustering algorithm for scale-free organizations of scale-free networks. 3. Evaluation. Finally, we evaluated the proposed models and algorithms in order to verify them. All the simulations reported in this thesis were carried out on different instances and services of Amazon Web Services (AWS). Each of these contributions is discussed in detail in the following chapters.
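As one concrete illustration of energy-aware load balancing in the spirit of the models above, the following minimal sketch packs workloads onto as few servers as possible so that idle servers can be put to sleep. The capacity units, utilization threshold, and first-fit-decreasing heuristic are assumptions for illustration, not the thesis's actual algorithms.

```python
# Minimal sketch (not the thesis's algorithm): energy-aware consolidation that
# packs workloads onto as few servers as possible, keeping each server within
# an assumed "optimal" utilization band so unused servers can sleep.
SERVER_CAPACITY = 100.0      # arbitrary capacity units per server (assumed)
UPPER_THRESHOLD = 0.8        # assumed upper bound of the desired operating regime

def pack_workloads(workloads, n_servers):
    """Assign workloads first-fit decreasing; return per-server load levels."""
    loads = [0.0] * n_servers
    for demand in sorted(workloads, reverse=True):
        for i, used in enumerate(loads):
            if used + demand <= SERVER_CAPACITY * UPPER_THRESHOLD:
                loads[i] += demand
                break
        else:
            raise RuntimeError("insufficient capacity; scale out")
    return loads

if __name__ == "__main__":
    loads = pack_workloads([30, 25, 40, 10, 5, 60], n_servers=4)
    active = sum(1 for load in loads if load > 0)
    print(f"active servers: {active}, sleeping: {len(loads) - active}")
```

The design choice here is deliberately simple: consolidating load onto fewer active servers trades some headroom for the ability to switch the remainder into a low-power state, which is the core idea behind energy-aware scaling policies.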
