
Flash Caching for Cloud Computing Systems

As the size of cloud systems and the number of hosted virtual machines (VMs) rapidly grow, the scalability of shared VM storage systems becomes a serious issue. Client-side flash-based caching has the potential to improve the performance of cloud VM storage by employing flash storage available on the VM hosts to exploit the locality inherent in VM IOs. However, there are several challenges to the effective use of flash caching in cloud systems. First, cache configurations such as size, write policy, metadata persistence, and RAID level have significant impacts on flash caching performance. Second, the typical capacity of flash devices is limited compared to the dataset size of consolidated VMs. Finally, flash devices wear out and face serious endurance issues, which are aggravated by their use for caching.
This dissertation presents research addressing these problems of cloud flash caching in three aspects. First, it presents a thorough study of different cache configurations, including a new cache-optimized RAID configuration, using a large set of long-term traces collected from real-world public and private clouds. Second, it studies an on-demand flash cache management solution for meeting VM cache demands while minimizing device wear-out. The solution uses a new cache demand model, the Reuse Working Set (RWS), to capture the data with good temporal locality, and uses the RWS size (RWSS) to model a workload's cache demand. Finally, to handle situations where a host's cache capacity is insufficient for its VMs' demands, it employs dynamic cache migration to balance cache load across hosts by live migrating the cached data along with the VMs.
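For illustration, here is a minimal sketch of how a workload's RWSS might be estimated from an IO trace. It assumes the RWS is the set of blocks accessed at least N times within a recent time window; the trace format, function name, and parameters are hypothetical and not the dissertation's exact definitions.

    from collections import Counter

    def reuse_working_set_size(trace, window_end, window_len, n_reuse=2):
        # Hypothetical RWSS estimator: counts the distinct blocks accessed
        # at least `n_reuse` times within (window_end - window_len, window_end].
        # Blocks touched only once are excluded, as they exhibit no reuse.
        counts = Counter(
            block for ts, block in trace
            if window_end - window_len < ts <= window_end
        )
        return sum(1 for c in counts.values() if c >= n_reuse)

    # Toy trace of (timestamp, block_address) IO records (hypothetical format).
    trace = [(1, 0xA), (2, 0xB), (3, 0xA), (4, 0xC), (5, 0xB), (6, 0xA)]
    print(reuse_working_set_size(trace, window_end=6, window_len=6))
    # -> 2: blocks 0xA and 0xB are reused; 0xC is accessed only once

The intuition behind sizing the cache to the RWS rather than the full working set is that singly-accessed blocks contribute no cache hits yet still consume capacity and cause device writes, which is consistent with the reported reductions in cache usage and wear.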
The results show that the cache-optimized RAID improves performance by 137% over traditional RAID without sacrificing reliability. The RWSS-based on-demand cache allocation reduces a workload's cache usage by 78% and lowers the amount of writes sent to the cache device by 40%, compared to traditional working-set-based cache allocation. Combining on-demand cache allocation with dynamic cache migration for 12 concurrent VMs yields a 28% higher hit ratio and a 28% lower 90th-percentile IO latency, compared to the case without cache allocation.

Identifier: oai:union.ndltd.org:fiu.edu/oai:digitalcommons.fiu.edu:etd-3631
Date: 18 March 2016
Creators: Arteaga Clavijo, Dulcardo Ariel
Publisher: FIU Digital Commons
Source Sets: Florida International University
Detected Language: English
Type: text
Format: application/pdf
Source: FIU Electronic Theses and Dissertations
