Cloud computing has revolutionized the computing landscape by providing on-demand, pay-as-you-go access to elastically scalable resources. Many applications are now being migrated from on-premises data centers to public clouds; yet, the transition to the cloud is not always straightforward and smooth. An application that performed well in an on-premises data center may not perform identically in a public cloud, because many factors such as virtualization can affect the application's performance. Collecting substantial performance data through experimental study can reveal the cloud's complexity, particularly as it relates to performance. However, conducting large-scale system experiments is especially challenging because of the practical difficulties that arise during experimental deployment, configuration, execution, and data processing. Despite these complexities, we argue that a promising approach to addressing them is to leverage automation to enable the exhaustive measurement of large-scale experiments.
Automation provides numerous benefits: it removes the error-prone and cumbersome involvement of human testers, reduces the burden of configuring and running large-scale experiments for distributed applications, and accelerates reliable application testing. In our approach, we have automated three key activities of the experiment measurement process: create, manage, and analyze. In create, we prepare the platform and deploy and configure applications. In manage, we initialize the application components (in a reproducible and verifiable order), execute workloads, collect resource monitoring and other performance data, and parse and upload the results to the data warehouse. In analyze, we process the collected data using various statistical and visualization techniques to understand and explain performance phenomena. In our approach, the user supplies only an experiment configuration file and receives the results at the end; the framework handles everything in between. We enable this automation through code generation. From an architectural viewpoint, our code generator adopts the compiler approach of multiple, serial transformative stages; the hallmarks of this approach are that each stage operates on an XML document serving as the intermediate representation, and XSLT performs the code generation. Our automated approach to large-scale experiments has enabled cloud experiments to scale well beyond the limits of manual experimentation, and it has allowed us to identify non-trivial performance phenomena that would not have been observable otherwise.
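To make the code-generation idea concrete, the sketch below shows a single, minimal XSLT 1.0 stage that reads a hypothetical XML experiment descriptor and emits a plain-text deployment script. The element and attribute names (experiment, node, workload, host, role) and the emitted commands (provision_node, run_workload) are assumptions chosen for illustration, not the framework's actual schema or tooling; the real generator chains several such transformative stages over the intermediate XML document.

```xml
<!-- Minimal sketch only: one XSLT 1.0 stage that turns a hypothetical
     experiment descriptor into a shell-style deployment script.
     All element/attribute names and emitted commands are illustrative
     assumptions, not the framework's actual schema. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>

  <!-- Emit a script header, one command per node, then the workload run -->
  <xsl:template match="/experiment">
    <xsl:text>#!/bin/sh&#10;</xsl:text>
    <xsl:text># Generated deployment script&#10;</xsl:text>
    <xsl:apply-templates select="node"/>
    <xsl:text>run_workload </xsl:text>
    <xsl:value-of select="workload/@name"/>
    <xsl:text>&#10;</xsl:text>
  </xsl:template>

  <!-- One provisioning command per node element -->
  <xsl:template match="node">
    <xsl:text>provision_node --host </xsl:text>
    <xsl:value-of select="@host"/>
    <xsl:text> --role </xsl:text>
    <xsl:value-of select="@role"/>
    <xsl:text>&#10;</xsl:text>
  </xsl:template>
</xsl:stylesheet>
```

Applied with any standard XSLT processor (for example, xsltproc), a descriptor such as <experiment><node host="vm1" role="db"/><node host="vm2" role="app"/><workload name="bench1"/></experiment> would yield one provisioning command per node followed by a workload invocation; composing several such stages, each rewriting or consuming the XML intermediate representation, mirrors the compiler-style pipeline described above.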
Identifier | oai:union.ndltd.org:GATECH/oai:smartech.gatech.edu:1853/49098 |
Date | 20 September 2013 |
Creators | Jayasinghe, Indika D. |
Contributors | Pu, Calton |
Publisher | Georgia Institute of Technology |
Source Sets | Georgia Tech Electronic Thesis and Dissertation Archive |
Language | en_US |
Detected Language | English |
Type | Dissertation |
Format | application/pdf |