1 |
Performance Evaluation of OpenStack Deployment Tools. Aluguri, Tarun. January 2016.
Cloud computing enables on-demand access to a shared pool of computing resources that can be easily provisioned, configured and released with minimal management cost and effort. OpenStack is an open source cloud management platform aimed at providing private or public IaaS clouds on standard hardware. Since deploying OpenStack manually is tedious and time-consuming, there are several tools that automate the deployment of OpenStack. Usually, cloud admins choose a tool based on its level of automation, ease of use or interoperability with the tools they already use. However, another desired factor when choosing a deployment tool is its deployment speed. Cloud admins cannot select on this factor, since there is no previous work comparing deployment tools based on deployment time. This thesis aims to address this issue. The main aim of the thesis is to evaluate the performance of OpenStack deployment tools with respect to operating system provisioning and OpenStack deployment time on physical servers. Furthermore, the effect of the number of nodes, the OpenStack architecture deployed and the resources (cores and RAM) provided to the deployment node on provisioning and deployment times is also analyzed. The tools are also classified based on stages of deployment and the method of deploying OpenStack services. In this thesis we evaluate the performance of MAAS, Foreman, Mirantis Fuel and Canonical Autopilot. The performance of the tools is measured via an experimental research method. Operating system provisioning time and OpenStack deployment time are measured while varying the number of nodes/OpenStack architecture and the resources (cores and RAM) provided to the deployment node. Results show that the provisioning time of MAAS is less than that of Mirantis Fuel, which in turn is less than that of Foreman. Furthermore, for all three tools provisioning time increases as the number of nodes increases. However, the increase is smallest for MAAS, compared to Mirantis Fuel and Foreman.
Similarly, results for bare metal OpenStack deployment time show that Canonical Autopilot outperforms Mirantis Fuel by a significant margin for all OpenStack scenarios considered. Furthermore, as the number of nodes in an OpenStack scenario increases, the deployment time for both tools increases. From the research, it is concluded that MAAS and Canonical Autopilot perform better as a provisioning tool and a bare metal OpenStack deployment tool, respectively, than the other tools analyzed. Furthermore, from the analysis it can be concluded that an increase in the number of nodes/OpenStack architecture leads to an increase in both provisioning time and OpenStack deployment time for all the tools.
|
2 |
Live Migration of Virtual Machines in the Cloud: An Investigation by Measurements. Pasumarthy, Sarat Chandra. January 2015.
Cloud computing has grown in prevalence in recent years due to its concept of computing as a service, which allows users to offload infrastructure management costs and tasks to a cloud provider. Cloud providers leverage server virtualization technology for efficient resource utilization, faster provisioning times, reduced energy consumption, etc. Cloud computing inherits a key feature of server virtualization: live migration of virtual machines (VMs). This technique allows transferring a VM from one host to another with minimal service interruption. However, live migration is a complex process, and the cloud management software used by cloud providers can have a significant influence on the migration process. This thesis work aims to investigate the complex process of live migration as performed by the hypervisor, as well as the additional steps involved when a cloud management software or platform is present, and to form a timeline of this collection of steps, or phases. The work also aims to investigate the performance of these phases, in terms of time, when migrating VMs with different sizes and workloads. For this thesis, the Kernel-based Virtual Machine (KVM) hypervisor and the OpenStack cloud software have been considered. The methodology employed is experimental and quantitative; the essence of this work is investigation by passive network measurements. To elaborate, this thesis performs migrations on physical test-beds and uses measurements to investigate and evaluate the migration process performed by the KVM hypervisor as well as by the OpenStack platform deployed on KVM hypervisors. Experiments are designed and conducted based on the objectives to be met. The results of the work primarily include the timeline of the migration phases of both the KVM hypervisor and the OpenStack platform, along with the time taken by each migration phase, the total migration time and the VM downtime.
The results indicate that the total migration time, the downtime and a few of the phases increase with increasing CPU load and VM size, while some of the phases show no such trend. It has also been observed that the transfer stage alone does not determine the total time; every phase of the process has a significant influence on the migration process. The conclusion from this work is that although cloud management software aids in managing the infrastructure, it has a notable impact on the migration process carried out by the hypervisor. Moreover, the migration phases and their proportions depend not only on the VM but on the physical environment as well. This thesis focuses solely on the time factor of each phase; further evaluation of each phase with respect to its resource utilization could provide better insight into probable optimization opportunities.
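As an illustration of the kind of timeline analysis described, here is a minimal sketch that turns timestamped migration events into per-phase durations, total migration time and downtime. The phase names and timestamps are illustrative assumptions, not the phases or values measured in the thesis.

```python
# Sketch: derive per-phase durations and total migration time from
# timestamped migration events. Phase names and timestamps are
# illustrative placeholders, not the thesis's measured phases.

def phase_durations(events):
    """events: list of (phase_name, start_s, end_s) tuples."""
    return {name: round(end - start, 3) for name, start, end in events}

def total_migration_time(events):
    """Span from the earliest phase start to the latest phase end."""
    return round(max(e for _, _, e in events) - min(s for _, s, _ in events), 3)

events = [
    ("pre-migration setup", 0.0, 1.2),
    ("iterative memory copy", 1.2, 14.8),
    ("stop-and-copy (downtime)", 14.8, 15.1),
    ("post-migration commit", 15.1, 16.0),
]

print(phase_durations(events)["stop-and-copy (downtime)"])  # 0.3 -> the VM downtime
print(total_migration_time(events))                         # 16.0
```

In the thesis's passive-measurement setting, the event timestamps would come from packet captures rather than being hard-coded.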
|
3 |
Smart Placement of Virtual Machines: Optimizing Energy Consumption. Kari, Raywon Teja. January 2016.
Context: Recent trends show a tremendous shift from IT companies hosting their applications and systems in self-managed on-premise data centers to using so-called cloud data centers. Cloud computing has received immense popularity due to its architecture and ease of use. Due to this increase in demand and shift in practices, the number of data centers has grown considerably over time, resulting in increased energy consumption. In this thesis, research is carried out on optimizing the energy consumption of a typical cloud data center. The OpenStack cloud computing software is chosen as the platform, and live migration is a key aspect of this research. Objectives: Our objectives are: to design an OpenStack testbed to implement the migration of virtual machines; to estimate the energy consumption of the data center; and to design a heuristic algorithm to evaluate the performance metrics and optimize the overall energy consumption. Methods: We used PowerAPI, a software tool, to estimate the energy consumption of hosts as well as virtual machines. A heuristic algorithm is designed and implemented in an instrumented OpenStack testbed to optimize energy consumption. Server consolidation and load balancing of virtual machines are the methodologies used in the heuristic algorithm design. Our research is carried out against the functionality of the Nova scheduler of OpenStack. Results: The results section describes the values of the performance metrics yielded by the experiment. The obtained results show that energy consumption can be reduced significantly by modifying the way the OpenStack Nova scheduler works. The experiment is carried out on vanilla OpenStack and on OpenStack with the heuristic algorithm in place; in the second case, the Nova scheduler algorithms are not used and the heuristic algorithm is used instead.
CPU utilization and CPU load were observed to be higher than the metrics observed with the Nova scheduler, while energy consumption was lower than in the design with the Nova scheduler. Conclusions: The research shows that energy consumption can be reduced significantly using suitable algorithms without compromising the quality of the service offered. However, the design has a slight impact on the CPU, as those metrics are higher than with the Nova scheduler, although this does not have a noticeable impact on the system.
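To illustrate the consolidation idea, here is a toy sketch that packs VM core demands onto hosts with a first-fit-decreasing heuristic and estimates power with a simple linear model. Both the heuristic and the power figures are assumptions for illustration; this is not the thesis's actual algorithm, nor PowerAPI's model.

```python
# Toy server-consolidation sketch: pack VMs onto as few hosts as possible
# (first-fit decreasing) and estimate power with a linear model. All
# capacities and power figures below are assumed values for illustration.

P_IDLE, P_MAX, HOST_CAP = 100.0, 250.0, 16  # watts, watts, vCPUs (assumed)

def consolidate(vm_cores, host_cap=HOST_CAP):
    """Place per-VM vCPU demands with first-fit decreasing; return host loads."""
    hosts = []
    for demand in sorted(vm_cores, reverse=True):
        for i, load in enumerate(hosts):
            if load + demand <= host_cap:
                hosts[i] += demand
                break
        else:
            hosts.append(demand)  # no host fits: power on a new one
    return hosts

def power(load, cap=HOST_CAP):
    """Linear model: idle power plus utilization-proportional dynamic power."""
    return P_IDLE + (P_MAX - P_IDLE) * (load / cap)

vms = [8, 4, 4, 2, 2, 1]   # per-VM vCPU demands (example data)
loads = consolidate(vms)
print(loads)                # [16, 5] -> two hosts instead of six
print(sum(power(l) for l in loads))
```

Packing onto fewer hosts lowers total idle power, which is why consolidation is a common energy-optimization strategy despite its load-balancing trade-offs.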
|
4 |
Performance Evaluation of OpenStack Deployment Tools. Vemula, S Sai Srinivas Jayapala. January 2016.
Cloud computing allows access to a collection of computing resources that can be easily provisioned, configured and released on demand with minimal cost and effort. OpenStack is an open source cloud management platform aimed at providing public or private IaaS clouds on standard hardware. Since deploying OpenStack manually is tedious and time-consuming, several tools that automate the deployment of OpenStack are available. Usually, cloud administrators choose a tool based on its level of automation, ease of use or interoperability with the tools they already use. However, another desired factor when choosing a deployment tool is its deployment speed. Cloud admins cannot select on this factor, since there is no previous work comparing deployment tools based on deployment time. This thesis aims to address this issue. The main aim of the thesis is to evaluate the performance of OpenStack deployment tools with respect to operating system provisioning and OpenStack deployment time on physical servers. Furthermore, the effect of the number of nodes, the OpenStack architecture deployed and the resources (cores and RAM) provided to the deployment node on provisioning and deployment times is also analyzed. The tools are also classified based on stages of deployment and the method of deploying OpenStack services. In this thesis we evaluate the performance of MAAS, Foreman, Mirantis Fuel and Canonical Autopilot. The performance of the tools is measured via an experimental research method. Operating system provisioning time and OpenStack deployment time are measured while varying the number of nodes/OpenStack architecture and the resources (cores and RAM) provided to the deployment node. Results show that the provisioning time of MAAS is less than that of Mirantis Fuel, which in turn is less than that of Foreman, for all node scenarios and resource cases considered. Furthermore, for all three tools provisioning time increases as the number of nodes increases.
However, the increase is smallest for MAAS, compared to Mirantis Fuel and Foreman. Similarly, results for bare metal OpenStack deployment time show that Canonical Autopilot outperforms Mirantis Fuel by a significant margin for all OpenStack scenarios and resource cases considered. Furthermore, as the number of nodes in an OpenStack scenario, as well as its complexity, increases, the deployment time for both tools increases. From the research, it is concluded that MAAS and Canonical Autopilot perform better as a provisioning tool and a bare metal OpenStack deployment tool, respectively, than the other tools analyzed. Furthermore, from the analysis it can be concluded that an increase in the number of nodes/OpenStack architecture leads to an increase in both provisioning time and OpenStack deployment time for all the tools. Finally, after analyzing the results, the tools are classified based on the method of deploying OpenStack services, i.e., parallel or role-wise parallel.
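The timing methodology both deployment-tool theses describe can be sketched as a simple stopwatch harness around each deployment stage. The stage functions below are placeholders that merely sleep, standing in for the real provisioning and deployment steps.

```python
# Sketch of a stopwatch harness for comparing deployment tools: time each
# stage and report per-stage and total durations. The stage functions are
# placeholders, not actual tool invocations.

import time

def timed(stage_fn):
    """Run one deployment stage and return its wall-clock duration."""
    start = time.monotonic()
    stage_fn()
    return time.monotonic() - start

def provision_os():        # placeholder for PXE boot + OS install
    time.sleep(0.01)

def deploy_openstack():    # placeholder for OpenStack service deployment
    time.sleep(0.02)

stages = {"provisioning": provision_os, "deployment": deploy_openstack}
timings = {name: timed(fn) for name, fn in stages.items()}
print(f"total: {sum(timings.values()):.2f}s")
```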
|
5 |
An Investigation of CPU utilization relationship between host and guests in a Cloud infrastructure. Ahmadi Mehri, Vida. January 2015.
Cloud computing stands as a revolution in the IT world in recent years. This technology facilitates resource sharing by reducing hardware costs for business users and promises energy efficiency and better resource utilization to service providers. CPU utilization is a key metric considered in resource management across clouds. The main goal of this thesis is to investigate CPU utilization behavior with regard to host and guest, which would help us understand the relationship between them. It is expected that an understanding of these relationships would be helpful in resource management. The methodology we adopted is experimental research, involving experimental modeling, measurements and observations of the results. The experimental setup covers several complex scenarios, including a cloud and a standalone virtualization system. The results are further analyzed for visual correlation. Results show that CPU utilization in the cloud and virtualization scenarios coincides. More experimental scenarios were designed based on the first observations. The obtained results show irregular behavior between the PM and VM under variable workload. CPU utilization retrieved from both the cloud and a standalone system is similar. At 100% workload, CPU utilization was constant and no correlation coefficient could be obtained. Lower workloads showed more or less correlation in most of the cases in our correlation analysis. It is expected that a larger number of iterations could vary the output. Further analysis of these relationships for proper resource management techniques will be considered.
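The correlation analysis described can be sketched as follows; the utilization series are made-up illustrations, not the thesis's measurements. Note that under a constant 100% load the standard deviation of the samples is zero, so the coefficient is undefined, which matches the observation above.

```python
# Sketch: Pearson correlation between host (PM) and guest (VM) CPU
# utilization samples. The series below are illustrative, not measured data.

from statistics import mean, stdev

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))  # undefined if either series is constant

pm = [20, 35, 50, 65, 80]   # host CPU % (illustrative)
vm = [18, 33, 47, 62, 79]   # guest CPU % (illustrative)
print(round(pearson(pm, vm), 3))   # close to 1.0 for strongly coupled load
```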
|
6 |
Enhancing OpenStack clouds using P2P technologies. Joseph, Robin. January 2017.
It has long been known that OpenStack has issues with scalability. Peer-to-peer systems, on the other hand, have proven to scale well without significant reduction in performance. The objectives of this thesis are to study the challenges associated with P2P-enhanced clouds and present solutions for overcoming them. As a case study, we take the architecture of the P2P-enhanced OpenStack implemented at Ericsson, which uses the CYCLON P2P protocol. We study the OpenStack architecture and P2P technologies, and finally propose solutions and possibilities for addressing the challenges faced by P2P-enhanced OpenStack clouds. We focus mainly on a decentralized identity service and the management of virtual machine images. This work also investigates the characterization of P2P architectures for their use in P2P-enhanced OpenStack clouds. The results section shows that the proposed solution enables the existing P2P system to scale beyond what was originally possible. We also show that the P2P-enhanced system performs better than standard OpenStack. Ericsson Cloud Research supported this work through the guidance of Dr. Fetahi Wuhib, Dr. Joao Monteiro Soares and Vinay Yadav, Experienced Researchers, Ericsson Cloud Research, Kista, Stockholm.
|
7 |
Comparison between OpenStack virtual machines and Docker containers in regards to performance. Bonnier, Victor. January 2020.
Cloud computing is a fast-growing technology that more and more companies have started to use over the years. When deploying a cloud computing application it is important to know what kind of technology to use. Two popular technologies are containers and virtual machines. The objective of this study was to find out how performance differs between Docker containers and OpenStack virtual machines in regard to memory usage, CPU utilization, boot time and throughput, from a scalability perspective, when scaling between two and four instances of containers and virtual machines. The comparison was done with two different machines running: one with Docker running the containers, and another with OpenStack running a stack of virtual machines. To gather the data from the virtual machines I used the command "htop", and to get the data from the containers I used the command "docker stats". The results of the experiment favored the Docker containers: boot time for the virtual machines was between 280 and 320 seconds, while the containers booted in 5 to 8 seconds. Memory usage of the virtual machines was more than double that of the containers. CPU utilization and throughput also favored the containers, and the performance gap increased when scaling the application out to four instances in all cases except for throughput when adding information to a database. The conclusion that can be drawn is that Docker containers are favored over OpenStack virtual machines from a performance perspective. There are still other aspects to consider when choosing which technology to use when deploying a cloud application, such as security.
|
8 |
Performance Management for Cloud Services: Implementation and Evaluation of Schedulers for OpenStack. Lindgren, Hans. January 2013.
To achieve the best performance out of an IaaS cloud, the resource management layer must be able to distribute the workloads it is tasked with optimally on the underlying infrastructure. A utilization-based scheduler can take advantage of the fact that allocated resources and actual resource usage often differ to make better-informed decisions of where to place future requests. This thesis presents the design, implementation and evaluation of an initial placement controller that uses host utilization data as one of its inputs to help place virtual machines according to one of a number of supported management objectives. The implementation, which builds on top of the OpenStack cloud platform, deals with two different objectives, namely, balanced load and energy efficiency. The thesis also discusses additional objectives and how they can be supported. A testbed and demonstration platform consisting of the aforementioned controller, a synthetic load generator and a monitoring system are built and used during evaluation of the system. Results indicate that the scheduler performs equally well for both objectives using synthetically generated request patterns of both interactive and batch type workloads. A discussion of current limitations of the scheduler and ways to overcome those conclude the thesis. Among the things discussed are how the rate at which host utilization data is collected limits the performance of the scheduler and under which circumstances dynamic placement of virtual machines must be used to complement utilization-based scheduling to avoid the risk of overloading the cloud.
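The two management objectives can be illustrated with a minimal placement sketch: balanced load spreads requests across hosts, while energy efficiency packs them onto already-busy hosts so idle hosts can stay idle. This is an assumed toy model, not the thesis's actual OpenStack controller.

```python
# Toy sketch of utilization-based initial placement under two objectives.
# "balanced" picks the least-utilized host; "energy" picks the most-utilized
# host that can still fit the request. Utilization values are illustrative.

def place(hosts, demand, objective):
    """hosts: {name: utilization in [0, 1]}; returns the chosen host name."""
    feasible = {h: u for h, u in hosts.items() if u + demand <= 1.0}
    if not feasible:
        raise RuntimeError("no host can fit the request")
    if objective == "balanced":
        return min(feasible, key=feasible.get)   # spread the load
    if objective == "energy":
        return max(feasible, key=feasible.get)   # consolidate the load
    raise ValueError(f"unknown objective: {objective}")

hosts = {"h1": 0.70, "h2": 0.30, "h3": 0.05}
print(place(hosts, 0.20, "balanced"))  # h3
print(place(hosts, 0.20, "energy"))    # h1
```

In a real scheduler the utilization figures would come from collected monitoring data, which is why the collection rate limits scheduling quality, as the thesis discusses.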
|
9 |
Investigating performance and energy efficiency on a private cloud. Smith, James William. January 2014.
Organizations are turning to private clouds due to concerns about security, privacy and administrative control. They are attracted by the flexibility and other advantages of cloud computing but are wary of breaking decades-old institutional practices and procedures. Private clouds can help to alleviate these concerns by retaining security policies and in-organization ownership and by providing increased accountability compared with public services. This work investigates how it may be possible to develop an energy-aware private cloud system able to adapt workload allocation strategies so that overall energy consumption is reduced without loss of performance or dependability. Current literature focuses on consolidation as a method for improving the energy efficiency of cloud systems, but if consolidation is undesirable due to performance penalties, dependability or latency, then another approach is required. Given a private cloud in which the machines are constant, with no machines being powered down in response to changing workloads, and a set of virtual machines to run, each with different characteristics and profiles, it is possible to vary the virtual machine placement mix to reduce energy consumption or improve the performance of the VMs. Through a series of experiments this work demonstrates that workload mixes can affect energy consumption and the performance of applications running inside virtual machines. These experiments took the form of measuring the performance and energy usage of applications running inside virtual machines, while the arrangement of these virtual machines on their hosts was varied to determine the effect of different workload mixes. The insights from these experiments have been used to create a proof-of-concept custom VM Allocator system for the OpenStack private cloud computing platform.
Using CloudMonitor, a lightweight monitoring application that gathers data on system performance and energy consumption, the implementation takes a holistic view of the private cloud state to inform workload placement decisions.
|
10 |
Performance Evaluation of MongoDB on Amazon Web Service and OpenStack. Avutu, Neeraj. January 2018.
Context: MongoDB is an open-source, scalable NoSQL database that distributes data over many commodity servers. It avoids a single point of failure by replicating and storing data in different locations. MongoDB uses a master-slave design rather than the ring topology used by Cassandra. Virtualization is the technique of running multiple virtual machines on a single host; it is the fundamental technology that allows cloud computing to provide resource sharing among users. Objectives: To study MongoDB and virtualization on AWS and OpenStack; to conduct experiments identifying the CPU utilization when MongoDB instances are deployed on AWS and on a physical server arrangement; and to understand the effect of replication on MongoDB in terms of throughput, CPU utilization and latency. Methods: Initially, a literature review is conducted to design the experiment around the stated problems. A three-node MongoDB cluster runs on Amazon EC2 and OpenStack Nova with Ubuntu 16.04 LTS as the operating system. Latency, throughput and CPU utilization were measured using this setup. This procedure was repeated for a five-node MongoDB cluster and a three-node production cluster with the six YCSB workload types. Results: Virtualization overhead has been identified in terms of CPU utilization, and the effects of virtualization on MongoDB are quantified in terms of CPU utilization, latency and throughput. Conclusions: It is concluded that latency decreases and throughput increases as the number of nodes increases, while replication causes an increase in latency.
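YCSB reports its summary metrics as "[SECTION], metric, value" lines. A small sketch of extracting the throughput and latency figures compared above from such a report follows; the numbers in the sample are made up for illustration.

```python
# Sketch: parse a YCSB-style summary report into a metrics dictionary.
# The sample mimics YCSB's "[SECTION], metric, value" output format;
# the values themselves are invented for illustration.

def parse_ycsb(report):
    metrics = {}
    for line in report.strip().splitlines():
        section, metric, value = (part.strip() for part in line.split(","))
        metrics[(section.strip("[]"), metric)] = float(value)
    return metrics

sample = """
[OVERALL], RunTime(ms), 12800.0
[OVERALL], Throughput(ops/sec), 4687.5
[READ], AverageLatency(us), 812.4
[UPDATE], AverageLatency(us), 1293.7
"""

m = parse_ycsb(sample)
print(m[("OVERALL", "Throughput(ops/sec)")])   # 4687.5
print(m[("READ", "AverageLatency(us)")])       # 812.4
```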
|