  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Mitigating Interference During Virtual Machine Live Migration through Storage Offloading

Stuart, Morgan S 01 January 2016 (has links)
Today's cloud landscape has evolved computing infrastructure into a dynamic, high-utilization, service-oriented paradigm. This shift has enabled the commoditization of large-scale storage and distributed computation, allowing engineers to tackle previously untenable problems without large upfront investment. A key enabler of flexibility in the cloud is the ability to transfer running virtual machines across subnets or even datacenters using live migration. However, live migration can be a costly process, one that can interfere with applications not involved in the migration. This work investigates storage interference through experimentation with real-world systems and well-established benchmarks. To address migration interference in general, a buffering technique is presented that offloads the migration's reads, eliminating interference in the majority of scenarios.
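A minimal sketch of the read-offloading idea described above (hypothetical names and block granularity; not the thesis's actual implementation): migration reads are served from a staging buffer filled from an offload target, so the source VM's primary storage is not hit by the migration stream.

```python
# Hypothetical sketch of read offloading for a live-migration stream.
# Blocks scheduled for transfer are staged ahead of time (e.g. from a replica
# or snapshot), so the migration's reads do not compete with the VM's own I/O.
from collections import OrderedDict

class OffloadBuffer:
    def __init__(self, stage_fn, capacity_blocks=4096):
        self.stage_fn = stage_fn            # reads a block from the offload target
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()         # block_id -> bytes (FIFO staging buffer)

    def prefetch(self, block_ids):
        """Stage upcoming blocks before the migration thread asks for them."""
        for bid in block_ids:
            if bid not in self.blocks:
                self.blocks[bid] = self.stage_fn(bid)
            while len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)

    def read(self, block_id, fallback_fn):
        """Serve a migration read from the buffer; fall back to primary storage."""
        data = self.blocks.pop(block_id, None)
        return data if data is not None else fallback_fn(block_id)
```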
12

Dynamic resource balancing in virtualization clusters

Grafnetter, Michael January 2011 (has links)
The purpose of this thesis was to analyze the problem of resource load balancing in virtualization clusters. Another aim was to implement a pilot version of a resource load balancer for a VMware vSphere Standard-based virtualization cluster. The thesis also surveyed available commercial and open-source resource load balancers and examined their usability and effectiveness. In the custom solution, a modified greedy algorithm was chosen to determine which virtual machines should be migrated and to select their target hosts. Furthermore, experiments were conducted to determine some of the algorithm's parameters. Finally, it was experimentally verified that the implemented solution can effectively balance virtualization server workloads by live migrating the virtual machines running on those hosts.
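A minimal sketch of the kind of greedy rebalancing step described above (illustrative only; the load model and threshold are assumptions, not the thesis's actual algorithm): repeatedly take the most loaded host above a threshold and move its smallest suitable VM to the least loaded host.

```python
# Illustrative greedy rebalancing step; not the thesis's exact algorithm.
def greedy_rebalance(hosts, threshold=0.8):
    """hosts: dict host -> {"capacity": float, "vms": dict vm -> load}.
    Returns a list of (vm, source_host, target_host) migration proposals."""
    migrations = []

    def utilization(h):
        return sum(hosts[h]["vms"].values()) / hosts[h]["capacity"]

    while True:
        overloaded = [h for h in hosts if utilization(h) > threshold]
        if not overloaded:
            break
        src = max(overloaded, key=utilization)          # busiest host first
        dst = min(hosts, key=utilization)               # least loaded target
        # pick the smallest VM whose move keeps the target under the threshold
        candidates = sorted(hosts[src]["vms"].items(), key=lambda kv: kv[1])
        for vm, load in candidates:
            if (sum(hosts[dst]["vms"].values()) + load) / hosts[dst]["capacity"] <= threshold:
                hosts[dst]["vms"][vm] = hosts[src]["vms"].pop(vm)
                migrations.append((vm, src, dst))
                break
        else:
            break                                       # no feasible move; stop
    return migrations
```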
13

Élasticité de l’exécution des processus métier / Elasticity of business processes execution

Rosinosky, Guillaume 23 January 2019 (has links)
The availability of middleware platforms in the cloud, with transparent scalability, is real progress for software developers and integrators: they can develop and deploy their applications without worrying about operational details. However, the cost of operating a cloud infrastructure can quickly become significant, and providers need methods to reduce it by adapting the size of the resources to customers' needs. In this thesis, we focus on multi-tenant transactional web applications, more precisely on business process execution engines. We propose methods for optimizing the operational costs of a provider of business process execution "as a Service" (BPMaaS) while ensuring a sufficient level of quality of service. This type of application does not scale easily because of its persistence tier and the transactional nature of its operations. Customer installations must be distributed so as to optimize costs, and occasionally moved as the load evolves. These moves (or migrations) have an impact on the quality of service and must therefore be limited. We first propose a method for measuring the capacity of cloud resources in terms of BPM task throughput, and then a method for measuring the impact of migrations, which we evaluated, confirming our hypotheses. We then propose several linear optimization models, as well as heuristics for resource allocation and customer distribution, that take into account the infrastructure cost, resource capacity, and customer needs while limiting the number of migrations. These models rely on knowledge of the evolution of each customer's load per time slot. We experimented with the three proposed methods on the Bonita BPM solution and showed that they yield substantial savings on infrastructure usage compared to a baseline method.
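A highly simplified sketch of what such a tenant-allocation optimization model can look like (illustrative only; the data, variable names, and use of the PuLP solver are assumptions, not the thesis's actual model): minimize the cost of the hosts that are switched on, subject to capacity, while capping the number of tenant migrations from the previous time slot.

```python
# Hypothetical, simplified tenant-allocation ILP; PuLP is used for illustration.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

customers = ["c1", "c2", "c3"]
hosts = ["h1", "h2"]
load = {"c1": 40, "c2": 35, "c3": 20}           # BPM task throughput demanded
capacity = {"h1": 60, "h2": 60}                  # throughput a host can serve
cost = {"h1": 1.0, "h2": 1.2}                    # hourly price of each host
prev_host = {"c1": "h1", "c2": "h1", "c3": "h2"}
max_migrations = 1

prob = LpProblem("bpmaas_allocation", LpMinimize)
x = LpVariable.dicts("assign", [(c, h) for c in customers for h in hosts], cat=LpBinary)
y = LpVariable.dicts("open", hosts, cat=LpBinary)

prob += lpSum(cost[h] * y[h] for h in hosts)                       # objective: host cost
for c in customers:
    prob += lpSum(x[(c, h)] for h in hosts) == 1                   # one host per tenant
for h in hosts:
    prob += lpSum(load[c] * x[(c, h)] for c in customers) <= capacity[h] * y[h]
prob += lpSum(1 - x[(c, prev_host[c])] for c in customers) <= max_migrations

prob.solve()
print([(c, h) for (c, h) in x if x[(c, h)].value() == 1])
```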
14

Challenges and New Solutions for Live Migration of Virtual Machines in Cloud Computing Environments

Zhang, Fei 03 May 2018 (has links)
No description available.
15

Algorithms for efficient VM placement in data centers : Cloud Based Design and Performance Analysis

Atchukatla, Mahammad suhail January 2018 (has links)
Context: Recent trends show that cloud computing adoption is continuously increasing in every organization, so demand for cloud datacenters increases tremendously over time, resulting in significantly increased resource utilization of the datacenters. In this thesis work, research was carried out on optimizing energy consumption through bin packing of virtual machines in the datacenter. The CloudSim simulator was used for evaluating bin-packing algorithms, and for the practical implementation the OpenStack cloud computing environment was chosen as the platform for this research. Objectives: The objectives of this research are to (1) perform simulation of the algorithms in the CloudSim simulator, (2) estimate and compare the energy consumption of different packing algorithms, and (3) design an OpenStack testbed to implement the bin-packing algorithm. Methods: We use the CloudSim simulator to estimate the energy consumption of the First Fit, First Fit Decreasing, Best Fit, and Enhanced Best Fit algorithms. A heuristic model is designed for implementation in the OpenStack environment to optimize the energy consumption of the physical machines. Server consolidation and live migration are used in the algorithm design for the OpenStack implementation. Our research also extends to the Nova scheduler functionality in the OpenStack environment. Results: In most cases the Enhanced Best Fit algorithm gives the best results. Results are obtained from the default OpenStack VM placement algorithm as well as from the heuristic algorithm developed in this work, and their comparison indicates that the total energy consumption of the datacenter is reduced without affecting potential service level agreements. Conclusions: The research shows that the energy consumption of the physical machines can be optimized without compromising the offered service quality. A Python wrapper was developed to implement this model in the OpenStack environment and to minimize the energy consumption of the physical machines by shutting down unused physical machines. The results indicate that CPU utilization does not vary much when live migration of a virtual machine is performed.
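A minimal sketch of the bin-packing placement idea evaluated above (illustrative capacity and demand units; not the thesis's implementation): first-fit decreasing sorts VMs by demand and places each on the first host with room, which tends to leave fewer hosts powered on.

```python
# First-fit-decreasing VM placement sketch (illustrative, not the thesis code).
def first_fit_decreasing(vm_demands, host_capacity):
    """vm_demands: dict vm -> resource demand; host_capacity: capacity per host.
    Returns a list of hosts, each a dict vm -> demand."""
    hosts = []
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: kv[1], reverse=True):
        for host in hosts:
            if sum(host.values()) + demand <= host_capacity:
                host[vm] = demand
                break
        else:
            hosts.append({vm: demand})       # open a new host only when needed
    return hosts

placement = first_fit_decreasing({"vm1": 3, "vm2": 5, "vm3": 2, "vm4": 4}, host_capacity=8)
print(len(placement), "hosts used:", placement)
```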
16

Comparing Live Migration between Linux Containers and Kernel Virtual Machine : Investigation study in terms of parameters

Kotikalapudi, Sai Venkat Naresh January 2017 (has links)
Context. Virtualization technologies have been extensively used in various cloud platforms. Hardware replacements and maintenance are occasionally required, which leads to business downtime. Live migration is performed to ensure high availability of services, which is a major aspect of cloud platforms. The performance of live migration in virtualization technologies directly impacts the performance of cloud platforms; hence a comparison is performed between two mainstream virtualization technologies, container-based and hypervisor-based virtualization. Objectives. In the present study, the objective is to perform live migration with hypervisor-based and container-based virtualization technologies, Kernel Virtual Machine (KVM) and Linux Containers (LXC) respectively, and to measure and compare the downtime, total migration time, CPU utilization, and disk utilization of KVM and LXC during live migration. Methods. An initial literature review is conducted to get in-depth knowledge about live migration in virtualization technologies. An experiment is conducted to perform live migration in KVM and LXC. The live migration process is performed while 100% and 66% workloads are generated against Cassandra running in the virtual machine and the container. The performance of live migration in KVM and LXC is measured in terms of CPU utilization, disk utilization, total migration time, and downtime. Results. Based on the results obtained from the experiment, graphs are plotted for the performance of KVM and LXC during live migration. The results indicate that KVM has better CPU utilization compared to LXC, whereas the downtime, total migration time, and disk utilization of LXC are relatively better than those of KVM. From the obtained results, the mean and standard deviation are calculated, and box plots of downtime and total migration time are produced to illustrate the difference between KVM and LXC. The measurable difference between KVM and LXC is calculated using Cohen's d effect size for downtime, total migration time, CPU utilization, and disk utilization. Conclusions. The present study concludes that no single virtualization technology has better performance on all metrics: LXC performs better in terms of downtime, total migration time, and disk utilization, while KVM performs better when CPU usage is considered.
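For reference, a small sketch of the Cohen's d effect size used above, in its standard pooled-standard-deviation form (the sample values below are made up, not the study's measurements):

```python
# Cohen's d with a pooled standard deviation (example values are made up).
from statistics import mean, stdev

def cohens_d(a, b):
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled

kvm_downtime = [2.1, 2.4, 2.0, 2.3]   # seconds, hypothetical samples
lxc_downtime = [1.2, 1.1, 1.4, 1.3]
print(round(cohens_d(kvm_downtime, lxc_downtime), 2))
```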
17

Performance comparison of KVM and XEN for telecommunication services

Outadi, Siavash, Trchalikova, Jana January 2013 (has links)
High stability of telecommunication services has a positive effect on customer satisfaction and thus helps to maintain competitiveness of the product in telecommunication market. Since live migration provides a minimal downtime of virtual machines, it is deployed by telecommunication companies to ensure high availability of services and to prevent service interruptions. The main objective of this research is to assess the performance of various hypervisors in terms of live migration and determine which of them best meets the criteria given by a telecommunication company. Response time and CPU utilization of telecommunication services are measured in non-virtualized and virtualized environments to better understand the impacts of virtualization on the services. Two hypervisors, i.e. KVM and XEN, are used to grasp their characteristic behaviour of handling the services. Furthermore, performance of live migration is assessed for both hypervisors using miscellaneous test cases to identify which one has the best overall performance in terms of downtime and total migration time.
18

Resilire: Achieving High Availability Through Virtual Machine Live Migration

Lu, Peng 16 October 2013 (has links)
High availability is a critical feature of data centers, cloud, and cluster computing environments. Replication is a classical approach to increase service availability by providing redundancy. However, traditional replication methods are increasingly unattractive for deployment due to several limitations such as application-level non-transparency, non-isolation of applications (causing security vulnerabilities), complex system management, and high cost. Virtualization overcomes these limitations through another layer of abstraction, and provides high availability through virtual machine (VM) live migration: a guest VM image running on a primary host is transparently check-pointed and migrated, usually at a high frequency, to a backup host, without pausing the VM; the VM is resumed from the latest checkpoint on the backup when a failure occurs. A virtual cluster (VC) generalizes the VM concept for distributed applications and systems: a VC is a set of multiple VMs deployed on different physical machines connected by a virtual network.

This dissertation presents a set of VM live migration techniques, their implementations in the Xen hypervisor and Linux operating system kernel, and experimental studies conducted using benchmarks (e.g., SPEC, NPB, Sysbench) and production applications (e.g., Apache webserver, SPECweb).

We first present a technique for reducing VM migration downtimes called FGBI. FGBI reduces the dirty memory updates that must be migrated during each migration epoch by tracking memory at block granularity. Additionally, it determines memory blocks with identical content and shares them to reduce the increased memory overheads due to block-level tracking granularity, and uses a hybrid compression mechanism on the dirty blocks to reduce the migration traffic. We implement FGBI in the Xen hypervisor and conduct experimental studies, which reveal that the technique reduces the downtime by 77% and 45% over competitors including LLM and Remus, respectively, with a performance overhead of 13%.

We then present a lightweight, globally consistent checkpointing mechanism for virtual clusters, called VPC, which checkpoints the VC for immediate restoration after (one or more) VM failures. VPC predicts the checkpoint-caused page faults during each checkpointing interval, in order to implement a lightweight checkpointing approach for the entire VC. Additionally, it uses a globally consistent checkpointing algorithm, which preserves the global consistency of the VMs' execution and communication states, and only saves the updated memory pages during each checkpointing interval. Our Xen-based implementation and experimental studies reveal that VPC reduces the solo VM downtime by as much as 45% and reduces the entire VC downtime by as much as 50% over competitors including VNsnap, with a memory overhead of 9% and performance overhead of 16%.

The dissertation's third contribution is a VM resumption mechanism, called VMresume, which restores a VM from a (potentially large) checkpoint on slow-access storage in a fast and efficient way. VMresume predicts and preloads the memory pages that are most likely to be accessed after the VM's resumption, minimizing otherwise potential performance degradation due to cascading page faults that may occur on VM resumption. Our experimental studies reveal that VM resumption time is reduced by an average of 57% and the VM's unusable time is reduced by 73.8% over native Xen's resumption mechanism.

Traditional VM live migration mechanisms are based on hypervisors. However, hypervisors are increasingly becoming the source of several major security attacks and flaws. We present a mechanism called HSG-LM that does not involve the hypervisor during live migration. HSG-LM is implemented in the guest OS kernel so that the hypervisor is completely bypassed throughout the entire migration process. The mechanism exploits a hybrid strategy that reaps the benefits of both pre-copy and post-copy migration mechanisms, and uses a speculation mechanism that improves the efficiency of handling post-copy page faults. We modify the Linux kernel and develop a new page fault handler inside the guest OS to implement HSG-LM. Our experimental studies reveal that the technique reduces the downtime by as much as 55%, and reduces the total migration time by as much as 27% over competitors including Xen-based pre-copy, post-copy, and self-migration mechanisms.

In a virtual cluster environment, one of the main challenges is to ensure equal utilization of all the available resources while avoiding overloading a subset of machines. We propose an efficient load balancing strategy using VM live migration, called DCbalance. Differently from previous work, DCbalance records the history of mappings to inform future placement decisions, and uses a workload-adaptive live migration algorithm to minimize VM downtime. We improve Xen's original live migration mechanism, implement the DCbalance technique, and conduct experimental studies. Our results reveal that DCbalance reduces the decision generating time by 79%, the downtime by 73%, and the total migration time by 38%, over competitors including the OSVD virtual machine load balancing mechanism and the DLB (Xen-based) dynamic load balancing algorithm.

The dissertation's final contribution is a technique for VM live migration in Wide Area Networks (WANs), called FDM. In contrast to live migration in Local Area Networks (LANs), VM migration in WANs involves migrating disk data, besides memory state, because the source and the target machines do not share the same disk service. FDM is a fast and storage-adaptive migration mechanism that transmits both memory state and disk data with short downtime and total migration time. FDM uses the page cache to identify data that is duplicated between memory and disk, so as to avoid transmitting the same data unnecessarily. We implement FDM in Xen, targeting different disk formats including raw and Qcow2. Our experimental studies reveal that FDM reduces the downtime by as much as 87%, and reduces the total migration time by as much as 58% over competitors including pre-copy or post-copy disk migration mechanisms and the disk migration mechanism implemented in BlobSeer, a widely used large-scale distributed storage service. / Ph. D.
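A minimal sketch of the block-granularity dirty tracking with content-based sharing that FGBI is described as using (hypothetical block size and hashing; not the dissertation's Xen implementation): only blocks whose content hash changed since the last epoch are queued for transfer, and duplicate blocks within an epoch are sent once and then referenced.

```python
# Sketch of block-level dirty tracking with content-based sharing
# (hypothetical parameters; not the dissertation's Xen/FGBI code).
import hashlib

BLOCK = 4096  # bytes per tracked block (assumed)

def dirty_blocks(memory: bytes, last_hashes: dict):
    """Return (blocks_to_send, new_hashes). Blocks with an unchanged hash are
    skipped; blocks whose content already appears in this epoch are shared."""
    to_send, new_hashes, seen = {}, {}, {}
    for i in range(0, len(memory), BLOCK):
        block = memory[i:i + BLOCK]
        digest = hashlib.sha1(block).hexdigest()
        new_hashes[i] = digest
        if last_hashes.get(i) == digest:
            continue                             # clean since the last epoch
        if digest in seen:
            to_send[i] = ("ref", seen[digest])   # duplicate: send a reference
        else:
            seen[digest] = i
            to_send[i] = ("data", block)         # first copy of this content
    return to_send, new_hashes
```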
19

ESPECIFICAÇÃO DE UMA ARQUITETURA PARA MIGRAÇÃO DE MÁQUINAS VIRTUAIS UTILIZANDO ONTOLOGIAS / SPECIFICATION OF AN ARCHITECTURE FOR MIGRATION OF VIRTUAL MACHINES USING ONTOLOGIES

Rohden, Rafael Barasuol 23 July 2015 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Cloud computing is a new field in computing, particularly on the Internet, that provides new perspectives for interconnection technologies and raises issues in the architecture, design, and implementation of existing networks and data centers. Technologies such as server virtualization are now widely used to provide on-demand services while avoiding server sprawl. Servers are used in such a way that their resources are better employed to guarantee the availability of resources and services to users, allowing users to access services according to their needs, regardless of where the services are hosted or how they are delivered; this is the main characteristic of cloud computing. However, some servers eventually become overloaded while others remain idle, and the way to resolve this is live migration of virtual machines, that is, migrating a running virtual machine together with its applications to another server, thereby restoring the balance among servers. This balancing, called load balancing, is one of the techniques enabled by live migration, which has become key to optimizing computational resources. It is therefore worthwhile to develop solutions that make the deployment of this technology feasible. In a virtualized environment where monitoring applications check the load state of the servers, it is possible to interact with the virtual machines and perform migrations that ensure the optimization and utilization of computational resources. With this in mind, this work presents an architecture for the migration of virtual machines that uses ontologies for knowledge representation in a virtualization environment. For this purpose, an ontology, Onto-LM, was developed using the Ontology Development 101 process; it represents a virtual machine virtualization environment and helps visualize the current state of the environment. For the architecture specified in this work, the components and the information flows between them were defined, with ontologies as one of the components. To exemplify the architecture, a tool, OntoMig, was developed in the Java programming language; it runs and manages the information obtained from server monitoring, the population of the ontology, and the migration of virtual machines when needed.
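A minimal sketch of the monitor-and-migrate loop the architecture describes (all names and thresholds here are hypothetical, and the ontology-backed state is reduced to a plain dictionary): a monitor samples host load, consults the recorded environment state, and triggers a live migration when a host is overloaded.

```python
# Hypothetical monitor loop; the ontology-backed state is reduced to a dict.
import time

OVERLOAD = 0.85   # assumed CPU utilization threshold

def monitor_loop(get_load, environment, migrate, interval=30, rounds=10):
    """get_load(host) -> utilization in [0, 1]; environment: host -> list of VMs;
    migrate(vm, src, dst) performs the live migration (left abstract here)."""
    for _ in range(rounds):
        loads = {h: get_load(h) for h in environment}
        for host, load in loads.items():
            if load > OVERLOAD and environment[host]:
                target = min(loads, key=loads.get)        # least loaded host
                if target != host:
                    vm = environment[host].pop()          # pick a VM to move
                    environment[target].append(vm)
                    migrate(vm, host, target)
        time.sleep(interval)
```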
20

Automated Live Migration of Virtual Machines

Glad, Andreas, Forsman, Mattias January 2013 (has links)
This thesis studies the area of virtualization. The focus is on the sub-area of live migration, a technique that allows a seamless migration of a virtual machine from one physical machine to another physical machine. Virtualization is an attractive technique, utilized in large computer systems, for example data centers. By using live migration, data center administrators can migrate virtual machines seamlessly, without the users of the virtual machines taking notice of the migrations. Manually initiated migrations can become cumbersome with an ever-increasing number of physical machines. The number of physical and virtual machines is not the only problem; deciding when to migrate and where to migrate are other problems that need to be solved. Manually initiated migrations can also be inaccurate and untimely. Two different strategies for automated live migration have been developed in this thesis: the Push and the Pull strategies. The Push strategy tries to get rid of virtual machines and the Pull strategy tries to steal virtual machines. Both of these strategies, their design and implementation, are presented in the thesis. The strategies utilize Shannon's information entropy to measure the balance in the system. They further utilize a cost model to predict the time a migration would require, which is used together with the information entropy to decide which virtual machine to migrate if and when a hotspot occurs. The implementation was done with the help of OMNeT++, an open-source simulation tool. The strategies are evaluated with the help of a set of simulations, which include a variety of scenarios with different workloads. Our results show that the developed strategies can re-balance a system of computers, after a large number of virtual machines has been added or removed, in only 4-5 minutes. The results further show that our strategies are able to keep the system balanced when the system load is at medium, while virtual machines are continuously added to or removed from the system. The contribution this thesis brings to the field is a model for how automated live migration of virtual machines can be done to improve the performance of a computer system, for example a data center.
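A small sketch of using Shannon's information entropy as a balance metric, as described above (the normalization choice is an assumption): the hosts' load shares are treated as a probability distribution, and entropy is maximal when every host carries an equal share.

```python
# Entropy-based balance metric (illustrative; normalization is an assumption).
from math import log2

def balance_entropy(host_loads):
    """Return the entropy of the load distribution, normalized to [0, 1];
    1.0 means perfectly even load across hosts."""
    total = sum(host_loads)
    if total == 0 or len(host_loads) < 2:
        return 1.0
    shares = [l / total for l in host_loads if l > 0]
    entropy = -sum(p * log2(p) for p in shares)
    return entropy / log2(len(host_loads))

print(balance_entropy([10, 10, 10, 10]))   # 1.0: perfectly balanced
print(balance_entropy([40, 0, 0, 0]))      # 0.0: all load on one host
```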
