31.
Resilire: Achieving High Availability Through Virtual Machine Live Migration. Lu, Peng. 16 October 2013.
High availability is a critical feature of data centers, cloud, and cluster computing environments. Replication is a classical approach to increase service availability by providing redundancy. However, traditional replication methods are increasingly unattractive for deployment due to several limitations such as application-level non-transparency, non-isolation of applications (causing security vulnerabilities), complex system management, and high cost. Virtualization overcomes these limitations through another layer of abstraction, and provides high availability through virtual machine (VM) live migration: a guest VM image running on a primary host is transparently check-pointed and migrated, usually at a high frequency, to a backup host, without pausing the VM; the VM is resumed from the latest checkpoint on the backup when a failure occurs. A virtual cluster (VC) generalizes the VM concept for distributed applications and systems: a VC is a set of multiple VMs deployed on different physical machines connected by a virtual network.
This dissertation presents a set of VM live migration techniques, their implementations in the Xen hypervisor and Linux operating system kernel, and experimental studies conducted using benchmarks (e.g., SPEC, NPB, Sysbench) and production applications (e.g., Apache webserver, SPECweb). We first present a technique for reducing VM migration downtimes called FGBI. FGBI reduces the dirty memory updates that must be migrated during each migration epoch by tracking memory at block granularity. Additionally, it determines memory blocks with identical content and shares them to reduce the increased memory overheads due to block-level tracking granularity, and uses a hybrid compression mechanism on the dirty blocks to reduce the migration traffic. We implement FGBI in the Xen hypervisor and conduct experimental studies, which reveal that the technique reduces the downtime by 77% and 45% over competitors including LLM and Remus, respectively, with a performance overhead of 13%.
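To make the block-level idea concrete, below is a minimal sketch of dirty-block deduplication and compression in the spirit of FGBI. The 64-byte block size, SHA-1 hashing, and zlib compression are illustrative assumptions, not the dissertation's actual implementation choices.

```python
import hashlib
import zlib

def dirty_blocks(prev_mem: bytes, cur_mem: bytes, block_size: int = 64):
    """Yield (offset, block) for each block whose content changed this epoch."""
    for off in range(0, len(cur_mem), block_size):
        new = cur_mem[off:off + block_size]
        if prev_mem[off:off + block_size] != new:
            yield off, new

def build_migration_payload(prev_mem: bytes, cur_mem: bytes) -> list:
    """Deduplicate identical dirty blocks by content hash, compress the rest."""
    seen = {}       # content digest -> offset of the first copy already queued
    payload = []    # (offset, record) entries that would cross the network
    for off, block in dirty_blocks(prev_mem, cur_mem):
        digest = hashlib.sha1(block).digest()
        if digest in seen:
            payload.append((off, ("dup", seen[digest])))   # send a reference
        else:
            seen[digest] = off
            payload.append((off, ("data", zlib.compress(block))))
    return payload

# Example: two identical dirty blocks are sent once, plus one cheap reference.
old = bytes(256)
new = bytearray(old)
new[0:64] = b"A" * 64
new[128:192] = b"A" * 64
print(build_migration_payload(old, bytes(new)))
```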
We then present a lightweight, globally consistent checkpointing mechanism for virtual clusters, called VPC, which checkpoints the VC for immediate restoration after (one or more) VM failures. VPC predicts the page faults caused by checkpointing during each checkpointing interval, in order to implement a lightweight checkpointing approach for the entire VC. Additionally, it uses a globally consistent checkpointing algorithm, which preserves the global consistency of the VMs' execution and communication states, and saves only the updated memory pages during each checkpointing interval. Our Xen-based implementation and experimental studies reveal that VPC reduces the solo VM downtime by as much as 45% and the entire VC downtime by as much as 50% over competitors including VNsnap, with a memory overhead of 9% and a performance overhead of 16%.
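A toy sketch of the incremental, per-epoch side of this idea follows: only pages written since the last interval are shipped. The distributed protocol that keeps the whole VC globally consistent, and the page-fault prediction, are omitted; the class and method names are illustrative.

```python
class IncrementalCheckpointer:
    """Toy per-VM checkpointer: saves only pages updated in each interval."""

    def __init__(self):
        self.dirty = set()      # pages written since the last checkpoint
        self.snapshot = {}      # page index -> contents at the last checkpoint

    def on_write(self, page_idx: int):
        """Hook invoked from the write-protection fault handler."""
        self.dirty.add(page_idx)

    def checkpoint_epoch(self, memory: dict) -> dict:
        """Ship only the delta; the backup merges it into the full snapshot."""
        delta = {idx: memory[idx] for idx in self.dirty}
        self.snapshot.update(delta)
        self.dirty.clear()
        return delta

cp = IncrementalCheckpointer()
mem = {0: b"boot", 7: b"heap"}
cp.on_write(7)                       # only page 7 was touched this interval
print(cp.checkpoint_epoch(mem))      # {7: b'heap'} is all that is transferred
```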
The dissertation's third contribution is a VM resumption mechanism, called VMresume, which restores a VM from a (potentially large) checkpoint on slow-access storage in a fast and efficient way. VMresume predicts and preloads the memory pages that are most likely to be accessed after the VM's resumption, minimizing the performance degradation otherwise caused by cascading page faults at resumption time. Our experimental studies reveal that VM resumption time is reduced by an average of 57% and the VM's unusable time by 73.8% over native Xen's resumption mechanism.
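The prefetch idea can be sketched as below. Ranking pages by how recently they were touched before the checkpoint is an assumed predictor for illustration; `read_page` stands in for whatever reads a page out of the checkpoint file.

```python
def resume_with_prefetch(read_page, access_times: dict, budget: int) -> dict:
    """Eagerly load the `budget` pages ranked most likely to be used first."""
    ranked = sorted(access_times, key=access_times.get, reverse=True)
    return {idx: read_page(idx) for idx in ranked[:budget]}

# Example with a fake checkpoint: pages 3 and 9 were the most recently used.
fake_checkpoint = {i: bytes([i]) * 4096 for i in range(16)}
recency = {3: 100, 9: 97, 0: 12, 5: 3}      # higher = touched more recently
hot = resume_with_prefetch(fake_checkpoint.__getitem__, recency, budget=2)
print(sorted(hot))                           # [3, 9]; the rest fault in lazily
```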
Traditional VM live migration mechanisms are based on hypervisors. However, hypervisors are increasingly becoming the source of several major security attacks and flaws. We present a mechanism called HSG-LM that does not involve the hypervisor during live migration. HSG-LM is implemented in the guest OS kernel so that the hypervisor is completely bypassed throughout the entire migration process. The mechanism exploits a hybrid strategy that reaps the benefits of both pre-copy and post-copy migration mechanisms, and uses a speculation mechanism that improves the efficiency of handling post-copy page faults. We modify the Linux kernel and develop a new page fault handler inside the guest OS to implement HSG-LM. Our experimental studies reveal that the technique reduces the downtime by as much as 55%, and reduces the total migration time by as much as 27% over competitors including Xen-based pre-copy, post-copy, and self-migration mechanisms.
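The post-copy half of the hybrid strategy, with speculative fetching, can be sketched as follows. The pre-copy rounds are assumed to have already run; all interfaces here are hypothetical, since the real mechanism lives inside a guest-kernel page-fault handler.

```python
def make_postcopy_fault_handler(fetch_remote, local_pages: dict, window: int = 8):
    """Return a fault handler that demand-fetches the faulting page and then
    speculatively pulls the next `window - 1` pages, betting on locality."""
    def on_fault(idx: int):
        for i in range(idx, idx + window):
            if i not in local_pages:
                local_pages[i] = fetch_remote(i)   # one network round trip each
        return local_pages[idx]
    return on_fault

# Example: execution already runs on the target; one fault fills pages 40..47,
# so up to seven future faults on neighbouring pages are avoided.
source = {i: bytes([i % 256]) * 4096 for i in range(128)}   # stand-in source
target: dict = {}
handler = make_postcopy_fault_handler(source.__getitem__, target)
handler(40)
print(sorted(target))
```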
In a virtual cluster environment, one of the main challenges is to ensure equal utilization of all available resources while avoiding overloading a subset of machines. We propose an efficient load balancing strategy using VM live migration, called DCbalance. Unlike previous work, DCbalance records the history of mappings to inform future placement decisions, and uses a workload-adaptive live migration algorithm to minimize VM downtime. We improve Xen's original live migration mechanism, implement the DCbalance technique, and conduct experimental studies. Our results reveal that DCbalance reduces the decision-generation time by 79%, the downtime by 73%, and the total migration time by 38% over competitors including the OSVD virtual machine load balancing mechanism and the DLB (Xen-based) dynamic load balancing algorithm.
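One way history could inform placement is sketched below, under stated assumptions: a target host is scored by its current load, discounted by how well it handled the same workload profile in past placements. The 0.2 weight and the scoring rule are illustrative, not DCbalance's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    load: float       # current normalized load, 0.0 - 1.0

def choose_target(profile: str, hosts: list, history: dict) -> Host:
    """Prefer lightly loaded hosts, discounted by past success with this
    workload profile (higher past score = better fit)."""
    def cost(host: Host) -> float:
        past_score = history.get((profile, host.name), 0.0)
        return host.load - 0.2 * past_score     # 0.2 weight is illustrative
    return min(hosts, key=cost)

hosts = [Host("pm1", 0.60), Host("pm2", 0.55)]
history = {("web", "pm1"): 0.9}     # pm1 handled web workloads well before
print(choose_target("web", hosts, history).name)   # pm1 wins despite load
```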
The dissertation's final contribution is a technique for VM live migration in Wide Area Networks (WANs), called FDM. In contrast to live migration in Local Area Networks (LANs), VM migration in WANs involves migrating disk data in addition to memory state, because the source and target machines do not share the same disk service. FDM is a fast and storage-adaptive migration mechanism that transmits both memory state and disk data with short downtime and total migration time. FDM uses the page cache to identify data that is duplicated between memory and disk, so as to avoid transmitting the same data unnecessarily. We implement FDM in Xen, targeting different disk formats including raw and QCOW2. Our experimental studies reveal that FDM reduces the downtime by as much as 87%, and the total migration time by as much as 58%, over competitors including pre-copy and post-copy disk migration mechanisms and the disk migration mechanism implemented in BlobSeer, a widely used large-scale distributed storage service. / Ph.D.
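The memory/disk deduplication idea can be sketched as below: disk blocks whose bytes already sit in the guest page cache are rebuilt on the target from the migrated memory instead of crossing the WAN. Hash-based matching and the block size are assumptions for illustration.

```python
import hashlib

def plan_disk_transfer(page_cache: list, disk_blocks: list) -> list:
    """Mark disk blocks whose content already exists in the page cache."""
    cached = {hashlib.sha1(p).digest() for p in page_cache}
    plan = []
    for lba, block in enumerate(disk_blocks):
        if hashlib.sha1(block).digest() in cached:
            plan.append((lba, "from-memory"))   # skip the duplicate bytes
        else:
            plan.append((lba, "send"))          # must travel over the WAN
    return plan

cache = [b"X" * 4096, b"Y" * 4096]
disk = [b"X" * 4096, b"Z" * 4096]
print(plan_disk_transfer(cache, disk))   # block 0 is rebuilt from memory
```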
32.
Design and Implementation of the VirtuOS Operating System. Nikolaev, Ruslan. 21 January 2014.
Most operating systems provide protection and isolation to user processes, but not to critical system components such as device drivers or other systems code. Consequently, failures in these components often lead to system failures. VirtuOS is an operating system that exploits a new method of decomposition to protect against such failures. VirtuOS exploits virtualization to isolate and protect vertical slices of existing OS kernels in separate service domains. Each service domain represents a partition of an existing kernel, which implements a subset of that kernel's functionality. Service domains directly service system calls from user processes. VirtuOS exploits an exceptionless model, avoiding the cost of a system call trap in many cases. We illustrate how to apply exceptionless system calls across virtualized domains.
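A toy model of the exceptionless pattern follows: a request is placed in a shared queue and serviced by a worker standing in for the service domain, so the calling path takes no trap. The queues, thread, and syscall names are illustrative; the real mechanism uses shared memory between virtualized domains, not Python queues.

```python
import queue
import threading

requests, replies = queue.Queue(), queue.Queue()

def service_domain_worker():
    """Stand-in for a service domain polling its shared request ring."""
    while True:
        syscall_nr, args = requests.get()
        replies.put(("result-of", syscall_nr, args))   # execute, post reply

threading.Thread(target=service_domain_worker, daemon=True).start()
requests.put(("read", ("fd=3", "count=4096")))    # enqueue instead of trapping
print(replies.get())                              # caller picks up the reply
```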
To demonstrate the viability of VirtuOS's approach, we implemented a prototype based on the Linux kernel and Xen hypervisor. We created and evaluated a network and a storage service domain. Our prototype retains compatibility with existing applications and can survive the failure of individual service domains, while outperforming alternative approaches such as isolated driver domains and even exceeding the performance of native Linux for some multithreaded workloads.
The evaluation of VirtuOS revealed costs due to decomposition, memory management, and communication, which necessitated a fine-grained analysis to understand their impact on the system's performance. The interaction of virtual machines with multiple underlying software and hardware layers in a virtualized environment makes this task difficult. Moreover, performance analysis tools commonly used in native environments were not available in virtualized environments. Our work addresses this problem to enable an in-depth performance analysis of VirtuOS. Our Perfctr-Xen framework provides capabilities for per-thread analysis with both accumulative event counts and interrupt-driven event sampling. Perfctr-Xen is a flexible and generic tool that supports different modes of virtualization and can be used for many applications outside of VirtuOS. / Ph.D.
33.
Avaliação de desempenho de plataformas de virtualização de redes / Performance Evaluation of Network Virtualization Platforms. Leopoldo Alexandre Freitas Mauricio. 27 August 2013.
The aim of this dissertation is to evaluate the performance of virtual routing environments built on x86 machines and on the network devices found in today's Internet. Among the most widely used virtualization platforms, we want to identify which one best meets the requirements of a virtual routing environment, so that the core of production networks can be programmed. The Xen and KVM virtualization platforms were installed on modern, high-capacity x86 servers and compared for efficiency, flexibility, and isolation between networks, which are the requirements for good performance of a virtual network. The test results show that, despite being a full virtualization platform, KVM outperforms Xen in packet forwarding and routing when VIRTIO is used. Furthermore, only Xen exhibited isolation problems between virtual networks. We also evaluate the effect of the NUMA architecture, common in modern x86 servers, on VM performance when large amounts of memory and many processor cores are allocated to the VMs. The results show that network input/output (I/O) performance can be compromised if the amounts of virtual memory and CPU allocated to a VM do not respect the size of the NUMA nodes present in the hardware. Finally, we study OpenFlow, which allows networks to be sliced across routers, switches, and x86 machines so that virtual routing environments with different forwarding logics can be created. We found that, when deployed with Xen and KVM, OpenFlow enables the migration of virtual networks between different physical nodes without interrupting data flows, and also increases the packet-forwarding performance of the virtual networks created. It was thus possible to program the network core to implement alternatives to the IP protocol.
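The NUMA finding suggests a simple sizing rule, sketched below with made-up node capacities (not measurements from this work): a VM's vCPU and memory allocation should fit inside a single NUMA node.

```python
def fits_numa_node(vm_vcpus: int, vm_mem_gb: int,
                   node_cpus: int = 8, node_mem_gb: int = 32) -> bool:
    """True if the VM's allocation fits inside one NUMA node."""
    return vm_vcpus <= node_cpus and vm_mem_gb <= node_mem_gb

assert fits_numa_node(4, 16)        # stays in one node: expected full speed
assert not fits_numa_node(12, 48)   # spans nodes: network I/O may degrade
```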
35.
Radium: Secure Policy Engine in Hypervisor. Shah, Tawfiq M. 08 1900.
The basis of today's security systems is the trust and confidence that the system will behave as expected and is in a known good trusted state. That trust is built from hardware and software elements that generate a chain of trust originating from a trusted known entity. Leveraging hardware, software, and a mandatory access control policy technology is needed to create a trusted measurement environment. Employing a control layer (hypervisor or microkernel) with the ability to enforce a fine-grained access control policy at hypercall granularity across multiple guest virtual domains can ensure that any malicious environment is contained. In my research, I propose the use of Radium's Asynchronous Root of Trust Measurement (ARTM) capability, incorporated with a secure mandatory access control policy engine, to mitigate the limitations of current hardware TPM solutions. By employing ARTM, we can leverage asynchronous boot, launch, and use, with the hypervisor proving its state and the integrity of the secure policy. My solution uses the Radium (Race-free on-demand integrity architecture) architecture, which allows a more detailed measurement of applications at run time with greater semantic knowledge of the measured environments. Radium's incorporation of a secure access control policy engine gives it the ability to limit or empower a virtual domain system. It can also enable the creation of a service-oriented model of guest virtual domains that can perform certain operations, such as introspecting other virtual domain systems to determine their integrity or system state and report it to a remote entity.
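A toy default-deny policy check at hypercall granularity, of the kind the abstract describes, might look as follows. The policy format, domain names, and hypercall names are illustrative, not Radium's actual engine.

```python
POLICY = {
    ("guest-web", "grant_table_op"):    "allow",
    ("guest-web", "domctl"):            "deny",   # no control-plane access
    ("inspector", "introspect_domain"): "allow",  # service-domain privilege
}

def check(domain: str, hypercall: str) -> bool:
    """Default-deny: anything not explicitly allowed is refused."""
    return POLICY.get((domain, hypercall), "deny") == "allow"

assert check("inspector", "introspect_domain")
assert not check("guest-web", "domctl")
```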
36.
A Performance Study of VM Live Migration over the WAN. Mohammad, Taha; Eati, Chandra Sekhar. January 2015.
Virtualization is the key technology that has given cloud computing platforms a new way for small and large enterprises to host their applications by renting the available resources. Live VM migration allows a virtual machine to be transferred from one host to another while the virtual machine is active and running. The main challenge in live migration over a WAN is maintaining network connectivity during and after the migration. We carried out live VM migration over the WAN, migrating VMs with memory states of different sizes, and present solutions based on Open vSwitch/VXLAN and Cisco GRE approaches. VXLAN provides the mobility support needed to maintain network connectivity between the client and the virtual machine. We set up an experimental testbed to measure the relevant performance metrics and analyzed the performance of live migration over VXLAN and GRE networks. Our experimental results show that network connectivity was maintained throughout the migration process, with negligible signaling overhead and minimal downtime. The variation in downtime as the applied network delay changed was larger than the variation observed when migrating VMs with different memory-state sizes. The total migration time showed a strong relationship with the size of the migrating VM's memory state.
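Downtime of this kind can be estimated from the client side, as in the minimal sketch below: probe a TCP service on the VM at a fixed interval and report the longest continuously unreachable gap. The host, port, and probing interval are placeholder assumptions, not the thesis's measurement method.

```python
import socket
import time

def measure_downtime(host="vm.example.net", port=22,
                     interval=0.1, probes=600) -> float:
    """Return the longest gap (seconds) the service was unreachable."""
    longest, gap_start = 0.0, None
    for _ in range(probes):
        try:
            socket.create_connection((host, port), timeout=interval).close()
            if gap_start is not None:          # service came back: close gap
                longest = max(longest, time.time() - gap_start)
                gap_start = None
        except OSError:
            if gap_start is None:              # service vanished: open a gap
                gap_start = time.time()
        time.sleep(interval)
    return longest
```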
37.
Virtualisering: en prestandajämförelse mellan fullständig- och parallell systemvirtualisering / Virtualization: A Performance Comparison Between Full System Virtualization and Paravirtualization. Lindberg, Magnus. January 2008.
Virtualization is an abstraction of the underlying physical hardware, which is transformed via software into a predetermined hardware structure; a virtual machine can thereby be decoupled from the hardware. Virtualization allows hardware to be partitioned into several separate virtual hardware instances, transparently to the operating systems in the virtual machines. Virtualization grew during the 1990s, and two virtualization technologies were developed: (i) full system virtualization and (ii) paravirtualization. Full system virtualization offers an abstraction that completely decouples the guest from the hardware; an operating system running in a virtual machine is unaware that it is virtualized, with the result that any operating system can be used. Paravirtualization uses a partial abstraction in which the operating system is modified so that the virtual machine is aware it has been virtualized, enabling performance improvements. The research question posed is which of these two technologies delivers the best performance over FTP. Experiments were conducted and showed that there is no difference between the technologies.
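An FTP comparison of this kind boils down to timing a download per setup, as in this minimal sketch; the host, credentials, and file name are placeholders, not the thesis's actual benchmark harness.

```python
from ftplib import FTP
import time

def ftp_throughput(host: str, user: str, password: str, remote_file: str) -> float:
    """Download one file over FTP and return throughput in bytes per second."""
    received = bytearray()
    ftp = FTP(host)
    ftp.login(user, password)
    start = time.time()
    ftp.retrbinary(f"RETR {remote_file}", received.extend)
    elapsed = time.time() - start
    ftp.quit()
    return len(received) / elapsed

# Run once per guest (full virtualization vs. paravirtualization) and compare.
```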
38.
Analysis of cloud testbeds using opensource solutions. Mohammed, Bashir; Kiran, Mariam. January 2015.
Cloud computing is attracting increasing attention both in academic research and in industrial initiatives. Despite this popularity, however, there is a lack of research on the suitability of software tools and parameters for creating and deploying cloud testbeds. Virtual environments can be set up with software tools that are available as open source, but work is still needed on which tools to use and on how to monitor parameters given the hardware resources available. This paper discusses the concepts of virtualization from a practical viewpoint, presenting an in-depth critical analysis of open source cloud implementation tools such as CloudStack, Eucalyptus, Nimbus, OpenStack, OpenNebula, and OpenIoT, to name a few. The paper analyzes these toolkits, their parameters, and their usability for researchers looking to deploy their own cloud testbeds. It also develops an experimental case study using OpenStack to construct and deploy a testbed with the resources available in the labs at the University of Bradford. The paper contributes to the theme of software setups and open source issues in developing and deploying private cloud testbeds.
39.
Performance of Disk I/O Operations during the Live Migration of a Virtual Machine over WAN. Vemulapalli, Revanth; Mada, Ravi Kumar. January 2014.
Virtualization is a technique that allows several virtual machines (VMs) to run on a single physical machine (PM) by adding a virtualization layer above the physical host's hardware. Many virtualization products allow a VM to be migrated from one PM to another without interrupting the services running on the VM. This is called live migration and offers many potential advantages, such as server consolidation, reduced energy consumption, disaster recovery, reliability, and efficient workflows such as "follow the sun". At present, the advantages of VM live migration are limited to Local Area Networks (LANs), as migrations over Wide Area Networks (WANs) offer lower performance due to IP address changes in the migrating VMs and large network latency. For scenarios that require such migrations, shared storage solutions like iSCSI (block storage) and NFS (file storage) are used to store the VM's disk, avoiding the high latencies associated with migrating disk state when private storage is used. When iSCSI or NFS is used, all disk I/O operations generated by the VM are encapsulated and carried to the shared storage over the IP network, so the underlying WAN latency affects the performance of applications issuing disk I/O from the VM. In this thesis, our objective was to determine the performance of shared and private storage when VMs are live migrated in networks with high latency, with WANs as the typical case. To achieve this objective, we used Iometer, a disk benchmarking tool, to investigate the I/O performance of iSCSI and NFS when used as shared storage for live migrating Xen VMs over emulated WANs. In addition, we configured the Distributed Replicated Block Device (DRBD) system to provide private storage for our VMs through incremental disk replication. We then studied the I/O performance of the private storage solution in the context of live disk migration and compared it to the performance of shared storage based on iSCSI and NFS. The results from our testbed indicate that the DRBD-based solution should be preferred over the considered shared storage solutions, because DRBD consumed less network bandwidth and had a lower maximum I/O response time.
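The maximum I/O response-time metric can be approximated with a simple synchronous-write probe, sketched below; the file path, write count, and block size are placeholder assumptions, not the thesis's Iometer configuration.

```python
import os
import time

def max_write_latency(path="/mnt/vmdisk/probe.bin",
                      writes=100, size=4096) -> float:
    """Time write+fsync pairs and return the worst-case latency in seconds."""
    worst = 0.0
    buf = os.urandom(size)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        for _ in range(writes):
            start = time.time()
            os.write(fd, buf)
            os.fsync(fd)                  # force the block down to storage
            worst = max(worst, time.time() - start)
    finally:
        os.close(fd)
    return worst   # compare across iSCSI, NFS, and DRBD backends
```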
40.
FairCPU: Uma Arquitetura para Provisionamento de Máquinas Virtuais Utilizando Características de Processamento / FairCPU: An Architecture for Provisioning Virtual Machines Using Processing Features. Paulo Antonio Leal Rego. 02 March 2012.
Fundação Cearense de Apoio ao Desenvolvimento Científico e Tecnológico (funding agency).
Resource scheduling is a key process in cloud computing platforms, which generally use virtual machines (VMs) as the scheduling unit. Virtualization techniques provide great flexibility, with the ability to instantiate multiple VMs on one physical machine (PM), migrate VMs between PMs, and dynamically scale a VM's resources. Techniques for consolidation and dynamic allocation of VMs have treated the impact of a VM's placement as independent of its location: it is generally accepted that a VM's performance will be the same regardless of the PM to which it is allocated. This assumption is reasonable for a homogeneous environment, where the PMs are identical and the VMs run the same operating system and applications. In a cloud computing environment, however, we expect a set of heterogeneous resources to be shared, where PMs vary both in their resource capacities and in their data affinities. The main objective of this work is to propose an architecture that standardizes the representation of processing power in terms of processing units (PUs), relying on CPU usage limiting to provide performance isolation and to keep a VM's processing power at the same level regardless of the underlying PM. The proposed solution accounts for the heterogeneity of the PMs present in the cloud infrastructure and provides scheduling policies based on PUs. The proposed architecture, called FairCPU, was implemented to work with the KVM and Xen hypervisors. As a case study, it was incorporated into a private cloud built with the OpenNebula middleware, where several experiments were conducted. The results demonstrate the efficiency of the FairCPU architecture in using PUs to reduce variability in VM performance, as well as in providing a new way to represent and manage the processing power of the infrastructure's physical and virtual machines.
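The PU normalization idea can be sketched as below: each machine's capacity is expressed in PUs, and a CPU cap is derived so that a VM's share stays constant on any host. The formula, and the idea of feeding the result to a cap-based scheduler such as Xen's credit scheduler, are assumptions for illustration rather than FairCPU's exact policy.

```python
def cpu_cap_percent(vm_pus: float, pm_pus_per_core: float, pm_cores: int) -> float:
    """Cap, in percent of one core, that grants the VM `vm_pus` worth of
    processing regardless of how fast the underlying PM's cores are."""
    total_pus = pm_pus_per_core * pm_cores
    share = vm_pus / total_pus
    return share * pm_cores * 100

# The same 2-PU VM gets a tighter cap on a host with faster cores,
# equalizing its effective processing power across heterogeneous PMs:
print(cpu_cap_percent(2.0, 1.0, 8))   # slower cores: 200.0 (two full cores)
print(cpu_cap_percent(2.0, 2.0, 8))   # faster cores: 100.0 (one core suffices)
```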