1 |
Shared-Memory Optimizations for Virtual Machines. Macdonell, A. Cameron. Unknown Date
No description available.
|
2 |
Análise do impacto do isolamento em ambientes virtuais / Analysis of the impact of isolation in virtual environments. SILVA, Luís Eduardo Tenório. 07 March 2016 (has links)
Previous issue date: 2016-03-07 / CNPq / The rise of virtualization changed how services are delivered over the Internet, enabling major concepts such as cloud computing. Over time, new technologies for virtualizing resources emerged and brought up questions of performance and isolation, among others. Analyzing the impact of misbehaving virtual abstractions can help the cloud administrator choose a specific virtualization technology that minimizes the impact on the whole environment. Today, isolation is a concern that motivates research into its impact on the quality of services delivered in a virtualized environment. Investigating whether such interference can be detected, so that actions can be taken to minimize the impact of poor isolation, is an activity that has been studied over the years. The emergence of several virtualization techniques also raised the question of which technique suits which case; some of these techniques have seen notable improvements in recent years, especially regarding isolation and resource control. In this context, this dissertation proposes a strategy adapted from the literature (combining distinct techniques) to observe possible signs of isolation breakage in virtual environments and to place services of a given nature on the most suitable virtualization technique, also examining the results obtained with each existing virtualization technique. To this end, it adopts a methodology that builds the various possible scenarios from a number of virtual infrastructures offering web services under different virtualization techniques, observing mainly the resources used by the virtual infrastructures and the quality of the service delivered. We conclude that, depending on the type of resource observed, the isolation strategies of a virtualization technique may or may not be effective.
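The detection idea studied here, watching for signs that one virtual infrastructure's behaviour leaks into another's service quality, can be illustrated with a small sketch. The correlation test, sample data and threshold below are illustrative assumptions, not the procedure actually used in the dissertation.

```python
# Sketch: flag possible isolation breakage by correlating a neighbour
# VM's CPU usage with the web service's response time. The data and the
# 0.8 threshold are hypothetical, not the dissertation's actual method.

def pearson(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def isolation_suspect(neighbour_cpu, response_ms, threshold=0.8):
    """True when service latency tracks a neighbour's CPU load."""
    return pearson(neighbour_cpu, response_ms) >= threshold

# Hypothetical samples taken at the same instants on both infrastructures.
cpu = [10, 25, 40, 55, 70, 85]     # neighbour VM CPU (%)
rt = [12, 14, 19, 25, 33, 41]      # service response time (ms)
print(isolation_suspect(cpu, rt))  # strongly correlated, so True
```

A real deployment would of course need many more samples and a significance test before acting on such a signal.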
|
3 |
Measuring And Modeling Of Open vSwitch Performance : Implementation in Docker. Harshini, Nekkanti. January 2016 (has links)
Network virtualization has become an important aspect of the Telecom industry. The need for efficient, scalable and reliable virtualized network functions is paramount to modern networking. Open vSwitch is a virtual switch that attempts to extend the usage of virtual switches to industry-grade performance levels on heterogeneous platforms. The aim of the thesis is to give an insight into the working of Open vSwitch: to evaluate the performance of Open vSwitch in virtualization scenarios such as KVM (second companion thesis)[1] and Docker; to investigate different scheduling techniques offered by the Open vSwitch software and supported by the Linux kernel, such as FIFO, SFQ, CODEL, FQCODEL, HTB and HFSC; and to compare the performance of Open vSwitch across these scenarios and scheduling capacities to determine the best scenario for optimum performance. The methodology of the thesis involved a physical model of the system used for real-time experimentation as well as quantitative analysis. Quantitative analysis of the obtained results paved the way for unbiased conclusions. Experimental analysis was required to measure metrics such as throughput, latency and jitter in order to grade the performance of Open vSwitch in the particular virtualization scenario. The results of the thesis must be considered in context with a second companion thesis[1]. Both theses aim at measuring the performance of Open vSwitch, but the chosen virtualization scenarios (Docker and KVM) differ; this thesis outlines the performance of Open vSwitch and Linux bridge in the Docker scenario. Various scheduling techniques were measured for network performance metrics across both Docker and KVM (second companion thesis), and it was observed that Docker performed better in terms of throughput, latency and jitter.
In the Docker scenario, among the scheduling algorithms measured, throughput was almost the same for all algorithms; latency showed slight variation, with FIFO having the least latency, as it is the simplest algorithm and the default qdisc. Jitter also varied across all scheduling algorithms. The conclusion of the thesis is that the virtualization layer on which Open vSwitch operates is one of the main factors in determining switching performance. The KVM and Docker scenarios use different virtualization techniques that incur different overheads, which in turn lead to different measurements; this difference appears across the packet scheduling techniques. Docker performs better than KVM for both bridges. In the Docker scenario, the Linux bridge performs better than Open vSwitch: throughput is almost constant, FIFO has the least latency of all scheduling algorithms, and jitter shows more variation across all scheduling algorithms.
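The latency and jitter comparisons described above can be sketched in a few lines. Here jitter is taken as the mean absolute difference between consecutive delay samples (in the spirit of RFC 3550), and the per-qdisc delay samples are invented placeholders rather than the thesis's measurements.

```python
# Sketch: grading qdiscs the way the thesis compares FIFO, SFQ, CODEL,
# FQCODEL, HTB and HFSC. Jitter is computed as the mean absolute
# difference between consecutive delay samples. Sample values invented.

def mean_latency(samples):
    return sum(samples) / len(samples)

def jitter(samples):
    diffs = [abs(b - a) for a, b in zip(samples, samples[1:])]
    return sum(diffs) / len(diffs)

measurements = {  # hypothetical one-way delays in microseconds
    "fifo": [105, 107, 104, 106],
    "codel": [120, 131, 118, 129],
    "htb": [115, 116, 117, 115],
}

best = min(measurements, key=lambda q: mean_latency(measurements[q]))
print(best)  # fifo has the lowest mean latency in this toy data set
```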
|
4 |
Measuring and Modeling of Open vSwitch Performance : Implementation in KVM environment. Pothuraju, Rohit. January 2016 (has links)
Network virtualization has become an important aspect of the Telecom industry. The need for efficient, scalable and reliable virtualized network functions is paramount to modern networking. Open vSwitch is a virtual switch that attempts to extend the usage of virtual switches to industry-grade performance levels on heterogeneous platforms. The aim of the thesis is to give an insight into the working of Open vSwitch: to evaluate the performance of Open vSwitch in virtualization scenarios such as KVM and Docker (from the second companion thesis)[1]; to investigate different scheduling techniques offered by the Open vSwitch software and supported by the Linux kernel, such as FIFO, SFQ, CODEL, FQCODEL, HTB and HFSC; and to compare the performance of Open vSwitch across these scenarios and scheduling capacities to determine the best scenario for optimum performance. The methodology of the thesis involved a physical model of the system used for real-time experimentation as well as quantitative analysis. Quantitative analysis of the obtained results paved the way for unbiased conclusions. Experimental analysis was required to measure metrics such as throughput, latency and jitter in order to grade the performance of Open vSwitch in the particular virtualization scenario. The results of this thesis must be considered in context with a second companion thesis[1]. Both theses aim at measuring and modeling the performance of Open vSwitch in NFV. However, the results of this thesis outline the performance of Open vSwitch and Linux bridge in the KVM virtualization scenario. Various scheduling techniques were measured for network performance metrics, and it was observed that Docker performed better in terms of throughput, latency and jitter. In the KVM scenario, the throughput test showed that all algorithms perform similarly for both Open vSwitch and Linux bridges.
In the round-trip latency tests, FIFO had the least round-trip latency, while CODEL and FQCODEL had the highest; HTB and HFSC performed similarly. In the jitter tests, HTB and HFSC had the highest average jitter in the UDP stream test, while CODEL and FQCODEL had the least jitter for both Open vSwitch and Linux bridges. The conclusion of the thesis is that the virtualization layer on which Open vSwitch operates is one of the main factors in determining switching performance. Docker performs better than KVM for both bridges. In the KVM scenario, irrespective of the scheduling algorithm considered, Open vSwitch performed better than the Linux bridge. HTB had the highest throughput and FIFO the least round-trip latency; CODEL and FQCODEL are efficient scheduling algorithms with low jitter measurements.
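A minimal sketch of how such per-qdisc results might be reduced to a "best scheduler per metric" summary, as in the conclusions above; the throughput, RTT and jitter figures are invented placeholders, not the measured data.

```python
# Sketch: summarising per-qdisc results into a best-per-metric table,
# mirroring the conclusion shape (HTB best throughput, FIFO least RTT,
# FQCODEL least jitter). Numbers are placeholders, not measured data.

results = {
    #           Mbit/s  rtt_ms  jitter_ms
    "fifo":     (940,   0.45,   0.09),
    "htb":      (948,   0.60,   0.15),
    "fqcodel":  (941,   0.70,   0.04),
}

def best_per_metric(res):
    names = list(res)
    return {
        "throughput": max(names, key=lambda n: res[n][0]),  # higher is better
        "latency": min(names, key=lambda n: res[n][1]),     # lower is better
        "jitter": min(names, key=lambda n: res[n][2]),      # lower is better
    }

print(best_per_metric(results))
# → {'throughput': 'htb', 'latency': 'fifo', 'jitter': 'fqcodel'}
```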
|
5 |
Live Migration of Virtual Machines in the Cloud : An Investigation by Measurements. Pasumarthy, Sarat Chandra. January 2015 (has links)
Cloud computing has grown in prevalence in recent years due to its concept of computing as a service, thereby allowing users to offload infrastructure management costs and tasks to a cloud provider. Cloud providers leverage server virtualization technology for efficient resource utilization, faster provisioning times, reduced energy consumption, etc. Cloud computing inherits a key feature of server virtualization: the live migration of virtual machines (VMs). This technique allows transferring a VM from one host to another with minimal service interruption. However, live migration is a complex process, and the cloud management software used by cloud providers can significantly influence the migration process. This thesis work aims to investigate the complex process of live migration performed by the hypervisor, as well as the additional steps involved when a cloud management software or platform is present, and to form a timeline of this collection of steps or phases. The work also aims to investigate the performance of these phases, in terms of time, when migrating VMs with different sizes and workloads. For this thesis, the Kernel-based Virtual Machine (KVM) hypervisor and the OpenStack cloud software have been considered. The methodology employed is experimental and quantitative; the essence of this work is investigation by passive network measurements. To elaborate, this thesis work performs migrations on physical test-beds and uses measurements to investigate and evaluate the migration process performed by the KVM hypervisor as well as the OpenStack platform deployed on KVM hypervisors. Experiments are designed and conducted based on the objectives to be met. The results of the work primarily include the timeline of the migration phases of both the KVM hypervisor and the OpenStack platform. Results also include the time taken by each migration phase as well as the total migration time and the VM downtime.
The results indicate that the total migration time, the downtime and a few of the phases increase with increasing CPU load and VM size, although some phases show no such trend. It was also observed that the transfer stage alone does not determine the total time; every phase has a significant influence on the migration process. The conclusion from this work is that although a cloud management software aids in managing the infrastructure, it has a notable impact on the migration process carried out by the hypervisor. Moreover, the migration phases and their proportions depend not only on the VM but on the physical environment as well. This thesis work focuses solely on the time factor of each phase; further evaluation of each phase with respect to its resource utilization could provide better insight into probable optimization opportunities.
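The timeline analysis described above boils down to turning phase boundaries into durations. A minimal sketch, with hypothetical phase names and timestamps rather than the thesis's measured values:

```python
# Sketch: turning timestamped phase boundaries (as recovered from passive
# network measurements) into per-phase durations, total migration time and
# downtime. Phase names and timestamps are hypothetical, not thesis data.

phases = [  # (phase name, start_s, end_s)
    ("pre-migration setup", 0.0, 1.2),
    ("iterative memory copy", 1.2, 9.7),
    ("stop-and-copy", 9.7, 10.1),  # VM paused: this is the downtime
    ("activation on target", 10.1, 10.6),
]

def durations(ph):
    return {name: round(end - start, 3) for name, start, end in ph}

def total_time(ph):
    return ph[-1][2] - ph[0][1]

def downtime(ph, paused_phase="stop-and-copy"):
    return durations(ph)[paused_phase]

print(durations(phases))
print(total_time(phases))  # 10.6
print(downtime(phases))    # 0.4
```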
|
6 |
Síťový storage pro účely virtualizace / Network storage for virtualizationKorbelář, Jakub January 2014 (has links)
The diploma thesis is focused on expansion of current KVM virtualization infrastructure with network storage in a web hosting company environment. The first part describes the basics of the network storage field, and the virtualization field as well. This is amended by a description of the current solution in the company, which is going to be expanded. The searching for suitable innovative solution is following, several variants are found, each of them is commented and their advantages and disadvantages are summarized. The realization of the selected solution is implemented, including the testing on the practical part.
|
7 |
Univerzální mobilní komunikační platforma pracující s technologií bluetooth / Universal mobile communication platform using the bluetooth technologySopko, Richard January 2009 (has links)
This master’s thesis is focused on field of communication technologies in mobile devices in personal WPAN type wireless networks. Work consists of three basic parts. First part provides overview of personal WPAN wireless networks and is specialized on Bluetooth technologies and its opportunities of communication between mobile devices. Second part deals with an opportunity of using programming language Java 2 Micro Edition in work with Bluetooth technology. Key point of this work is third part which includes scheme of conception of a communication platform and creating of application designed for mobile phones. Created application enables communication by means of changing files and written conversation of two or more people in real time by Bluetooth connection.
|
8 |
A Flattened Hierarchical Scheduler for Real-Time Virtual Machines. Drescher, Michael Stuart. 04 June 2015
The recent trend of migrating legacy computer systems to a virtualized, cloud-based environment has expanded to real-time systems. Unfortunately, modern hypervisors have no mechanism in place to guarantee the real-time performance of applications running on virtual machines. Past solutions to this problem rely on either spatial or temporal resource partitioning, both of which under-utilize the processing capacity of the host system. Paravirtualized solutions in which the guest communicates its real-time needs have been proposed, but they cannot support legacy operating systems. This thesis demonstrates the shortcomings of resource partitioning using temporally-isolated servers, presents an alternative solution to the scheduling problem called the KairosVM Flattening Scheduling Algorithm, and provides an implementation of the algorithm based on Linux and KVM. The algorithm is analyzed theoretically and an exact schedulability test for the algorithm is derived. Simulations show that the algorithm can schedule more than 90% of all randomly generated tasksets with a utilization less than 0.95. In comparison to the state-of-the-art server based approach, the KairosVM Flattening Scheduling Algorithm is able to schedule more than 20 times more tasksets with utilization of 0.95. Experimental results demonstrate that the Linux-based implementation is able to match the deadline satisfaction ratio of a state-of-the-art server-based approach when the taskset is schedulable using the state-of-the-art approach. When tasksets are unschedulable, the implementation is able to increase the deadline satisfaction ratio of Vanilla KVM by up to 400%. Furthermore, unlike paravirtualized solutions, the implementation supports legacy systems through the use of introspection. / Master of Science
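The taskset experiment described above can be sketched as follows. Note that this uses a plain utilization-bound admission check as a stand-in; the exact schedulability test derived for the KairosVM algorithm is not reproduced here, and the task-generation parameters are assumptions.

```python
# Sketch: the flavour of experiment behind "more than 90% of random
# tasksets with utilization below 0.95 are schedulable". A plain
# utilization bound stands in for the thesis's exact test; the task
# generation parameters are invented for illustration.
import random

def taskset_utilization(tasks):
    """tasks: list of (wcet, period) pairs."""
    return sum(c / t for c, t in tasks)

def admit(tasks, bound=0.95):
    return taskset_utilization(tasks) <= bound

random.seed(7)
trials = 1000
accepted = 0
for _ in range(trials):
    tasks = [(random.uniform(1, 10), random.uniform(40, 100)) for _ in range(5)]
    if admit(tasks):
        accepted += 1
print(accepted / trials)  # fraction admitted under the utilization bound
```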
|
9 |
Real-Time Hierarchical Scheduling of Virtualized Systems. Burns, Kevin Patrick. 17 October 2014
In industry there has been a large focus on system integration and server consolidation, even for real-time systems, leading to an interest in virtualization. However, many modern hypervisors do not inherently support the strict timing guarantees of real-time applications. Several challenges arise when trying to virtualize a real-time application; one key challenge is to maintain the guest's real-time guarantees, since in a typical virtualized environment there is a hierarchy of schedulers. Past solutions solve this issue with strict resource reservation models. These reservations are pessimistic, as they accommodate the worst-case execution time of each real-time task. We model real-time tasks using probabilistic execution times instead of worst-case execution times, which are difficult to calculate and are not representative of the actual execution times. In this thesis, we present a probabilistic hierarchical framework to schedule real-time virtual machines. Our framework reduces the number of CPUs reserved for each guest by up to 45%, while decreasing deadline satisfaction by only 2.7%. In addition, we introduce an introspection mechanism capable of gathering real-time characteristics from the guest systems and presenting them to the host scheduler. Evaluations show that our mechanism incurs up to 21x less overhead than bleeding-edge introspection techniques when tracing real-time events. / Master of Science
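The core idea, reserving for a high percentile of a probabilistic execution-time model instead of the worst case, can be sketched as follows; the distribution, percentile and sample count are illustrative assumptions, not the framework's actual model.

```python
# Sketch: why probabilistic execution times can cut reservations.
# Reserve for a high percentile of the observed execution-time
# distribution instead of the worst case. The Gaussian model and the
# 99th-percentile choice are illustrative, not the thesis's model.
import random

random.seed(42)
samples = [random.gauss(10.0, 1.5) for _ in range(10_000)]  # exec times (ms)
wcet = max(samples)  # a worst-case-style reservation

def percentile(xs, p):
    ordered = sorted(xs)
    return ordered[int(p * (len(ordered) - 1))]

budget = percentile(samples, 0.99)  # probabilistic reservation
saving = 1 - budget / wcet          # fraction of reservation saved
met = sum(s <= budget for s in samples) / len(samples)
print(round(saving, 3), round(met, 4))  # smaller budget, ~99% deadlines met
```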
|
10 |
Measurement and Analysis of Networking Performance in Virtualised Environments. Chauhan, Maneesh. January 2014 (has links)
Mobile cloud computing, having embraced ideas like computation offloading, mandates a low-latency, high-speed network to satisfy the quality-of-service and usability assurances of mobile applications. The networking performance of clouds based on Xen and VMware virtualization solutions has been extensively studied by researchers, although mostly with a focus on network throughput and bandwidth metrics. This work focuses on the measurement and analysis of the networking performance of VMs in a small, KVM-based data centre, emphasising the role of virtualization overheads in the host-VM latency and, eventually, in the overall latency experienced by remote clients. We also present some useful tools, such as Driftanalyser, VirtoCalc and Trotter, that we developed for carrying out specific measurements and analysis. Our work proves that an increase in a VM's CPU workload has direct implications for network round-trip times. We also show that virtualization overheads (VO) have a significant bearing on the end-to-end latency and can contribute up to 70% of the round-trip time between the host and the VM. Furthermore, we thoroughly study latency due to virtualization overheads as a networking performance metric and analyse the impact of CPU loads and networking workloads on it. We also analyse the resource-sharing patterns and their effects amongst VMs of different sizes on the same host. Finally, having observed a dependency between the network performance of a VM and the host CPU load, we suggest that in a KVM-based cloud installation, workload profiling and an optimum processor-pinning mechanism can be effectively utilised to regulate the network performance of the VMs. The findings from this research work are applicable to optimising latency-oriented VM provisioning in cloud data centres, which would benefit most latency-sensitive mobile cloud applications.
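The attribution of round-trip time to virtualization overheads can be sketched as a simple decomposition; the leg values below are hypothetical, chosen only to echo the "up to 70%" observation, and this is not the procedure implemented by the thesis tools.

```python
# Sketch: attributing end-to-end RTT to the host<->VM leg, where the
# virtualization overhead lives. The leg values are hypothetical,
# picked only to echo the thesis's "up to 70%" observation.

def overhead_share(client_host_rtt_us, host_vm_rtt_us):
    """Fraction of the end-to-end RTT spent on the host<->VM leg."""
    total = client_host_rtt_us + host_vm_rtt_us
    return host_vm_rtt_us / total

# Hypothetical averages under heavy VM CPU load: the host<->VM leg
# can dominate the path seen by a remote client.
share = overhead_share(client_host_rtt_us=120.0, host_vm_rtt_us=280.0)
print(round(share, 2))  # → 0.7
```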
|