31 |
On the Role of Performance Interference in Consolidated Environments. Rameshan, Navaneeth January 2016 (has links)
With the advent of resource-shared environments such as the Cloud, virtualization has become the de facto standard for server consolidation. While consolidation improves utilization, it causes performance interference between Virtual Machines (VMs) due to contention in shared resources such as CPU, Last Level Cache (LLC) and memory bandwidth. Over-provisioning resources for performance-sensitive applications can guarantee Quality of Service (QoS); however, it results in low machine utilization. Thus, assuring QoS for performance-sensitive applications while allowing co-location has been a challenging problem. In this thesis, we identify ways to mitigate performance interference without undue over-provisioning and also point out the need to model and account for performance interference to improve the reliability and accuracy of elastic scaling. The end goal of this research is to leverage these observations to provide efficient resource management that is both performance and cost aware. Our main contributions are threefold: first, we improve overall machine utilization by executing best-effort applications alongside latency-critical applications without violating their performance requirements. Our solution dynamically adapts to and exploits changing workload/phase behaviour to execute best-effort applications without causing excessive performance interference; second, we identify that certain performance metrics used for elastic scaling decisions may become unreliable if performance interference is not accounted for. By modelling performance interference, we show that these performance metrics become reliable in a multi-tenant environment; and third, we identify and demonstrate the impact of interference on the accuracy of elastic scaling and propose a solution to significantly minimise performance violations at a reduced cost.
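As a rough illustration of the first contribution, the co-location idea can be pictured as a feedback controller that grows best-effort work while a latency objective holds and backs it off when interference appears. The sketch below is only a conceptual outline; the latency probe and the quota knob are hypothetical callables, not the thesis's actual mechanism.

```python
# Conceptual sketch of an interference-aware co-location controller.
# The latency probe and the best-effort quota knob are hypothetical
# callables supplied by the environment, not the thesis's implementation.

import time
from typing import Callable

LATENCY_SLO_MS = 5.0            # assumed tail-latency target
MIN_QUOTA, MAX_QUOTA = 0.0, 1.0

def control_loop(read_tail_latency_ms: Callable[[], float],
                 set_besteffort_quota: Callable[[float], None],
                 step: float = 0.05, period_s: float = 1.0) -> None:
    """Grow best-effort work while the SLO holds, back off when it breaks."""
    quota = MIN_QUOTA
    while True:
        if read_tail_latency_ms() > LATENCY_SLO_MS:
            quota = max(MIN_QUOTA, quota - 2 * step)   # interference: back off fast
        else:
            quota = min(MAX_QUOTA, quota + step)       # headroom: grow gradually
        set_besteffort_quota(quota)
        time.sleep(period_s)
```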
|
32 |
Improving energy efficiency of virtualized datacenters. Nitu, Vlad-Tiberiu 28 September 2018 (has links) (PDF)
Many organizations nowadays are increasingly adopting the cloud computing approach. More specifically, as customers, these organizations outsource the management of their physical infrastructure to data centers (or cloud computing platforms). Energy consumption is a primary concern for datacenter (DC) management. Its cost represents about 80% of the total cost of ownership, and it is estimated that in 2020 the US DCs alone will spend about $13 billion on energy bills. Datacenter servers are generally manufactured in such a way that they achieve high energy efficiency at high utilizations; thus, for a low cost per computation, all datacenter servers should be pushed to as high a utilization as possible. In order to fight the historically low utilization, cloud computing adopted server virtualization, which allows a physical server to execute multiple virtual servers (called virtual machines) in an isolated way. With virtualization, the cloud provider can pack (consolidate) the entire set of virtual machines (VMs) onto a small set of physical servers and thereby reduce the number of active servers. Even so, datacenter servers rarely reach utilizations higher than 50%, which means that they operate with sets of long-term unused resources (called 'holes'). My first contribution is a cloud management system that dynamically splits/fuses VMs so that they can better fill the holes. This solution is effective only for elastic applications, i.e. applications that can be executed and reconfigured over an arbitrary number of VMs. However, datacenter resource fragmentation stems from a more fundamental problem: over time, cloud applications demand more and more memory, while physical servers provide more and more CPU. In today's datacenters the two resources are strongly coupled, since they are bound to the same physical server. My second contribution is a practical way to decouple the CPU-memory tuple that can simply be applied to a commodity server, so that the two resources can vary independently, depending on demand. My third and fourth contributions present practical systems that exploit the second contribution. The underutilization observed on physical servers also holds for virtual machines: it has been shown that VMs consume only a small fraction of their allocated resources because cloud customers are not able to correctly estimate the amount of resources their applications need. My third contribution is a system that estimates the memory consumption (i.e. the working set size) of a VM with low overhead and high accuracy. Thereby, we can consolidate VMs based on their working set size rather than their booked memory. The drawback of this approach, however, is the risk of memory starvation: if one or more VMs experience a sharp increase in memory demand, the physical server may run out of memory, which is undesirable because the cloud platform is then unable to provide the client with the booked memory. My fourth contribution is a system that allows a VM to use remote memory provided by another server in the rack, so that in the case of a peak memory demand the VM can allocate memory on a remote physical server.
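The working-set-based consolidation described in the third contribution can be pictured with a small admission check: pack VMs by their estimated working sets instead of their booked memory, while keeping a safety margin against the memory-starvation risk mentioned above. The data model and the 20% margin below are assumptions for illustration, not the thesis's system.

```python
# Illustrative sketch of working-set-based VM consolidation (assumed data
# model and thresholds; not the systems described in the thesis).

from dataclasses import dataclass

@dataclass
class VM:
    name: str
    booked_memory_gb: float   # memory the customer reserved
    working_set_gb: float     # estimated memory actually touched

def fits_on_host(vms: list[VM], host_memory_gb: float,
                 safety_margin: float = 0.2) -> bool:
    """Consolidate on estimated working sets, not booked memory,
    but keep a margin so a demand spike does not starve the host."""
    demand = sum(vm.working_set_gb for vm in vms)
    return demand <= host_memory_gb * (1.0 - safety_margin)

vms = [VM("web", booked_memory_gb=8, working_set_gb=2.5),
       VM("db", booked_memory_gb=16, working_set_gb=6.0),
       VM("batch", booked_memory_gb=8, working_set_gb=3.0)]

# Booked memory (32 GB) would not fit a 16 GB host, but the working
# sets (11.5 GB) do, with roughly 20% of the host kept free as headroom.
print(fits_on_host(vms, host_memory_gb=16))
```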
|
33 |
HelenOS jako Xen hypervisor / HelenOS as Xen hypervisor. Dolejš, Jan January 2012 (has links)
The aim of this master thesis is to create a prototype implementation of the interface of the Xen hypervisor within the HelenOS operating system. The target architecture of the prototype implementation is IA-32. The result of the thesis is a port of HelenOS which can be used to run a selected para-virtualized domain. The thesis contains a brief introduction to the methods of virtualization and describes the main differences between them. It also describes the parts of the architecture of the Xen hypervisor and the HelenOS operating system that are modified in the prototype implementation. The most important part of this thesis is the selection of the testing domain as well as the analysis and description of all changes required for the domain's operation.
|
34 |
Exploring Virtualization Techniques for Branch Outcome Prediction. Sadooghi-Alvandi, Maryam 20 December 2011 (has links)
Modern processors use branch prediction to predict branch outcomes, in order to fetch ahead in the instruction stream, increasing concurrency and performance. Larger predictor tables can improve prediction accuracy, but come at the cost of larger area and longer access delay.
This work introduces a new branch predictor design that increases the perceived predictor capacity without increasing its delay, by using a large virtual second-level table allocated in the second-level caches. Virtualization is applied to a state-of-the-art multi-table branch predictor. We evaluate the design using instruction count as a proxy for timing on a set of commercial workloads. For a predictor whose size is determined by access delay constraints rather than area, accuracy can be improved by 8.7%. Alternatively, the design can be used to achieve the same accuracy as a non-virtualized design while using 25% less dedicated storage.
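The following sketch illustrates, in a heavily simplified form, the two-level lookup idea behind predictor virtualization: a small dedicated first-level table backed by a much larger second-level table that would physically reside in the L2 cache. It uses plain 2-bit bimodal counters and Python dictionaries purely for illustration; the actual design virtualizes a state-of-the-art multi-table predictor in hardware.

```python
# Simplified sketch of a two-level (virtualized) branch predictor.
# Both levels are plain dictionaries and the predictor is a 2-bit
# bimodal counter, purely to illustrate the lookup/fallback flow.

L1_ENTRIES = 1024          # small, fast dedicated table
L2_ENTRIES = 64 * 1024     # large "virtual" table (would reside in the L2 cache)

l1_table: dict[int, int] = {}   # pc index -> 2-bit saturating counter
l2_table: dict[int, int] = {}

def predict(pc: int) -> bool:
    idx1 = pc % L1_ENTRIES
    if idx1 in l1_table:
        return l1_table[idx1] >= 2           # fast path: dedicated storage
    idx2 = pc % L2_ENTRIES
    counter = l2_table.get(idx2, 1)          # slower virtual second level
    l1_table[idx1] = counter                 # promote into the first level
    return counter >= 2

def update(pc: int, taken: bool) -> None:
    for table, size in ((l1_table, L1_ENTRIES), (l2_table, L2_ENTRIES)):
        idx = pc % size
        counter = table.get(idx, 1)
        table[idx] = min(3, counter + 1) if taken else max(0, counter - 1)
```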
|
36 |
Topology-Awareness and Re-optimization Mechanism for Virtual Network Embedding. Butt, Nabeel 06 January 2010 (has links)
Embedding of virtual network (VN) requests on top of a shared physical network poses an intriguing combination of theoretical and practical challenges. Two major problems with the state-of-the-art VN embedding algorithms are their indifference to the underlying substrate topology and their lack of re-optimization mechanisms for already embedded VN requests. We argue that topology-aware embedding together with re-optimization mechanisms can improve the performance of previous VN embedding algorithms in terms of acceptance ratio and load balancing. The major contributions of this thesis are twofold: (1) we present a mechanism to differentiate among resources based on their importance in the substrate topology, and (2) we propose a set of algorithms for re-optimizing and re-embedding initially rejected VN requests after fixing their bottleneck requirements. Through extensive simulations, we show that our techniques not only improve the acceptance ratio but also provide the added benefit of balancing load better than previous proposals. The metrics we use to validate our techniques are improvement in acceptance ratio, revenue-cost ratio, incurred cost, and distribution of utilization.
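A minimal sketch of the topology-aware idea, assuming a simple importance score (residual node CPU weighted by the bandwidth of attached links) as a stand-in for the thesis's actual differentiation mechanism:

```python
# Illustrative sketch of a topology-aware node ranking for VN embedding.
# The importance metric (residual CPU x attached residual bandwidth) is an
# assumed stand-in for the thesis's actual resource differentiation.

import networkx as nx

def node_importance(substrate: nx.Graph) -> dict:
    """Rank substrate nodes so embedding prefers well-connected,
    well-provisioned nodes."""
    scores = {}
    for node, data in substrate.nodes(data=True):
        link_bw = sum(substrate.edges[node, nbr]["bandwidth"]
                      for nbr in substrate.neighbors(node))
        scores[node] = data["cpu"] * link_bw
    return scores

g = nx.Graph()
g.add_node("a", cpu=10)
g.add_node("b", cpu=4)
g.add_node("c", cpu=8)
g.add_edge("a", "b", bandwidth=100)
g.add_edge("b", "c", bandwidth=50)
g.add_edge("a", "c", bandwidth=80)

# Map virtual nodes onto substrate nodes in decreasing order of importance.
scores = node_importance(g)
print(sorted(scores, key=scores.get, reverse=True))   # ['a', 'c', 'b']
```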
|
38 |
Virtual platforms: System support to enrich the functionality of end client devices. Jang, Minsung 21 September 2015 (has links)
Client devices operating at the edge of the Internet, in homes, cars, offices, and elsewhere, are highly heterogeneous in terms of their hardware configurations, form factors, and capabilities, ranging from small sensors to wearable and mobile devices, to stationary ones like smart TVs and desktop machines. With recent and future advances in wireless networking allowing all such devices to interact with each other and with the cloud, it becomes possible to combine and augment the capabilities of individual devices via services running at the edge - in edge clouds - and/or via services running in remote datacenters.
The virtual platform approach to combining and enhancing such devices developed in this research makes possible the creation of innovative end user services, using low-latency communications with nearby devices to create for each end user exactly the platform needed for current tasks, guided by permissions and policies controlled by remote, cloud-resident social network services (SNS). To end users, virtual platforms operate beyond the limitations of individual devices, as natural extensions of those devices that offer improved functionality and performance, with ease-of-use provided by cloud-level global context and knowledge.
|
39 |
Performance Evaluation of Concurrent Multipath Transmission : Measurements and Analysis. Tedla, Sukesh Kumar January 2015 (has links)
Context: Data transmission mechanisms in multi-homed networks have gained importance in the past few years because of their potential. The concurrent multipath transmission (CMT) technique uses the available network interfaces for transmission by pooling multiple paths together. It allows transport mechanisms to work independently of the underlying technology, which resembles the concept of Transport Virtualization (TV). As a result, TV plays a vital role in the development of Future Internet Architectures (FIA). Leading commercial platforms like iOS and Android have implemented such mechanisms in their devices. Multipath TCP and CMT-SCTP are protocols under development that support this feature. The implementation and evaluation of CMT in real time is complex because of challenges like path binding, out-of-order packet delivery, packet reordering and end-to-end delay. Objectives: The main objective of this thesis is to identify the possibilities of implementing CMT in real time using multiple access technologies, and to evaluate the performance of transmission by measurements and analysis under different scenarios. Methods: To fulfill the objectives of the thesis, different methods are adopted. The development of the CMT scenario is based on a spiral methodology where each spiral addresses different objectives. The sub-stages in a spiral are mainly implementation, observations, decisions and modifications. In order to implement and identify the possibilities of CMT in real time, an in-depth literature study is performed beforehand. Results: The throughput of data transmission is only slightly affected by varying the total number of TCP connections. Across the different cases it is observed that varying the number of efficient paths in transmission has a significant impact on throughput. Conclusion: From the experimental methodology of this work it can be observed that CMT can be implemented in real time using off-the-shelf components. Based on the experimental results, it can be concluded that transmission throughput is affected by increasing the number of paths, while the total number of TCP connections during transmission has little impact on throughput.
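To make the path-pooling idea concrete, the sketch below stripes an outgoing byte stream across several TCP connections, each bound to a different local interface. The addresses are placeholders and the round-robin scheduler ignores per-path congestion and reordering, so this is only a conceptual outline of CMT rather than the thesis's testbed.

```python
# Illustrative sketch of concurrent multipath transmission over plain TCP:
# the payload is striped across sockets, each bound to a different local
# interface. Addresses are placeholders, not the thesis's testbed setup.

import socket

LOCAL_IFACE_ADDRS = ["192.0.2.10", "198.51.100.10"]   # e.g. Wi-Fi and LTE
SERVER = ("203.0.113.5", 5001)                        # placeholder receiver
CHUNK = 64 * 1024

def open_paths() -> list:
    """One TCP connection per local interface address."""
    return [socket.create_connection(SERVER, source_address=(addr, 0))
            for addr in LOCAL_IFACE_ADDRS]

def send_striped(data: bytes) -> None:
    """Round-robin the chunks over all paths; a real CMT scheduler would
    also handle per-path congestion, reordering and retransmission."""
    paths = open_paths()
    try:
        for i in range(0, len(data), CHUNK):
            paths[(i // CHUNK) % len(paths)].sendall(data[i:i + CHUNK])
    finally:
        for s in paths:
            s.close()
```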
|
40 |
Performance evaluation of Linux Bridge and OVS in Xen. Singh, Jaswinder January 2015 (has links)
Virtualization is the key technology that has provided smarter and easier ways of effectively utilizing the resources managed by the hypervisor. Virtualization allows multiple operating systems (OS) to run on a single piece of hardware; the hardware resources are allocated to virtual machines (VMs) by the hypervisor. It is important to know how the performance of the virtual switches used by the hypervisor for network communication affects network traffic. The performance of Linux Bridge (LB) and Open vSwitch (OVS) is investigated in this study. The method used in this research is experimentation. Two different scenarios are used to benchmark the performance of LB and OVS in virtual and non-virtual environments, with bitrate as the performance metric. The results from the experimental runs contain the ingress and egress bitrates of LB and OVS in the virtual and non-virtual environments, as well as the ingress and egress bitrate values from scenarios with different amounts of memory and numbers of CPU cores in the virtual environment. The results reported in this thesis come from multiple experiment configurations. From the results it can be concluded that LB and OVS have almost the same performance in the non-virtual environment. There are only small differences in the ingress and egress bitrates of the two virtual switches.
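A measurement harness of the kind used for such a bitrate benchmark could look like the sketch below, which assumes an iperf3 server is already running on the target VM and relies on iperf3's JSON report layout; the hostname, run count and duration are placeholders rather than the thesis's actual setup.

```python
# Illustrative bitrate-measurement harness for comparing LB and OVS.
# Assumes an iperf3 server is listening on the target VM; hostname,
# run count and duration are placeholders for the thesis's setup.

import json
import statistics
import subprocess

TARGET = "vm-under-test"   # placeholder hostname of the receiving VM
RUNS = 5
DURATION_S = 30

def measure_bitrate() -> float:
    """Run one iperf3 TCP test and return the received bitrate in Mbit/s."""
    out = subprocess.run(
        ["iperf3", "-c", TARGET, "-t", str(DURATION_S), "-J"],
        capture_output=True, check=True, text=True,
    )
    report = json.loads(out.stdout)
    return report["end"]["sum_received"]["bits_per_second"] / 1e6

if __name__ == "__main__":
    samples = [measure_bitrate() for _ in range(RUNS)]
    print(f"mean {statistics.mean(samples):.1f} Mbit/s, "
          f"stdev {statistics.stdev(samples):.1f} Mbit/s")
```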
|