11. Designing RDMA-based Efficient Communication for GPU Remoting. Bhandare, Shreya Amit, 24 August 2023.
The use of General Purpose Graphics Processing Units (GPGPUs) has become crucial for accelerating high-performance applications. However, the procurement, setup, and maintenance of GPUs can be costly, and their continuous energy consumption poses additional challenges. Moreover, many applications exhibit suboptimal GPU utilization. To address these concerns, GPU virtualization techniques have been proposed. Among them, GPU Remoting stands out as a promising technology that enables applications to transparently harness the computational capabilities of remote GPUs. GVirtuS, a GPU Remoting software, facilitates transparent and hypervisor-independent access to GPGPUs from within virtual machines. This research focuses on the middleware communication layer implemented in GVirtuS and presents a comprehensive redesign that leverages Remote Direct Memory Access (RDMA) technology. Experimental evaluations, conducted using a matrix multiplication application, demonstrate that the newly proposed protocol reduces execution time by approximately 50% for data sizes ranging from 1 to 16 MB, and by around 12% for sizes ranging from 500 MB up to 1 GB. These findings highlight the significant performance improvements attained through the redesign of the communication layer in GVirtuS, showcasing its potential for enhancing GPU Remoting efficiency.

/ Master of Science / General Purpose Graphics Processing Units (GPGPUs) have become essential tools for accelerating high-performance applications. However, the acquisition and maintenance of GPUs can be expensive, and their continuous energy consumption adds to the overall costs. Additionally, many applications underutilize the full potential of GPUs. To tackle these challenges, researchers have proposed GPU virtualization techniques. One such promising approach is GPU Remoting, which enables applications to seamlessly utilize GPUs remotely. GVirtuS, a GPU Remoting software, allows virtual machines to access GPGPUs in a manner that is transparent to, and independent of, the underlying system. This study focuses on enhancing the communication layer in GVirtuS, which facilitates efficient interaction between virtual machines and GPUs. By leveraging a technology called Remote Direct Memory Access (RDMA), we achieved significant improvements in performance. Evaluations using a matrix multiplication application showed a reduction of approximately 50% in execution time for small data sizes (1-16 MB) and around 12% for larger sizes (500-800 MB). These findings highlight the potential of our redesign to enhance GPU virtualization, leading to better performance and cost-efficiency across a variety of applications.
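To make the RDMA redesign concrete, the following is a minimal, illustrative libibverbs sketch of the kind of one-sided RDMA write such a communication layer could use to move an argument buffer to the remote GPU server. It is not GVirtuS code: the function name and parameters are hypothetical, and queue-pair setup and rkey exchange are assumed to happen in an omitted connection phase.

```c
/* Minimal sketch (not GVirtuS code): registering a buffer and posting a
 * one-sided RDMA write with libibverbs. Queue-pair setup and the exchange
 * of remote_addr/rkey are omitted; qp and pd are assumed to come from
 * that (hypothetical) setup phase. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stddef.h>

int rdma_write_buffer(struct ibv_qp *qp, struct ibv_pd *pd,
                      void *buf, size_t len,
                      uint64_t remote_addr, uint32_t rkey)
{
    /* Pin and register the local buffer so the NIC can DMA from it. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    /* One-sided write: the remote CPU is not involved in the transfer. */
    struct ibv_send_wr wr = {0}, *bad = NULL;
    wr.opcode              = IBV_WR_RDMA_WRITE;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    int rc = ibv_post_send(qp, &wr, &bad);
    /* A real implementation would poll the completion queue before
     * deregistering the region; both are elided here for brevity. */
    return rc;
}
```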
12. Optimizing Boot Times and Enhancing Binary Compatibility for Unikernels. Chiba, Daniel Juzer, 25 June 2018.
Unikernels are lightweight, single-purpose virtual machines designed for the cloud. They provide enhanced security, minimal resource utilisation, fast boot times, and the ability to optimize performance for the target application. Despite their numerous advantages, unikernels face significant barriers to widespread adoption. We identify two such obstacles: unscalable boot procedures in hypervisors, and the difficulty of porting native applications to unikernel models. This work presents a solution to the first based on the popular Xen hypervisor, demonstrating a significant performance benefit when running a large number of guest VMs. The HermiTux unikernel aims to overcome the second obstacle by providing the ability to run unmodified binaries as unikernels. This work extends HermiTux, enabling it to retain some of the important advantages of unikernels, such as fast system calls and modularity.

/ MS / Cloud computing provides economic benefits to users by allowing them to pay only for the resources they use. Traditional virtual machines, so far the mainstay of cloud computing, come with a large number of features that are unnecessary for most cloud applications. Unikernels are specialised virtual machines compiled with only the features required to run the target application on top of a hypervisor. They have reduced memory requirements, short boot times, fast system calls, enhanced security, and greater customizability. Despite these advantages, unikernels have not gained significant traction in industry. One reason is that existing hypervisors were not designed with unikernels in mind. Specifically, we show that for the Xen hypervisor, boot times rise exponentially with the number of VMs running on the system. The small size of unikernels allows us to run a much larger number of guest VMs than was previously possible, but these rising boot times present a major bottleneck. This thesis analyses the cause of this overhead and presents a solution that leads to a 4x reduction in the overall time required to boot 500 unikernels at once. Another reason for the slow adoption of unikernels is the difficulty involved in porting legacy applications to unikernel models. The HermiTux unikernel aims to remove this effort by allowing users to run unmodified, statically compiled Linux executables. In doing so, however, it loses the ability to modularise the unikernel for the application concerned, and reintroduces a major source of overhead present in regular applications: system calls. This thesis presents techniques based on binary analysis and binary rewriting that enable us to regain these advantages of unikernels in HermiTux.
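To illustrate the binary-rewriting idea (a sketch of the general technique, not HermiTux's actual rewriter): on x86-64, system calls compile to the two-byte `syscall` opcode 0x0F 0x05, and a rewriter can locate these sites and patch each into a direct call to the unikernel's handler, turning a privilege transition into a plain function call.

```c
/* Minimal sketch (not HermiTux's rewriter): scanning x86-64 code for the
 * two-byte `syscall` opcode (0x0F 0x05), the kind of site a binary
 * rewriter would patch into a direct call to an in-unikernel handler. */
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* Report candidate syscall sites in a code region. A real rewriter must
 * disassemble properly: a raw byte scan can match data or the middle of
 * a longer instruction. */
size_t find_syscall_sites(const uint8_t *code, size_t len)
{
    size_t count = 0;
    for (size_t i = 0; i + 1 < len; i++) {
        if (code[i] == 0x0F && code[i + 1] == 0x05) {
            printf("candidate syscall at offset %zu\n", i);
            count++;
        }
    }
    return count;
}

int main(void)
{
    /* Hypothetical code bytes: mov eax, 39 (getpid on Linux); syscall */
    const uint8_t code[] = { 0xB8, 0x27, 0x00, 0x00, 0x00, 0x0F, 0x05 };
    printf("%zu site(s) found\n", find_syscall_sites(code, sizeof code));
    return 0;
}
```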
13. Virtualization Security Threat Forensic and Environment Safeguarding. Zahedi, Saed, January 2014.
The advent of virtualization technologies has transformed the IT infrastructure, and organizations are migrating to virtual platforms. Virtualization is also the foundation for cloud platform services. Beyond agility and flexibility, virtualization is known to add security to the infrastructure; however, the security aspects of virtualization are often overlooked. Attacks on the virtualization hypervisor and its administration components are attractive to adversaries, so threats to virtualization must be rigorously scrutinized to identify common breaches and understand what attackers find most attractive. This thesis surveys the current state of perimeter and operational threats and provides a taxonomy of virtualization security threats. Common attacks are investigated using vulnerability databases, and the distribution of virtualization software vulnerabilities, mapped onto the taxonomy, is visualized. Well-known industry best practices and standards are introduced, and the key features of each are presented for safeguarding virtualization environments. Finally, other possible approaches for investigating threat severity with automated systems are discussed.
14. Analysis of Resource Isolation and Resource Management in Network Virtualization. Lindholm, Rickard, January 2016.
Context. Virtualized networks are considered a major technological advancement, offering many functional benefits over today's dedicated networking elements. Virtualization allows network designers to separate networks and adapt resources to the actual loads, in other words, load balancing. Virtual networks would also enable minimal downtime for deploying updates and similar tasks: a virtual machine can be prepared with the new software, properly tested, migrated, and then linked in. Once this technology is proven efficient, or is evaluated and adapted to address its existing flaws, virtualized networks will take over the tasks of today's dedicated networking elements. But the technology still has unknown behaviors and effects, for example how the scheduler or hypervisor handles the virtual separation, since the virtual machines share the same physical transmission resources.

Objectives. The experiments in this thesis aim to reveal the effects of virtualization and how it performs under stress, and thereby to increase knowledge about the efficiency of network virtualization. The experiments are conducted by creating scripts, using existing programs and systems, adding different loads, and measuring the effects; everything is documented so that other students and researchers can benefit from this work.

Methods. Five methodologies are applied: experimental validation, statistical comparative analysis, resource sharing, control theory, and literature review. Two systems are compared against previous research by evaluating and analyzing the statistical results. As mentioned earlier, the investigation focuses on how the scheduler executes resource sharing under stress. The first experiment, the control test, sends a 5 Mbit/s UDP stream through the system under test without any interference, timestamping packets at measurement points on both the ingress and the egress; the second experiment adds an interfering 5 Mbit/s UDP load on the same system under test. Since the system is complex, a substantial literature review was done, mostly to gain an understanding and an overview of its different parts and to avoid some obstacles.

Results. The statistical comparative analysis produced two graphs and two tables containing the coefficient of variation (CoV) for the two experiments. The control test shows a fairly even distribution over the time intervals, with a CoV difference on the order of 10⁻³ that increases somewhat over the larger time intervals. The second experiment, with two virtual machines and an interfering packet stream, is more concentrated in the 0.0025 s and 0.005 s intervals, with a larger difference on the order of 10⁻², showing signs of a bottleneck in the system.

Conclusions. Because the experiments and the statistical processing of the data took longer than expected, the system was not redeployed with Open Virtual Switch in place of Linux Bridge, so there are no further experiments to compare against. However, related work indicates that the difference between Open Virtual Switch and Linux Bridge is small when no load is introduced; this is also confirmed on the Open Virtual Switch website, which states that Open Virtual Switch uses the same base as Linux Bridge. Linux Bridge performs according to expectations: it is a simple yet powerful tool, and the results confirm previous research claiming that there are bottlenecks in the system. The pre-set validity requirement for this experiment was that the CoV difference must be greater than 10⁻⁵; the measured difference was on the order of 10⁻², which supports the theory that there are bottlenecks in the system. In the future it would be interesting to examine the effects of different hypervisors, virtualization techniques, packet generators, and so on. One company that has taken countermeasures is Intel, whose DPDK confronts these efficiency problems by tailoring the scheduler to the specific tasks. The downside of Intel's DPDK is that it ties the user to Intel processors, removing one of the most important benefits of virtualization: independence. Intel has, however, tried to keep DPDK as independent as possible by maintaining it as open source.
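For reference, the coefficient of variation used in the analysis is simply the standard deviation divided by the mean. Below is a minimal sketch of that computation, with hypothetical latency samples standing in for the thesis's ingress/egress timestamp differences:

```c
/* Minimal sketch (not from the thesis): computing the coefficient of
 * variation (CoV) of packet latencies from ingress/egress timestamps. */
#include <math.h>
#include <stdio.h>

/* CoV = standard deviation / mean; dimensionless, so latency series
 * measured under different loads can be compared directly. */
double coefficient_of_variation(const double *x, int n)
{
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < n; i++)
        mean += x[i];
    mean /= n;
    for (int i = 0; i < n; i++)
        var += (x[i] - mean) * (x[i] - mean);
    var /= n - 1;              /* sample variance */
    return sqrt(var) / mean;
}

int main(void)
{
    /* Hypothetical latencies (seconds): egress minus ingress timestamp
     * for packets of the 5 Mbit/s UDP stream. */
    double latency[] = { 0.0024, 0.0026, 0.0025, 0.0049, 0.0026, 0.0051 };
    int n = sizeof latency / sizeof latency[0];
    printf("CoV = %.4f\n", coefficient_of_variation(latency, n));
    return 0;
}
```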
15. Performance Comparison of Cassandra in LXC and Bare Metal: Container Virtualization Case Study. Thiruvallur Vangeepuram, Reventh, January 2016.
Big data is a developing term that describes any large amount of structured and unstructured data that has the potential to be mined for information. Storing such large amounts of data requires cloud storage systems, which are developed to keep the data accessible and available to users over a network. Storing big data also requires new platforms; some of the popular ones are MongoDB, Cassandra, and Hadoop. This thesis uses the Cassandra database system because it is distributed and open source. Cassandra's architecture is a masterless ring design that is easy to set up and maintain. Apache Cassandra is a highly scalable distributed database designed to handle big data management with linear scalability and seamless multiple data center deployment. It is a NoSQL database system that allows schema-free tables, so a data item can have a variable set of columns, unlike in relational databases. Cassandra provides high scalability with no single point of failure. Over the past few years, container-based virtualization has been evolving rapidly; this thesis focuses on LXC. Linux Containers (LXC) is an operating-system-level virtualization method for running multiple isolated Linux systems on a single control host. It does not resemble a virtual machine, but provides a virtual environment with its own CPU, memory, and network space, along with a resource control mechanism. The goal of this thesis is to analyze the performance of the Apache Cassandra database on bare metal and in Linux Containers (LXC). A three-node Cassandra cluster was created on both bare metal and in Linux Containers, with one node assumed as the seed, and the Cassandra stress utility tool was used to load the cluster. Linux Containers were deployed on all the servers, and port forwarding was the technique used to enable communication between the Cassandra nodes running inside LXC. The performance metrics that determine the performance of the Cassandra cluster were selected accordingly, and the network configuration parameters were tuned to Cassandra's behavior; once Cassandra ran with the required configuration, cluster performance was analyzed under different write, read, and mixed load operations and compared with the performance of the cluster on bare metal. The results present a quantitative and statistical analysis of performance metrics such as CPU utilization, disk throughput, and latency for the Cassandra cluster on both bare metal and in Linux Containers. The physical resources utilized by the Cassandra database on native bare metal and in Linux Containers (LXC) are similar. According to the results, CPU utilization is higher for the Cassandra database in Linux Containers. Disk throughput is also higher in Linux Containers, except in the case of the 66% write load operation. Bare metal has lower latency than Linux Containers in all scenarios.
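As an illustration of one of the metrics compared, the sketch below samples overall CPU utilization from /proc/stat on Linux. It is not the thesis's measurement tooling, just a minimal example of how such a figure can be obtained the same way on bare metal and inside a container:

```c
/* Minimal sketch (not from the thesis): sampling aggregate CPU
 * utilization from /proc/stat over a one-second interval. */
#include <stdio.h>
#include <unistd.h>

/* Read the jiffy counters from the first "cpu" line of /proc/stat. */
static int read_cpu(unsigned long long *idle, unsigned long long *total)
{
    unsigned long long v[10] = {0};
    FILE *f = fopen("/proc/stat", "r");
    if (!f) return -1;
    int n = fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu %llu %llu %llu",
                   &v[0], &v[1], &v[2], &v[3], &v[4],
                   &v[5], &v[6], &v[7], &v[8], &v[9]);
    fclose(f);
    if (n < 4) return -1;
    *idle = v[3] + v[4];                    /* idle + iowait */
    *total = 0;
    for (int i = 0; i < 10; i++) *total += v[i];
    return 0;
}

int main(void)
{
    unsigned long long i0, t0, i1, t1;
    if (read_cpu(&i0, &t0)) return 1;
    sleep(1);                               /* sample interval */
    if (read_cpu(&i1, &t1)) return 1;
    double busy = 1.0 - (double)(i1 - i0) / (double)(t1 - t0);
    printf("CPU utilization: %.1f%%\n", 100.0 * busy);
    return 0;
}
```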
16. APPLYING LEAN PRINCIPLES FOR PERFORMANCE ORIENTED SERVICE DESIGN OF VIRTUAL NETWORK FUNCTIONS FOR NFV INFRASTRUCTURE: Roles and Relationships. Kolluri, Saiphani Krishna Priyanka Kolluri, January 2016.
Context. Network Function Virtualization (NFV) was recently proposed by the European Telecommunications Standards Institute (ETSI) to improve network service flexibility by virtualizing the network services and applications that run on hardware. To virtualize network functions, the software is decoupled from the underlying physical hardware. NFV aims to transform industries by reducing capital investment in hardware through the use of commercial off-the-shelf (COTS) hardware, and it enables rapid innovation in telecom services through software-based service deployment. Objectives. This thesis investigates how business organizations function and which roles are involved in defining a service relationship model. It also aims to define such a model and validate it via a proof of concept (PoC) that uses network function virtualization as a service. Finally, lean principles are applied to the defined service relationship model to reduce waste and to investigate how lean helps the model prove itself performance-service-oriented. Methods. The essence of this work is to make a business organization lean by investigating its actions and applying lean principles. The work builds on a review of papers from IEEE, TMF, IETF, and Ericsson, and results in the modelling of a PoC by following a requirement analysis methodology and applying lean principles to eliminate processes that do not add value. Results. The work produces a full-fledged service relationship model comprising three service levels, with roles that fit the requirement specifications of an NFV infrastructure. The results also show the functionalities of the service levels and the relationships between the roles. The services that need to be standardized are defined, along with a syntax for describing network functions. Lean principles benefit the service relationship model by reducing waste factors, thereby yielding a PoC that is performance-service-oriented. Conclusions. We conclude that the roles defined fit the designed service relationship model, and that the model can sustain the flow of service by standardizing the sub-services and reducing waste as interpreted through lean principles; further proof of the model in full-scale industry trials is needed. The ways of describing network function syntax follow lean principles, which are essential for sub-service standardization. The PoC thus defined can serve as an assurance for the NFV infrastructure.
17. APPLYING LEAN PRINCIPLES FOR PERFORMANCE ORIENTED SERVICE DESIGN OF VIRTUAL NETWORK FUNCTIONS FOR NFV INFRASTRUCTURE: Concepts of Lean. Adapa, Sasank Sai Sujan, January 2016.
Context. Network Function Virtualization (NFV) was recently proposed by the European Telecommunications Standards Institute (ETSI) to improve network service flexibility by virtualizing the network services and applications that run on hardware. To virtualize network functions, the software is decoupled from the underlying physical hardware. NFV aims to transform industries by reducing capital investment in hardware through the use of commercial off-the-shelf (COTS) hardware, and it enables rapid innovation in telecom services through software-based service deployment. Objectives. This thesis investigates how business organizations function and which roles are involved in defining a service relationship model. It also aims to define such a model and validate it via a proof of concept (PoC) that uses network function virtualization as a service. Finally, lean principles are applied to the defined service relationship model to reduce waste and to investigate how lean helps the model prove itself performance-service-oriented. Methods. The essence of this work is to make a business organization lean by investigating its actions and applying lean principles. The work builds on a review of papers from IEEE, TMF, IETF, and Ericsson, and results in the modelling of a PoC by following a requirement analysis methodology and applying lean principles to eliminate processes that do not add value. Results. The work produces a full-fledged service relationship model comprising three service levels, with roles that fit the requirement specifications of an NFV infrastructure. The results also show the functionalities of the service levels and the relationships between the roles. The services that need to be standardized are defined, along with a syntax for describing network functions. Lean principles benefit the service relationship model by reducing waste factors, thereby yielding a PoC that is performance-service-oriented. Conclusions. We conclude that the roles defined fit the designed service relationship model, and that the model can sustain the flow of service by standardizing the sub-services and reducing waste as interpreted through lean principles; further proof of the model in full-scale industry trials is needed. The ways of describing network function syntax follow lean principles, which are essential for sub-service standardization. The PoC thus defined can serve as an assurance for the NFV infrastructure.
18. Low overhead dynamic binary translation for ARM. D'Antras, Bernard, January 2017.
Driven by Moore's Law, many computer architectures (ARM, x86, MIPS, PowerPC, SPARC) have evolved from 32-bit to 64-bit. To support existing applications, all of them have kept a 32-bit compatibility mode. However, this comes at a cost in hardware complexity, power consumption, and development time. Dynamic binary translation, which recompiles binaries into the new instruction set at runtime, can be used instead of dedicated hardware for this purpose. While this approach has previously been used to assist architecture transitions, such translators have always traded off performance against transparency, a measure of how accurately they emulate the 32-bit environment. This thesis addresses ARM's transition from AArch32 to AArch64 through MAMBO-X64, a dynamic binary translator developed to support this transition. A range of novel optimizations were devised to improve translation performance while maintaining strict transparency. These follow a common theme: exploiting existing hardware features such as hardware return prediction, virtual memory, and virtualization extensions to offset translation overheads. HyperMAMBO-X64, a variant of MAMBO-X64 integrated into a hypervisor, was also developed to support system-level translation while remaining transparent to guest operating systems. Results demonstrate that the cost of binary translation is reduced, delivering performance competitive with the manufacturer's hardware; performance in several benchmarks even exceeds that of the integrated compatibility mode. MAMBO-X64 thus not only provides a means for architectural upgrade, but also an alternative to the expense of the legacy support currently employed.
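The heart of any dynamic binary translator is the lookup that maps guest code addresses to already-translated host code. The following is a generic, illustrative sketch of a direct-mapped translation cache; it is not MAMBO-X64's actual data structure, and all names are hypothetical:

```c
/* Minimal sketch (not MAMBO-X64's design): a direct-mapped translation
 * cache, the core lookup in a dynamic binary translator. Guest (AArch32)
 * code addresses map to translated host (AArch64) code. */
#include <stdint.h>
#include <stddef.h>

#define TCACHE_BITS 16
#define TCACHE_SIZE (1u << TCACHE_BITS)

typedef struct {
    uint32_t guest_pc;   /* address of the original 32-bit basic block */
    void    *host_code;  /* entry point of its translation, or NULL */
} tcache_entry;

static tcache_entry tcache[TCACHE_SIZE];

static inline size_t tcache_hash(uint32_t pc)
{
    return (pc >> 2) & (TCACHE_SIZE - 1);  /* A32 instructions are 4-byte aligned */
}

/* On a hit, the dispatcher jumps straight to host code; on a miss it
 * would invoke the translator and install the new block. */
void *tcache_lookup(uint32_t guest_pc)
{
    tcache_entry *e = &tcache[tcache_hash(guest_pc)];
    return (e->guest_pc == guest_pc) ? e->host_code : NULL;
}

void tcache_insert(uint32_t guest_pc, void *host_code)
{
    tcache_entry *e = &tcache[tcache_hash(guest_pc)];
    e->guest_pc  = guest_pc;    /* evict any previous occupant */
    e->host_code = host_code;
}
```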
19. Dynamic selection of redundant web services. Slavova, Svetlana, 15 August 2007.
In the domain of Web Services, it is not uncommon to find redundant services that provide the same functionality to clients. Services with the same functionality can be clustered into a group of redundant services; correspondingly, a service that offers several functionalities belongs to more than one group. Having various Web Services able to handle a client's request suggests the need for a mechanism that selects the most appropriate Web Service at a given moment.

This thesis presents an approach, the Virtual Web Services Layer, for dynamic service selection based on virtualization on the server side. It helps manage redundant services in a transparent manner and allows services to be added to the system at run-time. In addition, the layer assures a level of security, since consumers do not have direct access to the Web Services.

Several selection techniques are applied to increase the performance of the system in terms of load balancing, dependability, and execution time. The results of the experiments show which selection techniques are appropriate when different QoS criteria of the services are known, and how the correctness of this information influences the decision-making process.
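As an illustration of one possible selection technique (not necessarily one the thesis implements), the sketch below picks the live replica with the lowest observed average response time from a group of redundant services; the endpoints and fields are hypothetical:

```c
/* Minimal sketch (not the thesis's implementation): one policy a virtual
 * layer might use to pick among redundant Web Services, choosing the
 * replica with the lowest observed average response time. */
#include <stdio.h>

typedef struct {
    const char *endpoint;     /* hypothetical service URL */
    double avg_response_ms;   /* moving average of past calls */
    int    alive;             /* latest health-check result */
} service;

/* Return the index of the best live replica, or -1 if none is available. */
int select_service(const service *group, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!group[i].alive)
            continue;
        if (best < 0 || group[i].avg_response_ms < group[best].avg_response_ms)
            best = i;
    }
    return best;
}

int main(void)
{
    service group[] = {              /* one group of redundant services */
        { "http://hostA/svc", 42.0, 1 },
        { "http://hostB/svc", 17.5, 1 },
        { "http://hostC/svc",  9.9, 0 },  /* down: excluded */
    };
    int k = select_service(group, 3);
    if (k >= 0)
        printf("dispatching request to %s\n", group[k].endpoint);
    return 0;
}
```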
20. A Performance Evaluation of Database Systems on Virtual Machines. Minhas, Umar Farooq, 04 December 2007.
Virtual machine technologies offer simple and practical mechanisms to address many manageability problems in database systems. For example, these technologies allow for server consolidation, easier deployment, and more flexible provisioning. Therefore, database systems are increasingly being run on virtual machines. This offers many unique opportunities for database research. However, it is also important to understand the cost of virtualization. Virtual machine technologies add a layer of indirection between applications and the hardware that they use (e.g. CPU, memory, disk). This added complexity results in a performance overhead for software systems running in a virtual machine. In this thesis, we present an experimental study of the overhead of running a database workload in a virtual machine. Using a TPC-H workload running on PostgreSQL in a Xen virtual machine environment, we show that Xen does indeed introduce overhead for system calls, page fault handling, and disk I/O. However, these overheads do not translate to a high overhead in query execution time. We show that in all cases the average overhead is less than 10% and, therefore, conclude that the advantages of running a database system in a virtual machine do not come at a high cost in performance.
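A minimal sketch of how per-system-call overhead can be exposed (this is illustrative, not the thesis's benchmark harness): time a trivial system call in a tight loop, then run the same program natively and inside a Xen guest and compare the per-call cost.

```c
/* Minimal sketch (not from the thesis): timing a trivial system call in
 * a tight loop, one way to expose the per-syscall overhead that a
 * virtualization layer adds. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/time.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    const long iters = 1000000;
    struct timeval t0, t1;

    gettimeofday(&t0, NULL);
    for (long i = 0; i < iters; i++)
        syscall(SYS_getpid);   /* bypass glibc's cached getpid() */
    gettimeofday(&t1, NULL);

    double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
    printf("%.3f us per system call\n", us / iters);
    return 0;
}
```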