  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Análise do impacto do isolamento em ambientes virtuais / Analysis of the impact of isolation in virtual environments

SILVA, Luís Eduardo Tenório 07 March 2016 (has links)
The rise of virtualization has changed the way service provision over the Internet is approached, enabling major concepts such as cloud computing. Over time, new technologies for virtualizing resources have emerged, raising questions of performance, isolation and more. Analyzing the impact caused by misbehaving virtual abstractions can allow the cloud administrator to act by choosing a specific virtualization technique that minimizes the impact on the whole environment. Nowadays, isolation is a concern that drives research into its impact on the quality of the services provided in a virtualized environment, and investigating whether such interference can be detected, so that actions can be taken to minimize the impact of poor isolation, is one of the activities that has been studied over the years. The emergence of several virtualization techniques has also raised the question of which technique suits which case, and some of these techniques have seen notable improvements in recent years, especially regarding isolation and resource control. In this context, this dissertation proposes a strategy adapted from the literature (combining distinct techniques) to observe possible signs of isolation breakage in virtual environments and to place services of a given nature on the most suitable virtualization technique, also examining the results produced by each existing virtualization technique. To this end, a methodology is adopted that builds a range of possible scenarios from a number of virtual infrastructures offering web services under different virtualization techniques, observing mainly the resources used by the virtual infrastructures and the quality of the service provided. We conclude that, depending on the type of resource observed, the isolation strategies of a virtualization technique may or may not be effective. (Work supported by CNPq.)
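
The abstract observes resource usage and service quality together to spot signs of broken isolation. Purely as an illustration of that idea, and not the detection strategy actually evaluated in the dissertation, the sketch below flags a possible isolation breach when a co-located tenant's CPU usage correlates strongly with another tenant's service latency; the correlation threshold and the sample series are invented assumptions.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equally long series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def isolation_suspect(neighbour_cpu, service_latency, threshold=0.7):
    """Flag a possible isolation breach when a neighbour's CPU usage tracks
    the observed latency of another tenant's service. The threshold is an
    illustrative assumption, not a value from the dissertation."""
    return pearson(neighbour_cpu, service_latency) >= threshold

if __name__ == "__main__":
    # Invented sample values, one per monitoring interval.
    cpu = [10, 20, 55, 80, 85, 90, 40, 15]
    latency_ms = [12, 14, 30, 55, 60, 66, 25, 13]
    print("possible isolation break:", isolation_suspect(cpu, latency_ms))
```
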
2

LXC utvärdering : Skriv- och läshastighet till disk analys av LXC under ESXi / LXC Evaluation : Write and reading speed evaluation of LXC intertwined with ESXi

Olsson, Johan January 2016 (has links)
There are several ways to virtualize machines, from closed-source variants such as VMware ESXi and Windows Hyper-V to open-source variants such as Xen and the Kernel-based Virtual Machine (KVM). There is also another way: virtualizing parts of an operating system to increase versatility and use the system's resources more efficiently. LXC (Linux Containers) is a lightweight virtualization technology that runs on top of the existing operating system, encapsulating the applications inside each container. LXC works by sharing the kernel of the Linux host among containers that run next to each other with little knowledge of one another. In that way it can be more resource efficient than virtualizing the entire Linux kernel several times for different applications in a traditional guest-to-host environment. Many data centers already use some variant of virtualization in their production environment, so it is interesting to examine whether other methods yield better performance for a chosen application, and power savings when hosts can be turned off. This project therefore carried out a field study to examine how LXC performs when the host system is itself virtualized in a hypervisor environment, since an organization might want to migrate from a hypervisor environment to a lightweight, container-based virtualization environment. The work was done through experiments using two different pieces of software to examine I/O and determine whether LXC is affected by being nested inside ESXi. The study begins with a small background study of previous work in relevant fields, and was conducted using the experimental method in order to answer the hypothesis and the project's questions: How much degradation of the file system's read and write speeds arises when LXC is nested in ESXi? Is the file system's ability to read and write to disk affected when available resources are restricted? The results of the experiments show that LXC performs close to bare-metal systems, with a minimum loss of 2 percent and a maximum of 11 percent in write and read throughput to/from disk. When LXC is nested in ESXi there is up to a 15 percent loss in write and read throughput, excluding the loss the hypervisor itself adds. When the resources of a container are restricted to one processor core and two gigabytes of primary memory, the experiments show a 3 to 15 percent loss in write and read throughput from disk.
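
The abstract does not name the benchmarking software used. Purely as an illustration of the kind of sequential write/read measurement involved (run once on bare metal, once in LXC, and once in LXC nested in ESXi, then compared), a minimal Python sketch might look as follows; the file name, file size and block size are arbitrary assumptions, and a real benchmark would use direct I/O or drop the page cache so the read pass is not served from memory.

```python
import os
import time

def sequential_write_read(path="bench.tmp", size_mb=256, block_kb=1024):
    """Measure sequential write and read throughput (MB/s) to a file.

    A rough stand-in for a disk benchmark; parameters are illustrative only.
    """
    block = os.urandom(block_kb * 1024)
    blocks = size_mb * 1024 // block_kb

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())          # force data to disk so buffering is not measured
    write_mbps = size_mb / (time.perf_counter() - start)

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_kb * 1024):
            pass                       # note: may be served from the page cache
    read_mbps = size_mb / (time.perf_counter() - start)

    os.remove(path)
    return write_mbps, read_mbps

if __name__ == "__main__":
    w, r = sequential_write_read()
    print(f"write: {w:.1f} MB/s  read: {r:.1f} MB/s")
```
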
3

Performance comparison of Linux containers (LXC) and OpenVZ during live migration : An experiment

Indukuri, Pavan Sutha Varma January 2016 (has links)
Context: Cloud computing is one of the most widely used technologies all over the world, providing numerous products and IT services. Virtualization is one of the innovative technologies in cloud computing, with the advantages of improved resource utilisation and management. Live migration is an innovative feature of virtualization that allows a virtual machine or container to be transferred from one physical server to another. It is a complex process which can have a significant impact on cloud computing when used by cloud-based software. Objectives: In this study, live migration of LXC and OpenVZ containers is performed, and the performance of LXC and OpenVZ is compared in terms of total migration time and downtime. CPU utilisation, disk utilisation and the average load of the servers are also evaluated during live migration. The main aim of this research is to compare the performance of LXC and OpenVZ during live migration of containers. Methods: A literature study was done to gain knowledge about the process of live migration and the metrics required to compare the performance of LXC and OpenVZ during live migration of containers. An experiment was then conducted to compute and evaluate the performance metrics identified in the literature study; it investigated and evaluated the migration process for both LXC and OpenVZ and was designed around the objectives to be met. Results: The results of the experiment cover the migration performance of both LXC and OpenVZ. The metrics identified in the literature review, total migration time and downtime, were evaluated for LXC and OpenVZ, and graphs were plotted for CPU utilisation, disk utilisation and average load during live migration. The results were analysed to compare the performance differences between OpenVZ and LXC during live migration of containers. Conclusions: LXC showed higher resource utilisation, and thus lower performance, than OpenVZ; however, LXC had a shorter migration time and downtime than OpenVZ.
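
Downtime during live migration is commonly approximated by probing the migrated service from a third machine and recording the longest interval in which it does not respond. The sketch below illustrates that idea in Python; the target address, probe interval and timeout are placeholder assumptions and not the measurement tooling actually used in the experiment.

```python
import socket
import time

def probe_downtime(host, port, duration_s=120, interval_s=0.1, timeout_s=0.1):
    """Poll a TCP service during a live migration and return the longest
    continuous window (seconds) in which it was unreachable."""
    longest_gap = 0.0
    gap_start = None
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=timeout_s):
                pass                        # service answered: close any open gap
            if gap_start is not None:
                longest_gap = max(longest_gap, time.monotonic() - gap_start)
                gap_start = None
        except OSError:
            if gap_start is None:           # service stopped answering: open a gap
                gap_start = time.monotonic()
        time.sleep(interval_s)
    if gap_start is not None:
        longest_gap = max(longest_gap, time.monotonic() - gap_start)
    return longest_gap

if __name__ == "__main__":
    # 192.0.2.10 is a documentation address standing in for the migrated container.
    print(f"approx. downtime: {probe_downtime('192.0.2.10', 80):.2f} s")
```
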
4

Towards a Secure IoT Computing Platform Using Linux-Based Containers

Hufvudsson, Marcus January 2017 (has links)
Internet of Things (IoT) devices are small, sensing, network-enabled computing devices which can extend smart behaviour into resource-constrained domains. This thesis focuses on evaluating the viability of Linux containers on IoT devices. Three research questions are posed to investigate various aspects of this. (1) Can any guidelines and best practices be derived from creating a Linux-container-based, security-enhanced IoT platform? (2) Can the LiCShield project be extended to build dynamic, default-deny seccomp configurations? (3) Are Linux containers viable on IoT platforms with regard to operational performance impact? To answer these questions, a literature review was conducted, research gaps were identified and a research methodology was selected. A Linux-based container platform was then created in which applications could be run; experimentation was conducted on the platform and operational measurements were collected. A number of interesting results were produced during the project. In relation to the first research question, it was discovered that the LXC templating code created during the project could probably benefit other Linux container projects as well as the LXC project itself, and that a robust, layered, containerized security architecture can be created by using basic container configurations and drawing on best practices from LXC and Docker. In relation to the second research question, a proof-of-concept system was created to profile applications and build dynamic, default-deny seccomp configurations; analysis of the system shows that the developed method is viable. In relation to the final research question, container overhead with regard to CPU, memory, network I/O and storage was measured. In this project there was no CPU overhead and only a slight performance decrease of 0.1 % on memory operations. With regard to network I/O, a speed decrease of 0.2 % was observed when a container received data and utilized NAT; while the container was sending data, a speed increase of 1.4 % was observed in bridge mode and an increase of 0.9 % while utilizing NAT. Regarding storage overhead, a total of 508 KB of base overhead was added to each container on creation. Given these findings, the overhead containers introduce is considered negligible, and containers are thus deemed viable on IoT devices.
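
A default-deny seccomp profile of the kind described is typically assembled by first tracing which system calls an application actually makes and then allowing only those. The following Python sketch is not the LiCShield or thesis tooling; it only illustrates the profile-then-allow idea, assumes the trace comes from an strace log, and emits an allow-list JSON profile in the style used by common container runtimes.

```python
import json
import re
import sys

# Matches the syscall name at the start of a typical strace line, e.g. "openat(AT_FDCWD, ...".
STRACE_CALL = re.compile(r"^(?:\[pid\s+\d+\]\s+)?([a-z0-9_]+)\(")

def observed_syscalls(strace_log_path):
    """Extract the set of syscall names that appear in an strace log."""
    calls = set()
    with open(strace_log_path) as log:
        for line in log:
            m = STRACE_CALL.match(line)
            if m:
                calls.add(m.group(1))
    return calls

def default_deny_profile(allowed):
    """Build a default-deny seccomp profile allowing only the observed calls."""
    return {
        "defaultAction": "SCMP_ACT_ERRNO",   # deny everything not explicitly listed
        "syscalls": [
            {"names": sorted(allowed), "action": "SCMP_ACT_ALLOW"}
        ],
    }

if __name__ == "__main__":
    profile = default_deny_profile(observed_syscalls(sys.argv[1]))
    json.dump(profile, sys.stdout, indent=2)
```
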
5

Comparing Live Migration between Linux Containers and Kernel Virtual Machine : Investigation study in terms of parameters

Kotikalapudi, Sai Venkat Naresh January 2017 (has links)
Context. Virtualization technologies are extensively used in various cloud platforms. Hardware replacement and maintenance are occasionally required, which leads to business downtime. Live migration is performed to ensure high availability of services, which is a major concern. The performance of live migration in a virtualization technology directly impacts the performance of the cloud platforms built on it, so a comparison is performed between two mainstream approaches: container-based and hypervisor-based virtualization. Objectives. In the present study, the objective is to perform live migration with a hypervisor-based and a container-based virtualization technology, the Kernel Virtual Machine (KVM) and Linux Containers (LXC) respectively, and to measure and compare the downtime, total migration time, CPU utilisation and disk utilisation of KVM and LXC during live migration. Methods. An initial literature review is conducted to gain in-depth knowledge about live migration in virtualization technologies. An experiment is then conducted to perform live migration in KVM and LXC. The live migration is performed while 100% and 66% workloads are generated against Cassandra running inside the virtual machine or container, and the performance of live migration in KVM and LXC is measured in terms of CPU utilisation, disk utilisation, total migration time and downtime. Results. Based on the results obtained from the experiment, graphs are plotted for the performance of KVM and LXC during live migration. The results indicate that KVM has better CPU utilisation than LXC, whereas the downtime, total migration time and disk utilisation of LXC are better than those of KVM. From the obtained results, the mean and standard deviation are calculated, box plots of downtime and total migration time are drawn to illustrate the difference between KVM and LXC, and the measurable difference between KVM and LXC is quantified using Cohen's d effect size for downtime, total migration time, CPU utilisation and disk utilisation. Conclusions. The present study concludes that neither technology performs better on all metrics: LXC performs better in terms of downtime, total migration time and disk utilisation, while KVM performs better in terms of CPU usage.
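
The abstract notes that the mean, standard deviation and Cohen's d effect size are computed for each metric. As a worked illustration, Cohen's d for two independent samples divides the difference of the means by the pooled standard deviation; the sketch below shows the calculation in Python, with invented numbers standing in for measured downtimes (they are not values from the thesis).

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(sample_a, sample_b):
    """Cohen's d effect size for two independent samples,
    using the pooled (sample) standard deviation."""
    n_a, n_b = len(sample_a), len(sample_b)
    s_a, s_b = stdev(sample_a), stdev(sample_b)
    pooled = sqrt(((n_a - 1) * s_a**2 + (n_b - 1) * s_b**2) / (n_a + n_b - 2))
    return (mean(sample_a) - mean(sample_b)) / pooled

if __name__ == "__main__":
    # Invented downtime samples in seconds, purely to show the call.
    kvm_downtime_s = [3.1, 2.9, 3.4, 3.0, 3.2]
    lxc_downtime_s = [2.2, 2.4, 2.1, 2.3, 2.2]
    print(f"d = {cohens_d(kvm_downtime_s, lxc_downtime_s):.2f}")
```
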
6

Experimental Investigation of Container-based Virtualization Platforms For a Cassandra Cluster

Sulewski, Patryk, Hallborg, Jesper January 2017 (has links)
Context. Cloud computing is growing fast and has established itself as the next generation software infrastructure. A major role in cloud computing is played by the virtualization of hardware to isolate systems from each other. This virtualization is often done with virtual machines that emulate both hardware and software, which in turn makes the process isolation expensive. New techniques, known as microservices or containers, have been developed to deal with this overhead. The infrastructure is concerned with storing, processing and serving vast, unstructured data sets, and the overall cloud system needs high performance while providing scalability and easy deployment. Microservices can be introduced for all kinds of applications in a cloud computing network and can be a better fit for certain products. Objectives. In this study we investigate how a small system consisting of a Cassandra cluster performs while encapsulated in LXC and Docker containers, compared to a non-virtualized structure. A specific loader is built to stress the cluster and find the limits of the containers. Methods. We constructed an experiment on a three-node Cassandra cluster. Test data is sent by the Cassandra-loader from another server in the network. The Cassandra processes are then deployed in the different architectures and tested. During these tests the metrics CPU, disk I/O and network I/O are monitored on the four servers, and the data from the metrics is used in statistical analysis to find significant deviations. Results. Three experiments were conducted and monitored. The cluster test showed that isolated Docker containers exhibit major latency during disk reads, and a local stress test further confirmed those results. The step-wise test, in turn, implied that the disk read latencies occur because isolated Docker containers need to read more data to handle these requests. All microservices add some overhead, but fall behind the most for read requests. Conclusions. The results of this study show that virtualizing Cassandra nodes in a cluster introduces latency for write operations compared to a non-virtualized solution. However, those latencies can be neglected if scalability is the main focus of the system. For read operations all microservices had reduced performance, and isolated Docker containers showed the highest overhead. This is due to the file system used in those containers, which makes disk I/O slower compared to the other structures. If a Cassandra cluster is to be launched in a container environment, we recommend either a Docker container with mounted disks, to bypass Docker's file system, or an LXC solution.
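
The abstract does not say how CPU, disk I/O and network I/O were collected on the four servers. As a rough illustration of that kind of host-level monitoring, the sketch below samples system-wide counters with the third-party psutil library and writes one CSV row per interval; the output file name, duration and interval are arbitrary assumptions rather than the experiment's setup.

```python
import csv
import time

import psutil  # third-party: pip install psutil

def sample_metrics(out_path, duration_s=60, interval_s=1.0):
    """Periodically record CPU, disk I/O and network I/O counters to a CSV
    file, one row per sample."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time", "cpu_percent",
                         "disk_read_bytes", "disk_write_bytes",
                         "net_sent_bytes", "net_recv_bytes"])
        end = time.monotonic() + duration_s
        while time.monotonic() < end:
            cpu = psutil.cpu_percent(interval=interval_s)  # blocks for one interval
            disk = psutil.disk_io_counters()
            net = psutil.net_io_counters()
            writer.writerow([time.time(), cpu,
                             disk.read_bytes, disk.write_bytes,
                             net.bytes_sent, net.bytes_recv])

if __name__ == "__main__":
    sample_metrics("metrics.csv", duration_s=10)
```
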
7

Systém pro automatickou správu serverů / System for Automated Server Administration

Pavelka, Martin January 2019 (has links)
The goal of this diploma thesis is to design the user interface and implement an information system as a web application. Using a custom-implemented library, the system communicates with a GraphQL server which manages the client data. The thesis describes possible solutions for automating physical servers. The application provides an interface for managing virtual servers, so that automation is possible without human interaction. Connection to the virtualization technologies is handled through web-interface APIs or custom scripts running in the virtual system's terminal. A monitoring system is built on top of the project components. The thesis also describes continuous integration using GitLab tools. Scheduled configuration tasks are run using the Unix cron system.
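
For context on how a client talks to such a GraphQL server, the sketch below posts a query over HTTP using only the Python standard library. The endpoint URL, the bearer-token header and the `virtualServers` query are hypothetical placeholders; the thesis does not publish its schema or API.

```python
import json
import urllib.request

def graphql_query(endpoint, query, variables=None, token=None):
    """POST a GraphQL query and return the decoded JSON response."""
    payload = json.dumps({"query": query, "variables": variables or {}}).encode()
    headers = {"Content-Type": "application/json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    req = urllib.request.Request(endpoint, data=payload, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode())

if __name__ == "__main__":
    # Hypothetical query listing virtual servers managed by the system.
    query = """
    query {
      virtualServers {
        id
        hostname
        state
      }
    }
    """
    result = graphql_query("https://example.org/graphql", query)
    print(result)
```
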
