1

Comparison of Auto-Scaling Policies Using Docker Swarm / Jämförelse av autoskalningspolicies med hjälp av Docker Swarm

Adolfsson, Henrik January 2019
When deploying software engineering applications in the cloud, two similar software components are used: Virtual Machines and Containers. In recent years containers have seen an increase in popularity and usage, in part because of tools such as Docker and Kubernetes. Virtual Machines (VMs) have also seen increased usage as more companies move to cloud solutions with services like Amazon Web Services, Google Compute Engine, Microsoft Azure and DigitalOcean. Some solutions use auto-scaling, a technique where VMs are commissioned and deployed as load increases in order to improve application performance; as the load decreases, VMs are decommissioned to reduce costs. In this thesis we implement and evaluate auto-scaling policies that use both Virtual Machines and Containers. We compare four different policies, including two baseline policies. For the non-baseline policies we define a policy with a single Container per Virtual Machine and a policy with several Containers per Virtual Machine. To compare the policies we deploy an image-serving application and run workloads against it. We find that the choice of deployment strategy and policy matters for response time and error rate. We also find that deploying applications as described in the method is estimated to take roughly 2 to 3 minutes.
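For illustration only, below is a minimal sketch of the kind of threshold-based scaling loop such a policy might use. The thresholds, the polling interval, and the helper functions (average_cpu_utilization, scale_service) are hypothetical and not taken from the thesis.

```python
import time

# Hypothetical thresholds; the thesis does not specify exact values.
SCALE_UP_CPU = 0.70    # scale out above 70% average CPU
SCALE_DOWN_CPU = 0.30  # scale in below 30% average CPU
MIN_REPLICAS, MAX_REPLICAS = 1, 10


def average_cpu_utilization() -> float:
    """Placeholder: return mean CPU utilization (0.0-1.0) across containers.
    In practice this would come from a monitoring agent on each node."""
    raise NotImplementedError


def scale_service(replicas: int) -> None:
    """Placeholder: set the replica count of the image-serving service,
    e.g. by running `docker service scale <name>=<replicas>` on the Swarm manager."""
    raise NotImplementedError


def autoscale_loop(replicas: int = MIN_REPLICAS, interval_s: int = 30) -> None:
    """Simple reactive policy: adjust the replica count by one step per interval."""
    while True:
        cpu = average_cpu_utilization()
        if cpu > SCALE_UP_CPU and replicas < MAX_REPLICAS:
            replicas += 1
            scale_service(replicas)
        elif cpu < SCALE_DOWN_CPU and replicas > MIN_REPLICAS:
            replicas -= 1
            scale_service(replicas)
        time.sleep(interval_s)
```

Under this reading, the "single Container per Virtual Machine" and "several Containers per Virtual Machine" policies would differ mainly in whether scale_service provisions a new VM for each added replica or packs additional replicas onto existing VMs.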
2

Performance Evaluation of WebRTC Server On Different Container Technologies : Kubernetes and Docker Swarm

Kukkapalli, Naga Vyshnavi January 2021
Background: Cloud computing technology has come a long way with various technological advancements in the past few years, accelerated by the evolution of virtualization technologies. Currently almost every social platform and small-scale application looks to the cloud to deploy its services and provide maximum satisfaction to end-users, so virtualizing those services is essential for developing and deploying applications. This alone emphasizes the importance of Docker containers, which currently play a very important role in cloud computing. Since multimedia plays a huge role in our day-to-day lives and most people expect fast and efficient responses, it is essential to build applications with strong real-time communication capabilities. We therefore determine which container orchestration tool serves real-time communication applications best. A multimedia application is developed and deployed using the WebRTC-based Kurento media server, and the performance of the server is measured once the application is deployed. We have chosen Kubernetes and Docker Swarm as the container platforms for this thesis. The servers and clients are virtualized, and metrics such as CPU utilization, network traffic, container overhead and memory utilization are measured. These metrics give the performance overhead in different scenarios for each orchestration technology, which helps analyze and understand the effect of the Kurento server on these technologies. The results are thus expected to determine which orchestration technology serves best for RTC applications.

Objectives: The objectives of this project are:
• To implement a WebRTC-based Kurento server in a container-orchestrated environment.
• To extract performance metrics such as network traffic, CPU and memory utilization while the server is running.
• To compare the WebRTC-based Kurento server on Kubernetes and Docker Swarm.

Method: Kubernetes and Docker Swarm environments are set up, and Docker images of a video-conferencing application (one-to-one call and one-to-many call) using the Kurento media server are deployed in them. Once either application is running, experiments are performed to analyze performance metrics such as CPU utilization, memory utilization, network traffic and overhead using the monitoring tool Prometheus (see the query sketch below). In addition to Kubernetes and Docker Swarm, the Kurento server is also deployed on a stand-alone container to estimate the performance overhead. Statistical analysis (ANOVA and differences of standard error) is then performed over these metrics and conclusions are drawn.

Results: Based on the experiments and the extracted metrics, for the one-to-one call application Kubernetes showed better resource utilization for CPU and network traffic while consuming more memory than Docker Swarm. Similar behaviour is observed for the one-to-many application. When the application is scaled, the percentage increase in resource utilization is higher in Kubernetes than in Docker Swarm, but the overall resource utilization of Kubernetes is much lower than that of Docker Swarm.

Conclusions: The WebRTC-based Kurento media server is investigated on Kubernetes and Docker Swarm. The detailed analysis shows significantly more overhead in Docker Swarm than in Kubernetes for CPU utilization and network traffic; for memory utilization the opposite holds. Packet loss was 0 percent because all network transfer stayed within the same network.
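As a rough sketch of how such metrics could be pulled from Prometheus, the snippet below runs instant queries against the Prometheus HTTP API. The endpoint, the container name filter, and the cAdvisor-style metric names are assumptions for illustration; the thesis's exact queries and exporter setup are not reproduced here.

```python
import requests

# Assumed Prometheus endpoint and cAdvisor-style metric names; treat as illustrative only.
PROMETHEUS_URL = "http://localhost:9090"

QUERIES = {
    "cpu":     'sum(rate(container_cpu_usage_seconds_total{name=~"kurento.*"}[1m]))',
    "memory":  'sum(container_memory_usage_bytes{name=~"kurento.*"})',
    "network": 'sum(rate(container_network_receive_bytes_total{name=~"kurento.*"}[1m]))',
}


def instant_value(query: str) -> float:
    """Run an instant query against the Prometheus HTTP API and return the
    scalar value of the first result, or 0.0 if the result set is empty."""
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": query})
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0


if __name__ == "__main__":
    for name, query in QUERIES.items():
        print(f"{name}: {instant_value(query)}")
```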
By considering all the metrics and providing evidence that the numbers obtained in this thesis are statistically significant and not due to fluctuations (ANOVA and post-hoc analysis), we can recommend Kubernetes over Docker Swarm for web-based real-time communication. However, not all applications need the complex deployment, scheduling and scaling services (or the overhead) that Kubernetes offers. But to meet the increasing demand for seamless real-time communication and to satisfy user requirements, the overhead it introduces is acceptable.
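A minimal sketch of this kind of significance check is shown below, using a one-way ANOVA followed by a pairwise Welch's t-test as a stand-in for whatever post-hoc procedure the thesis applied. The sample values are placeholders, not measurements from the thesis.

```python
from scipy import stats

# Placeholder per-run CPU utilization samples (percent) for each platform;
# real values would come from the Prometheus measurements described above.
kubernetes   = [41.2, 39.8, 40.5, 42.1, 40.9]
docker_swarm = [48.7, 50.1, 49.3, 47.9, 50.6]
standalone   = [35.4, 36.1, 34.8, 35.9, 36.3]

# One-way ANOVA: is there a significant difference between the three groups?
f_stat, p_value = stats.f_oneway(kubernetes, docker_swarm, standalone)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise follow-up (Welch's t-test) for the two orchestrators of interest.
t_stat, p_pair = stats.ttest_ind(kubernetes, docker_swarm, equal_var=False)
print(f"Kubernetes vs Docker Swarm: t = {t_stat:.2f}, p = {p_pair:.4f}")
```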
