  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Achieving a Reusable Reference Architecture for Microservices in Cloud Environments

Leo, Zacharias January 2019 (has links)
Microservices are a new trend in application development. They allow for breaking down big monolithic applications into smaller parts that can be updated and scaled independently. However, there are still many uncertainties when it comes to the standards of microservices, which can lead to costly and time-consuming creations or migrations of system architectures. One of the more common ways of deploying microservices is through the use of containers and a container orchestration platform, most commonly the open-source platform Kubernetes. In order to speed up the creation or migration it is possible to use a reference architecture that acts as a blueprint to follow when designing and implementing the architecture. Using a reference architecture will lead to more standardized architectures, which in turn are more time- and cost-effective. This thesis proposes such a reference architecture to be used when designing microservice architectures. The goal of the reference architecture is to provide a product that meets the needs and expectations of companies that already use microservices or might adopt microservices in the future. In order to achieve the goal of the thesis, the work was divided into three main phases. First, a questionnaire was conducted and sent out to be answered by experts in the area of microservices or system architectures. Second, literature studies were made on the state of the art and practice of reference architectures and microservice architectures. Third, studies were made on the Kubernetes components found in the Kubernetes documentation, which were evaluated and chosen depending on how well they reflected the needs of the companies. This thesis finally proposes a reference architecture with components chosen according to the needs and expectations of the companies found from the questionnaire.
12

Řadič postupného nasazení software nad platformou Kubernetes / Kubernetes Canary Deployment Controller

Malina, Peter January 2019 (has links)
The need to deliver value to users grows daily in the competitive IT market. Agility and DevOps are becoming critical aspects of software development, which seeks tools that support an agile culture. Software projects in an agile culture often tend to adopt deployment strategies that reduce the risk of deploying new changes into an existing system. However, environments intended for development and testing almost always differ from production. Using an appropriate deployment strategy such as canary improves the overall stability of the system by testing new changes on a small sample of production traffic. Several experiments were performed to demonstrate that the canary strategy can positively affect deployment stability and reduce the risk introduced by new changes.
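The promote-or-rollback decision at the heart of a canary strategy can be sketched as a comparison of observed error rates between the stable and canary versions. This is an illustrative sketch, not the controller from the thesis; the function name and threshold are assumptions.

```python
# Canary gate sketch: a small share of traffic hits the canary, and it is
# promoted only if its error rate stays within a tolerance of the stable
# version's. All names and thresholds here are illustrative.

def canary_decision(stable_errors, stable_total, canary_errors, canary_total,
                    max_error_delta=0.02):
    """Return 'promote' or 'rollback' by comparing error rates."""
    if canary_total == 0:
        return "rollback"  # no canary traffic observed, nothing to judge
    stable_rate = stable_errors / stable_total if stable_total else 0.0
    canary_rate = canary_errors / canary_total
    # Promote only if the canary is not noticeably worse than stable.
    return "promote" if canary_rate <= stable_rate + max_error_delta else "rollback"
```

In a real controller this check would run repeatedly while the traffic share to the canary is gradually increased, rolling back on the first failed evaluation.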
13

Kubernetes for Game Development : Evaluation of the Container-Orchestration Software

Lundgren, Jonas January 2021 (has links)
Kubernetes is software for managing clusters of containerized applications and has recently risen in popularity in the tech industry. However, this popularity seems not to have spread to the game development industry, prompting the author to investigate whether the reason is a technical limitation. The investigation is done by creating a proof-of-concept of a simple system setup for running a game server in Kubernetes, consisting of the Kubernetes cluster itself, the game server to be run in the cluster, and a matchmaker server for managing client requests and the creation of game server instances. Thanks to the successful proof-of-concept, a conclusion can be made that there is no inherent technical limitation causing its infrequent use in game development, but most likely habitual reasons in combination with how new Kubernetes is.
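The matchmaker's role described above can be sketched as a small allocator: fill an existing game server instance if one has room, otherwise spin up a new one. In the real proof-of-concept the "spawn" step would be an API call asking Kubernetes to start a new game-server Pod; here it is simulated with a counter, and all names are illustrative.

```python
# Minimal matchmaker sketch: assigns players to game server instances,
# creating a new instance when all existing ones are full. The class and
# capacity value are assumptions for illustration, not the thesis's code.

class Matchmaker:
    def __init__(self, capacity=4):
        self.capacity = capacity   # players per game server instance
        self.servers = []          # player count per running instance

    def assign(self, player_id):
        """Place a player on the first non-full server, spawning one if needed."""
        for i, count in enumerate(self.servers):
            if count < self.capacity:
                self.servers[i] += 1
                return i           # index stands in for a Pod/endpoint
        self.servers.append(1)     # here: would create a new game server Pod
        return len(self.servers) - 1
```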
14

Optimized Autoscaling of Cloud Native Applications

Åsberg, Niklas January 2021 (has links)
Software containers are changing the way distributed applications are executed and managed on cloud computing resources. Autoscaling allows containerized applications and services to run resiliently with high availability without demanding user intervention. However, specifying an autoscaling policy that can guarantee that no performance violations will take place is an extremely hard task, doomed to fail unless considerable care is taken. Existing autoscaling solutions try to solve this problem but fail to consider application-specific parameters when doing so, thus causing poor resource utilization and/or unsatisfactory quality of service in certain dynamic workload scenarios. This thesis proposes an autoscaling solution that enables cloud native applications to autoscale based on application-specific parameters. The proposed solution consists of a profiling strategy that detects key parameters that affect the performance of autoscaling, and an autoscaling algorithm that automatically enforces autoscaling decisions based on parameters derived from the profiling strategy. The proposed solution is compared and evaluated against the default autoscaling feature in Kubernetes during different realistic user scenarios. Results from the testing scenarios indicate that the proposed solution, which uses application-specific parameters, outperforms the default autoscaling feature of Kubernetes in resource utilization while keeping SLO violations at a minimum.
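The default Kubernetes autoscaler that the thesis compares against, the Horizontal Pod Autoscaler, scales replicas proportionally to an observed metric using the rule documented by Kubernetes: desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). The sketch below applies that rule with min/max bounds; the profiling strategy described above would feed in application-specific parameters instead of a raw metric.

```python
import math

# Sketch of the default HPA scaling rule (per the Kubernetes documentation):
#   desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
# clamped to a [min, max] replica range. Bound values are illustrative.

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))
```

For example, 3 replicas at 90% of a 60% CPU target scale up to ceil(3 × 1.5) = 5 replicas.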
15

CONTAINER SYSTEM VISIBILITY & MODEL EXTRACTION

Alanko, Mikael January 2022 (has links)
The development of applications that use microservice architecture patterns is increasing rapidly, and this architecture is proven to be successful in many different areas, especially in cloud computing. The reason microservices and cloud computing are a great match is the possibility of scaling and deploying individual services, which positively affects the cost and utilization. This architecture pattern includes some challenges for the developers, such as placement optimisation and knowledge about how the applications are deployed. This study intends to clarify how the applications in a multi-cluster environment are deployed. A service model was created, describing how applications built with microservice architecture patterns communicate with each other and which microservices the application contains. More specifically, this can be seen as the first step of placement optimisation that will be developed in the future. The test cases used to produce the service models have various characteristics, such as control planes, where applications were deployed, and numbers of replicas. These characteristics were varied so that the service models could be relied on and so that the model created works independently of how the deployment model is created. The created service models show that the application topology is not restricted for the reverse engineering method to work: independent of the number of control planes or replicas, the method worked. Furthermore, the service models created for each test case gave the correct outcome for each application regarding microservices and the connections between each microservice.
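The reverse-engineering idea above, deriving a service model from a deployed application, can be sketched as building a graph of microservices and their connections from observed service-to-service calls (for example, collected from proxy logs across clusters). The input format and names below are assumptions for illustration.

```python
from collections import defaultdict

# Service-model sketch: from (caller, callee) pairs, build a map of each
# microservice to the set of services it calls. Replicas of the same service
# are assumed to have been collapsed into one node by name.

def build_service_model(observed_calls):
    """observed_calls: iterable of (caller, callee) pairs.
    Returns {service: sorted list of services it calls}."""
    model = defaultdict(set)
    for caller, callee in observed_calls:
        model[caller].add(callee)
        model.setdefault(callee, set())  # include leaf services as nodes
    return {svc: sorted(deps) for svc, deps in model.items()}
```

On a toy trace this yields the application's topology regardless of how many replicas produced the individual calls, which mirrors the replica-independence result reported above.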
16

Finding the Sweet Spot: Optimizing Kubernetes for Scalability and Resilience : A Comprehensive Study on Improving Resource Utilization and Performance in Containerized Environments.

Rör, Adam January 2023 (has links)
Modern technology is rapidly and efficiently expanding, and by looking at the largest companies by market cap, one will find enterprises like Apple, Microsoft, Alphabet, and Meta. Given the complexity of modern software architecture, there arises a necessity for a software architecture that is both scalable and adaptable. This demand has given rise to the adoption of microservices as a preferred approach for building complex and distributed applications. However, managing microservices effectively is a difficult task. Therefore, Google created an orchestration tool called Kubernetes (K8). The primary purpose of this thesis is to extend the information about the characteristics of a K8 cluster by monitoring its performance in various scenarios. There is substantial documentation about how K8 works and why it is used. However, insufficient information exists regarding the performance of K8 in different scenarios. Extensive testing has been carried out to extend the information about the characteristics of K8. Parameters such as the number of Pods, containers, mounts, and CPU cores have been thoroughly tested. Additionally, parameters such as container load, CPU limitation, container distribution, and memory allocation have been examined. The core results include startup time and CPU utilization. The startup time is essential in a K8 cluster because of its ephemeral characteristics, meaning each Pod is short-lived and will restart frequently. CPU utilization testing is essential to analyze how K8 allocates resources and performs with different amounts of resources. The results show that the most significant parameters regarding startup time are, as one might expect, the number of containers, CPUs, Pods, and the load in each Pod. However, the complexity of the Pod, for instance, the number of mount points, has significantly less effect on the cluster than expected.
Regarding CPU utilization, the results show that K8 does lower CPU usage if possible, resulting in equal CPU usage even with different numbers of CPUs. The most significant CPU usage parameter is the load of the application. Finally, this thesis work has filled some gaps in how a K8 cluster behaves under various circumstances, for instance, varying numbers of Pods, containers, or CPUs. One must consider several aspects while designing a K8 cluster. However, not all aspects have been considered, and the usage of K8 increases daily. Therefore, this thesis will hopefully be one of many reports investigating how a K8 cluster behaves and what to consider when building a cluster.
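The startup-time measurements described above reduce to the delta between when a Pod is created and when it reports Ready. A minimal sketch of that computation, assuming the timestamps have already been fetched (in a live cluster they would come from the Kubernetes API; here they are plain numbers):

```python
# Startup-latency sketch: given (created, ready) timestamp pairs for a batch
# of Pods, compute the mean and worst startup time. Input format is an
# assumption for illustration.

def startup_stats(pods):
    """pods: list of (created_ts, ready_ts) in seconds. Returns (mean, max)."""
    deltas = [ready - created for created, ready in pods]
    return sum(deltas) / len(deltas), max(deltas)
```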
17

Kubernetes Automatic Geographical Failover Techniques

Eriksson, Philip January 2023 (has links)
With the rise of microservice architectures, there is a need for an orchestration tool to manage containers. Kubernetes has emerged as one of the most popular alternatives, achieving widespread adoption. But managing multiple Kubernetes clusters on one's own has proven to be a challenging task. This difficulty has given rise to multiple cloud-based alternatives which help streamline the management of a cluster environment and help maintain an extremely high-availability environment that is hard to replicate on premises. Using these cloud-based platforms for hosting and managing one's system is convenient, but ceding control of a system to a cloud provider masks any illicit behaviour performed on or through the system. The scope of this thesis is to examine optional designs that automate the process of executing a geographical failover between different locations to better sustain an on-premise fault-tolerant Kubernetes environment. There already exist multiple tools in the area of Kubernetes service meshes, but their focus is not primarily on increasing system resilience but on increasing security, observability, and performance. Linkerd is a sidecar-oriented service mesh which supports geographical failover by manually announcing individual services between clusters' mirror gateways. Cilium offers a Container Networking Interface (CNI) which performs routing through eBPF and allows for seamless failover between clusters by managing cross-cluster service endpoints. Both of the mentioned service mesh providers handle failover from inside the Kubernetes cluster. The contributions include two new peer-to-peer designs that focus on external cluster geographical failover; both designs are compatible with preexisting Kubernetes clusters without internal modifications.
A fully replicated design was then realised into a proof of concept (POC) and tested against a Cilium multi-cluster environment on the metric of north-to-south traffic latency. Due to the nature of the underlying hardware, the tests showed that the POC can be used for external geographical failover, and it showed potential performance capabilities at a limited lab scale. As the purpose of this thesis was not to determine the traffic throughput of a geographical failover solution but to examine different approaches by which automatic geographical failover can be implemented, the tests were a success. Therefore, this thesis can conclude that there exist several working solutions, and the POC has shown that there are still undiscovered and unimplemented solutions to explore.
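The external failover decision that the two designs automate can be sketched as a health-probe loop over an ordered list of sites: route client traffic to the first geographically preferred cluster that still answers probes. The cluster names and probe function below are illustrative assumptions, not the POC's actual interface.

```python
# North-south failover sketch: pick the first healthy cluster from a
# preference-ordered list, independent of anything running inside the
# clusters themselves (matching the "external, no internal modifications"
# property described above).

def pick_cluster(clusters, is_healthy):
    """clusters: cluster endpoints in order of preference.
    is_healthy: callable probing one cluster.
    Returns the first healthy cluster, or None on total outage."""
    for cluster in clusters:
        if is_healthy(cluster):
            return cluster
    return None  # all sites down: nothing to fail over to
```

A real implementation would run this continuously in the peer-to-peer nodes and update DNS or routing state when the chosen cluster changes.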
18

An evaluation of Honeypots with Compliant Kubernetes

Eriksson, Oscar January 2023 (has links)
This thesis evaluates different honeypot technologies and how they can be integrated into Compliant Kubernetes (CK8s), a secure open-source distribution of Kubernetes designed to address various compliance and regulatory requirements. The thesis identifies and compares the features, metrics, and suitability of several candidate honeypots for CK8s based on a literature survey and experimental testing. The thesis also discusses the value and challenges of using honeypots in cloud environments and the legal and ethical issues involved. The main findings of the thesis are that ContainerSSH is the most mature, user-friendly, and Kubernetes-compatible honeypot among the candidates, and that honeypots can provide useful threat intelligence and security awareness for cloud systems.
19

Automatic Detection of Security Deficiencies and Refactoring Advises for Microservices

Ünver, Burak January 2023 (has links)
The microservice architecture enables organizations to shorten development cycles and deliver cloud-native applications rapidly. However, it also brings security concerns that need to be addressed by developers. Therefore, security testing in microservices becomes even more critical. Recent research papers indicate that security testing of microservices is often neglected for reasons such as lack of time, lack of experience in the security domain, and absence of automated test environments. Even though several security scanning tools exist to detect container, containerized workload management (Kubernetes), and network issues, none individually is sufficient to cover all security problems in microservices. Using multiple scanning tools increases the complexity of analyzing findings and mitigating security vulnerabilities. This paper presents a fully automated test tool suite that can help developers address security issues in microservices and resolve them. It targets reducing time and effort in security activities by encapsulating open-source scanning tools into one suite and providing improved feedback. The developed security scanning suite is named Pomegranate. To develop Pomegranate, we employed Design Science and conducted our investigation at Ericsson. We have evaluated our tool using a static approach. The evaluation results indicate that Pomegranate could be helpful to developers by providing simplified and classified outputs for security vulnerabilities in microservices. More than half of the practitioners who gave us feedback found Pomegranate helpful in detecting and mitigating security problems in microservices. We conclude that a fully automated test tool suite can help developers to address most security issues in microservices. Based on the findings in this paper, the direction for future work is to conduct a dynamic validation of Pomegranate in a live project.
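The core idea described, running several scanners and merging their findings into one simplified, classified report, can be sketched as a deduplicating merge grouped by severity. The finding structure below is an assumption for illustration, not Pomegranate's actual output format.

```python
from collections import defaultdict

# Finding-aggregation sketch: merge the outputs of multiple scanners,
# deduplicate by (target, issue), and group by severity so one report
# replaces several tool-specific ones.

def merge_findings(*scanner_outputs):
    """Each output: list of dicts with 'target', 'issue', 'severity'.
    Returns {severity: sorted list of unique (target, issue) pairs}."""
    seen = set()
    report = defaultdict(list)
    for output in scanner_outputs:
        for finding in output:
            key = (finding["target"], finding["issue"])
            if key not in seen:            # drop duplicates reported by
                seen.add(key)              # more than one scanner
                report[finding["severity"]].append(key)
    return {sev: sorted(pairs) for sev, pairs in report.items()}
```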
20

Container Orchestration and Performance Optimization for a Microservices-based Application

Yousaf, Ali January 2022 (has links)
Microservices is a new software design concept for developing scalable, loosely coupled services with a smaller codebase than the traditional monolithic approach. The designed microservices can communicate using several protocols, such as Advanced Message Queuing Protocol (AMQP) or HTTP/REST. Software developed using microservices design offers the developers great flexibility to choose a preferred technology stack and make independent data storage decisions. On the other hand, containerization is a mechanism that packages together the application code and dependencies to run on any platform uniformly and consistently. Our work utilizes Docker and Kubernetes to manage a containerized application. The Docker platform bundles the application dependencies and runs them in the containers. Moreover, Kubernetes is used for deploying, scaling, and managing containerized applications. However, microservices-based architecture brings many challenges, as multiple services are being built and deployed simultaneously in this design. Similarly, a software developer faces many questions, such as where to physically deploy a newly developed service: for example, on a machine with more computing resources, or near another service with which it often needs to communicate? Furthermore, it is observed in previous studies that microservices may bring performance degradation due to increased network calls between the services. To answer these questions, we develop a unique microservices-based containerized application that classifies images using deep learning tools. The application is deployed into the Docker containers, while Kubernetes manages and executes the application on the on-premise machines. In addition, we design experiments to study the impact of container placement on the application performance in terms of latency and throughput.
Our experiments reveal that Communication Aware Worst Fit Decreasing (CAWFD) obtained 49%, 55%, and 54% better average latency in microservice placement scenario two. This average latency is lower than CAWFD in scenario one in the 100, 300, and 500 image groups. Simultaneously, the Spread strategy displayed minimal performance because the Kubernetes scheduler determines the container placements on the nodes. Finally, we discover that CAWFD is the best placement strategy to reduce the average latency and enhance throughput.
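The placement idea behind Communication Aware Worst Fit Decreasing can be sketched as follows: sort services by resource demand (decreasing), then place each one either with a service it communicates heavily with (communication-aware, to cut network calls) or on the node with the most free capacity (worst fit). This is an illustrative reconstruction under those assumptions, not the thesis's exact algorithm.

```python
# CAWFD-style placement sketch. Inputs and the co-location rule are
# assumptions for illustration.

def cawfd_place(services, node_capacity, num_nodes, talks_to=None):
    """services: {name: resource demand}; talks_to: {name: chatty peer}.
    Returns {name: node index}; raises if a service cannot fit anywhere."""
    talks_to = talks_to or {}
    free = [node_capacity] * num_nodes
    placement = {}
    for name, demand in sorted(services.items(), key=lambda s: -s[1]):
        peer = talks_to.get(name)
        # Prefer co-locating with a chatty peer when it fits (fewer network calls).
        if peer in placement and free[placement[peer]] >= demand:
            node = placement[peer]
        else:
            node = max(range(num_nodes), key=lambda n: free[n])  # worst fit
            if free[node] < demand:
                raise ValueError(f"no capacity for {name}")
        free[node] -= demand
        placement[name] = node
    return placement
```

Co-locating communicating services is what reduces the inter-node calls blamed above for microservice performance degradation, while worst fit keeps the remaining load spread out.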
