About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Evaluation of MLOps Tools for Kubernetes: A Rudimentary Comparison Between Open Source Kubeflow, Pachyderm and Polyaxon

Köhler, Anders January 2022 (has links)
MLOps and Kubernetes are two major components of the modern-day information technology landscape, and their impact on the field is likely to grow even stronger in the near future. A multitude of tools have been developed to facilitate the creation of cloud-native MLOps solutions, and many of them are designed, to varying degrees, to integrate with Kubernetes. While numerous evaluations have been conducted on these tools from a general MLOps perspective, this thesis evaluates their qualities specifically within a Kubernetes context, with the focus on their integration into this ecosystem. The evaluation is conducted in two steps: an MLOps market overview study and an in-depth MLOps tool evaluation. The former is a macroscopic overview of currently available MLOps tooling, whereas the latter delves into the practical aspects of deploying three Kubernetes-native, open-source MLOps platforms on cloud-based Kubernetes clusters. The platforms are Kubeflow, Pachyderm, and Polyaxon, and they are evaluated in terms of functionality, usability, vitality, and performance.
2

Effektiviteten hos kluster med befintliga datorer kontra enskilda datorer / The efficiency of clusters of existing computers versus individual computers

Osman, Las January 2022 (has links)
This thesis evaluates the usefulness of building a cluster from computers that a company or organization already owns, in terms of energy efficiency and performance. A cluster of ordinary commodity computers can serve as an alternative for deploying applications and services, instead of purchasing servers or high-end hardware, thereby reducing costs and reusing hardware that would otherwise be considered waste. The thesis measures the cluster's performance and efficiency and compares the results with other hardware and systems. The work was carried out at Syntronic and the University of Gävle, using the resources available at both parties; one cluster was built and measured at each site. The results show that a cluster built from Syntronic's and the university's computers performs on par with systems based on the consumer-grade processors found in new computers. However, neither cluster offers an economic or environmental advantage over new hardware: it is more beneficial to sell the existing computers on the second-hand market and purchase new ones.
3

Extending the Kubernetes operator Kubegres to handle database restoration from dump files

Bemm, Rickard January 2023 (has links)
The use of cloud-native technologies has grown in popularity in recent years. Cloud-native architecture, which takes full advantage of cloud computing, has become a hot topic among developers and IT professionals. It refers to building and running applications using cloud services and architectures, including containerization, microservices, and automation tools such as Kubernetes, to enable fast and continuous delivery of software. In Kubernetes, the desired state of a resource is described declaratively, and the system handles the details of how to get there. Databases are notoriously hard to deploy in such environments. The Kubernetes operator pattern extends Kubernetes with new resource types and with the logic, called the reconcile function, that drives the managed resources toward their desired state. Operators exist that manage PostgreSQL databases with backup and restore functionality, but some require a license. Kubegres is a free, open-source operator, but it lacks restore functionality. This thesis aims to extend the Kubegres operator to support database restoration from dump files. It covers how to create the restore process in Kubernetes, what modifications must be made to the current architecture, and how to make the reconcile function robust and self-healing yet customizable enough to fit many different needs. The designs of other operators that already support database restoration were studied and inspired the design of the resource definition and the restoration process. A new resource definition was added to describe the desired state of a database restoration, along with a new reconcile function that defines how to act on it. The desired state is re-created each time the reconcile function is triggered. A restoration always targets a new database; once it completes, the resources used for the restore are deleted and only the PostgreSQL database remains. The performance impact of the modified operator compared to the original was measured using operations both versions support, including PostgreSQL database creation, cluster scaling, and changing resource limits. The two collected metrics, CPU and memory usage, increased by 0.058-0.4 mvCPU (12-33%) and 8.2 MB (29%), respectively. A qualitative evaluation against qualities such as robustness, self-healing, customizability, and correctness showed that the design fulfils most of them.
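
The operator pattern described above — a new resource definition plus a reconcile function that repeatedly re-creates the desired state — can be sketched in Go with the controller-runtime library. The `KubegresRestore` spec fields and the numbered steps in the comments are hypothetical illustrations of the approach, not the thesis's actual implementation.

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// KubegresRestoreSpec is a hypothetical custom resource spec describing a
// restoration: which dump file to load and which new cluster to create.
type KubegresRestoreSpec struct {
	DumpFile      string // path to the dump file inside a mounted volume
	TargetCluster string // name of the new PostgreSQL cluster to restore into
	StorageClass  string
}

// RestoreReconciler reconciles the hypothetical KubegresRestore resource.
type RestoreReconciler struct {
	client.Client
}

// Reconcile is invoked every time the watched resource changes. It rebuilds
// the desired state from scratch on each call, which is what makes the
// operator self-healing: missing child resources are simply created again.
func (r *RestoreReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// 1. Fetch the KubegresRestore object named in the request (omitted here).
	// 2. Ensure a fresh PostgreSQL cluster exists as the restore target.
	// 3. Ensure a Job exists that runs psql/pg_restore against the dump file.
	// 4. When the Job reports success, delete the restore Job and its volume,
	//    leaving only the restored PostgreSQL database behind.
	// 5. If anything is not yet ready, requeue and try again later.
	return ctrl.Result{}, nil
}
```
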
4

Möjliga säkerhetsmetoder för en multitenant OpenShift-miljö / Possible security methods for a multitenant OpenShift environment

Rilegård, Daniel January 2022 (has links)
Cloud services are an area of IT that has grown in importance for many years, largely because they give organizations access to computing resources that would often be too costly to maintain themselves. OpenShift is one such tool, based on the orchestration tool Kubernetes. These, together with many others, are built on a cluster architecture: a network of nodes that perform various tasks. Multitenancy is a software architecture that uses system resources optimally by running several customers' applications isolated but in parallel on the same hardware. However, it introduces security problems, since leaks between applications must be prevented. Based on discussions with OpenShift experts, this study has tried to map out what the security problems are and what methods are available to address them. The problems mainly concern how so-called secrets are best handled, and some examples of tools for this are given. The study ends with a comparison of which tools work best with multitenancy and concludes that this is an area where more research is needed, as all of them have shortcomings.
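
As a hedged illustration of the secrets handling the study centres on, the Go sketch below uses client-go to create a tenant-scoped Secret in its own namespace, so that namespace-scoped access control can keep tenants apart. The namespace, secret name, and credentials are placeholders, and the sketch is not tied to any of the tools the study compared.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster configuration; for local use, clientcmd could load a kubeconfig instead.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Hypothetical tenant namespace: each tenant gets its own namespace, so that
	// namespace-scoped RBAC can keep one tenant from reading another's secrets.
	const tenantNamespace = "tenant-a"

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "db-credentials", Namespace: tenantNamespace},
		Type:       corev1.SecretTypeOpaque,
		StringData: map[string]string{"username": "app", "password": "change-me"},
	}

	created, err := clientset.CoreV1().Secrets(tenantNamespace).Create(context.TODO(), secret, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created secret", created.Name)
}
```
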
5

Testing the Security of a Kubernetes Cluster in a Production Environment

Giangiulio, Francesco, Malmberg, Sébastien January 2022 (has links)
Enterprise-grade Kubernetes solutions offered by large corporations such as Microsoft have become very popular in recent years. To protect the integrity of customer information, which resides on shared resources in the Kubernetes cloud, adequate security measures need to be in place. The requirement of staying up to date with the most recent security implementations and vulnerabilities presented in the literature is analyzed. The research is conducted specifically for the company Precio Fishbone, which sells Omnia, an application that improves the usage of Microsoft services; a penetration test is performed on a Kubernetes cluster in a production environment maintained by the company. Because customers can alter Omnia themselves, we assume an attack model where a customer is able to add malicious code to Omnia. We show that it is possible to extract information from other customers within the same Kubernetes cluster and highlight which measures need to be taken to prevent the vulnerabilities.
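
One standard measure against the kind of cross-tenant access demonstrated in the thesis is namespace-scoped RBAC. The Go sketch below, using client-go, creates a Role and RoleBinding that only allow a customer's own service account to read Secrets in its own namespace; the namespace and object names are assumptions, and the thesis's concrete recommendations are not reproduced here.

```go
package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const ns = "customer-a" // hypothetical per-customer namespace

	// Role that only allows reading Secrets inside the customer's own namespace.
	role := &rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{Name: "read-own-secrets", Namespace: ns},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{""},
			Resources: []string{"secrets"},
			Verbs:     []string{"get", "list"},
		}},
	}

	// Bind the Role to the namespace's default service account; because a Role
	// (unlike a ClusterRole) is namespaced, the binding cannot grant access to
	// Secrets that belong to other customers' namespaces.
	binding := &rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "read-own-secrets", Namespace: ns},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "default",
			Namespace: ns,
		}},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "Role",
			Name:     "read-own-secrets",
		},
	}

	if _, err := clientset.RbacV1().Roles(ns).Create(context.TODO(), role, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	if _, err := clientset.RbacV1().RoleBindings(ns).Create(context.TODO(), binding, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```
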
6

Container Orchestration in Security Demanding Environments at the Swedish Police Authority

Abdelmassih, Christian January 2018 (has links)
The adoption of containers and container orchestration in cloud computing is motivated by many aspects, from technical and organizational to economic gains. In this climate, even security demanding organizations are interested in such technologies but need reassurance that their requirements can be satisfied. The purpose of this thesis was to investigate how separation of applications could be achieved with Docker and Kubernetes such that it may satisfy the demands of the Swedish Police Authority. The investigation consisted of a literature study of research papers and official documentation as well as a technical study of iterative creation of Kubernetes clusters with various changes. A model was defined to represent the requirements for the ideal separation. In addition, a system was introduced to classify the separation requirements of the applications. The result of this thesis consists of three architectural proposals for achieving segmentation of Kubernetes cluster networking, two proposed systems to realize the segmentation, and one strategy for providing host-based separation between containers. Each proposal was evaluated and discussed with regard to suitability and risks for the Authority and parties with similar demands. The thesis concludes that a versatile application isolation can be achieved in Docker and Kubernetes. Therefore, the technologies can provide a sufficient degree of separation to be used in security demanding environments.
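
A common building block for the kind of cluster-network segmentation the thesis proposes is a default-deny NetworkPolicy per namespace, with explicit allow rules layered on top. The Go sketch below creates such a policy with client-go; the namespace name is a placeholder, and the sketch does not reproduce the thesis's three architectural proposals.

```go
package main

import (
	"context"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const ns = "restricted-apps" // hypothetical namespace for one separation class

	// Default-deny policy: an empty pod selector matches every pod in the
	// namespace, and listing both policy types with no rules blocks all
	// ingress and egress until more specific allow policies are added.
	denyAll := &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "default-deny-all", Namespace: ns},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{},
			PolicyTypes: []networkingv1.PolicyType{
				networkingv1.PolicyTypeIngress,
				networkingv1.PolicyTypeEgress,
			},
		},
	}

	if _, err := clientset.NetworkingV1().NetworkPolicies(ns).Create(context.TODO(), denyAll, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```
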
7

High Availability in Lifecycle Management of Cloud-Native Network Functions: A Near-Zero Downtime Database Version Change Prototype

Zhang, Ziheng January 2023 (has links)
Ensuring high system availability is a crucial goal for many organizations, such as Ericsson. In this context, databases play a significant role as they represent a fundamental element that affects system availability within today's complex technological environments. Mitigating downtime and maintaining high availability during database version changes are essential to ensure seamless continuity of business and system operations, such as data transactions, queries, and administrative tasks. In this project, we developed a prototype system to facilitate near-zero downtime during database version changes, thus preserving service availability and ensuring the process remains transparent to end users. Contrary to traditional database versioning approaches in the telecommunication industry, which require extensive downtime for data backup, validation, and migration, our system applies the established Blue-Green release strategy in a novel way. It benefits from the Logical Replication feature of PostgreSQL for data synchronization and further automates it for cloud-native deployments using the Kubernetes Operator Pattern. The entire database version change operation is automated through this operator, ensuring uninterrupted external access to the system during the version change process. This approach holds significant potential to improve database management practices, leading to enhanced system availability and reliability for applications deployed on cloud-native infrastructure.
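
The core of the described approach — Blue-Green instances kept in sync with PostgreSQL logical replication — boils down to a publication on the old instance and a subscription on the new one. The Go sketch below issues that SQL through database/sql with the lib/pq driver; host names, credentials, and object names are placeholders, and the operator automation around it is omitted.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // PostgreSQL driver registered under the name "postgres"
)

func main() {
	// Placeholder DSNs for the old (blue) and new (green) database instances.
	blue, err := sql.Open("postgres", "host=blue-db user=admin password=secret dbname=app sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer blue.Close()

	green, err := sql.Open("postgres", "host=green-db user=admin password=secret dbname=app sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer green.Close()

	// On the blue (publisher) side: publish all tables so changes stream out
	// while the blue instance keeps serving traffic.
	if _, err := blue.Exec(`CREATE PUBLICATION app_pub FOR ALL TABLES`); err != nil {
		log.Fatal(err)
	}

	// On the green (subscriber) side: subscribe to the publication. The initial
	// table contents are copied, after which changes replicate continuously,
	// so traffic can be switched to green once it has caught up.
	if _, err := green.Exec(`CREATE SUBSCRIPTION app_sub
		CONNECTION 'host=blue-db user=replicator password=secret dbname=app'
		PUBLICATION app_pub`); err != nil {
		log.Fatal(err)
	}
}
```
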
8

Performance Analysis of the Impact of Vertical Scaling on Application Containerized with Docker: Kubernetes on Amazon Web Services - EC2

Midigudla, Dhananjay January 2019 (has links)
Containers are widely used as a base technology for packaging applications, and microservice architecture is gaining popularity for deploying large-scale applications, with containers running different parts of the application. Because the load on a service varies, compute resources allocated to containerized applications need to be scaled up or down to maintain application performance. Objectives: To evaluate the impact of vertical scaling on the performance of a containerized application deployed with Docker and Kubernetes, including identifying the performance metrics that are most affected and thereby characterizing any negative effects of vertical scaling. Method: A literature study on Kubernetes and Docker containers, followed by a proposed vertical-scaling solution that can add or remove compute resources such as CPU and memory for the containerized application. Results and Conclusions: Latency and connect times were the performance metrics analyzed for the containerized application. From the obtained results, it was concluded that vertical scaling has no significant impact on the performance of a containerized application in terms of latency and connect times.
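
As a minimal sketch of what vertical scaling means in this setting, the Go program below patches the CPU and memory requests and limits of a Deployment's container through the Kubernetes API with client-go. The Deployment and container names and the resource values are assumptions; the thesis's own scaling solution is not reproduced.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Strategic-merge patch that raises CPU and memory for the container named
	// "web" in the hypothetical Deployment "demo-app". Changing these values
	// triggers a rolling restart of the pods with the new resource allocation.
	patch := []byte(`{
	  "spec": {"template": {"spec": {"containers": [{
	    "name": "web",
	    "resources": {
	      "requests": {"cpu": "500m", "memory": "512Mi"},
	      "limits":   {"cpu": "1",    "memory": "1Gi"}
	    }
	  }]}}}
	}`)

	_, err = clientset.AppsV1().Deployments("default").Patch(
		context.TODO(), "demo-app", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("vertical scaling patch applied to demo-app")
}
```
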
9

Evolving geospatial applications: from silos and desktops to Microservices and DevOps

Gao, Bing 30 April 2019 (has links)
The evolution of software applications from single desktops to sophisticated cloud-based systems is challenging. In particular, applications that involve massive data sets, such as geospatial and data science applications, are challenging for domain experts who suddenly find themselves constructing these sophisticated code bases. Relatively new software practices, such as microservice infrastructure and DevOps, give us an opportunity to improve development, maintenance, and efficiency across the entire software lifecycle. Microservices and DevOps have been adopted by software developers in the past few years, as they relieve many of the burdens associated with software evolution. Microservices is an architectural style that structures an application as a collection of services. DevOps is a set of practices that automates the processes between software development and IT teams in order to build, test, and release software faster and more reliably. Combined with lightweight virtualization solutions, such as containers, these technologies not only improve response rates in cloud-based solutions but also drastically improve the efficiency of software development. This thesis studies two applications that apply microservices and DevOps within a domain-specific setting. The advantages and disadvantages of microservice architecture and DevOps are evaluated through design and development on two different platforms: a batch-based cloud system and a general-purpose cloud environment.
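
As a minimal illustration of the microservice style described above — an application structured as a collection of small, independently deployable services — the Go sketch below implements one hypothetical service with a health endpoint and a single domain endpoint, the kind of unit a container image and a DevOps pipeline would build and ship.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// A microservice owns one narrow capability and exposes it over the network.
// This hypothetical "catalog" service is one such small, independently
// deployable unit; a real geospatial system would be composed of many.
func main() {
	mux := http.NewServeMux()

	// Liveness endpoint, typically probed by the orchestrator (e.g. Kubernetes).
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// One domain endpoint returning a list of available data sets (dummy data).
	mux.HandleFunc("/datasets", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode([]string{"coastline-2019", "landsat-tiles"})
	})

	log.Println("catalog service listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```
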
10

Resource utilization comparison of Cassandra and Elasticsearch

Selander, Nizar January 2019 (has links)
Elasticsearch and Cassandra are two of the most widely used databases today, with Elasticsearch seeing a recent resurgence due to its unique full-text search feature, akin to that of a search engine, in contrast to the conventional query-language-based methods used for data searching and retrieval. The demand for more powerful and better-performing, yet more feature-rich and flexible, databases is ever growing. This project studies how the two databases perform under a specific workload of 2,000,000 fixed-size logs, in an environment where the two can be compared while keeping the results of the experiment meaningful for the production environment they are intended for. A total of three benchmarks were carried out: an Elasticsearch deployment using the default configuration, and two Cassandra deployments, one with the default configuration and one with a modified configuration reflecting a setup currently running in production for the task at hand. The benchmarks showed very interesting performance differences in terms of CPU, memory, and disk space usage. Elasticsearch showed the best overall performance, using significantly less memory and disk space, as well as somewhat less CPU. However, the benchmarks were done with a very specific set of configurations and a very specific data set and workload, and these limitations should be considered when comparing the benchmark results.
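
For a sense of what such a workload looks like, the Go sketch below posts fixed-size JSON log documents to Elasticsearch's REST index endpoint and reports throughput. The endpoint URL, index name, document shape, and the scaled-down document count are assumptions; the thesis's actual benchmark harness and the Cassandra side are not reproduced.

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	const (
		esURL = "http://localhost:9200" // assumed local Elasticsearch endpoint
		index = "logs"
		docs  = 1000 // scaled down from the thesis's 2,000,000 documents
	)

	// Fixed-size log document, mirroring the "fixed-size logs" workload shape.
	body := []byte(`{"level":"INFO","service":"demo","message":"request handled in 12ms"}`)

	start := time.Now()
	for i := 0; i < docs; i++ {
		resp, err := http.Post(esURL+"/"+index+"/_doc", "application/json", bytes.NewReader(body))
		if err != nil {
			log.Fatal(err)
		}
		resp.Body.Close()
	}
	elapsed := time.Since(start)
	fmt.Printf("indexed %d documents in %s (%.0f docs/s)\n",
		docs, elapsed, float64(docs)/elapsed.Seconds())
}
```
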
