About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Cloud-native storage solutions for Kubernetes : A performance comparison

Andersson, Filip January 2023 (has links)
Kubernetes is a container orchestration system that has been rising in popularity in recent years. The modular nature of Kubernetes allows the use of different storage solutions, and for cloud environments, cloud-native distributed storage solutions may be attractive due to their redundant nature. There are many tools for cloud-native distributed storage available on the market today, with differing features and performance, and choosing the right one for an organisation can be difficult. Organisations utilising Kubernetes in cloud environments would like to be as performance-efficient as possible to save on costs and resources. This study offers a benchmark and analysis of some of the most popular tools, to help organisations choose the 'best' solution for their operational needs from a performance perspective. The benchmarks compare three cloud-native distributed storage solutions, OpenEBS, Portworx, and Rook-Ceph, on both Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS). For a baseline comparison, the study also benchmarks the cloud providers' own solutions: Azure Disk Storage and Amazon Elastic Block Store. The study compares these solutions on three key metrics, bandwidth, latency, and IOPS, in both read and write performance. / There is additional digital material (e.g. film, image, or audio files) or models/artifacts belonging to the thesis that need to be archived.
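
The abstract names the metrics but not the tooling; a common choice for this kind of benchmark is fio run inside a pod against a PersistentVolumeClaim provisioned by each storage class under test. The sketch below is illustrative only and assumes fio 3.x in the container image; the job parameters (block size, queue depth, runtime) are placeholders, not the study's actual configuration.

```python
import json
import subprocess

def run_fio(target_file: str, mode: str, runtime_s: int = 60) -> dict:
    """Run a single fio job and return bandwidth, latency, and IOPS.

    mode is an fio rw value such as 'randread' or 'randwrite'.
    JSON key names follow fio 3.x output; adjust for other versions.
    """
    cmd = [
        "fio",
        "--name=k8s-storage-bench",
        f"--filename={target_file}",
        f"--rw={mode}",
        "--bs=4k",            # illustrative block size
        "--iodepth=32",       # illustrative queue depth
        "--ioengine=libaio",
        "--direct=1",         # bypass the page cache
        "--size=1g",
        f"--runtime={runtime_s}",
        "--time_based",
        "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    job = json.loads(out.stdout)["jobs"][0]
    side = "read" if "read" in mode else "write"
    return {
        "bw_kib_s": job[side]["bw"],                          # bandwidth in KiB/s
        "iops": job[side]["iops"],
        "clat_mean_us": job[side]["clat_ns"]["mean"] / 1000,  # completion latency
    }

if __name__ == "__main__":
    # /data would be the mount point of a PVC backed by the storage
    # class under test (OpenEBS, Portworx, Rook-Ceph, or the baseline).
    for mode in ("randread", "randwrite"):
        print(mode, run_fio("/data/fio.test", mode))
```

Repeating this job per storage class and per cloud provider yields directly comparable read and write figures for all three metrics.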
92

Predictive vertical CPU autoscaling in Kubernetes based on time-series forecasting with Holt-Winters exponential smoothing and long short-term memory / Prediktiv vertikal CPU-autoskalning i Kubernetes baserat på tidsserieprediktion med Holt-Winters exponentiell utjämning och långt korttidsminne

Wang, Thomas January 2021 (has links)
Private and public clouds require users to specify requests for resources such as CPU and memory (RAM) to be provisioned for their applications. The values of these requests do not necessarily relate to the application's run-time requirements; they only help the cloud infrastructure resource manager map requested virtual resources to physical resources. If an application exceeds these values, it may be throttled or even terminated. Consequently, requested values are often overestimated, resulting in poor resource utilisation in the cloud infrastructure. Autoscaling is a technique used to overcome these problems. In this research, we formulated two new predictive CPU autoscaling strategies for Kubernetes containerized applications, using time-series analysis based on Holt-Winters exponential smoothing and long short-term memory (LSTM) artificial recurrent neural networks. The two approaches were analyzed, and their performance was compared to that of the default Kubernetes Vertical Pod Autoscaler (VPA). Efficiency was evaluated in terms of CPU resource wastage and insufficient CPU (both percentage and amount) for container workloads from the Alibaba Cluster Trace 2018, among others. In our experiments, we observed that the Kubernetes VPA tended to perform poorly on workloads that change periodically. Our results showed that, compared to VPA, predictive methods based on Holt-Winters exponential smoothing (HW) and LSTM can decrease CPU wastage by over 40% while avoiding CPU insufficiency for various CPU workloads. Furthermore, LSTM was shown to generate more stable predictions than HW, which allowed for more robust scaling decisions.
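
The abstract does not reproduce the forecasting setup; as an illustration only, here is a minimal sketch of the Holt-Winters half of such a strategy using statsmodels. The hourly seasonality and the headroom factor on the recommended CPU request are assumptions, not values from the thesis.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def recommend_cpu_request(cpu_usage: np.ndarray,
                          season_len: int = 60,
                          horizon: int = 5,
                          headroom: float = 1.15) -> float:
    """Forecast near-future CPU usage and derive a container CPU request.

    cpu_usage: per-minute CPU usage samples (e.g. in millicores).
    season_len: assumed seasonal period (60 = hourly pattern in minutes).
    headroom: assumed safety margin to absorb forecast error.
    """
    model = ExponentialSmoothing(
        cpu_usage,
        trend="add",
        seasonal="add",
        seasonal_periods=season_len,
    ).fit()
    forecast = model.forecast(horizon)
    # Size the request for the predicted peak plus headroom; a vertical
    # autoscaler would then patch the pod's CPU request with this value.
    return float(forecast.max() * headroom)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(600)
    # Synthetic workload: hourly sinusoidal pattern plus noise.
    usage = 500 + 200 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 20, t.size)
    print(f"recommended CPU request: {recommend_cpu_request(usage):.0f} millicores")
```

An LSTM-based strategy would replace the model-fitting step with a trained recurrent network while keeping the same recommend-and-patch loop.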
93

Hybrid Cloud Migration Challenges. A case study at King

Boronin, Mikhail January 2020 (has links)
Migration to the cloud has been a popular topic in industry and academia in recent years. Despite the many benefits that the cloud presents, such as high availability and scalability, most on-premise application architectures are not ready to fully exploit this environment, and adapting them to it is a non-trivial task. Therefore, many organizations consider a gradual move to the cloud with a hybrid cloud architecture. In this paper, the author analyzes a particular enterprise case with respect to cloud migration topics such as cloud deployment, cloud architecture, and cloud management. The paper aims to identify, classify, and compare existing challenges in cloud migration, illustrate approaches to resolving these challenges, and discover best practices in cloud adoption and in the process of converting teams to the cloud.
94

Container Orchestration : the Migration Path to Kubernetes

Andersson, Johan, Norrman, Fredrik January 2020 (has links)
As IT platforms grow larger and more complex, so does the underlying infrastructure. Virtualization is an essential factor for more efficient resource allocation, improving both management and environmental impact. It allows more robust solutions and facilitates the use of IaC (infrastructure as code). Many systems developed today consist of containerized microservices. Considered the standard for container orchestration, Kubernetes is the natural next step for many companies. But how do we move from previous solutions to a Kubernetes cluster? We found that sufficiently detailed guidelines are scarce, and set out to gain more knowledge by diving into the subject, implementing prototypes that act as a foundation for a resulting guideline of how it can be done.
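
The abstract stops short of showing what a first migration step looks like; purely as a sketch, the official Kubernetes Python client can declare a Deployment for an already containerized service. The service name, image, and replica count below are placeholders, and in the spirit of IaC a real migration would more likely express this as version-controlled YAML manifests.

```python
from kubernetes import client, config

def make_deployment(name: str, image: str, replicas: int = 3) -> client.V1Deployment:
    """Build a minimal Deployment object for a containerized microservice."""
    container = client.V1Container(
        name=name,
        image=image,
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    pod_template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=replicas,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=pod_template,
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=name),
        spec=spec,
    )

if __name__ == "__main__":
    config.load_kube_config()  # uses the local kubeconfig
    apps = client.AppsV1Api()
    # "legacy-service" and its image are placeholders for a migrated workload.
    deployment = make_deployment("legacy-service",
                                 "registry.example.com/legacy-service:1.0")
    apps.create_namespaced_deployment(namespace="default", body=deployment)
```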
95

An Empirical Study on AI Workflow Automation for Positioning / En empirisk undersökning om automatiserat arbetsflöde inom AI för positionering

Jämtner, Hannes, Brynielsson, Stefan January 2022 (has links)
The maturing capabilities of Artificial Intelligence (AI) and Machine Learning (ML) have resulted in increased attention in research and development on adopting AI and ML in 5G and future networks. With this increased maturity, the use of AI/ML models in production is becoming more widespread, and maintaining these systems is more complex and more likely to incur technical debt than standard software, since they inherit all the complexities of traditional software in addition to ML-specific ones. To handle these complexities, the field of ML Operations (MLOps) has emerged. The goal of MLOps is to extend DevOps to AI/ML and thereby speed up development and ease maintenance of AI/ML-based software, for example by supporting automatic deployment, monitoring, and continuous re-training of models. This thesis investigates how to construct an MLOps workflow by selecting a number of tools and using them to implement a workflow. Additionally, different approaches for triggering re-training are implemented and evaluated, resulting in a comparison of the triggers with regard to execution time, memory and CPU consumption, and the average performance of the machine learning model.
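
The abstract does not enumerate the triggers that were compared; a common design, shown below as an assumption rather than the thesis's implementation, pairs a drift trigger, which fires when a rolling window of prediction error degrades past a threshold, with a simple periodic trigger that fires on a fixed schedule.

```python
import time
from collections import deque

class ErrorDriftTrigger:
    """Fire re-training when the rolling mean absolute error degrades.

    window: number of recent predictions considered.
    threshold: MAE level above which re-training is requested.
    Both values are illustrative; a real deployment would tune them.
    """

    def __init__(self, window: int = 500, threshold: float = 0.2):
        self.errors: deque[float] = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, predicted: float, actual: float) -> bool:
        self.errors.append(abs(predicted - actual))
        window_full = len(self.errors) == self.errors.maxlen
        mae = sum(self.errors) / len(self.errors)
        return window_full and mae > self.threshold

class PeriodicTrigger:
    """Fire re-training on a fixed schedule, regardless of model quality."""

    def __init__(self, interval_s: float = 24 * 3600):
        self.interval_s = interval_s
        self.last_fired = time.monotonic()

    def due(self) -> bool:
        if time.monotonic() - self.last_fired >= self.interval_s:
            self.last_fired = time.monotonic()
            return True
        return False
```

A workflow engine would poll both triggers and launch the training pipeline when either fires, which makes it straightforward to compare triggers on execution time and resource consumption, as the study does.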
