1

Evaluation of CockroachDB in a cloud-native environment

Håkansson, Kristina, Rosenqvist, Andreas January 2021 (has links)
The increased demand for large databases that scale easily and stay consistent requires service providers to find new solutions for storing data. One solution that has emerged is cloud-native databases. Service providers who can effectively transition to cloud-native databases will benefit from new enterprise applications, industrial automation, and the Internet of Things (IoT), as well as consumer services such as gaming and AR/VR. This changes the requirements on a database's architecture and infrastructure in terms of compatibility with the services deployed in a cloud-native environment - this is where CockroachDB comes into the picture. CockroachDB is relatively new and is built from the ground up to run in a cloud-native environment. It consists of nodes that work as individual machines, and these nodes form a cluster. The authors of this report aim to evaluate the characteristics of the Cockroach database to understand what it offers companies that are in a cloud-infrastructure transition phase. The report focuses on performance, throughput, stress testing, version hot-swapping, horizontal/vertical scaling, and node disruptions. To do this, a CockroachDB database was deployed on a Kubernetes cluster, against which simulated traffic was run. For the throughput measurement, the TPC-C transaction processing benchmark was used; for scaling, version hot-swapping, and node disruptions, an experimental method was used. The results confirm the expected outcome: CockroachDB does in fact scale easily, both horizontally and vertically, with minimal effort. Throughput also remains the same when the cluster is scaled up and out, since CockroachDB does not have a master write node, as some other databases do. CockroachDB also has built-in functionality to handle configuration changes such as version hot-swapping and node disruptions. The study concludes that CockroachDB lives up to its promises regarding the subjects covered in the report and can be seen as a robust, easily scalable database that can be deployed in a cloud-native environment.
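Because CockroachDB speaks the PostgreSQL wire protocol, clients running in the same Kubernetes cluster can use standard drivers; the sketch below shows the transaction retry pattern such workloads typically rely on. It is a minimal illustration, not the thesis' benchmark code; the DSN, table, and column names are assumptions.

```python
import time
import psycopg2
from psycopg2 import errors

# Assumed connection string for the cluster's public Kubernetes service.
DSN = "postgresql://root@cockroachdb-public:26257/defaultdb?sslmode=disable"

def run_transaction(conn, op, max_retries=5):
    """Run op(cursor) in a transaction, retrying on serialization failures (SQLSTATE 40001)."""
    for attempt in range(max_retries):
        try:
            with conn.cursor() as cur:
                op(cur)
            conn.commit()
            return
        except errors.SerializationFailure:
            conn.rollback()                 # contention: back off and retry
            time.sleep(0.1 * (attempt + 1))
    raise RuntimeError("transaction did not commit after retries")

def transfer(cur):
    # Hypothetical workload: move funds between two accounts.
    cur.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
    cur.execute("UPDATE accounts SET balance = balance + 10 WHERE id = 2")

conn = psycopg2.connect(DSN)
run_transaction(conn, transfer)
```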
2

A Comparative Evaluation of Failover Mechanisms for Mission-critical Financial Applications in Public Clouds

Gustavsson, Albert January 2023 (has links)
Computer systems can fail for a vast range of reasons, and handling failures is crucial to any critical computer system. Many modern computer systems are migrating to public clouds, which provide more flexible resource consumption and in many cases reduced costs, while the migration can also require system changes due to limitations in the provided cloud environment. This thesis evaluates a few methods of achieving failover when migrating a system to a public cloud, with the main goal of finding a replacement for failover mechanisms that can only be used in self-managed infrastructure. A few different failover methods are evaluated by looking into different aspects of how each method would change an existing system. Two methods using etcd and Apache ZooKeeper are used for experimental evaluation, where failover time is measured in two simulated scenarios in which the primary process terminates and a standby process needs to be promoted to primary status. In one scenario, the primary process is not able to notify other processes in the system before terminating; in the other, the primary process can release the primary status to another instance before terminating. The etcd and ZooKeeper solutions are shown to behave quite similarly in the testing setup, while the ZooKeeper solution might be able to achieve lower failover time in low-latency environments.
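One common way to build ZooKeeper-based failover is a leader election recipe: the standby blocks until the previous primary's session ends, at which point it is promoted. The sketch below uses the kazoo client library as one possible realization; it is not the thesis' test harness, and the hosts, election path, and identifier are assumptions.

```python
import time
from kazoo.client import KazooClient

# Assumed ZooKeeper ensemble; replace with your own hosts.
zk = KazooClient(hosts="zk-0:2181,zk-1:2181,zk-2:2181")
zk.start()

def lead():
    # Entered only once this process wins the election, i.e. after the
    # previous primary releases leadership or its session expires.
    print(f"promoted to primary at {time.time():.3f}")
    while True:
        time.sleep(1)  # primary work loop

# run() blocks until leadership is acquired, then invokes lead().
election = zk.Election("/app/primary-election", identifier="standby-1")
election.run(lead)
```

Measuring the time between the old primary's termination and the `promoted to primary` timestamp gives one simple estimate of failover time in the "no notification" scenario.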
3

Evaluation of Cloud Native Solutions for Trading Activity Analysis / Evaluering av cloud native lösningar för analys av transaktionsbaserad börshandel

Johansson, Jonas January 2021 (has links)
Cloud computing has become increasingly popular over recent years, allowing computing resources to be scaled on demand. Cloud native applications are specifically created to run on the cloud service model. Currently, there is a research gap regarding the design and implementation of cloud native applications, especially regarding how design decisions affect metrics such as execution time and scalability. The problem investigated in this thesis is whether the execution time and quality scalability, ηt, of cloud native solutions are affected when the functionality of multiple use cases is housed within the same cloud native application. In this work, a cloud native application for trading data analysis is presented, in which the functionality of three use cases is implemented: (1) creating reports of trade prices, (2) anomaly detection, and (3) analysis of relation diagrams of trades. The execution time and scalability of the application are evaluated and compared to readily available solutions, which serve as a baseline for the evaluation. The results of use cases 1 and 2 are compared to Amazon Athena, while use case 3 is compared to Amazon Neptune. The results suggest that combining functionalities into the same application can improve both the execution time and the scalability of the system; the impact depends on the use case and hardware configuration. When executing the use cases in a sequence, the mean execution time of the implemented system decreased by up to 17.2%, while the quality scalability score improved by 10.3% for use case 2. The implemented application had significantly lower execution time than Amazon Neptune but did not surpass Amazon Athena for the respective use cases. The scalability of the systems varied depending on the use case. While not surpassing the baseline in all use cases, the results show that the execution time of a cloud native system can be improved by housing the functionality of multiple use cases within one system. However, the potential performance gains differ depending on the use case and might be smaller than the gains from choosing another solution.
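Comparisons like this rest on a repeatable timing harness per use case. The fragment below is a generic sketch of such a harness; it does not reproduce the thesis' quality scalability metric ηt, and the use-case function names are placeholders.

```python
import time
import statistics

def time_use_case(fn, runs=5):
    """Return mean and standard deviation of fn's execution time over several runs."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()                                   # e.g. generate_price_report()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

def run_sequence(use_cases, runs=5):
    """Time each use case when they are executed in a sequence, as in the evaluation."""
    return {name: time_use_case(fn, runs) for name, fn in use_cases.items()}

# Placeholder use-case callables; in practice these would invoke the implemented
# application and the Athena/Neptune baselines respectively.
results = run_sequence({
    "price_reports":    lambda: time.sleep(0.1),
    "anomaly_detection": lambda: time.sleep(0.2),
    "relation_diagrams": lambda: time.sleep(0.3),
})
print(results)
```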
4

Cloud native design of IoT baseband functions : Introduction to cloud native principles / Cloud native design av IoT basebandfunktioner : Introduktion till molnprinciper

Bakthavathsalu, Lalith Kumar January 2020 (has links)
The exponential growth of research on and deployment of 5G networks has led to an increased interest in massive Machine Type Communications (mMTC), as we are on a quest to connect all devices. This can be attributed to the constant development of long-distance, low-power Internet-of-Things (IoT) technologies, or Low Power Wide Area Network (LPWAN) technologies, such as Long Range (LoRa) and Narrowband IoT (NB-IoT). These technologies are gaining prominence in the IoT domain, as the number of LPWAN-connected devices doubled from 2018 to 2019. This increase in devices warrants a proportional number of gateways to push the data to the Internet for further analytics. Traditional LPWAN architectures do not provide dynamic scaling of resources or energy-efficient solutions. Thus, a Cloud-Native (CN) split architecture based on the functional characteristics of the components is a necessity. In this work, a software-based implementation of the LoRa stack on GNU Radio is designed and implemented using Software-Defined Radio (SDR). The LoRa gateway is implemented completely in software, replicating the functions of the hardware for communicating with any LoRa Network Server. Several experiments with different setups were performed on the testbed to measure the resource utilization and packet delay of the LoRa Physical (PHY) and Medium Access Control (MAC) layers. The testbed was also moved into Docker containers to emulate a cloud-based platform and make the transition faster. Higher throughput and lower delay (improvements in the range of 1.3x-6.7x) were recorded upon splitting the testbed into Radio Head (RH) and Edge containers. Finally, three potential functional split architectures including the gateway are discussed, providing a fair trade-off between pooling gain and consumed bandwidth for a CN split architecture.
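Resource utilization of the split containers can be sampled directly from the container runtime. The sketch below is one simple way to snapshot per-container CPU and memory usage with `docker stats`; the container names are assumptions, not the testbed's actual names.

```python
import subprocess

def container_stats():
    """Snapshot CPU and memory usage of running containers via `docker stats`."""
    out = subprocess.run(
        ["docker", "stats", "--no-stream", "--format",
         "{{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    rows = (line.split("\t") for line in out.strip().splitlines())
    return {name: {"cpu": cpu, "mem": mem} for name, cpu, mem in rows}

# Hypothetical names for the two halves of the split testbed.
stats = container_stats()
for name in ("lora-radio-head", "lora-edge"):
    print(name, stats.get(name))
```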
5

Evaluation of MLOps Tools for Kubernetes : A Rudimentary Comparison Between Open Source Kubeflow, Pachyderm and Polyaxon

Köhler, Anders January 2022 (has links)
MLOps and Kubernetes are two major components of the modern-day information technology landscape, and their impact on the field is likely to grow even stronger in the near future. A multitude of tools have been developed to facilitate the effortless creation of cloud native MLOps solutions, and many of them are designed, to varying degrees, to integrate with the Kubernetes system. While numerous evaluations have been conducted on these tools from a general MLOps perspective, this thesis aims to evaluate their qualities specifically within a Kubernetes context, with the focus on their integration into this ecosystem. The evaluation is conducted in two steps: an MLOps market overview study and an in-depth MLOps tool evaluation. The former is a macroscopic overview of currently available MLOps tooling, whereas the latter delves into the practical aspects of deploying three Kubernetes-native, open-source MLOps platforms on cloud-based Kubernetes clusters. The platforms are Kubeflow, Pachyderm, and Polyaxon, and they are evaluated in terms of functionality, usability, vitality, and performance.
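A practical first check after installing such a platform on a cluster is whether all of its pods become ready. The sketch below uses the official Kubernetes Python client for that check; the namespace names are assumptions about where each platform installs, not something prescribed by the thesis.

```python
from kubernetes import client, config

def pods_ready(namespace):
    """Count ready vs. total pods in one namespace, printing the stragglers."""
    v1 = client.CoreV1Api()
    ready, total = 0, 0
    for pod in v1.list_namespaced_pod(namespace=namespace).items:
        total += 1
        statuses = pod.status.container_statuses or []
        if statuses and all(cs.ready for cs in statuses):
            ready += 1
        else:
            print(f"not ready: {pod.metadata.name} ({pod.status.phase})")
    return ready, total

config.load_kube_config()  # uses the current kubectl context
for ns in ("kubeflow", "pachyderm", "polyaxon"):  # assumed namespaces
    print(ns, pods_ready(ns))
```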
6

HAALO : A cloud native hardware accelerator abstraction with low overhead

Facchetti, Jeremy January 2019 (has links)
With the upcoming 5G deployment and the exponentially increasing data transmitted over cellular networks, off-the-shelf hardware will not provide enough performance to cope with the data being transferred. To tackle that problem, hardware accelerators will be of great support thanks to their better performance and lower energy consumption. However, hardware accelerators are not a silver bullet, as their very nature prevents them from being as flexible as CPUs. Hardware accelerator integration into Kubernetes and Docker, respectively the most used tools for orchestration and containerization, is still not as flexible as it needs to be. In this thesis, we developed a framework that allows a more flexible integration of these accelerators into a Kubernetes cluster using Docker containers, making use of an abstraction layer instead of the classic virtualization process. Our results compare the performance of an execution with and without the framework developed during this thesis. We found that the framework's overhead depends on the size of the data being processed by the accelerator but never exceeds a very low percentage of the total execution time. The framework provides an abstraction for hardware accelerators and thus an easy way to integrate hardware-accelerated applications into a heterogeneous cluster, or even across different clusters with different hardware accelerator types. It also moves the hardware-specific parts of an accelerated program from the containers to the infrastructure and enables a new kind of service: OpenCL as a service.
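The kind of workload an "OpenCL as a service" layer would abstract looks roughly like the sketch below: a kernel launch whose end-to-end time grows with the size of the data moved to and from the device. It is a generic pyopencl example under assumed kernel and buffer sizes, not code from the HAALO framework.

```python
import time
import numpy as np
import pyopencl as cl

# Trivial kernel: scale every element of the input buffer.
KERNEL_SRC = """
__kernel void scale(__global const float *in, __global float *out) {
    int gid = get_global_id(0);
    out[gid] = 2.0f * in[gid];
}
"""

def timed_run(n):
    """Run the kernel on n floats and return wall-clock time, transfers included."""
    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    prg = cl.Program(ctx, KERNEL_SRC).build()

    host_in = np.random.rand(n).astype(np.float32)
    host_out = np.empty_like(host_in)
    mf = cl.mem_flags
    buf_in = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=host_in)
    buf_out = cl.Buffer(ctx, mf.WRITE_ONLY, host_out.nbytes)

    start = time.perf_counter()
    prg.scale(queue, host_in.shape, None, buf_in, buf_out)
    cl.enqueue_copy(queue, host_out, buf_out)
    queue.finish()
    return time.perf_counter() - start

for n in (1 << 16, 1 << 20, 1 << 24):
    print(n, timed_run(n))
```

Comparing such timings with and without an abstraction layer in the path is one way to express its overhead as a percentage of total execution time, as the thesis does.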
7

Forensic Analysis of G Suite Collaborative Protocols

McCulley, Shane 09 August 2017 (has links)
Widespread adoption of cloud services is fundamentally changing the way IT services are delivered and how data is stored. Current forensic tools and techniques have been slow to adapt to the new challenges and demands of collecting and analyzing cloud artifacts. Traditional methods focusing only on client data collection are incomplete, as the client may have only a (partial) snapshot and miss cloud-native artifacts that may contain valuable historical information. In this work, we demonstrate the importance of recovering and analyzing cloud-native artifacts using G Suite as a case study. We develop a tool that extracts and processes the history of Google Documents and Google Slides by reverse engineering the web application's private protocol. Combined with previous work that has focused on API-based acquisition of cloud drives, this presents a more complete solution to cloud forensics and is generalizable to any cloud service that maintains a detailed log of revisions.
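For comparison with the private-protocol approach, the public Drive v3 API only exposes coarse revision metadata, which is the "API-based acquisition" the abstract contrasts against. A minimal sketch of that public-API side is shown below; `creds` is assumed to be an already-authorized OAuth credentials object obtained elsewhere.

```python
from googleapiclient.discovery import build

def list_revisions(creds, file_id):
    """List coarse revision metadata for one Drive file via the public Drive v3 API.

    This does not recover the fine-grained edit history that reverse engineering
    the editors' private protocol yields; it only shows what the public API offers.
    """
    service = build("drive", "v3", credentials=creds)
    resp = service.revisions().list(
        fileId=file_id,
        fields="revisions(id, modifiedTime, lastModifyingUser/displayName)",
    ).execute()
    return resp.get("revisions", [])
```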
8

Performance Modelling and Simulation of Service Chains for Telecom Clouds

Gokan Khan, Michel January 2021 (has links)
New services and ever-increasing traffic volumes require the next generation of mobile networks, e.g. 5G, to be much more flexible and scalable. The primary enabler of this flexibility is transforming network functions from proprietary hardware to software using modern virtualization technologies, paving the way for virtual network functions (VNFs). Such VNFs can then be flexibly deployed on cloud data centers, while traffic is routed along a chain of VNFs through software-defined networks. However, this flexibility comes with the new challenge of efficiently allocating computational resources to each VNF and optimally placing them on a cluster. In this thesis, we argue that achieving an autonomous and efficient performance optimization method requires a solid understanding of the underlying system, service chains, and upcoming traffic. We therefore conducted a series of focused studies to address the scalability and performance issues in three stages. We first introduce an automated profiling and benchmarking framework, named NFV-Inspector, to measure and collect system KPIs as well as extract various insights from the system. Then, we propose systematic methods and algorithms for performance modelling and resource recommendation of cloud native network functions and evaluate them on a real 5G testbed. Finally, we design and implement a bottom-up performance simulator named PerfSim to approximate the performance of service chains based on the nodes' performance models and user-defined scenarios. (Article 5, included in the thesis as a manuscript, has since been published.)
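The core idea of a bottom-up chain simulator can be illustrated with a toy model: give each VNF in the chain a per-node service-time model and sum sampled service times along the chain. The sketch below is only that toy illustration; the node names, numbers, and the exponential service-time assumption are mine, not PerfSim's actual models.

```python
import random

# Toy per-node performance models: mean service time in ms, scaled by CPU allocation.
CHAIN = [
    {"name": "firewall", "base_ms": 0.8, "cpu_share": 1.0},
    {"name": "nat",      "base_ms": 0.5, "cpu_share": 0.5},
    {"name": "dpi",      "base_ms": 2.0, "cpu_share": 2.0},
]

def end_to_end_latency():
    """Sample one end-to-end latency for the whole service chain."""
    total = 0.0
    for vnf in CHAIN:
        mean = vnf["base_ms"] / vnf["cpu_share"]   # more CPU -> faster service
        total += random.expovariate(1.0 / mean)    # exponential service time
    return total

samples = [end_to_end_latency() for _ in range(10_000)]
print("mean chain latency (ms):", sum(samples) / len(samples))
```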
9

Extending the Kubernetes operator Kubegres to handle database restoration from dump files

Bemm, Rickard January 2023 (has links)
The use of cloud-native technologies has grown in popularity in recent years. With its ability to take advantage of the full benefits of cloud computing, cloud-native architecture has become a hot topic among developers and IT professionals. It refers to building and running applications using cloud services and architectures, including containerization, microservices, and automation tools such as Kubernetes, to enable fast and continuous delivery of software applications. In Kubernetes, the desired state of a resource is described declaratively, and Kubernetes handles the details of how to get there. Databases are notoriously hard to deploy in such environments, and the Kubernetes operator pattern extends the resources Kubernetes manages and defines how to reach the desired state through a reconcile function. Operators exist that manage PostgreSQL databases with backup and restore functionality, and some require a license. Kubegres is a free-to-use, open-source operator, but it lacks restore functionality. This thesis aims to extend the Kubegres operator to support database restoration from dump files. It covers how to create the restore process in Kubernetes, what modifications must be made to the current architecture, and how to make the reconcile function robust and self-healing yet customizable enough to fit many different needs. The designs of other operators that already support database restoration were studied, and they inspired the design of the resource definition and the restoration process. A new resource definition was added to define the desired state of the database restoration, along with a new reconcile function that defines how to act on it. The desired state is re-created each time the reconcile function is triggered. During a restoration, a new database is always the target; once the restoration completes, the resources used to restore it are deleted, and only the PostgreSQL database is left. The performance impact of the modified operator compared to the original was measured to evaluate it. The tests consisted of operations that both versions of the operator support, including PostgreSQL database creation, cluster scaling, and changing resource limits. The two collected metrics, CPU and memory usage, increased by 0.058-0.4 mvCPU (12-33%) and 8.2 MB (29%), respectively. A qualitative evaluation of the operator against qualities such as robustness, self-healing, customizability, and correctness showed that the design fulfils most of them.
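In this pattern, a user declares the desired restoration as a custom resource and the operator's reconcile function drives the cluster toward it. The sketch below creates such a resource with the Kubernetes Python client; the group, version, kind, and spec fields are hypothetical placeholders in the spirit of the extension, not the actual Kubegres API.

```python
from kubernetes import client, config

# Hypothetical restore resource; field names are assumptions for illustration only.
restore = {
    "apiVersion": "example.org/v1",
    "kind": "PostgresRestore",
    "metadata": {"name": "restore-from-dump"},
    "spec": {
        "dumpFile": {"pvcName": "dump-files", "path": "backup.sql"},
        "targetCluster": {"name": "restored-db", "replicas": 3},
    },
}

config.load_kube_config()
api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="example.org", version="v1",
    namespace="default", plural="postgresrestores",
    body=restore,
)
```

The operator would then watch resources of this kind, run the restoration into a fresh database, and delete the temporary restore resources once the target database is healthy.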
10

Managing Microservices with a Service Mesh : An implementation of a service mesh with Kubernetes and Istio

Mara Jösch, Ronja January 2020 (has links)
The adoption of microservices facilitates extending computer systems in size, complexity, and distribution. Alongside their benefits, microservices introduce the possibility of partial failures. Besides focusing on the business logic, developers have to tackle cross-cutting concerns of service-to-service communication, which now define the applications' reliability and performance. Currently, developers use libraries embedded in the application code to address these concerns. However, this increases the complexity of the code and requires the maintenance and management of various libraries. The service mesh is a relatively new technology that may enable developers to stay focused on their business logic. This thesis investigates one of the available service meshes, Istio, to identify its benefits and limitations. The main benefits found are that Istio adds resilience and security, allows features that are currently difficult to implement, and enables a cleaner structure and a standard implementation of features within and across teams. The drawbacks are that it decreases performance by increasing CPU usage, memory usage, and latency. Furthermore, Istio's main disadvantage is its limited testing tools. Based on the findings, the company's Webcore Infra team can make a more informed decision on whether or not to introduce Istio.
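The latency cost of routing traffic through mesh sidecars can be estimated by timing the same endpoint with and without the mesh in the path. The sketch below is a generic client-side measurement, not the thesis' benchmark; the service URLs are assumptions.

```python
import statistics
import time
import requests

def latency_ms(url, n=200):
    """Mean request latency in milliseconds against one service endpoint."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        requests.get(url, timeout=5)
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.mean(samples)

# Assumed URLs: the same workload deployed outside and inside the mesh.
print("baseline :", latency_ms("http://service-plain.default.svc.cluster.local/health"))
print("with mesh:", latency_ms("http://service-meshed.default.svc.cluster.local/health"))
```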
