About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Fehleranalyse in Microservices mithilfe von verteiltem Tracing / Fault Analysis in Microservices Using Distributed Tracing

Sinner, Robin Andreas 26 April 2022 (has links)
With the architectural concept of microservices and the growing number of heterogeneous services, new challenges arise for debugging, monitoring, and testing such applications. Distributed tracing offers one approach to meeting these challenges. The goal of this thesis is to investigate how distributed tracing can be used for automated fault analysis of microservices. To this end, the following research question is posed: How can traces be evaluated in order to identify the root causes of faults when testing microservices? To answer this question, a data format for the automated evaluation of tracing data was defined. For the evaluation, algorithms were designed that resolve the error propagation between services on the basis of causal relationships. This approach was implemented as a prototype in Python and its functionality was evaluated. The results show that in around 77% of the test scenarios carried out, the root cause could be correctly derived from the tracing data with the help of the prototype. Without the prototype and without further debugging, the root cause could be identified from the application's own error output in only about 5% of the test scenarios. The concept and the prototype thus make debugging Python-based microservice applications easier.
Contents: 1. Introduction 1.1. Motivation 1.2. Scope 1.3. Methodology 2. Fundamentals 2.1. Related Work 2.1.1. Automated Analysis of Tracing Information 2.1.2. Automated Root Cause Analysis 2.1.3. Root Cause Analysis in Microservices 2.1.4. Root Cause Analysis of Runtime Errors in Distributed Systems 2.1.5. A Tracing Tool for Error Detection 2.2. Theoretical Foundations 2.2.1. Microservices 2.2.2. Distributed Tracing 2.2.3. OpenTracing 2.2.4. Jaeger 2.2.5. Example Application for the Investigations 2.2.6. Continuous Integration/Continuous Delivery/Continuous Deployment 3. Design 3.1. Definition of the Data Format 3.1.1. Analysis of the Data Format of the OpenTracing Specification 3.1.2. Extensions 3.1.3. Resulting Data Format for Automated Evaluation 3.1.4. Clock Skew in Distributed Systems 3.2. Algorithms for Root Cause Analysis 3.2.1. Construction of a Dependency Graph 3.2.2. Path-Based Investigation of Root Causes 3.2.3. Evaluation by Temporal Order and Causal Relationship 3.2.4. Scoring of Potential Root Causes 3.3. Design of the Prototype 3.3.1. Integration into the Development Cycle 3.3.2. Functional Requirements 3.3.3. Architecture of the Prototype 4. Implementation 4.1. Implementation of the Root Cause Analysis Prototype 4.2. Integration of the Prototype into Test Scenarios and Continuous Integration 4.3. Tests for Evaluating the Prototype 5. Results 5.1. Evaluation of the Concept/Prototype 5.2. Evaluation of the Root Cause Scoring Methods 5.3. Economic Considerations 6. Conclusion/Outlook 6.1. Conclusion 6.2. Outlook Bibliography Declaration of Authorship A. Figures A.1. Mockups of the Prototype's Evaluation Reports B. Tables B.1. Tag Fields According to OpenTracing B.2. Log Fields According to OpenTracing B.3. Evaluation of the Test Results C. Listings C.1. Data Format for Automated Evaluation C.2. Definition of Evaluation Rules D. Instructions for the Prototype
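The causal-relationship idea described in this abstract can be sketched in a few lines of Python. This is a minimal illustration, not the thesis's prototype: a span marked with an error whose causally dependent child spans all succeeded is treated as a root-cause candidate, and the earliest such candidate wins. The span fields and service names are made up for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Span:
    span_id: str
    parent_id: Optional[str]
    service: str
    start_us: int
    error: bool = False

def root_cause(spans):
    """Pick the most likely root-cause span: an error span none of whose
    direct children failed (the error originated there, not below it)."""
    def has_failing_child(s):
        return any(c.error for c in spans if c.parent_id == s.span_id)
    candidates = [s for s in spans if s.error and not has_failing_child(s)]
    # Prefer the earliest failure if several leaves failed independently.
    return min(candidates, key=lambda s: s.start_us) if candidates else None

spans = [
    Span("a", None, "gateway", 0, error=True),    # error propagated up
    Span("b", "a", "orders", 10, error=True),     # error propagated up
    Span("c", "b", "inventory", 20, error=True),  # original failure
    Span("d", "a", "payments", 15, error=False),
]
print(root_cause(spans).service)  # -> inventory
```

The real prototype additionally scores candidates and compensates for clock skew between hosts; this sketch assumes perfectly ordered timestamps.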
2

Automatisierte Anwendung von Chaos Engineering Methoden zur Untersuchung der Robustheit eines verteilten Softwaresystems / Automated Application of Chaos Engineering Methods to Investigate the Robustness of a Distributed Software System

Hampel, Brian 13 April 2022 (has links)
Distributed software systems exhibit very complex behavior under real operating conditions, which usually also results in very complex failure states arising from operation under adverse network conditions such as high latency and increasing packet loss. These failure states can no longer be adequately provoked, tested, and verified with conventional software testing techniques such as unit and integration tests. With the method of chaos engineering, complex chaos scenarios are designed that make it possible to explore this unknown behavior of the software in edge cases in a structured way. Using the example of a distributed software system that has been under development at the German Aerospace Center (DLR) for more than ten years, chaos engineering methods are applied, placed conceptually among existing software testing techniques, and tried out in practice in an experimental cloud environment. In an expert interview with the RCE developers, a chaos scenario is designed in which the robustness of the software is put to the test with chaos experiments. Building on a software project for the automatic creation of RCE test networks, a software solution is developed that enables the automatic execution of chaos scenarios within the experimental cloud environment. The chaos scenario resulting from the expert interviews is then carried out in practice.
Finally, the findings from executing the chaos scenario are presented, and follow-up questions and future work are outlined.
Contents: 1 Introduction 2 Fundamentals 2.1 Software Development and Testing Techniques 2.2 Distributed Software 2.3 Container Orchestration 2.4 Chaos Engineering 3 System Under Consideration 3.1 Remote Component Environment 3.2 Testing of RCE Releases 3.3 The Expert Interview Method 3.4 Designing the Questions 3.5 Interview Results 3.6 Integration of Chaos Engineering 4 Chaos Engineering Concepts by Example 4.1 Initial Situation 4.1.1 System Environment 4.1.2 Automated Creation of Test Networks 4.1.3 Microservices 4.1.4 System Architecture 4.1.5 Network Description 4.2 Requirements for the Software to Be Developed 4.3 Extension of the Existing Overall System 4.3.1 Chaos Mesh 4.4 Chaos Operator Microservice 4.4.1 Extension of the System Architecture 4.4.2 Extension of the Interfaces 4.4.3 Description of a Chaos Experiment 4.4.4 Probes 4.4.5 Flow Control 5 Evaluation and Discussion 5.1 Planned Chaos Scenario 5.1.1 JSON Description of a Chaos Scenario 5.2 Execution of the Designed Chaos Scenario 5.2.1 Execution with the Chaos Sequencer 5.2.2 Validation 5.3 Results 6 Conclusion Bibliography List of Figures Listings
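The thesis describes a JSON description of chaos scenarios executed by a chaos sequencer. A minimal sketch of that execution loop is shown below; the JSON schema, fault names, and targets are invented for illustration and are not the thesis's actual format, and real fault injection would be delegated to a tool such as Chaos Mesh.

```python
import json
import time

# Hypothetical scenario format; the thesis's actual JSON schema is not shown here.
scenario_json = """
{
  "name": "latency-under-load",
  "steps": [
    {"fault": "network-delay", "target": "rce-node-1", "latency_ms": 200, "duration_s": 0},
    {"fault": "packet-loss",  "target": "rce-node-2", "loss_pct": 10,  "duration_s": 0}
  ]
}
"""

def run_scenario(scenario, inject, probe):
    """Run each chaos step: inject the fault, wait for its duration,
    then check a steady-state probe. Returns per-step (fault, ok) results."""
    results = []
    for step in scenario["steps"]:
        inject(step)                    # e.g. delegate to Chaos Mesh
        time.sleep(step["duration_s"])  # let the fault act on the system
        results.append((step["fault"], probe()))
    return results

faults_applied = []
results = run_scenario(
    json.loads(scenario_json),
    inject=faults_applied.append,
    probe=lambda: True,  # stub: a real probe would check the RCE test network
)
print(results)  # -> [('network-delay', True), ('packet-loss', True)]
```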
3

Performance Analysis of Service in Heterogeneous Operational Environments

Tipirisetty, Venkat Sivendra January 2016 (has links)
In recent years there has been a rapid increase in demand for cloud services, as cloud computing has become a flexible platform for hosting microservices over the Internet. Microservices are the core elements of service-oriented architecture (SOA) that facilitate the deployment of distributed software systems. Since users require good quality of service, the response time of microservices is critical in assessing the performance of an application from the end-user perspective. This thesis work aims at developing a typical service architecture to facilitate the deployment of compute- and I/O-intensive services. The work also aims at evaluating the service times of these services when their respective subservices are deployed in heterogeneous environments under various loads. The research work has been carried out using an experimental testbed in order to evaluate the performance. The transport-level performance metric called response time is measured: the time taken by the server to serve the request sent by the client. Experiments have been conducted based on the objectives to be achieved. The results obtained from the experimentation contain the average service times of a service when it is deployed in both virtual and non-virtual environments, where the virtual environment is provided by Docker containers; they also cover variations in the placement of the subservices. From the results it can be concluded that total service times are lower in non-virtual environments than in the container environment.
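The client-observed response-time measurement described above can be sketched as follows. This is a generic illustration, not the thesis's testbed code: the request function is a stand-in for the actual HTTP call to a compute- or I/O-intensive service, and the percentile choice is an assumption.

```python
import statistics
import time

def measure_service_time(request_fn, n=50):
    """Time n calls of request_fn (one round trip each) and summarize.
    In a real run, request_fn would perform the HTTP request to the service."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        request_fn()
        samples.append(time.perf_counter() - t0)
    return {
        "mean_ms": 1000 * statistics.mean(samples),
        "p95_ms": 1000 * sorted(samples)[int(0.95 * n) - 1],
    }

# Stub standing in for a compute-intensive subservice call.
stats = measure_service_time(lambda: sum(i * i for i in range(10_000)), n=20)
print(sorted(stats))  # -> ['mean_ms', 'p95_ms']
```

Running the same harness against a bare-metal deployment and a Docker deployment of the same service is what makes the virtual/non-virtual comparison possible.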
4

Performance characteristics between monolithic and microservice-based systems

Flygare, Robin, Holmqvist, Anthon January 2017 (has links)
A promising technology for addressing scalability and availability is the microservice architecture. The problem with this architecture is that no substantial study clearly establishes its performance differences compared to the monolithic architecture. Our thesis aims to provide a more conclusive answer to how the microservice architecture differs performance-wise from the monolithic architecture. In this study, we conducted several experiments on a self-developed microservice system and a monolithic system. We used JMeter to simulate users, and after running the tests we looked at latency and successful throughput, and measured RAM and CPU usage with Datadog. We found that the microservice architecture can be more beneficial than the monolithic architecture, that Docker had no negative impact on performance, and that a computer cluster can improve performance. We have presented a conclusive answer that microservices can be better in some cases than a monolithic architecture.
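Summarizing latency and successful throughput from a load-test run, as done here with JMeter, can be sketched in a few lines. The sample rows below imitate the columns of a JMeter CSV result file (timeStamp, elapsed, success) but are made-up numbers, not measurements from the thesis.

```python
import csv
import io

# Minimal JMeter-style results; rows are illustrative only.
jtl = """timeStamp,elapsed,success
1000,120,true
1200,80,true
1500,300,false
1900,100,true
"""

def summarize(rows):
    """Compute mean latency of successful samples and successful throughput
    (successes per second over the observed time window)."""
    ok = [r for r in rows if r["success"] == "true"]
    span_ms = (max(int(r["timeStamp"]) for r in rows)
               - min(int(r["timeStamp"]) for r in rows))
    return {
        "mean_latency_ms": sum(int(r["elapsed"]) for r in ok) / len(ok),
        "throughput_rps": len(ok) / (span_ms / 1000),
    }

stats = summarize(list(csv.DictReader(io.StringIO(jtl))))
print(stats["mean_latency_ms"], round(stats["throughput_rps"], 2))  # -> 100.0 3.33
```

Comparing these two numbers between the microservice and monolithic deployments, under identical simulated user loads, is the core of the experiment design.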
5

Värt mödan? : En litteraturstudie om migrationer av legacyapplikationer till molnet / Worth the Effort? A Literature Review on Migrations of Legacy Applications to the Cloud

Manojlovic, Manna January 2022 (has links)
Abstract: This paper is a thesis project carried out during the Computer Science bachelor's programme at Malmö University in spring 2022. The thesis conducts a literature review on the benefits and risks of migrating legacy applications to the cloud. What are the challenges of migrating legacy applications to the cloud? Why do businesses migrate legacy applications to the cloud in relation to the challenges presented in the literature? The thesis reviews ten research articles and a number of books in order to answer and discuss these questions. It concludes that, despite the many challenges presented in the literature, both technical and operational, businesses still see the benefits of legacy application migration and carry out the back-breaking technical migrations from monolithic architectures in legacy applications to modern, fast-deploying microservices, in order to achieve cost reductions, fast deployments, and reduced software maintenance over time. / The thesis is a bachelor's project within the System Developer programme (180 credits) at Malmö University, aimed at system development students. It seeks to increase knowledge about the migration of legacy applications to the cloud and discusses this based on the research questions "What challenges come with migrations to the cloud?" and "Why do companies choose to migrate legacy applications to a cloud environment in relation to the challenges described in the literature?". The thesis builds on a literature review in which several of the articles are themselves systematic literature reviews, in which extensive work has been analyzed by other researchers in the field. It shows that the migration of legacy applications is accompanied by a wide range of challenges, such as a change of architecture (from traditional monolithic to service-oriented styles such as microservices), rewriting of program code, and cultural changes within the business.
There are also various drivers behind a migration, such as faster deployment of services through microservices, savings in costs and staff, and scalability. For some companies these drivers prove to be worth the effort: a steady increase in cloud customers is still being recorded, and in order to compete with other, similar companies, migration becomes a requirement.
6

Measuring the Modeling Complexity of Microservice Choreography and Orchestration: The Case of E-commerce Applications

Haj Ali, Mahtab 22 July 2021 (has links)
With the increasing popularity of microservices for software application development, businesses are migrating from monolithic approaches towards more scalable and independently deployable applications using microservice architectures. Each microservice is designed to perform one single task. However, these microservices need to be composed together to communicate and deliver complex system functionalities. There are two major approaches to compose microservices, namely choreography and orchestration. Microservice compositions are mainly built around business functionalities, therefore businesses need to choose the right composition style that best serves their business needs. In this research, we follow a five-step process for conducting a Design Science Research (DSR) methodology to define, develop and evaluate BPMN-based models for microservice compositions. We design a series of BPMN workflows as the artifacts to investigate choreography and orchestration of microservices. The objective of this research is to compare the complexity of the two leading composition techniques on small, mid-sized, and end-to-end e-commerce scenarios, using complexity metrics from the software engineering and business process literature. More specifically, we use the metrics to assess the complexity of BPMN-based models representing the abovementioned e-commerce scenarios. An important aspect of our research is the fact that we model, deploy, and run our scenarios to make sure we are assessing the modeling complexity of realistic applications. For that, we rely on Zeebe Modeler and CAMUNDA workflow engine. Finally, we use the results of our complexity assessment to uncover insights on modeling microservice choreography and orchestration and discuss the impacts of complexity on the modifiability and understandability of the proposed models.
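One of the process-complexity measures commonly drawn from the business process literature for comparisons like this is Cardoso's Control-Flow Complexity (CFC), which can be sketched in a few lines: each XOR-split gateway contributes its fan-out, each OR-split 2^n - 1, each AND-split 1. Whether this thesis uses exactly this metric is not stated in the abstract; the example workflow below is hypothetical.

```python
def cfc(gateways):
    """Cardoso's Control-Flow Complexity over a model's split gateways.
    gateways: list of (kind, outgoing_flows) pairs."""
    total = 0
    for kind, n in gateways:
        if kind == "xor":
            total += n          # one mental state per branch
        elif kind == "or":
            total += 2 ** n - 1  # any non-empty subset of branches
        elif kind == "and":
            total += 1          # all branches taken, one state
    return total

# Hypothetical e-commerce checkout: an XOR-split with 3 payment options,
# plus an AND-split forking shipping and invoicing in parallel.
print(cfc([("xor", 3), ("and", 2)]))  # -> 4
```

Computed over the choreography and orchestration variants of the same e-commerce scenario, such a metric gives a single number to compare the two composition styles.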
7

Benchmarking microservices: effects of tracing and service mesh

Unnikrishnan, Vivek 04 November 2023 (has links)
Microservices have become the current standard in software architecture. As the number of microservices increases, there is a growing need for better visualization, debugging, and configuration management. Developers currently adopt various tools to achieve these functionalities, two of which are tracing tools and service meshes. Despite the advantages they bring to the table, the overhead they add is also significant. In this thesis, we try to understand these overheads in latency and throughput by conducting experiments on known benchmarks with different tracing tools and service meshes. We introduce a new tool called Unified Benchmark Runner (UBR) that allows easy benchmark setup, enabling a more systematic way to run multiple benchmark experiments under different scenarios. UBR supports Jaeger, TCP Dump, Istio, and three popular microservice benchmarks, namely Social Network, Hotel Reservation, and Online Boutique. Using UBR, we conduct experiments with all three benchmarks and report performance for different deployments and configurations.
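The core comparison such a benchmark runner performs can be sketched as follows: given latency samples from a baseline run and from a run with tracing or a service mesh enabled, report the relative overhead. The numbers below are illustrative, not measurements from the thesis.

```python
def overhead_pct(baseline_ms, instrumented_ms):
    """Relative mean-latency overhead of the instrumented run vs. baseline."""
    base = sum(baseline_ms) / len(baseline_ms)
    inst = sum(instrumented_ms) / len(instrumented_ms)
    return 100 * (inst - base) / base

baseline = [10.0, 11.0, 9.0, 10.0]       # no tracing, no mesh
with_tracing = [11.5, 12.0, 10.5, 12.0]  # e.g. Jaeger enabled
print(round(overhead_pct(baseline, with_tracing), 1))  # -> 15.0
```

The same calculation applies to throughput, with the sign of the difference inverted, and is repeated per benchmark and per deployment configuration.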
8

Design of Secure Scalable Frameworks for Next Generation Cellular Networks

Atalay, Tolga Omer 06 June 2024 (has links)
Leveraging Network Functions Virtualization (NFV), Fifth Generation (5G) core and Radio Access Network (RAN) functions are implemented as Virtual Network Functions (VNFs) on Commercial-off-the-Shelf (COTS) hardware. The use of virtualized microservices to implement these 5G VNFs enables the flexible and scalable construction of end-to-end logically isolated network fragments denoted as network slices. The goal of this dissertation is to design more scalable, flexible, secure, and visible 5G networks; each chapter presents a design and evaluation that addresses one or more of these aspects. The first objective is to understand the limits of 5G core microservice virtualization when using lightweight containers to construct various network slicing models with different service guarantees. The initial deployment model consists of the OpenAirInterface (OAI) 5G core in a containerized setting to create a universally deployable testbed. Operational and computational stress tests are performed on individual 5G core VNFs, and different network slicing models are created that are applicable to real-life scenarios. The analysis captures the increase in compute resource consumption of individual VNFs during various core network procedures. Furthermore, using different network slicing models, a progressive increase in resource consumption can be seen as the service guarantees of the slices become more demanding. The framework created using this testbed is the first to provide such analytics on lightweight virtualized 5G core VNFs with large-scale end-to-end connections. Moving into the cloud-native ecosystem, 5G core deployments will be orchestrated by intermediary Network-Slice-as-a-Service (NSaaS) providers. These NSaaS providers will consume Infrastructure-as-a-Service (IaaS) offerings and offer network slices to Mobile Virtual Network Operators (MVNOs).
To investigate this future model, end-to-end emulated 5G deployments are conducted to offer insight into the cost implications surrounding such NSaaS offerings in the cloud. The deployment features real-life traffic patterns corresponding to practical use cases which are matched with specific network slicing models. These models are implemented in a 5G testbed to gather compute resource consumption metrics. The obtained data are used to formulate infrastructure procurement costs for popular cloud providers such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. The results show steady patterns in compute consumption across multiple use cases, which are used to make high-scale cost projections for public cloud deployments. In the end, the trade-off between cost and throughput is achieved by decentralizing the network slices and offloading the user plane. The next step is the demystification of 5G traffic patterns using the Over-the-Air (OTA) testbed. An open-source OTA testbed is constructed leveraging advanced features of 5G radio access and core networks developed by OAI. The achievable Quality of Service (QoS) is evaluated to provide visibility into the compute consumption of individual components. Additionally, a method is presented to utilize WiFi devices for experimenting with 5G QoS. Resource consumption analytics are collected from the 5G user plane in correlation to raw traffic patterns. The results show that the open-source 5G testbed can sustain sub-20ms latency with up to 80Mbps throughput over a 25m range using COTS devices. Device connection remains stable while supporting different use cases such as AR/VR, online gaming, video streaming, and Voice-over IP (VoIP). It illustrates how these popular use cases affect CPU utilization in the user plane. This provides insight into the capabilities of existing 5G solutions by demystifying the resource needs of specific use cases. 
Moving into public cloud-based deployments creates a growing demand for general-purpose compute resources as 5G deployments continue to expand. Given their existing infrastructures, cloud providers such as AWS are attractive platforms to address this need. Therefore, it is crucial to understand the control- and user-plane QoS implications of deploying the 5G core on top of AWS. To this end, a 5G testbed is constructed using open-source components spanning multiple global locations within the AWS infrastructure. Using different core deployment strategies by shuffling VNFs into AWS edge zones, an operational breakdown of the latency overhead is conducted for 5G procedures. The results show that moving specific VNFs into edge regions reduces the latency overhead for key 5G operations. Multiple user-plane connections are instantiated between availability zones and edge regions with different traffic loads. As more data sessions are instantiated, it is observed that the deterioration of connection quality varies depending on traffic load. Ultimately, the findings provide new insights for MVNOs to determine favorable placements of their 5G core entities in the cloud. The transition into cloud-native deployments has encouraged the development of supportive platforms for 5G. One such framework is the OpenRAN initiative, led by the O-RAN Alliance. The OpenRAN initiative promotes an open Radio Access Network (RAN) and offers operators fine-grained control over the radio stack. To that end, O-RAN introduces new components to the 5G ecosystem, such as the near-real-time RAN Intelligent Controller (near-RT RIC) and the accompanying Extensible Applications (xApps). The introduction of these entities expands the 5G threat surface. Furthermore, with the movement from proprietary hardware to virtual environments enabled by NFV, attack vectors that exploit the existing NFV attack surface pose additional threats.
To deal with these threats, the xApp Repository Function (XRF) framework is constructed for scalable authentication, authorization, and discovery of xApps. To harden the XRF microservices, deployments are isolated using Intel Software Guard Extensions (SGX). The XRF modules are individually benchmarked to compare how different microservices behave in terms of computational overhead when deployed in virtual and hardware-based isolation sandboxes. The evaluation shows that the XRF framework scales efficiently in a multi-threaded Kubernetes environment. Isolating the XRF microservices introduces different amounts of processing overhead depending on the sandboxing strategy. A security analysis is conducted to show how the XRF framework addresses key issues chosen from the O-RAN and 5G standardization efforts. The final chapter of the dissertation shifts focus to the development and evaluation of 5G-STREAM, a service mesh tailored for rapid, efficient, and authorized microservices in cloud-based 5G core networks. 5G-STREAM addresses critical scalability and efficiency challenges in the 5G core control plane by optimizing traffic and reducing signaling congestion across distributed cloud environments. The framework enhances the topology awareness of Virtual Network Function (VNF) service chains, enabling dynamic configuration of communication pathways, which significantly reduces discovery and authorization signaling overhead. A prototype of 5G-STREAM was developed and tested, showing a reduction of up to 2x in inter-VNF latency per HTTP transaction in the core network service chains, particularly benefiting larger service chains with extensive messaging. Additionally, 5G-STREAM's deployment strategies for VNF placement are explored to further optimize performance and cost efficiency in cloud-based infrastructures, ultimately providing a scalable solution that can adapt to increasing network demands while maintaining robust service levels.
This innovative approach signifies a pivotal advancement in managing 5G core networks, paving the way for more dynamic, efficient, and cost-effective cellular network infrastructures. Overall, this dissertation is devoted to designing, building, and evaluating scalable and secure 5G deployments. / Doctor of Philosophy / Ever since the emergence of the Global System for Mobile Communications (GSM), humanity has relied on cellular communications for the fast and efficient exchange of information. Today, with the Fifth Generation (5G) of mobile networks, what may have passed for science fiction 40 years ago, is now slowly becoming reality. In addition to enabling extremely fast data rates and low latency for user handsets, 5G networks promise to deliver a very rich and integrated ecosystem. This includes a plethora of interconnected devices ranging from smart home sensors to Augmented/Virtual Reality equipment. To that end, the stride from the Fourth Generation (4G) of mobile networks to 5G is yet to be the biggest evolutionary step in cellular networks. In 4G, the backbone entities that glued the base stations together were deployed on proprietary hardware. With 5G, these entities have been moved to Commercial off-the-shelf (COTS) hardware which can be hosted by cloud providers (e.g., Amazon, Google, Microsoft) or various Small to Medium Enterprises (SMEs). This substantial paradigm shift in cellular network deployments has introduced a variety of security, flexibility, and scalability concerns around the deployment of 5G networks. Thus, this thesis is a culmination of a wide range of studies that seek to collectively facilitate the secure, scalable, and flexible deployment of 5G networks in different types of environments. Starting with small-scale optimizations and building up towards the analysis of global 5G deployments, the goal of this work is to demystify the scalability implications of deploying 5G networks. 
On this journey, several security flaws are identified within the 5G ecosystem, and frameworks are constructed to address them in a fluent manner.
9

Self-Adaptive Edge Services: Enhancing Reliability, Efficiency, and Adaptiveness under Unreliable, Scarce, and Dissimilar Resources

Song, Zheng 27 May 2020 (has links)
As compared to traditional cloud computing, edge computing provides computational, sensor, and storage resources co-located with client requests, thereby reducing network transmission and providing context awareness. While server farms can allocate cloud computing resources on demand at runtime, edge-based heterogeneous devices, ranging from stationary servers to mobile, IoT, and energy-harvesting devices, are not nearly as reliable and abundant. As a result, edge application developers face the following obstacles: 1) heterogeneous devices provide hard-to-access resources due to dissimilar capabilities, operating systems, execution platforms, and communication interfaces; 2) unreliable resources cause high failure rates due to device mobility, low energy status, and other environmental factors; 3) resource scarcity hinders performance; 4) the dissimilar and dynamic resources across edge environments make QoS impossible to guarantee. Edge environments are characterized by the prevalence of equivalent functionalities, which satisfy the same application requirements by different means. The thesis of this research is that equivalent functionalities can be exploited to improve the reliability, efficiency, and adaptiveness of edge-based services. To prove this thesis, this dissertation comprises three key interrelated research thrusts: 1) create a system architecture and programming support for providing edge services that run on heterogeneous and ever-changing edge devices; 2) introduce programming abstractions for executing equivalent functionalities; 3) apply equivalent functionalities to improve the reliability, efficiency, and adaptiveness of edge services. We demonstrate how connected devices with unreliable, dynamic, and scarce resources can automatically form a reliable, adaptive, and efficient execution environment for sensing, computing, and other non-trivial tasks.
This dissertation is based on 5 conference papers, presented at ICDCS'20, ICWS'19, EDGE'19, CLOUD'18, and MobileSoft'18 / Doctor of Philosophy / As mobile and IoT devices are generating ever-increasing volumes of sensor data, it has become impossible to transfer this data to remote cloud-based servers for processing. As an alternative, edge computing coordinates nearby computing resources that can be used for local processing. However, while cloud computing resources are abundant and reliable, edge computing ones are scarce and unreliable. This dissertation research introduces novel execution strategies that make it possible to provide reliable, efficient, and flexible edge-based computing services in dissimilar edge environments.
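The idea of executing equivalent functionalities can be sketched as an ordered fallback: alternative implementations that satisfy the same requirement by different means are tried in order of preference until one succeeds on the devices at hand. This is an illustration of the concept, not the dissertation's programming abstraction; the location-sensing alternatives below are invented.

```python
def run_equivalent(alternatives, *args):
    """alternatives: list of (name, fn) pairs, ordered by preference.
    Returns (name, result) of the first alternative that succeeds."""
    errors = {}
    for name, fn in alternatives:
        try:
            return name, fn(*args)
        except Exception as e:  # device unreachable, out of energy, ...
            errors[name] = e
    raise RuntimeError(f"all equivalent functionalities failed: {errors}")

def gps_fix(anchor):            # preferred, but the GPS device is "offline" here
    raise TimeoutError("GPS unavailable")

def wifi_triangulation(anchor):  # coarser but available equivalent
    return f"approx location near {anchor}"

name, result = run_equivalent(
    [("gps", gps_fix), ("wifi", wifi_triangulation)], "AP-17"
)
print(name)  # -> wifi
```

A real runtime would also rank the alternatives by current device status (energy, connectivity) rather than by a static preference order.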
10

Fiabilisation du change dans le Cloud au niveau Platform as a Service / Reliability of changes in cloud environment at PaaS level

Tao, Xinxiu 29 January 2019 (has links)
Microservice architectures are considered really promising for achieving DevOps in IT organizations, because they split applications into services that can be updated independently of each other. But to protect SLA (Service Level Agreement) properties when updating microservices, DevOps teams have to deal with complex and error-prone scripts of management operations. In this paper, we leverage an architecture-based approach to provide an easy and safe way to update microservices.
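The kind of safe, SLA-protecting update such an approach automates can be sketched as a staged rollout with rollback: traffic shifts to the new microservice version in increasing steps, an SLA predicate is checked after each step, and any violation reverts all traffic to the old version. The stage weights, SLA threshold, and observed error rates below are illustrative assumptions, not from the thesis.

```python
def safe_update(stages, sla_ok, apply_weight):
    """stages: increasing traffic weights for the new version, e.g. [10, 50, 100].
    apply_weight reconfigures routing; sla_ok checks the SLA after each stage."""
    for weight in stages:
        apply_weight(weight)   # e.g. reconfigure the router or service mesh
        if not sla_ok():
            apply_weight(0)    # roll back: all traffic to the old version
            return "rolled-back"
    return "updated"

history = []
error_rates = iter([0.001, 0.002, 0.004])  # error rate observed after each stage
result = safe_update(
    [10, 50, 100],
    sla_ok=lambda: next(error_rates) < 0.01,  # SLA: error rate under 1%
    apply_weight=history.append,
)
print(result, history)  # -> updated [10, 50, 100]
```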
