About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Fehleranalyse in Microservices mithilfe von verteiltem Tracing / Fault analysis in microservices using distributed tracing

Sinner, Robin Andreas 26 April 2022 (has links)
With the architectural concept of microservices and the growing number of heterogeneous services, new challenges arise with respect to debugging, monitoring, and testing such applications. Distributed tracing offers one approach to addressing these challenges. The goal of this thesis is to investigate how distributed tracing can be used for automated fault analysis of microservices. To this end, the following research question is posed: How can traces be evaluated in order to identify the root causes of failures when testing microservices? To answer the research question, a data format for the automated evaluation of tracing data was defined. For the evaluation, algorithms were designed that resolve fault propagation between services based on causal relationships. This approach was implemented as a prototype in Python and its functionality was evaluated. The results show that in around 77% of the executed test scenarios, the root cause could be correctly derived from the tracing data using the prototype. Without the prototype and without further debugging, the root cause could be identified from the application's own error output in only about 5% of the test scenarios. The concept and the prototype thus ease the debugging of Python-based microservice applications. Table of contents: 1. Introduction 1.1. Motivation 1.2. Scope 1.3. Methodology 2. Fundamentals 2.1. Related work 2.1.1. Automated analysis of tracing information 2.1.2. Automated root cause analysis 2.1.3. Root cause analysis in microservices 2.1.4. Root cause analysis of runtime errors in distributed systems 2.1.5. Tracing tool for fault detection 2.2. Theoretical foundations 2.2.1. Microservices 2.2.2. Distributed tracing 2.2.3. OpenTracing 2.2.4. Jaeger 2.2.5. Example application for the investigations 2.2.6. Continuous integration / continuous delivery / continuous deployment 3. Concept 3.1. Definition of the data format 3.1.1. Analysis of the data format of the OpenTracing specification 3.1.2. Extensions 3.1.3. Resulting data format for automated evaluation 3.1.4. Clock offset in distributed systems 3.2. Algorithms for root cause analysis 3.2.1. Construction of a dependency graph 3.2.2. Path-based investigation of root causes 3.2.3. Evaluation by temporal order and causal relationship 3.2.4. Assessment of potential root causes 3.3. Design of the prototype 3.3.1. Integration into the development cycle 3.3.2. Functional requirements 3.3.3. Architecture of the prototype 4. Execution/implementation 4.1. Implementation of the root cause analysis prototype 4.2. Integration of the prototype into test scenarios and continuous integration 4.3. Tests for evaluating the prototype 5. Results 5.1. Evaluation of the concept/prototype 5.2. Evaluation of the root cause assessment methods 5.3. Economic considerations 6. Conclusion/outlook 6.1. Conclusion 6.2. Outlook References Declaration of authorship A. Figures A.1. Mockups of the prototype's evaluation reports B. Tables B.1. Tag fields according to OpenTracing B.2. Log fields according to OpenTracing B.3. Evaluation of the test results C. Listings C.1. Data format for automated evaluation C.2. Definition of evaluation rules D. Manual for the prototype
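The evaluation approach this abstract describes, building a dependency graph from spans and resolving error propagation along causal relationships, can be pictured with a minimal sketch. The span fields, service names, and root-cause heuristic below are assumptions for illustration only, not the data format or algorithms defined in the thesis.

```python
from collections import defaultdict

# Illustrative span records (field names are assumptions, not the thesis's format).
spans = [
    {"span_id": "a", "parent_id": None, "service": "gateway",  "error": True,  "start": 0},
    {"span_id": "b", "parent_id": "a",  "service": "orders",   "error": True,  "start": 5},
    {"span_id": "c", "parent_id": "b",  "service": "payments", "error": True,  "start": 9},
    {"span_id": "d", "parent_id": "a",  "service": "catalog",  "error": False, "start": 3},
]

def root_cause_candidates(spans):
    """Return failed spans that have no failed child span.

    Heuristic: errors propagate from callee to caller along the causal
    (parent/child) relationship, so the deepest failing spans are the
    most likely root causes."""
    children = defaultdict(list)
    for s in spans:
        if s["parent_id"] is not None:
            children[s["parent_id"]].append(s)
    failed = [s for s in spans if s["error"]]
    candidates = [
        s for s in failed
        if not any(c["error"] for c in children[s["span_id"]])
    ]
    # Rank earliest failure first, mirroring an evaluation by temporal order.
    return sorted(candidates, key=lambda s: s["start"])

for s in root_cause_candidates(spans):
    print(f"likely root cause: {s['service']} (span {s['span_id']})")
# -> likely root cause: payments (span c)
```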
2

Automatisierte Anwendung von Chaos Engineering Methoden zur Untersuchung der Robustheit eines verteilten Softwaresystems / Automated application of chaos engineering methods to investigate the robustness of a distributed software system

Hampel, Brian 13 April 2022 (has links)
Distributed software systems exhibit very complex behaviour under real operating conditions, which usually also results in very complex failure states caused by operation under adverse network conditions such as high latencies and increasing packet loss. These failure states can no longer be adequately provoked, tested, and verified with conventional software testing techniques such as unit and integration tests. With the chaos engineering method, complex chaos scenarios are designed that make it possible to discover this unknown behaviour of the software in edge cases in a structured way. Using as an example a distributed software that has been developed at the German Aerospace Center (DLR) for more than ten years, chaos engineering methods are applied, positioned conceptually among existing software testing techniques, and tried out practically in an experimental cloud environment. In an expert interview with the RCE developers, a chaos scenario is designed in which the robustness of the software is put to the test with chaos experiments. Building on a software project for the automatic creation of RCE test networks, a software solution is developed that enables the automatic execution of chaos scenarios within the experimental cloud environment. The chaos scenario resulting from the expert interviews is then carried out in practice. Finally, the findings from executing the chaos scenario are presented and further research questions and follow-up work are outlined. Table of contents: 1 Introduction 2 Fundamentals 2.1 Software development and testing techniques 2.2 Distributed software 2.3 Container orchestration 2.4 Chaos engineering 3 System under consideration 3.1 Remote Component Environment 3.2 Testing of RCE releases 3.3 Expert interview method 3.4 Designing the interview questions 3.5 Results from the interview 3.6 Integration of chaos engineering 4 Chaos engineering concepts by example 4.1 Initial situation 4.1.1 System environment 4.1.2 Automated creation of test networks 4.1.3 Microservices 4.1.4 System architecture 4.1.5 Network description 4.2 Requirements for the software to be developed 4.3 Extension of the existing overall system 4.3.1 Chaos Mesh 4.4 Chaos operator microservice 4.4.1 Extension of the system architecture 4.4.2 Extension of the interfaces 4.4.3 Description of a chaos experiment 4.4.4 Probes 4.4.5 Sequencing control 5 Evaluation and discussion 5.1 Planned chaos scenario 5.1.1 JSON description of a chaos scenario 5.2 Execution of the designed chaos scenario 5.2.1 Execution with the chaos sequencer 5.2.2 Validation 5.3 Results 6 Conclusion Bibliography List of figures Listings
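As a rough illustration of how a chaos scenario might be described and sequenced automatically, the sketch below injects network faults step by step and checks a steady-state probe after each step. The scenario structure, fault names, and probe URL are hypothetical and do not reflect the actual RCE tooling, Chaos Mesh resources, or JSON format used in the thesis.

```python
import time
import urllib.request

# Hypothetical chaos scenario description: an ordered list of experiments.
scenario = [
    {"name": "add 200ms latency", "fault": "latency",     "args": {"ms": 200},    "hold_s": 5},
    {"name": "drop 5% packets",   "fault": "packet_loss", "args": {"percent": 5}, "hold_s": 5},
]

def inject(fault, **args):
    # Placeholder: a real sequencer would create a fault-injection resource
    # (for example via Chaos Mesh) or reconfigure the network on the target nodes.
    print(f"injecting {fault} with {args}")

def clear(fault):
    print(f"clearing {fault}")

def steady_state_ok(url="http://localhost:8080/health"):
    """Probe: the system is considered healthy if the endpoint answers 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def run(scenario):
    for step in scenario:
        inject(step["fault"], **step["args"])
        time.sleep(step["hold_s"])          # let the fault take effect
        healthy = steady_state_ok()
        clear(step["fault"])
        print(f"{step['name']}: {'hypothesis held' if healthy else 'robustness issue found'}")

if __name__ == "__main__":
    run(scenario)
```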
3

Performance Analysis of Service in Heterogeneous Operational Environments

Tipirisetty, Venkat Sivendra January 2016 (has links)
In recent years there has been a rapid increase in demand for cloud services, as cloud computing has become a flexible platform for hosting microservices over the Internet. Microservices are the core elements of service-oriented architecture (SOA) that facilitate the deployment of distributed software systems. Because users expect good quality of service, the response time of microservices is critical when assessing the performance of an application from the end-user perspective. This thesis work aims at developing a typical service architecture to facilitate the deployment of compute- and I/O-intensive services. The work also aims at evaluating the service times of these services when their respective subservices are deployed in heterogeneous environments under various loads. The research was carried out on an experimental testbed in order to evaluate performance. The transport-level performance metric of response time is measured: the time taken by the server to serve a request sent by the client. Experiments were conducted based on the objectives to be achieved. The results obtained contain the average service times of a service when it is deployed in both virtual and non-virtual environments; the virtual environment is provided by Docker containers. They also include the variation in position of the subservices. From the results it can be concluded that total service times are lower in non-virtual environments than in the container environment.
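A minimal sketch of the kind of client-side response-time measurement described here: send repeated requests to a deployed service and summarize the observed service times. The endpoint URL and sample count are placeholders, not the thesis's actual testbed configuration.

```python
import statistics
import time
import urllib.request

def measure_response_times(url, samples=100):
    """Client-observed response time: from sending the request until
    the full response body has been received."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        times.append(time.perf_counter() - start)
    return times

if __name__ == "__main__":
    # Placeholder endpoint for a compute- or I/O-intensive service under test,
    # deployed either natively or inside a Docker container.
    t = measure_response_times("http://localhost:5000/compute", samples=50)
    print(f"mean {statistics.mean(t) * 1000:.1f} ms, "
          f"p95 {statistics.quantiles(t, n=20)[-1] * 1000:.1f} ms")
```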
4

Performance characteristics between monolithic and microservice-based systems

Flygare, Robin, Holmqvist, Anthon January 2017 (has links)
A new and promising technology for facing the problem of scalability and availability is the microservice architecture. The problem with this architecture is that there is no significant study that clearly establishes its performance differences compared to the monolithic architecture. Our thesis aims to provide a more conclusive answer on how the microservice architecture differs performance-wise from the monolithic architecture. In this study, we conducted several experiments on a self-developed microservice and monolithic system. We used JMeter to simulate users, and after running the tests we examined latency and successful throughput and measured RAM and CPU usage with Datadog. We found that the microservice architecture can be more beneficial than the monolithic architecture, that Docker did not show any negative impact on performance, and that a computer cluster can improve performance. We present a conclusive answer that microservices can be better in some cases than a monolithic architecture.
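The experiment design sketched in this abstract (simulated concurrent users, latency, and successful throughput) can be roughly illustrated as follows. The thesis used JMeter and Datadog; the snippet below is only an illustrative stand-in with placeholder endpoints and user counts.

```python
import concurrent.futures
import time
import urllib.request

def one_request(url):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
            ok = resp.status == 200
    except OSError:
        ok = False
    return ok, time.perf_counter() - start

def load_test(url, users=50, requests_per_user=20):
    """Simulate concurrent users and report mean latency and
    successful throughput (successful requests per second)."""
    t0 = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(lambda _: one_request(url),
                                range(users * requests_per_user)))
    wall = time.perf_counter() - t0
    latencies = [lat for ok, lat in results if ok]
    successes = len(latencies)
    return {
        "mean_latency_ms": 1000 * sum(latencies) / max(successes, 1),
        "successful_throughput_rps": successes / wall,
    }

if __name__ == "__main__":
    # Placeholder endpoints for the two systems under comparison.
    for name, url in [("monolith", "http://localhost:8000/api/items"),
                      ("microservices", "http://localhost:8080/api/items")]:
        print(name, load_test(url))
```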
5

Self-Adaptive Edge Services: Enhancing Reliability, Efficiency, and Adaptiveness under Unreliable, Scarce, and Dissimilar Resources

Song, Zheng 27 May 2020 (has links)
As compared to traditional cloud computing, edge computing provides computational, sensor, and storage resources co-located with client requests, thereby reducing network transmission and providing context-awareness. While server farms can allocate cloud computing resources on demand at runtime, edge-based heterogeneous devices, ranging from stationary servers to mobile, IoT, and energy-harvesting devices, are not nearly as reliable or abundant. As a result, edge application developers face the following obstacles: 1) heterogeneous devices provide hard-to-access resources, due to dissimilar capabilities, operating systems, execution platforms, and communication interfaces; 2) unreliable resources cause high failure rates, due to device mobility, low energy status, and other environmental factors; 3) resource scarcity hinders performance; 4) the dissimilar and dynamic resources across edge environments make QoS impossible to guarantee. Edge environments are characterized by the prevalence of equivalent functionalities, which satisfy the same application requirements by different means. The thesis of this research is that equivalent functionalities can be exploited to improve the reliability, efficiency, and adaptiveness of edge-based services. To prove this thesis, this dissertation comprises three key interrelated research thrusts: 1) create a system architecture and programming support for providing edge services that run on heterogeneous and ever-changing edge devices; 2) introduce programming abstractions for executing equivalent functionalities; 3) apply equivalent functionalities to improve the reliability, efficiency, and adaptiveness of edge services. We demonstrate how connected devices with unreliable, dynamic, and scarce resources can automatically form a reliable, adaptive, and efficient execution environment for sensing, computing, and other non-trivial tasks. This dissertation is based on 5 conference papers, presented at ICDCS'20, ICWS'19, EDGE'19, CLOUD'18, and MobileSoft'18. / Doctor of Philosophy / As mobile and IoT devices are generating ever-increasing volumes of sensor data, it has become impossible to transfer this data to remote cloud-based servers for processing. As an alternative, edge computing coordinates nearby computing resources that can be used for local processing. However, while cloud computing resources are abundant and reliable, edge computing ones are scarce and unreliable. This dissertation research introduces novel execution strategies that make it possible to provide reliable, efficient, and flexible edge-based computing services in dissimilar edge environments.
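One way to picture the "equivalent functionalities" idea is a dispatcher that tries interchangeable implementations of the same requirement and falls back when a device fails. The functions, failure probabilities, and ordering below are invented for illustration and are not the dissertation's actual programming abstractions.

```python
import random

# Hypothetical equivalent functionalities: different ways to satisfy the
# same application requirement ("get the ambient temperature").
def onboard_sensor():
    if random.random() < 0.4:          # unreliable, energy-harvesting device
        raise ConnectionError("sensor node unreachable")
    return 21.5

def nearby_phone():
    if random.random() < 0.2:          # mobile device, may have moved away
        raise ConnectionError("phone left the network")
    return 22.0

def cloud_weather_api():
    return 20.0                        # reliable but remote and less precise locally

EQUIVALENTS = [
    onboard_sensor,                    # preferred: most local and cheapest
    nearby_phone,
    cloud_weather_api,                 # last resort: most reliable
]

def execute_equivalent(equivalents):
    """Try equivalent functionalities in order and fall back on failure."""
    last_error = None
    for impl in equivalents:
        try:
            return impl()
        except ConnectionError as err:
            last_error = err           # device failed; try the next equivalent
    raise RuntimeError("no equivalent functionality succeeded") from last_error

print("temperature:", execute_equivalent(EQUIVALENTS))
```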
6

Värt mödan? : En litteraturstudie om migrationer av legacyapplikationer till molnet / Worth the effort? : A literature review on migrations of legacy applications to the cloud

Manojlovic, Manna January 2022 (has links)
Abstract This paper is a thesis project within the Computer Science bachelor programme at Malmö University, spring 2022. The thesis conducts a literature review on the benefits and risks of migrating legacy applications to the cloud. What are the challenges of migrating legacy applications to the cloud? Why do businesses migrate legacy applications to the cloud in relation to the challenges presented in the literature? The thesis reviews ten research articles and a number of books in order to answer and discuss these questions. It concludes that, despite the great number of challenges presented in the literature, both technical and operational, businesses still see the benefits of legacy application migration and carry out the demanding technical migrations from monolithic architectures in legacy applications to modern, fast-deploying microservices, in order to achieve cost reductions, fast deployments, and reduced software maintenance over time. / The paper is a bachelor's thesis within the 180-credit Systems Developer programme at Malmö University and is aimed at systems development students. It seeks to increase knowledge about the migration of legacy applications to the cloud and discusses this based on the research questions "What challenges are there with migrations to the cloud?" and "Why do companies choose to migrate legacy applications to a cloud environment in relation to the challenges described in the literature?". The thesis builds on a literature review in which several of the articles are themselves systematic literature studies analysing extensive work by other researchers in the field, and it shows that migration of legacy applications is accompanied by a wide range of challenges, such as a change of architecture (from traditional monoliths to service-oriented approaches such as microservices), rewriting of program code, and cultural changes within the organisation. There are also various drivers behind a migration, such as faster deployment of services through microservices, savings in costs and staff, or scalability. These drivers prove to be worth the effort for some companies, since a steady increase in cloud customers is still being recorded, and in order to compete with other, similar companies the migration becomes a requirement.
7

Measuring the Modeling Complexity of Microservice Choreography and Orchestration: The Case of E-commerce Applications

Haj Ali, Mahtab 22 July 2021 (has links)
With the increasing popularity of microservices for software application development, businesses are migrating from monolithic approaches towards more scalable and independently deployable applications using microservice architectures. Each microservice is designed to perform one single task. However, these microservices need to be composed together to communicate and deliver complex system functionalities. There are two major approaches to compose microservices, namely choreography and orchestration. Microservice compositions are mainly built around business functionalities, therefore businesses need to choose the right composition style that best serves their business needs. In this research, we follow a five-step process for conducting a Design Science Research (DSR) methodology to define, develop and evaluate BPMN-based models for microservice compositions. We design a series of BPMN workflows as the artifacts to investigate choreography and orchestration of microservices. The objective of this research is to compare the complexity of the two leading composition techniques on small, mid-sized, and end-to-end e-commerce scenarios, using complexity metrics from the software engineering and business process literature. More specifically, we use the metrics to assess the complexity of BPMN-based models representing the abovementioned e-commerce scenarios. An important aspect of our research is the fact that we model, deploy, and run our scenarios to make sure we are assessing the modeling complexity of realistic applications. For that, we rely on Zeebe Modeler and CAMUNDA workflow engine. Finally, we use the results of our complexity assessment to uncover insights on modeling microservice choreography and orchestration and discuss the impacts of complexity on the modifiability and understandability of the proposed models.
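For a concrete sense of what a complexity measurement over such models can look like, the sketch below computes Cardoso's control-flow complexity (CFC), one of the metrics from the business process literature, over a simplified gateway listing. The gateway lists are invented for illustration and are not the thesis's actual BPMN models or results.

```python
# Cardoso's control-flow complexity (CFC): each XOR-split contributes its
# fan-out, each OR-split contributes 2^fan_out - 1, each AND-split contributes 1.
def cfc(gateways):
    total = 0
    for kind, fan_out in gateways:
        if kind == "XOR":
            total += fan_out
        elif kind == "OR":
            total += 2 ** fan_out - 1
        elif kind == "AND":
            total += 1
    return total

# Invented split gateways for two hypothetical models of the same checkout
# scenario (not the e-commerce models evaluated in the thesis).
orchestration = [("XOR", 2), ("AND", 3), ("XOR", 2)]              # central orchestrator
choreography = [("XOR", 2), ("XOR", 2), ("OR", 2), ("AND", 2)]    # logic spread over services

print("orchestration CFC:", cfc(orchestration))   # 2 + 1 + 2 = 5
print("choreography CFC:", cfc(choreography))     # 2 + 2 + 3 + 1 = 8
```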
8

Benchmarking microservices: effects of tracing and service mesh

Unnikrishnan, Vivek 04 November 2023 (has links)
Microservices have become the current standard in software architecture. As the number of microservices increases, there is a growing need for better visualization, debugging, and configuration management. Developers currently adopt various tools to achieve these functionalities, two of which are tracing tools and service meshes. Despite the advantages they bring to the table, the overhead they add is also significant. In this thesis, we try to understand these overheads in latency and throughput by conducting experiments on known benchmarks with different tracing tools and service meshes. We introduce a new tool called Unified Benchmark Runner (UBR) that allows easy benchmark setup, enabling a more systematic way to run multiple benchmark experiments under different scenarios. UBR supports Jaeger, TCP Dump, Istio, and three popular microservice benchmarks, namely Social Network, Hotel Reservation, and Online Boutique. Using UBR, we conduct experiments with all three benchmarks and report performance for different deployments and configurations.
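The overhead comparison described here boils down to running the same benchmark with and without the instrumentation and comparing the latency distributions. The bare-bones sketch below shows that comparison step only, with made-up latency samples standing in for real benchmark output; it is not part of UBR.

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples, p in [0, 100]."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
    return ordered[k]

def overhead_report(baseline_ms, instrumented_ms, label):
    for name, p in [("p50", 50), ("p95", 95), ("p99", 99)]:
        base = percentile(baseline_ms, p)
        inst = percentile(instrumented_ms, p)
        print(f"{label} {name}: {base:.1f} ms -> {inst:.1f} ms "
              f"(+{100 * (inst - base) / base:.1f}%)")

# Made-up samples standing in for runs of a benchmark without and with tracing enabled.
baseline = [12.0, 13.1, 12.6, 14.0, 12.9, 15.2, 13.4, 18.0, 12.2, 13.8]
with_tracing = [13.5, 14.9, 14.1, 15.8, 14.4, 17.0, 15.1, 21.3, 13.9, 15.6]

overhead_report(baseline, with_tracing, "tracing")
```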
9

Fiabilisation du change dans le Cloud au niveau Platform as a Service / Reliability of changes in cloud environment at PaaS level

Tao, Xinxiu 29 January 2019 (has links)
Les architectures de microservices sont considérées comme une architecture prometteuse pour réaliser DevOps dans les organisations informatiques, car elles divisent les applications en services pouvant être mis à jour indépendamment. Toutefois, pour protéger les propriétés SLA (Service Level Agreement) lors de la mise à jour des microservices, les équipes DevOps doivent gérer des scripts d'opérations complexes et sujets aux erreurs. Dans cet article, on utilise une approche basée sur l'architecture pour fournir un moyen simple et sûr de mettre à jour les microservices. / Microservice architectures are considered very promising for achieving DevOps in IT organizations, because they split applications into services that can be updated independently of each other. But to protect SLA (Service Level Agreement) properties when updating microservices, DevOps teams have to deal with complex and error-prone scripts of management operations. In this paper, we leverage an architecture-based approach to provide an easy and safe way to update microservices.
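As a very rough illustration of the underlying idea, updating instances step by step while checking an SLA property and rolling back when it is violated, consider the sketch below. The instance list, metric, and threshold are hypothetical and are not the paper's actual architecture-based mechanism.

```python
import random

# Hypothetical running instances of one microservice.
instances = [{"id": i, "version": "1.4"} for i in range(4)]

SLA_MAX_ERROR_RATE = 0.02   # assumed SLA property: at most 2% failed requests

def observed_error_rate(instance):
    # Placeholder for real monitoring; here the new version is randomly flaky.
    flaky = instance["version"] == "1.5" and random.random() < 0.3
    return 0.10 if flaky else 0.005

def rolling_update(instances, new_version):
    """Update one instance at a time; roll back and stop if the SLA breaks."""
    for instance in instances:
        previous = instance["version"]
        instance["version"] = new_version
        if observed_error_rate(instance) > SLA_MAX_ERROR_RATE:
            instance["version"] = previous          # roll this instance back
            print(f"instance {instance['id']}: SLA violated, update aborted")
            return False
        print(f"instance {instance['id']}: updated to {new_version}")
    return True

rolling_update(instances, "1.5")
```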
10

Microservices in data intensive applications

Remeika, Mantas, Urbanavicius, Jovydas January 2018 (has links)
The volumes of data which Big Data applications have to process are constantly increasing. This requires the development of highly scalable systems. Microservices are considered one of the solutions to the scalability problem. However, the literature on practices for building scalable data-intensive systems is still lacking. This thesis aims to investigate and present the benefits and drawbacks of using a microservices architecture in big data systems. Moreover, it presents other practices used to increase scalability, including containerization, shared-nothing architecture, data sharding, load balancing, clustering, and stateless design. Finally, an experiment comparing the performance of a monolithic application and a microservices-based application was performed. The results show that with an increasing amount of load, the microservices perform better than the monolith. However, to cope with the constantly increasing amount of data, additional techniques should be used together with microservices.
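Of the scalability practices listed in this abstract, data sharding is the easiest to show in a few lines: route each record to a shard by hashing its key, so that stateless service instances behind a load balancer all compute the same placement without shared state. The shard count and keys below are arbitrary placeholders, not the thesis's experimental setup.

```python
import hashlib

SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]   # arbitrary shard count

def shard_for(key, shards=SHARDS):
    """Deterministically map a record key to a shard (hash-based sharding)."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return shards[int(digest, 16) % len(shards)]

# Any stateless service instance computes the same mapping, so requests can be
# spread across instances by a load balancer without shared session state.
for user_id in ["user-17", "user-42", "user-99"]:
    print(user_id, "->", shard_for(user_id))
```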
