1.
A case study of performance comparison between monolithic and microservice-based quality control system. Eriksson, Mats, January 2020.
Microservice architecture has emerged as a new way to build large, complex applications while removing some of the problems that exist in a monolithic counterpart. While this improves agility, resilience, maintainability and scalability within the application, other problems become predominant, such as performance. This case study aims to provide more clarity on this matter by comparing a microservice architecture with a monolithic architecture. By conducting several experiments on two self-developed systems, it was found that a microservice architecture will most likely show lower performance in terms of throughput and latency on HTTP requests that trigger internal communication between services. On small, intensive HTTP requests with minimal internal communication, the difference between the architectures is so small it can almost be neglected. A microservice architecture also brings other challenges that a company must take into account, such as load balancing, caching and orchestration, which are beneficial for performance.
2.
A Comparison Between the Quality Characteristics of Two Microservice Applications. Bahnan, Filip, January 2021.
With the rise of cloud computing and the migration to web-based applications, scalable systems have become highly desirable. And while developing software is hard, designing a scalable system is even harder. The microservice architecture is an attempt to improve scalability, but it may introduce additional challenges. In order to implement the microservice architecture correctly, it is important to understand how the different mechanisms used in the architecture affect the quality of the application. The purpose of this research is to show how to evaluate microservice applications and how much they can differ from each other. A literature study and an architectural analysis are performed by reviewing research related to web applications and microservices. Subsequently, empirical data is collected by evaluating and comparing two different microservice applications based on their quality characteristics. The results of the literature study indicate that performance efficiency, compatibility, reliability, security, maintainability and portability are the most relevant quality characteristics of the microservice architecture. Furthermore, the architectural analysis describes how microservices affect these quality characteristics. Lastly, the evaluation showed that different approaches can significantly alter the strength of the different characteristics. For this specific comparison between the two selected applications, the biggest differentiating factor was found to be asynchronous versus synchronous messaging. To conclude, the results show that it is possible to evaluate a microservice application by its qualities. Additionally, while microservice applications may use completely different technologies, the fundamental concept behind them remains the same. What differs is the approaches used and how they affect the quality characteristics.
3.
Comparative Study of REST and gRPC for Microservices in Established Software Architectures. Johansson, Martin; Olivos, Isabella, January 2023.
This study compares two commonly used communication architectural styles for distributed systems, REST and gRPC. With the increase in microservice usage when migrating from monolithic structures, network performance plays a significantly larger role. Companies rely on their users, who demand higher performance from applications to enhance their experience. This study aims to determine which of these frameworks responds faster in different scenarios. We performed four tests that reflect real-life scenarios within an established API, along with baseline performance tests, to evaluate them. The results imply that gRPC performs better than REST as the size of the transmitted data grows. The study provides a brief understanding of how REST performs compared to newer frameworks and shows that exploring new options is valuable. A more in-depth evaluation is needed to further understand the different factors that influence performance.
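As a rough illustration of the kind of response-time measurement the study describes, the sketch below times repeated calls and reports median and mean latency. It is not the thesis's actual test harness: the endpoint URL is a placeholder, and a gRPC call would be benchmarked the same way once stubs have been generated from a .proto definition.

```python
# Minimal response-time benchmark sketch (not the thesis's actual harness).
# The URL below is a hypothetical placeholder; a gRPC call would be wrapped
# in the same kind of callable once stubs are generated with grpcio-tools.
import statistics
import time

import requests


def benchmark(call, runs=100):
    """Time `call()` repeatedly and return (median, mean) latency in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        call()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples), statistics.mean(samples)


if __name__ == "__main__":
    # REST: a plain HTTP GET against a placeholder endpoint.
    rest_median, rest_mean = benchmark(
        lambda: requests.get("http://localhost:8080/api/orders/42", timeout=5)
    )
    print(f"REST median={rest_median:.2f} ms  mean={rest_mean:.2f} ms")
    # gRPC: the equivalent stub call (e.g. stub.GetOrder(request)) would be
    # passed to benchmark() in the same way.
```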
4.
Achieving a Reusable Reference Architecture for Microservices in Cloud Environments. Leo, Zacharias, January 2019.
Microservices are a new trend in application development. They allow big monolithic applications to be broken down into smaller parts that can be updated and scaled independently. However, there are still many uncertainties when it comes to the standards of microservices, which can lead to costly and time-consuming creations or migrations of system architectures. One of the more common ways of deploying microservices is through the use of containers and a container orchestration platform, most commonly the open-source platform Kubernetes. In order to speed up the creation or migration, it is possible to use a reference architecture that acts as a blueprint to follow when designing and implementing the architecture. Using a reference architecture leads to more standardized architectures, which in turn are more time and cost effective. This thesis proposes such a reference architecture to be used when designing microservice architectures. The goal of the reference architecture is to provide a product that meets the needs and expectations of companies that already use microservices or might adopt microservices in the future. In order to achieve the goal of the thesis, the work was divided into three main phases. First, a questionnaire was conducted and sent out to be answered by experts in the area of microservices or system architectures. Second, literature studies were made on the state of the art and practice of reference architectures and microservice architectures. Third, studies were made on the Kubernetes components found in the Kubernetes documentation, which were evaluated and chosen depending on how well they reflected the needs of the companies. The thesis finally proposes a reference architecture with components chosen according to the needs and expectations of the companies identified through the questionnaire.
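The thesis defines its reference architecture in terms of Kubernetes components rather than code. Purely as an illustration of what declaring one such component looks like programmatically, the sketch below creates a Deployment for a hypothetical service with the official Kubernetes Python client; the service name, image, namespace, and replica count are assumptions made for the example, not values taken from the thesis.

```python
# Illustrative only: declaring one building block (a Deployment) of a
# Kubernetes-based architecture with the official Python client.
# Service name, image, namespace and replica count are hypothetical.
from kubernetes import client, config


def build_deployment(name: str, image: str, replicas: int = 2) -> client.V1Deployment:
    """Return a Deployment object for a single stateless microservice."""
    container = client.V1Container(
        name=name,
        image=image,
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    pod_template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=replicas,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=pod_template,
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=name),
        spec=spec,
    )


if __name__ == "__main__":
    config.load_kube_config()  # uses the local kubeconfig
    apps = client.AppsV1Api()
    apps.create_namespaced_deployment(
        namespace="default", body=build_deployment("orders", "example/orders:1.0")
    )
```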
5.
Real-time auto-test monitoring system. Blixt, Fanny, January 2021.
At Marginalen Bank, there are several microservices containing endpoints that are covered by test automation. The documentation of which microservices and endpoints are covered by automated tests is currently done manually and has proven to contain mistakes. In the documentation, the test coverage for all microservices together and for every individual microservice is presented. Marginalen Bank needs a way to automate this process with a system that can take care of test coverage documentation and present the calculated data. Therefore, the purpose of this research is to find a way to create a real-time auto-test monitoring system that automatically detects and monitors microservices, endpoints, and test automation in order to document and present test automation coverage on a website. The system is required to detect changes and update the documentation daily so that it stays accurate.

The implemented system that detects and documents the test automation coverage is called Test Autobahn. For the system to detect all microservices, a custom hosted service was implemented that registers microservices. Every microservice with the custom hosted service installed registers with Test Autobahn when deployed on a server. For the system to detect all endpoints of each microservice, a custom middleware was implemented that exposes all endpoints of the microservice it is installed in. To let microservices install these and get registered, a NuGet package containing the custom hosted service and the custom middleware was created. To detect test automations, custom attribute models were created that are meant to be inserted into each test automation project. The custom attributes are placed on every test class and method within a project to mark which microservice and endpoint is being tested by every automated test. The attributes of a project can be read through the assembly. To read the custom attributes within every test automation project, a console application called Test Autobahn Automation Detector (TAAD) was implemented. TAAD reads the assembly to detect the test automations and sends them to Test Autobahn, which couples the found test automations to the corresponding microservices and endpoints. TAAD is installed and run in the Azure DevOps build pipeline of each test automation project to register the test automations.

To detect changes and update the test coverage documentation daily, a Quartz.NET hosted service is used. With Quartz.NET implemented, Test Autobahn can execute a specified job on a schedule. Within the job, Test Autobahn detects microservices and endpoints and calculates the test automation coverage for that detection. The test coverage calculated from the latest detection is presented on the webpage, containing both the test coverage for all microservices together and the test coverage for each individual microservice. According to the evaluations, the system seems to function as anticipated, and the documentation displays the expected data.
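Test Autobahn itself is built on .NET, with C# custom attributes read from the assembly. As a language-neutral sketch of the same tagging idea, the hypothetical Python code below marks test functions with the microservice and endpoint they cover and then collects those marks by introspection; none of the names correspond to the actual Marginalen Bank implementation.

```python
# A language-neutral analogue of the attribute-based tagging described above.
# The thesis implements this with C# custom attributes read via the assembly;
# here, hypothetical decorators attach the same metadata to test functions.
import inspect
import sys


def covers(microservice: str, endpoint: str):
    """Mark a test as covering one endpoint of one microservice."""
    def decorator(func):
        func._covers = {"microservice": microservice, "endpoint": endpoint}
        return func
    return decorator


@covers("accounts", "GET /accounts/{id}")
def test_get_account():
    ...  # the actual automated test would go here


def collect_coverage(module):
    """Return the (microservice, endpoint) pairs tagged in a test module."""
    pairs = set()
    for _, func in inspect.getmembers(module, inspect.isfunction):
        meta = getattr(func, "_covers", None)
        if meta:
            pairs.add((meta["microservice"], meta["endpoint"]))
    return pairs


if __name__ == "__main__":
    print(collect_coverage(sys.modules[__name__]))
```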
6.
Hybrid Composition of Microservices: A Metrics-based Analysis. Hasan, Razibul, 24 August 2023.
"Microservices" is an architectural and organizational style in software design and development in which there is a composition mechanism for independent microservices to call, communicate, and message each other within an application. The microservices composition approach makes design easier to scale, faster to develop, and can accelerate the introduction of new features into applications. To satisfy business requirements, selecting the proper composition style is important for software development; otherwise, application development may fail.
The objective of this research is to investigate the hybrid method for composing microservices and compare it with other composition approaches (choreography and orchestration), using quality metrics from the software engineering and business process modeling literature. More precisely, we make use of coupling, cohesion, and scalability metrics to analyze BPMN models representing e-commerce scenarios modeled as microservice compositions.
This thesis follows five steps: research problem identification and objectives, requirement analysis and system design, model design and development, model testing and deployment, and evaluation of our BPMN models representing microservice compositions. We develop multiple BPMN workflows as artifacts to analyze choreography, orchestration, and hybrid styles for the microservice composition of e-commerce scenarios. We propose several hybrid models by integrating orchestrations and choreographies in the same workflow.
We created a series of small, mid-sized, and end-to-end workflows of e-commerce scenarios. At the tool level, we use the Camunda Modeler, Camunda Platform 8 (as the automation process engine), and Amazon Web Services (AWS) to design, develop, and deploy our models.
Finally, we use our calculations of the coupling, cohesion, and scalability measures to provide an understanding of modeling microservice choreography, orchestration, and hybrid approaches, and we discuss when to use a specific approach for microservice composition. From the evaluation, we found that our proposed hybrid models are less tightly coupled than those modeled using orchestration and choreography. However, we also discovered that the orchestration style offers better scalability and a lower ratio of coupling to cohesion compared to the choreography and hybrid approaches.
7.
From monolithic architectural style to microservice one: structure-based and task-based approaches. Selmadji, Anfel, 3 October 2019.
Software technologies are constantly evolving to facilitate the development, deployment, and maintenance of applications in different areas. In parallel, these applications evolve continuously to guarantee an adequate quality of service, and they become more and more complex. Such evolution often involves increased development and maintenance costs, which can become even higher when these applications are deployed on recent execution infrastructures such as the cloud. Nowadays, reducing these costs and improving the quality of applications are central objectives of software engineering. Recently, microservices have emerged as an example of a technology and architectural style that helps to achieve these objectives.

While microservices can be used to develop new applications, there are monolithic applications (i.e., monoliths) built as a single unit whose owners (e.g., companies) want to maintain them and deploy them in the cloud. In this case, it is common to consider rewriting these applications from scratch or migrating them towards recent architectural styles. Rewriting an application or migrating it manually can quickly become a long, error-prone, and expensive task. An automatic migration appears as an evident solution.

The ultimate aim of this dissertation is to contribute to automating the migration of monolithic object-oriented (OO) applications to microservices. This migration consists of two steps: microservice identification and microservice packaging. We focus on microservice identification based on source code analysis. Specifically, we propose two approaches.

The first identifies microservices from the source code of a monolithic OO application, relying on code structure, data accesses, and software architect recommendations. The originality of this approach can be viewed from three aspects. Firstly, microservices are identified based on the evaluation of a well-defined function measuring their quality. This function relies on metrics reflecting the "semantics" of the concept "microservice". Secondly, software architect recommendations are exploited only when they are available. Finally, two algorithmic models have been used to partition the classes of an OO application into microservices: hierarchical clustering and a genetic algorithm.

The second approach extracts from OO source code a workflow that can be used as input to some existing microservice identification approaches. A workflow describes the sequencing of tasks constituting an application according to two formalisms: control flow and/or data flow. Extracting a workflow from source code requires the ability to map OO concepts onto workflow ones.

To validate both approaches, we implemented two prototypes and conducted experiments on several case studies. The identified microservices were evaluated qualitatively and quantitatively, and the extracted workflows were evaluated manually against test suites. The obtained results show, respectively, the relevance of the identified microservices and the correctness of the extracted workflows.
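The abstract names hierarchical clustering as one of the two algorithmic models used for identification. The sketch below illustrates only that step, assuming a symmetric class-coupling matrix has already been extracted from the monolith's source code; the matrix, class names, and cluster count are invented for the example, and the thesis's own quality function and metrics are not reproduced.

```python
# Minimal illustration of the clustering step only: grouping the classes of a
# monolith into microservice candidates from a precomputed coupling matrix.
# The matrix, class names and cluster count are invented for this example.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

classes = ["Order", "OrderLine", "Invoice", "Payment", "Customer", "Address"]

# coupling[i][j]: strength of structural/data relations between two classes.
coupling = np.array([
    [0.0, 0.9, 0.2, 0.1, 0.3, 0.0],
    [0.9, 0.0, 0.1, 0.1, 0.2, 0.0],
    [0.2, 0.1, 0.0, 0.8, 0.1, 0.0],
    [0.1, 0.1, 0.8, 0.0, 0.1, 0.0],
    [0.3, 0.2, 0.1, 0.1, 0.0, 0.7],
    [0.0, 0.0, 0.0, 0.0, 0.7, 0.0],
])

# Turn coupling into a distance: strongly coupled classes should end up together.
distance = 1.0 - coupling
np.fill_diagonal(distance, 0.0)

tree = linkage(squareform(distance), method="average")
labels = fcluster(tree, t=3, criterion="maxclust")

candidates = {}
for cls, label in zip(classes, labels):
    candidates.setdefault(label, []).append(cls)
print(candidates)  # e.g. {1: ['Order', 'OrderLine'], 2: ['Invoice', 'Payment'], ...}
```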
8.
Development and evaluation of microservices to strengthen web cookies in web analytics tools. Roth, Benjamin, January 2020.
Data is now classified as the world's most valuable resource. The growing, large-scale usage of the internet is a contributing factor to the huge amounts of data generated and flourishing in our digital environments. By analyzing data collected from the internet, insights into and understanding of internet users' behavioral patterns can be extracted. In recent years, web tracking and web analytics have therefore become a key activity for many players on the internet. With effective work in these areas, internet players can create an advantage over their competitors. As data becomes more and more sought after, the demands on privacy for internet users are also increasing. In recent years, this topic has become even more relevant after it was found that the privacy of internet users is often violated in the effort to collect data. Many web browsers have therefore actively begun to protect their users, for instance by handling cookies more restrictively. This has caused problems for web analytics tools, as they use cookies to identify, bind together and collect data about users' interactions and behavior patterns on a website. The purpose of this study is to develop, evaluate and compare methods that strengthen the cookies used by web analytics tools. Using the methods developed in the study, the goal is to improve the quality of the data collected by the tools. The study was conducted using a qualitative research method in five phases. In order to evaluate the methods developed in the study, an evaluation model was introduced, through which the data underlying the study's results were generated. The results show that with the help of microservices, in the form of a proxy server, it is possible to achieve a significant improvement in the quality of the data collected by web analytics tools.
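The abstract does not describe the proxy server's internals. Purely to illustrate the first-party-proxy idea, the sketch below relays analytics hits to a backend while issuing the visitor identifier as a server-set first-party cookie; the routes, cookie name, lifetime, and backend URL are assumptions made for the example.

```python
# Illustrative first-party proxy sketch, not the thesis's implementation.
# Routes, cookie name, lifetime and backend URL are assumptions. The idea:
# the analytics hit is relayed server-side and the visitor identifier lives
# in a first-party, server-set cookie.
import uuid

import requests
from flask import Flask, make_response, request

app = Flask(__name__)
ANALYTICS_BACKEND = "https://analytics.example.com/collect"  # hypothetical
COOKIE_NAME = "visitor_id"
ONE_YEAR = 60 * 60 * 24 * 365


@app.route("/collect", methods=["GET", "POST"])
def collect():
    # Reuse the existing first-party cookie, or mint a new identifier.
    visitor_id = request.cookies.get(COOKIE_NAME) or str(uuid.uuid4())

    # Relay the hit to the analytics backend together with the stable ID.
    payload = request.get_json(silent=True) or request.args.to_dict()
    payload["visitor_id"] = visitor_id
    requests.post(ANALYTICS_BACKEND, json=payload, timeout=5)

    # Re-issue the cookie server-side so it is kept as first-party data.
    response = make_response("", 204)
    response.set_cookie(
        COOKIE_NAME, visitor_id, max_age=ONE_YEAR, secure=True, samesite="Lax"
    )
    return response


if __name__ == "__main__":
    app.run(port=8000)
```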
9.
Microservices in the context of a fast-growing company. Händel, Ludwig, January 2020.
During the last decade, there has been a progressive shift towards a more modularized and distributed way of developing software, using microservices, in order to react faster to a changing environment. This has forced companies to adjust their software organizations to utilize the full capabilities of microservices. However, this process is no easy task. The way teams are formed, their size, their communication methods, and the level of freedom they have to innovate can strongly impact the code produced. Furthermore, at the time of this research there is still very limited qualitative research on how companies work with team autonomy and how this affects the transfer of knowledge within the company. Therefore, the purpose of this study was to investigate, from an industrial perspective, how fast-growing companies work with microservices on an organizational level and how team autonomy affects knowledge sharing within the organization. To achieve this purpose, a multi-case study was conducted across nine different companies. The result shows that companies try to achieve as much team autonomy as possible by forming self-managed cross-functional teams. However, autonomy needs to be balanced against the challenges that arise from growing fast, which can force a company to move to functional teams. To compensate for this lack of natural communication, and to improve knowledge sharing in general, the participating companies had implemented several activities, of which weekly sessions were one frequently used type.
10.
Microservice Migration Patterns and how Continuous Integration and Continuous Delivery are affected: A Case study of Indicio’s journey towards microservices. Liu, Kasper, January 2021.
Microservices are an architectural design that promises a more elastic system, where each microservice can be allocated compute power according to demand. Through the separation of components, each microservice can have its own hardware or cloud setup. As a result, the code becomes more maintainable through smaller repositories. Development and Operations (DevOps) is a set of best practices to improve software development and operations. Two important components of DevOps are Continuous Integration (CI) and Continuous Delivery (CD). CI is a set of practices that aims to automate testing and increase development velocity by continuously integrating code changes, while CD aims to streamline the deployment process of the code, enabling a shorter time to market. When migrating a monolithic codebase towards a microservice architecture, one faces many decisions that can have a deep impact on the whole organization, and from a CI/CD perspective some of these decisions affect the efficiency of the CI/CD pipeline. This thesis investigated how Indicio’s CI/CD pipeline changed when going from a monolith towards a microservice architecture. It documents the decisions Indicio made along the way and investigates how the CI build time and CD deployment time were affected. The result showed that Indicio’s decision to keep the new microservice in the same repository added 44% to the median build time. The time increase was acceptable, since it only amounted to an average increase of 20 seconds in median build time. The decision to separate the CD into two independent pipelines, one for the old monolith and one for the new microservice, did not affect the deployment time by any considerable margin. The new microservice was deployed to Microsoft Azure to take advantage of elastic compute power. The big advantage of Azure from a CD perspective was the blue-green deployment method, which resulted in zero downtime.