  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Market driven elastic secure infrastructure

Tikale, Sahil 30 May 2023 (has links)
In today's data centers, a combination of factors leads to the static allocation of physical servers and switches into dedicated clusters, such that it is difficult to add or remove hardware from these clusters for short periods of time. This silofication of the hardware leads to inefficient use of clusters. This dissertation proposes a novel architecture for improving the efficiency of clusters by enabling them to add or remove bare-metal servers for short periods of time. We demonstrate, by implementing a working prototype of the architecture, that such silos can be broken and that it is possible to share servers between clusters that are managed by different tools, have different security requirements, and are operated by tenants of the data center that may not trust each other. Physical servers and switches in a data center are grouped for a combination of reasons. They are used for different purposes (staging, production, research, etc.); host applications required for servicing specific workloads (HPC, Cloud, Big Data, etc.); and/or are configured to meet stringent security and compliance requirements. Additionally, the provisioning systems and tools used to manage these clusters, such as OpenStack Ironic, MaaS, and Foreman, take control of the servers, making it difficult to add or remove hardware from their control. Moreover, these clusters are typically stood up with sufficient capacity to meet anticipated peak workload. This leads to inefficient usage: the clusters are under-utilized during off-peak hours, and when demand exceeds capacity they suffer degraded quality of service (QoS) or may violate service level objectives (SLOs). Although today's clouds offer huge benefits in terms of on-demand elasticity, economies of scale, and a pay-as-you-go model, many organizations are reluctant to move their workloads to the cloud. 
Organizations that (i) need total control of their hardware, (ii) have custom deployment practices, (iii) need to meet stringent security and compliance requirements, or (iv) do not want to pay the high costs of running workloads in the cloud prefer to own their hardware and host it in a data center. This includes a large section of the economy, including financial companies, medical institutions, and government agencies, that continues to host its own clusters outside of the public cloud. The observation that all the clusters may not undergo peak demand at the same time provides an opportunity to improve their efficiency by sharing resources between them. This dissertation describes the design and implementation of the Market Driven Elastic Secure Infrastructure (MESI), both as an alternative to the public cloud and as an architecture for the lowest layer of the public cloud that improves its efficiency. It allows mutually non-trusting physically deployed services to share the physical servers of a data center efficiently. The approach proposed here is to build a system composed of a set of services, each fulfilling a specific functionality. A tenant of MESI has to trust only a minimal functionality of the provider that offers the hardware resources; the rest of the services can be deployed by each tenant themselves. MESI is based on the idea of enabling tenants to share hardware they own with tenants they may not trust, and between clusters with different security requirements. The architecture gives tenants control and freedom of choice over whether to deploy and manage these services themselves or use them from a trusted third party. MESI services fit into three layers that build on each other to provide: (1) Elastic Infrastructure, (2) Elastic Secure Infrastructure, and (3) Market-Driven Elastic Secure Infrastructure. 
(1) The Hardware Isolation Layer (HIL), the bottommost layer of MESI, is designed for moving nodes between the multiple tools and schedulers used for managing clusters. HIL controls the layer-2 switches and bare-metal servers such that tenants can elastically adjust the size of their clusters in response to the changing demand of the workload. It enables the movement of nodes between clusters with minimal to no modifications to the tools and workflows used for managing these clusters. (2) The Elastic Secure Infrastructure (ESI) builds on HIL to enable sharing of servers between clusters with different security requirements and mutually non-trusting tenants of the data center. ESI enables the borrowing tenant to minimize its trust in the node provider and to take control of the trade-offs between cost, performance, and security. This enables sharing of nodes between tenants that are not only part of the same organization but may belong to different organizations in a co-located data center. (3) The Bare-Metal Marketplace is an incentive-based system that uses economic principles of the marketplace to encourage tenants to share their servers with others, not just when they do not need them but also when others need them more. It gives tenants the ability to define their own cluster objectives and sharing constraints, and the freedom to decide the number of nodes they wish to share with others. MESI is evaluated using prototype implementations at each layer of the architecture. (i) The HIL prototype, implemented in only 3000 lines of code (LOC), is able to support many provisioning tools and schedulers with little to no modification, adds no overhead to the performance of the clusters, and is in active production use at the MOC managing over 150 servers and 11 switches. (ii) The ESI prototype builds on the HIL prototype and adds to it an attestation service, a provisioning service, and a deterministically built open-source firmware. 
Results demonstrate that it is possible to build a cluster that is secure, elastic, and fairly quick to set up, with the tenant requiring only minimal trust in the provider for the availability of the node. (iii) The MESI prototype demonstrates the feasibility of a one-of-a-kind multi-provider marketplace for trading bare-metal servers in which the providers also use the nodes. The evaluation of the MESI prototype shows that all the clusters benefit from participating in the marketplace. Agents trade bare-metal servers in the marketplace to meet the requirements of their clusters. Results show that, compared to operating as silos, individual clusters see a 50% improvement in total work done, up to a 75% reduction in time spent waiting in queues, and up to a 60% improvement in the aggregate utilization of the test bed. This dissertation makes the following contributions: (i) It defines the architecture of MESI, which allows mutually non-trusting tenants of the data center to share resources between clusters with different security requirements. (ii) It demonstrates that it is possible to design a service that breaks the silos of static cluster allocation yet has a small Trusted Computing Base (TCB) and adds no overhead to the performance of the clusters. (iii) It provides a unique architecture that puts the tenant in control of its own security and minimizes the trust needed in the provider for sharing nodes. (iv) It presents a working prototype of a multi-provider marketplace for bare-metal servers, a first proof-of-concept demonstrating that it is possible to trade real bare-metal nodes at practical time scales, such that moving nodes between clusters is fast enough to get useful work done. (v) Finally, results show that it is possible to encourage even mutually non-trusting tenants to share their nodes with each other without any central authority making allocation decisions. 
Many smart, dedicated engineers and researchers have contributed to this work over the years. I have jointly led the efforts to design the HIL and ESI layers, and led the design and implementation of the bare-metal marketplace and the overall MESI architecture.
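The efficiency argument behind the marketplace, that clusters whose peaks do not coincide gain by trading nodes, can be sketched with a toy model. This is only an illustration, not the dissertation's implementation; the capacities, demand schedule, and lending cap below are invented:

```python
def silo_work(schedule, caps):
    """Work done per time step when each cluster runs only on its own nodes."""
    return sum(min(d, c) for step in schedule for d, c in zip(step, caps))

def shared_work(schedule, caps, max_lend):
    """Work done when each cluster may lend up to `max_lend` idle nodes per step."""
    total = 0
    for step in schedule:
        spare = [min(max_lend, max(0, c - d)) for d, c in zip(step, caps)]
        pool = sum(spare)  # nodes offered on the marketplace this step
        for d, c in zip(step, caps):
            borrowed = min(max(0, d - c), pool)
            pool -= borrowed
            total += min(d, c) + borrowed
    return total

# Two 5-node clusters with anti-correlated peaks (demand per time step).
schedule = [(8, 2), (2, 8)]
print(silo_work(schedule, (5, 5)))       # 14 units of work as silos
print(shared_work(schedule, (5, 5), 3))  # 20 units when lending up to 3 nodes
```

Even this crude model shows the marketplace effect the evaluation quantifies: total work rises without adding any hardware, purely by moving idle nodes to where demand exceeds capacity.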
52

Designing Microservices with Use Cases and UML

Akhil Reddy, Bommareddy 03 August 2023 (has links)
No description available.
53

Managing Microservices with a Service Mesh : An implementation of a service mesh with Kubernetes and Istio

Mara Jösch, Ronja January 2020 (has links)
The adoption of microservices facilitates extending computer systems in size, complexity, and distribution. Alongside their benefits, they introduce the possibility of partial failures. Besides focusing on the business logic, developers have to tackle the cross-cutting concerns of service-to-service communication, which now define the application's reliability and performance. Currently, developers use libraries embedded into the application code to address these concerns. However, this increases the complexity of the code and requires the maintenance and management of various libraries. The service mesh is a relatively new technology that may enable developers to stay focused on their business logic. This thesis investigates one of the available service meshes, Istio, to identify its benefits and limitations. The main benefits found are that Istio adds resilience and security, allows features that are currently difficult to implement, and enables a cleaner structure and a standard implementation of features within and across teams. The drawbacks are that it degrades performance by adding CPU usage, memory usage, and latency. Furthermore, the main disadvantage of Istio is its limited testing tools. Based on the findings, the Webcore Infra team of the company can make a more informed decision on whether or not Istio should be introduced.
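One concrete example of the resilience a mesh like Istio adds is automatic retries, which the sidecar proxies apply without touching application code. A minimal sketch of the idea in plain Python (in a real mesh the policy lives in Envoy/Istio configuration, not in the service itself):

```python
import time

def with_retries(call, attempts=3, base_delay=0.01):
    """Retry a failing remote call with exponential backoff, the way a
    sidecar proxy would, keeping the retry policy out of the business logic."""
    for i in range(attempts):
        try:
            return call()
        except ConnectionError:
            if i == attempts - 1:
                raise  # budget exhausted: surface the failure
            time.sleep(base_delay * (2 ** i))

# A flaky downstream service that fails twice, then succeeds.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky))  # "ok" after two transparent retries
```

The point of the mesh is that this wrapper disappears from the code entirely: the same behavior is declared once in mesh configuration and enforced for every service.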
54

Detecting and mitigating software security vulnerabilities through secure environment programming

Blair, William 26 March 2024 (has links)
Adversaries continue to exploit software in order to infiltrate organizations’ networks, extract sensitive information, and hijack control of computing resources. Given the grave threat posed by unknown security vulnerabilities, continuously monitoring for vulnerabilities during development and evidence of exploitation after deployment is now standard practice. While the tools that perform this analysis and monitoring have evolved significantly in the last several decades, many approaches require either directly modifying a program’s source code or its intermediate representation. In this thesis, I propose methods for efficiently detecting and mitigating security vulnerabilities in software without requiring access to program source code or instrumenting individual programs. At the core of this thesis is a technique called secure environment programming (SEP). SEP enhances execution environments, which may be CPUs, language interpreters, or computing clouds, to detect security vulnerabilities in production software artifacts. Furthermore, environment based security features allow SEP to mitigate certain memory corruption and system call based attacks. This thesis’ key insight is that a program’s execution environment may be augmented with functionality to detect security vulnerabilities or protect workloads from specific attack vectors. I propose a novel vulnerability detection technique called micro-fuzzing which automatically detects algorithmic complexity (AC) vulnerabilities in both time and space. The detected bugs and vulnerabilities were confirmed by vendors of real-world Java libraries. Programs implemented in memory unsafe languages like C/C++ are popular targets for memory corruption exploits. In order to protect programs from these exploits, I enhance memory allocators with security features available in modern hardware environments. 
I use efficient hash algorithm implementations and memory protection keys (MPKs) available on recent CPUs to enforce security policies on application memory. Finally, I deploy a microservice-aware policy monitor (MPM) that detects security policy deviations in container telemetry. These security policies are generated from binary analysis over container images. Embedding MPMs derived from binary analysis in micro-service environments allows operators to detect compromised components without modifying container images or incurring high performance overhead. Applying SEP at varying levels of the computing stack, from individual programs to popular micro-service architectures, demonstrates that SEP efficiently protects diverse workloads without requiring program source or instrumentation.
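The micro-fuzzing idea, automatically probing a routine with inputs of growing size and watching how its cost scales, can be illustrated with a simple time-based sketch. The thesis's actual implementation targets Java libraries; the driver, the quadratic stand-in function, and the sizes below are invented for illustration:

```python
import time

def microfuzz(func, make_input, sizes):
    """Run `func` on inputs of increasing size and record wall-clock cost.
    A superlinear blow-up across the sizes hints at an algorithmic
    complexity (AC) vulnerability."""
    costs = []
    for n in sizes:
        arg = make_input(n)
        start = time.perf_counter()
        func(arg)
        costs.append(time.perf_counter() - start)
    return costs

# A deliberately quadratic routine standing in for a vulnerable library call.
def quadratic(items):
    return sum(1 for a in items for b in items if a == b)

costs = microfuzz(quadratic, lambda n: list(range(n)), [200, 400, 800])
# For O(n^2) code, doubling the input size roughly quadruples the cost.
```

A space variant of the same loop would record peak memory instead of elapsed time, matching the thesis's detection of AC vulnerabilities in both dimensions.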
55

Monólise: Uma técnica para decomposição de aplicações monolíticas em microsserviços

Rocha, Diego Pereira da 17 September 2018 (has links)
The recurring need for companies to deliver their software quickly and continuously, combined with the high level of demand from users, is making the industry in general rethink how applications should be developed for the current market. In this scenario, microservices is the architectural style used to modernize monolithic applications. However, the process of decomposing a monolithic application into microservices is still a challenge that needs to be investigated, since there is currently no standardized framework in industry for decomposing applications. Finding a technique that allows defining the degree of granularity of a microservice is also a topic of active discussion in Software Engineering. Based on these considerations, this work proposes Monólise, a technique that uses an algorithm called Monobreak, which makes it possible to decompose a monolithic application based on its functionalities and to define the degree of granularity of the microservices to be generated. In this research, Monólise was evaluated through a case study. The evaluation consisted of comparing the decomposition performed by Monólise with the decomposition performed by a specialist in the target application used in the case study. This comparison made it possible to evaluate the effectiveness of Monólise through eight realistic decomposition scenarios. The result of this evaluation revealed the similarities and differences between decomposing a monolithic application into microservices manually and with a semi-automatic technique. The development of this work demonstrated that the Monólise technique has great potential in the area of Software Engineering with respect to the decomposition of applications. In addition, the study's findings showed that the technique could encourage developers and architects in the journey of modernizing their monolithic applications into microservices, as well as reduce the mistakes made in this activity by professionals with little experience in decomposing applications.
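A functionality-driven decomposition of the kind Monólise automates can be sketched as reachability over the monolith's internal dependency graph. This is a hypothetical illustration, not the Monobreak algorithm itself, and the module names are invented:

```python
def decompose(uses, seeds):
    """uses: module -> modules it depends on; seeds: service name -> entry
    module of a chosen functionality. Each service takes every module
    reachable from its seed; anything else stays in the residual monolith."""
    services = {}
    for name, entry in seeds.items():
        reachable, stack = set(), [entry]
        while stack:
            mod = stack.pop()
            if mod not in reachable:
                reachable.add(mod)
                stack.extend(uses.get(mod, ()))
        services[name] = reachable
    return services

uses = {
    "checkout": {"cart", "payment"},
    "cart": {"db"},
    "payment": {"db"},
    "catalog": {"db"},
}
# Extracting the 'checkout' functionality pulls in cart, payment and db;
# 'catalog' stays behind in the monolith.
print(decompose(uses, {"checkout-svc": "checkout"}))
```

Granularity then becomes a choice of seeds: fewer, broader seeds yield coarse services, while one seed per functionality yields the finest cut.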
56

Message brokers in a microservice architecture / Meddelandemäklare i en mikrotjänstarkitektur

Antonio, Christian, Fredriksson, Björn January 2021 (has links)
The microservice architectural pattern refers to a system consisting of independently deployable services that communicate across networks. RabbitMQ is a popular message broker that can be used to make this communication possible. An alternative to this is Amazon Simple Queuing Service (SQS), a fully managed queuing service. By performing a literature and case study, two systems with a microservice architecture were developed: one using RabbitMQ to communicate between the services, and the other using Amazon SQS. The systems are compared with regard to message latency, ease of use, and maintainability. The results show that RabbitMQ provides much lower message latency than Amazon SQS. Amazon SQS is, however, both easier to maintain and easier to use than RabbitMQ.
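The message-latency comparison at the heart of this study can be instrumented along these lines: timestamp on publish, measure on consume. Here a `queue.Queue` stands in for the broker client (a real harness would use a RabbitMQ or SQS client instead; this sketch only shows the measurement shape):

```python
import queue
import threading
import time

def mean_latency(q, n=100):
    """Publish n timestamped messages and measure the delay each one
    experiences before the consumer receives it."""
    delays = []
    def consume():
        for _ in range(n):
            sent_at = q.get()
            delays.append(time.perf_counter() - sent_at)
    worker = threading.Thread(target=consume)
    worker.start()
    for _ in range(n):
        q.put(time.perf_counter())
    worker.join()
    return sum(delays) / n

print(f"mean latency: {mean_latency(queue.Queue()) * 1000:.3f} ms")
```

Swapping the in-process queue for a broker client turns this into the kind of probe that surfaces the RabbitMQ-versus-SQS latency gap the thesis reports.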
57

[en] A LIBRARY FOR DETERMINISTIC TESTS IN DISTRIBUTED SYSTEMS WITH ASYNCHRONOUS COMMUNICATION / [pt] UMA BIBLIOTECA PARA TESTES DETERMINÍSTICOS EM SISTEMAS DISTRIBUÍDOS COM COMUNICAÇÃO ASSÍNCRONA

PEDRO FELIPE SANTOS MAGALHAES 15 June 2023 (has links)
Nowadays, more and more developers are adopting the microservices architecture for the development of distributed systems. Usually in this type of architecture there is a message-queue service responsible for asynchronous communication between the microservices; a service widely used for this is Apache Kafka. In this asynchronous environment, integration tests for a given service become complex due to the difficulty of creating reproducible scenarios. In our work, we propose and evaluate the use of a library we developed in Go for the construction of integration tests for microservices that use Docker and Kafka, guaranteeing the ordering of events as described in the test script.
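The core idea, letting the test script rather than the broker decide when each event reaches a consumer so that interleavings are reproducible, can be sketched in a few lines. The thesis's library is written in Go and drives real Docker/Kafka containers; this stripped-down Python stand-in only illustrates the ordering guarantee:

```python
class ScriptedTopic:
    """An in-memory 'topic' that releases events only in the order the
    test script prescribes, regardless of the order they were published."""
    def __init__(self, script):
        self.script = list(script)  # exact delivery order the test requires
        self.pending = []           # events published but not yet released
    def publish(self, event):
        self.pending.append(event)
    def deliver_next(self):
        expected = self.script.pop(0)
        self.pending.remove(expected)  # raises ValueError if it never arrived
        return expected

topic = ScriptedTopic(["order-created", "payment-confirmed"])
topic.publish("payment-confirmed")  # arrives 'too early'
topic.publish("order-created")
assert topic.deliver_next() == "order-created"      # script order wins
assert topic.deliver_next() == "payment-confirmed"
```

Because delivery follows the script deterministically, a scenario that exposes a race runs the same way on every test execution.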
58

Abandoning Monolithic Architecture: Leaving an old paradigm for the possibilities of containerized microservices using an automated orchestration tool

Cardell, Sabina, Widén, Oscar January 2023 (has links)
Many large organizations, such as government entities and banks, operate with a monolithic application architecture, an old way of structuring applications. Several factors, including attracting and retaining talent, being scalable and flexible, and good service delivery, are driving these organizations to change toward a microservice-oriented architecture. Migrating large applications while simultaneously delivering services to clients or users is a large and challenging task. The problem is that there is insufficient research on how to work during this type of application architecture modernization while maintaining organizational stability. This thesis aims to better understand how organizational stability can be maintained during times of disruptive technological change in the workplace during the architectural transition period. The study used the following research question: How is organizational stability maintained in the transition period of architectural modernization when moving towards microservices? The study is based on a qualitative approach, in which one case was used to gather empirical material. The empirical material was collected through eight semi-structured interviews with employees in various roles at the Swedish agency performing a large-scale application architecture project: the containerization project. The data were analyzed using thematic analysis. The findings show that both preparatory and ongoing management contributions are essential for success. In the preparatory stages, factors related to risk-taking and managing the project workforce are essential to decide on. Once the project has started, it is crucial to actively work on change management efforts, be flexible, communicate with targeted information, and handle each specific obstacle carefully. The study also showed that the choice of technologies is not central to the project's success but a method to get there. The findings show that dividing a large plan into smaller projects, further divided into phases, is a success factor. The study contributes new insights to IT management and application architecture research.
59

The run-time impact of business functionality when decomposing and adopting the microservice architecture / Påverkan av körtid för system funktionaliteter då de upplöses och microservice architektur appliceras

Faradj, Rasti January 2018 (has links)
In line with the growth of software, code bases are getting bigger and more complex. As a result, the architectural patterns that systems rely upon are becoming increasingly important. Recently, decomposed architectural styles have become a popular choice. This thesis explores system behavior with respect to decomposing system granularity and the external communication between the resulting decomposed services. An e-commerce scenario was modeled and implemented at different granularity levels to measure the response time. In establishing the communication, both REST with HTTP and JSON and the gRPC framework were utilized. The results show that decomposition has an impact on run-time behavior and external communication. The highest granularity level, implemented with gRPC for the communication, adds 10 ms. In the context of how the web behaves today this can be interpreted as feasible, but whether it is theoretically desirable remains an open question.
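The run-time impact measured here can be framed with a naive cost model: each functionality extracted into its own service adds a network hop with serialization and transport overhead. A sketch under that assumption (the base time and per-hop cost are made-up parameters, not the thesis's measurements):

```python
def response_time_ms(base_ms, extracted_services, per_hop_ms):
    """Naive model: total response time = the monolith's compute time plus
    one remote hop per service extracted from the request's call path."""
    return base_ms + extracted_services * per_hop_ms

# The same e-commerce request at three granularity levels.
for hops in (0, 2, 5):  # monolith, coarse-grained, fine-grained
    print(hops, response_time_ms(20.0, hops, 2.0), "ms")
```

The model makes the trade-off explicit: the added latency grows linearly with granularity, so whether the finest cut is acceptable depends on the per-hop cost of the chosen transport (gRPC versus REST/JSON in the thesis's experiments).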
60

Information visualization of microservice architecture relations and system monitoring : A case study on the microservices of a digital rights management company - an observability perspective / Informationsvisualisering av mikrotjänsters relationer och system monitorering : En studie angående mikrotjänster hos ett förvaltningsföretag av digitala rättigheter - ett observerbarhetsperspektiv

Frisell, Marcus January 2018 (has links)
90% of the data that exists today has been created over the last two years alone. Part of this data is created and collected by machines sending logs of internal measurements, to be analyzed and used to evaluate service incidents. However, efficiently comprehending datasets requires more than just access to data; as Richard Hamming puts it, "The purpose of computing is insight, not numbers." A tool to simplify the apprehension of complex datasets is information visualization, which works by transforming layers of information into a visual medium, enabling human perception to quickly extract valuable information and recognize patterns. This was an experimental, design-oriented research study, set out to explore whether an information visualization of microservice architecture relations combined with system health data could help developers at a Swedish digital rights management company (DRMC) find the root causes of incidents and increase observability and decision support, i.e., simplify the incident handling process. To explore this, a prototype was developed, and user tests consisting of a set of tasks as well as a semi-structured interview were carried out with ten developers at DRMC. The results concluded that the proposed solution provided a welcome overview of service health and dependencies, but that it lacked the ability to effectively focus on particular services, essentially making it difficult to find root causes. Visualizations like this seem best suited for overview, rather than focused, comprehension. Further research could be conducted on how to efficiently render large complex datasets while maintaining focus, and on how to account for external factors.
