71

Framework to set up a generic environment for applications / Ramverk för uppsättning av generisk miljö för applikationer

Das, Ruben January 2021 (has links)
Infrastructure is a common word used to express the basic equipment and structures that are needed, for example, for a country or an organisation to function properly. The same concept applies in the field of computer science: without infrastructure, one would have problems operating software at scale. Provisioning and maintaining infrastructure through manual labour is a common occurrence in the "iron age" of IT. As the world progresses towards the "cloud age" of IT, systems are decoupled from physical hardware, enabling anyone who is software savvy to automate the provisioning and maintenance of infrastructure. This study aims to determine how a generic environment can be created for applications that can run on Unix platforms and how that underlying infrastructure can be provisioned effectively. The results show that by utilising OS-level virtualisation, also known as "containers", one can deploy and serve any application that can run on the Linux kernel. To further support realising the generic environment, hardware virtualisation was applied to provide the infrastructure needed to be able to use containers. This was done by provisioning a set of virtual machines on different cloud providers with a lightweight operating system that could support the required container runtime. To manage these containers at scale, a container orchestration tool was installed onto the cluster of virtual machines. To provision said environment effectively, the principles of infrastructure as code (IaC) were used to create a "blueprint" of the desired infrastructure. Using the metric mean time to environment (MTTE), it was noted that a cluster of virtual machines with a container orchestration tool installed onto it could be provisioned in under 10 minutes for four different cloud providers.
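As a hedged illustration of the MTTE metric described in this abstract, the sketch below times how long an assumed IaC blueprint takes to provision a cluster and become usable. The Terraform and kubectl invocations, the blueprint directory, and the readiness check are assumptions for illustration only, not details taken from the thesis.

```python
# Hedged sketch: estimate "mean time to environment" (MTTE) by timing how long an
# assumed Terraform blueprint takes to provision a cluster that answers kubectl.
import statistics
import subprocess
import time

def provision_once(workdir: str) -> float:
    """Apply the blueprint, wait until the orchestrator responds, return elapsed seconds."""
    start = time.monotonic()
    subprocess.run(["terraform", "apply", "-auto-approve"], cwd=workdir, check=True)
    # Crude readiness check: retry until the orchestrator reports its nodes.
    while subprocess.run(["kubectl", "get", "nodes"], capture_output=True).returncode != 0:
        time.sleep(5)
    return time.monotonic() - start

def mtte(workdir: str, runs: int = 3) -> float:
    """Mean time to environment over several provisioning runs."""
    times = []
    for _ in range(runs):
        times.append(provision_once(workdir))
        subprocess.run(["terraform", "destroy", "-auto-approve"], cwd=workdir, check=True)
    return statistics.mean(times)

if __name__ == "__main__":
    # "./blueprints/aws" is a hypothetical blueprint directory for one cloud provider.
    print(f"MTTE: {mtte('./blueprints/aws') / 60:.1f} minutes")
```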
72

Webbserver från ASP.NET 4.8 till Blazor Server / Web server from ASP.NET 4.8 to Blazor Server

Söderlund, Malin January 2021 (has links)
This report addresses the design of a web server with Blazor Server for the company FunRock and their mobile strategy game MMA Manager. The web server is intended to administer settings of the game's different components, where staff can, for example, search for a user or a user's belongings. The report is limited to the design of the most frequently used pages from the previous web server. Furthermore, the report presents a usability and accessibility analysis of the previous web server, intended to serve as a basis for the design of the new Blazor Server application. The purpose of converting to Blazor Server has been to contribute to more flexible hosting, faster development and better performance than the previous web server, which was written in ASP.NET 4.8. Finally, the later part of the report presents the author's more subjective analysis of the work performed, including reflections on the project's results.
73

IT-Grundschutz for Container Virtualisation with the New BSI Building Block SYS.1.6 / IT-Grundschutz für die Container-Virtualisierung mit dem neuen BSI-Baustein SYS.1.6

Haar, Christoph, Buchmann, Erik 07 February 2019 (has links)
Container virtualisation makes it possible to flexibly move applications to the cloud, administer them, migrate them between data centres, and so on. To do this, container virtualisation builds on a complex IT landscape in which hardware, operating system and applications are provided and used by different parties. IT security is therefore of great importance. With IT-Grundschutz, the German Federal Office for Information Security (BSI) provides a method for implementing appropriate protection measures in the IT environment. However, there is little experience with securing container virtualisation according to IT-Grundschutz: the Grundschutz compendium and the standards for risk analysis were only reintroduced in revised form in November 2017, and the building block SYS.1.6 on container virtualisation was only published as a community draft in May 2018. In this work, we examine how well the current IT-Grundschutz can be applied to a web shop that has been virtualised with Docker. In particular, we address the threat analysis, Docker-specific threats, and corresponding measures to avert these threats. Furthermore, we discuss how our findings can be generalised beyond the Docker scenario to container technology in general. We found that the building block SYS.1.6 of the Grundschutz compendium offers comprehensive guidance for securing containers. However, we identified two additional threats.
74

Analysis of Diameter Log Files with Elastic Stack / Analysering av Diameter log filer med hjälp av Elastic Stack

Olars, Sebastian January 2020 (has links)
There is a growing need for more efficient tools and services for log analysis, a need that comes from the ever-growing use of digital services and applications, each one generating thousands of lines of log event messages for the sake of auditing and troubleshooting. This thesis was initiated on behalf of one of the departments of the IT consulting company TietoEvry in Karlstad. The purpose of this thesis project was to investigate whether the log analysis service Elastic Stack would be a suitable solution for TietoEvry's need for a more efficient method of log event analysis. As part of this investigation, a small-scale deployment of Elastic Stack was created and used as a proof of concept. The investigation showed that Elastic Stack would be a suitable tool for the monitoring and analysis needs of TietoEvry. The final version of the deployment was, however, not able to fulfil all of the requirements initially set out by TietoEvry; this was mainly due to a lack of time rather than limitations of Elastic Stack.
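As a minimal sketch of the kind of log-event indexing and querying Elastic Stack enables, the example below uses the official Elasticsearch Python client (8.x) against a local node. The index name, field names and Diameter values are hypothetical placeholders, not taken from the thesis.

```python
# Hedged sketch: index one Diameter log event and query for recent failures.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Index one parsed log event (in a full Elastic Stack setup, Logstash/Beats would do this).
es.index(
    index="diameter-logs",
    document={
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "command_code": 272,   # Credit-Control
        "result_code": 5012,   # DIAMETER_UNABLE_TO_COMPLY
        "message": "CCR rejected by peer",
    },
)

# Query for failed requests in the last hour, newest first.
resp = es.search(
    index="diameter-logs",
    query={
        "bool": {
            "must": [{"range": {"result_code": {"gte": 3000}}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-1h"}}}],
        }
    },
    sort=[{"@timestamp": {"order": "desc"}}],
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["result_code"], hit["_source"]["message"])
```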
75

Integrating third-party APIs as a microservice

Eriksson, David January 2021 (has links)
Microservices are a way of decentralizing software services into smaller, isolated environments with contained, specific responsibilities. The traditional approach of monolithic applications introduces many problems regarding complexity as functionality scales. Microservices emerged as a way of dealing with these problems by separating services into modules independent of one another, promoting communication between components to fulfil the service requirements. This architectural style of software development separates concerns such as business logic, data models, and other domain-specific modules into their respective domains, where they are isolated from the rest of the system. Communication is key in the world of microservices, as modules rely on transferring information to the rest of the system rather than mutating and operating on global data bound to the entirety of the system. APIs (Application Programming Interfaces) expose data from individual software modules to other parts of the application, and this can be done in a multitude of ways. This thesis focuses on APIs following the REST (Representational State Transfer) architectural style as a means to exchange data between software modules. The project dives into the concept of microservices by developing a service through an iterative development process in order to incrementally implement the requirements of the service. The purpose of the microservice is to integrate third-party APIs into the existing service, Link Visualizer. Instead of directly implementing the required functionality from the external APIs into the core source code of Link Visualizer, a microservice was built to isolate these responsibilities, removing co-dependence between the core application and the integrated APIs.
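The sketch below shows one common shape for such an integration microservice: a small REST service that wraps a third-party API and exposes only the fields the core application needs. It assumes Flask and requests; the upstream URL, route and field names are hypothetical and are not taken from the Link Visualizer code base.

```python
# Hedged sketch of a proxy-style microservice wrapping a third-party REST API.
import os

import requests
from flask import Flask, jsonify

app = Flask(__name__)
UPSTREAM = os.environ.get("THIRD_PARTY_API", "https://api.example.com")  # hypothetical

@app.get("/links/<link_id>")
def get_link(link_id: str):
    """Fetch a resource from the third-party API and return a trimmed response."""
    resp = requests.get(f"{UPSTREAM}/v1/links/{link_id}", timeout=5)
    if resp.status_code != 200:
        return jsonify({"error": "upstream request failed"}), 502
    data = resp.json()
    # Expose only the fields the core application needs, keeping the
    # third-party schema isolated inside this service.
    return jsonify({"id": data.get("id"), "url": data.get("url")})

if __name__ == "__main__":
    app.run(port=8080)
```

Keeping the third-party schema behind this boundary is what removes the co-dependence mentioned above: if the external API changes, only this service needs updating.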
76

Leveraging Commercial and Open Source Software to Process and Visualize Advanced 3D Models on a Web-Based Software Platform

Saraf, Nikita Sandip January 2020 (has links)
No description available.
77

Performance evaluation of WireGuard in a Kubernetes cluster

Gunda, Pavan, Voleti, Sri Datta January 2021 (has links)
Containerization has gained popularity for deploying applications in a lightweight environment. Kubernetes and Docker have gained a lot of dominance for scalable deployments of applications in containers. Usually, Kubernetes clusters are deployed within a single shared network. For high availability of the application, multiple Kubernetes clusters are deployed in multiple regions, due to which the number of Kubernetes clusters keeps increasing over time. Maintaining and managing multiple Kubernetes clusters is a challenging and time-consuming process for system administrators or DevOps engineers. These issues can be addressed by deploying a Kubernetes cluster in a multi-region environment. A multi-region Kubernetes deployment reduces the hassle of handling multiple Kubernetes masters by having only one master with worker nodes spread across multiple regions. In this thesis, we investigated a multi-region Kubernetes cluster's network performance by deploying a multi-region Kubernetes cluster with worker nodes across multiple OpenStack regions, tunneled using WireGuard (a VPN protocol). A literature review on the common factors that influence network performance in a multi-region deployment was conducted to select the network performance metrics. Then, we compared the request-response time of this multi-region Kubernetes cluster with that of a regular Kubernetes cluster to evaluate the performance of the deployed multi-region Kubernetes cluster. The results obtained show that a Kubernetes cluster with worker nodes in a single shared network has an average request-response time of 2 ms. In contrast, the Kubernetes cluster with worker nodes in different OpenStack projects and regions has an average request-response time of 14.804 ms. This thesis aims to provide a performance comparison of the Kubernetes cluster with and without WireGuard, the factors affecting the performance, and an in-depth understanding of concepts related to Kubernetes and WireGuard.
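A minimal sketch of the kind of request-response measurement reported above is shown below: repeatedly time an HTTP request against a service running in the cluster and report the mean latency. The endpoint URL and sample count are hypothetical test parameters, not the thesis's actual setup.

```python
# Hedged sketch: measure mean request-response time against a cluster service.
import statistics
import time

import requests

SERVICE_URL = "http://10.0.0.42:8080/ping"  # hypothetical ClusterIP/NodePort endpoint
SAMPLES = 100

latencies_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    requests.get(SERVICE_URL, timeout=5)
    latencies_ms.append((time.perf_counter() - start) * 1000)

print(f"mean request-response time: {statistics.mean(latencies_ms):.3f} ms")
print(f"p95: {statistics.quantiles(latencies_ms, n=20)[18]:.3f} ms")
```

Running the same probe once inside the single-network cluster and once inside the WireGuard-tunneled multi-region cluster gives the kind of comparison (2 ms versus 14.804 ms) that the abstract describes.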
78

Security in Rootless Containers : Measuring the Attack Surface of Containers

Engström Ericsson, Matilda January 2022 (has links)
Rootless containers are commonly perceived as more secure, as they run without added privileges. To the best of my knowledge, this hypothesis has never been proven. This thesis aims to contribute to addressing knowledge gaps in research by measuring the attack surface of Rootless Podman, Rootless Docker, and, for comparison, Rootful Docker. Furthermore, different rootless container engines are analysed in a prestudy to summarise what options currently exist on the market. The attack surface is systematically measured using the Attack Surface Measurement Method. The method identifies resources and groups them into different attack classes based on each resource's attackability. The authors of the method define attackability as the likelihood of a successful attack. Finally, the total attackability of the container engines is computed. The study concludes that the attack surface is significantly reduced when a local container image is used instead of downloading one. In addition, the design choice of the container engine influences the attack surface more than whether the container is rootless or rootful.
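As a rough illustration of summing attackability over attack classes in the spirit of the measurement described above, the sketch below multiplies the number of resources in each class by a per-class attackability weight and totals the result. The class names, counts and weights are hypothetical placeholders, not figures or definitions from the thesis or the original method.

```python
# Hedged sketch: total attackability as a weighted sum over attack classes.
from dataclasses import dataclass

@dataclass
class AttackClass:
    name: str
    resource_count: int          # resources grouped into this class
    attackability_weight: float  # per-class likelihood weight (hypothetical)

def total_attackability(classes: list[AttackClass]) -> float:
    """Sum of (resource count x class weight) across all attack classes."""
    return sum(c.resource_count * c.attackability_weight for c in classes)

engine = [
    AttackClass("entry/exit points (methods)", 120, 1.0),
    AttackClass("open channels (sockets)", 8, 0.8),
    AttackClass("untrusted data items (config files)", 15, 0.5),
]
print(f"attackability estimate: {total_attackability(engine):.1f}")
```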
79

Evaluation and Improvement of Application Deployment in Hybrid Edge Cloud Environment : Using OpenStack, Kubernetes, and Spinnaker

Jendi, Khaled January 2020 (has links)
Traditional mechanisms for deploying different applications can be costly in terms of time and resources, especially when the application requires a specific environment to run in and has many kinds of dependencies; setting up such an application would require an expert to identify all of the required dependencies. In addition, it is difficult to deploy applications with efficient usage of the resources available in the distributed environment of the cloud, and deploying different projects on the same resources is a challenge. To address this problem, we evaluated different deployment mechanisms using heterogeneous infrastructure-as-a-service (IaaS) platforms, namely OpenStack and Microsoft Azure. We also used the platform-as-a-service (PaaS) Kubernetes. Finally, to automate and integrate deployments, we used Spinnaker as the continuous delivery framework. The goal of this thesis work is to evaluate and improve different deployment mechanisms in terms of edge cloud performance. Performance depends on achieving efficient usage of cloud resources, reducing latency, scalability, replication and rolling upgrades, load balancing between data nodes, high availability, and measuring zero downtime for deployed applications. These problems are addressed primarily by designing and deploying an infrastructure and platform in which Kubernetes (PaaS) is deployed on top of OpenStack (IaaS). In addition, the usage of Docker containers rather than regular virtual machines (container orchestration) has a large impact. The conclusion of the report demonstrates and discusses the results along with various test cases regarding the use of the different deployment methods, and presents the deployment process. It also includes suggestions for developing more reliable and secure deployments in the future with a heterogeneous container orchestration infrastructure.
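One of the performance aspects listed above, zero downtime during a rolling upgrade, can be checked with a simple availability probe like the sketch below: poll the application endpoint while the upgrade runs and count failed responses. The URL, interval and duration are hypothetical test parameters, not the thesis's actual test setup.

```python
# Hedged sketch: count failed health probes during a rolling-upgrade window.
import time

import requests

URL = "http://app.example.local/healthz"  # hypothetical service endpoint
DURATION_S = 120                          # poll while the rolling upgrade runs
INTERVAL_S = 0.5

failures = 0
total = 0
end = time.monotonic() + DURATION_S
while time.monotonic() < end:
    total += 1
    try:
        if requests.get(URL, timeout=2).status_code != 200:
            failures += 1
    except requests.RequestException:
        failures += 1
    time.sleep(INTERVAL_S)

print(f"{failures}/{total} failed probes during the upgrade window")
```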
80

Tracing Control with Linux Tracing Toolkit, next generation in a Containerized Environment

Ravi, Vikhram January 2021 (has links)
5G is becoming reality, with companies rolling out the technology around the world. In 5G, the Radio Access Network (RAN) is moving from a monolithic architecture to a cloud-based microservice architecture in order to simplify deployment and manageability and to explore scalability and flexibility. Thus, the transition of functionality from a proprietary hardware-based system to a more distributed and flexible virtualized system is ongoing. In such systems, legacy methods of performance monitoring remain relevant, and system tracing plays an important role. System tracing is important for the purpose of performance analysis of any given system. However, current tools were designed with monolithic architectures in mind; therefore, new tracing tools need to be developed for the new distributed architectures. System tracing often requires special permissions to be executed in applications running in a virtualized third-party environment. Unfortunately, not all applications running in a distributed virtualized environment can be given such special access, at the risk of compromising the security and stability of the system. However, tracing data also needs to be collected from applications running in such environments. This thesis addresses the challenge of remotely configuring and controlling a system tracing tool, using LTTng as the example, in applications that run as part of a distributed virtualized environment with Kubernetes. We explore the problem of remotely controlling and configuring system tracing as well as optimizing data collection. The main outcome is a tool able to remotely control and configure system tracing tools. In addition, a proof of concept is presented with working demos for basic system tracing commands. It was discovered that a relay-based solution can be exposed outside the cluster via a node port, which can relay incoming requests onwards to any number of microservices. However, discovery of the microservices that are running system tracing tools is critical. Service discovery mechanisms were therefore introduced to the system for the purpose of discovering microservices with system tracing tools. Tracing data that is saved locally can be extracted by the user through the relay-based solution or sent directly to any remote system using the LTTng relay daemon functionality. A comparison was made between directly executing commands in a bash shell and using the remote CLI. It was concluded that, overall, the response time of both Linux and LTTng commands sent through the remote CLI is 1.96 times longer than directly executing the commands in a bash shell. This was attributed to the fact that commands are sent over the network within the Kubernetes cluster, which is the cost of being able to remotely control and configure system tracing tools. This being said, there are still many steps that can be taken to improve the solution and to develop a more production-ready solution.
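The sketch below illustrates the relay idea in its simplest form: an HTTP endpoint that forwards a small allow-list of LTTng subcommands to the local lttng CLI via subprocess. It is not the thesis's actual tool; the route, allow-list and session-name handling are hypothetical assumptions for illustration.

```python
# Hedged sketch: relay-style endpoint forwarding basic lttng commands to the local CLI.
import subprocess

from flask import Flask, jsonify, request

app = Flask(__name__)
ALLOWED = {"create", "start", "stop", "destroy", "list"}  # basic lttng subcommands

@app.post("/lttng")
def relay_lttng():
    payload = request.get_json(force=True)
    subcommand = payload.get("subcommand", "")
    session = payload.get("session", "")
    # Reject anything outside the allow-list or with a suspicious session name.
    if subcommand not in ALLOWED or (session and not session.replace("-", "").isalnum()):
        return jsonify({"error": "rejected"}), 400
    cmd = ["lttng", subcommand] + ([session] if session else [])
    # No shell is involved; arguments are passed directly to the lttng binary.
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
    return jsonify({"returncode": result.returncode,
                    "stdout": result.stdout,
                    "stderr": result.stderr})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=9000)
```

A client outside the cluster could then POST, for example, {"subcommand": "start", "session": "demo"} to the node port that exposes such a relay; the extra network hop is the kind of overhead behind the 1.96x figure reported above.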
