11

Portabilita distribuovaných výpočtů v rámci cloudových infrastruktur / Portability of Distributed Computing in Cloud Infrastructures

Duong, Cuong Tuan January 2019 (has links)
The master's thesis focuses on the analysis of a solution for distributed computing over metagenomics data in cloud infrastructures. It describes the META-pipe platform, based on a client-server architecture and running in the infrastructure of the public academic cloud EGI Federated Cloud, sponsored by the European project ELIXIR-EXCELERATE. The thesis focuses in particular on open-source software such as Terraform and Ansible.
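As a rough illustration of the workflow described above (provisioning cloud resources with Terraform, then configuring them with Ansible), the following hedged Python sketch simply drives both command-line tools. The directory and file names (infra/, inventory.ini, site.yml) are placeholders, not artifacts of the thesis.

```python
# Hypothetical sketch: provision infrastructure with Terraform, then configure
# the resulting hosts with Ansible, mirroring the provision-then-configure
# workflow the abstract mentions. Paths and file names are illustrative.
import subprocess

def run(cmd, cwd=None):
    """Run a CLI command and raise if it exits with a non-zero status."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

def provision_and_configure(tf_dir="infra", inventory="inventory.ini", playbook="site.yml"):
    # 1) Create the virtual machines in the target cloud.
    run(["terraform", "init"], cwd=tf_dir)
    run(["terraform", "apply", "-auto-approve"], cwd=tf_dir)
    # 2) Install and configure the pipeline software on those machines.
    run(["ansible-playbook", "-i", inventory, playbook])

if __name__ == "__main__":
    provision_and_configure()
```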
12

Efficient parallel installation of software collections on a PaaS

Boraie, Alexander January 2021 (has links)
This master's thesis analyses and investigates how to speed up the deployment of a suite of services to a Platform as a Service. The project uses IBM's Cloud Pak for Applications together with Red Hat's OpenShift to provide insights into the factors that influence the deployment process. In this thesis, the installer was modified so that the deployment instructions were sent in parallel instead of sequentially. Beyond the parallel approach, the thesis also investigates different options for constraining the CPU and what the consequences are. At the end of the report, the reader will also see how deployment times are affected by cluster scaling. An implementation of the parallel deployment showed that the installation time of Cloud Pak for Applications could be decreased. It was also shown that the CPU was not fully utilized and that significant CPU saturation occurs during the deployment. The evaluation of the scaling analysis showed that, for the workload considered in this thesis, it is more beneficial both time-wise and cost-wise to scale horizontally rather than vertically.
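A minimal sketch of the parallelization idea, not the thesis's actual installer: independent deployment steps are submitted to a thread pool instead of being executed one after another. The component names and the deploy_component stub are invented for illustration.

```python
# Illustrative sketch: sending independent deployment instructions in parallel
# instead of sequentially, using a thread pool. The components and the
# one-second stand-in for a long-running installation step are hypothetical.
import time
from concurrent.futures import ThreadPoolExecutor

COMPONENTS = ["pipelines", "serverless", "service-mesh", "app-navigator"]

def deploy_component(name: str) -> str:
    time.sleep(1.0)  # stand-in for a long-running install/apply step
    return f"{name}: deployed"

def deploy_sequential():
    return [deploy_component(c) for c in COMPONENTS]

def deploy_parallel(max_workers: int = 4):
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(deploy_component, COMPONENTS))

if __name__ == "__main__":
    t0 = time.time(); deploy_sequential(); print("sequential:", round(time.time() - t0, 1), "s")
    t0 = time.time(); deploy_parallel();   print("parallel:  ", round(time.time() - t0, 1), "s")
```

With four worker threads, the four simulated one-second steps finish in roughly one second instead of four, which is the effect the thesis measures on real installations.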
13

Security smells in open-source infrastructure as code scripts : A replication study

Hortlund, Andreas January 2021 (has links)
With the rising number of servers used in production, engineers needed a new tool to help them manage the growing configuration workload. Infrastructure as code (IaC) is a term covering techniques and tools for defining the desired configuration state of servers in machine-readable code files, aimed at reducing the high workload induced by configuring many servers. With new tools come new challenges regarding the security of the IaC scripts that take over this configuration work. This study examines how open-source developers perform when creating IaC scripts, measured by how many security smells they insert into their scripts in comparison to previous studies, and how developers can mitigate these risks. Security smells are code patterns that indicate a vulnerability and can lead to exploitation. Using data gathered from GitHub with a web-scraper tool created for this study, the author analyzed 400 repositories for Ansible and Puppet with a second tool, the Security Linter for Infrastructure as Code, created, tested and validated in a previous study. It performs static code analysis on these repositories and checks them against a ruleset for weaknesses such as default admin accounts and hard-coded passwords, among others. The present study used both qualitative and quantitative methods to analyze the data. The results show that developers who actively participated in repositories with a creation date of 2019-01-01 at the latest produced fewer security smells than reported by Rahman et al. (2019b, 2020c), whose data source extends to November 2018: Ansible repositories produced 9.2 compared to 28.8 security smells per thousand lines of code, and Puppet 13.6 compared to 31.1. The main limitation of the study is that it looks only at the most popular tools at the time of writing, Ansible and Puppet. The smells reported in both studies can be further mitigated through training and education, as well as by using tools such as SonarQube for static code analysis against custom rulesets before scripts are pushed to public repositories.
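To make the notion of a security smell concrete, here is a hedged Python sketch of a regex-based scan over IaC files; the two rules and the smells-per-KLOC metric are simplified stand-ins for the study's actual ruleset and linter.

```python
# Minimal sketch of the kind of static check a security-smell linter performs
# on IaC scripts. The two rules below (hard-coded password, default admin
# user) are simplified stand-ins, not the study's real ruleset.
import re
from pathlib import Path

RULES = {
    "hard-coded password": re.compile(r"password\s*[:=]\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    "default admin user":  re.compile(r"\buser(name)?\s*[:=]\s*['\"]?admin['\"]?", re.IGNORECASE),
}

def scan_file(path: Path):
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
        for smell, pattern in RULES.items():
            if pattern.search(line):
                findings.append((path.name, lineno, smell))
    return findings

def smells_per_kloc(paths):
    """Security smells per thousand lines of code, mirroring the study's metric."""
    total_lines = sum(len(Path(p).read_text(errors="ignore").splitlines()) for p in paths)
    total_smells = sum(len(scan_file(Path(p))) for p in paths)
    return 1000 * total_smells / max(total_lines, 1)

if __name__ == "__main__":
    repo_files = list(Path(".").rglob("*.yml")) + list(Path(".").rglob("*.pp"))
    print("smells/KLOC:", round(smells_per_kloc(repo_files), 1))
```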
14

Analyse und Vergleich des Quellcode‐basierten Ressourcenmanagements und des automatischen Deployments von Webapplikationen auf Cloud‐Plattformen: Am Beispiel von Microsoft Azure und der Open Telekom Cloud / Analysis and Comparison of Source-Code-Based Resource Management and Automatic Deployment of Web Applications on Cloud Platforms: Using the Example of Microsoft Azure and the Open Telekom Cloud

Prumbach, Peter 17 April 2023 (has links)
This thesis describes different ways of deploying web applications with a cloud-agnostic approach. A cloud-agnostic approach aims at independence from a specific cloud service provider (CSP) and its technologies. To make this possible, various tools are compared with regard to, among other things, their supported languages and technologies, their modularity, their state and secret management, their popularity and their community support. The introduction covers the theoretical foundations, the explanation and advantages of the Infrastructure-as-Code (IaC) concept, the basics of imperative and declarative programming, and the distinction between domain-specific languages and general-purpose languages. The following chapters compare, for the examples treated in this thesis, Microsoft Azure (Azure) and the Open Telekom Cloud (OTC), the different ways of deploying web applications on these platforms. This approach is then implemented as a prototype for an existing web application, automated with a selected framework. Before the implementation, the best-known frameworks are compared against this problem statement and the most suitable one is selected. The thesis closes with a summary presenting the knowledge and experience gained in provisioning infrastructure for web applications with IaC in a cloud-agnostic setting. Contents: List of abbreviations; List of figures; List of tables; List of code listings; 1 Introduction (1.1 Problem statement, 1.2 Objectives, 1.3 Structure of the thesis); 2 Theoretical foundations (2.1 The infrastructure provisioning process, 2.2 Introduction to Infrastructure-as-Code, 2.3 Declarative and imperative approaches, 2.4 Domain-specific languages, 2.5 Level of abstraction, 2.6 State of the art); 3 Analysis (3.1 Methodology and implementation, 3.2 Criteria for comparing IaC tools, 3.3 Criteria for a functional comparison of orchestration tools, 3.4 Detailed comparison of Terraform and Pulumi, 3.5 Docker for deploying web applications); 4 Concept (4.1 Modelling the abstraction, 4.2 Examination of the concepts, 4.3 Evaluation of the concepts); 5 Prototype (5.1 Prerequisites, 5.2 Implementation of the prototype, 5.3 Analysis of the prototype); 6 Conclusion (6.1 Outlook, 6.2 Summary); Bibliography; Glossary
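A conceptual Python sketch of the cloud-agnostic idea under discussion, assuming a hypothetical provider-adapter interface; the AzureAdapter and OpenTelekomCloudAdapter classes are stubs, not real Azure or Open Telekom Cloud SDK calls.

```python
# Conceptual sketch: the application describes *what* it needs, and
# interchangeable provider adapters decide *how* to create it. Both adapters
# are hypothetical stubs rather than real cloud SDK calls.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class WebAppSpec:
    name: str
    docker_image: str
    region: str

class CloudProvider(Protocol):
    def deploy_web_app(self, spec: WebAppSpec) -> str: ...

class AzureAdapter:
    def deploy_web_app(self, spec: WebAppSpec) -> str:
        return f"[azure] created web app '{spec.name}' in {spec.region}"

class OpenTelekomCloudAdapter:
    def deploy_web_app(self, spec: WebAppSpec) -> str:
        return f"[otc] created container workload '{spec.name}' in {spec.region}"

def deploy(provider: CloudProvider, spec: WebAppSpec) -> str:
    # The calling code does not change when the provider is swapped.
    return provider.deploy_web_app(spec)

if __name__ == "__main__":
    spec = WebAppSpec(name="demo-app", docker_image="ghcr.io/example/demo:latest", region="eu-de")
    print(deploy(AzureAdapter(), spec))
    print(deploy(OpenTelekomCloudAdapter(), spec))
```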
15

System Upgrade Verification : An automated test case study / Verifikation av Systemuppdatering : En fallstudie om automatiserad testning

Rotting Tjädermo, Viktor, Tanskanen, Alex January 2019 (has links)
We live in a society where automation is becoming more common, whether in cars or artificial intelligence. Software needs to be updated using patches; however, these patches can break components. This study takes such a patch in the context of Ericsson, identifies what needs to be tested, investigates whether the tests can be automated, and assesses how maintainable they are. Interviews were used to identify the system and software parts in need of testing. Tests were then implemented in an automated test suite to verify the functionality of either a system or its software. The goal was to reduce the time employees spend troubleshooting, without interrupting sessions for users, and to set up a working test suite. Once the automated tests were completed and implemented in the test suite, the study concluded by measuring the maintainability of the scripts using both metrics and human assessment through interviews. The results showed that the test suite is maintainable, both from the metric point of view and from human assessment.
16

Разработка инфраструктуры и серверного приложения для проекта «Мониторинг IT-конференций» : магистерская диссертация / Development of infrastructure and server application for the project "Monitoring IT conferences"

Сухарев, Н. В., Sukharev, N. V. January 2021 (has links)
The purpose of the work is to develop the server side of the application and the infrastructure components for the project "Monitoring IT conferences". Research methods: analysis, comparison, systematization and generalization of data on existing and newly developed infrastructure components, and validation of modern approaches to building the infrastructure architecture. As a result of the work, two virtual machines were configured to run Kubernetes and GitLab Runner; persistent storage components for PostgreSQL, RabbitMQ and S3 storage based on Rook Ceph were set up; a Django-based application was created to provide an API to the client application; and a GitLab CI configuration was written that builds the application image and deploys it to Kubernetes. The application provides content-management functionality for service administrators (uploading videos to the S3 storage, labelling them with a tag system, linking conferences to speakers) and an HTTP API for the client application with registration, authentication via JWT tokens, hierarchical search over the tag system, and issuing of signed links to the S3 storage for watching videos.
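One piece of the described API, sketched here under the assumption of an S3-compatible Rook Ceph endpoint: issuing a time-limited signed link to a stored video with boto3. The endpoint, credentials, bucket and object key are placeholders.

```python
# Hedged sketch: return a time-limited, signed link to an object in an
# S3-compatible store so the client application can stream a video without
# public bucket access. Endpoint, credentials and names are placeholders.
import boto3

def signed_video_url(key: str, expires_seconds: int = 3600) -> str:
    s3 = boto3.client(
        "s3",
        endpoint_url="http://rook-ceph-rgw.example.internal",  # placeholder endpoint
        aws_access_key_id="ACCESS_KEY",                        # injected via secrets in practice
        aws_secret_access_key="SECRET_KEY",
    )
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "conference-videos", "Key": key},
        ExpiresIn=expires_seconds,
    )

if __name__ == "__main__":
    print(signed_video_url("example-conference/keynote.mp4"))
```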
17

Návrh laboratorních úloh v oblasti programovatelnosti sítí / Design of laboratory exercises in the field of network programmability

Dubovyi, Dmytro January 2020 (has links)
The aim of this thesis is to survey current developments in the field of SDN and the possibilities for programming SDN elements through application programming interfaces. The first, theoretical chapter describes the basic SDN architecture, the traffic within an SDN between its individual layers, and the southbound and northbound communication interfaces. The second chapter deals with the programmability of SDN elements with the help of APIs. The third theoretical chapter describes current developments in the field of SDN. The practical part of the thesis is devoted to creating two laboratory exercises on programming SDN APIs. The exercises cover programming a BIG-IP device from F5 Networks and routers from Arista Networks. Programming is done in Python, via the REST API for BIG-IP and eAPI for Arista EOS; the Ansible configuration management tool is also used for the same purpose.
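A hedged example of the style of API programming these exercises cover: querying a BIG-IP device's REST API from Python with the requests library. The endpoint path and credentials shown are illustrative and should be checked against the device documentation.

```python
# Illustrative sketch: list virtual servers on a BIG-IP device over its REST
# API. The path and credentials are examples; verify them against the
# device's own documentation before use.
import requests

def list_virtual_servers(host: str, user: str, password: str):
    resp = requests.get(
        f"https://{host}/mgmt/tm/ltm/virtual",
        auth=(user, password),
        verify=False,   # lab devices often use self-signed certificates
        timeout=10,
    )
    resp.raise_for_status()
    return [item["name"] for item in resp.json().get("items", [])]

if __name__ == "__main__":
    print(list_virtual_servers("bigip.lab.local", "admin", "admin"))
```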
18

A mapping approach for configuration management tools to close the gap between two worlds and to regain trust: Or how to convert from docker to legacy tools (and vice versa)

Meissner, Roy, Kastner, Marcus 30 October 2018 (has links)
In this paper we present 'DockerConverter', an approach and software tool that maps a Docker configuration to various mature systems and can also reverse engineer any available Docker image in order to increase the confidence (or trust) placed in it. We show why a mapping approach is more promising than constructing a domain-specific language and why we chose a Docker image, rather than the Dockerfile, as the source model. Our overall goal is to enable Semantic Web research projects, and especially Linked Data enterprise services, to be better integrated into enterprise applications and companies.
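A rough Python sketch of the mapping idea, not the actual DockerConverter: it reads the configuration baked into a Docker image via "docker image inspect" and translates it into a tool-neutral task list whose format is invented for illustration.

```python
# Rough sketch of the mapping idea: read an image's baked-in configuration
# and translate it into a neutral task list that a legacy configuration tool
# could consume. The output vocabulary is invented for illustration.
import json
import subprocess

def inspect_image(image: str) -> dict:
    # 'docker image inspect' prints a JSON array with one object per image.
    out = subprocess.run(["docker", "image", "inspect", image],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)[0]

def to_generic_tasks(image: str):
    cfg = inspect_image(image).get("Config", {})
    tasks = [{"task": "set_env", "name": k, "value": v}
             for k, v in (e.split("=", 1) for e in (cfg.get("Env") or []))]
    if cfg.get("ExposedPorts"):
        tasks.append({"task": "open_ports", "ports": sorted(cfg["ExposedPorts"])})
    if cfg.get("Cmd"):
        tasks.append({"task": "run_service", "command": cfg["Cmd"]})
    return tasks

if __name__ == "__main__":
    print(json.dumps(to_generic_tasks("nginx:alpine"), indent=2))
```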
19

Podpora průběžné integrace v rámci systému Copr / Continuous Integration Support for the Copr Build System

Klusoň, Martin January 2018 (has links)
This thesis deals with the implementation of continuous integration for the Copr build system. The implementation uses the Citool framework and its modules, which are already used for continuous integration of the Koji build system. The resulting system can take a new package from the Copr build system, run its tests, and test it on a virtual machine. The thesis thus shows a way to implement continuous integration for the Copr build system.
20

Systém pro automatické filtrování testů / System for Automatic Filtering of Tests

Lysoněk, Milan January 2020 (has links)
The aim of this thesis is to create a system that can automatically determine the set of tests to run when a change is made to the ComplianceAsCode project. The proposed method selects the set of tests based on static analysis of the changed source files, taking the internal structure of ComplianceAsCode into account. The system is divided into four parts: obtaining the changes from the version control system, static analysis of the various file types, determining which files are affected by the changes, and computing the set of tests that must be run for a given change. We implemented analysis for several different file types, and the system is designed to be easily extensible with analyses for further file types. The implementation is deployed on a server, where it automatically analyzes new contributions to the ComplianceAsCode project. These automated runs inform contributors and developers about the detected changes and recommend which tests should be run for a given change, saving time spent checking the correctness of contributions and time spent running tests.
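A simplified Python sketch of the test-selection idea, assuming hypothetical path-to-test mapping rules: the files changed in a commit range are matched against path patterns to compute the set of tests to run.

```python
# Simplified sketch: map the files changed in a commit range to the test
# scenarios that need to run. The mapping rules are illustrative, not the
# project's real ones.
import fnmatch
import subprocess

RULES = {
    "ansible/*":          {"ansible-playbook-tests"},
    "*/oval/*.xml":       {"oval-scan-tests"},
    "shared/templates/*": {"template-tests", "build-tests"},
}

def changed_files(base: str = "origin/master", head: str = "HEAD"):
    out = subprocess.run(["git", "diff", "--name-only", f"{base}...{head}"],
                         capture_output=True, text=True, check=True).stdout
    return [line for line in out.splitlines() if line]

def tests_to_run(files):
    selected = set()
    for path in files:
        for pattern, tests in RULES.items():
            if fnmatch.fnmatch(path, pattern):
                selected |= tests
    return sorted(selected)

if __name__ == "__main__":
    print(tests_to_run(changed_files()))
```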
