1 |
A tool for generating a random configuration of a cyber arena. Matisko, Maroš. January 2020.
The master's thesis focuses on the design and implementation of a tool for generating random configurations for the automation tool Ansible. The result of using the tool is a generated configuration containing random values chosen according to specified parameters, which was deployed on a virtual testing infrastructure. The theoretical part describes approaches to network automation in the process of deploying and configuring network devices, known as Infrastructure as Code. It also describes the program Ansible, which consumes the output of the implemented tool. The practical part of the thesis focuses on designing the functionality and internal structure of the tool, implementing it, and testing both the implemented tool and the generated configuration.
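The abstract does not reproduce the tool itself, but the core idea (drawing random values within specified parameter ranges and emitting them in a format Ansible can consume) can be illustrated with a minimal Python sketch; the parameter names and ranges below are hypothetical, not taken from the thesis:

import json
import random
import secrets

# Hypothetical parameter ranges; the real tool's parameters are not
# described in detail in the abstract.
PARAMS = {
    "vlan_id": (100, 4094),
    "mtu": (1280, 9000),
}

def generate_host_vars(params):
    """Pick a random value inside each configured range and add a
    randomly generated password."""
    host_vars = {name: random.randint(lo, hi) for name, (lo, hi) in params.items()}
    host_vars["admin_password"] = secrets.token_urlsafe(16)
    return host_vars

if __name__ == "__main__":
    # Ansible can load variable files written as JSON as well as YAML,
    # which keeps this sketch free of third-party dependencies.
    with open("host_vars.json", "w") as fh:
        json.dump(generate_host_vars(PARAMS), fh, indent=2)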
|
2 |
Automating and increasing efficiency of component documentation maintenance: A case study. Wilsson, Lowe. January 2022.
Context. Maintaining good software documentation quality is an important aspect of software maintenance. To decrease associated costs, interest in automating documentation tasks has recently increased. However, there is a lack of guidance regarding how practitioners may apply related research findings to pre-existing documentation. Objectives. In this case study we investigate which documentation properties matter most for the quality of manually produced component documentation and how these can be linked to issues in real-world documentation. We also examine to what extent they can be tackled with current tools and approaches. Methods. A literature study is employed to identify previously reported documentation properties and related automation strategies. These are then mapped against issues identified in the documentation of a telecommunications company department. Finally, strategies for maintaining documentation quality are proposed and their applicability is retrospectively evaluated. Results. Current heavily automated strategies are found to be largely ineffective, both for the analyzed documentation issues specifically and for improving documentation content not tightly linked to source code in general. Manually produced automated tests and version control, along with raw documentation formatting that enables them, are identified as more promising approaches for improving documentation maintenance. Four suggestions are made on how the department's documentation issues could be tackled, with particular attention given to making code samples testable. Conclusions and Future Work. Manual documentation is still often needed, and writing it in a way that enables automation where possible will become increasingly important. Containerization technology is widely used, but there is a dearth of corresponding testing knowledge and tooling. Problems with code samples in documentation commonly cause critical issues. Improving methods for maintaining code-related sections of documentation, especially containerization-related code samples, is an urgent challenge.
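The thesis's concrete suggestions are not reproduced in the abstract, but one common way to make documentation code samples testable in Python projects is the standard library's doctest module, sketched below; the function is a made-up example, not taken from the studied documentation:

def mean(values):
    """Return the arithmetic mean of a non-empty sequence.

    The example below is executable documentation: running
    `python -m doctest thisfile.py` fails if the sample and the
    code ever drift apart.

    >>> mean([1, 2, 3])
    2.0
    """
    return sum(values) / len(values)

if __name__ == "__main__":
    import doctest
    doctest.testmod()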
|
3 |
Framework to set up a generic environment for applications. Das, Ruben. January 2021.
Infrastructure is a common word used to express the basic equipment and structures that are needed, e.g. for a country or organisation to function properly. The same concept applies in the field of computer science: without infrastructure one would have problems operating software at scale. Provisioning and maintaining infrastructure through manual labour is a common occurrence in the "iron age" of IT. As the world progresses towards the "cloud age" of IT, systems are decoupled from physical hardware, enabling anyone who is software savvy to automate the provisioning and maintenance of infrastructure. This study aims to determine how a generic environment can be created for applications that run on Unix platforms, and how that underlying infrastructure can be provisioned effectively. The results show that by utilising OS-level virtualisation, also known as "containers", one can deploy and serve any application that can run on the Linux kernel. To further support realising the generic environment, hardware virtualisation was applied to provide the infrastructure needed to run containers. This was done by provisioning a set of virtual machines on different cloud providers with a lightweight operating system that could support the required container runtime. To manage these containers at scale, a container orchestration tool was installed onto the cluster of virtual machines. To provision the environment effectively, the principles of infrastructure as code (IaC) were used to create a "blueprint" of the desired infrastructure. Using the metric mean time to environment (MTTE), it was noted that a cluster of virtual machines with a container orchestration tool installed onto it could be provisioned in under 10 minutes on four different cloud providers.
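The abstract does not show how MTTE was computed. Assuming it is simply the average wall-clock time of repeated provisioning runs, the measurement could be scripted along the lines of the Python sketch below; the Terraform command is only a stand-in, since the abstract does not name the IaC tool used:

import statistics
import subprocess
import time

# Hypothetical provisioning command; the thesis used an IaC "blueprint"
# against four cloud providers, whose exact invocations are not given here.
PROVISION_CMD = ["terraform", "apply", "-auto-approve"]

def time_provisioning(cmd, runs=3):
    """Run the provisioning command several times and return the mean
    time to environment (MTTE) in seconds."""
    durations = []
    for _ in range(runs):
        start = time.monotonic()
        subprocess.run(cmd, check=True)  # tear-down between runs omitted
        durations.append(time.monotonic() - start)
    return statistics.mean(durations)

if __name__ == "__main__":
    print(f"MTTE: {time_provisioning(PROVISION_CMD) / 60:.1f} minutes")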
|
4 |
Security smells in open-source infrastructure as code scripts: A replication study. Hortlund, Andreas. January 2021.
With the rising number of servers used in production, engineers needed a new tool to help them manage the growing configuration workload. Infrastructure as Code (IaC) is a term covering techniques and tools for defining desired configuration states of servers in machine-readable code files, and it aims to reduce the high workload induced by configuring many servers. With new tools come new challenges regarding the security of the IaC scripts that take over this work. This study examines how open-source developers perform when creating IaC scripts, measured by how many security smells they insert into their scripts compared to previous studies, and how developers can mitigate these risks. Security smells are code patterns that indicate a vulnerability and can lead to exploitation. Using data gathered from GitHub with a web scraper created for this study, the author analyzed 400 Ansible and Puppet repositories with a second tool created, tested, and validated in a previous study. This Security Linter for Infrastructure as Code applies static code analysis to the repositories, testing them against a ruleset of code weaknesses such as default admin accounts and hard-coded passwords. The present study used both qualitative and quantitative methods to analyze the data. The results show that developers who actively participated in repositories created on or after 2019-01-01 produced fewer security smells than reported by Rahman et al. (2019b, 2020c), whose data reached up to November 2018: Ansible scripts contained 9.2 security smells per thousand lines of code compared to 28.8, and Puppet scripts 13.6 compared to 31.1. The main limitation of the study is that it only considers the most popular tools at the time of writing, Ansible and Puppet. The risks identified in both studies can be further mitigated through training and education, as well as through static code analysis tools such as SonarQube run against custom rulesets before scripts are pushed to public repositories.
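The actual linter comes from the earlier study and is not shown here. As a rough illustration of the rule-based idea behind security-smell detection (not the published ruleset), a short Python sketch with simplified regular expressions:

import re
import sys

# Simplified patterns inspired by two of the published smell categories
# (hard-coded password, default admin); the real ruleset is richer.
SMELL_RULES = {
    "hard-coded password": re.compile(r"password\s*[:=]\s*['\"]?\S+", re.I),
    "default admin": re.compile(r"\buser(name)?\s*[:=]\s*['\"]?admin\b", re.I),
}

def scan(path):
    """Report (line number, smell name) pairs for one IaC script."""
    findings = []
    with open(path, encoding="utf-8") as fh:
        for lineno, line in enumerate(fh, start=1):
            for name, rule in SMELL_RULES.items():
                if rule.search(line):
                    findings.append((lineno, name))
    return findings

if __name__ == "__main__":
    for lineno, name in scan(sys.argv[1]):
        print(f"line {lineno}: {name}")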
|
5 |
Software Development Environment: the DevOps perspective. Christensen, Olga. January 2020.
DevOps is a collaborative effort based on a set of practices and cultural values developed to improve software quality and shorten the release cycle while operating at the lowest possible cost. To enable DevOps principles such as automation, collaboration, and knowledge sharing, Continuous Delivery and Deployment as well as Infrastructure as Code are employed heavily throughout the software development process. One of the main building blocks of this process is a development environment, where the application code is developed and tested before it is released into production. With the help of a systematic literature review, this research investigates the DevOps perspective on provisioning and deploying a development environment in a controlled, automated way, specifically the benefits and challenges of this process.
|
6 |
Analysis and comparison of source-code-based resource management and automated deployment of web applications on cloud platforms: the example of Microsoft Azure and the Open Telekom Cloud. Prumbach, Peter. 17 April 2023.
This thesis explains different ways of deploying web applications with a cloud-agnostic approach. A cloud-agnostic approach aims at independence from any particular cloud service provider (CSP) and its technologies. To make this possible, various tools are compared with respect to, among other criteria, the languages and technologies they support, their modularity, their state and secret management, their popularity, and their community support. The introduction covers the theoretical foundations, an explanation of the concept of Infrastructure as Code (IaC) and its benefits, the basics of imperative and declarative programming, and the distinction between domain-specific languages and general-purpose languages. The following chapters compare the different options for deploying web applications on the two platforms treated in this thesis, Microsoft Azure (Azure) and the Open Telekom Cloud (OTC). This approach is then implemented as a prototype, automating the deployment of an existing web application with a selected framework. Before the implementation, the best-known frameworks are compared against this problem statement and the most suitable one is chosen. The thesis closes with a summary of the knowledge and experience gained in provisioning infrastructure for web applications with IaC in a cloud-agnostic setting.
Table of contents:
List of Abbreviations
List of Figures
List of Tables
List of Code Listings
1 Introduction
1.1 Problem Statement
1.2 Objectives
1.3 Structure of the Thesis
2 Theoretical Foundations
2.1 The Infrastructure Provisioning Process
2.2 Introduction to Infrastructure as Code
2.3 Declarative and Imperative Approaches
2.4 Domain-Specific Languages
2.5 Level of Abstraction
2.6 State of the Art
3 Analysis
3.1 Methodology and Implementation
3.2 Criteria for Comparing Different IaC Tools
3.3 Criteria for a Functional Comparison of Orchestration Tools
3.4 A More Detailed Comparison of Terraform and Pulumi
3.5 Docker for Deploying Web Applications
4 Design
4.1 Modelling the Abstraction
4.2 Examination of the Concepts
4.3 Evaluation of the Concepts
5 Prototype
5.1 Presentation of the Prerequisites
5.2 Implementation of the Prototype
5.3 Analysis of the Prototype
6 Conclusion
6.1 Outlook
6.2 Summary
Bibliography
Glossary
|
7 |
DevOps: Assessing the Factors Influencing the Adoption of Infrastructure as Code, and the Selection of Infrastructure as Code Tools: A Case Study with Atlas Copco. Ljunggren, David. January 2023.
This research initiative, which takes the shape of an interpretive qualitative case study, investigates the key considerations for organizations that are to adopt IaC and select an IaC tool. For data collection, interviews were conducted with operations specialists with varying experience of Infrastructure as Code, followed by thematic data analysis. The gathered data included insights based on the experiences of various professionals at Atlas Copco. The thematic analysis approach was applied to detect recurring patterns and themes in the gathered data, paving the way for significant conclusions. The case study's findings highlight five critical elements in two domains for successful IaC integration and tool selection. The first identified domain was adoption and integration. To begin with, technical expertise such as programming skills, version control skills, and cloud computing experience was identified as a critical consideration in this domain. Secondly, resources such as time, learning materials, courses, and tools were identified as important factors for the integration, perhaps especially for individuals with less prior experience of DevOps and IaC. Thirdly, organizational change was identified as a critical component of successful integration. The two remaining themes belonged to a domain named tool selection: ease of use and security. In summary, this paper provides insights into the key considerations of IaC adoption and IaC tool selection. Its findings underscore organizational change, resources, and expertise for successful adoption, and ease of use and security for successful tool selection. It aims to be valuable to any individual or organization adopting IaC or conducting research on software engineering and IaC. Due to the small sample size and the absence of software developers in the data collection, there is a clear need for future research to enhance the academic understanding of IaC tool selection and IaC adoption.
|
8 |
A DevOps Approach to the EA Blueprint Architectural Pattern. Persson, Susanna. January 2022.
In the world of software development, there is an increasing demand for software to keep up with rapid changes in its real-world context. A Resilient Digital Twin of an Organization is a type of software whose purpose is to digitally represent an organization or a component of an organization (a Digital Twin), and to keep doing so accurately as the real-world organization changes (a Resilient Digital Twin). An architectural pattern, called the EA Blueprint Pattern, has recently been proposed for developing Resilient Digital Twins that can change together with the organization. However, software architecture is not the only factor that enables continuous change and adaptability in software. For software development teams to deliver software rapidly and reliably, the development process itself must allow for frequent and fast changes. From this need, the Agile methodology and subsequently the set of work practices called DevOps have emerged. DevOps leverages automation and fast feedback to shorten the system development life cycle and enable continuous delivery. As DevOps becomes increasingly popular in the software development field, there is a need to ensure that the EA Blueprint Pattern remains appropriate in a DevOps context, where different tools and routines may be used than in traditional development. In this project, a use case of the EA Blueprint Pattern was moved from a traditionally developed and deployed setting to a DevOps setting that includes essential DevOps tools: Infrastructure as Code, a cloud environment, and a CI/CD pipeline that enables automatic deployment and therefore a shorter system development life cycle. By doing this, it can be gauged how well the EA Blueprint Pattern is adapted to a modern software development process that utilises the advantages of DevOps.
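The study's actual pipeline configuration is not included in the abstract. The fail-fast behaviour a CI/CD pipeline contributes can be sketched in a few lines of Python, with hypothetical stage commands standing in for the real build, test, and deploy steps:

import subprocess
import sys

# Hypothetical stage commands; the study's actual pipeline, cloud target,
# and IaC tool are not specified in this abstract.
STAGES = [
    ("build", ["docker", "build", "-t", "resilient-twin", "."]),
    ("test", ["python", "-m", "pytest"]),
    ("deploy", ["terraform", "apply", "-auto-approve"]),
]

def run_pipeline(stages):
    """Run stages in order and stop at the first failure (fail fast),
    which is the basic contract of a CI/CD pipeline."""
    for name, cmd in stages:
        print(f"--- {name} ---")
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"stage '{name}' failed; aborting pipeline")

if __name__ == "__main__":
    run_pipeline(STAGES)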
|
9 |
An infrastructure for autonomic and continuous long-term software evolution. Jiménez, Miguel. 29 April 2022.
Increasingly complex dynamics in software operations pose formidable software evolution challenges to the software industry. Examples of these dynamics include the globalization of software markets, the massive increase of interconnected devices worldwide with the internet of things, and the digital transformation to large-scale cyber-physical systems. To tackle these challenges, researchers and practitioners have developed impressive bodies of knowledge, including adaptive and autonomic systems, run-time models, continuous software engineering, and the practice of combining software development and operations (i.e., DevOps). Despite the tremendous strides the software engineering community has made toward managing highly dynamic systems, software-intensive industries face major challenges in matching the ever-increasing pace. To cope with the rapid rate at which operational contexts for software systems change, organizations are required to automate and expedite software evolution on both the development and operations sides.
The aim of our research is to develop continuous and autonomic methods, infrastructures, and tools to realize software evolution holistically. In this dissertation, we shift the prevalent autonomic computing paradigm and provide new perspectives and foci on integrating autonomic computing techniques into continuous software engineering practices, such as DevOps. Our methods and approaches are based on online experimentation and evolutionary optimization. Experimentation allows autonomic managers to make informed, data-driven and explainable decisions and present evidence to stakeholders. As a result, autonomic managers contribute to the continuous and holistic evolution of design, configuration and deployment artifacts, providing guarantees on the validity, quality and effectiveness of enacted changes. Ultimately, our approach turns autonomic managers into online stakeholders whose contributions are subject to quality control.
Our contributions are fourfold, focusing on effecting long-lasting software changes through self-management, self-improvement, and self-regulation. First, we propose a framework for continuous software evolution pipelines for bridging offline and online evolution processes. Our framework's infrastructure captures run-time changes and turns them into configuration and deployment code updates. Our functional validation on cloud infrastructure management demonstrates its feasibility and soundness. It effectively contributes to eliminating technical debt from the Infrastructure-as-Code (IAC) life cycle, allowing development teams to embrace the benefits of IAC without sacrificing existing automation. Second, we provide a comprehensive implementation of the continuous IAC evolution pipeline. Third, we design a feedback loop to conduct experimentation-driven continuous exploration of design, configuration and deployment alternatives. Our experimental validation demonstrates its capacity to enrich the software architecture with additional components, and to optimize the computing cluster's configuration, both aiming to reduce service latency. Our feedback loop frees DevOps engineers from incremental improvements, and allows them to focus on long-term mission-critical software evolution changes. Fourth, we define a reference architecture to support short-lived and long-lasting evolution actions at run-time. Our architecture incorporates short-term and long-term evolution as alternating autonomic operational modes. This approach keeps internal models relevant over prolonged system operation, thus reducing the need for additional maintenance. We demonstrate the usefulness of our research in case studies that guide the designs of cloud management systems and a Colombian city transportation system with historical data.
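The dissertation's feedback loop runs real online experiments; purely as an illustration of the selection step (trying configuration alternatives and keeping the best-performing one), here is a heavily simplified Python sketch with made-up candidates and a stubbed latency probe:

import random

# Hypothetical configuration space and latency probe; the actual feedback
# loop deploys candidates and observes real service latency instead.
CANDIDATES = [
    {"replicas": r, "cpu_limit": c}
    for r in (2, 4, 8)
    for c in ("500m", "1000m")
]

def measure_latency(config):
    """Stand-in for deploying a candidate and observing service latency."""
    return random.uniform(50, 200) / config["replicas"]

def feedback_loop(candidates):
    """One experimentation round: try each alternative, keep the best.
    A real autonomic manager would repeat this continuously and gate
    changes behind quality control."""
    return min(candidates, key=measure_latency)

if __name__ == "__main__":
    print("selected configuration:", feedback_loop(CANDIDATES))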
In summary, this dissertation presents a new approach to managing software continuity and continuous software improvement effectively. Our methods, infrastructures, and tools constitute a new platform for short-term and long-term continuous integration and software evolution strategies and processes for large-scale intelligent cyber-physical systems. This research is a significant contribution to the long-standing challenges of easing continuous integration and evolution tasks across the development-time and run-time boundary. Thus, we expand the vision of autonomic computing to support software engineering processes from development to production and back. This dissertation constitutes a new holistic approach to the challenges of continuous integration and evolution that strengthens the causalities in current processes and practices, especially from execution back to planning, design, and development.
|
10 |
A comparative study of Docker and Vagrant regarding performance on machine level provisioning. Zenk, Viktor; Malmström, Martin. January 2020.
Software projects can nowadays have complex infrastructures behind them, in the form of libraries and various other dependencies which need to be installed on the machines they are being developed on. Setting up this infrastructure manually on a new machine can be a tedious and error-prone process. This can be avoided by automating the process with a software provisioning tool, which can automatically transfer infrastructure between machines based on instructions that can be version controlled in a similar way to the source code. Docker and Vagrant are two tools which can achieve this: Docker encapsulates projects into containers, while Vagrant handles automatic setup of virtual machines. This study compares Docker and Vagrant regarding their performance for machine-level provisioning, both when setting up an infrastructure for the first time on a new machine and when implementing a change in the infrastructure configuration. This was done by provisioning a project using both tools and performing experiments measuring the time taken for each tool to perform the tasks. The results of the experiments were analyzed and showed that Docker performed significantly better than Vagrant in both tests. However, due to limitations of the study, this cannot be assumed to be true for all use cases and scenarios, and performance is not the only factor to consider when choosing a provisioning tool. According to the data collected in this study, Docker is thereby the recommended tool, but more research is needed to determine whether other test cases yield different results.
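The thesis's measurement scripts are not reproduced in the abstract. The basic experiment (timing a cold provisioning run of each tool) could be scripted along these lines in Python, assuming a project with both a Dockerfile and a Vagrantfile; the commands are common invocations of the two tools, not necessarily the ones used in the study:

import subprocess
import time

# Assumes a Dockerfile and a Vagrantfile in the working directory;
# the study's exact test project is not reproduced here.
TOOLS = {
    "docker": ["docker", "build", "--no-cache", "-t", "provision-test", "."],
    "vagrant": ["vagrant", "up", "--provision"],
}

def time_tool(cmd):
    """Time one cold provisioning run of the given tool."""
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    return time.monotonic() - start

if __name__ == "__main__":
    for name, cmd in TOOLS.items():
        # Tear-down between runs (docker rmi / vagrant destroy) omitted.
        print(f"{name}: {time_tool(cmd):.1f} s")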
|