11.
Cloud provider comparison with a growing business perspective. Lindgren, Leonard; Kvasov, Maxim (January 2023)
As technology becomes increasingly embedded in our daily lives, the demand for computational power is on the rise. Sometimes the necessary calculations can be performed on the device itself, but often they need to be processed remotely. This is where cloud providers come into play, steadily growing to meet the demand. With a multitude of providers available, each offering its own blend of services, costs, availability, and functionality, choosing the right one can be daunting. Each provider has its own advantages and drawbacks, making it more suitable for specific use scenarios. Growing businesses might find it challenging to select the best cloud provider given the wide range of offerings; decisions revolve around factors such as cost, scalability, and performance implications. This study examines a case where a transition was made from the cloud provider Heroku to Amazon Web Services (AWS) and compares their respective offerings. The moved infrastructure included a database, a back-end service, and a front-end service, with the last two running as Docker containers. To investigate the differences, minor modifications were made to the existing infrastructure to make it compatible with AWS. Owing to the similarities between the two systems, we could run tests in seven categories, such as "Time to first byte" and "Time to interactive". The data was collected using the performance-monitoring software Google Lighthouse. Non-application-specific data, such as service availability and costs, was collected directly from the providers. The findings suggest that performance differences between Heroku and AWS were marginal. The cost data indicates that Heroku is currently the pricier option. Although AWS is cheaper, it provides more features and customisation, and it offers a more advantageous cost model for future company growth and system expansion.
Had more time been available for the study, it would also have been of interest to see how other providers compare to the two examined here.
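The "Time to first byte" category mentioned in the abstract has a simple operational definition: the delay between sending a request and receiving the first byte of the response. A minimal sketch of how it might be measured by hand, outside Lighthouse (this is illustrative, not the thesis's measurement code; it uses a local test server so it is self-contained):

```python
import http.server
import socket
import threading
import time

def measure_ttfb(host: str, port: int, path: str = "/") -> float:
    """Return seconds from sending the request until the first response byte."""
    with socket.create_connection((host, port), timeout=5) as sock:
        request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        start = time.perf_counter()
        sock.sendall(request.encode())
        sock.recv(1)  # blocks until the first byte of the response arrives
        elapsed = time.perf_counter() - start
        while sock.recv(65536):  # drain the rest of the response
            pass
        return elapsed

if __name__ == "__main__":
    # Serve the current directory locally so the example needs no external URL.
    server = http.server.HTTPServer(("127.0.0.1", 0),
                                    http.server.SimpleHTTPRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    ttfb = measure_ttfb("127.0.0.1", server.server_address[1])
    print(f"TTFB: {ttfb * 1000:.1f} ms")
    server.shutdown()
```

Against a localhost server the number is close to zero; in the thesis's setting the interesting part is how the figure differs between Heroku's and AWS's network paths.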
12.
Continuous Integration and Delivery by Nix/NixOps in Software Development. Vlk, Tomáš (January 2020)
This thesis deals with the application of the functional package manager Nix and its ecosystem (NixOS, NixOps) to CI/CD in agile development. When these technologies are used, the problems caused by differing environments are virtually eliminated without the need for containerization. The thesis describes the possibilities and shortcomings of Nix/NixOps and proposes a general procedure for using these technologies in the individual phases of agile development and CI/CD. Thanks to Nix/NixOps, implementing CI/CD is very simple, and the whole process is also reproducible. The output of the work is a set of examples demonstrating the use of Nix/NixOps in various projects, available as open source. With this set, developers can adopt Nix quickly and easily in any project without having to study a large amount of material.
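The reproducibility claim rests on Nix pinning the entire toolchain in one expression. A minimal, hedged sketch of what such an expression looks like (the package set and the nixpkgs revision are assumptions, and the commit hash is a placeholder, not a value from the thesis):

```nix
# shell.nix — minimal pinned development environment (illustrative sketch).
# Every developer and CI runner evaluating this file gets the same toolchain,
# which is why environment-mismatch problems largely disappear.
{ pkgs ? import (fetchTarball {
    # Pinning nixpkgs to a fixed revision is what makes the result reproducible;
    # "<commit-hash>" is a placeholder for a real commit.
    url = "https://github.com/NixOS/nixpkgs/archive/<commit-hash>.tar.gz";
  }) {} }:

pkgs.mkShell {
  buildInputs = [ pkgs.nodejs pkgs.postgresql ];
}
```

Running `nix-shell` against this file yields the same environment on a laptop and on a CI agent, which is the property the thesis exploits for CI/CD without containers.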
13.
Bootstrapping a Private Cloud. Deepika Kaushal (9034865), 29 June 2020
Cloud computing allows on-demand provisioning, configuration, and assignment of computing resources with minimal cost and effort for users and administrators. Managing the physical infrastructure that underlies cloud computing services requires provisioning and managing bare-metal computer hardware, hence the need to load operating systems quickly onto bare-metal and virtual machines to service user demand. The focus of this study is on developing a technique to load these machines remotely, which is complicated by the fact that the machines can be in different Ethernet broadcast domains, physically distant from the provisioning server. Using the available bare-metal provisioning frameworks requires significant skill and time, and there is no easily implementable standard method for booting across separate Ethernet broadcast domains. This study proposes a new framework to provision bare-metal hardware remotely and securely using layer-2 services, composed of existing tools assembled for the purpose.
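For context on the single-domain baseline the thesis improves upon: remote OS loading conventionally combines PXE network boot with a DHCP/TFTP service, and crossing broadcast domains relies on a DHCP relay on the intervening routers (e.g. `ip helper-address` on Cisco gear) forwarding the client's broadcast as unicast. A hedged dnsmasq sketch of such a provisioning server (addresses and paths are placeholders, and this is the conventional setup, not the framework the thesis proposes):

```
# dnsmasq configuration sketch for PXE boot (illustrative only).
dhcp-range=10.0.50.100,10.0.50.200,12h   # lease pool for the provisioning subnet (placeholder)
dhcp-boot=pxelinux.0                     # boot loader file name handed to PXE clients
enable-tftp                              # serve boot files over TFTP
tftp-root=/srv/tftp                      # placeholder directory holding pxelinux.0, kernels, initrds
```

The thesis's point is precisely that this relay-based setup is fiddly and non-standard across separate domains, motivating its layer-2 alternative.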
14.
Container Orchestration: the Migration Path to Kubernetes. Andersson, Johan; Norrman, Fredrik (January 2020)
As IT platforms grow larger and more complex, so does the underlying infrastructure. Virtualization is an essential factor for more efficient resource allocation, improving both manageability and environmental impact. It allows more robust solutions and facilitates the use of IaC (Infrastructure as Code). Many systems developed today consist of containerized microservices. As the de facto standard for container orchestration, Kubernetes is the natural next step for many companies. But how do we move from previous solutions to a Kubernetes cluster? We found that sufficiently detailed guidelines are scarce, and set out to gain more knowledge by diving into the subject: implementing prototypes that act as the foundation for a resulting guideline of how the migration can be done.
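The first concrete step in such a migration is usually wrapping an already-containerized service in a Deployment manifest. A minimal hedged sketch (the image name, labels, and port are placeholders, not details from the thesis):

```yaml
# deployment.yaml — minimal Kubernetes Deployment for an existing container image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2                 # Kubernetes keeps two copies running, restarting failed ones
  selector:
    matchLabels:
      app: web                # must match the pod template labels below
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0   # placeholder image reference
          ports:
            - containerPort: 8080
```

Applied with `kubectl apply -f deployment.yaml`, this already buys self-healing and declarative scaling over a bare `docker run`, which is the kind of incremental gain a migration guideline can build on.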
15.
The Significance of Automating the Integration of Security and Infrastructure as Code in Software Development Life Cycle. Hephzibah Adaeze Igwe (19213285), 28 July 2024
The research focuses on integrating automation, specifically security and Infrastructure as Code (IaC), into the Software Development Life Cycle (SDLC). This integration aims to enhance the efficiency, quality, and security of software development processes. The study explores the benefits and challenges associated with implementing DevSecOps practices, which combine development, security, and operations into a unified process.

Background and Motivation
The rise of new technologies and the increasing demand for high-quality software have made software development a crucial aspect of business operations. The SDLC is essential for ensuring that software meets user requirements and maintains high standards of quality and security. Security in particular has become a critical focus due to the growing threat of cyber-attacks and data breaches. By integrating security measures early in the development process, companies can better protect their software and data.

Objectives
The primary objectives of this research are:
1. Examine the benefits and challenges: investigate the advantages and difficulties of integrating DevSecOps and IaC within the SDLC.
2. Analyze the impact on security and quality: assess how automation affects the security and quality of software developed through the SDLC.
3. Develop a framework: create a comprehensive framework for integrating DevSecOps and IaC into the SDLC, thereby improving security and reducing time to market.

Methodology
The research employs a mixed-methods approach, combining qualitative and quantitative methods:
- Qualitative: a literature review of existing research on DevSecOps, IaC, and the SDLC, providing a theoretical foundation and context.
- Quantitative: building a CI/CD (Continuous Integration/Continuous Deployment) pipeline from scratch to collect empirical data. This pipeline serves as a case study of how automation affects software security and quality.

Tools and Technologies
The study utilizes various tools, including:
- GitHub: version control and code repository management.
- Jenkins: automating the CI/CD pipeline, including building, testing, and deploying applications.
- SonarQube: static code analysis, detecting code quality issues and security vulnerabilities.
- Amazon Q: an AI-driven tool used for code generation and security scanning.
- OWASP Dependency-Check: identifying vulnerabilities in project dependencies.
- Prometheus and Grafana: monitoring and collecting metrics.
- Terraform: defining and deploying infrastructure components as code.

Key Findings
- Reduction in defect density: automation significantly reduced defect density, indicating fewer bugs and higher code quality.
- Increase in code coverage: more comprehensive testing, leading to improved software reliability.
- Reduction in MTTR, MTTD, and MTTF: enhanced system reliability and efficiency, with faster detection and resolution of issues.
- Improved system performance: better performance metrics, such as reduced response time and increased throughput.

Conclusion
The study concludes that integrating security and IaC automation into the SDLC is crucial for improving software quality, security, and development efficiency. However, despite the clear benefits, many companies are hesitant to adopt these practices because of perceived challenges such as the upfront investment, the complexity of implementation, and concerns about ROI (return on investment). The research underscores the need for continued innovation and adaptation in software development practices to meet the evolving demands of the technological landscape.

Areas for Further Research
Future studies could explore the broader impact of automation on developer productivity, job satisfaction, and long-term security practices. There is also potential for developing advanced security analysis techniques using machine learning and artificial intelligence, as well as for investigating the integration of security and compliance practices within automated SDLC frameworks.
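The reliability metrics named in the findings have simple operational definitions worth making concrete. A sketch of how MTTR and defect density might be computed from incident records (the record fields and sample values are assumptions for illustration, not the thesis's data):

```python
from datetime import datetime

def mean_time_to_repair(incidents):
    """MTTR: average hours from detection to resolution across incidents."""
    durations = [
        (i["resolved"] - i["detected"]).total_seconds() / 3600
        for i in incidents
    ]
    return sum(durations) / len(durations)

def defect_density(defect_count, kloc):
    """Defects per thousand lines of code (KLOC)."""
    return defect_count / kloc

# Hypothetical incident log: detection and resolution timestamps.
incidents = [
    {"detected": datetime(2024, 1, 1, 9),  "resolved": datetime(2024, 1, 1, 12)},
    {"detected": datetime(2024, 1, 2, 14), "resolved": datetime(2024, 1, 2, 15)},
]
print(mean_time_to_repair(incidents))  # (3 h + 1 h) / 2 = 2.0
print(defect_density(12, 40))          # 12 defects in 40 KLOC = 0.3
```

Tracking these figures before and after introducing pipeline automation is what lets a study like this one quantify the effect rather than assert it.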