61 |
Performance of DevOps compared to DevSecOps: DevSecOps pipelines benchmarked! Björnholm, Jimmy, January 2020
This paper examines how adding security tools to a software pipeline affects build time. Software development is an ever-changing field in a world where computers are trusted with almost everything society does. Meanwhile, keeping build times low is crucial, and some aspects of quality assurance have therefore been left on the cutting-room floor, security being one of the most vital and time-consuming. The time taken to scan for vulnerabilities has been suggested as a reason for the absence of security tests. By implementing nine different security tools in a generic DevOps pipeline, this paper examines build times quantitatively. The tools were selected using the OWASP Top Ten, coupled with an ISO standard, as a guideline. OWASP Juice Shop was used as the testing environment, and the scans managed to find most of the vulnerabilities in the vulnerable web application. The pipeline was set up in Microsoft Azure and configured in YAML files. The resulting scan durations show that adding security measures to a build pipeline can add as little as a third of the original build time.
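The headline result, that security scans can add as little as a third of the baseline build time, can be illustrated with a small sketch. The stage names and durations below are invented for illustration; they are not the thesis's measurements.

```python
# Hypothetical sketch of the build-time comparison described above.
# Stage names and durations are invented; the thesis measured real
# Azure DevOps pipelines with nine security tools.

def pipeline_duration(stages):
    """Total pipeline duration in seconds."""
    return sum(seconds for _name, seconds in stages)

baseline = [("checkout", 10), ("build", 90), ("unit-tests", 40)]
security = [("dependency-scan", 25), ("static-analysis", 22)]
secured = baseline + security

overhead = pipeline_duration(secured) - pipeline_duration(baseline)
relative = overhead / pipeline_duration(baseline)
print(f"security scans add {overhead}s ({relative:.0%} of the baseline build)")
```

With these made-up numbers the scans add 47 s to a 140 s build, roughly the one-third overhead the abstract reports.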
|
62 |
Multivariate Time Series Prediction for DevOps: A First Step to Fault Prediction of the CI Infrastructure. Wang, Yiran, January 2022
The continuous integration infrastructure (CI servers) is commonly used as a shared test environment, given the need for collaborative, distributed development of software products whose scale and complexity have grown in recent years. To ensure the stability of the CI servers, fault prediction based on the servers' continuously recorded measurement data is of great interest to software development companies. However, the lack of fault data is a typical challenge when learning fault patterns directly. Alternatively, predicting the standard observations that represent the normal behavior of the CI servers can be viewed as an initial step toward fault prediction: faults can then be identified and predicted by studying the difference between observed and predicted standard data, once enough fault data is available in the future. In this thesis, long short-term memory (LSTM), bidirectional LSTM (BiLSTM), and vector autoregressive (VAR) models are developed. The models are compared on both one-step-ahead prediction and iterative long-range prediction up to 60 steps (corresponding to 15 minutes for the CI servers analyzed in the thesis). To account for uncertainty in the predictions, the LSTM-based models are trained to estimate predictive variance, and the resulting prediction intervals are compared with the VAR model's. Moreover, since there are many servers in the CI infrastructure, it is of interest to investigate whether a model trained on one server can represent other servers; this is investigated by applying the one-step-ahead LSTM model to a set of other servers and comparing the results. The LSTM model performs best overall, though only slightly better than the VAR model, whereas the BiLSTM model performs worst in one-step-ahead prediction. When taking uncertainty into account, the LSTM model seems to estimate the assumed distribution best, with the highest log-likelihood.
For long-range prediction, the VAR model surprisingly performs best across almost all range lengths. Lastly, when applying the LSTM one-step-ahead model to the other servers, the performance differs from server to server, which indicates that competitive performance is unlikely when applying the same model to all servers.
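The iterative long-range forecasting described above, where each one-step prediction is fed back in as input, can be sketched with a toy VAR(1) model. The coefficient matrix below is an invented, stable example, not one fitted to CI-server data.

```python
# Illustrative sketch of iterated multi-step forecasting with a VAR(1)
# model: x_{t+1} = A x_t. The matrix A is made up, not fitted to data.

def var1_step(A, x):
    """One-step-ahead prediction for a vector autoregression of order 1."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def iterate_forecast(A, x0, steps):
    """Roll the model forward by feeding each prediction back as input."""
    path, x = [], list(x0)
    for _ in range(steps):
        x = var1_step(A, x)
        path.append(x)
    return path

A = [[0.8, 0.1],
     [0.0, 0.9]]  # assumed dynamics with eigenvalues inside the unit circle

forecast = iterate_forecast(A, [1.0, 2.0], steps=60)  # 60 steps = 15 minutes
print(forecast[0])   # one-step-ahead prediction
print(forecast[-1])  # long-range prediction, decayed toward the mean
```

Because each step reuses a prediction as input, errors compound over the horizon, which is why long-range performance can rank models differently from one-step-ahead performance.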
|
63 |
The Road to DevOps: Implementation in Practice, a Case Study at Centrala Studiestödsnämnden. Bergelin, Jakob; Reuterskiöld, Axel, January 2022
IT development is a field marked by constant change, in terms of technical means, innovations, and ways of working. As with all other forms of product development, approaches vary widely between organizations, companies, and government agencies. In software development, change has been as rapid as the field's emergence: development has moved from sequential, traditional methods to flexible, agile ones. Now another shift is underway, toward a more dynamic way of developing software, intended to unite teams and fundamentally change how they create IT products. It is close to a new philosophy: DevOps. DevOps builds on the principle that the development side, "Development," and the operations and maintenance side, "Operations," join to form a unified team with shared responsibility for the entire development and maintenance cycle. What makes this possible is a combination of a strong culture of collaboration, good communication, and innovative technical tools. There are many ways to implement such a broad concept, and adopting new ways of working is a challenge for any organization that dares to be at the forefront. Centrala Studiestödsnämnden (CSN) is one such organization: a government agency on its way toward DevOps. This is no simple undertaking for an agency that, unlike a private company, does not operate for profit and must comply with different laws and regulations. Through semi-structured interviews and a complementary literature review, this study has examined how an organization can implement DevOps and what conditions currently exist. The results indicate that CSN as an organization has a good starting point for a successful DevOps environment, with a few key steps remaining to complete the implementation. These include reducing the distance between Development and Operations, enabling the necessary technical tools for automation, and changing its development structure.
|
64 |
The core problems of globally distributed work in software development environments, and possible solutions: DevOps environments' opportunities for better adoption of a globally distributed working culture. Oachesu, Alex; Negovanovic, Nemanja, January 2021
Both distributed work and DevOps are on an upward trend, and the problems DevOps tries to answer resemble the common problems that geographically distributed work faces. Mainly, they relate to isolated environments that struggle with mutual understanding, communication, and collaboration, leading to inefficiencies and costs that affect the overall efficiency of companies. This report identifies how DevOps engineering principles and implementations provide solutions to common problems in globally distributed work environments. It uses a systematic literature search and review to extract recent, relevant academic data in the scope of the two research questions. Then, a proof of concept is implemented for DevOps, which confirms the literature. In parallel, a survey addressed to Swedish companies provides subject-related data from the professional environment, which largely supports the literature and adds further knowledge. All of this is considered in the data analysis and the formulation of conclusions, showing DevOps features that can improve and support work in globally distributed environments and outlining the importance of a tailored organizational culture for the modern need for large-scale distributed work.
|
65 |
An Evaluation of Continuous Integration and Delivery Frameworks for Classroom Use. Light, Jarred; Pfeiffer, Phil; Bennett, Brian, 15 April 2021
Continuous integration and delivery (CI/CD) frameworks are a core element of DevOps-based software development. A PHP-based case study assessed the suitability of five such frameworks (JFrog Artifactory, Bitbucket Pipelines, Jenkins, Azure DevOps, and TeamCity) for instructional use. The five were found to be roughly equivalent in terms of their usability for simple configurations. The effort needed to implement CI/CD substantially increased for more realistic production scenarios, like deployments to cloud and load-balanced platforms. These results suggest a need to limit CI/CD-based academic projects to simple infrastructure and technology stacks: e.g., a web application on a single-instance web server.
|
66 |
Smart Enterprise Analytics: Evaluation, Adaptation, and Implementation of Analysis Methods for Automating Information Management. Varwig, Andreas Werner, 4 October 2018
Identifying flexible, powerful methods for mass data analysis, and creating standardizable process models for integrating these methods into IT systems, are central challenges for modern business informatics. Developing standardized solution approaches is particularly relevant for small and medium-sized enterprises (SMEs), and this holds across all industries: financial service providers are affected as much as mechanical and plant engineering. This research establishes a knowledge base that enables a broad range of companies to identify suitable quantitative methods for data analysis and to put them to use.
|
67 |
Software Development Environment: the DevOps perspective. Christensen, Olga, January 2020
DevOps is a collaborative effort based on a set of practices and cultural values developed to improve software quality and the release cycle while operating at the lowest possible cost. To enable DevOps principles such as automation, collaboration, and knowledge sharing, Continuous Delivery and Deployment, as well as Infrastructure as Code, are heavily employed throughout the whole software development process. One of the main building blocks of this process is a development environment where the application code is developed and tested before it is released into production. Using a systematic literature review, this research investigates the DevOps perspective on provisioning and deploying a development environment in a controlled, automated way, specifically the benefits and challenges of this process.
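The Infrastructure-as-Code idea behind such automated provisioning can be sketched as reconciling a declared desired state with the current state. The package names and versions below are invented for illustration and do not come from the review.

```python
# Minimal sketch of declarative, idempotent provisioning (the core idea
# of Infrastructure as Code). All package names and versions are invented.

desired = {"python": "3.11", "postgres": "15", "redis": "7"}

def provision(current, desired):
    """Compute and apply the actions needed to reach the desired state."""
    actions = []
    for package, version in desired.items():
        if current.get(package) != version:
            actions.append(f"install {package}=={version}")
            current[package] = version  # stand-in for the real install step
    return actions

env = {"python": "3.10"}        # a stale development environment
print(provision(env, desired))  # brings the environment up to date
print(provision(env, desired))  # second run does nothing: idempotent
```

Running the same description twice yields no further actions, which is what makes automated provisioning safe to repeat.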
|
68 |
Accelerating university-industry collaborations with MLOps: A case study about the cooperation of Aimo and the Linnaeus University. Pistor, Nico, January 2023
Many developed machine learning models are not used in production applications as several challenges must be solved to develop and deploy ML models. Manual reimplementation and heterogeneous environments increase the effort required to develop an ML model or improve an existing one, considerably slowing down the overall process. Furthermore, it is required that a model is constantly monitored to ensure high-quality predictions and avoid possible drifts or biases. MLOps processes solve these challenges and streamline the development and deployment process by covering the whole life cycle of ML models. Even if the research area of MLOps, which applies DevOps principles to ML models, is relatively new, several researchers have already developed abstract MLOps process models. Research for cases with multiple collaboration partners is rare. This research project aims to develop an MLOps process for cases involving multiple collaboration partners. Hence, a case study is conducted with the cooperation of Aimo and LNU as a single case. Aimo requires ML models for their application and collaborates with LNU regarding this demand. LNU develops ML models based on the provided data, which Aimo integrates into their application afterward. This case is analyzed in-depth to identify challenges and the current process. These results are required to elaborate a suitable MLOps process for the case, which also considers the handover of artifacts between the collaboration partners. This process is derived from the already existing general MLOps process models. It is also instantiated to generate a benefit for the case and evaluate the feasibility of the MLOps process. Required components are identified, and existing MLOps tools are collected and compared, leading to the selection of suitable tools for the case. A project template is implemented and applied to an ML model project of the case to show the feasibility. As a result, this research project provides a concrete MLOps process. 
In addition, several artifacts were produced, such as a project template for ML models that applies the selected toolset. These results mainly fit the analyzed case. Nevertheless, several findings are also generalizable, such as the identified challenges. The compared alternatives and the general method used to elaborate an MLOps process can likewise be applied to other settings, as can several artifacts of this project, such as the tool comparison table and the process used to select suitable tools. This case study shows that it is possible to set up MLOps processes with a high maturity level in situations where multiple cooperation partners are involved and artifacts need to be transferred among them.
|
69 |
DevOps for Data Science System. Zhang, Zhongjian, January 2020
Commercialization potential is important to data science: whether the problems data science encounters in production can be solved determines the success or failure of its commercialization. Recent research shows that DevOps theory is a good approach to solving the problems software engineering encounters in production, and from a product perspective, data science and software engineering both provide digital services to customers. It is therefore worth studying the feasibility of applying DevOps to data science. This paper describes an approach to developing a delivery pipeline for a data science system using DevOps practices. I applied four practices in the pipeline: version control, a model server, containerization, and continuous integration and delivery. However, DevOps is not a theory designed specifically for data science, which means the currently available DevOps practices cannot cover all the problems of data science in production. I expanded the set of DevOps practices to handle that kind of problem with a data science practice: transfer learning. This paper describes an approach to parameter-based transfer, where parameters learned from one dataset are transferred to another dataset, and studies the effect of transfer learning on fitting a model to a new dataset. First, I trained a convolutional neural network on 10,000 images. Then I experimented with the trained model on another 10,000 images, retraining it in three ways: training from scratch, loading the trained weights, and freezing the convolutional layers. The results show that for image classification, when the dataset changes but is similar to the old one, transfer learning is a useful practice for adjusting the model without retraining from scratch. Freezing the convolutional layers is a good choice if the new model just needs to achieve a similar level of performance as the old one. Loading the weights is a better choice if the new model needs to achieve better performance than the original one. In conclusion, there is no need to be limited by the set of existing DevOps practices when applying DevOps to data science.
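The retraining strategies compared above can be illustrated on a toy model. This is not the thesis's convolutional network, just a two-parameter linear model where "freezing" a layer means skipping its gradient update.

```python
# Toy illustration of transfer learning with frozen parameters. The model
# y = w2 * (w1 * x) stands in for a network with two "layers"; freezing
# layer 1 means w1 keeps the value learned on the old dataset.

def train(xs, ys, w1, w2, freeze_w1=False, lr=0.01, epochs=500):
    """Fit the two-parameter model with stochastic gradient descent."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = w2 * (w1 * x) - y
            g1, g2 = err * w2 * x, err * w1 * x
            if not freeze_w1:
                w1 -= lr * g1
            w2 -= lr * g2
    return w1, w2

xs = [0.5, 1.0, 1.5]

# Train from scratch on the "old" dataset, y = 2x.
w1, w2 = train(xs, [2 * x for x in xs], w1=1.0, w2=1.0)

# Transfer both weights to a similar "new" dataset, y = 3x, freezing layer 1.
f1, f2 = train(xs, [3 * x for x in xs], w1, w2, freeze_w1=True)
print(round(w1 * w2, 2), round(f1 * f2, 2))  # old fit near 2x, new fit near 3x
```

Only the unfrozen parameter adapts, yet the transferred model still fits the new data, mirroring the finding that freezing is enough when similar performance suffices.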
|
70 |
Hands-on Comparison of Cloud Computing Services for Developing Applications. Rollino, Sebastian, January 2022
When developing applications, developers face a challenge in selecting the technologies that best fit the requirements gathered for the application to be developed. New programming languages, frameworks, stacks, and the like have arisen in recent years, making the choice even harder. Cloud computing, which has gained popularity over the last two decades, provides computing resources to developers and companies, and as with other technologies there are many cloud service providers to choose from. In this thesis, the two biggest cloud service providers, Amazon Web Services and Microsoft Azure, are compared. Furthermore, after the comparison, a prototype of a customer relationship management system was deployed to the selected provider. The data gathered suggest that further research is needed to decide which provider fits application development better.
|