61

Continuous Integration Pipelines to Assess Programming Assignments : Test Like a Professional

Strand, Anton January 2020 (has links)
Examiners of programming assignments in higher education and people in the software industry both need to test and review code, yet the assessment techniques used are often quite different. The IT industry commonly uses agile work methods such as continuous integration and automated tests, while examiners either assess manually or rely on code grading tools. Since students will most likely become developers working in agile processes, universities could benefit from imitating the work processes of the software industry. The purpose of this study was to develop a workflow for programming assignments inspired by continuous integration, Scrum, and GitLab flow. The workflow was developed from the requirements of Linnaeus University and tested on one of its programming assignments. The demonstration fulfilled all of the predefined requirements, showing that a simplified agile work process is suitable for programming assignments. However, examiners might miss some of the workflow's benefits if the assignment cannot be tested automatically, since grading will then require more manual work.
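As a rough illustration of the kind of automated check such a pipeline could run on each submission, the Python sketch below discovers and runs an assignment's unit tests and fails the CI job when too few pass. The test directory, pass threshold, and grading policy are illustrative assumptions, not details from the thesis.

```python
import sys
import unittest

def assess_submission(test_dir: str = "tests", min_pass_ratio: float = 0.8) -> int:
    """Run the assignment's automated tests and fail the CI job if too few pass."""
    suite = unittest.defaultTestLoader.discover(test_dir)
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    total = result.testsRun
    failed = len(result.failures) + len(result.errors)
    passed = total - failed
    ratio = passed / total if total else 0.0
    print(f"Passed {passed}/{total} tests ({ratio:.0%})")
    # A non-zero exit code marks the CI stage as failed, signalling to the
    # examiner that manual follow-up is needed for this submission.
    return 0 if ratio >= min_pass_ratio else 1

if __name__ == "__main__":
    sys.exit(assess_submission())
```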
62

An infrastructure for autonomic and continuous long-term software evolution

Jiménez, Miguel 29 April 2022 (has links)
Increasingly complex dynamics in software operations pose formidable software evolution challenges to the software industry. Examples of these dynamics include the globalization of software markets, the massive increase of interconnected devices with the Internet of Things, and the digital transformation to large-scale cyber-physical systems. To tackle these challenges, researchers and practitioners have developed impressive bodies of knowledge, including adaptive and autonomic systems, run-time models, continuous software engineering, and the practice of combining software development and operations (i.e., DevOps). Despite the tremendous strides the software engineering community has made toward managing highly dynamic systems, software-intensive industries face major challenges in matching the ever-increasing pace. To cope with the rapid rate at which operational contexts for software systems change, organizations must automate and expedite software evolution on both the development and operations sides.

The aim of our research is to develop continuous and autonomic methods, infrastructures, and tools to realize software evolution holistically. In this dissertation, we shift the prevalent autonomic computing paradigm and provide new perspectives and foci on integrating autonomic computing techniques into continuous software engineering practices, such as DevOps. Our methods and approaches are based on online experimentation and evolutionary optimization. Experimentation allows autonomic managers to make informed, data-driven and explainable decisions and to present evidence to stakeholders. As a result, autonomic managers contribute to the continuous and holistic evolution of design, configuration and deployment artifacts, providing guarantees on the validity, quality and effectiveness of enacted changes. Ultimately, our approach turns autonomic managers into online stakeholders whose contributions are subject to quality control.

Our contributions are fourfold, focused on effecting long-lasting software changes through self-management, self-improvement, and self-regulation. First, we propose a framework of continuous software evolution pipelines for bridging offline and online evolution processes. The framework's infrastructure captures run-time changes and turns them into configuration and deployment code updates. Our functional validation on cloud infrastructure management demonstrates its feasibility and soundness; it effectively contributes to eliminating technical debt from the Infrastructure-as-Code (IaC) life cycle, allowing development teams to embrace the benefits of IaC without sacrificing existing automation. Second, we provide a comprehensive implementation of the continuous IaC evolution pipeline. Third, we design a feedback loop to conduct experimentation-driven continuous exploration of design, configuration and deployment alternatives. Our experimental validation demonstrates its capacity to enrich the software architecture with additional components and to optimize the computing cluster's configuration, both aiming to reduce service latency. The feedback loop frees DevOps engineers from incremental improvements and allows them to focus on long-term, mission-critical software evolution changes. Fourth, we define a reference architecture to support short-lived and long-lasting evolution actions at run time. The architecture incorporates short-term and long-term evolution as alternating autonomic operational modes, keeping internal models relevant over prolonged system operation and thus reducing the need for additional maintenance.

We demonstrate the usefulness of our research in case studies that guide the designs of cloud management systems and a Colombian city transportation system with historical data. In summary, this dissertation presents a new approach to managing software continuity and continuous software improvement effectively. Our methods, infrastructures, and tools constitute a new platform for short-term and long-term continuous integration and software evolution strategies and processes for large-scale intelligent cyber-physical systems. This research is a significant contribution to the long-standing challenges of easing continuous integration and evolution tasks across the development-time and run-time boundary, expanding the vision of autonomic computing to support software engineering processes from development to production and back. The dissertation thus constitutes a holistic approach to the challenges of continuous integration and evolution that strengthens the causalities in current processes and practices, especially from execution back to planning, design, and development.
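The framework itself is only summarised above; purely as a hypothetical sketch of the idea of capturing run-time changes and folding them back into versioned Infrastructure-as-Code, one might reconcile observed and declared state like this. The JSON state model, file layout, and function names are assumptions for illustration, not the dissertation's implementation.

```python
import json
from pathlib import Path

def observe_runtime_state() -> dict:
    """Placeholder for querying the running infrastructure (e.g. a cloud API)."""
    # Hypothetical observed drift: an operator scaled the service at run time.
    return {"web_replicas": 5, "instance_type": "m5.large"}

def load_declared_state(iac_file: Path) -> dict:
    return json.loads(iac_file.read_text())

def reconcile(iac_file: Path) -> bool:
    """Fold run-time changes back into the versioned IaC description,
    so offline (development) and online (operations) evolution stay in sync."""
    declared = load_declared_state(iac_file)
    observed = observe_runtime_state()
    drift = {k: v for k, v in observed.items() if declared.get(k) != v}
    if not drift:
        return False
    declared.update(drift)
    iac_file.write_text(json.dumps(declared, indent=2))
    # In a full pipeline this update would be committed and pass review and
    # quality gates, treating the autonomic manager as one more stakeholder.
    return True
```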
63

Mathematical Optimization for the Test Case Prioritization Problem

Felding, Eric January 2022 (has links)
Regression testing is the process of testing software to make sure that changes to the software do not change its functionality. With growing test suites, the need to prioritize arises. This thesis explores how to weigh factors such as the number of failures detected, days since the latest test case execution, and coverage. The prioritization is done over multiple test systems, software branches, and many test sessions, between which the software can change. With data provided by an industrial partner, we evaluate different ways to prioritize. The developed mathematical model could not cope with the size of the problem, whereas a simulated annealing approach based on that model proved highly successful. We also found that prioritizing test cases related to recent code changes was effective.
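The abstract names simulated annealing over a weighted model of factors such as recent failures, days since the last execution, and coverage. The sketch below shows that general approach; the scoring function, weights, and cooling schedule are assumptions for illustration, not the thesis's actual model (which also spans multiple test systems and branches).

```python
import math
import random

def priority(test: dict, weights: dict) -> float:
    """Weighted score of one test case from (assumed) historical factors."""
    return (weights["fails"] * test["recent_fails"]
            + weights["staleness"] * test["days_since_run"]
            + weights["coverage"] * test["coverage"])

def order_value(order: list, weights: dict) -> float:
    """Reward placing high-priority tests early in the execution order."""
    n = len(order)
    return sum(priority(t, weights) * (n - i) for i, t in enumerate(order))

def anneal(tests: list, weights: dict, steps: int = 10_000,
           t_start: float = 1.0, t_end: float = 0.01) -> list:
    order = tests[:]
    random.shuffle(order)
    best, best_val = order[:], order_value(order, weights)
    cur_val = best_val
    for step in range(steps):
        temp = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]                # propose a swap
        new_val = order_value(order, weights)
        if new_val >= cur_val or random.random() < math.exp((new_val - cur_val) / temp):
            cur_val = new_val
            if new_val > best_val:
                best, best_val = order[:], new_val
        else:
            order[i], order[j] = order[j], order[i]            # undo the swap
    return best
```

For instance, calling `anneal(tests, {"fails": 5.0, "staleness": 1.0, "coverage": 2.0})` on a list of test-case dictionaries with those keys returns an ordering that front-loads the highest-scoring cases.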
64

  • Minimumkrav för ett CI-system / Minimum Requirements for a CI System

Kiendys, Petrus, Al-Zara, Shadi January 2015 (has links)
When a group of developers works on the same code base, conflicts may arise regarding the implementation of modules or subsystems that developers individually work on. These conflicts have to be resolved quickly for the project to advance at a steady pace. Developers who do not communicate changes or other necessary deviations may find themselves in a situation where new or modified modules or subsystems are impossible or very difficult to integrate into the mainline code base. This often leads to so-called "integration hell", where it can take a huge amount of time to adapt new code to the current state of the code base. One strategy that can be deployed to counteract this is "continuous integration", a practice that offers a wide range of advantages when a group of developers collaborates on writing clean and stable code. Continuous integration can be put into practice without any tools, as it is a way of working rather than an actual tool. With that said, the practice can be supported by a tangible tool called a CI system.

This study aims to give insight into what a CI system is and what it fundamentally consists of and has to be able to do. A review of contemporary research on the subject and a market survey were performed to substantiate the claims and conclusions. Core characteristics of CI systems are grouped into functional requirements and non-functional requirements (quality attributes). In this way, it is possible to quantify and categorize the core components and functionalities of a typical CI system. The study also contains an appendix with instructions on how to get started with implementing your own CI server using the CI-system software TeamCity.

The conclusion of this study is that a CI system is an important tool that enables a more efficient software development process. By using CI systems, developers can refine the development process by preventing integration problems, automating parts of the work process (build, test, feedback, deployment), and quickly finding and solving integration issues.
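As a bare-bones illustration of the functional requirements listed in the conclusion (build, test, feedback and deployment triggered by changes to a shared code base), a minimal polling CI loop could look like the following. The `make` targets, polling interval, and notification mechanism are placeholders, and this is not a description of TeamCity.

```python
import subprocess
import time

def run(cmd: list) -> bool:
    """Run one pipeline step; a non-zero exit code means the step failed."""
    return subprocess.run(cmd).returncode == 0

def notify(status: str) -> None:
    # Real CI systems push this to mail/chat/dashboards; here we just print.
    print(f"[CI] {status}")

def ci_cycle() -> None:
    if not run(["git", "pull", "--ff-only"]):   # fetch the latest mainline
        return notify("could not update working copy")
    if not run(["make", "build"]):              # compile step (placeholder target)
        return notify("build broken")
    if not run(["make", "test"]):               # automated test step (placeholder)
        return notify("tests failing")
    run(["make", "deploy"])                     # optional delivery step (placeholder)
    notify("mainline green")

if __name__ == "__main__":
    while True:
        ci_cycle()
        time.sleep(300)  # poll every five minutes
```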
65

Architecting the deployment of cloud-hosted services for guaranteeing multitenancy isolation

Ochei, Laud Charles January 2017 (has links)
In recent years, software tools used for Global Software Development (GSD) processes (e.g., continuous integration, version control and bug tracking) are increasingly being deployed in the cloud to serve multiple users. Multitenancy is an important architectural property in cloud computing in which a single instance of an application is used to serve multiple users. There are two key challenges in implementing multitenancy: (i) ensuring isolation either between multiple tenants accessing the service or between components designed (or integrated) with the service; and (ii) resolving trade-offs between varying degrees of isolation between tenants or components. The aim of this thesis is to investigate how to architect the deployment of cloud-hosted services while guaranteeing the required degree of multitenancy isolation.

Existing approaches for architecting the deployment of cloud-hosted services to serve multiple users have paid little attention to evaluating the effect of varying degrees of multitenancy isolation on the required performance, resource consumption and access privileges of tenants (or components). Approaches for isolating tenants (or components) are usually implemented at lower layers of the cloud stack and often apply to the entire system rather than to individual tenants (or components). This thesis adopts a multimethod research strategy to provide a set of novel approaches for addressing these problems. Firstly, a taxonomy of deployment patterns and a general process, CLIP (CLoud-based Identification process for deployment Patterns), were developed to guide architects in selecting applicable cloud deployment patterns (together with the supporting technologies) for deploying services to the cloud. Secondly, an approach named COMITRE (COmponent-based approach to Multitenancy Isolation Through request RE-routing) was developed together with supporting algorithms and then applied to three case studies to empirically evaluate the varying degrees of isolation between tenants enabled by multitenancy patterns for three different cloud-hosted GSD processes, namely continuous integration, version control, and bug tracking. A synthesis of findings from the three case studies was then carried out to provide an explanatory framework and new insights about varying degrees of multitenancy isolation. Thirdly, a model-based decision support system, together with four variants of a metaheuristic solution for solving the model, was developed to provide an optimal solution for deploying components of a cloud-hosted application with guarantees of multitenancy isolation.

By creating and applying the taxonomy, it was learnt that most deployment patterns are related and can be implemented by combining them with others, for example in hybrid deployment scenarios to integrate data residing in multiple clouds. It has been argued that a shared component is better for reducing resource consumption while a dedicated component is better for avoiding performance interference. However, as the experimental results show, there are certain GSD processes where that is not necessarily so; for example, in version control, additional copies of files are created in the repository, consuming more disk space, and over time performance degrades as more time is spent searching across many files on disk. Extensive performance evaluation of the model-based decision support system showed that the optimal solutions obtained had low variability and percent deviation, and were produced with low computational effort compared to a given target solution.
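Purely to illustrate the isolation trade-off the thesis evaluates (not the actual COMITRE algorithms), the sketch below routes each tenant's requests either to a shared component instance, which reduces resource consumption, or to a dedicated instance, which avoids performance interference. The class names and isolation levels are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    requests_served: int = 0

    def handle(self, tenant: str, request: str) -> str:
        self.requests_served += 1
        return f"{self.name} served '{request}' for {tenant}"

@dataclass
class Router:
    """Re-route tenant requests according to the degree of isolation required."""
    shared: Component = field(default_factory=lambda: Component("shared-instance"))
    dedicated: dict = field(default_factory=dict)
    isolation: dict = field(default_factory=dict)  # tenant -> "shared" | "dedicated"

    def handle(self, tenant: str, request: str) -> str:
        if self.isolation.get(tenant) == "dedicated":
            comp = self.dedicated.setdefault(tenant, Component(f"dedicated-{tenant}"))
        else:
            comp = self.shared  # lower resource consumption, weaker isolation
        return comp.handle(tenant, request)

router = Router(isolation={"acme": "dedicated", "globex": "shared"})
print(router.handle("acme", "commit build #42"))
print(router.handle("globex", "commit build #7"))
```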
66

Automotive Powertrain Software Evaluation Tool

Powale, Kalkin 08 February 2018 (has links) (PDF)
Software is a key differentiator and driver of innovation in the automotive industry. The major challenges for software development are increasing complexity, shorter time-to-market, rising development cost and growing demands on quality assurance. Complexity is increasing due to emission legislation, product variants and new communication technologies being interfaced with the vehicle. Development time is shrinking because of competition in the market, which requires faster feedback loops for the verification and validation of developed functionality. The increase in development cost has two contributing factors: pre-launch cost, which covers error correction during the development stages, and post-launch cost, which covers warranty and guarantee claims. As development time passes, the cost of correcting an error increases, so it is important to detect errors as early as possible. All these factors affect software quality; there have been several cases where an Original Equipment Manufacturer (OEM) had to recall a product because of a quality defect. Hence, the need for software quality assurance has grown.

A solution to these challenges can be early quality evaluation in a continuous integration environment. The AUTomotive Open System ARchitecture (AUTOSAR), the most prominent reference architecture in today's automotive industry, is used to describe software components and interfaces and provides standardised software component architecture elements. It was created to address the issue of growing complexity. The existing AUTOSAR environment does have software quality measures, such as schema validations and protocols for acceptance tests, but it lacks quality specifications for non-functional qualities such as maintainability and modularity. A tool is therefore required that evaluates an AUTOSAR-based software architecture and gives objective feedback regarding its quality.

This thesis aims to provide such a quality measurement tool for the evaluation of AUTOSAR-based software architectures. The tool reads the architecture information from an AUTOSAR Extensible Markup Language (ARXML) file and provides configurability, continuous evaluation and objective feedback regarding software quality characteristics. The tool was used on a transmission control project, and the results were validated by industry experts.
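As a loose illustration of how such a tool might read architecture information from an ARXML file (this is not the thesis's tool, and the element names are simplified assumptions; real ARXML uses the AUTOSAR XML schema and namespaces), a script could count ports per software component as a crude coupling indicator:

```python
import sys
import xml.etree.ElementTree as ET

def local(tag: str) -> str:
    """Strip the XML namespace so matching works regardless of schema version."""
    return tag.rsplit("}", 1)[-1]

def port_counts(arxml_path: str) -> dict:
    """Count port prototypes per software component (element names assumed)."""
    root = ET.parse(arxml_path).getroot()
    counts = {}
    for elem in root.iter():
        if local(elem.tag).endswith("SW-COMPONENT-TYPE"):
            name_elem = next((c for c in elem.iter() if local(c.tag) == "SHORT-NAME"), None)
            name = name_elem.text if name_elem is not None else "unnamed"
            ports = sum(1 for c in elem.iter() if local(c.tag).endswith("PORT-PROTOTYPE"))
            counts[name] = ports
    return counts

if __name__ == "__main__":
    for component, ports in port_counts(sys.argv[1]).items():
        # Components with many ports are more strongly coupled to the rest of
        # the architecture - one simple, objective maintainability signal.
        print(f"{component}: {ports} ports")
```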
67

A framework for test case prioritization in the continuous software engineering

Campos Junior, Heleno de Souza 19 September 2018 (has links)
Regression tests are executed after every change in the software. In a development environment that adopts continuous software engineering practices such as continuous integration, the software is changed, built and tested many times in a short period. Each execution can take hours to finish, delaying feedback about failures to the developer. To prevent this, regression test optimization techniques are used. One such technique is test case prioritization (TCP), which reorders the execution of test cases according to some goal. The most common goal is fault detection: test cases are ordered so that those with a higher probability of detecting faults are executed first. One problem with this approach is that many different techniques are available in the literature, but there is little evidence about their use, and there is almost no infrastructure to support adopting them in an industrial context.

The goal of this work is to design and implement a framework that allows the use, experimentation and implementation of TCP techniques. We hope this will help practitioners adopt these techniques in industry, more specifically in continuous software engineering environments, and that the infrastructure will encourage researchers to perform more empirical studies on the effectiveness of test case prioritization techniques. To show the feasibility of the proposed framework, we perform an empirical study with 16 different TCP techniques executed on a total of 22 versions of 2 open source projects. The results suggest that using these TCP techniques gives faster feedback about the existence of failures in the projects, possibly resulting in shorter development cycles.
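To make the framework idea concrete, a plug-in interface that lets prioritization techniques be swapped and compared could look like the hypothetical sketch below; the class names and the two example strategies are illustrative assumptions, not the framework built in the dissertation.

```python
from abc import ABC, abstractmethod

class TestCase:
    def __init__(self, name: str, failures: int = 0, last_run_age: int = 0):
        self.name = name
        self.failures = failures          # historical failure count
        self.last_run_age = last_run_age  # builds since last execution

class Prioritizer(ABC):
    """Common interface so techniques can be swapped and compared in experiments."""
    @abstractmethod
    def prioritize(self, tests: list) -> list: ...

class FailureHistoryPrioritizer(Prioritizer):
    def prioritize(self, tests: list) -> list:
        return sorted(tests, key=lambda t: t.failures, reverse=True)

class StalenessPrioritizer(Prioritizer):
    def prioritize(self, tests: list) -> list:
        return sorted(tests, key=lambda t: t.last_run_age, reverse=True)

def run_ci_suite(tests: list, technique: Prioritizer) -> None:
    for test in technique.prioritize(tests):
        print(f"running {test.name}")  # the CI server would execute it here

suite = [TestCase("login", failures=3), TestCase("report", last_run_age=12),
         TestCase("export", failures=1, last_run_age=4)]
run_ci_suite(suite, FailureHistoryPrioritizer())
```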
68

Continuous software engineering in the development of software-intensive products: towards a reference model for continuous software engineering

Karvonen, T. (Teemu) 24 October 2017 (has links)
Continuous software engineering (CSE) has instigated academic debate regarding rapid, parallel cycles of releasing software and customer experimentation. This approach, originating from the Web 2.0 and software-as-a-service domain, is widely recognised among software-intensive companies today. Earlier studies have indicated challenges in the use of CSE, especially in the context of business-to-business and product-oriented, embedded systems development. Consequently, research must address more explicit definitions and theoretical models for analysing the prerequisites and organisational capabilities related to the use of CSE. This dissertation investigates approaches to conducting empirical evaluations related to CSE. The study aims to improve existing models of CSE, to empirically validate them in the context of software companies, and to accumulate knowledge regarding the use of CSE and its impacts. The case study method is applied for the collection and analysis of empirical data: twenty-seven interviews were conducted at five companies. In addition, a systematic literature review is used to synthesise the empirical research on agile release engineering practices, and design science research is used to portray the model design and evaluation process of the dissertation. Three approaches for evaluating CSE are constructed: (1) LESAT for Software focuses on enterprise transformation using an organisational self-assessment approach; (2) STH+ extends the "Stairway to Heaven" model and evaluates company practices with respect to evolutionary steps towards continuous experimentation-driven development; and (3) CRUSOE defines 7 key areas and 14 diagnostic questions related to the product-intensive software development ecosystem, strategy, architecture and organisation, as well as their continuous interdependencies. The dissertation confirms the relevance of CSE in the context of product-intensive software development; however, more adaptations are anticipated in practices that involve business and product development stakeholders, as well as company-external stakeholders.
69

Podpora průběžné integrace v rámci systému Copr / Continuous Integration Support for Copr Build System

Klusoň, Martin January 2018 (has links)
This thesis deals with the implementation of continuous integration for the Copr build system. The implementation uses the Citool framework and its modules, which are already used for continuous integration of the Koji build system. The resulting system can take a newly built package from Copr and run its tests on a virtual machine. The thesis thus shows a way to implement continuous integration for the Copr build system.
70

Správa testů s podporou scénářů BDD / Test Case Management with Support of BDD

Bložoňová, Barbora January 2019 (has links)
This thesis focuses on test management tools and automated testing. It analyses existing open source tools and proposes its own BDD-oriented test management tool in the form of a web service, specified and designed around the process of behaviour-driven development. The resulting application, TestBuDDy, allows for test library management: changes to the test library are projected onto a remote repository of the software under test (SUT) and trigger a test run, in which the BDD framework runs the test library against the SUT. TestBuDDy saves the test run results, parses them into a report, and generates and groups the issues found. The application also supports requirement management and user management, and is integrated with the CI/CD tool GitLab CI, the BDD framework JBehave and the issue tracker JIRA. It is designed to help testers in their work and to be extensible by the open source community.
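As a rough sketch of the reporting step described above, parsing test-run results and grouping failures into candidate issues might look like the following; the JUnit-style report format and field names are assumptions, not TestBuDDy's actual implementation.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def failed_scenarios(report_path: str) -> dict:
    """Group failed BDD scenarios by failure message so one issue covers duplicates."""
    groups = defaultdict(list)
    root = ET.parse(report_path).getroot()
    for case in root.iter("testcase"):
        failure = case.find("failure")
        if failure is not None:
            message = (failure.get("message") or "unknown failure").strip()
            groups[message].append(case.get("name"))
    return groups

if __name__ == "__main__":
    for message, scenarios in failed_scenarios("bdd-report.xml").items():
        # Each group would become one tracked issue (e.g. created via the
        # issue tracker's API), listing every scenario that hit it.
        print(f"issue candidate: {message!r} affects {len(scenarios)} scenario(s)")
```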
