11

Tidsvinster med automatiserade regressionstester / Time savings with automated regression tests

Ström, Marcus, Kjessler, Oskar January 2022 (has links)
This study examines the time savings from investing in automated regression tests relative to manual execution. The purpose is to produce a basis for decision making, consisting of ROI and break-even calculations, intended to reduce uncertainty about whether the investment will yield time savings during the system's life span and about how large those savings can become. To examine this, automated regression tests were developed, and the time spent developing them was measured and used as the invested time. The time the automated tests needed to execute the study's test cases was then compared with the manual counterpart. Together with empirical material from interviews, this formed the basis for the ROI and break-even calculations regarding the investment in automated regression tests. The empirical material contributed the parameters test frequency, test volume, and life span to the calculations. Unlike previous research, this study performs the calculations for several test frequencies, which showed that even at a relatively low test frequency, automated regression tests have good prospects for a positive ROI. At a medium to high test frequency, the break-even point could be reached within one year, with the potential for large time savings. The empirical material also showed that the start-up phase, the system type, test-case complexity, and reusability are factors that can affect the time savings.
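The ROI reasoning above can be made concrete with a small calculation. The sketch below is not from the thesis; the function, parameter names, and example values (an investment of 120 hours, bi-weekly runs, and so on) are assumptions chosen purely for illustration.

```python
def break_even_runs(investment_h, manual_run_h, automated_run_h):
    """Number of regression runs after which the automation investment
    has paid for itself in saved execution time (all values in hours)."""
    saving_per_run = manual_run_h - automated_run_h
    if saving_per_run <= 0:
        raise ValueError("automation never pays off with these parameters")
    return investment_h / saving_per_run

# Illustrative values (assumed, not taken from the study):
runs = break_even_runs(investment_h=120, manual_run_h=8, automated_run_h=0.5)
runs_per_year = 26  # bi-weekly regression runs, i.e. a medium test frequency
print(f"break-even after {runs:.1f} runs, about {runs / runs_per_year:.2f} years")
```

Raising the assumed test frequency moves the break-even point earlier in the system's life span, which mirrors the study's observation that even modest frequencies can yield a positive ROI.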
12

Extending Peass to Detect Performance Changes of Apache Tomcat

Rosenlund, Stefan 07 August 2023 (has links)
New application versions may contain source code changes that decrease the application's performance. To ensure sufficient performance, it is necessary to identify these code changes. Peass is a performance analysis tool that uses performance measurements of unit tests to achieve that goal for Java applications. However, it can only be used for Java applications built with Apache Maven or Gradle. This thesis provides a plugin for Peass that enables it to analyze applications built with Apache Ant. Peass uses the frameworks Kieker and KoPeMe to record execution traces and measure the response times of unit tests. This results in two tasks for the Peass-Ant plugin: (1) add Kieker and KoPeMe as dependencies and (2) execute transformed unit tests. For the first task, our plugin programmatically resolves the transitive dependencies of Kieker and KoPeMe and modifies the XML buildfiles of the application under test. For the second task, the plugin orchestrates the process that surrounds test execution—implementing performance optimizations for the analysis of applications with large codebases—and executes specific Ant commands that prepare and start test execution. To make our plugin work, we additionally improved Peass and Kieker, implementing three enhancements and identifying twelve bugs. We evaluated the Peass-Ant plugin in a case study on 200 commits of the open-source project Apache Tomcat. We detected 14 commits with 57 unit tests that contain performance changes. Our subsequent root cause analysis identified nine source code changes, which we assigned to three clusters of source code changes known to cause performance changes.
Outline: 1. Introduction (1.1 Motivation, 1.2 Objectives, 1.3 Organization); 2. Foundations (2.1 Performance Measurement in Java, 2.2 Peass, 2.3 Apache Ant, 2.4 Apache Tomcat); 3. Architecture of the Plugin (3.1 Requirements, 3.2 Component Structure, 3.3 Integrated Class Structure of Peass and the Plugin, 3.4 Build Modification Tasks for Tomcat); 4. Implementation (4.1 Changes in Peass, 4.2 Changes in Kieker and Kieker-Source-Instrumentation, 4.3 Buildfile Modification of the Plugin, 4.4 Test Execution of the Plugin); 5. Evaluative Case Study (5.1 Setup, 5.2 Results, 5.3 Performance Optimizations for Ant Applications); 6. Related Work (6.1 Performance Analysis Tools, 6.2 Test Selection and Test Prioritization Tools, 6.3 Empirical Studies on Performance Bugs and Regressions); 7. Conclusion and Future Work (7.1 Conclusion, 7.2 Future Work)
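The buildfile modification described for the first task can be illustrated generically. The sketch below is a hypothetical stand-in, not the Peass-Ant plugin itself (which is written in Java); the buildfile content, path id, and jar names are invented.

```python
import xml.etree.ElementTree as ET

# Create a tiny Ant buildfile so the example is self-contained.
ET.ElementTree(ET.fromstring(
    '<project name="demo">'
    '<path id="test.classpath"><pathelement location="lib/app.jar"/></path>'
    '</project>')).write("build.xml")

def add_jars_to_classpath(buildfile, path_id, jar_paths):
    """Append extra jars (e.g. resolved Kieker/KoPeMe dependencies)
    to the <path> element that defines the test classpath."""
    tree = ET.parse(buildfile)
    for path in tree.getroot().iter("path"):
        if path.get("id") == path_id:
            for jar in jar_paths:
                ET.SubElement(path, "pathelement", {"location": jar})
            break
    else:  # no matching <path> element was found
        raise ValueError(f"no <path id={path_id!r}> in {buildfile}")
    tree.write(buildfile, encoding="utf-8", xml_declaration=True)

add_jars_to_classpath("build.xml", "test.classpath",
                      ["lib/kieker.jar", "lib/kopeme-core.jar"])
```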
13

Automated Testing of Robotic Systems in Simulated Environments

Andersson, Sebastian, Carlstedt, Gustav January 2019 (has links)
With the simulation tools available today, simulation can be used as a platform for more advanced software testing. By introducing simulation to the software testing of robot controllers, the motion performance testing phase can begin at an earlier stage of development, which would benefit all parties involved with the robot controller: testers at ABB would be able to include more motion performance tests in the regression tests, ABB could save money by adopting simulated robot tests, and customers would be provided with more reliable software updates. In this thesis, a method is developed that uses simulation to create a test set for detecting motion anomalies in new robot controller versions. Using auto-generated test cases and a similarity analysis that calculates the Hausdorff distance between executions of a test case on different controller versions, a test set has been created that is able to detect anomalies in a robot controller containing an induced artificial bug.
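The similarity analysis mentioned above rests on the Hausdorff distance between two recorded motion paths. The sketch below shows the general computation with SciPy; the trajectories, array layout, and anomaly threshold are invented for illustration and are not the thesis's actual data or tolerances.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(path_a, path_b):
    """Symmetric Hausdorff distance between two (N, 3) point arrays."""
    return max(directed_hausdorff(path_a, path_b)[0],
               directed_hausdorff(path_b, path_a)[0])

baseline = np.random.rand(500, 3)                      # old controller's motion
candidate = baseline + 0.001 * np.random.rand(500, 3)  # new controller's motion

ANOMALY_THRESHOLD = 0.01  # assumed tolerance, in the same unit as the paths
if hausdorff(baseline, candidate) > ANOMALY_THRESHOLD:
    print("motion anomaly detected between controller versions")
else:
    print("paths are within tolerance")
```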
14

Evolving Legacy System's Features into Fine-grained Components Using Regression Test-Cases

Mehta, Alok 11 December 2002 (has links)
"Because many software systems used for business today are considered legacy systems, the need for software evolution techniques has never been greater. We propose a novel evolution methodology for legacy systems that integrates the concepts of features, regression testing, and Component-Based Software Engineering (CBSE). Regression test suites are untapped resources that contain important information about the features of a software system. By exercising each feature with its associated test cases using code profilers and similar tools, code can be located and refactored to create components. The unique combination of Feature Engineering and CBSE makes it possible for a legacy system to be modernized quickly and affordably. We develop a new framework to evolve legacy software that maps the features to software components refactored from their feature implementation. In this dissertation, we make the following contributions: First, a new methodology to evolve legacy code is developed that improves the maintainability of evolved legacy systems. Second, the technique describes a clear understanding between features and functionality, and relationships among features using our feature model. Third, the methodology provides guidelines to construct feature-based reusable components using our fine-grained component model. Fourth, we bridge the complexity gap by identifying feature-based test cases and developing feature-based reusable components. We show how to reuse existing tools to aid the evolution of legacy systems rather than re-writing special purpose tools for program slicing and requirement management. We have validated our approach on the evolution of a real-world legacy system. By applying this methodology, American Financial Systems, Inc. (AFS), has successfully restructured its enterprise legacy system and reduced the costs of future maintenance. "
15

Using data mining to increase controllability and observability in functional verification

Farkash, Monica C. 10 February 2015 (has links)
Hardware verification currently takes more than 50% of the whole development time. There is a sustained effort to improve the efficiency of the verification process, which in the past helped deliver a large variety of supporting tools. Recent years, though, have not seen any major technology change that would bring the improvements the process really needs (H. Foster 2013; Wilson Research Group 2012). The existing approach to verification no longer provides that kind of qualitative jump. This work introduces a new tactic, providing a modern alternative to the existing approach to the verification problem. The novel approach I use in this research has the potential to improve the process significantly, well beyond incremental changes. It starts with acknowledging the huge amounts of data that accompany the hardware development process from inception to final product, and with considering that data not as a quantitative by-product but as a qualitative supply of information on which we can build smarter verification. The approach is based on data already generated throughout the process, currently used by verification engineers to zoom into the details of different verification aspects. By using existing machine learning approaches, we can zoom out and use the same data to extract information and gain knowledge with which to guide the verification process. This approach tolerates the apparent lack of accuracy introduced by data discovery while still achieving the overall goal. The latest advancements in machine learning and data mining offer the basis for a new understanding and usage of the data that passes through the process. This work takes several practical problems for which the classical verification process reached a roadblock and shows how the new approach can provide a jump in the productivity and efficiency of the verification process. It focuses on four different aspects of verification to prove the power of this new approach: reducing effort redundancy, guiding verification to the areas that need it first, decreasing time to diagnose, and designing tests for coverage efficiency.
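Of the four aspects listed, effort redundancy lends itself to a simple generic illustration: represent each test by its coverage signature, cluster the signatures, and keep one representative per cluster. This is a hypothetical sketch of the idea, not the method used in the dissertation; the data and cluster count are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_tests, n_points = 200, 40
# Binary coverage signature per test: which coverage points it hits.
signatures = (rng.random((n_tests, n_points)) > 0.7).astype(float)

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(signatures)

keep = []
for c in range(kmeans.n_clusters):
    members = np.flatnonzero(kmeans.labels_ == c)
    centre = kmeans.cluster_centers_[c]
    # Keep the test whose signature is closest to the cluster centre.
    keep.append(int(members[np.argmin(
        np.linalg.norm(signatures[members] - centre, axis=1))]))
print(f"kept {len(keep)} representative tests out of {n_tests}:", sorted(keep))
```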
16

MutShrink: um método de redução de banco de dados de teste baseado em mutação / MutShrink: a mutation-based test database shrinking method

Toledo, Ludmila Irineu 11 August 2017 (has links)
Regression testing for database applications can be a computationally costly task, as it often deals with databases holding large volumes of data and with structurally complex SQL statements (for example, nested queries, set comparisons, and uses of functions and operators). In this context, some works select only a subset of the database for testing purposes, that is, they select data to create a test database and thus improve test efficiency. Usually, however, the selection of test data is itself a complex optimization problem. This work therefore proposes a mutation-based method for selecting test data for regression testing of SQL statements, called MutShrink. The goal is to minimize the cost of testing by reducing the size of the database while maintaining effectiveness similar to that of the original database. MutShrink uses the results of the generated mutants to evaluate the database and selects tuples by applying filters to those results, yielding reduced sets of test data. Experiments were performed using a benchmark with structurally complex SQL statements and a database with a large data volume. We compared our proposal with the QAShrink tool, and the results revealed that MutShrink outperformed QAShrink in 92.85% of cases when evaluated by the Mutation Score metric and in 57.14% of cases when evaluated by the Full Predicate Coverage metric.
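The mutation score used in the comparison is the fraction of mutants a test database kills. A minimal sketch, with invented mutant counts:

```python
def mutation_score(killed, total, equivalent=0):
    """Percentage of non-equivalent mutants killed by the test data."""
    viable = total - equivalent
    if viable <= 0:
        raise ValueError("no viable mutants to score against")
    return 100.0 * killed / viable

# A mutant of an SQL statement counts as killed when its result on the
# test database differs from the original statement's result.
print(f"original db: {mutation_score(killed=252, total=280):.1f}%")
print(f"reduced db:  {mutation_score(killed=249, total=280):.1f}%")
```

A reduced database whose mutation score stays close to the original's preserves the fault-detection power of the test suite at a fraction of the cost.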
17

REGTEST - an Automatic & Adaptive GUI Regression Testing Tool.

Forsgren, Robert, Petersson Vasquez, Erik January 2018 (has links)
Software testing is very common and is done to increase the quality of, and confidence in, software. In this report, an idea is proposed: a tool for GUI regression testing that uses image recognition to perform the steps of test cases. The problem with such a solution is that if a GUI has been changed, many test cases might break. For this reason, REGTEST was created: a GUI regression testing tool able to handle one type of change made to a GUI component, such as a change in color, shape, location, or text. This type of solution is interesting because setting up tests with such a tool can be very fast and easy, but one previously significant drawback of using image recognition for GUI testing has been that it does not handle changes well. It can be compared with tools that use IDs to perform a test, where the actual visualization of a GUI component does not matter; it only matters that the ID stays the same. However, using such tools requires either underlying knowledge of the GUI component naming conventions or tools that automatically construct XPath queries for the components. To verify that REGTEST can work as well as existing tools, a comparison was made against two professional tools, Ranorex and Kantu. In those tests, REGTEST proved very successful and performed close to, or better than, the other tools.
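The core mechanism (finding a component on screen by image recognition, then acting on it) can be sketched with standard libraries. The file names, threshold, and use of OpenCV and pyautogui are assumptions for illustration; the report does not state that REGTEST is built this way.

```python
import cv2
import pyautogui

def find_component(screenshot_png, template_png, threshold=0.8):
    """Locate a GUI component in a screenshot by template matching;
    return the centre of the best match, or None below the threshold."""
    screen = cv2.imread(screenshot_png, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_png, cv2.IMREAD_GRAYSCALE)
    if screen is None or template is None:
        raise FileNotFoundError("screenshot or template image missing")
    scores = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, top_left = cv2.minMaxLoc(scores)
    if best < threshold:
        return None
    h, w = template.shape
    return top_left[0] + w // 2, top_left[1] + h // 2

target = find_component("screen.png", "ok_button.png")
if target:
    pyautogui.click(*target)  # perform this step of the test case
```

A tool like REGTEST additionally has to keep matching when the component's color, shape, location, or text changes, which plain template matching at a fixed threshold does not handle by itself.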
18

Automatiserade regressionstester avseende arbetsflöden och behörigheter i ProjectWise. : En fallstudie om ProjectWise på Trafikverket / Automated regression tests regarding workflows and permissions in ProjectWise.

Ograhn, Fredrik, Wande, August January 2016 (has links)
Software testing is done to check whether a system meets specified requirements and to find defects. It is an important part of systems development and includes, among other things, regression testing. Regression tests are performed to ensure that a change in the system does not adversely affect other parts of the system. Document management systems often handle sensitive organizational data, which places high demands on security. Permissions in the system must therefore be tested thoroughly to ensure that data does not fall into the wrong hands. Document management systems make it possible for several organizations to pool their resources and knowledge to achieve common goals. Shared work processes are supported through workflows that contain a number of different states, and different permissions apply in these different states. When a permission changes, regression tests are required to ensure that the change has not affected other permissions. This study was conducted as a qualitative case study whose purpose was to describe the challenges of regression testing of roles and permissions in document workflows in a document management system. Interviews and an observation revealed that a major challenge in these tests is that workflow states follow a predetermined sequence; completing this sequence involves an enormous number of permissions that must be tested, which makes the testing effort very extensive in terms of, among other things, time and cost. The study focused on the document management system ProjectWise, managed by Trafikverket (the Swedish Transport Administration). Supporting documentation for decision making was produced for a technical solution for automated regression testing of roles and permissions in ProjectWise workflows. Based on requirements gathering, the proposed solution involved Team Foundation Server (TFS), Coded UI, and a keyword-driven test method. Finally, the differences the technical solution could make compared with today's manual testing were assessed. Based on the literature, a document study, and first-hand experience, test automation proved able to make a difference in a number of identified problem areas, including time and cost.
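The keyword-driven test method named in the proposal can be illustrated generically: test cases are rows of keywords with arguments, and a small runner maps each keyword to an action. The keywords and steps below are invented; the study's actual solution was based on Coded UI and TFS.

```python
# keyword -> action; real actions would drive the ProjectWise GUI.
ACTIONS = {
    "open_document": lambda name: print(f"opening {name}"),
    "set_state":     lambda state: print(f"moving workflow to state {state}"),
    "assert_denied": lambda role, op: print(f"verify {role} cannot {op}"),
}

test_case = [
    ("open_document", ["bridge_drawing.dgn"]),
    ("set_state",     ["Review"]),
    ("assert_denied", ["Consultant", "delete"]),
]

def run(steps):
    for keyword, args in steps:
        ACTIONS[keyword](*args)  # an unknown keyword raises KeyError

run(test_case)
```

Because each permission check is one data row rather than one scripted test, covering the large permission-by-state matrix becomes a matter of generating rows, which is where the savings over manual testing come from.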
19

Patient empowerment in long-term conditions : development and validation of a new measure

Small, Nicola January 2012 (has links)
Background: Patient empowerment is viewed as a priority by policy makers, patients and practitioners worldwide. Although there are a number of measures available, none have been developed specifically for patients in the UK with long-term conditions. It is the aim of this study to report the development and preliminary validation of an empowerment instrument for patients with long-term conditions in primary care.
Methods: The study involved three methods. Firstly, a systematic review was conducted to identify existing empowerment instruments, and to describe, compare and appraise their content and quality. The results supported the need for a new instrument. Item content of existing instruments helped support development of the new instrument. Secondly, empowerment was explored in patients with long-term conditions and primary care practitioners using qualitative methods, to explore its meaning and the factors that support or hinder empowerment. This led to the development of a conceptual model to support instrument development. Thirdly, a new instrument for measuring empowerment in patients with long-term conditions in primary care was developed. A cross-sectional survey of patients was conducted to collect preliminary data on acceptability, reliability and validity, using pre-specified hypotheses based on existing theoretical and empirical work.
Results: Nine instruments meeting review inclusion criteria were identified. Only one instrument was developed to measure empowerment in long-term conditions in the context of primary care, and that was judged to be insufficient in terms of content and purpose. Five dimensions (‘identity’, ‘knowledge and understanding’, ‘personal control’, ‘personal decision-making’, and ‘enabling other patients’) of empowerment were identified through published literature and the qualitative work and incorporated into a preliminary version of the new instrument. A postal survey achieved 197 responses (response rate 33%). Almost half of the sample reported circulatory, diabetic or musculoskeletal conditions. Exploratory factor analysis suggested a three-factor solution (‘identity’, ‘knowledge and understanding’ and ‘enabling’). Two dimensions of empowerment (‘identity’ and ‘enabling’) and total empowerment showed acceptable levels of internal consistency. The measure showed relationships with external measures (including quality of chronic illness care, self-efficacy and educational qualifications) that were generally supportive of its construct validity.
Conclusion: Initial analyses suggest that the new measure meets basic psychometric criteria and has potential for the measurement of patient empowerment in long-term conditions in primary care. The scale may have a role in research on quality of care for long-term conditions, and could function as a patient-reported outcome measure. However, further validation is required before more extensive use of the measure.
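The internal consistency reported in the results is conventionally assessed with Cronbach's alpha. The sketch below computes alpha on simulated item scores; the data are invented, and only the sample size (197 responses) echoes the abstract. A value of at least 0.7 is a commonly used threshold for "acceptable".

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(197, 1))                      # one underlying trait
scores = latent + rng.normal(scale=0.8, size=(197, 5))  # a 5-item subscale
print(f"alpha = {cronbach_alpha(scores):.2f}")
```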
