1

Automating Regression Test Selection for Web Services

Ruth, Michael Edward 08 August 2007 (has links)
As Web services grow in maturity and use, so do the methods which are being used to test and maintain them. Regression Testing is a major component of most major testing systems but has only begun to be applied to Web services. The majority of the tools and techniques applying regression test to Web services are focused on test-case generation, thus ignoring the potential savings of regression test selection. Regression test selection optimizes the regression testing process by selecting a subset of all tests, while still maintaining some level of confidence about the system performing no worse than the unmodified system. A safe regression test selection technique implies that after selection, the level of confidence is as high as it would be if no tests were removed. Since safe regression test selection techniques generally involve code-based (white-box) testing, they cannot be directly applied to Web services due to their loosely-coupled, standards-based, and distributed nature. A framework which automates both the regression test selection and regression testing processes for Web services in a decentralized, end-to-end manner is proposed. As part of this approach, special consideration is given to the concurrency issues which may occur in an autonomous and decentralized system. The resulting synchronization method will be presented along with a set of algorithms which manage the regression testing and regression test selection processes throughout the system. A set of empirical results demonstrate the feasibility and benefit of the approach.
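The selection step the abstract describes — picking only the tests affected by a modification — can be sketched as a set intersection over coverage data. The mapping of tests to Web-service operations and every name below are illustrative assumptions, not the framework proposed in the thesis:

```python
# Hedged sketch of regression test selection over coverage data.
# All test and operation names are invented for illustration.

def select_tests(coverage, modified_ops):
    """Select the subset of tests exercising any modified operation.

    coverage: dict test name -> set of service operations it calls
    modified_ops: set of operations changed since the last test run
    """
    return {test for test, ops in coverage.items() if ops & modified_ops}

coverage = {
    "test_checkout": {"CartService.add", "PaymentService.charge"},
    "test_browse":   {"CatalogService.search"},
    "test_refund":   {"PaymentService.charge", "PaymentService.refund"},
}
modified = {"PaymentService.charge"}

print(sorted(select_tests(coverage, modified)))  # ['test_checkout', 'test_refund']
```

A selection like this is "safe" only to the extent that the coverage data is complete, which is exactly the difficulty the abstract notes for loosely-coupled, distributed services.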
2

Regression Test Selection in Multi-Tasking Real-Time Systems based on Run-Time Logs

LING, ZHANG January 2009 (has links)
Regression testing plays an important role during the software development life-cycle, especially during maintenance: it provides confidence that the modified parts of the software behave as intended and that the unchanged parts are not affected by the modification. Regression test selection chooses test cases from the test suites used to test the previous version of the software. In this thesis, we extend the traditional definition of a test case with a log file containing information about which events occurred when the test case was last executed. Based on the contents of this log file, we propose a regression test selection method for multi-tasking real-time systems that can determine which parts of the software have not been affected by the modification. Test cases designed for the unchanged parts therefore do not need to be re-run.
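The log-based idea can be sketched minimally: each test case carries a run-time log of the tasks observed during its last execution, and a test is re-selected only if its log mentions a modified task. The log format and all names below are assumptions, not the thesis's implementation:

```python
# Minimal sketch (assumed names): re-select a test only if the tasks
# recorded in its last run-time log intersect the modified tasks.

def select_for_rerun(test_logs, modified_tasks):
    """test_logs: dict test name -> tasks seen in its last run-time log."""
    return sorted(t for t, tasks in test_logs.items()
                  if set(tasks) & modified_tasks)

logs = {
    "tc_alarm":  ["task_sensor", "task_alarm"],
    "tc_motor":  ["task_motor", "task_sensor"],
    "tc_report": ["task_logger", "task_report"],
}
print(select_for_rerun(logs, {"task_motor"}))  # ['tc_motor']
```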
3

Impact-Driven Regression Test Selection for Mainframe Business Systems

Dharmapurikar, Abhishek V. 25 July 2013 (has links)
No description available.
4

Ratchet: a prototype change-impact analysis tool with dynamic test selection for C++ code

Asenjo, Alejandro 17 June 2011 (has links)
Understanding the impact of changes made daily by development teams working on large-scale software products is a challenge faced by many organizations nowadays. Development efficiency can be severely affected by the increase in fragility that can creep in as products evolve and become more complex. Processes, such as gated check-in mechanisms, can be put in place to detect problematic changes before submission, but are usually limited in effectiveness due to their reliance on statically-defined sets of tests. Traditional change-impact analysis techniques can be combined with information gathered at run-time in order to create a system that can select tests for change verification. This report provides the high-level architecture of a system, named Ratchet, that combines static analysis of C++ programs, enabled by the reuse of the Clang compiler frontend, and code-coverage information gathered from automated test runs, in order to automatically select and schedule tests that exercise functions and methods possibly affected by the change. Prototype implementations of the static-analysis components of the system are provided, along with a basic evaluation of their capabilities through synthetic examples.
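The combination of change-impact analysis and coverage-based selection described above can be sketched as reverse reachability over a call graph followed by a coverage lookup. The call graph, coverage map, and function names below are invented for illustration, not taken from Ratchet:

```python
# Sketch: functions transitively affected by a change are found by
# walking a reverse call graph; tests covering any affected function
# are selected. All names are illustrative assumptions.
from collections import deque

def impacted(callers, changed):
    """Transitive closure over a reverse call graph.

    callers: dict callee -> set of its direct callers
    changed: set of functions modified by the change under review
    """
    seen, queue = set(changed), deque(changed)
    while queue:
        fn = queue.popleft()
        for caller in callers.get(fn, ()):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen

def select_tests(coverage, impacted_fns):
    """coverage: dict test -> set of functions it exercises at run time."""
    return {t for t, fns in coverage.items() if fns & impacted_fns}

callers = {"parse": {"load_config"}, "load_config": {"main"}}
coverage = {"t_config": {"main", "load_config"}, "t_render": {"draw"}}
hit = impacted(callers, {"parse"})          # {'parse', 'load_config', 'main'}
print(sorted(select_tests(coverage, hit)))  # ['t_config']
```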
6

Extending Peass to Detect Performance Changes of Apache Tomcat

Rosenlund, Stefan 07 August 2023 (has links)
New application versions may contain source code changes that decrease the application's performance. To ensure sufficient performance, it is necessary to identify these code changes. Peass is a performance analysis tool that uses performance measurements of unit tests to achieve that goal for Java applications. However, it can only be used for Java applications built with Apache Maven or Gradle. This thesis provides a plugin for Peass that enables it to analyze applications built with Apache Ant. Peass uses the frameworks Kieker and KoPeMe to record the execution traces and measure the response times of unit tests. This results in the following tasks for the Peass-Ant plugin: (1) add Kieker and KoPeMe as dependencies, and (2) execute transformed unit tests. For the first task, our plugin programmatically resolves the transitive dependencies of Kieker and KoPeMe and modifies the XML buildfiles of the application under test. For the second task, the plugin orchestrates the process surrounding test execution, implementing performance optimizations for the analysis of applications with large codebases, and executes specific Ant commands that prepare and start test execution. To make the plugin work, we additionally improved Peass and Kieker, implementing three enhancements and identifying twelve bugs. We evaluated the Peass-Ant plugin by conducting a case study on 200 commits of the open-source project Apache Tomcat. We detected 14 commits with 57 unit tests that contain performance changes. Our subsequent root cause analysis identified nine source code changes, which we assigned to three clusters of source code changes known to cause performance changes.
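The buildfile-modification task might look roughly like the following, which injects extra jar locations into every `<classpath>` element of an Ant `build.xml`. The element layout and jar paths are assumptions for illustration; this is not the Peass-Ant plugin's actual code:

```python
# Hedged illustration of modifying an Ant buildfile: add jar locations
# (e.g. for Kieker/KoPeMe) to each <classpath>. Paths are assumptions.
import xml.etree.ElementTree as ET

def add_jars_to_classpaths(xml_text, jars):
    root = ET.fromstring(xml_text)
    for cp in root.iter("classpath"):
        for jar in jars:
            ET.SubElement(cp, "pathelement", {"location": jar})
    return ET.tostring(root, encoding="unicode")

build = """<project name="demo" default="compile">
  <target name="compile">
    <javac srcdir="src" destdir="build">
      <classpath><pathelement location="lib/app.jar"/></classpath>
    </javac>
  </target>
</project>"""

patched = add_jars_to_classpaths(build, ["lib/kieker.jar", "lib/kopeme.jar"])
print('location="lib/kieker.jar"' in patched)  # True
```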
7

MutShrink: um método de redução de banco de dados de teste baseado em mutação / MutShrink: a mutation-based test database shrinking method

Toledo, Ludmila Irineu 11 August 2017 (has links)
Regression testing for database applications can be computationally costly, as it often deals with databases holding large volumes of data and complex SQL statements (for example, nested queries, set comparisons, and the use of functions and operators). In this context, some works select only a subset of the database for testing purposes, that is, they select data to create a test database and thus improve test efficiency. However, selecting test data is usually itself a complex optimization problem. This work proposes a mutation-analysis-based method for selecting test data for regression testing of SQL statements, called MutShrink. The goal is to minimize the cost of testing by reducing the size of the database while maintaining effectiveness similar to that of the original database. MutShrink uses the results of the generated mutants to evaluate the database and selects tuples by applying filters to these results, yielding reduced sets of test data. Experiments were performed using a benchmark with complex SQL statements and a database with a large data volume. We compared our proposal with the QAShrink tool; the results revealed that MutShrink outperformed QAShrink in 92.85% of cases when evaluated by the Mutation Score metric and in 57.14% of cases when evaluated by the Full Predicate Coverage metric.
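The tuple-selection idea can be illustrated with predicate mutants: a row is kept only if it distinguishes (kills) at least one mutant of the SQL predicate. The predicate and the mutants below are toy assumptions, not MutShrink's actual mutation operators:

```python
# Toy illustration: keep a row only when some mutant of the predicate
# classifies it differently from the original predicate.

original = lambda row: row["price"] > 100
mutants = [
    lambda row: row["price"] >= 100,  # relational-operator mutant
    lambda row: row["price"] > 200,   # constant mutant
]

def shrink(rows):
    """Keep rows on which at least one mutant disagrees with the original."""
    return [r for r in rows
            if any(m(r) != original(r) for m in mutants)]

rows = [{"price": 50}, {"price": 100}, {"price": 150}]
print(shrink(rows))  # [{'price': 100}, {'price': 150}]
```

Rows that every mutant classifies exactly like the original contribute nothing to killing mutants, which is why they can be dropped from the test database.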
