1. Automating Regression Test Selection for Web Services. Ruth, Michael Edward, 08 August 2007.
As Web services grow in maturity and use, so do the methods used to test and maintain them. Regression testing is a major component of most large testing systems but has only begun to be applied to Web services. The majority of the tools and techniques that apply regression testing to Web services focus on test-case generation, ignoring the potential savings of regression test selection. Regression test selection optimizes the regression testing process by selecting a subset of all tests while still maintaining some level of confidence that the modified system performs no worse than the unmodified system. A safe regression test selection technique guarantees that, after selection, this level of confidence is as high as it would be if no tests were removed. Since safe regression test selection techniques generally rely on code-based (white-box) information, they cannot be applied directly to Web services, given their loosely coupled, standards-based, and distributed nature. A framework is proposed that automates both the regression test selection and regression testing processes for Web services in a decentralized, end-to-end manner. As part of this approach, special consideration is given to the concurrency issues that may occur in an autonomous and decentralized system. The resulting synchronization method is presented along with a set of algorithms that manage the regression testing and regression test selection processes throughout the system. A set of empirical results demonstrates the feasibility and benefit of the approach.
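To make the selection idea concrete, here is a minimal sketch (not the author's framework; the operation names and the per-test coverage mapping are invented for illustration): a selector keeps only the tests that invoke service operations modified in the new version.

```python
# Sketch: operation-level regression test selection for a Web service.
# Assumes each test case records which service operations it invokes.

def select_tests(coverage: dict[str, set[str]], modified_ops: set[str]) -> list[str]:
    """Return the tests whose invoked operations intersect the modified ones."""
    return [test for test, ops in coverage.items() if ops & modified_ops]

coverage = {
    "test_checkout": {"getCart", "placeOrder"},
    "test_browse":   {"listItems"},
    "test_payment":  {"placeOrder", "charge"},
}

# Only tests touching a modified operation are re-run.
print(select_tests(coverage, modified_ops={"placeOrder"}))
# -> ['test_checkout', 'test_payment']
```

Selection of this kind is safe only if the coverage mapping is complete; a test missing from the mapping would be silently skipped.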
2. Regression Test Selection in Multi-Tasking Real-Time Systems Based on Run-Time Logs. LING, ZHANG, January 2009.
Regression testing plays an important role during the software development life-cycle, especially during maintenance: it provides confidence that the modified parts of the software behave as intended and that the unchanged parts are unaffected by the modification. Regression test selection is used to select test cases from the test suites that were used to test the previous version of the software. In this thesis, we extend the traditional definition of a test case with a log file containing information about which events occurred when the test case was last executed. Based on the contents of this log file, we propose a method of regression test selection for multi-tasking real-time systems that can determine which parts of the software have not been affected by the modification. The test cases designed for the unchanged parts therefore do not need to be re-run.
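A rough sketch of the log-driven idea (the log format and event names below are invented for illustration, not the thesis's format): each test carries the events observed during its last execution, and a test is selected only if one of those events belongs to a part of the software touched by the modification.

```python
# Sketch: regression test selection driven by per-test run-time logs.
# Each log file lists one event identifier per line, e.g. "task_A:start".

from pathlib import Path

def events_of(log_file: Path) -> set[str]:
    """Events recorded when the test case was last executed."""
    return {line.strip() for line in log_file.read_text().splitlines() if line.strip()}

def must_rerun(log_file: Path, affected_events: set[str]) -> bool:
    """Select a test only if its last run touched a modified part."""
    return bool(events_of(log_file) & affected_events)

# Events mapped to the tasks changed in the new version (hypothetical).
affected = {"task_A:start", "task_A:ipc_send"}
for log in Path("logs").glob("*.log"):
    if must_rerun(log, affected):
        print(f"re-run {log.stem}")
```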
3. Testing of Heterogeneous Systems. Ghazi, Nauman, January 2014.
Context: A system of systems often exhibits heterogeneity, for instance in implementation, hardware, process, and verification. We define a heterogeneous system as a system comprised of multiple systems (a system of systems) in which at least one subsystem exhibits heterogeneity with respect to the others. The system-of-systems approach taken in the development of heterogeneous systems gives rise to various challenges due to continuously changing configurations and multiple interactions between the functionally independent subsystems. The challenges posed to testing heterogeneous systems mainly relate to interoperability, conformance, and large regression test suites. Furthermore, the inherent complexity of heterogeneous systems also challenges the specification, selection, and execution of tests.

Objective: The main objective of this licentiate thesis is to provide insight into the state of the art in testing heterogeneous systems. Moreover, we aimed to investigate different test techniques used to test heterogeneous systems in industrial settings and their usefulness, and to identify and prioritize the information sources that can help practitioners define a generic search space for the test-case selection process.

Method: The findings presented in this thesis are obtained through a controlled experiment, a systematic literature review (SLR), a case study, and an exploratory survey. The purpose of the SLR was to investigate the existing state of the art in testing heterogeneous systems and to identify research gaps. The results of the SLR further laid the foundation for action research conducted through an exploratory survey comparing different test techniques. We also conducted an industrial case study to investigate the relevant data sources for search-space initiation to prioritize and specify test cases in the context of heterogeneous systems.

Results: Based on our literature review, we found that testing heterogeneous systems is considered a problem of integration and system testing. Multiple interactions between the system and its subsystems result in a testing challenge, especially when configurations change continuously. The current literature targets the problem with multiple test objectives, employing different test methods to address domain-specific testing challenges. Through the exploratory survey, we found three test techniques to be most relevant in the context of testing heterogeneous systems. However, the technique most frequently mentioned by practitioners is manual exploratory testing, which is not a well-researched topic in this context. Moreover, multiple information sources for the test selection process were identified through the case study and the survey.

Conclusion: Companies engaged in developing heterogeneous systems encounter substantial challenges due to multiple interactions between the system and its subsystems. However, the conclusions we draw from the included research studies show a gap between literature and industry: search-based testing is widely discussed in the literature but is the least used test technique in industrial practice. Moreover, no existing test selection framework takes into account all the information sources we investigated. To fill this gap, there is a need for an optimized test selection process based on these information sources. There is also a need to study the test techniques identified through our SLR and survey and to compare them on real heterogeneous systems.
4. Využití on-line diagnostiky při výběru pracovníků / Use of On-line Diagnostic Tools in Selecting Employees. Dufková, Klára, January 2010.
This thesis deals with the possibilities of using on-line diagnostic tools in selecting employees. The theoretical part summarizes the theoretical basis for employee selection, psychodiagnostics, and on-line diagnostics. Attention is paid to the characteristics of psychodiagnostic methods and to the advantages and risks of on-line diagnostics. The practical section examines and assesses the individual on-line diagnostic tools available for employee selection on the Czech market, specifically those offered by Access Assessment s.r.o., Assessment Systems s.r.o., cut-e czech s.r.o., and www.SCIO.cz, s.r.o. The tools are evaluated on the basis of their psychometric properties (standardization, objectivity, reliability, and validity), as these characteristics determine the quality of a questionnaire or test. The result is a recommendation of the on-line questionnaires and tests most applicable to standard selection practice.
5. Impact-Driven Regression Test Selection for Mainframe Business Systems. Dharmapurikar, Abhishek V., 25 July 2013.
No description available.
6. Ratchet: A Prototype Change-Impact Analysis Tool with Dynamic Test Selection for C++ Code. Asenjo, Alejandro, 17 June 2011.
Understanding the impact of changes made daily by development teams working on large-scale software products is a challenge faced by many organizations. Development efficiency can be severely affected by the fragility that creeps in as products evolve and become more complex. Processes such as gated check-in mechanisms can be put in place to detect problematic changes before submission, but their effectiveness is usually limited by their reliance on statically defined sets of tests. Traditional change-impact analysis techniques can be combined with information gathered at run time to create a system that selects tests for change verification. This report provides the high-level architecture of such a system, named Ratchet, that combines static analysis of C++ programs, enabled by reuse of the Clang compiler frontend, with code-coverage information gathered from automated test runs, in order to automatically select and schedule tests that exercise the functions and methods possibly affected by a change. Prototype implementations of the static-analysis components of the system are provided, along with a basic evaluation of their capabilities on synthetic examples.
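As a sketch of how the two information sources might be combined (the data structures and names here are assumptions, not Ratchet's actual design): static analysis supplies the changed functions and a reverse call graph, coverage runs supply a function-to-tests map, and selection takes every test covering anything transitively impacted by a change.

```python
# Sketch: change-impact-based test selection.
# callers: reverse call graph (callee -> direct callers), from static analysis.
# covered_by: function -> tests that executed it, from instrumented runs.

from collections import deque

def impacted(changed: set[str], callers: dict[str, set[str]]) -> set[str]:
    """Transitively close the change set over the reverse call graph."""
    seen, work = set(changed), deque(changed)
    while work:
        fn = work.popleft()
        for caller in callers.get(fn, ()):
            if caller not in seen:
                seen.add(caller)
                work.append(caller)
    return seen

def select(changed, callers, covered_by):
    return {t for fn in impacted(changed, callers) for t in covered_by.get(fn, ())}

callers = {"parse": {"load"}, "load": {"main"}}
covered_by = {"parse": {"test_parse"}, "load": {"test_load"}, "main": {"test_cli"}}
print(select({"parse"}, callers, covered_by))
# -> {'test_parse', 'test_load', 'test_cli'}
```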
7. Planning a Sound High School Testing Program. Campbell, Claude W., 07 1900.
A major consideration in this study has been the establishment of criteria by which the soundness of a testing program, in its role in the secondary school, can be evaluated.

The problem was limited to planning and administering a progressive, comprehensive, long-range testing program designed to meet the needs and problems common to most school administrators within the economic limits of a small high school. It was not the purpose of this study to anticipate problems peculiar to particular teachers, high schools, or localities. However, a properly directed testing program will result in the formation of subsidiary testing programs, undertaken by particular teachers or groups of teachers, for the purpose of throwing light on the specific problems raised by a large general testing program.
8. Extending Peass to Detect Performance Changes of Apache Tomcat. Rosenlund, Stefan, 07 August 2023.
New application versions may contain source code changes that decrease the application’s performance. To ensure sufficient performance, it is necessary to identify these code changes. Peass is a performance analysis tool using performance measurements of unit tests to achieve that goal for Java applications. However, it can only be utilized for Java applications that are built using the tools Apache Maven or Gradle. This thesis provides a plugin for Peass that enables it to analyze applications built with Apache Ant.
Peass utilizes the frameworks Kieker and KoPeMe to record the execution traces and measure the response times of unit tests. This results in two tasks for the Peass-Ant plugin: (1) add Kieker and KoPeMe as dependencies and (2) execute the transformed unit tests. For the first task, our plugin programmatically resolves the transitive dependencies of Kieker and KoPeMe and modifies the XML buildfiles of the application under test. For the second task, the plugin orchestrates the process surrounding test execution (implementing performance optimizations for the analysis of applications with large codebases) and executes specific Ant commands that prepare and start test execution. To make our plugin work, we additionally improved Peass and Kieker, implementing three enhancements and identifying twelve bugs.
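The buildfile-modification step can be pictured with a short Python sketch (the element id, file paths, and jar names are assumptions for illustration; the plugin's real implementation is not reproduced here): the resolved Kieker/KoPeMe jars are appended to a classpath element of the Ant build.xml.

```python
# Sketch: injecting measurement-framework jars into an Ant buildfile.
# Assumes the buildfile declares a <path id="test.classpath"> element.

import xml.etree.ElementTree as ET

def add_jars(buildfile: str, jar_paths: list[str]) -> None:
    tree = ET.parse(buildfile)
    classpath = tree.getroot().find(".//path[@id='test.classpath']")
    if classpath is None:
        raise ValueError("no test.classpath <path> element found")
    for jar in jar_paths:
        # One <pathelement location="..."/> per resolved dependency.
        ET.SubElement(classpath, "pathelement", location=jar)
    tree.write(buildfile, encoding="utf-8", xml_declaration=True)

add_jars("build.xml", ["lib/kieker.jar", "lib/kopeme.jar"])
```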
We evaluated the Peass-Ant plugin by conducting a case study on 200 commits of the open-source project Apache Tomcat. We detected 14 commits with 57 unit tests that contain performance changes. Our subsequent root cause analysis identified nine source code changes that we assigned to three clusters of source code changes known to cause performance changes.

1. Introduction
1.1. Motivation
1.2. Objectives
1.3. Organization
2. Foundations
2.1. Performance Measurement in Java
2.2. Peass
2.3. Apache Ant
2.4. Apache Tomcat
3. Architecture of the Plugin
3.1. Requirements
3.2. Component Structure
3.3. Integrated Class Structure of Peass and the Plugin
3.4. Build Modification Tasks for Tomcat
4. Implementation
4.1. Changes in Peass
4.2. Changes in Kieker and Kieker-Source-Instrumentation
4.3. Buildfile Modification of the Plugin
4.4. Test Execution of the Plugin
5. Evaluative Case Study
5.1. Setup of the Case Study
5.2. Results of the Case Study
5.3. Performance Optimizations for Ant Applications
6. Related Work
6.1. Performance Analysis Tools
6.2. Test Selection and Test Prioritization Tools
6.3. Empirical Studies on Performance Bugs and Regressions
7. Conclusion and Future Work
7.1. Conclusion
7.2. Future Work
9. Test Automation for Grid-Based Multiagent Autonomous Systems. Entekhabi, Sina, January 2024.
Traditional software testing usually relies on manually defined test cases. This manual process can be time-consuming, tedious, and incomplete in covering important but elusive corner cases that are hard to identify. Automatic generation of random test cases is a strategy to mitigate the challenges of manual test-case design. However, the effectiveness of random test cases in fault detection may be limited, leading to increased testing costs, particularly in systems where test execution demands substantial resources and time. Leveraging the domain knowledge of test experts can guide the automatic random generation of test cases toward more effective zones. In this thesis, we target the quality assurance of multiagent autonomous systems and aim to automate test generation for them by applying the domain knowledge of test experts.

To formalize the specification of the domain expert's knowledge, we introduce a small domain-specific language (DSL) for the formal specification of particular locality-based constraints in grid-based multiagent systems. We initially employ this DSL for filtering randomly generated test inputs and evaluate the effectiveness of the generated test cases through an experiment on a case study of autonomous agents. Statistical analysis of the experiment results demonstrates that using domain knowledge to specify test selection criteria for filtering randomly generated test cases significantly reduces the number of potentially costly test executions needed to identify the persisting faults.

Domain knowledge can also be used to generate test inputs directly with constraint solvers. We conducted a comprehensive study comparing the performance of the random-filtering and constraint-solving approaches in generating selective test cases across various test-scenario parameters. The examination of these parameters provides criteria for deciding between random-data filtering and constraint solving, considering the varying size and complexity of the test-input generation constraint. For our experiments, we used the QuickCheck tool for random test-data generation with filtering and Z3 for constraint solving. The findings, supported by observations and statistical analysis, reveal that the test-scenario parameters affect the two approaches differently, with complementary strengths: the random-generation-and-filtering approach excels for systems with many agents and long agent paths but degrades for larger grid sizes and stricter constraints, whereas the constraint-solving approach performs robustly for large grid sizes and strict constraints but degrades with more agents and longer paths.

Our initially proposed DSL is limited in its features and can only specify particular locality-based constraints. To specify more elaborate test scenarios, we extended the DSL based on a more intricate model of autonomous agents and their environment. The extended DSL can specify test oracles and test scenarios for a dynamic grid environment and agents with several attributes. To assess the extended DSL's utility, we designed a questionnaire to gather opinions from several experts and ran an experiment comparing the efficiency of the extended DSL with the initial one. The questionnaire results indicate that the extended DSL successfully specified several scenarios that the experts found more useful than those specified by the initial DSL. Moreover, the experimental results demonstrate that testing with the extended DSL can significantly reduce the number of test executions needed to detect system faults, leading to a more efficient testing process.

Safety of Connected Intelligent Vehicles in Smart Cities – SafeSmart
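A QuickCheck-style generate-and-filter loop can be sketched in a few lines (the grid model and the locality constraint below are invented examples, not the thesis's DSL):

```python
# Sketch: random test-input generation with a locality-based filter.
# A test input places N agents on a grid; the constraint rejects inputs
# where two agents start closer than a minimum Manhattan distance.

import random
from itertools import combinations

def random_placement(n_agents: int, grid: int) -> list[tuple[int, int]]:
    return [(random.randrange(grid), random.randrange(grid)) for _ in range(n_agents)]

def locality_ok(placement, min_dist: int) -> bool:
    return all(abs(ax - bx) + abs(ay - by) >= min_dist
               for (ax, ay), (bx, by) in combinations(placement, 2))

def generate(n_agents=5, grid=20, min_dist=4, tries=10_000):
    """Rejection sampling: the cost grows as the constraint admits fewer inputs."""
    for _ in range(tries):
        p = random_placement(n_agents, grid)
        if locality_ok(p, min_dist):
            return p
    raise RuntimeError("constraint too strict for random filtering; consider a solver")

print(generate())
```

The same constraint could instead be handed to a solver such as Z3 and solved directly; the thesis's comparison measures exactly this trade-off as the scenario parameters vary.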