1

Towards a Regression Test Selection Technique for Message-Based Software Integration

Kuchimanchi, Sriram 17 December 2004 (has links)
Regression testing is essential to ensure software quality. Regression test-case selection is the complementary process by which testers ensure that test cases made obsolete by changes to the system are not considered for further testing; this is the regression test-case selection (RTS) problem. Although existing research has addressed many related problems, most existing regression test-case selection techniques cater to procedural systems. Being largely academic, they lack the scalability and detail needed for multi-tier applications and are best suited to procedural, usually mathematical, applications. Enterprise applications have become complex and distributed, leading to component-based architectures in which inter-process communication is a central activity. Messaging is the most widely employed inter-module interaction mechanism, and today's heavily internet-dependent systems are Web-Services based, using XML for messaging. We propose an RTS technique specifically targeted at such enterprise applications.
2

Test case prioritization based on data reuse for Black-box environments

Lima, Lucas Albertins de 31 January 2009 (has links)
Albertins de Lima, Lucas; Cezar Alves Sampaio, Augusto. Test case prioritization based on data reuse for Black-box environments. 2009. Master's dissertation (Dissertação de Mestrado), Programa de Pós-Graduação em Ciência da Computação, Universidade Federal de Pernambuco, Recife, 2009. / Conselho Nacional de Desenvolvimento Científico e Tecnológico
3

Application of Adaptive Techniques in Regression Testing for Modern Software Development

Azizi, Maral 08 1900 (has links)
In this dissertation we investigate the applicability of different adaptive techniques to improve the effectiveness and efficiency of regression testing. Initially, we introduce the concept of regression testing. We then perform a literature review of current practices and state-of-the-art regression testing techniques. Finally, we advance regression testing techniques by performing four empirical studies in which we use different types of information (e.g., user sessions, source code, code commits) to investigate the effectiveness of each software metric on fault detection capability in different software environments. In our first empirical study, we show the effectiveness of applying user session information to test case prioritization. In our next study, we apply the lessons from the previous study and implement a collaborative filtering recommender system for test case prioritization, which uses user sessions and change history information as input and returns a risk score associated with each component. Results of this study show that our recommender system improves the effectiveness of test prioritization; the performance of our approach was particularly noteworthy under time constraints. We then investigate the merits of multi-objective testing over single-objective techniques with a graph-based testing framework. Results of this study indicate that the graph-based technique reduces algorithm execution time considerably while being just as effective as greedy algorithms in terms of fault detection rate. Finally, we apply the knowledge from the previous studies and implement a query answering framework for regression test selection. This framework is built on a graph database and uses fault history information and test diversity in an attempt to select the most effective set of test cases in terms of fault detection capability. Our empirical evaluation with four open source programs shows that our approach can be effective and efficient, selecting a far smaller subset of tests compared to existing techniques.
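To make the risk-driven prioritization idea above concrete, here is a minimal sketch of a weighted risk-scoring heuristic, a much simpler stand-in for the collaborative filtering recommender the study actually builds. The component names, weights, and test-to-component coverage mapping are invented for illustration.

```python
# Minimal sketch: rank tests by an invented risk score that combines change
# frequency and user-session usage of the components each test covers.
recent_changes = {"checkout": 14, "search": 3, "profile": 1}     # commits touching component
session_usage = {"checkout": 0.6, "search": 0.3, "profile": 0.1}  # share of user sessions

def component_risk(component, w_change=0.5, w_usage=0.5):
    max_changes = max(recent_changes.values())
    return (w_change * recent_changes[component] / max_changes
            + w_usage * session_usage[component])

test_coverage = {
    "test_checkout_flow": ["checkout"],
    "test_search_filters": ["search"],
    "test_profile_edit": ["profile"],
    "test_search_to_checkout": ["search", "checkout"],
}

def test_risk(test):
    # A test is as risky as the riskiest component it exercises.
    return max(component_risk(c) for c in test_coverage[test])

prioritized = sorted(test_coverage, key=test_risk, reverse=True)
print(prioritized)  # checkout-related tests come first with these invented numbers
```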
4

Understanding the Effects of Model Evolution Through Incremental Test Case Generation for UML-RT Models

Rapos, ERIC 27 September 2012 (has links)
Model driven development (MDD) is on the rise in software engineering, and nowhere more so than in the realm of real-time and embedded systems. Being able to leverage the code generation and validation techniques made available through MDD is worth exploring, and is the focus of much academic and industrial research. However, given the iterative nature of MDD, the natural evolution of models causes test case generation to occur multiple times throughout a software modeling project. Currently, the process of regenerating test cases for a modified model of a system can be costly, inefficient, and even redundant. The focus of this research was to achieve an improved understanding of the impact of typical model evolution steps on both the execution of the model and its test cases, and how this impact can be mitigated by reusing previously generated test cases. In this thesis we use existing techniques for symbolic execution and test case generation to analyze example models and determine how evolution affects model artifacts; these findings were then used to classify evolution steps based on their impact. From these classifications, we were able to determine exactly how to update existing symbolic execution trees and test suites in order to obtain the resulting test suites using minimal computational resources whenever possible. The approach was implemented in a software plugin, IncreTesCaGen, that is capable of incrementally generating test cases for a subset of UML-RT models by leveraging the existing testing artifacts (symbolic execution trees and test suites), as well as presenting additional analysis results to the user. Finally, we present the results of an initial evaluation of our tool, which provides insight into the tool's performance, the effects of model evolution on execution and test case generation, and design tips for producing models that evolve well. / Thesis (Master, Computing) -- Queen's University, 2012
5

Methods For Test Case Prioritization Based On Test Case Execution History

Ying, PuLe, Fan, LingZhi January 2017 (has links)
Motivation: Test case prioritization can optimize test execution and save time and cost. There are many methods for test case prioritization; prioritization based on test case execution history is one of them. Using test case execution history makes it easier to increase the rate of fault detection, so we study prioritization methods based on execution history and execute the feasible ones to compare their effectiveness. The thesis may be regarded as an example of an approach for comparing test case prioritization methods based on test case execution history, or as a case study for identifying suitable methods and helping improve the effectiveness of the testing process. Objectives: The aim of this thesis is to find a suitable test case prioritization method that can support risk-based testing, in which test case execution history is employed as the key evaluation criterion. There are three main objectives. First, explore and summarize test case prioritization methods based on execution history. Next, identify the differences among these methods. Finally, execute the selected methods and compare their effectiveness. Methods: To achieve the first and second objectives, a systematic literature review was conducted using the Kitchenham guidelines. To achieve the third objective, an experiment was conducted following the Wohlin guidelines. Results: 1) We conducted a systematic literature review and selected 15 relevant studies. We extracted and synthesized data from these studies and found that the methods differ in their inputs, test levels, maturity levels, validation, and whether they target automated or manual testing. 2) We selected two feasible methods from those 15 studies: Method 1 is adaptive test-case prioritization and Method 2 is a similarity-based test quality metric. We executed the methods on 17 test suites. Comparing the results of the two methods and non-prioritization, the mean Average Percentage of Defects Found (APFD) of adaptive test-case prioritization (86.9%) is significantly higher than that of non-prioritization (51.5%) and of the similarity-based test quality metric (47.5%), which means that adaptive test-case prioritization is more effective. Conclusion: Existing test case prioritization methods based on test case execution history are extracted and listed through the systematic literature review; the thesis summarizes them and describes their differences. The 15 relevant studies and the synthesized data may serve as a guideline for software researchers and testers. Statistical tests on the experimental results show that the two prioritization methods differ in effectiveness.
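For reference, the APFD values quoted above follow the standard formula APFD = 1 - (TF1 + ... + TFm)/(n*m) + 1/(2n), where TFi is the position of the first test revealing fault i, n is the number of tests, and m the number of faults. The thesis's own tooling is not reproduced here; the sketch below computes APFD for an invented prioritized order and fault matrix, assuming every fault is detected by at least one test.

```python
def apfd(order, detects, num_faults):
    """Average Percentage of Faults Detected for a prioritized test order.

    order      -- list of test ids in prioritized execution order
    detects    -- dict mapping test id -> set of fault ids that test reveals
    num_faults -- total number of faults (each assumed detected by some test)
    """
    n, m = len(order), num_faults
    first_position = {}  # fault id -> 1-based position of the first detecting test
    for pos, test in enumerate(order, start=1):
        for fault in detects.get(test, set()):
            first_position.setdefault(fault, pos)
    # APFD = 1 - (TF1 + ... + TFm) / (n * m) + 1 / (2n)
    return 1.0 - sum(first_position.values()) / (n * m) + 1.0 / (2 * n)

# Invented example: 4 tests, 3 faults
detects = {"t1": {1}, "t2": {2, 3}, "t3": set(), "t4": {3}}
print(apfd(["t2", "t1", "t4", "t3"], detects, 3))  # a good ordering, APFD ~ 0.79
print(apfd(["t3", "t4", "t1", "t2"], detects, 3))  # a poor ordering, APFD ~ 0.38
```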
6

An Experiment on the Suitability of RAM for Test Case Design

Wu, Hong January 2009 (has links)
Performing software testing at the early stages of the software development process can save cost and effort in finding and fixing defects. As the first stage of the software development process, requirements engineering has moved away from project-initiated requirements engineering towards requirements-initiated development over the last decade. This leads to new challenges: it demands support for handling requirements that continually come in from multiple stakeholders at multiple abstraction levels, rather than from a few specific customers. The Requirements Abstraction Model (RAM) was developed as a hierarchical abstraction method for requirements management, which enables product management to leverage their resources and select requirements for implementation without overloading the organization. RAM has been validated in industry for its usability in requirements management, but it has not been evaluated for software testing. This thesis presents an empirical study with the goal of evaluating the suitability of RAM for test case design, with respect to efficiency and effectiveness, in comparison with IEEE Std. 830, the standard for traditional requirements specification. To achieve this goal, a controlled experiment was conducted, based on refinement of an initial experiment plan, with twenty developers in industry in China. Analysis of the collected data indicates that RAM has similar effectiveness to requirements in IEEE Std. 830 format, while being more efficient for test case design. Therefore, RAM is suitable for test case design and, considering both efficiency and effectiveness together, performs better than IEEE Std. 830.
7

Supported Programming for Beginning Developers

Gilbert, Andrew 01 March 2019 (has links)
Testing code is important, but writing test cases can be time consuming, particularly for beginning programmers who are already struggling to write an implementation. We present TestBuilder, a system for test case generation which uses an SMT solver to generate inputs to reach specified lines in a function, and asks the user what the expected outputs would be for those inputs. The resulting test cases check the correctness of the output, rather than merely ensuring the code does not crash. Further, by querying the user for expectations, TestBuilder encourages the programmer to think about what their code ought to do, rather than assuming that whatever it does is correct. We demonstrate, using mutation testing of student projects, that tests generated by TestBuilder perform better than merely compiling the code using Python’s built-in compile function, although they underperform the tests students write when required to achieve 100% test coverage.
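TestBuilder's implementation is not shown in this listing; as a hedged sketch of the core idea, the snippet below uses the Z3 SMT solver (z3-solver package) to find an input that reaches a chosen branch of a function, after which a TestBuilder-style tool would ask the user for the expected output and emit an assertion. The function under test and its path condition are invented for illustration.

```python
from z3 import Int, Solver, sat

def categorize(n):
    # Hypothetical function under test with a hard-to-hit branch
    if n > 100 and n % 7 == 3:
        return "rare"
    return "common"

# Path condition for reaching the "rare" branch, mirrored as Z3 constraints
n = Int("n")
solver = Solver()
solver.add(n > 100, n % 7 == 3)

if solver.check() == sat:
    value = solver.model()[n].as_long()
    # TestBuilder-style step: present the generated input and ask the user what
    # the output should be, then record an assertion-based test case.
    expected = "rare"  # in the real tool this would come from the user
    assert categorize(value) == expected
```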
8

Automatic Generation of Test Cases for Agile using Natural Language Processing

Rane, Prerana Pradeepkumar 24 March 2017 (has links)
Test case design and generation is a tedious manual process that requires 40-70% of the software test life cycle. Test cases written manually by inexperienced testers may not offer complete coverage of the requirements, and frequent changes in requirements reduce the reusability of manually written test cases, costing more time and effort. Most projects in industry follow a behavior-driven software development approach to capturing requirements from business stakeholders through user stories written in natural language. Instead of writing test cases manually, this thesis investigates a practical solution for automatically generating test cases within an Agile software development workflow using natural-language user stories and acceptance criteria. However, the information provided by the user story alone is insufficient to create test cases using natural language processing (NLP), so we introduce two new input parameters, Test Scenario Description and Dictionary, to improve the test case generation process. To establish feasibility, we developed a tool that uses NLP techniques to automatically generate functional test cases from the free-form test scenario description. The tool reduces the effort required to create the test cases while improving the test coverage and quality of the test suite. Results from the feasibility study are presented in this thesis. / Master of Science
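The thesis's tool is not reproduced here; as a hedged sketch of the general idea, the snippet below splits a Given/When/Then acceptance criterion into clauses and emits a skeleton test case. The user story text and the output format are invented and much simpler than the NLP pipeline described above.

```python
import re

def acceptance_to_test_skeleton(criterion: str, name: str) -> str:
    """Turn a Given/When/Then acceptance criterion into a test case skeleton."""
    clauses = dict(re.findall(
        r"(Given|When|Then)\s+(.*?)(?=\s+(?:Given|When|Then)\s|$)",
        criterion, flags=re.IGNORECASE | re.DOTALL))
    return "\n".join([
        f"def test_{name}():",
        f"    # Arrange: {clauses.get('Given', '').strip()}",
        f"    # Act:     {clauses.get('When', '').strip()}",
        f"    # Assert:  {clauses.get('Then', '').strip()}",
        "    ...  # test body to be filled in",
    ])

criterion = ("Given a registered user with an empty cart "
             "When the user adds an in-stock item "
             "Then the cart total equals the item price")
print(acceptance_to_test_skeleton(criterion, "add_item_to_cart"))
```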
9

A Systematic Approach To Synthesis Of Verification Test-Suites For Modular SoC Designs

Surendran, Sudhakar 11 1900 (has links)
SoCs (System on Chips) are complex designs with heterogeneous modules (CPU, memory, etc.) integrated in them. Verification is one of the important stages in designing an SoC. Verification is the process of checking that the transformation from architectural specification to design implementation is correct. It involves creating the following components: (i) a testplan that identifies the conditions to be verified, (ii) a testcase that generates the stimuli to verify the conditions identified, and (iii) a test-bench that applies the stimuli and monitors the output from the design. Verification consumes up to 70% of the total design time, largely due to the complex and manual nature of the verification task. To reduce the time spent verifying the design, the components used for verification can be generated automatically, or created at an abstract level (to reduce complexity) and reused. In this work we present a methodology to synthesize testcases from reusable code segments and abstract specifications. Our methodology consists of the following major steps: (i) identifying the structure of testcases, (ii) identifying code segments of testcases that can be reused from one SoC to another, (iii) identifying properties of an SoC and its modules that can be used to synthesize the SoC-specific code segments of the testcase, and (iv) proposing a synthesizer that uses the code segments, the properties, and the abstract specification to synthesize testcases. We discuss two specific classes of testcases: those for verifying memory modules and those for verifying data transfer modules. These are considered since they form a significantly large subset of the device functionality. We implement a prototype testcase generator and also present an example to illustrate the use of the methodology for each of these classes. The use of our methodology enables (i) the creation of testcases automatically that are correct by construction and (ii) reuse of the testcase code segments from one SoC to another. Some of the properties (of the modules and the SoC) presented in our work can easily be made part of the architectural specification and hence can further reduce the effort needed to create them.
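The thesis's synthesizer is not reproduced in this listing; the sketch below only illustrates the basic idea of instantiating a reusable code segment with SoC-specific module properties. The template, property names, and generated C-style memory test are invented for illustration.

```python
# Reusable code segment for a memory write/read-back check, parameterized by
# SoC-specific properties (module name and base address).
MEMORY_TEST_SEGMENT = """\
void test_{name}_walking_ones(void) {{
    volatile uint32_t *base = (uint32_t *){base:#010x};
    for (int bit = 0; bit < 32; bit++) {{
        base[0] = 1u << bit;            /* write pattern */
        assert(base[0] == (1u << bit)); /* read back and compare */
    }}
}}
"""

soc_properties = {
    "name": "sram0",     # invented module instance name
    "base": 0x20000000,  # invented base address of the memory module
}

def synthesize_testcase(segment: str, properties: dict) -> str:
    """Instantiate a reusable code segment with SoC-specific properties."""
    return segment.format(**properties)

print(synthesize_testcase(MEMORY_TEST_SEGMENT, soc_properties))
```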
10

Test Case Generation from Specifications Using Natural Language Processing / Testfallsgenerering från specifikationer med hjälp av naturlig språkbehandling

Salman, Alzahraa January 2020 (has links)
Software testing plays a fundamental role in software engineering as it ensures the quality of a software system. However, one of the major challenges of software testing is its cost, since it is a time- and resource-consuming process which, according to academia and industry, can take up to 50% of the total development cost. Today, one of the most common ways of generating test cases is through manual labor, analyzing specification documents to produce test scripts, which tends to be an expensive and error-prone process. Therefore, optimizing software testing by automating the test case generation process can reduce time and cost and also lead to better quality of the end product. Currently, most state-of-the-art solutions for automatic test case generation require formal specifications. Such formal specifications are not always available during the testing process and, if available, they require expert knowledge to write and understand. One artifact that is often available in the testing domain is test case specifications written in natural language. In this thesis, an approach for generating integration test cases from natural language test case specifications is designed, applied, and evaluated. Machine learning and natural language processing techniques are used to implement the approach. The proposed approach is conducted and evaluated on an industrial testing project at Ericsson AB in Sweden. Additionally, the approach has been implemented as a tool with a graphical user interface for aiding testers in the process of test case generation. The approach performs natural language processing to parse and analyze the test case specifications and generate feature vectors that are later mapped to label vectors containing existing C# test script filenames. The feature and label vectors are used as input and output, respectively, in a multi-label text classification process. The approach managed to produce test scripts for all test case specifications and obtained a best F1 score of 89% when using LinearSVC as the classifier and performing data augmentation on the training set.
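The thesis's pipeline is not included in this listing; the following is a minimal sketch of a comparable setup with scikit-learn: TF-IDF features, a one-vs-rest LinearSVC, and multi-label targets mapping specifications to test script filenames. The example specifications and filenames are invented, and no data augmentation is shown.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

# Invented test case specifications (free text) and the C# test scripts they map to
specs = [
    "Verify that the node restarts after a configuration update",
    "Check that an alarm is raised when the link to the peer node is lost",
    "Verify that a configuration update is rejected when parameters are invalid",
]
labels = [
    ["RestartTests.cs"],
    ["AlarmTests.cs", "LinkTests.cs"],
    ["ConfigValidationTests.cs"],
]

binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(labels)  # label vectors (multi-label indicator matrix)

model = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LinearSVC()))
model.fit(specs, y)                  # feature vectors come from the TF-IDF step

new_spec = ["Check that an alarm is raised after the node restarts"]
predicted = binarizer.inverse_transform(model.predict(new_spec))
print(predicted)  # predicted script filenames; may be empty with this tiny toy set
```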
