1

Towards a Regression Test Selection Technique for Message-Based Software Integration

Kuchimanchi, Sriram 17 December 2004 (has links)
Regression testing is essential to ensure software quality. Regression test-case selection is the process of ensuring that test cases made obsolete by changes to the system are not considered for further testing; this is the regression test-case selection problem. Although existing research has addressed many related problems, most existing selection techniques cater to procedural systems. Being academic prototypes, they lack the scalability and detail needed for multi-tier applications and are typically applied to procedural, often mathematical, programs. Enterprise applications have become complex and distributed, leading to component-based architectures in which inter-process communication is a central activity. Messaging is the most widely employed inter-module interaction mechanism, and today's heavily internet-dependent systems are built on Web Services that use XML for messaging. We propose a regression test selection (RTS) technique specifically targeted at such enterprise applications.
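The abstract stops short of the selection algorithm itself; purely as a hedged illustration of the underlying idea of re-running only tests that exercise changed message types, here is a minimal sketch (all identifiers and data are hypothetical, not from the thesis):

```python
# Hypothetical sketch: map each test case to the message types (e.g. XML
# schemas) it exercises, then select only the tests whose message types
# intersect the set of changed messages.
from typing import Dict, List, Set

def select_tests(test_to_messages: Dict[str, Set[str]],
                 changed_messages: Set[str]) -> List[str]:
    """Return the tests whose exercised message types intersect the change set."""
    return sorted(t for t, msgs in test_to_messages.items()
                  if msgs & changed_messages)

if __name__ == "__main__":
    mapping = {
        "test_place_order": {"OrderRequest", "OrderAck"},
        "test_cancel_order": {"CancelRequest"},
        "test_inventory_sync": {"InventoryUpdate"},
    }
    print(select_tests(mapping, changed_messages={"OrderAck"}))
    # -> ['test_place_order']
```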
2

Test Case Selection for Image Processing Systems Using CBIR Concepts

Narciso, Everton Note 29 October 2013 (has links)
Image processing systems play a key role in emulating human vision, since much of the information people obtain from the real world comes through images. Developing such systems is a complex task that requires rigorous testing to ensure their quality and reliability. In this scenario, test case selection is crucial because it helps eliminate redundant and unnecessary test data while trying to maintain high fault detection rates. The literature offers several test case selection approaches focused on systems with alphanumeric inputs and outputs, but selection for complex systems (e.g. image processing) is still little explored. Aiming to contribute to this research field, this work presents a new method, Tcs&CbIR, which selects and retrieves a subset of images from a large test suite. Experiments with two image processing programs show that the new approach can outperform random selection: in the evaluation context presented, the number of test cases required to reveal the presence of errors was reduced by up to 87%. The results also show the potential of CBIR for information abstraction, the importance of defining suitable feature extractors, and the influence that similarity functions can have on test case selection.
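As a hedged sketch of the kind of CBIR-based selection this abstract describes (this is not the Tcs&CbIR algorithm; the histogram feature, cosine similarity, and greedy criterion are assumptions chosen for illustration):

```python
import numpy as np

def color_histogram(image: np.ndarray, bins: int = 16) -> np.ndarray:
    # A simple CBIR-style feature vector: normalized intensity histogram.
    hist, _ = np.histogram(image, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_diverse(images, k: int):
    # Greedily pick images that are least similar to those already selected,
    # aiming for a small but diverse test subset.
    feats = [color_histogram(img) for img in images]
    selected = [0]
    while len(selected) < min(k, len(images)):
        best = min((i for i in range(len(images)) if i not in selected),
                   key=lambda i: max(cosine(feats[i], feats[j]) for j in selected))
        selected.append(best)
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    imgs = [rng.integers(0, 256, size=(32, 32)) for _ in range(10)]
    print(select_diverse(imgs, k=3))
```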
3

Using Autonomous Agents for Software Testing Based on JADE

Nyussupov, Adlet January 2019 (has links)
The thesis describes the development of a multiagent testing application (MTA) based on an agent approach for addressing challenges in the regression testing domain, such as reducing the complexity of testing, optimizing time consumption, increasing efficiency, and automating the approach for regression testing. All of these challenges, related to effectiveness and cost, can be represented as measures of achieved code coverage and the number of test cases created. A multiagent approach is proposed since it allows autonomous behaviour to be implemented and optimizes data processing in a heterogeneous environment. In addition, the agent-based approach provides flexible design methods for building multitask applications and conducting parallel task execution. However, these advantages need to be investigated in the regression testing domain under realistic scenarios. A hypothesis was therefore formulated to investigate the efficiency of the MTA approach, using an experiment as the main research method. The thesis includes a comparative analysis between the MTA and well-known test case generation tools (i.e. EvoSuite and JUnitTools) to identify differences in efficiency and code coverage achieved. The comparison showed advantages for the MTA in the regression testing context due to the level of code coverage achieved relative to the number of test cases. The outcome of the thesis work moves toward solving the aforementioned problems and shows some advantages of using a multiagent approach in regression testing.
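The MTA itself is built on JADE (a Java agent platform); purely as a language-agnostic illustration of the parallel-task idea behind an agent-based test runner, a sketch might distribute test batches to workers and merge their coverage reports (everything below, including the fake coverage data, is hypothetical and does not reflect the MTA's actual design):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Dict, List, Set

def run_test_batch(batch: List[str]) -> Dict[str, Set[str]]:
    # Stand-in for an agent: "runs" its tests and reports covered methods.
    fake_coverage = {"t1": {"A.m1", "A.m2"}, "t2": {"B.m1"}, "t3": {"A.m2", "C.m1"}}
    return {t: fake_coverage.get(t, set()) for t in batch}

def merge(results: List[Dict[str, Set[str]]]) -> Dict[str, Set[str]]:
    merged: Dict[str, Set[str]] = {}
    for r in results:
        merged.update(r)
    return merged

if __name__ == "__main__":
    batches = [["t1", "t2"], ["t3"]]
    with ThreadPoolExecutor(max_workers=2) as pool:
        coverage = merge(list(pool.map(run_test_batch, batches)))
    total = set().union(*coverage.values())
    print(f"{len(coverage)} tests cover {len(total)} methods")
```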
4

Application of Adaptive Techniques in Regression Testing for Modern Software Development

Azizi, Maral 08 1900 (has links)
In this dissertation we investigate the applicability of different adaptive techniques to improve the effectiveness and efficiency of regression testing. We first introduce the concept of regression testing and review current practices and state-of-the-art regression testing techniques. We then advance these techniques through four empirical studies in which we use different types of information (e.g. user sessions, source code, code commits) to investigate the effect of each software metric on fault detection capability in different software environments. In the first empirical study, we show the effectiveness of applying user session information to test case prioritization. In the next study, we apply what we learned and implement a collaborative-filtering recommender system for test case prioritization that takes user sessions and change history as input and returns a risk score for each component. The results show that the recommender system improves the effectiveness of test prioritization; its performance was particularly noteworthy under time constraints. We then investigate the merits of multi-objective testing over single-objective techniques with a graph-based testing framework. The results indicate that the graph-based technique reduces algorithm execution time considerably while being just as effective as greedy algorithms in terms of fault detection rate. Finally, we apply the knowledge from the previous studies and implement a query-answering framework for regression test selection, built on a graph database and using fault history information and test diversity in an attempt to select the most effective set of test cases in terms of fault detection capability. Our empirical evaluation with four open source programs shows that the approach can be effective and efficient, selecting a far smaller subset of tests than existing techniques.
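As a hedged sketch of the risk-based prioritization idea described above (the weights, normalization, and data are illustrative assumptions, not the dissertation's recommender system):

```python
from typing import Dict, List, Set

def component_risk(change_counts: Dict[str, int],
                   session_hits: Dict[str, int],
                   w_change: float = 0.6, w_usage: float = 0.4) -> Dict[str, float]:
    # Combine normalized change frequency and user-session usage into one score.
    comps = set(change_counts) | set(session_hits)
    max_c = max(change_counts.values(), default=1) or 1
    max_s = max(session_hits.values(), default=1) or 1
    return {c: w_change * change_counts.get(c, 0) / max_c
               + w_usage * session_hits.get(c, 0) / max_s
            for c in comps}

def prioritize(test_coverage: Dict[str, Set[str]],
               risk: Dict[str, float]) -> List[str]:
    # Order tests by the summed risk of the components they cover.
    return sorted(test_coverage,
                  key=lambda t: sum(risk.get(c, 0.0) for c in test_coverage[t]),
                  reverse=True)

if __name__ == "__main__":
    risk = component_risk({"checkout": 5, "search": 1}, {"checkout": 40, "search": 90})
    print(prioritize({"test_pay": {"checkout"}, "test_find": {"search"}}, risk))
```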
5

Exploring the use of call stack depth limits to reduce regression testing costs

Bogren, Patrik, Kristola, Isak January 2021 (has links)
Regression testing is performed after existing source code has been modified to verify that no new faults have been introduced by the changes. Test case selection can reduce the effort of regression testing by choosing a smaller subset of the test suite for execution. Several criteria and objectives can be used as constraints on the selection process. One common criterion is function coverage, which can be represented by a coverage matrix that maps test cases to the methods under test. Generating and evaluating these matrices can be very time consuming for large matrices, since their complexity increases exponentially with the number of tests included. To the best of our knowledge, no techniques for reducing the size of such execution matrices have been proposed. This thesis develops a matrix-reduction technique based on analysis of call stack data and studies the effects of limiting call stack depth in terms of coverage accuracy, matrix size, and generation cost. Using a tool that instruments Java projects via Java's instrumentation API, we collect coverage information on open-source Java projects for varying call stack depth limits. Our results show that the stack depth limit can be reduced significantly while retaining high coverage, and that matrix size can be decreased by up to 50%. The metric we used to indicate the difficulty of splitting up the matrix closely resembled the coverage curve. However, we did not see any significant differences in execution time for lower depth limits.
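As a hedged sketch of how a depth limit shrinks such a matrix (the input format, a list of call stacks per test, is an assumption; the thesis collects real data via Java's instrumentation API):

```python
from typing import Dict, List, Set

def coverage_matrix(stacks_per_test: Dict[str, List[List[str]]],
                    depth_limit: int) -> Dict[str, Set[str]]:
    """Map each test to the methods observed within the call stack depth limit."""
    matrix: Dict[str, Set[str]] = {}
    for test, stacks in stacks_per_test.items():
        covered: Set[str] = set()
        for stack in stacks:                # stack[0] is the outermost frame
            covered.update(stack[:depth_limit])
        matrix[test] = covered
    return matrix

if __name__ == "__main__":
    data = {"testA": [["Foo.run", "Bar.helper", "Baz.deep"]],
            "testB": [["Foo.run", "Qux.other"]]}
    full = coverage_matrix(data, depth_limit=10)
    shallow = coverage_matrix(data, depth_limit=2)
    print(sum(len(v) for v in full.values()), "matrix entries vs",
          sum(len(v) for v in shallow.values()), "with the depth limit")
```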
6

Generate Test Selection Statistics With Automated Selective Mutation

Gamini, Devi Charan January 2020 (has links)
Context. Software systems are constantly updated to fix faults and to improve and introduce features. Software testing is the most commonly used method for validating the quality of software systems, and agile processes help automate the testing process. Regression testing is the main strategy used, but it is time consuming, and growing codebases make it ever more so. Making regression testing time efficient for continuous integration is therefore the goal.   Objectives. This thesis focuses on correlating code packages with test packages by automating mutation to inject errors into C code. Running the regression suite against the mutated code establishes the correlations, and the correlation data for a modified code package can then be used for test selection, which is more effective than traditional test selection. To reduce mutation costs, a selective mutation method is used, and a proof of concept demonstrates the proposed hypothesis.   Methods. An experiment answers the research questions: testing the hypothesis on open source C programs evaluates the efficiency. Using this correlation method, testers can reduce testing cycles regardless of the test environment.   Results. Experimenting with sample programs using automated selective mutation, the efficiency of correlating tests with code packages was 93.4%.   Conclusions. This research concludes that automated mutation can be adopted to obtain test selection statistics. Although it is difficult for mutants to fail every test case, assuming the method achieves 93.4% efficient test failure on average, it can reduce the test suite size to 5% for a particular modified code package.
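A hedged sketch of the correlation idea, with the mutation-and-run step stubbed out (nothing below reflects the thesis's actual tool or data):

```python
from typing import Callable, Dict, List, Set

def correlate(packages: List[str],
              mutate_and_run: Callable[[str], Set[str]]) -> Dict[str, Set[str]]:
    """For each code package, record which test cases failed under its mutants."""
    return {pkg: mutate_and_run(pkg) for pkg in packages}

def select_for_change(correlation: Dict[str, Set[str]],
                      changed_packages: Set[str]) -> Set[str]:
    # Later, when a package is modified, re-run only its correlated tests.
    selected: Set[str] = set()
    for pkg in changed_packages:
        selected |= correlation.get(pkg, set())
    return selected

if __name__ == "__main__":
    fake_runs = {"pkg_io": {"test_read", "test_write"}, "pkg_net": {"test_send"}}
    corr = correlate(list(fake_runs), lambda pkg: fake_runs[pkg])
    print(select_for_change(corr, {"pkg_io"}))   # -> {'test_read', 'test_write'}
```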
7

Test Case Selection in Continuous Integration Using Reinforcement Learning with Linear Function Approximator

Salman, Younus January 2023 (has links)
Continuous Integration (CI) has become an essential practice in software development, allowing teams to integrate code changes frequently and detect issues early. However, selecting the right test cases for CI remains a challenge, as it requires balancing the need for thorough testing with the minimization of execution time and resources. This study proposes a practical and lightweight approach that leverages Reinforcement Learning with a linear function approximator for test case selection in CI. Several models are created, each focusing on a different feature set. The proposed method aims to optimize test case selection by learning from past CI outcomes, both the test cases' historical data and the source code's coverage data, and by dynamically adapting the models to newly encountered test cases and modified source code. Through experimentation and comparison between the models, the study demonstrates which feature set is optimal and efficient. The results indicate that Reinforcement Learning with a linear function approximator using coverage information can effectively assist test case selection in CI, leading to enhanced software quality and development efficiency.
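As a hedged illustration of how a linear function approximator can score test cases from a small feature vector (the features, reward definition, and update rule below are assumptions for illustration, not the thesis's models):

```python
import numpy as np

class LinearSelector:
    """Scores a test case as the dot product of its features with learned weights."""
    def __init__(self, n_features: int, lr: float = 0.05):
        self.w = np.zeros(n_features)
        self.lr = lr

    def score(self, features: np.ndarray) -> float:
        return float(self.w @ features)

    def update(self, features: np.ndarray, reward: float) -> None:
        # Nudge the weights toward the observed reward (e.g. 1 if the test failed
        # and was therefore worth running, 0 otherwise).
        self.w += self.lr * (reward - self.score(features)) * features

if __name__ == "__main__":
    sel = LinearSelector(n_features=2)
    # features: [recent failure rate, coverage overlap with the current change]
    history = [(np.array([0.8, 0.6]), 1.0), (np.array([0.1, 0.1]), 0.0)]
    for _ in range(50):
        for feats, reward in history:
            sel.update(feats, reward)
    print(sel.score(np.array([0.7, 0.5])), ">", sel.score(np.array([0.05, 0.1])))
```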
8

Generate Test Selection Statistics with Automated Mutation Testing

Madhukar, Enugurthi January 2018 (has links)
Context: The goal of this research is to form a correlation between code packages and test cases using automated weak mutation. The correlations are used as statistical data for selecting relevant tests from the test suite, which decreases the size of the test suite and speeds up the process. Objectives: In this study, we investigate existing methods for reducing the computational cost of automatic mutation testing. We then build an open source automatic mutation tool that mutates the source code, runs the test cases against the mutated code, and maps each failed test to the part of the code that was changed. The failed test cases give the correlation between tests and source code, which is collected as data for later use in test selection. Methods: Literature review and experimentation were chosen for this research. A controlled experiment was conducted at a Swedish ICT company, mutating camera code and testing it with the regression test suite; the camera code comes from continuous integration historical data. Experimentation was chosen because it focuses on analyzing the data and implementing a tool using historical data, while the literature review establishes which kinds of mutation testing reduce the computational cost of the testing process. Results: Comparing source code mutated with regular mutants and with weak mutants in terms of correlation accuracy, we found a correlation accuracy of 62.1% for regular mutation operators and 85% for weak mutation operators. Conclusions: This research describes an experiment for forming correlations that generate test selection statistics using automated mutation testing in a continuous integration environment, with the aim of improving test case selection in regression testing.
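A hedged sketch of the weak-versus-strong distinction the results compare: a weak mutant counts as killed as soon as the mutated expression produces a different intermediate value, while a strong (regular) mutant must change the final test verdict. The toy expression and mutant below are hypothetical:

```python
def original_expr(a: int, b: int) -> int:
    return a + b

def mutated_expr(a: int, b: int) -> int:
    return a - b          # arithmetic-operator-replacement mutant

def weakly_killed(a: int, b: int) -> bool:
    """Killed if the state right after the mutated expression differs."""
    return original_expr(a, b) != mutated_expr(a, b)

def strongly_killed(a: int, b: int, expected: int) -> bool:
    """Killed only if the test's final assertion fails under the mutant."""
    return mutated_expr(a, b) != expected

if __name__ == "__main__":
    print(weakly_killed(3, 2), strongly_killed(3, 2, expected=5))   # True True
    # With b == 0 the mutant produces no difference, so neither criterion kills it.
    print(weakly_killed(3, 0), strongly_killed(3, 0, expected=3))   # False False
```

In this toy case the two criteria coincide; in realistic programs an intermediate difference may fail to propagate to the test outcome, which is what makes the weak criterion cheaper to evaluate.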
9

Application of Topic Models for Test Case Selection: A comparison of similarity-based selection techniques

Askling, Kim January 2019 (has links)
Regression testing is as important for the quality assurance of a system as it is time consuming. Several techniques exist to lower the execution time of test suites and provide faster feedback to developers, for example techniques based on transition models or string distances. These test case selection (TCS) techniques focus on selecting the subsets of the test suite deemed relevant for the modifications made to the system under test. This thesis project evaluated the use of a topic model, latent Dirichlet allocation, as a means to create a diverse selection of test cases covering certain test characteristics. The model was tested on authentic data sets from two different companies, and the results were compared against prior work in which TCS was performed using similarity-based techniques. The model was also tuned and evaluated, using an algorithm based on differential evolution, to increase its stability in terms of inferred topics and topic diversity. The results indicate that using the model for test case selection was not as efficient as the other similarity-based selection techniques studied in work prior to this thesis. In fact, the selection generated by the model performs similarly, in terms of coverage, to a randomly selected subset of the test suite. Tuning the model does not improve these results; the tuned model performs worse than the other methods in most cases. However, the tuning process makes the model more stable in terms of inferred latent topics and topic diversity. The performance of the model is believed to depend strongly on the characteristics of the underlying data used to train it, putting emphasis on word frequencies and the overall size of the training documents, and implying that improving these would improve the words' relevance scoring.
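As a hedged sketch of the pipeline the thesis evaluates, using scikit-learn's LDA implementation on textual test descriptions and then picking a topic-diverse subset (the toy corpus, topic count, and Euclidean distance are simplifying assumptions):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def topic_vectors(test_texts, n_topics: int = 3, seed: int = 0) -> np.ndarray:
    # Fit LDA on the test case descriptions; one topic distribution per test.
    counts = CountVectorizer().fit_transform(test_texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    return lda.fit_transform(counts)

def select_diverse(vectors: np.ndarray, k: int):
    # Farthest-first selection: maximize the minimum distance to the chosen set.
    selected = [0]
    while len(selected) < min(k, len(vectors)):
        candidates = [i for i in range(len(vectors)) if i not in selected]
        best = max(candidates, key=lambda i: min(
            np.linalg.norm(vectors[i] - vectors[j]) for j in selected))
        selected.append(best)
    return selected

if __name__ == "__main__":
    texts = ["verify login with valid password",
             "verify login rejects wrong password",
             "measure image upload throughput",
             "check report export to pdf"]
    print(select_diverse(topic_vectors(texts), k=2))
```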
