  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Thesis for the Degree of Bachelor of Science in Computer Science by Peter Charbachi and Linus Eklund : PAIRWISE TESTING FOR PLC EMBEDDED SOFTWARE

Charbachi, Peter, Eklund, Linus January 2016 (has links)
In this thesis we investigate the use of pairwise testing for PLC embedded software. We compare these automatically generated tests with tests created manually by industrial engineers. The tests were evaluated in terms of fault detection, code coverage, and cost. In addition, we compared pairwise testing with randomly generated tests of the same size as the pairwise tests. In order to automatically create test suites for PLC software, a previously created tool called Combinatorial Test Tool (CTT) was extended to support pairwise testing using the IPOG algorithm. Once test suites were created using CTT, they were executed on real industrial programs. Fault detection was measured using mutation analysis. The results of this thesis show that manual tests achieved better fault detection (8% better mutation score on average) than tests generated using pairwise testing. Even though pairwise testing performed worse than manual testing in terms of fault detection, it achieved better fault detection on average than random tests of the same size. In addition, manual tests achieved on average 97.29% code coverage, compared to 93.95% for pairwise testing and 84.79% for random testing. Looking closely at all tests, manual testing performed as well as pairwise testing in terms of achieved code coverage. Finally, the number of tests for manual testing was lower (12.98 tests on average) than for pairwise and random testing (21.20 tests on average). Interestingly, for the majority of the programs pairwise testing resulted in fewer tests than manual testing.
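The idea behind pairwise generation is that every pair of parameter values should appear together in at least one test. The thesis uses the IPOG algorithm inside CTT; as a rough illustration of the goal (not of IPOG itself, which grows the suite one parameter at a time), here is a simple greedy sketch with made-up PLC-style parameter names:

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    """Greedily build a suite covering every pair of parameter values.

    `parameters` maps parameter names to lists of possible values. This is
    an illustrative greedy sketch, not the IPOG algorithm used by CTT."""
    names = list(parameters)
    # Every pair of values (from two different parameters) that must
    # appear together in at least one test.
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in parameters[a] for vb in parameters[b]}
    suite = []
    while uncovered:
        best, best_covered = None, set()
        # Exhaustive scan is fine for small models; IPOG avoids this.
        for combo in product(*(parameters[n] for n in names)):
            test = dict(zip(names, combo))
            covered = {p for p in uncovered
                       if all(test[n] == v for n, v in p)}
            if len(covered) > len(best_covered):
                best, best_covered = test, covered
        suite.append(best)
        uncovered -= best_covered
    return suite

# Hypothetical PLC input model: 2*2*2 = 8 exhaustive tests,
# but pairwise coverage needs far fewer.
suite = pairwise_suite({"input": [0, 1], "mode": ["auto", "man"], "timer": [5, 10]})
```

This mirrors the size advantage reported above: pairwise suites are typically much smaller than exhaustive ones while still exercising all value pairs.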
12

Seleção de casos de teste para sistemas de processamento de imagens utilizando conceitos de CBIR / Test Case Selection For Image Processing Systems Using CBIR Concepts.

Narciso, Everton Note 29 October 2013 (has links)
Image processing systems play a key role in emulating human vision, because much of the information that humans capture from the real world arrives through images. Developing such systems is a complex task that requires rigorous testing to ensure their quality and reliability. In this scenario, test case selection is crucial because it helps to eliminate redundant and unnecessary test data while trying to maintain high rates of error detection. In the literature there are several approaches to test case selection focused on systems with alphanumeric inputs and outputs, but selection for complex systems (e.g. image processing) remains little explored. Aiming to contribute to this research field, this work presents a new method entitled Tcs&CbIR, which selects and retrieves a subset of images from a large test suite. Tests conducted with two image processing programs show that the new approach can outperform random selection: in the evaluation context presented, the number of test cases required to reveal the presence of errors was reduced by up to 87%. The results also show the potential of using CBIR for information abstraction, the importance of defining suitable feature extractors, and the influence that similarity functions can exert on test case selection.
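The abstract names the three CBIR ingredients (feature extraction, a similarity function, and a selection step) without specifying them. As a hedged sketch of how those pieces fit together, the following uses a hypothetical histogram feature extractor and farthest-point selection as a stand-in for the (unspecified) Tcs&CbIR ranking:

```python
import math

def histogram_features(image, bins=4):
    """Hypothetical feature extractor: a normalized intensity histogram.

    `image` is a flat list of pixel intensities in the range [0, 256)."""
    hist = [0.0] * bins
    for px in image:
        hist[px * bins // 256] += 1
    total = len(image) or 1
    return [h / total for h in hist]

def euclidean(u, v):
    """One possible similarity (here: distance) function between features."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def select_diverse(images, k):
    """Keep the k images whose features are most dissimilar, eliminating
    redundant test inputs (an illustrative selection strategy, not the
    thesis' exact method)."""
    feats = [histogram_features(img) for img in images]
    chosen = [0]  # seed with the first image
    while len(chosen) < k:
        # Pick the image farthest from everything already chosen.
        best = max((i for i in range(len(images)) if i not in chosen),
                   key=lambda i: min(euclidean(feats[i], feats[j])
                                     for j in chosen))
        chosen.append(best)
    return chosen
```

With two near-identical dark images and one bright image, selecting two test cases keeps one of each: the redundant dark duplicate is dropped, which is exactly the behaviour the abstract attributes to CBIR-based selection.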
14

Using Autonomous Agents for Software Testing Based on JADE

Nyussupov, Adlet January 2019 (has links)
This thesis describes the development of a multiagent testing application (MTA) based on an agent approach for addressing challenges in the regression testing domain: reducing the complexity of testing, optimizing time consumption, increasing efficiency, and automating regression testing. These challenges, related to effectiveness and cost, can be represented as measures of achieved code coverage and the number of test cases created. A multiagent approach is proposed because it allows the implementation of autonomous behaviour and optimizes data processing in a heterogeneous environment. In addition, the agent-based approach provides flexible design methods for building multitask applications and conducting parallel task execution. However, these advantages of the agent-based approach need to be investigated in the regression testing domain under realistic scenarios. Therefore, a hypothesis was formulated to investigate the efficiency of the MTA approach, using an experiment as the main research method. The thesis includes a comparative analysis between the MTA and well-known test case generation tools (i.e. EvoSuite and JUnitTools) to identify differences in efficiency and code coverage achieved. The comparison showed advantages for the MTA in the regression testing context, due to the optimal level of code coverage and number of test cases. The outcome of the thesis moves toward solving the aforementioned problems in the regression testing domain and shows some advantages of using a multiagent approach in a regression testing context.
15

A Bayesian Framework for Software Regression Testing

Mir arabbaygi, Siavash January 2008 (has links)
Software maintenance reportedly accounts for much of the total cost associated with developing software. These costs occur because modifying software is a highly error-prone task. Changing software to correct faults or add new functionality can cause existing functionality to regress, introducing new faults. To avoid such defects, one can re-test software after modifications, a task commonly known as regression testing. Regression testing typically involves the re-execution of test cases developed for previous versions. Re-running all existing test cases, however, is often costly and sometimes even infeasible due to time and resource constraints. Re-running test cases that do not exercise changed or change-impacted parts of the program carries extra cost and gives no benefit. The research community has thus sought ways to optimize regression testing by lowering the cost of test re-execution while preserving its effectiveness. To this end, researchers have proposed selecting a subset of test cases according to a variety of criteria (test case selection) and reordering test cases for execution to maximize a score function (test case prioritization). This dissertation presents a novel framework for optimizing regression testing activities, based on a probabilistic view of regression testing. The proposed framework is built around predicting the probability that each test case finds faults in the regression testing phase, and optimizing the test suites accordingly. To predict such probabilities, we model regression testing using a Bayesian Network (BN), a powerful probabilistic tool for modeling uncertainty in systems. We build this model using information measured directly from the software system. Our proposed framework builds upon the existing research in this area in many ways. 
First, our framework incorporates different kinds of information extracted from the software into one model, which helps reduce uncertainty by using more of the available information and enables better modeling of the system. Moreover, our framework provides flexibility by enabling a choice of which sources of information to use. Research in software measurement has shown that dealing with different systems requires different techniques, and hence requires such flexibility. Using the proposed framework, engineers can customize their regression testing techniques to fit the characteristics of their systems, using the measurements most appropriate to their environment. We evaluate the performance of our proposed BN-based framework empirically. Although the framework can help with both test case selection and prioritization, we propose using it primarily as a prioritization technique, and we therefore compare it against other prioritization techniques from the literature. Our empirical evaluation examines a variety of objects and fault types. The results show that the proposed framework can outperform other techniques in some cases and performs comparably in the others. In sum, this thesis introduces a novel Bayesian framework for optimizing regression testing and shows that it can help testers improve the cost-effectiveness of their regression testing tasks.
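The core prediction task is estimating, for each test case, the probability that it finds a fault, and ordering the suite accordingly. A full Bayesian network is beyond a short sketch, but the essence — combining several sources of evidence into one posterior via Bayes' rule — can be shown in odds form, assuming conditionally independent evidence (a drastic simplification of the dissertation's BN; the indicator values below are purely illustrative):

```python
def fault_probability(prior, likelihood_ratios):
    """Combine a prior fault probability (0 < prior < 1) with independent
    pieces of evidence via Bayes' rule in odds form:
        posterior_odds = prior_odds * product(LR_i)."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

def prioritize(test_cases):
    """Order test cases by descending estimated fault-detection probability.

    `test_cases` maps a test name to (prior, [likelihood ratios]); e.g. a
    ratio of 5.0 might encode 'this test covers recently changed code'."""
    scored = {name: fault_probability(p, lrs)
              for name, (p, lrs) in test_cases.items()}
    return sorted(scored, key=scored.get, reverse=True)
```

For example, three tests with the same prior but different evidence — `prioritize({"t1": (0.1, [5.0]), "t2": (0.1, [1.0]), "t3": (0.1, [0.2])})` — come back ordered `t1, t2, t3`: the test whose evidence most raises its fault odds runs first.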
16

A Requirements-Based Partition Testing Framework Using Particle Swarm Optimization Technique

Ganjali, Afshar January 2008 (has links)
Modern society is increasingly dependent on the quality of software systems. Software failure can cause severe consequences, including loss of human life. There are various ways of fault prevention and detection that can be deployed in different stages of software development. Testing is the most widely used approach for ensuring software quality. Requirements-Based Testing and Partition Testing are two of the most widely used approaches for testing software systems. Although both of these techniques are mature and are addressed widely in the literature, and despite the general agreement on both of these key techniques of functional testing, a combination of them lacks a systematic approach. In this thesis, we propose a framework along with a procedural process for testing a system using Requirements-Based Partition Testing (RBPT). This framework helps testers to start from the requirements documents and follow a straightforward, step-by-step process to generate the required test cases without losing any required data. Although many steps of the process are manual, the framework can be used as a foundation for automating the whole test case generation process. Another issue in testing a software product is the test case selection problem. Choosing appropriate test cases is an essential part of software testing that can lead to significant improvements in efficiency, as well as reduced costs of combinatorial testing. Unfortunately, the problem of finding minimum-size test sets is NP-complete in general. Therefore, artificial intelligence-based search algorithms have been widely used for generating near-optimal solutions. In this thesis, we also propose a novel technique for test case generation using Particle Swarm Optimization (PSO), an effective optimization tool which has emerged in the last decade. Empirical studies show that in some domains particle swarm optimization performs as well as or better than other techniques.
At the same time, a particle swarm algorithm is much simpler, easier to implement, and has just a few parameters that the user needs to adjust. These properties make PSO an ideal technique for test case generation. In order to have a fair comparison of our newly proposed algorithm against existing techniques, we have designed and implemented a framework for automatic evaluation of these methods. Through experiments using our evaluation framework, we illustrate how this new test case generation technique can outperform other existing methodologies.
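The simplicity the abstract claims for PSO — a handful of update rules and only a few tunable parameters — is easy to see in code. Below is a generic minimal PSO sketch, not the thesis' exact configuration (the inertia `w` and acceleration coefficients `c1`, `c2` are common defaults, and the branch-distance fitness at the end is a hypothetical example):

```python
import random

def pso(fitness, dim, bounds, swarm=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimize `fitness` over a `dim`-dimensional box with particle swarm
    optimization. Returns (best position, best fitness)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]              # each particle's best position
    pbest_f = [fitness(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity = inertia + pull toward own best + pull toward swarm best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Hypothetical test-generation fitness: inputs satisfying x[0] == 2*x[1]
# reach a target branch; the distance to equality drives the search.
best, best_f = pso(lambda x: abs(x[0] - 2 * x[1]), dim=2, bounds=(-10, 10))
```

Swapping in a different fitness function retargets the search, which is why search-based techniques like this adapt readily to test case generation.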
19

Automated Selective Test Case Generation Methods for Real-Time Systems

Nilsson, Robert January 2000 (has links)
<p>This work aims to investigate the state of the art in test case generation for real-time systems, to analyze existing methods, and to propose future research directions in this area. We believe that a combination of design for testability, automation, and sensible test case selection is the key to verifying modern real-time systems. Existing methods for system-level test case generation for real-time systems are presented, classified, and evaluated against a real-time system model. A distinguishing property of real-time systems is that timeliness is crucial to their correctness. Our system model of the testing target adopts the event-triggered design paradigm for maximum flexibility. This paradigm results in target systems that are harder to test than their time-triggered counterparts, but the model improves testability by adopting previously proposed constraints on application behavior. This work investigates how time constraints can be tested using current methods and reveals problems relating to test case generation for verifying such constraints. Further, approaches for automating the test case generation process are investigated, paying special attention to methods aimed at real-time systems. We also note a need for special test-coverage criteria for concurrent and real-time systems to select test cases that increase confidence in such systems. We analyze some existing criteria from the perspective of our target model. The results of this dissertation are a classification of methods for generating test cases for real-time systems, an identification of contradictory terminology, and an increased body of knowledge about problems and open issues in this area. We conclude that the test case generation process often neglects the internal behavior of the tested system and the properties of its execution environment, as well as the effects of these on timeliness.
Further, we note that most of the surveyed articles on testing methods incorporate automatic test-case generation in some form, but few consider the issues of automated execution of test cases. Four high-level future research directions are proposed that aim to remedy one or more of the identified problems.</p>
20

Self-learning algorithms applied in Continuous Integration system

Tummala, Akhil January 2018 (has links)
Context: Continuous Integration (CI) is a software development practice in which developers integrate code into a shared repository, after which an automated system verifies the code and runs automated test cases to find integration errors. This research uses Ericsson's CI system. The tests performed in CI are regression tests. Based on their time scope, the regression test suites are categorized into hourly and daily test suites. The hourly test is performed on all the commits made in a day, whereas the daily test is performed at night on the latest build that passed the hourly test. The hourly and daily test suites are static, and the hourly test suite is a subset of the daily test suite. Since the daily test is performed at the end of the day, the results are obtained only on the next day, delaying the feedback to developers regarding integration errors. To mitigate this problem, this research investigates the possibility of creating a learning model and integrating it into the CI system, which can then create a dynamic hourly test suite for faster feedback. Objectives: This research aims to find a suitable machine learning algorithm for the CI system and investigate the feasibility of creating self-learning test machinery. This goal is achieved by examining the CI system and finding out what type of data is required for creating a learning model for prioritizing test cases. Once the necessary data is obtained, the selected algorithms are evaluated to find the most suitable learning algorithm, and it is investigated whether the created learning model can be integrated into the CI workflow. Methods: In this research, an experiment is conducted to evaluate the learning algorithms. The data for this experiment is provided by Ericsson AB, Gothenburg, and consists of the daily test information and the test case results.
The algorithms evaluated in this experiment are naïve Bayes, support vector machines, and decision trees. The evaluation is done by performing leave-one-out cross-validation, and each algorithm's performance is measured by its prediction accuracy. After obtaining the accuracies, the algorithms are compared to find the most suitable machine learning algorithm for the CI system. Results: Based on the experiment results, support vector machines outperformed the naïve Bayes and decision tree algorithms. However, due to challenges in the current CI system, the created learning model is not feasible to integrate into CI. The primary challenge is that mapping a test case failure to its respective commit is not possible (one cannot find which commit caused the test case to fail), because the daily test is performed on the latest build, which combines all the commits made that day. Another challenge is limited data storage, which leads to problems such as the curse of dimensionality and class imbalance. Conclusions: This research identifies a suitable learning algorithm for creating self-learning test machinery, and also identifies the challenges of integrating such a model into CI. Based on the results obtained from the experiment, support vector machines achieve higher prediction accuracy in test case result classification than naïve Bayes and decision trees.
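Leave-one-out cross-validation, the evaluation method used above, trains on all samples except one, predicts the held-out sample, and averages correctness over every split. A self-contained sketch with a 1-nearest-neighbour stand-in for the evaluated learners (the feature names and toy data below are invented for illustration, not Ericsson's):

```python
def loocv_accuracy(classify, X, y):
    """Leave-one-out cross-validation: hold out each sample in turn,
    train on the rest, and average prediction correctness."""
    hits = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        hits += classify(train_X, train_y, X[i]) == y[i]
    return hits / len(X)

def nearest_neighbour(train_X, train_y, x):
    """1-NN stand-in for the evaluated learners (SVM, naive Bayes, trees)."""
    j = min(range(len(train_X)),
            key=lambda k: sum((a - b) ** 2 for a, b in zip(train_X[k], x)))
    return train_y[j]

def majority(train_X, train_y, x):
    """Baseline: always predict the most common training label."""
    return max(set(train_y), key=train_y.count)

# Hypothetical test-case features: (lines changed near the test,
# past failure count) -> expected verdict.
X = [[0, 0], [1, 0], [0, 1], [5, 3], [6, 4], [7, 3]]
y = ["pass", "pass", "pass", "fail", "fail", "fail"]
```

Comparing `loocv_accuracy(nearest_neighbour, X, y)` against `loocv_accuracy(majority, X, y)` is the same shape of comparison the experiment makes between SVM, naïve Bayes, and decision trees, just at toy scale.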
