11

Element and Event-Based Test Suite Reduction for Android Test Suites Generated by Reinforcement Learning

Alenzi, Abdullah Sawdi M. 07 1900
Automated test generation for Android apps with reinforcement learning algorithms often produces test suites with redundant coverage. We focus on minimizing test suites that have already been generated with the state–action–reward–state–action (SARSA) algorithm. In this dissertation, we hypothesize that there is room for improvement by introducing novel hybrid approaches that combine SARSA-generated test suites with greedy reduction algorithms, following the principle of the Head-up Guidance System (HGS™) approach. We also report an empirical study on Android test suites that demonstrates the value of these new hybrid methods. Our approaches post-process test suites by applying greedy reduction algorithms. To reduce Android test suites, we use several coverage criteria, including an event-based criterion (EBC), an element-based criterion (ELBC), and combinatorial-based sequence criteria (CBSC) that follow the principle of combinatorial testing to generate sequences of events and elements. The proposed criteria effectively reduced the test suites generated by SARSA while maintaining a high level of code coverage. These findings suggest that test suite reduction using these criteria is particularly well suited for SARSA-generated test suites of Android apps.
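As an illustration of the kind of greedy, coverage-based reduction described above, the sketch below repeatedly keeps the test case that covers the most not-yet-covered requirements (events or elements). The class, test names, and event names are hypothetical and not taken from the dissertation.

    import java.util.*;

    // Minimal greedy test-suite reduction sketch: keep picking the test case
    // that adds the most uncovered requirements until everything coverable is covered.
    final class GreedyReducer {
        static List<String> reduce(Map<String, Set<String>> coverage) {
            Set<String> uncovered = new HashSet<>();
            coverage.values().forEach(uncovered::addAll);
            List<String> reduced = new ArrayList<>();
            while (!uncovered.isEmpty()) {
                String best = null;
                int bestGain = 0;
                for (Map.Entry<String, Set<String>> e : coverage.entrySet()) {
                    Set<String> gain = new HashSet<>(e.getValue());
                    gain.retainAll(uncovered);
                    if (gain.size() > bestGain) {
                        bestGain = gain.size();
                        best = e.getKey();
                    }
                }
                if (best == null) {
                    break; // no remaining test adds coverage
                }
                reduced.add(best);
                uncovered.removeAll(coverage.get(best));
            }
            return reduced;
        }

        public static void main(String[] args) {
            // Hypothetical per-test coverage of GUI events.
            Map<String, Set<String>> cov = Map.of(
                    "t1", Set.of("tapLogin", "tapMenu"),
                    "t2", Set.of("tapMenu"),
                    "t3", Set.of("tapLogin", "scrollList", "tapMenu"));
            System.out.println(reduce(cov)); // [t3]: t1 and t2 are redundant here
        }
    }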
12

Efektivní generátor náhodných čísel v nízko-výkonových zařízení / Effective random number generator for limited devices

Michálek, Tomáš January 2017
This thesis addresses the problem of generating random numbers on low-power devices. The author describes possible generation approaches and implements selected (pseudo)random number generators on the MSP430F5438A. Four generators were added by enhancing one of them, and a new generator was created that exploits temperature changes in the device's surroundings. For each generator, test sequences were generated and evaluated with Dieharder, STS-NIST, and the Visual Test. The outcome of the thesis is a working implementation of the generators, their evaluation by statistical methods, and a comparison of the generators with each other.
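As a hedged illustration of one standard way to turn raw physical noise (for instance, the least significant bits of successive temperature readings) into less biased output bits, a von Neumann corrector is sketched below; the thesis does not necessarily use this exact construction, and the input bits here are invented.

    import java.util.ArrayList;
    import java.util.List;

    // Von Neumann corrector: consumes a biased-but-independent raw bit stream and
    // emits unbiased bits. Pairs 01 -> 0 and 10 -> 1; pairs 00 and 11 are discarded.
    final class VonNeumannExtractor {
        static List<Integer> extract(int[] rawBits) {
            List<Integer> out = new ArrayList<>();
            for (int i = 0; i + 1 < rawBits.length; i += 2) {
                if (rawBits[i] != rawBits[i + 1]) {
                    out.add(rawBits[i]); // first bit of an unequal pair is the output
                }
            }
            return out;
        }

        public static void main(String[] args) {
            // Hypothetical raw bits, e.g. LSBs of temperature samples.
            int[] raw = {1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0};
            System.out.println(extract(raw)); // [1, 0, 1]
        }
    }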
13

Reinforcement Learning-Based Test Case Generation with Test Suite Prioritization for Android Application Testing

Khan, Md Khorrom 07 1900
This dissertation introduces a hybrid strategy for automated testing of Android applications that combines reinforcement learning and test suite prioritization. These approaches aim to improve the effectiveness of the testing process by employing reinforcement learning algorithms, namely Q-learning and SARSA (State-Action-Reward-State-Action), for automated test case generation. The studies provide compelling evidence that reinforcement learning techniques hold great potential for generating test cases that consistently achieve high code coverage; however, the generated test cases may not always be in the optimal order. In this study, novel test case prioritization methods are developed that leverage pairwise event-interaction coverage, application state coverage, and application activity coverage to optimize the rate of code coverage specifically for SARSA-generated test cases. Additionally, test suite prioritization techniques based on UI element coverage, test case cost, and test case complexity are introduced to further improve the ordering of SARSA-generated test cases. Empirical investigations demonstrate that applying the proposed test suite prioritization techniques to the test suites generated by the reinforcement learning algorithm SARSA improved the rate of code coverage over the original and random orderings of test cases.
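A minimal sketch of the widely used "additional greedy" prioritization strategy, which is one way coverage information of this kind can be turned into a test ordering; the data structures, test names, and interaction labels are illustrative assumptions, not the dissertation's implementation.

    import java.util.*;

    // Additional-greedy prioritization: each next test is the one that adds the most
    // coverage beyond what the already-ordered tests achieve; once nothing new can be
    // added, the remaining tests follow in arbitrary order.
    final class AdditionalGreedyPrioritizer {
        static List<String> prioritize(Map<String, Set<String>> coverage) {
            List<String> order = new ArrayList<>();
            Set<String> covered = new HashSet<>();
            Set<String> remaining = new HashSet<>(coverage.keySet());
            while (!remaining.isEmpty()) {
                String best = Collections.max(remaining, Comparator.comparingInt((String t) -> {
                    Set<String> gain = new HashSet<>(coverage.get(t));
                    gain.removeAll(covered);
                    return gain.size();
                }));
                order.add(best);
                covered.addAll(coverage.get(best));
                remaining.remove(best);
            }
            return order;
        }

        public static void main(String[] args) {
            // Hypothetical per-test coverage of pairwise event interactions.
            Map<String, Set<String>> cov = Map.of(
                    "t1", Set.of("login->menu"),
                    "t2", Set.of("login->menu", "menu->settings"),
                    "t3", Set.of("menu->settings", "settings->back"));
            System.out.println(prioritize(cov)); // e.g., [t2, t3, t1]
        }
    }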
14

Change-effects analysis for effective testing and validation of evolving software

Santelices, Raul A. 17 May 2012
The constant modification of software during its life cycle poses many challenges for developers and testers because changes might not behave as expected or may introduce erroneous side effects. For those reasons, it is of critical importance to analyze, test, and validate software every time it changes. The most common method for validating modified software is regression testing, which identifies differences in the behavior of software caused by changes and determines the correctness of those differences. Most research to date has focused on the efficiency of regression testing by selecting and prioritizing existing test cases affected by changes. However, little attention has been given to determining whether the test suite adequately tests the effects of changes (i.e., behavior differences in the modified software) and which of those effects are missed during testing. In practice, it is necessary to augment the test suite to exercise the untested effects. The thesis of this research is that the effects of changes on software behavior can be computed with enough precision to help testers analyze the consequences of changes and augment test suites effectively. To demonstrate this thesis, this dissertation uses novel insights to develop a fundamental understanding of how changes affect the behavior of software. Based on these foundations, the dissertation defines and studies new techniques that detect these effects in cost-effective ways. These techniques support test-suite augmentation by (1) identifying the effects of individual changes that should be tested, (2) identifying the combined effects of multiple changes that occur during testing, and (3) optimizing the computation of these effects.
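As a deliberately coarse illustration of the general idea, the sketch below selects the tests whose coverage intersects a set of changed program elements; the dissertation's change-effects analysis is far more precise than this intersection check, and the element and test names are invented.

    import java.util.*;

    // Simplified change-based test selection: a test is considered affected if the
    // elements it covers intersect the set of changed elements. A precise change-effects
    // analysis reasons about dependences and behavior differences instead.
    final class AffectedTestSelector {
        static Set<String> affectedTests(Map<String, Set<String>> testCoverage,
                                         Set<String> changedElements) {
            Set<String> affected = new TreeSet<>();
            for (Map.Entry<String, Set<String>> e : testCoverage.entrySet()) {
                if (!Collections.disjoint(e.getValue(), changedElements)) {
                    affected.add(e.getKey());
                }
            }
            return affected;
        }

        public static void main(String[] args) {
            Map<String, Set<String>> cov = Map.of(
                    "testCheckout", Set.of("Cart.total", "Cart.add"),
                    "testSearch", Set.of("Catalog.find"));
            System.out.println(affectedTests(cov, Set.of("Cart.total"))); // [testCheckout]
        }
    }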
15

Capturing JUnit Behavior into Static Programs : Static Testing Framework

Siddiqui, Asher January 2010
This thesis evaluates the benefits achievable from a static testing framework by analyzing and transforming the JUnit 3.8 source code and executing the transformed code statically. A static structure allows the code to be analyzed statically during the creation and execution of test cases. Work of this kind is by now well established in static analysis and testing research, and it is particularly valuable for understanding the reflective behavior of the JUnit 3.8 framework.

JUnit 3.8 uses the Java Reflection API to invoke its core functionality (test case creation and execution) dynamically. The Reflection API allows developers to access and modify the structure and behavior of a program: it offers a flexible way to create test cases and control their execution, to encapsulate test cases in a single object representing the test suite, and to associate each test method with a test object. However, while reflection is a powerful tool, it limits static analysis; static analysis tools often cannot work effectively in its presence.

To avoid reflection, the Static Testing Framework (STF) provides a static platform that analyzes the JUnit 3.8 source code and transforms it into a non-reflective version emulating the dynamic behavior of JUnit 3.8. The transformed code replaces reflection with static code and does, within the STF execution environment, the same things that reflection does in JUnit 3.8; it also enables the STF execution environment to run test methods statically. To measure its effectiveness, the implemented tool was evaluated on several Java projects and the results were compared with those of JUnit 3.8. The evaluation shows that STF can be used for the static creation and execution of test cases up to JUnit 3.8, provided that test cases are not created within a test class and the real definition of constructors is not required. These limitations can be addressed in future work by introducing a middle layer that executes test fixtures for each test method and by generating test classes according to the real definition of constructors.
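The contrast between the two styles can be sketched as follows. This is an illustrative example in the spirit of JUnit 3.8's "test*" naming convention, not code taken from the framework or from STF; the class and method names are hypothetical.

    import java.lang.reflect.Method;

    // Reflective vs. static test invocation: the reflective runner discovers and invokes
    // every public no-argument method whose name starts with "test", while the static
    // runner calls the same methods directly, which a de-reflection transformation
    // would roughly produce.
    final class ReflectionVsStatic {
        public static class SampleTest {
            public void testAddition() { System.out.println("addition ok"); }
            public void testConcat()   { System.out.println("concat ok"); }
        }

        static void runReflectively(Object testInstance) throws Exception {
            for (Method m : testInstance.getClass().getMethods()) {
                if (m.getName().startsWith("test") && m.getParameterCount() == 0) {
                    m.invoke(testInstance); // resolved at run time via reflection
                }
            }
        }

        static void runStatically(SampleTest t) {
            t.testAddition(); // resolved at compile time, visible to static analysis
            t.testConcat();
        }

        public static void main(String[] args) throws Exception {
            runReflectively(new SampleTest());
            runStatically(new SampleTest());
        }
    }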
17

Enabling Java Software Developers to use ATCG tools by demonstrating the tools that exist today, their usefulness, and effectiveness

QAZIZADA, RASHED January 2021
The software industry is expanding at a rapid rate. To keep up with fast-growing and ever-changing technologies, it has become necessary to produce high-quality software in a short time and at an affordable cost. This research aims to help Java developers adopt Automated Test Case Generation (ATCG) tools by presenting the tools that exist today, their usefulness, and their effectiveness. The main focus is on automated testing tools for the Java ecosystem, which can help developers reach their goals faster and build better software. The discussion covers the availability, features, prerequisites, effectiveness, and limitations of these tools. Among them, the most widely used are EvoSuite, JUnit, TestNG, and Selenium, each with its own advantages and purpose. The selected ATCG tools are compared to give Java developers a clear picture, answer the research questions, and show the strengths and limitations of each tool. The results show that no single tool can perform every kind of testing on its own; the right choice depends on what the developer aims to achieve. One tool may be good at generating unit test cases for Java classes, while another is better suited to testing code security through penetration testing. Java developers may therefore choose one or more tools based on their requirements. The study also revealed notable findings regarding ATCG tools that ought to be explored in future work.
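For readers unfamiliar with the artifacts these tools execute or produce, a minimal hand-written JUnit 5 (Jupiter) test is shown below; it only illustrates the general shape of a unit test, and the class under test is a hypothetical stand-in rather than output from any of the tools mentioned above.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // A minimal JUnit 5 test; generation tools aim to produce tests of this general
    // shape automatically, typically with additional scaffolding around them.
    class CalculatorTest {
        static int add(int a, int b) { return a + b; } // stand-in class under test

        @Test
        void addReturnsSumOfOperands() {
            assertEquals(4, add(2, 2));
        }
    }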
18

Model Coverage vs System-under-test Coverage in Model-based testing : Using Edge-pair coverage, Edge coverage, Node coverage and Mutation analysis / Modelltäckning vs täckning av system-under-test inom modellbaserad testning : Med användning av kantparstäckning, kant-täckning, nodtäckning och mutationsanalys

Rezkalla, George January 2021
Model-based testing (MBT) is a black-box software testing technique that focuses on the specification of the system-under-test (SUT) and/or its environment. It uses models to automatically generate a large number of tests. To the best of our knowledge, no study has investigated the correlation of model coverage with SUT coverage using more advanced coverage criteria (such as edge-pair coverage), or the correlation of coverage (at model level and at SUT level) with test suite effectiveness using non-adequate test suites in the context of MBT, despite the prominence of non-adequate test suites in industry. To carry out the investigation, we extend an existing open-source MBT tool called Modbat to measure edge-pair coverage at model level and implement a new tool called PaCovForJbc to measure edge-pair coverage, edge coverage, and node coverage at SUT level. Finally, we perform an experiment applying these tools to three projects: the "ArrayList" and "LinkedList" classes of the Java standard library, and "Apache ZooKeeper". Overall, the results suggest the following: edge and edge-pair coverage at model level often have a moderate to high correlation with the same type of coverage at SUT level, while the link between model and SUT for node coverage is weaker. Moreover, coverage criteria at SUT level often have a moderate to high correlation with test suite effectiveness, and a coverage criterion at SUT level has a slightly higher correlation with test suite effectiveness than the same type of coverage at model level. Regarding coverage at model level, edge and edge-pair coverage have a slightly higher correlation with test suite effectiveness than node coverage. Note that these observations should be taken with caution, because results vary depending on the project and/or coverage criterion under investigation.
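To make the coverage criteria concrete, the sketch below derives the node, edge, and edge-pair items exercised by one test from its execution trace over model states; the trace and state names are illustrative assumptions, not output from Modbat or PaCovForJbc. Coverage percentages would divide these counts by the totals defined by the model or the SUT.

    import java.util.*;

    // Derives covered nodes, edges, and edge-pairs from a single execution trace,
    // where the trace is the sequence of states (or branches) visited by one test.
    final class CoverageFromTrace {
        static Set<String> edges(List<String> trace) {
            Set<String> out = new LinkedHashSet<>();
            for (int i = 0; i + 1 < trace.size(); i++) {
                out.add(trace.get(i) + "->" + trace.get(i + 1));
            }
            return out;
        }

        static Set<String> edgePairs(List<String> trace) {
            Set<String> out = new LinkedHashSet<>();
            for (int i = 0; i + 2 < trace.size(); i++) {
                out.add(trace.get(i) + "->" + trace.get(i + 1) + "->" + trace.get(i + 2));
            }
            return out;
        }

        public static void main(String[] args) {
            List<String> trace = List.of("init", "connect", "write", "close");
            System.out.println(new LinkedHashSet<>(trace)); // nodes covered: 4
            System.out.println(edges(trace));               // edges covered: 3
            System.out.println(edgePairs(trace));           // edge-pairs covered: 2
        }
    }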
