41

Mathematical Optimization for the Test Case Prioritization Problem

Felding, Eric January 2022 (has links)
Regression testing is the process of testing software to make sure that changes to the software do not change its functionality. With growing test suites, the need to prioritize arises. This thesis explores how to weigh factors such as the number of fails detected, days since the latest test case execution, and coverage. The prioritization is done over multiple test systems, software branches, and many test sessions where the software can change in between. With data provided by an industrial partner, we evaluate different ways to prioritize. The developed mathematical model could not cope with the size of the problem, whereas a simulated annealing approach based on said model proved highly successful. We also found that prioritizing test cases related to recent code changes was effective.
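The record above mentions a simulated annealing approach built on a mathematical prioritization model. A minimal sketch of that general idea (not the thesis's actual model) follows; the factor weights, scoring function, and cooling schedule are illustrative assumptions.

```python
import math
import random

# Illustrative test-case records: (fails detected, days since last run, coverage).
test_cases = {
    "tc1": (3, 10, 0.40),
    "tc2": (0, 30, 0.75),
    "tc3": (5,  2, 0.20),
    "tc4": (1, 15, 0.60),
}

# Assumed weights for the three factors; the thesis weighs similar factors,
# but its exact objective function is not reproduced here.
W_FAILS, W_AGE, W_COV = 0.5, 0.2, 0.3

def priority(tc):
    fails, age, cov = test_cases[tc]
    return W_FAILS * fails + W_AGE * age / 30 + W_COV * cov

def score(order):
    # Reward placing high-priority test cases early (a simple APFD-like proxy).
    n = len(order)
    return sum(priority(tc) * (n - i) for i, tc in enumerate(order))

def simulated_annealing(order, temp=1.0, cooling=0.995, steps=5000):
    current, best = list(order), list(order)
    for _ in range(steps):
        i, j = random.sample(range(len(current)), 2)
        candidate = list(current)
        candidate[i], candidate[j] = candidate[j], candidate[i]  # swap two positions
        delta = score(candidate) - score(current)
        # Accept improvements always, worse orders with a temperature-dependent probability.
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = candidate
            if score(current) > score(best):
                best = list(current)
        temp *= cooling
    return best

print(simulated_annealing(list(test_cases)))
```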
42

Proof-of-concept of Model-based testing based on an UML-model of a water-level measurement system

Alshekhly, Zoubida, Gill, Namra January 2020 (has links)
Software testing is a very important phase in software development as it minimizes risks in a software system; however, it consumes time and can be very expensive. With automatic test case generation, time consumption and cost can be reduced. Model-based testing is a method to test a software system with a model of the system's behaviour. Automatic test case generation is often considered a favorable support in model-based testing. In this work, the concept of model-based testing is explored along with testing the embedded part of a water-level measurement system (WLM) to investigate the efficiency of model-based testing on a software system. To this end, a model-based testing tool, MoMut::UML, is used to generate the test cases on the UML model of the WLM system, which is built in a UML modeling environment, Eclipse-Papyrus. MoMut::UML implements a special type of model-based testing technique, model-based mutation testing, which injects faults into the UML model and generates test data on the fault-based model. By this, the behaviour of the system under test, here only the UML model of the water-level measurement system, is tested.
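MoMut::UML itself is not reproduced here, but the idea of model-based mutation testing described in the abstract can be sketched on a toy state machine: a mutant is created by corrupting one transition, and a test case is any input sequence on which the mutant's outputs diverge from the original's. The controller model and the injected fault below are invented for illustration.

```python
from itertools import product

# A toy water-level controller as a Mealy machine: state -> input -> (next state, output).
ORIGINAL = {
    "idle":    {"rise": ("filling", "pump_on"),  "fall": ("idle", "noop")},
    "filling": {"rise": ("full",    "pump_off"), "fall": ("idle", "pump_off")},
    "full":    {"rise": ("full",    "alarm"),    "fall": ("filling", "pump_on")},
}

# Mutant model: one transition output is corrupted (the injected fault).
MUTANT = {s: dict(t) for s, t in ORIGINAL.items()}
MUTANT["full"]["rise"] = ("full", "noop")  # alarm suppressed

def run(machine, inputs, start="idle"):
    state, outputs = start, []
    for i in inputs:
        state, out = machine[state][i]
        outputs.append(out)
    return outputs

def killing_test(max_len=4):
    # Search for the shortest input sequence whose outputs distinguish mutant from original.
    for length in range(1, max_len + 1):
        for seq in product(["rise", "fall"], repeat=length):
            if run(ORIGINAL, seq) != run(MUTANT, seq):
                return list(seq), run(ORIGINAL, seq)
    return None

print(killing_test())  # e.g. (['rise', 'rise', 'rise'], ['pump_on', 'pump_off', 'alarm'])
```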
43

Empirical Comparison Between Conventional and AI-based Automated Unit Test Generation Tools in Java

Gkikopouli, Marios, Bataa, Batjigdrel January 2023 (has links)
Unit testing plays a crucial role in ensuring the quality and reliability of software systems. However, manual testing can often be a slow and time-consuming process. With current advancements in artificial intelligence (AI), new tools have emerged for automated unit testing to address this issue. But how do these new AI tools compare to conventional automated unit test generation tools? To answer this question, we compared two state-of-the-art conventional unit test tools (EVOSUITE and RANDOOP) with the sole commercially available AI-based unit test tool (DIFFBLUE COVER) for Java. We tested them on 10 sample classes from 3 real-life projects provided by the Defects4J dataset to evaluate their performance regarding code coverage, mutation score, and fault detection. The results showed that EVOSUITE achieved the highest code coverage, averaging 89%, while RANDOOP and DIFFBLUE COVER achieved similar results, averaging 63%. In terms of mutation score, DIFFBLUE COVER had the lowest average score of 40%, while EVOSUITE and RANDOOP scored 67% and 50%, respectively. For fault detection, EVOSUITE and RANDOOP detected a higher number of bugs (7 out of 10 and 5 out of 10, respectively) compared to DIFFBLUE COVER, which found only 4 out of 10. Although the AI-based tool was outperformed in all three criteria, it still shows promise by being able to achieve adequate results, in some cases even surpassing the conventional tools while generating a significantly smaller number of total assertions and more comprehensive tests. Nonetheless, the study acknowledges its limitations in terms of the restricted number of AI-based tools used and the small number of projects utilized from Defects4J.
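The three evaluation criteria above are standard; as a reminder of how such figures are typically computed, here is a minimal sketch with made-up per-class numbers (not the study's data): mutation score is the fraction of injected mutants killed by a suite, and the per-tool values are averages over the sampled classes.

```python
# Hypothetical per-class results for one tool (NOT the study's data):
# (line coverage, mutants killed, mutants generated, bug detected?)
results = [
    (0.91, 34, 50, True),
    (0.85, 12, 40, False),
    (0.88, 45, 60, True),
]

avg_coverage = sum(cov for cov, _, _, _ in results) / len(results)
avg_mutation_score = sum(killed / total for _, killed, total, _ in results) / len(results)
bugs_found = sum(1 for *_, found in results if found)

print(f"coverage {avg_coverage:.0%}, mutation score {avg_mutation_score:.0%}, "
      f"bugs found {bugs_found}/{len(results)}")
```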
44

Automation in CS1 with the Factoring Problem Generator

Parker, Joshua B. 01 December 2009 (has links) (PDF)
As the field of computer science continues to grow, the number of students enrolled in related programs will grow as well. Though one-on-one tutoring is one of the more effective means of teaching, computer science instructors will have less and less time to devote to individual students. To address this growing concern, many tools that automate parts of an instructor’s job have been proposed. These tools can assist instructors in presenting concepts and grading student work, and they can help students learn to program more effectively. A growing group of intelligent tutoring systems attempts to tie all of this functionality into a single tool that is meant to be used throughout an entire CS course or series of courses. To contribute to this emerging area, the Factoring Problem Generator (FPG) is presented in this work. The FPG creates and grades problems in C in which students search for and extract blocks of repeated code into individual functions, learning to utilize parameters and return values as they do so. The problems created by the FPG are highly configurable by instructors such that the difficulty can be finely tuned to suit students’ individual needs. Instructors can choose whether or not to include arrays, pointers, certain elemental data types, certain operators, or certain kinds of statements, among other things. The FPG is additionally capable of generating a set of test cases for each generated problem. These test cases fully exercise students’ solutions by covering all branches of execution, and they ensure that program functionality does not change as students factor code into functions. Initial experimentation with the system has suggested that the FPG can be integrated into a beginning CS curriculum and with further refinement could become a standard tool in the CS classroom.
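The FPG's generator is not shown here, but the core check it performs, namely that a student's factored solution behaves identically to the original on branch-covering inputs, can be sketched as a differential test. The functions and test inputs below are illustrative placeholders, not the tool's actual C problems.

```python
# Original code with a repeated block inline vs a student's factored version (both illustrative).
def original(x, y):
    a = x * x + 1 if x > 0 else -x * x + 1
    b = y * y + 1 if y > 0 else -y * y + 1
    return a + b

def square_plus_one(v):          # the helper the student is expected to extract
    return v * v + 1 if v > 0 else -v * v + 1

def factored(x, y):
    return square_plus_one(x) + square_plus_one(y)

# Test inputs chosen so that every branch (positive / non-positive) is exercised.
test_inputs = [(2, 3), (-2, 3), (2, -3), (-2, -3), (0, 0)]

for args in test_inputs:
    assert original(*args) == factored(*args), f"behaviour changed for {args}"
print("factored solution preserves behaviour on all branch-covering inputs")
```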
45

eID in the e-learning Environment

Pan, Zhe January 2022 (has links)
At present, different Electronic Identity (eID) systems are used across the EU, which makes it difficult to carry eID information and transfer eID data of e-learning systems from one EU country to another. A project entitled Secure idenTity acrOss boRders linKed (STORK) was launched to address this problem by installing a Pan European Proxy Service (PEPS) server. Currently, Logica, a Swedish company, cooperates with the Department of Computer and Systems Sciences (DSV) to implement the PEPS at DSV. This thesis aims to build various test cases for the PEPS server installed at DSV. The PEPS consists of well-developed, separate packages that work together with the Service Provider (SP) and Identity Provider (IDP) to implement their respective functionalities. The tests performed on PEPS exercise the whole PEPS infrastructure: SP, PEPS and IDP, that is, the communication between these packages. The purpose of implementing PEPS is to let Ilearn@DSV connect with STORK. Hence, the SP in this thesis is the Ilearn@DSV system embedded with the SP package. The thesis first introduces the background of eID, e-learning and the e-learning system Ilearn@DSV. It then describes the test hierarchy and test requirements and completes the data collection step. Detailed test cases are provided for the predetermined test items in the test plans. Test plans and test cases must follow the IEEE test format and meet IEEE Standard 829-2008. Finally, the test cases are validated with respect to depth, breadth and effectiveness.
46

The utilization of log files generated by test executions: A systematic literature review

Gabaire, Elmi Bile January 2023 (has links)
Context: Testing is an important activity in software development and is typically estimated to account for nearly half of the effort in the software development cycle. This puts a great demand on improving the artifacts involved in this task, such as the test cases and test suites (collections of test cases). Objective: When executing test programs, it is typical to record runtime information associated with the test cases in the form of test execution logs or traces. The aim of this work is to explore how this information can be utilized to improve the software testing process. To this end, two main aspects are investigated: (1) test case generation and (2) different optimizations of existing test suites. Furthermore, the role of the logs in fault localization, in connection with improving existing test suites, is investigated. Method: A systematic literature review is conducted to investigate, identify and analyze the existing literature on test case generation and test suite optimization that utilizes test execution logs. Results: After a rigorous search in six digital databases, 26 primary studies were identified. 5 of the selected papers propose approaches for test data generation, 8 papers suggest test case prioritization (TCP) techniques, 4 papers discuss approaches to test case selection (TCS), and 5 papers propose approaches to test suite minimization (TSM). Furthermore, we identified 3 papers that discuss fault localization and one paper that discusses the decomposition of large test cases into smaller single-purpose test cases using the logs from previous test executions. Conclusion: The test execution logs are a useful source of information for different testing activities. Regarding test case generation, the main theme observed is the use of genetic algorithms to generate appropriate test cases where the alternative might have been random test data generation. When it comes to improving existing test suites, several approaches within TCP, TCS and TSM, such as similarity-based, modification-based, cluster-based, and search-based techniques, were put forward by the authors of the selected primary studies. Furthermore, several fault localization techniques using the logs were suggested.
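As an illustration of the similarity-based family of techniques the review identifies, a test suite can be ordered so that test cases whose execution logs are most dissimilar run first. The Jaccard distance and greedy ordering below are one common realization of that idea, shown with invented log data; they are not taken from any of the primary studies.

```python
# Invented execution logs: test case -> set of log events / covered components.
logs = {
    "tcA": {"init", "db_read", "parse"},
    "tcB": {"init", "db_read", "db_write"},
    "tcC": {"init", "render", "parse"},
    "tcD": {"init", "auth", "db_write"},
}

def jaccard_distance(a, b):
    return 1 - len(a & b) / len(a | b)

def greedy_diverse_order(logs):
    # Repeatedly pick the test case farthest (on average) from those already chosen.
    remaining = dict(logs)
    order = [max(remaining, key=lambda t: len(remaining[t]))]  # start with the largest log
    del remaining[order[0]]
    while remaining:
        nxt = max(remaining, key=lambda t: sum(
            jaccard_distance(remaining[t], logs[s]) for s in order) / len(order))
        order.append(nxt)
        del remaining[nxt]
    return order

print(greedy_diverse_order(logs))
```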
47

An In-Depth study on the Utilization of Large Language Models for Test Case Generation

Johnsson, Nicole January 2024 (has links)
This study investigates the utilization of Large Language Models for test case generation. The study uses the large language model and embedding model provided by Llama, specifically Llama2 of size 7B, to generate test cases given a defined input. The study involves an implementation that uses the customization techniques Retrieval Augmented Generation (RAG) and Prompt Engineering. RAG is a method that, in this study, stores organisation information locally, which is used to create test cases. This stored data complements the pre-trained data that the large language model has already been trained on. By using this method, the implementation can gather specific organisation data and therefore have a greater understanding of the required domains. The objective of the study is to investigate how AI-driven test case generation impacts the overall software quality and development efficiency. This is evaluated by comparing the output of the AI-based system to manually created test cases, as this is the company standard at the time of the study. The AI-driven test cases are analyzed mainly in terms of coverage and time, meaning that we compare to which degree the AI system can generate test cases compared to the manually created ones. Likewise, time is taken into consideration to understand how development efficiency is affected. The results reveal that by using Retrieval Augmented Generation in combination with Prompt Engineering, the system is able to identify test cases to a certain degree. The results show that 66.67% of a specific project was identified using the AI; however, minor noise could appear and results might differ depending on the project's complexity. Overall, the results revealed how the system can positively impact development efficiency, and it could also be argued to have a positive effect on software quality. However, it is important to understand that the implementation at its current stage is not sufficient to be used independently, but should rather be used as a tool to create test cases more efficiently.
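A rough sketch of the RAG pipeline the abstract describes is given below. The embedding and generation functions are hypothetical placeholders (the thesis uses Llama2-7B and its embedding model, whose APIs are not reproduced here), and the requirement documents are invented.

```python
import math

# Placeholder stand-ins for the embedding and language models.
def embed(text):
    # Hypothetical embedding: character-frequency vector, for illustration only.
    return [text.lower().count(c) for c in "abcdefghijklmnopqrstuvwxyz"]

def generate(prompt):
    return f"<LLM output for prompt of {len(prompt)} characters>"

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Organisation-specific documents stored locally, as in the RAG setup described above.
documents = [
    "Requirement R1: the login service shall lock an account after three failed attempts.",
    "Requirement R2: exported reports shall be available in CSV and PDF formats.",
]
index = [(doc, embed(doc)) for doc in documents]

def rag_test_cases(query, k=1):
    q = embed(query)
    # Retrieval: rank stored documents by similarity to the query and keep the top k.
    top = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)[:k]
    context = "\n".join(doc for doc, _ in top)
    # Prompt engineering: the retrieved context is prepended to the instruction.
    prompt = f"Context:\n{context}\n\nWrite test cases covering: {query}"
    return generate(prompt)

print(rag_test_cases("account lockout after failed logins"))
```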
48

Bug-finding and test case generation for java programs by symbolic execution

Bester, Willem Hendrik Karel 12 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2013. / ENGLISH ABSTRACT: In this dissertation we present a software tool, Artemis, that symbolically executes Java virtual machine bytecode to find bugs and automatically generate test cases that trigger the bugs found. Symbolic execution is a static software analysis technique that entails analysing code over symbolic inputs (essentially, classes of inputs), where each class is formulated as constraints over some input domain. The analysis then proceeds in a path-sensitive way, adding the constraints resulting from a symbolic choice at a program branch to a path condition, and branching non-deterministically over the path condition. When a possible error state is reached, the path condition can be solved and, if soluble, value assignments retrieved and used to generate explicit test cases in a unit testing framework. This last step increases confidence that the bugs are real, because testing is forced through normal language semantics, which could prevent certain states from being reached. We illustrate and evaluate Artemis on a number of examples with known errors, as well as on a large, complex code base. A preliminary version of this work was successfully presented at the SAICSIT conference held on 1–3 October 2012, in Centurion, South Africa.
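A compact illustration of the path-condition mechanism described above, using the Z3 SMT solver's Python bindings (this assumes the z3-solver package is installed). The analysed branches and the error state are a toy example, not code or behaviour from Artemis.

```python
from z3 import Int, Solver, And, Not, sat

x, y = Int("x"), Int("y")

# Toy program under analysis:
#   if x > 10:
#       if y < x: error()    # the state we want a test case for
# Each execution path accumulates the branch constraints into a path condition.
paths = {
    "error path":                        And(x > 10, y < x),
    "no error (inner branch not taken)": And(x > 10, Not(y < x)),
    "no error (outer branch not taken)": Not(x > 10),
}

for name, path_condition in paths.items():
    solver = Solver()
    solver.add(path_condition)
    if solver.check() == sat:
        model = solver.model()
        # Concrete inputs that drive execution down this path become a unit test case.
        print(name, "-> test inputs:", {d.name(): model[d] for d in model.decls()})
    else:
        print(name, "-> infeasible path")
```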
49

Application of Topic Models for Test Case Selection : A comparison of similarity-based selection techniques / Tillämpning av ämnesmodeller för testfallsselektion

Askling, Kim January 2019 (has links)
Regression testing is just as important for the quality assurance of a system as it is time consuming. Several techniques exist with the purpose of lowering the execution times of test suites and providing faster feedback to the developers; examples are techniques based on transition models or string distances. These techniques are called test case selection (TCS) techniques and focus on selecting subsets of the test suite deemed relevant for the modifications made to the system under test. This thesis project focused on evaluating the use of a topic model, latent Dirichlet allocation, as a means to create a diverse selection of test cases for coverage of certain test characteristics. The model was tested on authentic data sets from two different companies, where the results were compared against prior work in which TCS was performed using similarity-based techniques. Also, the model was tuned and evaluated, using an algorithm based on differential evolution, to increase the model's stability in terms of inferred topics and topic diversity. The results indicate that the use of the model for test case selection purposes was not as efficient as the other similarity-based selection techniques studied in work prior to this thesis. In fact, the results show that the selection generated using the model performs similarly, in terms of coverage, to a randomly selected subset of the test suite. Tuning the model does not improve these results; in fact, the tuned model performs worse than the other methods in most cases. However, the tuning process results in the model being more stable in terms of inferred latent topics and topic diversity. The performance of the model is believed to be strongly dependent on the characteristics of the underlying data used to train it, putting emphasis on word frequencies and the overall sizes of the training documents, and implying that improving these would affect the words' relevance scoring for the better.
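For readers unfamiliar with the technique, the following sketch shows how latent Dirichlet allocation can drive a diversity-oriented selection: infer topics over test case descriptions and pick one representative test per topic. It assumes scikit-learn and NumPy are available; the test case texts and the selection rule are illustrative, not the thesis's setup.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import numpy as np

# Invented test case descriptions standing in for a real test suite.
test_cases = [
    "verify login with valid password",
    "verify login lockout after failed password attempts",
    "export report as csv file",
    "export report as pdf file",
    "verify password reset email",
]

# Represent each test case as a bag of words and infer latent topics.
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(test_cases)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)        # rows: test cases, columns: topic weights

# Diversity-oriented selection: pick the most representative test case per topic.
selection = {int(np.argmax(doc_topics[:, t])) for t in range(doc_topics.shape[1])}
for i in sorted(selection):
    print("selected:", test_cases[i])
```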
50

Model-Based Test Case Generation for Real-Time Systems

Hessel, Anders January 2007 (has links)
Testing is the dominant verification technique used in the software industry today. The use of automatic test case execution increases, but the creation of test cases remains manual and thus error prone and expensive. To automate generation and selection of test cases, model-based testing techniques have been suggested.

In this thesis two central problems in model-based testing are addressed: the problem of how to formally specify coverage criteria, and the problem of how to generate a test suite from a formal timed system model, such that the test suite satisfies a given coverage criterion. We use model checking techniques to explore the state-space of a model until a set of traces is found that together satisfy the coverage criterion. A key observation is that a coverage criterion can be viewed as consisting of a set of items, which we call coverage items. Each coverage item can be treated as a separate reachability problem.

Based on our view of coverage items we define a language, in the form of parameterized observer automata, to formally describe coverage criteria. We show that the language is expressive enough to describe a variety of common coverage criteria described in the literature. Two algorithms for test case generation with observer automata are presented. The first algorithm returns a trace that satisfies all coverage items with a minimum cost. We use this algorithm to generate a test suite with minimal execution time. The second algorithm explores only states that may increase the already found set of coverage items. This algorithm works well together with observer automata.

The developed techniques have been implemented in the tool CoVer. The tool has been used in a case study together with Ericsson where a WAP gateway has been tested. The case study shows that the techniques have industrial strength.
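The observer-automata machinery of CoVer is not reproduced here, but the key observation, that each coverage item is a separate reachability problem over the model's state space, can be sketched with a breadth-first search over a small untimed transition system. The model and the edge-coverage criterion below are invented for illustration.

```python
from collections import deque

# A tiny (untimed) transition system standing in for the timed model:
# state -> list of (action, next state). Invented for illustration.
model = {
    "s0": [("connect", "s1")],
    "s1": [("request", "s2"), ("disconnect", "s0")],
    "s2": [("respond", "s1"), ("timeout", "s0")],
}

# Coverage items: here, "every edge (state, action) is taken at least once".
coverage_items = {(s, a) for s, edges in model.items() for a, _ in edges}

def trace_covering(item, start="s0"):
    # Each coverage item is a separate reachability problem: find a trace whose
    # last step takes the required edge.
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        state, trace = queue.popleft()
        for action, nxt in model[state]:
            step = trace + [(state, action)]
            if (state, action) == item:
                return step
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, step))
    return None

# A (non-minimal) test suite: one witness trace per coverage item.
for item in sorted(coverage_items):
    print(item, "covered by", trace_covering(item))
```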
