11 |
Frameworky pro jednotkové testování v jazyce Scala / Frameworks for unit testing in Scala language. Kolmistr, Tomáš. January 2015 (has links)
This thesis deals with frameworks for unit testing in the Scala programming language. Five frameworks are presented in total, two of which are designed for unit testing with mock objects and three without mock objects. The first, theoretical part introduces concepts related to testing and the Scala programming language. The next part specifies the criteria for selecting the frameworks, including the criteria for their subsequent comparison. In the practical part, unit tests are written according to test scenarios and the comparison of the frameworks is evaluated.
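The mock-based versus mock-free distinction the abstract draws can be illustrated with a minimal sketch. The example below is in Python rather than Scala, and is not any of the thesis's five frameworks; `OrderService` and the gateway collaborator are invented for illustration:

```python
from unittest import mock

# A component that depends on an external collaborator (e.g. a payment gateway).
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        # Delegate the charge to the collaborator and report success.
        return self.gateway.charge(amount) == "ok"

# Mock-style test: the collaborator is replaced by a mock object, so the
# unit under test is verified in isolation from the real gateway.
gateway = mock.Mock()
gateway.charge.return_value = "ok"
service = OrderService(gateway)
assert service.place_order(100) is True
gateway.charge.assert_called_once_with(100)
```

A mock-free framework would instead exercise `OrderService` against a real or hand-written fake collaborator; the trade-off between the two styles is exactly what the thesis's comparison criteria address.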
|
12 |
A Language-Recognition Approach to Unit Testing Message-Passing Systems. Ubah, Ifeanyi. January 2017 (has links)
This thesis addresses the problem of unit testing components in message-passing systems. A message-passing system is one that comprises components communicating with each other solely via the exchange of messages. Testing aids developers in detecting and fixing potential errors, and with unit testing in particular, the focus is on independently verifying the correctness of single components, such as functions and methods, in a system whose behavior is well understood. With the aid of unit testing frameworks such as those of the xUnit family, this process can not only be automated and done iteratively, but also easily interleaved with the development process, facilitating rapid feedback and early detection of errors in the system. However, such frameworks work in an imperative manner and as such are unsuitable for verifying message-passing systems, where the behavior of a component is encoded in its stream of exchanged messages. In this work, we recognize that, similar to streams of symbols in the field of formal languages and abstract machines, one can specify properties of a component's message stream such that they form a language. Unit testing a component thus becomes the description of an automaton that recognizes such a specified language. We propose a platform-independent, language-recognition approach to creating unit testing frameworks for describing and verifying the behavior of message-passing components, and use this approach to create a prototype implementation for the Kompics component model. We show that this approach can be used to perform both black-box and white-box testing of components, and that it is easy to work with while preventing common mistakes in practice.
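The central idea, a unit test as a recognizer for a language of message traces, can be sketched as follows. This is an illustrative Python sketch, not the Kompics prototype's API; the (PING PONG)* language and the trace format are assumptions made for the example:

```python
# A unit test as a language recognizer: the expected behavior of a
# message-passing component is the language (PING PONG)* over its
# observed message trace, checked by a two-state finite automaton.
def accepts_ping_pong(trace):
    state = 0  # state 0: expecting PING, state 1: expecting PONG
    for msg in trace:
        if state == 0 and msg == "PING":
            state = 1
        elif state == 1 and msg == "PONG":
            state = 0
        else:
            return False  # unexpected message: trace rejected
    return state == 0  # accept only if every PING was answered

assert accepts_ping_pong(["PING", "PONG", "PING", "PONG"])
assert not accepts_ping_pong(["PING", "PING"])
```

Passing the test then means the component's exchanged messages form a word in the specified language, rather than that a sequence of imperative assertions happened to hold.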
|
13 |
Utvecklares upplevelser av enhetstester / Developers' experiences of unit tests. Lindberg, Robert; Thysell, Oskar. January 2022 (has links)
The purpose of the study has been to investigate what challenges developers experience when working with unit tests. Unit tests are test code written to verify that production code, so-called regular code, works correctly and fulfills its purpose. This is the type of testing that is done first in a development process to guarantee that the code maintains good quality.
The study was carried out by having 5 developers at an IT consulting company in Luleå participate in individual open qualitative interviews, where they answered questions about their experiences of working with unit tests. The study concludes that many of the problems identified here are consistent with previous research on the topic. These problems were mainly of a technical nature; for example, maintenance, knowing what to test, and knowing when one has tested enough were among the most commonly shared. This shows that these problems still exist today. The study also found that a lack of education regarding unit tests is common, and that the lack of a shared method, especially in combination with the lack of education, worsens the problems that already exist. With the help of the study's qualitative approach, some potential contributing factors for these perceived problems have been identified. Based on these conclusions, recommendations have been formulated for organizations and for further research.
|
14 |
Programming Language and Tools for Automated Testing. Tan, Roy Patrick. 27 August 2007
Software testing is a necessary and integral part of the software quality process. It is estimated that inadequate testing infrastructure costs the US economy between $22.2 and $59.5 billion annually.
We present Sulu, a programming language designed with automated unit testing specifically in mind, as a demonstration of how software testing may be more integrated and automated into the software development process. Sulu's runtime and tools support automated testing from end to end; automating the generation, execution, and evaluation of test suites using both code coverage and mutation analysis. Sulu is also designed to fully integrate automatically generated tests with manually written test suites. Sulu's tools incorporate pluggable test case generators, which enables the software developer to employ different test case generation algorithms.
To show the effectiveness of this integrated approach, we designed an experiment to evaluate a family of test suites generated using one test case generation algorithm, which exhaustively enumerates every sequence of method calls within a certain bound. The results show over 80% code coverage and high mutation coverage for the most comprehensive test suite generated. / Ph. D.
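The bounded-exhaustive generation algorithm described above can be sketched as follows. This is an illustrative Python sketch, not Sulu's actual generator, and the method names are invented:

```python
import itertools

# Bounded-exhaustive test generation: enumerate every sequence of method
# calls up to a fixed length bound, as in the algorithm the experiment
# evaluates (names and bound here are illustrative only).
def generate_call_sequences(methods, bound):
    for length in range(1, bound + 1):
        for seq in itertools.product(methods, repeat=length):
            yield seq

seqs = list(generate_call_sequences(["push", "pop", "peek"], 2))
# 3 sequences of length 1 plus 3*3 = 9 of length 2
assert len(seqs) == 12
```

Each generated sequence would then be executed against the unit under test, with code coverage and mutation analysis judging how comprehensive the resulting suite is.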
|
15 |
Sind Sprachmodelle in der Lage die Arbeit von Software-Testern zu übernehmen?: automatisierte JUnit Testgenerierung durch Large Language Models / Are language models capable of taking over the work of software testers?: automated JUnit test generation with large language models. Schäfer, Nils. 20 September 2024
This bachelor's thesis examines the quality of language models in the context of generating unit tests for Java applications. The goal of the thesis is to analyze to what extent JUnit tests can be generated automatically using language models, and to derive from this with what quality they can take over and replace the work of software testers. To this end, an automated test creation system is designed and implemented in the form of a Python command-line tool that generates test cases through requests to the language model. To measure its quality, the generated tests are adopted without manual intervention. As the basis for the evaluation, a trial is carried out in which tests are generated for 3 Java Maven projects of differing complexity. The subsequent analysis uses a fixed evaluation procedure that assesses test code coverage and success rate and compares them with manual tests. The results show that language models are able to generate JUnit tests with satisfactory test coverage, but exhibit an insufficient success rate compared to manual tests. It becomes clear that, due to quality deficiencies in the generated test code, they cannot fully replace the work of software testers. However, they offer a way to take over test creation processes that end with a subsequent manual review, and thus reduce testers' workload.

Table of contents:
List of Figures
List of Tables
List of Listings
List of Abbreviations
1 Introduction
1.1 Problem Statement
1.2 Objectives
2 Fundamentals
2.1 Software Development Lifecycle
2.2 Large Language Models
2.2.1 Definition and Introduction
2.2.2 Generative Pre-trained Transformer
2.3 Prompt Engineering
2.3.1 Prompt Elements
2.3.2 Prompt Techniques
2.4 Unit Testing
2.4.1 Fundamentals
2.4.2 Java with JUnit 5
2.5 SonarQube
3 Design
3.1 Prerequisites
3.2 Requirements Analysis
3.3 Choice of the Large Language Model
3.4 Prompt Design
3.5 Program Flowchart
4 Implementation
4.1 Functionality
4.1.1 User Query
4.1.2 Discovery of Java Files in the Project
4.1.3 Prompt Construction
4.1.4 API Requests for Test Generation
4.1.5 Test Verification with Repair Rounds
4.1.6 Logging
4.2 Integration of SonarQube, Plugins, and Dependencies
4.3 Test Run
5 Execution and Analysis
5.1 Execution
5.2 Evaluation of the Tests
5.2.1 Line Coverage
5.2.2 Branch Coverage
5.2.3 Overall Coverage
5.2.4 Success Rate
5.3 Test Code Analysis
5.4 Comparison with Manual Test Results
5.5 Interpretation of the Results
6 Conclusion
6.1 Conclusions
6.2 Outlook
Bibliography
Appendix A: Source Code
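The generate-and-repair workflow the abstract describes (generate tests, run them, feed errors back to the model for a repair round) can be sketched as follows. This is an illustrative Python sketch; `request_tests` and `compile_and_run` are stand-ins for the real model API and build toolchain, neither of which is specified in the abstract:

```python
# Sketch of a generate-and-repair loop for LLM-based test generation.
def generate_with_repair(source, request_tests, compile_and_run, max_rounds=3):
    prompt = f"Write JUnit tests for:\n{source}"
    tests = request_tests(prompt)
    for _ in range(max_rounds):
        ok, errors = compile_and_run(tests)
        if ok:
            return tests
        # Feed compiler/test errors back to the model for a repair round.
        tests = request_tests(prompt + f"\nFix these errors:\n{errors}")
    return tests

# Stub model and runner, so the control flow itself can be checked:
# the first request yields broken tests, the repair round fixes them.
calls = []
def fake_model(p):
    calls.append(p)
    return "broken" if len(calls) < 2 else "fixed"
def fake_runner(t):
    return (t == "fixed", "compile error")

assert generate_with_repair("class Foo {}", fake_model, fake_runner) == "fixed"
assert len(calls) == 2  # one initial request plus one repair round
```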
|
16 |
Selecting unit testing framework for embedded microcontroller development. Toth, Jonatan; Karlsson, Fredrik. January 2021 (has links)
This study highlighted the limited adoption of the agile methodology test-driven development among embedded developers and investigated how to get more developers to start using it. The research focused on making the practice of unit testing, a large part of the test-driven development methodology, more accessible to developers by lowering the knowledge threshold around which unit testing framework to choose and how such frameworks work. The area of embedded development was narrowed down to microcontrollers and the development of software for them in the programming language C. The study first gathered developers' general opinion on the most sought-after criteria that a unit testing framework for embedded development should support. With the help of those criteria, an extensive comparison could be done between some of the most popular and recommended unit testing frameworks for embedded microcontroller development. The observations made during the experiment were then used to draw lessons that could form recommendations about which unit testing framework should be used depending on a developer's preferences.
|
17 |
Reestructuración y refactorización de Unit tests con TestSurgeon / Restructuring and refactoring unit tests with TestSurgeon. Estefo Carrasco, Pablo Ignacio. January 2013 (has links)
Ingeniero Civil en Computación / Today, testing is a fundamental activity in the development cycle of any serious software project. Indeed, agile methodologies raise its importance in the construction of software to the point that adding a new feature is forbidden unless a test validating it has been written first.
As software grows in functionality and requirements change, it becomes more complex. That is why several techniques exist for restructuring code, making it more flexible to change and allowing it to grow.
However, tests also grow in number and complexity, so redundant tests are not rare, both from the point of view of their source code (test duplication) and of their execution. Yet unlike "functional" code, little effort has been made by industry to promote techniques and build tools that ease the task of keeping their structure and design clean.
One important consequence of this problem is the long time it takes to run all the tests. With redundancy, execution takes longer than necessary, which makes developers run the tests less often and even spend less time writing new tests, reducing coverage. The latter critically undermines the reliability of the code base and hence of the application.
This work proposes a tool for detecting design problems in tests. TestSurgeon approaches the problem from two main analysis perspectives: the tests' source code and their execution. Through an intuitive interface, the developer can navigate the unit tests and compare them, guided by dedicated metrics that ease the detection of interesting cases. It also provides a complete visualization condensing two metrics that describe and differentiate the execution of the tests under comparison, enabling an effective analysis. Finally, TestSurgeon can detect semantic differences between tests and find redundancies among them for possible refactoring.
Different refactoring and restructuring scenarios detected by TestSurgeon are presented. They are described with real examples drawn from applying TestSurgeon to the tests of Roassal, an agile visualization engine.
TestSurgeon won first place in the ACM Student Research Competition (undergraduate category) at the ICSE conference (the premier software engineering conference) in 2012.
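One concrete signal such an execution-side analysis can use, coverage subsumption between tests, can be sketched as follows. This is an illustrative Python sketch, not TestSurgeon's actual metrics:

```python
# A test whose set of covered methods is a strict subset of another
# test's coverage is a candidate for redundancy review: running it may
# exercise nothing that the larger test does not already exercise.
def redundancy_candidates(coverage):
    candidates = []
    for a, cov_a in coverage.items():
        for b, cov_b in coverage.items():
            if a != b and cov_a < cov_b:  # strict subset of covered methods
                candidates.append((a, b))
    return candidates

coverage = {
    "test_push": {"push"},
    "test_push_pop": {"push", "pop"},
    "test_peek": {"peek"},
}
assert redundancy_candidates(coverage) == [("test_push", "test_push_pop")]
```

Coverage subsumption alone cannot prove redundancy (two tests may cover the same methods but assert different properties), which is why the source-code perspective and the developer's judgment remain part of the workflow.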
|
18 |
Testování výkonu za běhu v Javě / Run-time performance testing in Java. Kotrč, Jaroslav. January 2015 (has links)
This work focuses on relative comparisons of the performance of individual methods. It is based on Stochastic Performance Logic, which makes it possible to express, for example, that one method runs at most two times longer than another. These results are more portable than absolute values. The approach extends standard unit tests with performance assumptions, which are evaluated during the actual run time of a released application. Dynamically added and removed instrumentation is used to modify the production code automatically. The instrumentation part uses the DiSL framework so that even Java system classes can be measured seamlessly. Methods are measured sequentially, the number of concurrently measured methods is changed dynamically, and measurement code is removed as soon as the required data are obtained, to avoid high overhead. The results show that for a processor-demanding application this approach may produce overhead peaks up to 3 times lower than measuring all methods at once.
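A relative performance assertion in the spirit of Stochastic Performance Logic can be sketched as follows. This illustrative Python harness stands in for the thesis's DiSL-based instrumentation, and the workloads and the bound of 50 are assumptions chosen to keep the check stable:

```python
import time

# Measure total wall-clock time of repeated calls to a function.
def measure(fn, repeats=200):
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return time.perf_counter() - start

# SPL-style relative assumption: `fast` takes at most k times as long
# as `bound_fn`, instead of asserting any absolute duration.
def assert_at_most_k_times_slower(fast, bound_fn, k):
    t_fast = measure(fast)
    t_bound = measure(bound_fn)
    assert t_fast <= k * t_bound, f"{t_fast:.4f}s > {k} * {t_bound:.4f}s"

# Example: summing 100 numbers should be at most 50x slower than
# summing 10 (a deliberately loose bound, since timings are noisy).
assert_at_most_k_times_slower(lambda: sum(range(100)),
                              lambda: sum(range(10)), 50)
```

The portability claim follows from the form of the assertion: a ratio between two methods tends to hold across machines of different absolute speeds, whereas a fixed millisecond budget does not.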
|
19 |
An Analysis of the Differences between Unit and Integration Tests. Trautsch, Fabian. 08 April 2019 (has links)
No description available.
|
20 |
System for firmware verification. Nilsson, Daniel. January 2009 (has links)
Software verification is an important part of software development, and the most practical way to do this today is through dynamic testing. This report explains concepts connected to verification and testing and also presents the testing framework Trassel, developed during the writing of this report. Constructing domain-specific languages and tools by using an existing language as a starting ground can be a good strategy for solving certain problems; this was tried with Trassel, where the description language for writing test cases was written as a DSL using Python as the host language.
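The embedded-DSL idea, a test-case description language hosted in Python, can be sketched as follows. The fluent `send`/`expect` API here is invented for illustration and is not Trassel's actual syntax:

```python
# A miniature embedded DSL for describing firmware test cases: Python is
# the host language, and method chaining gives test descriptions a
# declarative feel while staying ordinary Python code.
class TestCase:
    def __init__(self, name):
        self.name = name
        self.steps = []

    def send(self, command):
        self.steps.append(("send", command))
        return self  # return self to allow chaining

    def expect(self, response):
        self.steps.append(("expect", response))
        return self

    def run(self, device):
        # `device` is any callable taking a command and returning a reply.
        last = None
        for kind, value in self.steps:
            if kind == "send":
                last = device(value)
            elif kind == "expect" and last != value:
                return False
        return True

# A fake firmware endpoint that simply echoes commands back.
case = TestCase("echo").send("PING").expect("PING")
assert case.run(lambda cmd: cmd) is True
```

Hosting the DSL in an existing language means test authors keep the host's tooling (editors, debuggers, libraries) for free, which is the strategy the report argues for.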
|