1. Eliminating effects of Flakiness in Embedded Software Testing: An industrial case study. Kanneganti, Joshika; Vadrevu, Krithi Sameera. January 2020.
Background. Unstable and unpredictable tests, herein referred to as flaky tests, pose a serious challenge to systems in the production environment. If a device is not tested thoroughly, it will be sent back from the production centers for retesting, which is expensive. Removing flaky tests involves detecting them, finding the causes of flakiness, and finally eliminating the flakiness. The existing literature provides information on causes of flakiness and elimination techniques for software systems; these are studied thoroughly, and interviews are used to understand whether they are applicable in the context of embedded systems. Objectives. The primary objective is to identify causes of flakiness in a device under test, as well as techniques for eliminating flakiness. Methods. In this paper, we applied a literature review to establish the current state of the art on flakiness. A case study was selected to address the objectives of the study, with interviews and observations carried out to collect data. Data analysis was performed using a directed content analysis method. Results. The observations resulted in identifying and eliminating 4 causes of flakiness in embedded systems, and the interviews resulted in 4 elimination techniques that were not found in the literature. Conclusions. Causes of and elimination techniques for flakiness in the domain of embedded systems are identified, and knowledge translation between the domains was carried out effectively.
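The abstract does not name the four causes identified in the study, so the example below is purely illustrative and not taken from the thesis: a hypothetical device under test (the invented FakeDevice class stands in for real hardware) combined with a fixed asynchronous wait, a pattern that makes a test pass or fail with no change to the code under test.

```python
import random
import time

class FakeDevice:
    """Hypothetical stand-in for a device under test; not part of the thesis."""

    def flash(self, image):
        self._image = image

    def reboot(self):
        # Reboot time varies between runs, like real hardware timing does.
        self._ready_at = time.monotonic() + random.uniform(1.0, 3.0)

    def firmware_version(self):
        # The new version is only reported once the simulated reboot has finished.
        return "v2" if time.monotonic() >= self._ready_at else "v1"

def test_firmware_update_applies():
    device = FakeDevice()
    device.flash("firmware-v2.bin")
    device.reboot()
    time.sleep(2)  # fixed wait: too short whenever the reboot takes longer than 2 s
    assert device.firmware_version() == "v2"
```

Rerunning this test repeatedly makes it both pass and fail, which is precisely the behaviour that triggers the expensive retesting cycle described above.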

2. Non-deterministic tests and where to find them: Empirically investigating the relationship between flaky tests and test smells by examining test order dependency. Lamprou, Sokrates. January 2022.
Flaky tests are non-deterministic tests that both pass and fail when no new code changes have been introduced to the code under test. This is a widespread problem for the Continuous Integration community, where the underlying idea is that code is only integrated if all test cases pass. Debugging and eradicating flaky tests is hard and time-consuming, and ignoring them may introduce bugs. Due to flaky tests, Continuous Integration systems risk decreasing either productivity or quality, the very properties they claim to provide. Prior research suggests a link between 4 root causes of flaky tests and 5 different test smells, i.e. anti-patterns in test code. Unfortunately, difficulties in reproducing the flaky tests found in that study make this connection ambiguous. This thesis intends to validate the relationship by re-implementing the test smells in a static analyser named FlakyHoover and detecting them on a new dataset consisting of order-dependent and non-order-dependent flaky tests. Analysing how the test smells are distributed over order-dependent and non-order-dependent flaky tests makes it possible to determine the relationship between test smells and root causes of flaky tests. The findings suggest that there may exist a correlation between 3 of the 5 test smells and certain types of root causes of flaky tests. However, further research is required to determine the studied test smells' relation to flaky tests.
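FlakyHoover and the thesis dataset are not reproduced here; the following is a minimal, hypothetical pytest sketch (the test names and shared cache are invented) of the kind of order dependency that separates the two halves of such a dataset: the second test passes only if the first one has already run.

```python
# Module-level mutable state shared by both tests couples them together.
cache = {}

def test_populates_cache():
    cache["user"] = "alice"
    assert cache["user"] == "alice"

def test_reads_cache():
    # Passes when run after test_populates_cache, fails when run alone or
    # when the order is shuffled -- an order-dependent flaky test.
    assert cache.get("user") == "alice"
```

Shuffling the execution order, for example with the pytest-randomly plugin, is one common way to expose this kind of dependency.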

3. Investigating the applicability of execution tracing techniques for root causing randomness-related flaky tests in Python. Norrestam Held, Erik. January 2021.
Regression testing is an essential part of developing and maintaining software. It helps verify that changes to the software have not introduced any new bugs, and that the functionality still works as intended. However, for this verification to be valid, the executed tests must be assumed to be deterministic, i.e. produce the same output under the same circumstances. Unfortunately, this is not always the case. A test that exhibits non-deterministic behavior is said to be flaky. Flaky tests can severely inhibit the benefits of regression testing, as developers must figure out whether a failing test is due to a bug in the system under test (SUT) or to test flakiness. Moreover, the non-deterministic nature of flaky tests poses several problems. Not only are the failures difficult to reproduce and debug, but developers are more likely to ignore the outcome of flaky tests, potentially leading to overlooked bugs in the SUT. The aim of this thesis was to investigate the applicability of execution tracing techniques as a means of providing root cause analysis for flaky tests in the randomness and network categories. This involved reproducing and studying flakiness, as well as implementing and evaluating a prototype with the ability to analyze runtime behavior in flaky tests. To gain a better understanding of reproducibility and common traits among flaky tests in the selected categories, a pre-study was conducted. Based on the outcome of the pre-study and findings in related literature, the network category was dropped entirely, and two techniques were chosen for implementation. The implementation process resulted in the FlakyPy tool, a plugin for pytest that provides root cause analysis aimed at randomness-related flakiness. When run against a dataset of 22 flaky tests, the tool was able to identify potential root causes in 15 of them. This serves as an indication that execution tracing has the potential to detect root causes of randomness-related flaky tests in Python. However, more research is needed to evaluate how developers perceive the usefulness of such tools.
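The abstract does not spell out which two tracing techniques were implemented, so the sketch below is only an assumption-laden illustration of the general idea rather than FlakyPy's actual design: record the Python-level calls a test makes into the standard random module, so that a non-deterministic failure can be linked to an unseeded source of randomness. The helper and test names are invented.

```python
import random
import sys

def trace_random_calls(test_func):
    """Run test_func under a tracer; return its outcome and the random-module calls seen."""
    calls = []

    def tracer(frame, event, arg):
        # Record every Python-level call whose code lives in the 'random' module.
        if event == "call" and frame.f_globals.get("__name__") == "random":
            calls.append(frame.f_code.co_name)
        return None  # 'call' events suffice; no line-level tracing needed

    sys.settrace(tracer)
    try:
        test_func()
        outcome = "passed"
    except AssertionError:
        outcome = "failed"
    finally:
        sys.settrace(None)
    return outcome, calls

def test_unseeded_randomness():
    # Fails for roughly half of all runs because the generator is never seeded.
    assert random.randint(0, 1) == 1

if __name__ == "__main__":
    print(trace_random_calls(test_unseeded_randomness))
```

When the outcome differs between reruns while the recorded calls point into the random module, randomness becomes a plausible root cause to report.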

4. Randomness as a Cause of Test Flakiness. Mjörnman, Jesper; Mastell, Daniel. January 2021.
With today's focus on Continuous Integration, test cases are used to ensure the software's reliability when integrating and developing code. Test cases that behave in a non-deterministic manner are known as flaky tests, and they threaten the software's reliability. Because of flaky tests' non-deterministic nature, they can be troublesome to detect and correct. This causes companies to spend great amounts of resources on flaky tests, since flaky tests can reduce the quality of their products and services. The aim of this thesis was to develop a usable tool that can automatically detect flakiness in the Randomness category. This was done by initially locating and rerunning flaky tests found in public Git repositories, then scanning the resulting pytest logs from the tests that manifested flaky behaviour and noting indicators of how flakiness manifests in the Randomness category. From these findings we determined tracing to be a viable option for detecting randomness as a cause of flakiness. The findings were implemented in our proposed tool FlakyReporter, which reruns flaky tests to determine whether they pertain to the Randomness category. FlakyReporter was found to accurately categorise flaky tests into the Randomness category when tested against 25 different flaky tests. This indicates the viability of using tracing as a method of categorising flakiness.
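FlakyReporter's internals are not described beyond the rerun step, so the sketch below only illustrates that step under an assumed policy: a test whose outcome varies across identical invocations is flagged as a flaky candidate for further categorisation. The test id is a hypothetical placeholder.

```python
import subprocess
from collections import Counter

def rerun_test(test_id, runs=10):
    """Rerun a single pytest test several times and summarise its outcomes."""
    outcomes = Counter()
    for _ in range(runs):
        # Exit code 0 means the test passed; anything else is treated as a failure.
        result = subprocess.run(["pytest", "-q", test_id], capture_output=True, text=True)
        outcomes["passed" if result.returncode == 0 else "failed"] += 1
    return outcomes

if __name__ == "__main__":
    summary = rerun_test("tests/test_stats.py::test_mean")  # hypothetical test id
    is_flaky_candidate = len(summary) > 1  # both outcomes seen across identical reruns
    print(summary, "-> flaky candidate" if is_flaky_candidate else "-> deterministic in this sample")
```

The logs captured in result.stdout could then be scanned for randomness indicators, as the thesis describes, before attributing the flakiness to the Randomness category.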