1 |
Combinatorial-Based Testing Strategies for Mobile Application Testing. Michaels, Ryan P.
This work introduces three new coverage criteria based on combinatorial event and element sequences that occur in the mobile environment. The novel combinatorial-based criteria are used to reduce, prioritize, and generate test suites for mobile applications. The criteria cover unique combinations of events and elements and differ in how strictly they treat ordering. For instance, consider the coverage of a pair of events, e1 and e2. The least strict criterion, Combinatorial Coverage (CCov), counts the combination of these two events in a test case without respect to the order in which the events occur; that is, the combination (e1, e2) is the same as (e2, e1). The second criterion, Sequence-Based Combinatorial Coverage (SCov), considers the order of occurrence within a test case: the sequences (e1, ..., e2) and (e2, ..., e1) are different. The third and strictest criterion, Consecutive-Sequence Combinatorial Coverage (CSCov), counts only adjacent pairs: the sequence (e1, e2) is counted only if e1 occurs immediately before e2.
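To make the distinction between the three criteria concrete, here is a minimal Python sketch (not taken from the dissertation; the event names and function names are invented for illustration) that computes the pairs a single test case covers under each criterion for t=2:

```python
from itertools import combinations

def ccov_pairs(test_case):
    """CCov: unordered pairs of distinct events that occur anywhere in the test case."""
    return {frozenset(p) for p in combinations(set(test_case), 2)}

def scov_pairs(test_case):
    """SCov: ordered pairs (e1, e2) where e1 occurs somewhere before e2."""
    return {(test_case[i], test_case[j])
            for i in range(len(test_case))
            for j in range(i + 1, len(test_case))
            if test_case[i] != test_case[j]}

def cscov_pairs(test_case):
    """CSCov: ordered pairs of immediately adjacent (consecutive) events."""
    return {(a, b) for a, b in zip(test_case, test_case[1:]) if a != b}

# A test case is modeled as a sequence of GUI event identifiers (names are invented).
tc = ["tap_menu", "tap_settings", "toggle_wifi", "tap_menu"]
print(ccov_pairs(tc))   # order ignored: (e1, e2) == (e2, e1)
print(scov_pairs(tc))   # order matters anywhere in the sequence
print(cscov_pairs(tc))  # order matters and the events must be adjacent
```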
The first contribution uses the novel combinatorial-based criteria for test suite reduction. Empirical studies reveal that the criteria, when used with event sequences of size t=2, reduce the test suites by 22.8%-61.3% while the reduced test suites provide 98.8%-100% fault-finding effectiveness. Empirical studies in Android also reveal that the event sequence criteria of size t=2 reduce the test suites by 24.67%-66% while losing at most 0.39% code coverage. When the criteria are used with element sequences of size t=2, the test suites are reduced by 40%-72.67% while losing less than 0.87% code coverage.
The second contribution of this work applies the combinatorial-based criteria for test suite prioritization of mobile application test suites. The results of an empirical study show that the prioritization criteria that use element and event sequences cover the test suite's elements, events, and code faster than random orderings. On average the prioritized orderings cover all elements within 21.81% of the test suite, all events within 45.99% of the test suite, and all code within 51.21% of the test suite. Random orderings achieve full code coverage with 84.8% of the test suite on average.
The third contribution uses the combinatorial-based criteria for test suite generation. This work modifies the random walk tool used in prior experiments to give weight (preference) to coverage of the combinatorial-based event and element criteria. The use of the Element SCov and CSCov criteria results in test suites that increase code coverage for three of the four subject applications. Specifically, code coverage increases by 0.29%-5.89% with SCov and 1.36%-6.79% with CSCov in comparison to the original random walk algorithm. The SCov criterion increases total sequence coverage by 5%-88% and the CSCov criterion increases sequence coverage by 13%-68%. One criterion, Element CCov, failed to increase code coverage for two of the four applications.
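As a rough illustration of the weighted generation idea, assuming a highly simplified GUI model in which a callable reports the events enabled after the previous event, the sketch below biases a random walk toward events that would cover a not-yet-covered consecutive pair (the actual tool, model, and weighting in the dissertation differ):

```python
import random

def weighted_walk(enabled_events, steps, covered, bias=5.0, seed=0):
    """Random walk over GUI events that prefers events completing an uncovered
    consecutive pair (a rough, CSCov-style weighting).

    enabled_events: callable mapping the previous event (or None) to the events
                    that could fire next -- a stand-in for a real GUI model.
    covered:        set of (prev, next) pairs already covered by earlier tests.
    """
    rng = random.Random(seed)
    walk, prev = [], None
    for _ in range(steps):
        candidates = enabled_events(prev)
        if not candidates:
            break
        # Heavier weight for events that would cover a new consecutive pair.
        weights = [bias if (prev, e) not in covered else 1.0 for e in candidates]
        nxt = rng.choices(candidates, weights=weights, k=1)[0]
        covered.add((prev, nxt))
        walk.append(nxt)
        prev = nxt
    return walk

# Toy model: every event is always enabled (purely illustrative).
events = ["tap_menu", "tap_settings", "toggle_wifi", "tap_back"]
print(weighted_walk(lambda _prev: events, steps=8, covered=set()))
```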
The contributions of this dissertation show that the novel combinatorial-based criteria using sequences of events and elements offer improvements to different testing strategies for mobile applications, including test suite reduction, prioritization, and generation.
2 |
Pairwise Testing for PLC Embedded Software. Charbachi, Peter; Eklund, Linus. Bachelor thesis in Computer Science, January 2016.
In this thesis we investigate the use of pairwise testing for PLC embedded software. We compare these automatically generated tests with tests created manually by industrial engineers. The tests were evaluated in terms of fault detection, code coverage, and cost. In addition, we compared pairwise testing with randomly generated tests of the same size as the pairwise tests. In order to automatically create test suites for PLC software, a previously created tool called Combinatorial Test Tool (CTT) was extended to support pairwise testing using the IPOG algorithm. Once test suites were created using CTT, they were executed on real industrial programs. Fault detection was measured using mutation analysis. The results of this thesis show that manual tests achieved better fault detection (8% better mutation score on average) than tests generated using pairwise testing. Although pairwise testing performed worse than manual testing in terms of fault detection, it achieved better fault detection on average than random tests of the same size. In addition, manual tests achieved on average 97.29% code coverage, compared to 93.95% for pairwise testing and 84.79% for random testing. Looking closely at all tests, manual testing performed equally well as pairwise testing in terms of achieved code coverage. Finally, the number of tests for manual testing was lower (12.98 tests on average) compared to pairwise and random testing (21.20 tests on average). Interestingly enough, for the majority of the programs pairwise testing resulted in fewer tests than manual testing.
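For readers unfamiliar with pairwise testing, the sketch below builds a small pairwise-style test set for three hypothetical PLC input parameters using a naive greedy strategy; it only illustrates the goal of covering every value pair for every parameter pair, and is not the IPOG algorithm implemented in CTT:

```python
from itertools import combinations, product

def required_pairs(params):
    """Every (parameter, value) pair combination that a pairwise suite must cover."""
    pairs = set()
    for (pi, vi_list), (pj, vj_list) in combinations(params.items(), 2):
        for vi, vj in product(vi_list, vj_list):
            pairs.add(((pi, vi), (pj, vj)))
    return pairs

def greedy_pairwise(params):
    """Naive greedy cover of all value pairs; real tools (e.g. IPOG) are far more efficient."""
    remaining = required_pairs(params)
    names = list(params)
    candidates = [dict(zip(names, combo)) for combo in product(*params.values())]
    suite = []
    while remaining:
        # Pick the full combination covering the most still-uncovered pairs.
        best = max(candidates, key=lambda t: sum(
            all(t[n] == v for n, v in pair) for pair in remaining))
        newly = {pair for pair in remaining if all(best[n] == v for n, v in pair)}
        if not newly:
            break
        suite.append(best)
        remaining -= newly
    return suite

# Invented PLC-style inputs, just to show the size reduction over exhaustive testing.
params = {"mode": ["auto", "manual"], "sensor": ["on", "off"], "relay": ["A", "B", "C"]}
suite = greedy_pairwise(params)
print(len(suite), "pairwise tests vs", 2 * 2 * 3, "exhaustive combinations")
```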
3 |
An approach to Natural Language understanding. Marlen, Michael Scott.
Doctor of Philosophy / Department of Computing and Information Sciences / David A. Gustafson / Natural language understanding over a set of sentences or a document is a challenging problem. We approach this problem using semantic extraction and an ontology for answering questions based on the data. There is more information in a sentence than is found by extracting the visible terms and the obvious relations between them. It is this hidden information that gives this solution its advantage over alternatives. This methodology was tested against the FraCas Test Suite with near-perfect results (correct answers) for the sections that are the focus of this paper (Generalized Quantifiers, Plurals, Adjectives, Comparatives, Verbs, and Attitudes). The results indicate that extracting the visible semantics as well as the unseen semantics and their interrelations, and using an ontology to reason over them, provides reliable and provable answers to questions, validating this technology.
4 |
Predicting mutation score using source code and test suite metrics. Jalbert, Kevin. 1 September 2012.
Mutation testing has traditionally been used to evaluate the effectiveness of test suites and provide confidence in the testing process. Mutation testing involves the creation of many versions of a program, each with a single syntactic fault. A test suite is evaluated against these program versions (i.e., mutants) in order to determine the percentage of mutants the test suite is able to identify (i.e., the mutation score). A major drawback of mutation testing is that even a small program may yield thousands of mutants, which can potentially make the process cost prohibitive. To improve the performance and reduce the cost of mutation testing, we proposed a machine learning approach to predict mutation score based on a combination of source code and test suite metrics. We conducted an empirical evaluation of our approach to evaluate its effectiveness using eight open source software systems.
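As a hedged sketch of the general idea only (the metrics, the learner, and the data below are made up and are not the thesis' actual feature set or pipeline), one could train a regressor to predict mutation scores from cheap-to-compute metrics instead of running full mutation analysis:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in feature matrix, e.g. [lines of code, cyclomatic complexity, #tests, #assertions].
X = rng.integers(1, 50, size=(200, 4)).astype(float)
# Synthetic target: mutation score in [0, 1]; real labels would come from a mutation tool.
y = np.clip(0.5 + 0.01 * X[:, 3] - 0.01 * X[:, 1] + rng.normal(0, 0.05, 200), 0.0, 1.0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
print("cross-validated R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```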
5 |
EVH2 protocol: Performance analysis and Wireshark dissector development. Åhman, Stefan; Wallstersson, Marcus. January 2012.
EVH2 is a proprietary application layer protocol developed by Aptilo Networks and used in their software product. Currently, the only way to inspect EVH2 traffic is by using their own application, which cannot inspect any traffic other than EVH2. Since Aptilo continuously develops this protocol, it is important to see how changes in the protocol affect its performance. This thesis examines possible ways to facilitate the use and development of the protocol. To analyse EVH2 network traffic along with traffic from other protocols, another approach is needed. Wireshark is an application capable of inspecting traffic of multiple protocols simultaneously and uses dissectors to decode each packet. This thesis describes the development and evaluation of a Wireshark plugin dissector for inspection of EVH2 traffic. Performance tests of EVH2 provide feedback about protocol changes. This thesis creates a platform for performance evaluation by introducing a test suite for performance testing. A performance evaluation of EVH2 was conducted using the developed test suite.
6 |
Understanding Test-Artifact Quality in Software Engineering. Tran, Huynh Khanh Vi. January 2022.
Context: The core of software testing is test artifacts, i.e., test cases, test suites, test scripts, test code, test specifications, and natural language tests. Hence, the quality of test artifacts can negatively or positively impact the reliability of the software testing process. Several empirical studies and secondary studies have investigated test artifact quality. Nevertheless, little is known about how practitioners themselves perceive test artifact quality, and the evidence on test artifact quality in the literature has not been synthesized in one place. Objective: This thesis aims to identify and synthesize the knowledge on test artifact quality from both academia and industry. Hence, our objectives are: (1) To understand practitioners' perspectives on test artifact quality, (2) To investigate how test artifact quality has been characterized in the literature, and (3) To increase the reliability of the research method for conducting systematic literature reviews (SLR) in software engineering. Method: In this thesis, we conducted an interview-based exploratory study and a tertiary study to achieve the first two objectives. We used the tertiary study as a case and referred to related observations from other researchers to achieve the last objective. Results: We provided two quality models based on the findings of the interview-based and tertiary studies. The two models were synthesized and combined to provide a broader view of test artifact quality. In addition, the context information that can be used to characterize the environment in which test artifact quality is investigated was aggregated from these studies' findings. Based on our experience in constructing and validating automated search results using a quality gold standard (QGS) in the tertiary study, we provided recommendations for QGS construction and proposed an extension to the current search validation approach. Conclusions: The context information and the combined quality model provide a comprehensive view of test artifact quality. Researchers can use the quality model to develop guidelines, templates for designing new test artifacts, or assessment tools for evaluating existing test artifacts. The model can also serve as a guideline for practitioners searching for test-artifact quality information, i.e., definitions of the quality attributes and measurements. For future work, we aim to investigate how to improve relevant test artifact quality attributes that are challenging to deal with.
7 |
User Interface Test Automation and its Challenges in an Industrial Scenario. Pradhan, Ligaj. January 2012.
The growing demand for UI test automation has triggered the development of many tools, and researchers and developers have been continuously working to further improve the existing approaches. If we look at the evolution of GUI testing, we can observe clear progress from manual testing towards complete automation. Numerous approaches have been proposed to automate the GUI testing process: record-and-playback tools, keyword-driven methodologies, event-flow exploration strategies, and model-based approaches are continuously evolving towards higher levels of automation. Similarly, new ideas and strategies to make these tests efficient are also emerging. Optimization of this resource-consuming activity is another very important aspect in this area. Dependencies between different tests can create deadlock scenarios when running larger test suites; the concept of an Ordered Test Suite can be used to cope with such dependencies. Following the Model Driven Architecture initiative by the Object Management Group, the global trend of Model Driven Engineering is gaining significant momentum in the field of model-based software development. Using the same principle, studies have also attempted to automatically generate tests from models: behavioral models can be built using model-driven approaches and analyzed to generate tests automatically. This master thesis addresses different approaches to graphical user interface test automation, some optimization issues and solutions, a case study conducted at a software company to automate user interface testing, and a model-driven approach for automatic test case generation.
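As a concrete illustration of the ordered test suite idea mentioned in the abstract (the test names and dependencies below are invented, and this is not the thesis' own tooling), dependent tests can be scheduled with a plain topological sort so that every test runs only after its prerequisites:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependencies: each test maps to the tests that must run before it.
dependencies = {
    "test_checkout": {"test_login", "test_add_to_cart"},
    "test_add_to_cart": {"test_login"},
    "test_logout": {"test_login"},
    "test_login": set(),
}

# static_order() raises graphlib.CycleError if the dependencies contain a cycle,
# which corresponds to the deadlock scenario mentioned above.
ordered_suite = list(TopologicalSorter(dependencies).static_order())
print(ordered_suite)  # e.g. ['test_login', 'test_add_to_cart', 'test_logout', 'test_checkout']
```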
8 |
Adaptation of Software Testability Concept for Test Suite Generation: A systematic review. Malla, Prakash; Gurung, Bhupendra. January 2012.
Context: Software testability, which is the degree to which a software artifact facilitates the process of testing, is not only an indication of test process effectiveness but also offers a new perspective on code development. Since more than fifty percent of total software development cost is related to testing activities, software testability has long been an area of improvement, aiming to make the development process more effective with respect to writing test cases and detecting faults. Objectives: The research in this thesis has the objective of proposing a conceptual framework that considers testability issues for simpler test suite generation and helps the concerned stakeholders achieve better testing effectiveness. We investigate testability factors and testability metrics primarily with the help of a systematic literature review, and the proposed framework's feasibility is evaluated with a case study. Methods: Initially, we conducted a literature review to gain broad knowledge of the domain and to identify key documents. The study then proceeds with a systematic literature review, guided by a review protocol, to collect testability factors and measurements. The framework is validated with a case study. The research documents are included from trusted e-databases, including Compendex, Inspec, IEEE Xplore, ACM Digital Library, Springer Link, and Scopus. Altogether, 36 primary documents are included in the study and results are extracted. Results: From the results of the systematic literature review, software testability factors and associated measurements are identified, and a framework for simpler test suite generation is constructed as a set of guidelines and evaluated with a case study. To make test suite generation simpler, we propose a framework based on FTA concepts and the breakdown of high-level testability factors into simpler, measurable forms. Conclusions: A number of different software testability factors are presented in different studies from different perspectives. We collect important testability factors and associated measurement methods, and we draw conclusions on the effect of testability on simpler test suite generation with the help of the framework evaluated by the case study.
9 |
Hybrid Approaches in Test Suite Prioritization. Nurmuradov, Dmitriy.
The rapid advancement of web and mobile application technologies has recently posed numerous challenges to the Software Engineering community, including how to cost-effectively test applications that have complex event spaces. Many software testing techniques attempt to cost-effectively improve the quality of such software. This dissertation primarily focuses on hybrid test suite prioritization, in which two or more criteria are used to perform prioritization, as it is often insufficient to use only a single criterion. The dissertation consists of the following contributions: (1) a weighted test suite prioritization technique that employs the distance between criteria as a weighting factor, (2) a coarse-to-fine grained test suite prioritization technique that uses a multilevel approach to increase the granularity of the criteria at each subsequent iteration, (3) the Caret-HM tool for Android user session-based testing that allows testers to record, replay, and create heat maps from user interactions with Android applications via a web browser, and (4) Android user session-based test suite prioritization techniques that utilize heuristics developed from user sessions created by Caret-HM. Each of the chapters empirically evaluates the respective techniques. The proposed techniques generally show improved or equally good performance when compared to the baselines, depending on the application under test. Further, this dissertation provides guidance to testers regarding the use of the proposed hybrid techniques.
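A minimal sketch of the weighted-hybrid idea in contribution (1), with invented scores: each test has a normalized score under two criteria, and a single weight blends them into one priority (the dissertation's distance-based weighting is more elaborate than this):

```python
def hybrid_prioritize(tests, weight=0.6):
    """Order tests by a weighted blend of two normalized criterion scores.

    tests: dict mapping test name -> (element_coverage_score, event_coverage_score),
           both assumed to be normalized to [0, 1]. Purely illustrative.
    """
    def priority(item):
        _, (elem_score, event_score) = item
        return weight * elem_score + (1.0 - weight) * event_score

    return [name for name, _ in sorted(tests.items(), key=priority, reverse=True)]

tests = {"t1": (0.9, 0.2), "t2": (0.4, 0.8), "t3": (0.7, 0.7)}
print(hybrid_prioritize(tests))  # ['t3', 't1', 't2'] with weight=0.6
```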
10 |
The utilization of log files generated by test executions: A systematic literature review. Gabaire, Elmi Bile. January 2023.
Context: Testing is an important activity in software development and is typically estimated to account for nearly half of the effort in the software development cycle. This puts a great demand on improving the artifacts involved in this task, such as test cases and test suites (collections of test cases). Objective: When executing test programs, it is typical to record runtime information associated with the test cases in the form of test execution logs or traces. The aim of this work is to explore how this information can be utilized to improve the software testing process. To this end, two main aspects are investigated: (1) test case generation and (2) different optimizations of existing test suites. Furthermore, the role of the logs in fault localization, in connection with improving existing test suites, is investigated. Method: A systematic literature review is conducted to investigate, identify, and analyze the existing literature on test case generation and test suite optimization that utilizes test execution logs. Results: After a rigorous search in six digital databases, 26 primary studies were identified. Five of the selected papers propose approaches in the context of test data generation, 8 papers suggest test case prioritization (TCP) techniques, 4 papers discuss approaches in test case selection (TCS), and 5 papers propose approaches in test suite minimization (TSM). Furthermore, we identified 3 papers that discuss fault localization and one paper that discusses the decomposition of large test cases into smaller single-purpose test cases using the logs from previous test executions. Conclusion: The test execution logs are a useful source of information for different testing activities. Regarding test case generation, the main theme observed is the use of genetic algorithms to generate appropriate test cases where the alternative might have been random test data generation methods. When it comes to improving existing test suites, several approaches within TCP, TCS, and TSM, such as similarity-based, modification-based, cluster-based, and search-based techniques, were put forward by the authors of the selected primary studies. Furthermore, several fault localization techniques using the logs were suggested.
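To illustrate one of the recurring themes (similarity-based prioritization over execution logs), the sketch below greedily orders tests so that each next test's log is as dissimilar as possible from those already selected; the log contents and the Jaccard measure are illustrative choices, not drawn from any specific primary study:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of log events."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def similarity_based_order(test_logs):
    """Greedy ordering: repeatedly pick the test least similar to those already chosen."""
    remaining = dict(test_logs)
    # Start with the test that produced the largest log (deterministic given dict order).
    first = max(remaining, key=lambda t: len(remaining[t]))
    order, chosen = [first], [remaining.pop(first)]
    while remaining:
        nxt = min(remaining,
                  key=lambda t: max(jaccard(remaining[t], c) for c in chosen))
        chosen.append(remaining.pop(nxt))
        order.append(nxt)
    return order

# Hypothetical log event sets extracted from previous test executions.
logs = {
    "t1": {"init", "db_open", "query", "db_close"},
    "t2": {"init", "http_get", "parse", "render"},
    "t3": {"init", "db_open", "query", "cache_hit"},
}
print(similarity_based_order(logs))
```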