111 |
Impact-Driven Regression Test Selection for Mainframe Business Systems / Dharmapurikar, Abhishek V. 25 July 2013 (has links)
No description available.
|
112 |
Falcon: A Testing Language to Support Test Creation and Comprehension / Kuhlman, Aaron J. 05 May 2023 (has links)
No description available.
|
113 |
Examining Introductory Computer Science Student Cognition When Testing Software Under Different Test Adequacy Criteria / Shin, Austin 01 August 2022 (has links) (PDF)
The ability to test software is invaluable in all areas of computer science, but it is often neglected in computer science curricula. Test adequacy criteria (TAC), tools that measure the effectiveness of a test suite, have been used as aids to improve software testing teaching practices, but little is known about how students respond to them. Studies have examined the cognitive processes of students programming and professional developers writing tests, but none have investigated how student testers test with TAC. If we are to improve how they are used in the classroom, we must start by understanding the different ways that they affect students’ thought processes as they write tests.
In this thesis, we take a grounded theory approach to reveal the underlying cognitive processes that students utilize as they test under no feedback, condition coverage, and mutation analysis. We recorded 12 students as they thought aloud while creating test suites under these feedback mechanisms, and then we analyzed these recordings to identify the thought processes they used. We present our findings in the form of the phenomena we identified, which can be further investigated to shed more light on how different TAC affect students as they write tests.
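As a concrete illustration (hypothetical, not drawn from the study) of how a TAC like mutation analysis gives feedback on a test suite: each mutant changes one operator in the code under test, and a suite is judged stronger the more mutants it "kills" (fails on).

```python
# Hypothetical sketch of mutation analysis as a test adequacy criterion.
# A mutant swaps one operator; a test suite "kills" the mutant if at
# least one test passes on the original but fails on the mutant.

def original(a, b):
    return a + b

def mutant(a, b):          # '+' mutated to '-'
    return a - b

def run_suite(fn, cases):
    """Return True if every ((args), expected) case passes for fn."""
    return all(fn(*args) == expected for args, expected in cases)

def mutant_killed(suite):
    """The suite must pass on the original and fail on the mutant."""
    return run_suite(original, suite) and not run_suite(mutant, suite)

# A weak suite: a + b == a - b whenever b == 0, so the mutant survives.
weak_suite = [((5, 0), 5)]
# A stronger suite distinguishes the mutant and kills it.
strong_suite = [((5, 0), 5), ((2, 3), 5)]
```

The surviving mutant is exactly the kind of feedback signal whose effect on student thinking the thesis investigates: it tells the tester that an input with a nonzero second argument is missing.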
|
114 |
Safety Critical Software - Test Coverage vs Remaining Faults / Sundell, Johan January 2022 (has links)
Safety-critical software systems have traditionally been found in the aerospace, nuclear and medical domains. As technology advances and software complexity increases, such systems can be found in more and more applications, e.g. self-driving cars. These systems need to meet exceptionally strict standards in terms of dependability. Proving compliance is a challenge for the industry. The regulatory bodies often require a certain amount of testing to be performed but do not require evidence of a given failure rate (which for software is hard to deal with compared to hardware). This Licentiate thesis discusses how to quantify test results and analyses what conclusions can be drawn from a given test effort, in terms of remaining faults in the software.
|
115 |
Implementation and testing of a blackbox and a whitebox fuzzer for file compression routines / Tobkin, Toby 01 May 2013 (has links)
Fuzz testing is a software testing technique that has risen to prominence over the past two decades. The unifying feature of all fuzz testers (fuzzers) is their ability to automatically produce random test cases for software. Fuzzers can generally be placed in one of two classes: black-box or white-box. Black-box fuzzers do not derive information from a program's source or binary to restrict the domain of their generated input, while white-box fuzzers do. One tradeoff in the choice between black-box and white-box fuzzing is the rate at which inputs can be produced: since black-box fuzzers need not do any "thinking" about the software under test to generate inputs, they can generate more inputs per unit time, all other factors being equal. The question of how black-box and white-box fuzzing should be used together for ideal economy of software testing has been posed and even speculated about; however, to my knowledge, no publicly available study intended to characterize an answer exists. The purpose of this thesis is to provide an initial exploration of the bug-finding characteristics of black-box and white-box fuzzers. A black-box fuzzer is implemented and then extended with a concolic execution engine to make it white-box. Both versions of the fuzzer are used to run tests on some small programs and on parts of a file compression library.
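A black-box fuzzer in the spirit described above can be sketched in a few lines (this is an illustrative stand-in, not the thesis's implementation): generate random byte strings, feed them to a compression routine, and record any input that raises something other than the routine's documented error.

```python
# Minimal black-box fuzzing sketch against a file compression routine.
# The target's expected rejection (zlib.error) is tolerated; any other
# exception is recorded as a potential bug.
import random
import zlib

def blackbox_fuzz(target, iterations=1000, max_len=64, seed=42):
    rng = random.Random(seed)          # seeded for reproducibility
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256)
                     for _ in range(rng.randrange(max_len)))
        try:
            target(data)
        except zlib.error:
            pass                       # malformed input correctly rejected
        except Exception:
            crashes.append(data)       # unexpected failure: a finding
    return crashes

# zlib.decompress is a convenient, well-hardened demonstration target,
# so no crashes are expected here.
crashes = blackbox_fuzz(zlib.decompress)
```

Note how the input generator consults nothing about the target, which is exactly why such fuzzers achieve a high input rate; a white-box variant would replace the random generator with one guided by, e.g., concolic execution.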
|
116 |
TOOL DEVELOPMENT FOR TEST OPTIMIZATION PURPOSES / Cako, Gezim January 2021 (has links)
Background: Software testing is a crucial part of the system development life-cycle; it pays off in detecting flaws and defects, ultimately leading to high-quality products. Generally, software testing is performed either manually by a human operator or automatically. As test cases are written and executed, the testing process checks whether all the requirements are covered and the system exhibits the expected behavior. A great portion of the cost and time of software development is spent on testing; therefore, depending on the type of the software, test optimization is needed and is presented as a solution for cost efficiency and time-saving. Aim: This thesis aims to propose and evaluate the improved sOrTES+ tool for test optimization purposes, consisting of selection, prioritization, and scheduling of test cases, integrated into a dynamic user interface. Method: In this thesis, test optimization is addressed in two aspects: low-level requirements and high-level requirements. Our solution analyzes these requirements to detect the dependencies between test cases. Thus, we propose sOrTES+, a tool that uses three different scheduling techniques, Greedy, Greedy DO (direct output), and Greedy TO (total output), for test optimization. These techniques are integrated into a dynamic user interface that allows testers to manage their projects, see useful information about test cases and requirements, store the executed test cases while scheduling the remaining ones for execution, and switch between the scheduling techniques according to the project requirements. Finally, we demonstrated the tool's applicability and compared it with the existing testing techniques used by our industrial partner, Alstom, evaluating efficiency in terms of requirement coverage and troubleshooting time. Results: Our comparison shows that our solution improves requirement coverage by 26.4% while decreasing troubleshooting time by 6%.
Conclusion: Based on our results, we conclude that our proposed tool, sOrTES+, can be used for test optimization and that it performs more efficiently than the existing methods used by our industrial partner Alstom.
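The core idea behind requirement-coverage-driven greedy scheduling can be sketched as a set-cover-style loop (a simplification for illustration; sOrTES+'s actual techniques also account for dependencies between test cases): repeatedly schedule the test case that covers the most not-yet-covered requirements.

```python
# Greedy test scheduling sketch: maximize new requirement coverage at
# each step. Test names and requirement ids below are made up.

def greedy_schedule(tests):
    """tests: dict mapping test name -> set of requirement ids it covers.
    Returns (execution order, set of requirements covered)."""
    remaining = dict(tests)
    covered, order = set(), []
    while remaining:
        # Pick the test adding the most not-yet-covered requirements.
        name = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[name] - covered:
            break                      # no test adds new coverage
        covered |= remaining.pop(name)
        order.append(name)
    return order, covered

tests = {
    "t1": {"R1", "R2"},
    "t2": {"R2", "R3", "R4"},
    "t3": {"R4"},
}
order, covered = greedy_schedule(tests)   # t2 first (3 new), then t1
```

Here `t3` is never scheduled because everything it covers is already covered, which is how such a scheduler trims redundant executions and saves testing time.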
|
117 |
Programming Language and Tools for Automated Testing / Tan, Roy Patrick 27 August 2007 (has links)
Software testing is a necessary and integral part of the software quality process. It is estimated that inadequate software testing infrastructure costs the US economy between $22.2 billion and $59.5 billion annually.
We present Sulu, a programming language designed with automated unit testing specifically in mind, as a demonstration of how software testing may be more integrated and automated into the software development process. Sulu's runtime and tools support automated testing from end to end; automating the generation, execution, and evaluation of test suites using both code coverage and mutation analysis. Sulu is also designed to fully integrate automatically generated tests with manually written test suites. Sulu's tools incorporate pluggable test case generators, which enables the software developer to employ different test case generation algorithms.
To show the effectiveness of this integrated approach, we designed an experiment to evaluate a family of test suites generated using one test case generation algorithm, which exhaustively enumerates every sequence of method calls within a certain bound. The results show over 80% code coverage and high mutation coverage for the most comprehensive test suite generated. / Ph. D.
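The generation algorithm described, exhaustive enumeration of bounded method-call sequences, can be sketched as follows (an illustrative toy, not Sulu's generator; the `Counter` class and its invariant are invented for the example):

```python
# Bounded-exhaustive test generation sketch: enumerate every sequence of
# method calls up to a length bound and check an invariant after each call.
from itertools import product

class Counter:
    def __init__(self):
        self.value = 0
    def inc(self):
        self.value += 1
    def reset(self):
        self.value = 0

def enumerate_sequences(methods, bound):
    """Yield every method-call sequence of length 1..bound."""
    for length in range(1, bound + 1):
        yield from product(methods, repeat=length)

def run_generated_tests(bound=3):
    failures = []
    for seq in enumerate_sequences(["inc", "reset"], bound):
        obj = Counter()                  # fresh object per test case
        for m in seq:
            getattr(obj, m)()
            if obj.value < 0:            # invariant: never negative
                failures.append(seq)
                break
    return failures
```

With two methods and a bound of 3 this yields 2 + 4 + 8 = 14 test cases, which shows why the bound matters: the suite size grows exponentially in sequence length, just as the "most comprehensive" generated suites in the experiment are the largest.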
|
118 |
Partitioning Strategies to Enhance Symbolic Execution / Marcellino, Brendan Adrian 11 August 2015 (links)
Software testing is a fundamental part of the software development process. However, testing is still costly and consumes about half of the development cost. The path explosion problem often requires considering an extremely large number of paths in order to reach a specific target. Symbolic execution can reduce this cost by using symbolic values and heuristic exploration strategies. Although various exploration strategies have been proposed in the past, the number of Satisfiability Modulo Theories (SMT) solver calls required to reach a target is still large, resulting in long execution times for programs containing many paths. In this thesis, we present two partitioning strategies to mitigate this problem, consequently reducing unnecessary SMT solver calls as well. In sequential partitioning, code sections are analyzed sequentially to take advantage of infeasible paths discovered in earlier sections. In dynamic partitioning, applied to code in SSA form, the code sections are analyzed in a non-consecutive order guided by data dependency metrics within the sections. Experimental results show that both strategies achieve significant speedup by reducing the number of unnecessary solver calls in large programs. More than 1000x speedup over conflict-driven learning can be achieved in large programs. / Master of Science
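The path explosion problem, and the pruning payoff of learning infeasible-path facts early, can be illustrated with a toy enumeration (hypothetical numbers, not the thesis's benchmarks): a program with n independent branches has 2**n paths, and ruling out one pair of conflicting branch outcomes discards a quarter of them before any solver call.

```python
# Toy illustration of path explosion and infeasibility pruning.
from itertools import product

def enumerate_paths(n_branches):
    """Each path is a tuple of branch outcomes (True = taken)."""
    return list(product([True, False], repeat=n_branches))

def enumerate_feasible(n_branches, conflict=(0, 1)):
    """Drop paths where two conflicting branches are both taken, a
    stand-in for an infeasible-path fact learned from an earlier
    code section, as in sequential partitioning."""
    i, j = conflict
    return [p for p in enumerate_paths(n_branches)
            if not (p[i] and p[j])]

total = len(enumerate_paths(10))        # 2**10 = 1024 paths
feasible = len(enumerate_feasible(10))  # one conflict prunes 256 of them
```

In a real symbolic executor each surviving path would still cost an SMT solver call, which is why pruning whole families of paths up front translates directly into fewer solver invocations.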
|
119 |
Predictive software design measures / Love, Randall James 11 June 2009 (has links)
This research develops a set of predictive measures enabling software testers and designers to identify and target potential problem areas for additional and/or enhanced testing. Predictions are available as early in the design process as requirements allocation and as late as code walk-throughs. These predictions are based on characteristics of the design artifacts prior to coding.
Prediction equations are formed at established points in the software development process called milestones. Four areas of predictive measurement are examined at each design milestone for candidate predictive metrics. These areas are: internal complexity, information flow, defect categorization, and the change in design. Prediction equations are created from the set of candidate predictive metrics at each milestone. The most promising of the prediction equations are selected and evaluated. The single "best" prediction equation is selected at each design milestone.
The resulting predictions are promising in terms of ranking areas of the software design by the number of predicted defects. Predictions of the actual number of defects are less accurate. / Master of Science
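The shape of such a prediction equation can be sketched with a one-metric least-squares fit (the metric values and defect counts below are invented for illustration; the thesis's actual candidate metrics and milestone equations differ):

```python
# Hypothetical sketch: form a defect-prediction equation
#   defects = a + b * complexity
# from historical design-time data by ordinary least squares.
from statistics import mean

def fit_line(xs, ys):
    """Least-squares fit; returns (intercept, slope)."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

# Internal-complexity scores and defect counts for past modules (made up).
complexity = [2, 4, 6, 8, 10]
defects = [1, 2, 3, 4, 5]

a, b = fit_line(complexity, defects)
predicted = a + b * 12        # predicted defects for a new module
```

Consistent with the finding above, equations like this are typically more trustworthy for ranking modules by predicted defect-proneness than for pinning down exact defect counts.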
|
120 |
Measuring the Software Development Process to Enable Formative Feedback / Kazerouni, Ayaan Mehdi 16 April 2020 (has links)
Graduating CS students face well-documented difficulties upon entering the workforce, with reports of a gap between what they learn and what is expected of them in industry. Project management, software testing, and debugging have been repeatedly listed as common "knowledge deficiencies" among newly hired CS graduates. Similar difficulties manifest themselves on a smaller scale in upper-level CS courses, like the Data Structures and Algorithms course at Virginia Tech: students are required to develop large and complex projects over a three to four week lifecycle, and it is common to see close to a quarter of the students drop or fail the course, largely due to the difficult and time-consuming nature of the projects. My research is driven by the hypothesis that regular feedback about the software development process, delivered during development, will help ameliorate these difficulties. Assessment of software currently tends to focus on qualities like correctness, code coverage from test suites, and code style. Little attention or tooling has been developed for the assessment of the software development process. I use empirical software engineering methods like IDE-log analysis, software repository mining, and semi-structured interviews with students to identify effective and ineffective software practices. Using the results of these analyses, I have worked on assessing students' development in terms of time management, test writing, test quality, and other "self-checking" behaviours like running the program locally or submitting to an oracle of instructor-written test cases. The goal is to use this information to formulate formative feedback about the software development process.
In addition to educators, this research is relevant to software engineering researchers and practitioners, since the results from these experiments are based on the work of upper-level students who grapple with issues of design and work-flow that are not far removed from those faced by professionals in industry. / Doctor of Philosophy / Graduating CS students face well-documented difficulties upon entering the workforce, with reports of a gap between what they learn and what is expected of them as professional software developers. Project management, software testing, and debugging have been repeatedly listed as common "knowledge deficiencies" among newly hired CS graduates. Similar difficulties manifest themselves on a smaller scale in upper-level CS courses, like the Data Structures and Algorithms course at Virginia Tech: students are required to develop large and complex software projects over a three to four week lifecycle, and it is common to see close to a quarter of the students drop or fail the course, largely due to the difficult and time-consuming nature of the projects. The development of these projects necessitates adherence to a disciplined software process, i.e., incremental development, testing, and debugging of small pieces of functionality. My research is driven by the hypothesis that regular feedback about the software development process, delivered during development, will help ameliorate these difficulties. However, in educational contexts, assessment of software currently tends to focus on properties of the final product like correctness, quality of automated software tests, and adherence to code style requirements. Little attention or tooling has been developed for the assessment of the software development process.
In this dissertation, I quantitatively characterise students' software development habits, using data from numerous sources: usage logs from students' software development environments, detailed sequences of snapshots showing the project's evolution over time, and interviews with the students themselves. I analyse the relationships between students' development behaviours and their project outcomes, and use the results of these analyses to determine the effectiveness or ineffectiveness of students' software development processes. I have worked on assessing students' development in terms of time management, test writing, test quality, and other "self-checking" behaviours like running their programs locally or submitting them to an online system that uses instructor-written tests to generate a correctness score. The goal is to use this information to assess the quality of one's software development process in a way that is formative instead of summative, i.e., it can be done while students work toward project completion as opposed to after they are finished. For example, if we can identify procrastinating students early in the project timeline, we could intervene as needed and possibly help them to avoid the consequences of bad project management (e.g., unfinished or late project submissions). In addition to educators, this research is relevant to software engineering researchers and practitioners, since the results from these experiments are based on the work of upper-level students who grapple with issues of design and work-flow that are not far removed from those faced by professionals in industry.
|