31. Effective Randomized Concurrency Testing with Partial Order Methods. Yuan, Xinhao (January 2020)
Modern software systems are pervasively concurrent, both to utilize parallel hardware and to perform asynchronous tasks. Writing correct concurrent programs, however, remains challenging for large, real-world systems. Because the concurrent events of a system can interleave arbitrarily, unexpected interleavings may drive the system into undefined states, resulting in denial of service, performance degradation, inconsistent data, security vulnerabilities, and more. To detect such concurrency errors, concurrency testing repeatedly explores the interleavings of a system to find the ones that induce errors. Traditional systematic testing, however, suffers from the intractable number of interleavings of complex real-world systems. Moreover, each iteration in systematic testing adjusts the explored interleaving with a minimal change that swaps the ordering of two events, so exploration may waste time in large homogeneous sub-spaces that all lead to the same testing result. On real-world systems, systematic testing thus often fails to reveal even simple errors within a limited time budget. Randomized testing, on the other hand, samples interleavings of the system to quickly surface simple errors with substantial probability, but it may still explore equivalent interleavings that do not affect the testing results. Such redundancy weakens both the probabilistic guarantees and the practical performance of randomized testing.
Towards effective concurrency testing, this thesis combines partial order semantics with randomized testing to find errors with strong probabilistic guarantees. First, we propose partial order sampling (POS), a new randomized testing framework that samples interleavings of a concurrent program with a novel partial order method. It effectively and simultaneously explores the orderings of all events of the program, and it has a high probability of manifesting any error caused by unexpected interleavings. We formally prove that our approach has exponentially better probabilistic guarantees of sampling any given partial order of the program than state-of-the-art approaches. Our evaluation over 32 known concurrency errors in public benchmarks shows that our framework performed 2.6 times better than state-of-the-art approaches at finding the errors. Second, we describe Morpheus, a new practical concurrency testing tool that applies POS to high-level distributed systems in Erlang. Morpheus leverages dynamic analysis to identify and predict critical events to reorder during testing, significantly improving the exploration effectiveness of POS. In a case study applying Morpheus to four popular distributed systems in Erlang, including Mnesia, the database system in the standard Erlang distribution, and RabbitMQ, the message broker service, Morpheus found 11 previously unknown errors leading to unexpected crashes, deadlocks, and inconsistent states, demonstrating the effectiveness and practicality of our approach.
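The core of POS can be sketched as a random-priority scheduler (a minimal sketch, assuming a toy event model of reads and writes on shared variables; this illustrates the idea, not the thesis's implementation):

```python
import random

def races(e1, e2):
    # Two events race if they access the same variable and at least one
    # is a write (a simplifying assumption for this toy event model).
    return e1[0] == e2[0] and "w" in (e1[1], e2[1])

def pos_sample(threads):
    """Sample one interleaving of `threads`, where each thread is a list
    of events (var, op) with op in {"r", "w"}.  Every pending event holds
    a random priority; each step runs the highest-priority enabled event,
    then refreshes the priorities of events racing with it."""
    pending = {t: 0 for t, evs in enumerate(threads) if evs}  # next event index
    prio = {t: random.random() for t in pending}              # random priorities
    schedule = []
    while pending:
        t = max(pending, key=prio.get)          # highest-priority enabled event
        event = threads[t][pending[t]]
        schedule.append((t, event))
        pending[t] += 1
        if pending[t] == len(threads[t]):       # thread t is finished
            del pending[t], prio[t]
        else:
            prio[t] = random.random()           # fresh priority for t's next event
        for u in pending:                       # refresh racing events' priorities
            if u != t and races(threads[u][pending[u]], event):
                prio[u] = random.random()
    return schedule

# Two threads contending on shared variable "x":
print(pos_sample([[("x", "w"), ("x", "r")], [("x", "w")]]))
```

Repeated calls sample different interleavings, with the random priorities spreading probability mass over distinct orderings of racing events rather than over equivalent schedules.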
32. Predictive software design measures. Love, Randall James (11 June 2009)
This research develops a set of predictive measures enabling software testers and designers to identify and target potential problem areas for additional and/or enhanced testing. Predictions are available as early in the design process as requirements allocation and as late as code walk-throughs. These predictions are based on characteristics of the design artifacts prior to coding.
Prediction equations are formed at established points in the software development process called milestones. Four areas of predictive measurement are examined at each design milestone for candidate predictive metrics. These areas are: internal complexity, information flow, defect categorization, and the change in design. Prediction equations are created from the set of candidate predictive metrics at each milestone. The most promising of the prediction equations are selected and evaluated. The single "best" prediction equation is selected at each design milestone.
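As an illustration, a prediction equation at one milestone might be formed by least squares over such metrics (a hypothetical sketch; the metric values and names below are invented for exposition and the thesis's actual model-selection procedure is not shown):

```python
import numpy as np

# Hypothetical design metrics collected at one milestone for past modules:
# columns = internal complexity, information flow, categorized defects to
# date, and design-change count.
X = np.array([
    [12.0,  8.0, 1.0, 3.0],
    [30.0, 40.0, 4.0, 7.0],
    [ 7.0,  2.0, 0.0, 1.0],
    [22.0, 18.0, 2.0, 5.0],
    [41.0, 55.0, 6.0, 9.0],
])
y = np.array([2.0, 9.0, 1.0, 5.0, 12.0])  # defects later found per module

# Fit a linear prediction equation y ~ X*b + c by least squares.
A = np.hstack([X, np.ones((X.shape[0], 1))])  # append intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict defects for a new module (metrics plus intercept term),
# then rank modules by predicted defects to target testing effort.
new_module = np.array([25.0, 20.0, 3.0, 6.0, 1.0])
print("predicted defects:", new_module @ coef)
```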
The resulting predictions are promising in terms of ranking areas of the software design by the number of predicted defects. Predictions of the actual number of defects are less accurate. / Master of Science
33. Minimizing software testing time without degrading reliability. Rocke, Adam Jay (01 January 1999)
No description available.
34. Model-based automation of statistical testing of software. Hu, Xiaomei (01 October 2000)
No description available.
35. A software tool to support the generation of optimal Markov chain usage probabilities. Tripatra, Ponpat (01 July 2001)
No description available.
36. Analysis of historical test artifacts. Hua, Rong (01 April 2001)
No description available.
37. Adaptive Sampling for Targeted Software Testing. Shah, Abhishek (January 2024)
Targeted software testing is a critical task in the development of secure software. Its core challenge is to generate many inputs that reach specific target locations in the code of a given program. The task is NP-hard in theory, and real-world programs have very large input spaces and many lines of code, making it difficult in practice.
In this thesis, I introduce a new approach for targeted software testing based on adaptive sampling. The key insight is to reduce the original problem to a sequence of approximate counting problems, and I apply this approach to targeted software testing in two stages.
First, to find a single target-reaching input when no such input is given, I develop a new search algorithm, MC2, that adaptively uses approximate-count feedback with probabilistic bisection to narrow down which input region is more likely to contain a target-reaching input.
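A minimal sketch of the probabilistic-bisection idea, assuming a one-dimensional integer input space and a hypothetical noisy counting oracle (the real MC2 operates on program input spaces with approximate model counting):

```python
import random

def approx_count(lo, hi):
    """Hypothetical noisy oracle: estimated number of target-reaching
    inputs in [lo, hi). Faked here with a hidden target for the demo."""
    TARGET = 742
    exact = 1 if lo <= TARGET < hi else 0
    return exact if random.random() < 0.8 else 1 - exact  # noisy answer

def mc2_search(lo, hi, steps=200):
    """Probabilistic bisection: keep a belief distribution over where a
    target-reaching input lies; each step, ask which half looks denser
    and shift probability mass toward that half."""
    belief = {x: 1.0 / (hi - lo) for x in range(lo, hi)}
    for _ in range(steps):
        xs = sorted(belief)
        acc, mid = 0.0, xs[0]
        for x in xs:                      # median of the belief distribution
            acc += belief[x]
            if acc >= 0.5:
                mid = x
                break
        left_denser = approx_count(lo, mid) >= approx_count(mid, hi)
        for x in xs:                      # boost the half the oracle favors
            belief[x] *= 1.5 if ((x < mid) == left_denser) else 0.5
        total = sum(belief.values())
        for x in xs:
            belief[x] /= total
    return max(belief, key=belief.get)    # most likely target-reaching input

print(mc2_search(0, 1024))
```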
Second, given a single target-reaching input, I develop a new set-approximation algorithm, ProgramSampler, that adaptively learns an approximation of the set of target-reaching inputs from approximate-count feedback; the set approximation can then be sampled uniformly and efficiently to obtain many target-reaching inputs.
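In the same toy setting, the set-approximation stage might look like the following sketch (the names `reaches_target`, `estimate_density`, and `learn_interval` are hypothetical, and ProgramSampler itself learns far richer approximations than a single interval):

```python
import random

def reaches_target(x):
    """Hypothetical instrumented program: does input x reach the target?"""
    return 700 <= x < 800

def estimate_density(lo, hi, trials=50):
    """Approximate-count feedback: estimated fraction of inputs in
    [lo, hi) that reach the target, from random sampling."""
    hits = sum(reaches_target(random.randrange(lo, hi)) for _ in range(trials))
    return hits / trials

def learn_interval(seed, space_lo, space_hi, min_density=0.9):
    """Grow an interval around a known target-reaching input while the
    estimated density of target-reaching inputs stays high."""
    lo, hi = seed, seed + 1
    while lo > space_lo and estimate_density(lo - 1, hi) >= min_density:
        lo -= 1
    while hi < space_hi and estimate_density(lo, hi + 1) >= min_density:
        hi += 1
    return lo, hi

lo, hi = learn_interval(seed=742, space_lo=0, space_hi=1024)
samples = [random.randrange(lo, hi) for _ in range(10)]  # uniform over the set
print((lo, hi), samples)
```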
Backed by theoretical guarantees, these techniques have been highly effective in practice, outperforming existing methods by 1-2 orders of magnitude on average.
38. A decision support system framework for testing and evaluating software in organisations. Sekgweleo, Tefo Gordon (January 2018)
Thesis (DPhil (Informatics))--Cape Peninsula University of Technology, 2018. / Increasingly, organisations in South Africa and across the world rely on software for various reasons, such as competitiveness and sustainability. The software is either developed in-house or purchased off the shelf. Irrespective of how it was acquired, software encounters challenges from implementation through to the support and use stages. These challenges sometimes hinder, and can be prohibitive to, the processes and activities that the software is intended to enable and support. The majority of the challenges encountered with software are attributed to the fact that it was not tested, or was not appropriately tested, before implementation. Some of these challenges have been costly to many organisations, particularly in South Africa, and as a result some organisations have fallen short in their efforts toward growth, competitiveness and sustainability.

The challenges stem from the fact that there are no testing tools and methods that can easily be customised for an organisation's purposes. Consequently, some organisations adopt multiple tools and methods for the same testing purposes, which has not solved the problem, as the challenges persist among South African organisations.

This study was undertaken in response to the challenges stated above. The aim was to develop a decision support system framework that any organisation can use for software testing, owing to its flexibility for customisation. The interpretivist and inductive approaches were employed, together with qualitative methods and a case study design. Three South African organisations, a private company, a public institution, and a small to medium enterprise (SME), were used as cases, selected according to a set of criteria.

The analysis of the data was guided by two sociotechnical theories, actor-network theory (ANT) and diffusion of innovation (DOI), applied complementarily because of their different focuses. ANT focuses on actors, both human and non-human, on the heterogeneity of networks, and on the relationships between actors within networks, including the interactions that happen at different moments as translated within the heterogeneous networks. ANT was therefore employed to examine, and gain a better understanding of, the factors that influence software testing in organisations. DOI focuses on how new ideas are diffused in an environment, with particular attention to the innovation-decision process, which comprises five stages: knowledge, persuasion, decision, implementation and confirmation.

Findings from the data analysis of the three cases were further interpreted, and based on this interpretation a decision support system framework was developed. The framework is intended to be of interest to software developers, software project managers and other stakeholders and, most importantly, to guide software testers in their task of testing software. The research is thus intended to benefit organisations and academia through its theoretical, practical and methodological contributions, as detailed in chapter seven (the conclusion).
In conclusion, even though this research is rigorous, comprehensive and holistic, there is room for future studies. I propose that future research address the measurement of software testing, and that sociotechnical theories such as structuration theory and the technology acceptance model be considered in the analysis of such studies.
39. Identifying Testing Requirements for Modified Software. Apiwattanapong, Taweesup (09 July 2007)
Throughout its lifetime, software must be changed for many reasons, such as bug fixing, performance tuning, and code restructuring. Testing modified software is the main activity performed to gain confidence that changes behave as they are intended and do not have adverse effects on the rest of the software. A fundamental problem of testing evolving software is determining whether test suites adequately exercise changes and, if not, providing suitable guidance for generating new test inputs that target the modified behavior. Existing techniques evaluate the adequacy of test suites based only on control- and data-flow testing criteria. They do not consider the effects of changes on program states and, thus, are not sufficiently strict to guarantee that the modified behavior is exercised. Also, because of the lack of this guarantee, these techniques can provide only limited guidance for generating new test inputs.
This research has developed techniques that assist testers in testing evolving software and provide confidence in the quality of modified versions. In particular, it has developed a technique to identify testing requirements ensuring that the test cases satisfying them will result in different program states at preselected parts of the software. It has also developed supporting techniques for identifying these requirements, including (1) a differencing technique, which computes differences and correspondences between two software versions, and (2) two dynamic-impact-analysis techniques, which identify parts of the software that are likely affected by changes with respect to a set of executions.
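To make the flavor concrete, a naive function-level version of differencing plus change-based test selection might look as follows (a hedged sketch with hypothetical names; the thesis's techniques additionally reason about program states rather than mere coverage):

```python
def diff_functions(old_version, new_version):
    """Naive differencing: given maps of function name -> source text,
    report functions whose text changed or that were newly added."""
    return {name for name, body in new_version.items()
            if old_version.get(name) != body}

def select_tests(changed, coverage):
    """Select tests whose coverage (test -> set of functions executed)
    touches any changed function; changed functions covered by no test
    need new inputs."""
    selected = {t for t, funcs in coverage.items() if funcs & changed}
    uncovered = changed - set().union(*coverage.values())
    return selected, uncovered

old = {"parse": "def parse(s): ...", "eval": "def eval(t): ..."}
new = {"parse": "def parse(s): ... # fixed", "eval": "def eval(t): ...",
       "optimize": "def optimize(t): ..."}
coverage = {"test_parse": {"parse"}, "test_eval": {"eval", "parse"}}

changed = diff_functions(old, new)      # {'parse', 'optimize'}
print(select_tests(changed, coverage))  # rerun both tests; 'optimize' uncovered
```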
40. Efficient specification-based testing using incremental techniques. Uzuncaova, Engin (10 October 2012)
As software systems grow in complexity, the need for efficient automated techniques for design, testing and verification becomes ever more critical. Specification-based testing provides an effective approach for checking the correctness of software in general, and constraint-based analysis using specifications enables checking rich properties by automating the generation of test inputs. As specifications get more complex, however, existing analyses face a scalability problem due to state explosion.

This dissertation introduces a novel approach to analyze declarative specifications incrementally; presents a constraint prioritization and partitioning methodology to enable efficient incremental analyses; defines a suite of optimizations to improve the analyses further; introduces a novel approach for testing software product lines; and provides an experimental evaluation that shows the feasibility and scalability of the approach.

The key insight behind the incremental technique is declarative slicing, a new class of optimizations. The optimizations are inspired by traditional program slicing for imperative languages but apply to analyzable declarative languages in general, and Alloy in particular. We introduce a novel algorithm for slicing declarative models. Given an Alloy model, our fully automatic tool, Kato, partitions the model into a base slice and a derived slice using constraint prioritization. As opposed to the conventional use of the Alloy Analyzer, where models are analyzed as a whole, we perform the analysis incrementally, i.e., in several steps. A satisfying solution to the base slice is systematically extended to generate a solution for the entire model, while unsatisfiability of the base implies unsatisfiability of the entire model. We show how our incremental technique enables different analysis tools and solvers to be used in synergy to optimize our approach further; compared to the conventional use of the Alloy Analyzer, this yields even greater overall performance improvements in solving declarative models.

Incremental analyses have a natural application in the software product line domain. A product line is a family of programs built from features that are increments in program functionality. Given properties of features as first-order logic formulas, we automatically generate test inputs for each product in a product line. We show how to map a formula that specifies a feature into a transformation that defines incremental refinement of test suites. Our experiments using different data structure product lines show that our approach can provide an order of magnitude speed-up over conventional techniques. / text
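The base/derived slicing idea can be sketched with an off-the-shelf constraint solver (a simplified illustration using hypothetical constraints in Python with Z3; Kato itself slices and solves Alloy models, which this toy only approximates):

```python
from z3 import Ints, Solver, sat

x, y, z = Ints("x y z")

base = [x > 0, x < 10, x % 2 == 0]     # base slice (high-priority constraints)
derived = [y == x + z, z > x, y < 20]  # derived slice (remaining constraints)

s = Solver()
s.add(*base)
while s.check() == sat:
    m = s.model()
    ext = Solver()
    ext.add(x == m[x])   # fix the base solution and try to extend it
    ext.add(*derived)
    if ext.check() == sat:
        print("base:", m, "extension:", ext.model())
        break
    s.add(x != m[x])     # this base solution cannot be extended; exclude it
else:
    # Unsatisfiability of the base slice implies the whole model is UNSAT.
    print("model unsatisfiable (base slice exhausted or UNSAT)")
```

The two-phase structure mirrors the incremental analysis: a cheap base problem is solved first, its solutions are extended to the full model, and a failed base check short-circuits the whole analysis.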