31 |
Automatic software testing via mining software data. / 基於軟件數據挖掘的自動軟件測試 / CUHK electronic theses & dissertations collection / Ji yu ruan jian shu ju wa jue de zi dong ruan jian ce shi. January 2011
Zheng, Wujie. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 128-141). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
|
32 |
An experimental study of cost cognizant test case prioritization / Goel, Amit. 02 December 2002
Test case prioritization techniques schedule test cases for regression testing
in an order that increases their ability to meet some performance goal. One performance
goal, rate of fault detection, measures how quickly faults are detected
within the testing process. The APFD metric has been proposed for measuring
the rate of fault detection. This metric applies, however, only in cases in which
test costs and fault costs are uniform. In practice, fault costs and test costs
are not uniform. For example, some faults which lead to system failures might
be more costly than faults which lead to minor errors. Similarly, a test case
that runs for several hours is much more costly than a test case that runs for
a few seconds. Previous work has thus provided a second metric, APFD[subscript c], for
measuring rate of fault detection, that incorporates test costs and fault costs.
However, studies of this metric thus far have been limited to abstract distribution
models of costs. These distribution models did not represent actual fault
costs and test costs for software systems.
In this thesis, we describe some practical ways to estimate real fault costs
and test costs for software systems, based on operational profiles and test execution timings. Further, we define some new cost-cognizant prioritization techniques
which focus on the APFD[subscript c] metric. We report results of an empirical
study investigating the rate of "units-of-fault-cost-detected-per-unit-test-cost"
across various cost-cognizant prioritization techniques and tradeoffs between
techniques.
The results of our empirical study indicate that cost-cognizant test case prioritization
techniques can substantially improve the rate of fault detection of
test suites. The results also provide insights into the tradeoffs among various
prioritization techniques. For example: (1) techniques incorporating feedback
information (information from previous tests) outperformed those without any
feedback information; (2) technique effectiveness differed most when faults were
relatively difficult to detect; (3) in most cases, technique performance was similar
at function and statement level; (4) surprisingly, techniques considering
change location did not perform as well as expected. The study also reveals
several practical issues that might arise in applying test case prioritization, as
well as opportunities for future work. / Graduation date: 2003
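The cost-cognizant metric this abstract centers on can be sketched in a few lines. The function below follows the published APFD[subscript c] definition (each fault's severity is weighted by the test cost remaining from its detecting test, with the detecting test itself counted at half cost); with unit test costs and unit severities it reduces to the original APFD, 1 - (TF1 + ... + TFm)/(nm) + 1/(2n). A minimal sketch, with argument names our own and inputs assumed precomputed:

```python
def apfd_c(test_costs, fault_severities, first_detect):
    """Cost-cognizant weighted Average Percentage of Faults Detected.

    test_costs[j]       -- cost of the (j+1)-th test in the prioritized order
    fault_severities[i] -- severity (cost) of fault i
    first_detect[i]     -- 1-based position of the first test detecting fault i
    """
    total_cost = sum(test_costs)
    total_severity = sum(fault_severities)
    # Each fault contributes its severity times the test cost incurred
    # from its detecting test onward (half cost for the detecting test).
    numerator = sum(
        f * (sum(test_costs[tf - 1:]) - test_costs[tf - 1] / 2)
        for f, tf in zip(fault_severities, first_detect)
    )
    return numerator / (total_cost * total_severity)
```

With uniform unit costs and severities this agrees term by term with APFD, so an ordering that detects faults earlier always scores at least as high.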
|
33 |
Test case prioritization / Malishevsky, Alexey Grigorievich. 19 June 2003
Regression testing is an expensive software engineering activity intended to provide
confidence that modifications to a software system have not introduced faults.
Test case prioritization techniques help to reduce regression testing cost by ordering
test cases in a way that better achieves testing objectives. In this thesis, we are interested
in prioritizing to maximize a test suite's rate of fault detection, measured by a
metric, APFD, trying to detect regression faults as early as possible during testing.
In previous work, several prioritization techniques using low-level code coverage
information had been developed. These techniques try to maximize APFD over
a sequence of software releases, not targeting a particular release. These techniques'
effectiveness was empirically evaluated.
We present a larger set of prioritization techniques that use information at arbitrary
granularity levels and incorporate modification information, targeting prioritization
at a particular software release. Our empirical studies show significant
improvements in the rate of fault detection over randomly ordered test suites.
Previous work on prioritization assumed uniform test costs and fault severities,
which might not be realistic in many practical cases. We present a new cost-cognizant
metric, APFD[subscript c], and prioritization techniques, together with approaches
for measuring and estimating these costs. Our empirical studies evaluate prioritization
in a cost-cognizant environment.
Prioritization techniques have been developed independently with little consideration
of their similarities. We present a general prioritization framework that allows
us to express existing prioritization techniques by a framework algorithm using
parameters and specific functions.
Previous research assumed that prioritization was always beneficial if it improves
the APFD metric. We introduce a prioritization cost-benefit model that more
accurately captures relevant cost and benefit factors, and allows practitioners to assess
whether it is economical to employ prioritization.
Prioritization effectiveness varies across programs, versions, and test suites. We
empirically investigate several of these factors on substantial software systems and
present a classification-tree-based predictor that can help select the most appropriate
prioritization technique in advance.
Together, these results improve our understanding of test case prioritization and
of the processes by which it is performed. / Graduation date: 2004
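The coverage-based techniques these studies evaluate are typically greedy. One common variant, "additional" coverage prioritization, repeatedly picks the test that contributes the most not-yet-covered entities and resets the covered set once no remaining test adds anything new. A minimal sketch, where the function name and reset policy are illustrative assumptions rather than the thesis's exact algorithms:

```python
def additional_coverage_order(coverage):
    """Order tests by greedy additional coverage.

    coverage: dict mapping test name -> set of covered entities
              (statements, functions, etc., at any granularity level).
    """
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        # Pick the test adding the most entities not yet covered.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not (remaining[best] - covered) and covered:
            covered = set()  # nothing new left: reset and re-rank the rest
            continue
        order.append(best)
        covered |= remaining.pop(best)
    return order
```

For example, with tests covering {1, 2, 3}, {3, 4}, and {4}, the first test is scheduled first because it contributes three new entities, and the others follow by remaining contribution.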
|
34 |
Pair testing : comparing Windows Exploratory Testing in pairs with testing alone / Lischner, Ray. 31 May 2001
Windows Exploratory Testing (WET) is examined to determine whether testers working in
pairs produce higher quality results, are more productive, or exhibit greater confidence and
job satisfaction than testers working alone.
WET is a form of application testing where a tester (or testers) explores an unknown
application to determine the application's purpose and main user, produce a list of
functions (categorized as primary and contributing), write a test case outline, and capture a
list of instabilities. The result of performing WET is a report that includes the above with a
list of issues and questions raised by the tester. The experiment measured and compared
the quality of these reports.
Pair testing is a new field of study, one suggested by the success of pair programming,
especially in the use of Extreme Programming (XP). In pair programming, two
programmers work at a single workstation, with a single keyboard and mouse, performing
a single programming task. Experimental and anecdotal evidence shows that programs
written by pairs are of higher quality than programs written solo. This success suggests that
pair testing might yield positive results.
As a result of the experiment, we conclude that pair testing does not produce significantly
higher quality results than solo testing. Nor are pairs more productive. Nonetheless, some
areas are noted as deserving further study. / Graduation date: 2002
|
35 |
Test case prioritization / Chu, Chengyun, 1974-. 01 June 1999
Prioritization techniques are used to schedule test cases to execute in a specific order to maximize some objective function. There are a variety of possible objective functions, such as a function that measures how quickly faults can be detected within the testing process, or a function that measures how fast coverage of the program can be increased. In this paper, we describe several test case prioritization techniques, and empirical studies performed to investigate their relative abilities to improve how quickly faults can be detected by test suites. An improved rate of fault detection during regression testing can provide faster feedback about a system under regression test and let debuggers begin their work earlier than might otherwise be possible. The results of our studies indicate that test case prioritization techniques can substantially improve the rate of fault detection of test suites. The results also provide insights into the tradeoffs among various prioritization techniques. / Graduation date: 2000
|
36 |
A Test Data Evolution Strategy under Program Changes / Hsu, Chang-ming. 23 July 2007
Since the cost of software testing continues to account for a large proportion of total software development cost, automatic test data generation has become a prominent topic in recent software testing research. These studies attempt to reduce the cost of software testing by generating test data automatically, but they address only single-version programs, not programs that require re-testing after changes. Regression testing research, on the other hand, discusses how to re-test programs after changes, but does not address how to generate test data automatically. We therefore propose an automatic test data evolution strategy. We use regression testing methods to identify the parts of a program that need re-testing, and then evolve the test data automatically with a hybrid genetic algorithm. Our experimental results show that our strategy achieves the same or better testing ability at lower cost than the other strategies compared.
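The idea of evolving test data with a genetic algorithm can be sketched as follows. This is an illustrative stand-in, not the hybrid GA of the thesis: it evolves a single integer input to minimize a branch-distance-style fitness, where fitness 0 means the targeted (e.g. changed) branch is exercised. All names, parameters, and operator choices here are assumptions.

```python
import random

def evolve_test_data(fitness, pop_size=20, gens=50, lo=-100, hi=100, seed=0):
    """Evolve an integer test input that minimizes `fitness`.

    `fitness` is a branch-distance-style objective: 0 means the
    targeted branch is exercised by the input.
    """
    rng = random.Random(seed)
    pop = [rng.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        if fitness(pop[0]) == 0:              # target branch covered: stop
            break
        survivors = pop[: pop_size // 2]      # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = (a + b) // 2              # crossover: midpoint of parents
            if rng.random() < 0.3:
                child += rng.randint(-5, 5)   # small mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)
```

To target a hypothetical changed branch such as `if x == 42:`, one would minimize the branch distance `abs(x - 42)`; selection keeps the closest inputs while crossover and mutation push the population toward the branch condition.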
|
37 |
Effective and efficient regression testing and fault localization through diversification, prioritization, and randomization / Jiang, Bo, 姜博. January 2011
published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
|
38 |
On the effectiveness of metamorphic testing for numerical programs / Feng, Jianqiang, 馮建強. January 2003
published_or_final_version / abstract / toc / Computer Science and Information Systems / Master / Master of Philosophy
|
39 |
A hybrid object-oriented class testing method : based on state-based and data-flow testing / Tsai, Bor-Yuan. January 2000
No description available.
|
40 |
Towards quality programming in the automated testing of distributed applications / Chu, Huey-Der. January 1998
Software testing is a very time-consuming and tedious activity and accounts for over 25% of the cost of software development. In addition to its high cost, manual testing is unpopular and often inconsistently executed. Software Testing Environments (STEs) overcome the deficiencies of manual testing by automating the test process and integrating testing tools to support a wide range of test capabilities. Most prior work on testing addresses single-thread applications. This thesis is a contribution to the testing of distributed applications, which has not been well explored. To address two crucial issues in testing, when to stop testing and how good the software is after testing, a statistics-based integrated test environment, an extension of the testing concept in Quality Programming to distributed applications, is presented. It provides automatic support for test execution by the Test Driver, test development by the SMAD Tree Editor and the Test Data Generator, test failure analysis by the Test Results Validator and the Test Paths Tracer, test measurement by the Quality Analyst, test management by the Test Manager, and test planning by the Modeller. These tools are integrated around a public, shared data model describing the data entities and relationships they manipulate. Because quality planning and message-flow routings are defined during modelling, the test process can enter the life cycle early. Once modelling and requirements specification are complete, the test process and the software design and implementation can proceed concurrently. A simple banking application written using Java Remote Method Invocation (RMI) and Java DataBase Connectivity (JDBC) illustrates how an application is fitted into the integrated test environment. The concept of automated test execution through mobile agents across multiple platforms is also illustrated on this 3-tier client/server application.
|