21 |
Towards a proportional sampling strategy according to path complexity: a simulation study / Yip, Wang, 葉弘. January 2000 (has links)
published_or_final_version / Computer Science and Information Systems / Master / Master of Philosophy
|
22 |
Message from the A-MOST 2021 Workshop Chairs / Lefticaru, Raluca; Lorber, F.; Turker, U.C. 08 December 2021 (has links)
yes / We are pleased to welcome you to the 17th edition of the Advances in Model-Based Testing Workshop (A-MOST 2021), collocated with the IEEE International Conference on Software Testing, Verification and Validation (ICST 2021).
|
23 |
Towards a new extension relation for compositional test case generation for CSP concurrent processes / Chan, Wing-kwong, 陳榮光. January 2003 (has links)
published_or_final_version / Computer Science and Information Systems / Doctoral / Doctor of Philosophy
|
24 |
Interoperability of wireless communication technologies in hybrid networks : evaluation of end-to-end interoperability issues and quality of service requirements / Abbasi, Munir A. January 2011 (has links)
Hybrid Networks employing wireless communication technologies have brought closer the vision of communication “anywhere, any time, with anyone”. Such communication technologies consist of various standards, protocols, architectures, characteristics, models, devices, and modulation and coding techniques. These different technologies naturally share some common characteristics, but there are also many important differences. New advances in these technologies are emerging very rapidly, with the advent of new models, characteristics, protocols and architectures. This rapid evolution imposes many challenges and issues to be addressed, of particular importance being the interoperability issues of the following wireless technologies: Wireless Fidelity (Wi-Fi) IEEE 802.11, Worldwide Interoperability for Microwave Access (WiMAX) IEEE 802.16, Single Channel per Carrier (SCPC), Digital Video Broadcasting via Satellite (DVB-S/DVB-S2), and Digital Video Broadcasting Return Channel through Satellite (DVB-RCS). Owing to these differences, the technologies do not generally interoperate easily with each other, because of various interoperability and Quality of Service (QoS) issues. The aim of this study is to assess and investigate end-to-end interoperability issues and QoS requirements, such as bandwidth, delay, jitter, latency, packet loss, throughput, TCP performance, UDP performance, unicast and multicast services, and availability, on hybrid wireless communication networks (employing both satellite broadband and terrestrial wireless technologies). The thesis provides an introduction to wireless communication technologies, followed by a review of previous research on Hybrid Networks (both satellite and terrestrial wireless technologies, particularly Wi-Fi, WiMAX, DVB-RCS, and SCPC).
Previous studies have discussed Wi-Fi, WiMAX, DVB-RCS, SCPC and 3G technologies and their standards, as well as their properties and characteristics, such as operating frequency, bandwidth, data rate, basic configuration, coverage, power, interference, social issues, security problems, and physical and MAC layer design and development issues. Although some previous studies provide valuable contributions to this area of research, they are limited to link layer characteristics, TCP performance, delay, bandwidth, capacity, data rate, and throughput. None of the studies covers all aspects of end-to-end interoperability issues and QoS requirements, such as bandwidth, delay, jitter, latency, packet loss, link performance, TCP and UDP performance, and unicast and multicast performance, at the end-to-end level on hybrid wireless networks. Interoperability issues are discussed in detail, and a comparison of the different technologies and protocols is made using appropriate testing tools, assessing various performance measures including bandwidth, delay, jitter, latency, packet loss, throughput and availability. The standards, protocol suites/models and architectures for Wi-Fi, WiMAX, DVB-RCS and SCPC, along with different platforms and applications, are discussed and compared. Using a robust approach, which includes a new testing methodology and a generic test plan, testing was conducted using various realistic test scenarios on real networks comprising variable numbers and types of nodes. The data, traces, packets and files were captured from various live scenarios and sites. The test results were analysed in order to measure and compare the characteristics of wireless technologies, devices, protocols and applications. The motivation of this research is to study all the end-to-end interoperability issues and Quality of Service requirements for rapidly growing Hybrid Networks in a comprehensive and systematic way.
The significance of this research is that it is based on a comprehensive and systematic investigation of issues and facts, instead of hypothetical ideas/scenarios or simulations; this informed the design of a test methodology for empirical data gathering through real network testing, suitable for measuring hybrid-network single-link or end-to-end issues using proven test tools. The investigation encompasses an extensive series of tests measuring delay, jitter, packet loss, bandwidth, throughput, availability, performance of audio and video sessions, multicast and unicast performance, and stress testing. This testing covers the most common test scenarios in hybrid networks and yields recommendations for achieving good end-to-end interoperability and QoS in hybrid networks. Contributions of the study include the identification of gaps in the research, a description of interoperability issues, a comparison of the most common test tools, the development of a generic test plan, a new testing process and methodology, and analysis and network design recommendations for end-to-end interoperability issues and QoS requirements. Together, these contributions cover the complete cycle of this research. It is found that UDP is more suitable than TCP for hybrid wireless networks, particularly for the demanding applications considered, since TCP presents significant problems for multimedia and live traffic with strict QoS requirements on delay, jitter, packet loss and bandwidth. The main bottleneck for satellite communication is the delay of approximately 600 to 680 ms caused by the long distance (and the finite speed of light) when communicating over geostationary satellites. Delay and packet loss can be controlled using various methods, such as traffic classification, traffic prioritization, congestion control, buffer management, delay compensators, protocol compensators, automatic repeat request (ARQ) techniques, flow scheduling, and bandwidth allocation.
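The QoS measures at the centre of this abstract, one-way delay, interarrival jitter, and packet loss, can all be derived from a captured packet trace. The sketch below is our own illustration (the tuple layout and function name are assumptions, not the thesis's tooling); the jitter estimator follows the RFC 3550 smoothed form.

```python
def qos_from_trace(packets, expected_count):
    """packets: (seq, send_time_s, recv_time_s) tuples for packets that
    arrived, in arrival order; expected_count: how many packets were sent."""
    delays = [recv - send for _, send, recv in packets]
    avg_delay = sum(delays) / len(delays)
    # RFC 3550-style smoothed interarrival jitter: move 1/16 of the way
    # toward each new delay variation.
    jitter = 0.0
    for prev, cur in zip(delays, delays[1:]):
        jitter += (abs(cur - prev) - jitter) / 16.0
    loss_rate = 1.0 - len(packets) / expected_count
    return avg_delay, jitter, loss_rate
```

On a geostationary-satellite hop, the avg_delay term alone would sit near the 600 to 680 ms figure reported above.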
|
25 |
Performance modelling of reactive web applications using trace data from automated testing / Anderson, Michael. 29 April 2019 (has links)
This thesis evaluates a method for extracting architectural dependencies and performance measures from an evolving distributed software system. The research goal was to establish methods of determining potential scalability issues in a distributed software system as it is being iteratively developed. The research evaluated the use of industry-available distributed tracing methods to extract performance measures and queuing network model parameters for common user activities. Additionally, a method was developed to trace and collect the system operations that correspond to these user activities, utilizing automated acceptance testing. Performance measure extraction was tested with this method across several historical releases of a real-world distributed software system. The trends in performance measures across releases correspond to several scalability issues identified in the production software system. / Graduate
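The queuing-network parameters mentioned above can be estimated directly from trace spans. A minimal sketch (the span format and names are our assumptions, not the thesis's implementation): the arrival rate comes from span counts, the mean service time from span durations, and their product gives the utilization of a single-server station.

```python
def queue_params(spans, window_s):
    """spans: (start_s, end_s) pairs for one service's operations captured
    by distributed tracing within a window of window_s seconds."""
    n = len(spans)
    arrival_rate = n / window_s                      # lambda, ops/s
    mean_service = sum(e - s for s, e in spans) / n  # S, s/op
    utilization = arrival_rate * mean_service        # rho = lambda * S
    return arrival_rate, mean_service, utilization
```

A utilization trending toward 1.0 across releases is the kind of scalability warning the thesis looks for.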
|
26 |
Automatic software testing via mining software data. / 基於軟件數據挖掘的自動軟件測試 / CUHK electronic theses & dissertations collection / Ji yu ruan jian shu ju wa jue de zi dong ruan jian ce shi. January 2011 (has links)
Zheng, Wujie. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 128-141). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
|
27 |
An experimental study of cost cognizant test case prioritization / Goel, Amit. 02 December 2002 (has links)
Test case prioritization techniques schedule test cases for regression testing
in an order that increases their ability to meet some performance goal. One performance
goal, rate of fault detection, measures how quickly faults are detected
within the testing process. The APFD metric had been proposed for measuring
the rate of fault detection. This metric applies, however, only in cases in which
test costs and fault costs are uniform. In practice, fault costs and test costs
are not uniform. For example, some faults which lead to system failures might
be more costly than faults which lead to minor errors. Similarly, a test case
that runs for several hours is much more costly than a test case that runs for
a few seconds. Previous work has thus provided a second, metric APFD[subscript c], for
measuring rate of fault detection, that incorporates test costs and fault costs.
However, studies of this metric thus far have been limited to abstract distribution
models of costs. These distribution models did not represent actual fault
costs and test costs for software systems.
In this thesis, we describe some practical ways to estimate real fault costs
and test costs for software systems, based on operational profiles and test execution timings. Further, we define some new cost-cognizant prioritization techniques
which focus on the APFD_c metric. We report results of an empirical
study investigating the rate of "units-of-fault-cost-detected-per-unit-test-cost"
across various cost-cognizant prioritization techniques and tradeoffs between
techniques.
The results of our empirical study indicate that cost-cognizant test case prioritization
techniques can substantially improve the rate of fault detection of
test suites. The results also provide insights into the tradeoffs among various
prioritization techniques. For example: (1) techniques incorporating feedback
information (information from previous tests) outperformed those without any
feedback information; (2) technique effectiveness differed most when faults were
relatively difficult to detect; (3) in most cases, technique performance was similar
at function and statement level; (4) surprisingly, techniques considering
change location did not perform as well as expected. The study also reveals
several practical issues that might arise in applying test case prioritization, as
well as opportunities for future work. / Graduation date: 2003
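For concreteness, the cost-cognizant metric APFD_c can be computed as follows. This is a sketch based on the published definition of the metric, not code from the thesis: each fault is weighted by its cost, and detection position is measured in cumulative test cost rather than in test count.

```python
def apfd_c(test_costs, fault_costs, first_detect):
    """test_costs[j]: cost of the j-th test in the prioritized order.
    fault_costs[i]: cost (severity) of fault i.
    first_detect[i]: 0-based index of the first test revealing fault i."""
    total_cost = sum(test_costs)
    total_severity = sum(fault_costs)
    weighted = 0.0
    for f_cost, tf in zip(fault_costs, first_detect):
        # Test cost remaining from the detecting test onward, counting
        # only half of the detecting test itself.
        weighted += f_cost * (sum(test_costs[tf:]) - 0.5 * test_costs[tf])
    return weighted / (total_cost * total_severity)
```

With unit test and fault costs this reduces to the ordinary APFD: for example, apfd_c([1, 1, 1, 1], [1, 1], [0, 2]) gives 0.625.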
|
28 |
Test case prioritization / Malishevsky, Alexey Grigorievich. 19 June 2003 (has links)
Regression testing is an expensive software engineering activity intended to provide
confidence that modifications to a software system have not introduced faults.
Test case prioritization techniques help to reduce regression testing cost by ordering
test cases in a way that better achieves testing objectives. In this thesis, we are interested
in prioritizing to maximize a test suite's rate of fault detection, measured by a
metric, APFD, aiming to detect regression faults as early as possible during testing.
In previous work, several prioritization techniques using low-level code coverage
information had been developed. These techniques try to maximize APFD over
a sequence of software releases, not targeting a particular release. These techniques'
effectiveness was empirically evaluated.
We present a larger set of prioritization techniques that use information at arbitrary
granularity levels and incorporate modification information, targeting prioritization
at a particular software release. Our empirical studies show significant
improvements in the rate of fault detection over randomly ordered test suites.
Previous work on prioritization assumed uniform test costs and fault severities,
which might not be realistic in many practical cases. We present a new cost-cognizant
metric, APFD_c, and prioritization techniques, together with approaches
for measuring and estimating these costs. Our empirical studies evaluate prioritization
in a cost-cognizant environment.
Prioritization techniques have been developed independently with little consideration
of their similarities. We present a general prioritization framework that allows
us to express existing prioritization techniques by a framework algorithm using
parameters and specific functions.
Previous research assumed that prioritization was always beneficial if it improves
the APFD metric. We introduce a prioritization cost-benefit model that more
accurately captures relevant cost and benefit factors, and allows practitioners to assess
whether it is economical to employ prioritization.
Prioritization effectiveness varies across programs, versions, and test suites. We
empirically investigate several of these factors on substantial software systems and
present a classification-tree-based predictor that can help select the most appropriate
prioritization technique in advance.
Together, these results improve our understanding of test case prioritization and
of the processes by which it is performed. / Graduation date: 2004
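The framework described above parameterizes a single greedy algorithm with an award-value function. A minimal sketch (our own illustration, with a cost-cognizant "additional coverage per unit cost" award; all names are assumptions):

```python
def prioritize(tests, coverage, cost):
    """Greedy prioritization: repeatedly pick the test with the most
    not-yet-covered entities per unit cost; when nothing new remains,
    reset coverage (the standard 'additional' strategy)."""
    remaining = set(tests)
    covered = set()
    order = []
    while remaining:
        def gain(t):
            return len(coverage[t] - covered) / cost[t]
        best = max(remaining, key=gain)
        if gain(best) == 0:          # everything already covered: reset
            covered = set()
            best = max(remaining, key=gain)
        order.append(best)
        covered |= coverage[best]
        remaining.remove(best)
    return order
```

Swapping the award function (total coverage, modification-weighted coverage, fault-likelihood feedback, and so on) yields the other techniques expressible in such a framework.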
|
29 |
Pair testing : comparing Windows Exploratory Testing in pairs with testing alone / Lischner, Ray. 31 May 2001 (has links)
Windows Exploratory Testing (WET) is examined to determine whether testers working in
pairs produce higher quality results, are more productive, or exhibit greater confidence and
job satisfaction than testers working alone.
WET is a form of application testing where a tester (or testers) explores an unknown
application to determine the application's purpose and main user, produce a list of
functions (categorized as primary and contributing), write a test case outline, and capture a
list of instabilities. The result of performing WET is a report that includes the above with a
list of issues and questions raised by the tester. The experiment measured and compared
the quality of these reports.
Pair testing is a new field of study, one suggested by the success of pair programming,
especially its use in Extreme Programming (XP). In pair programming, two
programmers work at a single workstation, with a single keyboard and mouse, performing
a single programming task. Experimental and anecdotal evidence shows that programs
written by pairs are of higher quality than programs written solo. This success suggests that
pair testing might yield positive results.
As a result of the experiment, we conclude that pair testing does not produce significantly
higher quality results than solo testing. Nor are pairs more productive. Nonetheless, some
areas are noted as deserving further study. / Graduation date: 2002
|
30 |
Test case prioritization / Chu, Chengyun, 1974-. 01 June 1999 (has links)
Prioritization techniques are used to schedule test cases to execute in a specific order to maximize some objective function. There are a variety of possible objective functions, such as a function that measures how quickly faults can be detected within the testing process, or a function that measures how fast coverage of the program can be increased. In this paper, we describe several test case prioritization techniques, and empirical studies performed to investigate their relative abilities to improve how quickly faults can be detected by test suites. An improved rate of fault detection during regression testing can provide faster feedback about a system under regression test and let debuggers begin their work earlier than might otherwise be possible. The results of our studies indicate that test case prioritization techniques can substantially improve the rate of fault detection of test suites. The results also provide insights into the tradeoffs among various prioritization techniques. / Graduation date: 2000
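The rate-of-fault-detection objective discussed here is usually quantified by the APFD metric. A sketch of the standard formula (our own code, assuming every fault is detected by some test in the suite):

```python
def apfd(order, faults_detected, num_faults):
    """order: test ids in prioritized execution order.
    faults_detected[t]: set of fault ids that test t reveals.
    APFD = 1 - (sum of first-detection positions) / (n * m) + 1 / (2n)."""
    n = len(order)
    first_pos = {}
    for pos, t in enumerate(order, start=1):
        for f in faults_detected.get(t, ()):
            first_pos.setdefault(f, pos)
    total = sum(first_pos.values())  # assumes all num_faults faults appear
    return 1.0 - total / (n * num_faults) + 1.0 / (2 * n)
```

Higher APFD means faults are found earlier: an ordering of four tests that detects both of two faults in its first test scores 0.875, while one that detects them in tests 1 and 3 scores 0.625.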
|