1 |
A framework for automated management of exploit testing environments. Flansburg, Kevin. 27 May 2016.
To demonstrate working exploits or vulnerabilities, people often share their findings as proof-of-concept (PoC) prototypes. Such practices are particularly useful for learning about real vulnerabilities and state-of-the-art exploitation techniques. Unfortunately, shared PoC exploits are seldom reproducible, partly because they are often not thoroughly tested, but largely because authors lack a formal way to specify the tested environment and its dependencies. Although exploit writers try to overcome this by describing their dependencies or testing environments in comments, this informal way of sharing PoC exploits makes it hard for authors to achieve the original goal of demonstration. More seriously, such non-reproducible or hard-to-reproduce PoC exploits have limited potential to be used for other research purposes, such as penetration testing or benchmark suites for evaluating defense mechanisms.
In this paper, we present XShop, a framework and infrastructure for describing the environments and dependencies of exploits in a formal way, automatically resolving these constraints, and constructing an isolated environment for development, testing, and sharing with the community. We show how XShop's flexible design enables new possibilities for utilizing these reproducible exploits in five practical use cases: as a security benchmark suite, in penetration testing, for large-scale vulnerability analysis, as a shared development environment, and for regression testing. We design and implement these applications by extending the XShop framework and demonstrate its effectiveness with twelve real exploits against well-known bugs, including GHOST, Shellshock, and Heartbleed. We believe that the proposed practice not only brings immediate incentives to exploit authors but also has the potential to grow into a community-wide knowledge base.
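To illustrate the contrast between free-form comments and a formal, machine-readable environment description, here is a minimal sketch in Python. The ExploitSpec fields, the satisfies helper, and the version strings are hypothetical illustrations only; they are not XShop's actual specification format or API.

```python
from dataclasses import dataclass, field


@dataclass
class ExploitSpec:
    """A machine-readable description of the environment a PoC was tested in."""
    name: str                      # human-readable label for the PoC
    target_package: str            # the vulnerable component
    tested_versions: list          # versions the PoC was actually tested against
    base_image: str                # OS image used for the isolated test environment
    build_deps: list = field(default_factory=list)


def satisfies(spec: ExploitSpec, installed_version: str) -> bool:
    """Check whether an environment's installed package version is one the
    PoC was tested against, instead of guessing from prose comments."""
    return installed_version in spec.tested_versions


# Illustrative values for a GHOST-style glibc PoC (hypothetical spec, not XShop's).
ghost = ExploitSpec(
    name="GHOST PoC",
    target_package="glibc",
    tested_versions=["2.13", "2.17"],
    base_image="ubuntu:12.04",
    build_deps=["gcc", "make"],
)
print(satisfies(ghost, "2.17"))  # True: this environment matches a tested setup
```

In XShop, declarations of this kind are resolved automatically into an isolated, reproducible environment; the sketch only shows why a formal specification is easier to check than comments.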
|
2 |
A study on improving adaptive random testing. Liu, Ning, Lareina (劉寧). January 2006.
Published or final version / Abstract / Computer Science / Master of Philosophy
|
3 |
Studies of different variations of Adaptive Random Testing. Towey, David Peter. January 2006.
Published or final version / Abstract / Computer Science / Doctor of Philosophy
|
4 |
Budget-sensitive testing and analysis strategies and their applications to concurrent and service-based systems. Zhai, Ke (翟可). January 2013.
Software testing is the most widely practiced approach to assuring the correctness of programs. Despite decades of research progress, it is still considered very resource-demanding and time-consuming. In the past decade, the wide adoption of multithreaded programs and service-based architectures has further aggravated this problem. In this thesis, we study issues in software testing where resource constraints, such as the time spent and the memory allocated, are important considerations, and we look for testing techniques that are substantially more effective and efficient under limited resource quotas, which we refer to as budgets. Our main focus is on two types of systems: concurrent systems and service-based systems.
A concurrent system is a computing system in which programs are designed as collections of interacting, parallel computational processes. Unfortunately, concurrent programs are well known to be difficult to write and test: concurrency bugs still lurk in heavily tested programs. To make matters worse, detecting concurrency bugs is expensive, a problem that is especially notorious for dynamic detection techniques that target high precision. This thesis proposes a dynamic sampling framework, CARISMA, that reduces this overhead dramatically while largely preserving bug detection capability. To achieve this, CARISMA intelligently allocates the limited computation budget through sampling. The core of CARISMA is a budget estimation and allocation framework whose correctness has been proven mathematically.
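To convey the intuition behind budget-driven sampling, the sketch below spreads a global monitoring budget over program sites and then decides probabilistically whether to monitor each runtime event. It is a simplified illustration under assumed inputs (a per-site event profile and an event budget), not CARISMA's actual estimation and allocation algorithm.

```python
import random


def allocate_sampling_rates(estimated_events_per_site, total_budget):
    """Split an event-monitoring budget evenly across sites and turn each
    site's share into a sampling probability (capped at 1.0), so that the
    expected number of monitored events stays within the budget."""
    if not estimated_events_per_site:
        return {}
    per_site_budget = total_budget / len(estimated_events_per_site)
    return {
        site: min(1.0, per_site_budget / max(count, 1))
        for site, count in estimated_events_per_site.items()
    }


def should_monitor(site, rates):
    """Runtime check: should this event (e.g., a lock acquisition) be monitored?"""
    return random.random() < rates.get(site, 0.0)


# Hypothetical profile: expected number of events per site in one run.
profile = {"lock_A": 10_000, "lock_B": 50, "lock_C": 400}
rates = allocate_sampling_rates(profile, total_budget=300)
# Rarely executed sites receive a higher sampling probability, while the
# expected total number of monitored events stays at or below 300.
```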
Another source of cost is the nondeterministic nature of concurrent systems. A common practice in testing concurrent systems is stress testing, in which a system is executed with a large number of test cases to achieve high coverage of the execution space. Stress testing is inherently costly, so it is critical that bug detection in each execution be effective, which demands a powerful test oracle. This thesis proposes such a test oracle, OLIN, which reports anomalies in the concurrent executions of programs. OLIN finds concurrency bugs that are consistently missed by previous techniques, incurs low overhead, and achieves higher effectiveness within given time and computation budgets.
Service-based systems are composed of loosely coupled, independent functional units and are often highly concurrent and distributed; they have flourished in recent decades. Because such systems are highly dynamic, regression testing techniques are applied to ensure that previously established functionality and correctness are not adversely affected by subsequent evolution. However, regression testing is expensive, so this thesis focuses on prioritizing regression test cases to improve the effectiveness of testing within predefined constraints. It proposes a family of prioritization metrics for the regression testing of location-based services and presents a case study evaluating their performance.
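For readers unfamiliar with test case prioritization, the sketch below shows one classic, generic strategy: greedily ordering tests by the additional coverage each contributes, so that running only a prefix of the order (for example, under a time budget) covers as much as possible as early as possible. It is only an illustration of the general idea; the thesis's own metrics are tailored to location-based services and are not reproduced here.

```python
def prioritize(coverage):
    """Greedy 'additional coverage' prioritization.
    coverage maps a test name to the set of entities it covers; the result
    orders tests by how much not-yet-covered ground each one adds."""
    remaining = dict(coverage)
    covered = set()
    order = []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order


# Hypothetical coverage data.
print(prioritize({
    "t1": {"a", "b"},
    "t2": {"b", "c", "d"},
    "t3": {"d"},
}))  # ['t2', 't1', 't3']
```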
In conclusion, this thesis makes the following contributions to software testing and analysis: (1) a dynamic sampling framework for concurrency bug detection, (2) a test oracle for concurrent testing, and (3) a family of test case prioritization techniques for service-based systems. These contributions significantly improve the effectiveness and efficiency of resource utilization in software testing.
Published or final version / Computer Science / Doctor of Philosophy
|
5 |
Assessing the adequacy of test data for object-oriented programs using the mutation method. Kim, Sun-Woo. January 2001.
No description available.
|
6 |
Towards a satisfaction relation between CCS specifications and their refinements. Baillie, Elizabeth Jean. January 1992.
No description available.
|
7 |
An effective approach for testing program branches and linear code sequences and jumps. Malevris, N. January 1988.
No description available.
|
8 |
Automated structural test data generation. Cousins, Michael Anthony. January 1995.
No description available.
|
9 |
The derivation of a methodology with supporting software aids for testing structured data processing programs. Roper, R. M. F. January 1988.
No description available.
|
10 |
Regression testing experiments. Sayre, Kent. 5 August 1999.
Software maintenance is an expensive part of the software lifecycle: estimates put it at up to two-thirds of the total cost of software. Regression testing, which tests software after it has been modified to help assess and increase its reliability, is responsible for a large part of this cost. Making regression testing more efficient and effective is therefore worthwhile.
This thesis reports two experiments with regression testing techniques.
The first experiment involves two regression test selection techniques, Dejavu and Pythia. These techniques select a subset of the original test suite to rerun, instead of the entire suite, in an attempt to save valuable testing time. The experiment investigates the cost and benefit tradeoffs between the two techniques. The data indicate that Dejavu can occasionally select smaller test suites than Pythia, while Pythia is often more efficient than Dejavu at determining which test cases to select.
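Both techniques share the same basic selection idea: rerun only the tests whose coverage from the previous version touches code that has changed. The Python sketch below illustrates that shared idea on hypothetical coverage data; it is not the actual algorithm of either Dejavu or Pythia, which determine affected code in different and more sophisticated ways.

```python
def select_tests(coverage, modified_entities):
    """Select the tests whose recorded coverage from the previous version
    intersects the set of modified entities.
    coverage: dict mapping a test name to the set of entities it covered
    (e.g., statements, branches, or functions)."""
    return [
        test for test, covered in coverage.items()
        if covered & modified_entities  # non-empty intersection
    ]


# Hypothetical coverage recorded on the old version:
coverage = {
    "test_login": {"auth.check", "auth.hash"},
    "test_report": {"report.render", "db.query"},
    "test_search": {"db.query", "search.rank"},
}
print(select_tests(coverage, {"db.query"}))  # ['test_report', 'test_search']
```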
The second experiment investigates program spectra as a tool to enhance regression testing. Program spectra characterize a program's behavior, and the experiment examines their applicability to detecting faults in modified software. The data indicate that certain types of spectra identify faults consistently, and they also reveal cost-benefit tradeoffs among spectra types.
Graduation date: 2000
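As a concrete example of the underlying idea, one simple kind of spectrum is a hit spectrum: the set of entities (branches, paths, or functions) a test exercises. Comparing a test's spectrum on the original and modified versions flags behavioral differences worth inspecting. The sketch below is a generic illustration under assumed inputs, not the specific spectra types evaluated in this thesis.

```python
def differing_tests(old_spectra, new_spectra):
    """Flag tests whose hit spectrum changed between two program versions.
    Each argument maps a test name to the set of entities it exercised."""
    return [
        test for test in old_spectra
        if test in new_spectra and old_spectra[test] != new_spectra[test]
    ]


# Hypothetical spectra for the same tests on the old and new versions:
old = {"t1": {"b1", "b2"}, "t2": {"b3"}}
new = {"t1": {"b1", "b2"}, "t2": {"b3", "b4"}}
print(differing_tests(old, new))  # ['t2']
```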
|