1

On adaptive random testing

Kuo, Fei-Ching, n/a January 2006 (has links)
Adaptive random testing (ART) has been proposed as an enhancement to random testing for situations where failure-causing inputs are clustered together. The basic idea of ART is to evenly spread test cases throughout the input domain. Simulations and empirical analyses have shown that ART frequently outperforms random testing. However, there are some outstanding issues concerning the cost-effectiveness and practicality of ART, which are the main foci of this thesis. Firstly, this thesis examines the basic factors that have an impact on the fault-detection effectiveness of adaptive random testing, and identifies favourable and unfavourable conditions for ART. Our study concludes that favourable conditions for ART occur more frequently than unfavourable conditions. Secondly, since all previous studies allow duplicate test cases, there has been a concern that adaptive random testing performs better than random testing simply because ART uses fewer duplicate test cases. This thesis confirms that it is the even spread, rather than the reduced duplication of test cases, that makes ART perform better than RT. Given that even spreading is the main pillar of ART's success, an investigation has been conducted into the relevance and appropriateness of several existing metrics of even spreading. Thirdly, the practicality of ART has been challenged for non-numeric or high-dimensional input domains. This thesis provides solutions that address these concerns. Finally, a new problem-solving technique, namely mirroring, has been developed. The integration of mirroring with adaptive random testing has been empirically shown to significantly increase the cost-effectiveness of ART. In summary, this thesis contributes significantly to both the foundations and the practical application of adaptive random testing.
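A minimal sketch of the even-spreading idea underlying ART, in the common fixed-size-candidate-set form, assuming a numeric two-dimensional input domain and Euclidean distance (illustrative only, not the thesis's implementation; the failure region and parameters are made up):

```python
import random

def fscs_art_tests(domain, is_failure, k=10, max_tests=1000):
    """FSCS-ART sketch: each new test is the random candidate farthest
    (by minimum Euclidean distance) from all previously executed tests."""
    (xmin, xmax), (ymin, ymax) = domain
    rand_point = lambda: (random.uniform(xmin, xmax), random.uniform(ymin, ymax))
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    executed = [rand_point()]                 # the first test is purely random
    if is_failure(executed[0]):
        return executed
    while len(executed) < max_tests:
        candidates = [rand_point() for _ in range(k)]
        # keep the candidate whose nearest executed test is farthest away
        best = max(candidates, key=lambda c: min(dist(c, e) for e in executed))
        executed.append(best)
        if is_failure(best):
            break
    return executed

# Toy usage: a small clustered failure region inside the unit square.
fails = lambda p: 0.60 <= p[0] <= 0.70 and 0.20 <= p[1] <= 0.30
print("tests until first failure:", len(fscs_art_tests(((0, 1), (0, 1)), fails)))
```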
2

A comparison between random testing and adaptive random testing

Johansson, Nicklas, Aareskjold, Ola January 2023 (has links)
Software testing is essential for quality assurance, and automated techniques such as random testing and adaptive random testing are cost-effective solutions compared to others. Adaptive random testing seeks to enhance random testing, and there is a conception that adaptive random testing should always replace random testing. Our research question investigates this conception by addressing a gap in the literature, where a comparison between the two techniques in terms of two key metrics is missing: defect detection efficiency and test case generation time. Defect detection efficiency is the number of defects detected divided by the number of defects in the system, multiplied by one hundred. Test case generation time is the time it takes to generate all of the test case inputs. These metrics were chosen as they can be seen as measurements of the techniques' effectiveness and efficiency, respectively. To address this research question we employ a quantitative experiment in which we compare the performance of random testing and adaptive random testing with a sole focus on these two metrics. The comparison is performed by implementing and testing both algorithms on eight error-seeded numerical programs and measuring the results. The results showed that adaptive random testing had a defect detection efficiency total average of 21.59% and a test case generation time total average of 35.37 ms, while random testing had a defect detection efficiency total average of 22.28% and a test case generation time total average of 0.26 ms. These results might contribute to disproving the conception that adaptive random testing should always replace random testing, as random testing evidently performed better on both measured metrics.
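The two metrics as defined above can be computed with a few lines of code; a hedged sketch follows (the input generator and defect counts are placeholders, not the study's instrumentation):

```python
import random
import time

def defect_detection_efficiency(defects_detected, defects_in_system):
    # defects detected / defects in the system, multiplied by one hundred
    return 100.0 * defects_detected / defects_in_system

def timed_generation(generate_input, n):
    # wall-clock time (ms) to generate all n test-case inputs
    start = time.perf_counter()
    inputs = [generate_input() for _ in range(n)]
    return inputs, (time.perf_counter() - start) * 1000.0

# Hypothetical usage with a plain random-testing input generator:
inputs, gen_ms = timed_generation(lambda: random.uniform(-1000, 1000), 10_000)
print(f"generated {len(inputs)} inputs in {gen_ms:.2f} ms")
print("DDE example:", defect_detection_efficiency(defects_detected=5, defects_in_system=20), "%")
```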
3

Analysis and enhancements of adaptive random testing

Merkel, Robert Graham, robert.merkel@benambra.org January 2005 (has links)
Random testing is a standard software testing method. It is a popular method for reliability assessment, but its use for debug testing has been opposed by some authorities. Random testing does not use any information to guide test case selection, and so, it is argued, it is less likely to be effective than other methods. Based on the observation that failures often cluster in contiguous regions, Adaptive Random Testing (ART) is a more effective random testing method. While retaining random selection of test cases, selection is guided by the idea that tests should be widely spread throughout the input domain. A simple way to implement this concept, FSCS-ART, involves randomly generating a number of candidates and choosing the candidate most widely spread from any already-executed test. This method has already been shown to be up to 50% more effective than random testing. This thesis examines a number of theoretical and practical issues related to ART. Firstly, a theoretical examination of the scope of adaptive methods to improve testing effectiveness is conducted. Our results show that the maximum possible improvement in failure-detection effectiveness is only 50%, so ART performs close to this limit on many occasions. Secondly, the statistical validity of previous empirical results is examined. A mathematical analysis of the sampling distribution of the various failure-detection effectiveness measures shows that the measure preferred in previous studies follows a slightly unusual distribution known as the geometric distribution, and that it and other measures are likely to show high variance, requiring very large sample sizes for accurate comparisons. A potential limitation of current ART methods is their relatively high selection overhead. A number of methods to obtain lower overheads are proposed and evaluated, involving a less strict randomness or wide-spreading criterion. Two methods use dynamic, as-needed partitioning to divide the input domain, spreading test cases throughout the partitions as required. Another involves using a class of numeric sequences called quasi-random sequences. In addition, a more efficient implementation of the existing FSCS-ART method is proposed using the mathematical structure known as the Voronoi diagram. Finally, the use of ART on programs whose input is non-numeric is examined. While existing techniques can be used to generate random non-numeric candidates, a criterion for 'wide spread' is required to perform ART effectively. It is proposed to use the notion of category-partition as such a criterion.
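One of the lower-overhead alternatives mentioned above, quasi-random sequences, can be sketched with the van der Corput/Halton construction (a generic illustration, not necessarily the specific construction evaluated in the thesis):

```python
def van_der_corput(n, base=2):
    """n-th element of the base-`base` van der Corput sequence in [0, 1)."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def halton_point(n, bases=(2, 3)):
    """n-th point of a 2-D Halton sequence: one van der Corput sequence per axis."""
    return tuple(van_der_corput(n, b) for b in bases)

# Low-discrepancy test inputs for a 2-D unit input domain; unlike FSCS-ART,
# no distance computations against earlier tests are needed per selection.
print([halton_point(i) for i in range(1, 6)])
```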
4

Effectiveness comparison between Concolic and Random Testing

Lai, Yan-shun 31 October 2011 (has links)
In software development today, companies usually have their own test systems, because every piece of software contains bugs, and those bugs can damage a company's property or information security. Test systems can find bugs in the software, but some bugs reappear even after they have been fixed. In such cases, automated test systems are effective: they avoid the waste of time and cost of repeated manual testing and address shortcomings of earlier testing methods. This thesis considers two kinds of automated testing: concolic testing and random testing. In 2009, there was some evidence suggesting that concolic testing was more effective than random testing, but the demonstration was not sufficient. This thesis therefore aims to provide a more thorough comparison of the effectiveness of concolic and random testing.
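A toy illustration of the contrast being compared, assuming a narrow equality guard (everything here is invented for illustration):

```python
import random

def buggy(x: int) -> None:
    if x == 1_234_567_891:            # narrow guard: one failing input in ~2**32
        raise AssertionError("bug reached")

# Random testing: the chance of hitting the guard is about 1 in 2**32 per try,
# so even a million random inputs will almost certainly miss the branch.
hits = sum(1 for _ in range(1_000_000)
           if random.randint(-2**31, 2**31 - 1) == 1_234_567_891)
print("random testing hits:", hits)

# Concolic testing instead executes the program on one concrete input, records
# the untaken path constraint (x == 1234567891), and asks a constraint solver
# for an input that satisfies it, reaching the branch in a single extra run.
```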
5

Design of an Automated Validation Environment For A Radiation Hardened MIPS Microprocessor

January 2011 (has links)
Ever-reducing time to market, along with short product lifetimes, has created a need to shorten microprocessor design time. Verification of the design and its analysis are two major components of this design cycle. Design validation techniques can be broadly classified into two major categories: simulation-based approaches and formal techniques. Simulation-based microprocessor validation involves running millions of cycles using random or pseudo-random tests and allows verification of the register transfer level (RTL) model against an architectural model, i.e., that the processor executes instructions as required. The validation effort involves model checking against a high-level description or simulation of the design against the RTL implementation. Formal techniques exhaustively analyze parts of the design but do not verify RTL against the architecture specification. The focus of this work is to implement a fully automated validation environment for a MIPS-based radiation-hardened microprocessor using simulation-based approaches. The basic framework uses the classical validation approach in which the design to be validated is described in a hardware description language (HDL) such as VHDL or Verilog. To implement a simulation-based approach, a number of random or pseudo-random tests are generated. The output of the HDL-based design is compared against the one obtained from a "perfect" model implementing similar functionality; a mismatch in the results would thus indicate a bug in the HDL-based design. Effort is made to design the environment in such a manner that it can support validation during different stages of the design cycle. The validation environment includes appropriate changes to support the architecture changes introduced by radiation hardening. The manner in which the validation environment is built is highly dependent on the specification of the perfect model used for comparison. This work implements the validation environment with two MIPS simulators as reference models. Two bugs have been discovered in the RTL model using simulation-based approaches through the validation environment. / Dissertation/Thesis / M.S. Electrical Engineering 2011
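The core compare-against-a-golden-model loop described above can be sketched as follows; the three callables are hypothetical stand-ins for the simulator interfaces, not the actual environment built in this work:

```python
def validate(run_rtl_sim, run_reference, gen_random_program, num_tests=1000):
    """Run the same random test programs on the HDL design (via its simulator)
    and on a reference MIPS instruction-set simulator, and flag any divergence
    in the resulting architectural state (registers, PC, memory)."""
    mismatches = []
    for i in range(num_tests):
        program = gen_random_program(seed=i)      # random/pseudo-random test
        rtl_state = run_rtl_sim(program)
        ref_state = run_reference(program)
        if rtl_state != ref_state:                # any difference indicates a bug
            mismatches.append((i, rtl_state, ref_state))
    return mismatches
```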
6

A Factorial Experiment on Scalability of Search-based Software Testing

Mehrmand, Arash January 2009 (has links)
Software testing is an expensive but vital process in industry. Constructing the test data accounts for a major share of this cost, so knowing which method to use to generate the test data is very important. This paper discusses the performance of search-based algorithms (specifically, genetic algorithms) versus random testing in software test-data generation. A factorial experiment is designed so that each experiment involves more than one factor. Although much research has been done in the area of automated software testing, this study differs in the sample programs (SUTs) that are used. Since program generation is automatic as well, grammatical evolution is used to guide it. The programs are not goal-based but are generated according to the grammar we provide, at different levels of complexity. The genetic algorithm is applied to the programs first, followed by random testing. Based on the results, this paper recommends which method to use for software testing when the SUT has conditions similar to those in this study. The SUTs are unlike the sample programs provided by other studies, since they are generated from a grammar.
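A minimal sketch of the search-based idea next to pure random generation, using a simple genetic algorithm guided by a branch-distance fitness (the target branch, domain, and GA parameters are invented for illustration and do not reflect the study's setup):

```python
import random

def branch_distance(x):
    # Target branch is reached when x == 4242; distance 0 means covered.
    return abs(x - 4242)

def genetic_search(pop_size=50, generations=200, domain=(-100_000, 100_000)):
    lo, hi = domain
    pop = [random.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=branch_distance)
        if branch_distance(pop[0]) == 0:
            return pop[0]                          # branch covered
        parents = pop[: pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                   # arithmetic crossover
            if random.random() < 0.2:              # mutation
                child += random.randint(-50, 50)
            children.append(max(lo, min(hi, child)))
        pop = children
    return min(pop, key=branch_distance)

def random_search(budget=10_000, domain=(-100_000, 100_000)):
    for _ in range(budget):
        x = random.randint(*domain)
        if branch_distance(x) == 0:
            return x
    return None                                    # branch not covered

print("GA found:", genetic_search(), "| random found:", random_search())
```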
7

Automated Debugging in a Trading System

Ansariramandi, Saeed January 2012 (has links)
Verifying the reliability and functionality of a complex system like a trading system is highly demanding, since failure in such a system can cause serious economic problems. Automated random testing is a good solution for finding new and rare failures in such a system. Test cases in random testing usually contain a long sequence of actions, so debugging them manually to find the root cause of a failure is a tedious task. This thesis aims to create a model that automates the debugging task by reducing a failed test case to an equivalent test case containing only the relevant actions that together cause the failure. Delta debugging, which simplifies a failed test case by successive testing, is the core algorithm of the model. The target of the project is the TRADExpress system of Cinnober Financial Technology AB. The model is integrated into the random testing framework of the TRADExpress system.
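The core reduction step, delta debugging's ddmin, can be sketched as follows; the action sequence and failure predicate are toy stand-ins for replaying a failed TRADExpress test case:

```python
def ddmin(actions, fails):
    """Shrink a failing action sequence to a smaller one that still fails.
    `fails(seq)` must return True when replaying `seq` reproduces the failure."""
    n = 2
    while len(actions) >= 2:
        chunk = max(1, len(actions) // n)
        subsets = [actions[i:i + chunk] for i in range(0, len(actions), chunk)]
        reduced = False
        for i, subset in enumerate(subsets):
            complement = [a for j, s in enumerate(subsets) if j != i for a in s]
            if fails(subset):                             # one chunk already fails
                actions, n, reduced = subset, 2, True
                break
            if len(subsets) > 2 and fails(complement):    # dropping a chunk still fails
                actions, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(actions):                         # finest granularity reached
                break
            n = min(len(actions), n * 2)                  # refine the partition
    return actions

# Toy usage: the failure needs actions 3 and 7 together.
print(ddmin(list(range(10)), lambda seq: 3 in seq and 7 in seq))   # -> [3, 7]
```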
8

Constraint Programming for Random Testing of a Trading System

Castañeda Lozano, Roberto January 2010 (has links)
Financial markets use complex computer trading systems whose failures can cause serious economic damage, making reliability a major concern. Automated random testing has been shown to be useful in finding defects in these systems, but its inherent test oracle problem (the automatic generation of the expected system output) is a drawback that has typically prevented its application on a larger scale. Two main tasks have been carried out in this thesis as a solution to the test oracle problem. First, an independent model of a real trading system based on constraint programming, a method for solving combinatorial problems, has been created. Then, the model has been integrated as a true test oracle in automated random tests. The test oracle maintains the expected state of an order book throughout a sequence of random trade order actions, and provides the expected output of every auction triggered in the order book by generating a corresponding constraint program that is solved with the aid of a constraint programming system. Constraint programming has allowed the development of an inexpensive, yet reliable test oracle. In 500 random test cases, the test oracle detected two system failures. These failures correspond to defects that had been present for several years without being discovered either by less complete oracles or by more systematic testing approaches. The main contributions of this thesis are: (1) empirical evidence of both the suitability of applying constraint programming to solve the test oracle problem and the effectiveness of true test oracles in random testing, and (2) a first attempt, as far as the author is aware, to model a non-theoretical continuous double auction using constraint programming. / Winner of the Swedish AI Society's prize for the best AI Master's Thesis 2010.
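A hedged toy of the oracle idea, using the python-constraint library to compute the expected outcome of a single two-order auction (the price and quantity rules here are invented for illustration and are far simpler than the thesis's order-book model):

```python
# pip install python-constraint
from constraint import Problem

def expected_auction(buy_limit, buy_qty, sell_limit, sell_qty):
    """Toy constraint model acting as a test oracle for one two-order auction."""
    if sell_limit > buy_limit:                      # limits do not cross: no trade
        return None
    problem = Problem()
    problem.addVariable("price", range(sell_limit, buy_limit + 1))
    problem.addVariable("qty", range(1, min(buy_qty, sell_qty) + 1))
    # toy rules: trade the maximum executable quantity at the midpoint price
    problem.addConstraint(lambda q: q == min(buy_qty, sell_qty), ("qty",))
    problem.addConstraint(lambda p: p == (buy_limit + sell_limit) // 2, ("price",))
    solutions = problem.getSolutions()
    return solutions[0] if solutions else None

# The random tester would replay the same two orders against the real system
# and compare its observed trade with this expected result.
print(expected_auction(buy_limit=102, buy_qty=30, sell_limit=100, sell_qty=50))
```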
9

On Reducing the Trusted Computing Base in Binary Verification

An, Xiaoxin 15 June 2022 (has links)
The translation of binary code to higher-level models has wide applications, including decompilation, binary analysis, and binary rewriting. This calls for high reliability of the underlying trusted computing base (TCB) of the translation methodology. A key challenge is to reduce the TCB by validating its soundness. Both the definition of soundness and the validation method heavily depend on the context: what is in the TCB and how to prove it. This dissertation presents three research contributions. The first two contributions include reducing the TCB in binary verification, and the last contribution includes a binary verification process that leverages a reduced TCB. The first contribution targets the validation of OCaml-to-PVS translation -- commonly used to translate instruction-set-architecture (ISA) specifications to PVS -- where the destination language is non-executable. We present a methodology called OPEV to validate the translation between OCaml and PVS, supporting non-executable semantics. The validation includes generating large-scale tests for OCaml implementations, generating test lemmas for PVS, and generating proofs that automatically discharge these lemmas. OPEV incorporates an intermediate type system that captures a large subset of OCaml types, employing a variety of rules to generate test cases for each type. To prove the PVS lemmas, we develop automatic proof strategies and discharge the test lemmas using PVS Proof-Lite, a powerful proof scripting utility of the PVS verification system. We demonstrate our approach in two case studies that include 259 functions selected from the Sail and Lem libraries. For each function, we generate thousands of test lemmas, all of which are automatically discharged. The dissertation's second contribution targets the soundness validation of a disassembly process where the source language does not have well-defined semantics. Disassembly is a crucial step in binary security, reverse engineering, and binary verification. Various studies in these fields use disassembly tools and hypothesize that the reconstructed disassembly is correct. However, disassembly is an undecidable problem. State-of-the-art disassemblers suffer from issues ranging from incorrectly recovered instructions to incorrectly assessing which addresses belong to instructions and which to data. We present DSV, a systematic and automated approach to validate whether the output of a disassembler is sound with respect to the input binary. No source code, debugging information, or annotations are required. DSV defines soundness using a transition relation defined over concrete machine states: a binary is sound if, for all addresses in the binary that can be reached from the binary's entry point, the bytes of the (disassembled) instruction located at an address are the same as the actual bytes read from the binary. Since computing this transition relation is undecidable, DSV uses over-approximation by preventing false positives (i.e., the existence of an incorrectly disassembled reachable instruction but deemed unreachable) and allowing, but minimizing, false negatives. We apply DSV to 102 binaries of GNU Coreutils with eight different state-of-the-art disassemblers from academia and industry. DSV is able to find soundness issues in the output of all disassemblers. The dissertation's third contribution is WinCheck: a concolic model checker that detects memory-related properties of closed-source binaries. 
Bugs related to memory accesses are still a major issue for security vulnerabilities. Even a single buffer overflow or use-after-free in a large program may be the cause of a software crash, a data leak, or a hijacking of the control flow. Typical static formal verification tools aim to detect these issues at the source code level. WinCheck is a model-checker that is directly applicable to closed-source and stripped Windows executables. A key characteristic of WinCheck is that it performs its execution as symbolically as possible while leaving any information related to pointers concrete. This produces a model checker tailored to pointer-related properties, such as buffer overflows, use-after-free, null-pointer dereferences, and reading from uninitialized memory. The technique thus provides a novel trade-off between ease of use, accuracy, applicability, and scalability. We apply WinCheck to ten closed-source binaries available in a Windows 10 distribution, as well as the Windows version of the entire Coreutils library. We conclude that the approach taken is precise -- provides only a few false negatives -- but may not explore the entire state space due to unresolved indirect jumps. / Doctor of Philosophy / Binary verification is a process that verifies a class of properties, usually security-related properties, on binary files, and does not need access to source code. Since a binary file is composed of byte sequences and is not human-readable, in the binary verification process, a number of assumptions are usually made. The assumptions often involve the error-free nature of a set of subsystems used in the verification process and constitute the verification process's trusted computing base (or TCB). The reliability of the verification process therefore depends on how reliable the TCB is. The dissertation presents three research contributions in this regard. The first two contributions include reducing the TCB in binary verification, and the last contribution includes a binary verification process that leverages a reduced TCB. The dissertation's first contribution presents a validation on OCaml-to-PVS translations -- commonly used to translate a computer architecture's instruction specifications to PVS, a language that allows mathematical specifications. To build up a reliable semantical model of assembly instructions, which is assumed to be in the TCB, it is necessary to validate the translation. The dissertation's second contribution validates the soundness of the disassembly process, which translates a binary file to corresponding assembly instructions. Since the disassembly process is generally assumed to be trustworthy in many binary verification works, the TCB of binary verification could be reduced by validating the soundness of the disassembly process. With the reduced TCB, the dissertation introduces WinCheck, the dissertation's third and final contribution: a concolic model checker that validates pointer-related properties of closed-source Windows binaries. The pointer-related properties include absence of buffer overflow, absence of use-after-free, and absence of null-pointer dereference.
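The soundness definition quoted above reduces to a byte-level comparison; a minimal sketch with made-up data structures (a hypothetical `{address: instruction_bytes}` map and a raw text-section image) follows:

```python
def check_soundness(disassembly, text_bytes, text_base, reachable):
    """For every reachable address, the disassembled instruction's bytes must
    equal the bytes actually stored in the binary at that address."""
    issues = []
    for addr in sorted(reachable):
        insn_bytes = disassembly.get(addr)
        if insn_bytes is None:
            issues.append((addr, "reachable address not disassembled"))
            continue
        offset = addr - text_base
        if text_bytes[offset:offset + len(insn_bytes)] != insn_bytes:
            issues.append((addr, "disassembled bytes differ from binary bytes"))
    return issues

# Toy usage: push rbp; mov rbp, rsp; ret (x86-64 encodings).
text = bytes([0x55, 0x48, 0x89, 0xE5, 0xC3])
disasm = {0x1000: b"\x55", 0x1001: b"\x48\x89\xe5", 0x1004: b"\xc3"}
print(check_soundness(disasm, text, 0x1000, reachable={0x1000, 0x1001, 0x1004}))
```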
10

Testing the Internet state management mechanism

Tappenden, Andrew 06 1900 (has links)
This thesis presents an extensive survey of 100,000 websites as the basis for understanding the deployment of cookies across the Internet. The survey indicates that cookie deployment on the Internet is approaching universal levels. The survey identifies the presence of P3P policies and dynamic web technologies as major predictors of cookie usage, and a number of significant relationships are established between the origin of the web application and cookie deployment. Large associations are identified between third-party persistent cookie usage and a country's e-business environment. Cookie collection testing (CCT), a strategy for testing web applications, is presented. Cookies maintained in a browser are explored in light of antirandom testing techniques, culminating in the definition of seeding vectors as the basis for a scalable test suite. Essentially, CCT seeks to verify web application robustness against the modification, intentional or otherwise, of an application's internal state variables. Automation of CCT is outlined through the definition of test oracles and evaluation criteria. Evolutionary adaptive random (eAR) testing is proposed for application to the cookie collection testing strategy. A simulation study is undertaken to evaluate eAR against the current state of the art in adaptive random testing: fixed-size candidate set, restricted random testing, quasi-random testing, and random testing. eAR is demonstrated to be superior to the other techniques for block-pattern simulations. For fault patterns of increased complexity, eAR is shown to be comparable to the other methods. An empirical investigation of CCT is undertaken. CCT is demonstrated to reveal defects within web applications and is found to have a substantial fault-triggering rate. Furthermore, CCT is demonstrated to interact with the underlying application, not just the technological platform upon which an application is implemented. Both seeding and generated vectors are found to be useful in triggering defects. A synergetic relationship is found to exist between the seeding and generated vectors with respect to distinct fault detection. Finally, a large significant relationship is established between structural and content similarity measures of web application responses, with a composite of the two similarity measures observed to be superior in the detection of faults. / Software Engineering and Intelligent Systems
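A minimal sketch of the kind of block-pattern simulation and F-measure comparison described above; the failure-region geometry and parameters are invented, and the eAR, fixed-size-candidate-set, restricted, and quasi-random strategies would plug in as alternative `select_next` functions:

```python
import random

def f_measure(select_next, failure_region, max_tests=100_000):
    """Number of test cases executed until the first one hits the failure region."""
    executed = []
    for count in range(1, max_tests + 1):
        t = select_next(executed)
        executed.append(t)
        if failure_region(t):
            return count
    return max_tests

# Block pattern: a square failure region covering 0.5% of the unit-square domain.
SIDE = 0.005 ** 0.5
block_region = lambda p: 0.37 <= p[0] <= 0.37 + SIDE and 0.62 <= p[1] <= 0.62 + SIDE

random_select = lambda executed: (random.random(), random.random())

trials = [f_measure(random_select, block_region) for _ in range(200)]
print("random testing, mean F-measure:", sum(trials) / len(trials))
```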
