31.
Improving Branch Coverage in RTL Circuits with Signal Domain Analysis and Restrictive Symbolic Execution. Bagri, Sharad. 18 March 2015.
Considerable research has been directed towards efficient test stimuli generation for Register Transfer Level (RTL) circuits. However, stimuli generation frameworks are still not capable of generating effective stimuli for all circuits. Two limiting factors are: 1) it is hard to ascertain whether a branch in the RTL code is reachable, and 2) some hard-to-reach branches require intelligent algorithms to reach them.
Since unreachable branches cannot be reached by any test sequence, we propose a method to deduce the unreachability of a branch by analyzing the possible values a signal can take in the RTL code, without explicitly unrolling the design. To the best of our knowledge, this method identifies more unreachable branches than any previously published method in this domain, while being computationally less expensive.
Moreover, some branches require very specific values on input signals in specific cycles to reach them. Conventional symbolic execution can generate those values but is computationally expensive. We propose a cycle-by-cycle restrictive symbolic execution that analyzes only a selected subset of program statements to reduce the computational cost. Our proposed method gathers information from an initial execution trace, generated by any technique, to intelligently decide the specific cycles where applying the method will be helpful. This method can be combined with simulation-based test stimuli generation methods to reduce the cost of formal verification. With this method, we were able to reach some previously unreached branches in ITC99 benchmark circuits. / Master of Science
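The unreachability deduction lends itself to a compact illustration. The sketch below is a hedged approximation of the idea rather than the thesis's implementation: it over-approximates each signal's value domain from the constants assigned to it anywhere in the RTL (so no unrolling is needed) and flags a branch as unreachable when no domain value satisfies its guard. All names (`collect_domains`, `branch_unreachable`) are hypothetical.

```python
# Hedged sketch of signal-domain-based unreachability deduction: propagate
# the set of values each signal can ever be assigned, then flag a branch as
# unreachable when no value in the guard signal's domain satisfies it.

def collect_domains(assignments):
    """assignments: (signal, constant) pairs harvested from the RTL.
    Returns signal -> set of constants it may hold (plus reset value 0)."""
    domains = {}
    for sig, const in assignments:
        domains.setdefault(sig, {0}).add(const)
    return domains

def branch_unreachable(domains, sig, op, const):
    """A guard like `if (sig == const)` is unreachable when no value in
    sig's over-approximated domain can satisfy it."""
    ops = {"==": lambda v: v == const, "!=": lambda v: v != const,
           "<": lambda v: v < const, ">": lambda v: v > const}
    return not any(ops[op](v) for v in domains.get(sig, {0}))

# Toy example: `state` is only ever assigned 0, 1 or 2, so the branch
# `if (state == 5)` cannot be taken by any stimulus.
doms = collect_domains([("state", 1), ("state", 2)])
print(branch_unreachable(doms, "state", "==", 5))   # True  -> unreachable
print(branch_unreachable(doms, "state", "==", 2))   # False -> possibly reachable
```

Because the domain is an over-approximation, a `True` verdict is sound (the branch truly cannot fire), while a `False` verdict is inconclusive.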
32.
Branch Guided Metrics for Functional and Gate-level Testing. Acharya, Vineeth Vadiraj. 31 March 2015.
With the increasing complexity of modern day processors and systems-on-a-chip (SoCs), designers invest a lot of time and resources into testing and validating these designs. To reduce the time-to-market and cost, the techniques used to validate these designs have to constantly improve. Since most of the design activity has moved to the register transfer level (RTL), test methodologies at the RTL have been gaining momentum. We present a novel framework for functional test generation at the RTL. A popular software-based metric for measuring the effectiveness of an RTL test suite is branch coverage, but exercising hard-to-reach branches is still a challenge and requires a good understanding of the design semantics. The proposed framework uses static analysis to extract certain semantics of the circuit and several data structures to model them. Using these data structures, we assist a branch-guided search to exercise the hard-to-reach branches. Since the correlation between high branch coverage and the detection of defects and bugs is unclear, we also present a new metric at the RTL which augments branch coverage with state values. Vectors that score higher on the new metric achieve higher branch and state coverage, and can therefore be applied at different levels of abstraction, such as post-silicon validation. Experimental results show that using the new metric in our test generation framework achieves a high level of branch and fault coverage for several benchmark circuits, while reducing the length of the vector sequence.
This work was supported in part by NSF grant 1016675. / Master of Science
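The abstract does not spell out the metric's formula, but the idea of augmenting branch coverage with state values can be sketched as follows. This is a hedged illustration assuming a simple linear combination; the weighting and the exact state representation (`score`, `weight_state`) are hypothetical, not the thesis's definition.

```python
# Hedged sketch: score a vector sequence by branch coverage plus the
# diversity of state values it drives the design through, since branch
# coverage alone correlates weakly with defect detection.

def score(trace, total_branches, weight_state=0.5):
    """trace: list of (branch_id, state_tuple) pairs recorded while
    simulating one vector sequence on the design."""
    branches = {b for b, _ in trace}          # distinct branches exercised
    states = {s for _, s in trace}            # distinct state values seen
    branch_cov = len(branches) / total_branches
    return branch_cov + weight_state * len(states) / max(len(trace), 1)

trace = [(1, (0, 0)), (2, (0, 1)), (2, (1, 1)), (3, (2, 1))]
print(score(trace, total_branches=10))        # higher = better vector set
```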
33.
Improving Bio-Inspired Frameworks. Varadarajan, Aravind Krishnan. 05 October 2018.
In this thesis, we provide improvements to two different bio-inspired algorithms. The first is enhancing the performance of bio-inspired test generation for circuits described in RTL Verilog, specifically for branch coverage. We seek to improve the performance of an existing framework, BEACON, an Ant Colony Optimization (ACO) based test generation framework. Like other ACO frameworks, BEACON offers good scope for performance improvement through parallel computing. We exploit the available parallelism using both multi-core Central Processing Units (CPUs) and Graphics Processing Units (GPUs). Using our new multithreaded approach, we reduce test generation time by a factor of 25 compared to the original implementation for a wide variety of circuits. We also provide a 2-dimensional factoring method for BEACON that improves the available parallelism to yield additional speedup. The second bio-inspired algorithm we address concerns Deep Neural Networks. With the increasing prevalence of neural nets in artificial intelligence and in mission-critical applications such as self-driving cars, questions arise about their reliability and robustness. We have developed a test-generation-based technique and metric to evaluate the robustness of a neural net's outputs based on its sensitivity to its inputs. This is done by generating inputs which the neural net finds difficult to classify but which are relatively apparent to human perception. We measure the degree of difficulty of generating such inputs to calculate our metric. / MS / High-level Hardware Design Languages (HDLs) have allowed designers to implement complicated hardware designs with considerably less effort. Unfortunately, design verification for the same circuits has failed to scale gracefully in terms of time and effort. Not only has verification become more difficult for formal methods, due to the exponential complexity of increasing path explosion, but concrete test generation frameworks also face new issues, such as the increased volume of simulations required. The advent of parallel computing using General Purpose Graphics Processing Units (GPGPUs) has led to improved performance for various applications. We propose to leverage both the multi-core CPU and the GPGPU for RTL test generation. This is achieved by implementing a test generation framework that can utilize the SIMD-type parallelism available in GPGPUs and the task-level parallelism available on CPUs. The speedup is extracted both from the test generation framework itself and from refactoring the hardware model for multi-threaded test generation. For this purpose, we translate the RTL Verilog into a C++ and a CUDA compilable program. Experimental results show that considerable speedup can be achieved for test generation without loss of coverage.
In recent years, machine learning and artificial intelligence have taken a substantial leap forward with the development of Deep Neural Networks (DNNs). Unfortunately, apart from accuracy and F-test numbers, there exist very few metrics to qualify a DNN. This becomes a reliability issue, as DNNs are quite frequently used in safety-critical applications. It is difficult to interpret how the parameters of a trained DNN store the knowledge from the training inputs. Therefore, it is also difficult to infer whether a DNN has learned parameters which might cause an output neuron to misfire wrongly, i.e., a bug. An exhaustive search of the input space of the DNN is not only infeasible but also misleading. Thus, in our work, we apply test generation techniques to generate new test inputs, based on the existing training and testing sets, to qualify the underlying robustness. Attempts to generate these inputs are guided only by the prediction probability values at the final output layer. We observe that, depending on the amount of perturbation and the time needed to generate these inputs, we can differentiate between DNNs of varying quality.
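The parallelization pattern behind the ACO speedup is worth making concrete. The sketch below is a hedged stand-in, not BEACON's code: each "ant" builds a candidate vector sequence and is scored by an independent simulation, so candidate evaluation parallelizes naturally across workers. `simulate`, `aco_generation`, and the pheromone model are all illustrative simplifications.

```python
# Hedged sketch of parallel ACO test generation: ants construct stimuli
# guided by pheromone weights; their simulations run concurrently; the
# best ant's choices are reinforced for the next generation.
import random
from concurrent.futures import ThreadPoolExecutor

def simulate(vectors):
    """Stand-in for a compiled design simulator: returns covered branches.
    A real framework would run the C++/CUDA model of the circuit here."""
    random.seed(hash(tuple(vectors)))
    return {random.randrange(100) for _ in range(len(vectors))}

def aco_generation(pheromone, n_ants=8, length=5):
    ants = [[max(range(4), key=lambda v: pheromone[v] * random.random())
             for _ in range(length)] for _ in range(n_ants)]
    with ThreadPoolExecutor(max_workers=n_ants) as pool:
        coverages = list(pool.map(simulate, ants))   # parallel evaluation
    best = max(range(n_ants), key=lambda i: len(coverages[i]))
    for v in ants[best]:
        pheromone[v] += 1.0                          # reinforce best ant
    return ants[best], coverages[best]

pheromone = {v: 1.0 for v in range(4)}
best_ant, cov = aco_generation(pheromone)
print(len(cov), "branches covered by best ant")
```

In a real setting the simulation step dominates, so distributing it over CPU cores (or batching ants onto a GPU, as the thesis does) is where the factor-of-25 reduction comes from.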
34.
Secure Coding Practice in Java: Automatic Detection, Repair, and Vulnerability Demonstration. Zhang, Ying. 12 October 2023.
The Java platform and third-party open-source libraries provide various Application Programming Interfaces (APIs) to facilitate secure coding. However, using these APIs securely is challenging for developers who lack cybersecurity training. Prior studies show that many developers use APIs insecurely, thereby introducing vulnerabilities in their software. Despite the availability of various tools designed to identify insecure API usage, their effectiveness in helping developers with secure coding practices remains unclear. This dissertation focuses on two main objectives: (1) exploring the strengths and weaknesses of the existing automated detection tools for API-related vulnerabilities, and (2) creating better tools that detect, repair, and demonstrate these vulnerabilities.
Our research started with investigating the effectiveness of current tools in helping developers with secure coding practices. We systematically explored the strengths and weaknesses of existing automated tools for detecting API-related vulnerabilities. Through comprehensive analysis, we observed that most existing tools merely report misuses without suggesting any customized fixes. Moreover, developers often rejected tool-generated vulnerability reports due to concerns about the correctness of the detection and the exploitability of the reported issues. To address these limitations, the second work proposed SEADER, an example-based approach to detect and repair security-API misuses. Given an exemplar ⟨insecure, secure⟩ code pair, SEADER compares the snippets to infer the API-misuse template and the corresponding fixing edit. Based on the inferred information, given a program, SEADER performs inter-procedural static analysis to search for security-API misuses and to propose customized fixes. The third work leverages ChatGPT-4.0 to automatically generate security test cases. These test cases can demonstrate how vulnerable API usage facilitates supply chain attacks on specific software applications. By running such test cases during software development and maintenance, developers can gain more relevant information about exposed vulnerabilities, and may better create secure-by-design and secure-by-default software. / Doctor of Philosophy / The Java platform and third-party open-source libraries provide various Application Programming Interfaces (APIs) to facilitate secure coding. However, using these APIs securely can be challenging, especially for developers who aren't trained in cybersecurity. Prior work shows that many developers use APIs insecurely, consequently introducing vulnerabilities in their software. Despite the availability of various tools designed to identify insecure API usage, it is still unclear how well they help developers with secure coding practices.
This dissertation focuses on (1) exploring the strengths and weaknesses of the existing automated detection tools for API-related vulnerabilities, and (2) creating better tools that detect, repair, and demonstrate these vulnerabilities. We first systematically evaluated the strengths and weaknesses of the existing automated API-related vulnerability detection tools. We observed that most existing tools merely report misuses without suggesting any customized fixes. Additionally, developers often reject tool-generated vulnerability reports due to concerns about the correctness of the detection and whether the reported vulnerabilities are truly exploitable. To address the limitations found in our study, the second work proposed a novel example-based approach, SEADER, to detect and repair insecure API usage. The third work leverages ChatGPT-4.0 to automatically generate security test cases and to demonstrate how vulnerable API usage facilitates supply chain attacks on given software applications.
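To make the example-based idea tangible, here is a heavily simplified, hedged sketch in the spirit of SEADER: diff an ⟨insecure, secure⟩ pair at the level of API calls to infer a rewrite template, then flag and patch other code matching the insecure side. The real tool works on ASTs with inter-procedural static analysis; the string matching below (`infer_template`, `detect_and_repair`) is only for illustration.

```python
# Hedged sketch of example-based misuse detection and repair: infer a
# <bad_call, good_call> template from one exemplar pair, then propose a
# customized fix wherever the bad call appears.
import re

CALL = re.compile(r"(\w+(?:\.\w+)*)\(([^)]*)\)")   # crude call matcher

def infer_template(insecure, secure):
    """Return (bad_call, good_call) for the first API call that differs."""
    for bad, good in zip(CALL.finditer(insecure), CALL.finditer(secure)):
        if bad.group(0) != good.group(0):
            return bad.group(0), good.group(0)
    return None

def detect_and_repair(template, code):
    bad, good = template
    if bad in code:
        return code.replace(bad, good)   # customized fix proposal
    return None

# Classic Java crypto misuse: DES is broken, AES-GCM is the secure choice.
pair = ('Cipher.getInstance("DES")', 'Cipher.getInstance("AES/GCM/NoPadding")')
tmpl = infer_template(*pair)
print(detect_and_repair(tmpl, 'c = Cipher.getInstance("DES");'))
```

The point the dissertation makes is precisely that reporting the misuse alone is not enough; carrying the fixing edit along with the detection template is what gets developers to act on the report.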
35.
RTL Functional Test Generation Using Factored Concolic Execution. Pinto, Sonal. 21 July 2017.
This thesis presents a novel concolic testing methodology and CORT, a test generation framework that uses it for high-level functional test generation. The test generation effort is visualized as the systematic unraveling of the control-flow response of the design over multiple (factored) explorations. We begin by transforming the Register Transfer Level (RTL) source for the design into a high-performance C++ compiled functional simulator instrumented for branch coverage. An exploration begins by simulating the design with concrete stimuli. Then, we perform an interleaved cycle-by-cycle symbolic evaluation over the concrete execution trace extracted from the Control Flow Graph (CFG) of the design. The purpose of this task is to dynamically discover means to divert the control flow of the system by mutating primary-input-stimulated control statements in this trace. We record the control-flow response as a Test Decision Tree (TDT), a new representation of the test generation effort. Successive explorations begin at system states heuristically selected from a global TDT, onto which each new decision tree resulting from an exploration is stitched. CORT constructs functional tests for the ITC99 and IWLS-2005 benchmarks that achieve high branch coverage using the fewest input vectors, faster than existing methods. Furthermore, we achieve orders-of-magnitude speedup compared to previous hybrid concrete and symbolic simulation based techniques. / Master of Science / In recent years, the cost of verifying digital designs has outpaced the cost of development, in terms of both resources and time. The scale and complexity of modern designs have made it increasingly impractical to verify the design manually. In the process of circuit design, designers use Hardware Description Languages (HDLs) to abstract the design in a manner similar to software programming languages. This thesis presents a novel methodology for automating the testing of functional-level hardware descriptions with the aim of maximizing branch coverage. Branches indicate decision points in the design, and tests with high branch coverage are able to exercise the design thoroughly in a manner that randomly generated tests cannot. In our work, the design is simulated concretely with a random test (a sequence of inputs, or stimuli). During simulation, we analyze the flow of behavioral statements and decisions executed, constructing a formulaic interpretation of the design's execution in terms of syntactical elements, to uncover differentiating inputs that could have diverted the flow of execution to unstimulated parts of the design. This process is formally known as concolic execution. The techniques described in this thesis tightly interleave concrete and symbolic simulation (concolic execution) of hardware designs to generate tests with high branch coverage, orders of magnitude faster than previous similar work.
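The exploration loop and the TDT structure can be sketched compactly. The code below is a hedged toy, not CORT: the one-line "design," the trivial flip heuristic standing in for symbolic evaluation, and all names (`TDTNode`, `step`, `explore`) are illustrative assumptions.

```python
# Hedged sketch of factored concolic exploration: simulate concretely,
# record each cycle's branch decision in a Test Decision Tree, and note
# input mutations that would divert control flow on a later exploration.

class TDTNode:
    def __init__(self, branch_id, taken):
        self.branch_id, self.taken = branch_id, taken
        self.children = {}                 # input value -> TDTNode

def step(state, inp):
    """Toy one-cycle design with a single input-stimulated guard."""
    if inp > 7:                            # branch "b1"
        return state + 1, ("b1", True)
    return state, ("b1", False)

def explore(inputs):
    """One exploration: build a TDT path and collect flip candidates."""
    state, root = 0, TDTNode("root", True)
    node, flips = root, []
    for cycle, inp in enumerate(inputs):
        state, (bid, taken) = step(state, inp)
        child = TDTNode(bid, taken)
        node.children[inp] = child         # stitch this cycle onto the TDT
        node = child
        # Trivial stand-in for symbolic evaluation: an input value that
        # would flip this guard in a future exploration.
        flips.append((cycle, 8 if not taken else 7))
    return root, flips

root, flips = explore([3, 5, 2])           # concrete seed stimuli
print(flips)                               # cycles where control can divert
```

In the real framework the flip candidates come from solving the mutated guard condition, and each new exploration's decision tree is grafted onto the global TDT at the heuristically chosen restart state.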
36.
Strategies for Scalable Symbolic Execution-based Test Generation. Krishnamoorthy, Saparya. 02 August 2010.
With the advent of advanced program analysis and constraint solving techniques, several test generation tools use variants of symbolic execution. Symbolic techniques have been shown to be very effective in path-based test generation; however, they fail to scale to large programs due to the exponential number of paths to be explored. In this thesis, we focus on tackling this path explosion problem and propose search strategies to achieve quick branch coverage under symbolic execution while exploring only a fraction of the paths in the program. We present a reachability-guided strategy that makes use of the reachability graph of the program to explore unvisited portions of the program, and a conflict-driven backtracking strategy that utilizes conflict analysis to perform non-chronological backtracking. We also propose error-directed search strategies that aim to catch bugs in the program faster by targeting those parts of the program where bugs are likely to be found or that are hard to reach. We present experimental evidence that these strategies can significantly reduce the search space and improve the speed of test generation for programs. / Master of Science
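The reachability-guided idea admits a small, hedged sketch: among the pending branch alternatives, expand first the one whose CFG node can still reach the most uncovered code. The BFS distance computation below stands in for the thesis's reachability-graph analysis; `pick_branch` and the toy graph are illustrative assumptions.

```python
# Hedged sketch of a reachability-guided search strategy for symbolic
# execution: prioritize the pending branch that can reach the largest
# amount of uncovered code.
from collections import deque

def reachable(cfg, start):
    """All CFG nodes reachable from `start` (BFS)."""
    seen, work = {start}, deque([start])
    while work:
        n = work.popleft()
        for m in cfg.get(n, []):
            if m not in seen:
                seen.add(m)
                work.append(m)
    return seen

def pick_branch(pending, cfg, covered):
    """pending: CFG nodes whose unexplored side we could take next."""
    return max(pending, key=lambda node: len(reachable(cfg, node) - covered))

cfg = {"a": ["b", "c"], "b": ["d"], "c": ["e", "f"], "e": ["g"]}
covered = {"a", "b", "d"}
print(pick_branch(["b", "c"], cfg, covered))   # "c": reaches more uncovered code
```

Conflict-driven backtracking complements this: when a path constraint becomes unsatisfiable, the solver's conflict clause identifies which earlier decision to jump back to, instead of backtracking one level at a time.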
37.
Classification of error threats by static analysis, program slicing and structural testing of programs. Chebaro, Omar. 13 December 2011.
Software validation remains a crucial part of the software development process. Two major techniques have improved in recent years: static analysis and dynamic analysis. Their strengths and weaknesses are complementary. In this thesis we present a new, original combination of these methods that makes the search for runtime errors more accurate and automatic, and that reduces the number of false alarms; we also prove the correctness of the method. In this combination, static analysis reports alarms for potential runtime errors, some of which may be false alarms, and test generation is used to confirm or reject these alarms. When applied to large programs, test generation may run out of time or memory before confirming certain alarms as real bugs or showing that no execution path can reach the error state of certain alarms (and thus rejecting them). To overcome this problem, we propose to reduce the source code by program slicing before running test generation. Program slicing transforms a program into a simpler program, called a slice, which is equivalent to the original program with respect to a given criterion. Four usages of program slicing were studied. The first usage, called all, applies slicing only once, with the set of all alarms detected in the program as the simplification criterion. Its disadvantage is that test generation may still run out of time or memory, and the alarms that are easiest to classify are penalized by the analysis of other, more complex alarms. In the second usage, called each, slicing is performed separately with respect to each alarm. However, test generation is then executed once per sliced program, and there is a risk of redundant analysis if some alarms are included in several slices. To overcome these drawbacks, we studied the dependencies between alarms and introduced two advanced usages of slicing, called min and smart, that exploit these dependencies. In the min usage, slicing is performed with respect to a minimal set of alarm subsets. These subsets are selected based on the dependencies between alarms, and their union covers the whole set of alarms. With this usage, we analyze fewer slices than with each, and simpler slices than with all. However, the dynamic analysis of some slices may still run out of time or memory before classifying certain alarms, while the dynamic analysis of a simpler slice could classify them. The smart usage applies the min usage iteratively, reducing the size of the subsets when necessary: when an alarm cannot be classified by the dynamic analysis of a slice, simpler slices are computed. This work is implemented in sante, our tool that combines the test generation tool PathCrawler with the static analysis platform Frama-C. Experiments have shown, first, that our combination is more effective than each technique used separately and, second, that verification becomes faster after reducing the code with program slicing. Simplifying the program by slicing also makes the detected errors and the remaining alarms easier to analyze.
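The subset selection behind the min usage can be illustrated with a hedged sketch: group alarms that depend on one another into connected components, and slice once per component, so the union of criteria covers every alarm. The dependency relation here is given directly; sante derives it from the program's data and control dependences. `alarm_subsets` is a hypothetical name.

```python
# Hedged sketch of the "min" usage's grouping step: each connected
# component of the alarm dependency relation becomes one slicing
# criterion, so fewer slices than "each", simpler slices than "all".

def alarm_subsets(alarms, depends):
    groups, seen = [], set()
    for a in alarms:
        if a in seen:
            continue
        comp, stack = set(), [a]          # flood-fill one component
        while stack:
            x = stack.pop()
            if x not in comp:
                comp.add(x)
                stack.extend(depends.get(x, []))
        seen |= comp
        groups.append(comp)
    return groups                          # union of groups covers all alarms

alarms = ["a1", "a2", "a3", "a4"]
depends = {"a1": ["a2"], "a2": ["a1"], "a3": []}
print(alarm_subsets(alarms, depends))      # [{a1, a2}, {a3}, {a4}]
```

The smart usage then shrinks a component further (splitting it into smaller criteria) whenever the slice built from it defeats the test generator's time or memory budget.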
38.
Symbolic string execution. Redelinghuys, Gideon.
Thesis (MSc)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: Symbolic execution is a well-established technique for automated test generation and for finding errors in complex code. Most of the focus has, however, been on programs that manipulate integers, booleans, and even references in object-oriented programs. Recently, researchers have started looking at programs that do a lot of string processing, motivated in part by the popularity of the web and the risk that errors in web servers may lead to security violations. Attempts to extend symbolic execution to the domain of strings mainly fall into one of two camps: automata-based approaches and approaches based on bitvector analysis. Here we investigate these two approaches in a unified setting, namely the symbolic execution framework of Java PathFinder. We describe the implementations of both approaches and then evaluate under what circumstances each approach performs well (or not so well). We also illustrate the usefulness of the symbolic execution of strings by finding errors in real-world examples.
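The contrast between the two encodings can be shown with a hedged toy. Both functions below decide the same string constraint (starts with "ab", length 3); brute-force enumeration stands in for the real machinery, with Python's regex engine playing the role of a DFA product construction and the position-wise checks playing the role of an SMT bitvector solver. Names and the constraint are illustrative assumptions.

```python
# Hedged illustration of the two string-constraint encodings compared in
# the thesis, for the constraint: s.startswith("ab") and len(s) == 3.
import itertools
import re

def automata_sat(pattern, length, alphabet="ab"):
    """Automata flavour: is L(pattern) intersected with Sigma^length
    non-empty? (regex engine standing in for a DFA intersection)."""
    return any(re.fullmatch(pattern, "".join(w)) is not None
               for w in itertools.product(alphabet, repeat=length))

def bitvector_sat(length, alphabet="ab"):
    """Bitvector flavour: one variable per character position, with
    constraints pinned on positions (s[0]=='a', s[1]=='b'); brute search
    stands in for an SMT bitvector solver."""
    for w in itertools.product(alphabet, repeat=length):
        if w[0] == "a" and w[1] == "b":
            return "".join(w)
    return None

print(automata_sat(r"ab.*", 3))   # True: some 3-char string starts with "ab"
print(bitvector_sat(3))           # "aba": a concrete witness
```

The trade-off the thesis evaluates follows from the encodings: automata handle unbounded-length and regular-language constraints naturally, while bitvectors fix a length up front but integrate cleanly with the solver's integer reasoning.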
39.
Efficient state space exploration for parallel test generation. Ramasamy Kandasamy, Manimozhian. 03 September 2009.
Automating the generation of test cases for software is an active area of research. Specification-based test generation is an approach in which a formal representation of a method is analyzed to generate valid test cases. Constraint solving and state space exploration are important aspects of specification-based test generation. One problem with specification-based testing is that the size of the state space explodes when the approach is applied to code of practical size. Hence, finding ways to reduce the number of candidates to explore within the state space is important to make this approach practical in industry. Korat is a tool which generates test cases for Java programs based on predicates that validate the inputs to the method. Various ongoing research efforts intend to increase the tool's effectiveness in handling large state spaces; parallelizing Korat and minimizing the exploration of invalid candidates are the active research directions. This report surveys the basic algorithms of Korat, PKorat, and Fast Korat. PKorat is a parallel version of Korat that aims to take advantage of the multi-processor and multi-core systems available. Fast Korat implements four optimizations which reduce the number of candidates explored to generate valid candidates and reduce the amount of time required to explore each candidate. This report also presents execution time results for generating test candidates for binary trees, doubly linked lists, and sorted singly linked lists from their respective predicates.
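Korat's core trick, pruning by the fields the predicate actually accessed, fits in a short hedged sketch. The miniature below encodes a sorted singly linked list as a value sequence rather than real Java objects; `rep_ok_sorted` plays the role of the repOk predicate, and the backtracking over the last accessed field is the pruning idea, not Korat's actual implementation.

```python
# Miniature, hedged Korat-style search: enumerate candidates over finite
# field domains, keep those satisfying repOk, and prune by accessed
# fields -- if repOk rejected after reading only a prefix, every candidate
# sharing that prefix is skipped without being built.

def rep_ok_sorted(values):
    """Predicate for a sorted list. Returns (valid, fields_read) so the
    search knows how far the predicate actually looked."""
    for i in range(1, len(values)):
        if values[i - 1] > values[i]:
            return False, i + 1            # rejected after i+1 reads
    return True, len(values)

def korat(domain, size):
    cand, idx, out = [domain[0]] * size, [0] * size, []
    while True:
        ok, accessed = rep_ok_sorted(cand)
        if ok:
            out.append(tuple(cand))
        # Backtrack over the last *accessed* field only: fields the
        # predicate never read cannot change its verdict.
        i = accessed - 1
        while i >= 0 and idx[i] == len(domain) - 1:
            idx[i], cand[i] = 0, domain[0]
            i -= 1
        if i < 0:
            return out
        idx[i] += 1
        cand[i] = domain[idx[i]]

print(korat(domain=[0, 1, 2], size=3))     # all 10 sorted 3-element lists
```

PKorat parallelizes exactly this loop by handing disjoint regions of the candidate ordering to different workers, which is why the field-access pruning and the work partitioning interact.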
40.
Passive interoperability testing for communication protocols. Chen, Nanxing. 24 June 2013.
In the field of networking, testing of communication protocols is an important activity to validate protocol applications before commercialization. Generally, the services that a protocol must provide are described in its specification. A specification is generally a standard defined by standards bodies such as ISO (International Standards Organization), IETF (Internet Engineering Task Force), ITU (International Telecommunication Union), etc. The purpose of testing is to verify that protocol implementations work correctly and guarantee the quality of the services in order to meet customers' expectations. To achieve this goal, a variety of testing methods have been developed. Conformance testing verifies that a product conforms to its specification. Robustness testing determines the degree to which a system operates correctly in the presence of exceptional inputs or stressful environmental conditions. In this thesis, we focus on interoperability testing, which verifies that several network components cooperate correctly and provide the expected services. The general architecture of interoperability testing involves a system under test (SUT), which consists of at least two implementations under test (IUTs). The objectives of interoperability testing are to ensure that interconnected protocol implementations are able to interact correctly and, during their interaction, provide the services predefined in their respective specifications. In general, the methods of interoperability testing can be classified into two approaches: active and passive testing. Active testing is the most conventionally used technique; it tests the implementations by injecting a series of test messages (stimuli) and observing the corresponding outputs. However, active testing is intrusive by nature: the tester controls the IUTs and thus inevitably disturbs the normal operation of the system under test. In this sense, active testing is not a suitable technique for interoperability testing, which is often carried out in operational networks, where it is difficult to insert arbitrary test messages without affecting the normal behavior and the services of the system. On the contrary, passive testing is a technique based only on observation. The tester does not need to interact with the SUT, which allows the test to be carried out without disturbing the system's normal operating environment. Passive testing has other advantages as well: for embedded systems to which the tester does not have direct access, testing can still be performed by collecting the execution traces of the system and then detecting errors or deviations by comparing the traces with the behavior described in the specification. In addition, passive testing makes it possible to monitor a system over a long period and to report anomalies at any time.
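The passive-testing core, checking an observed trace against the specification without ever stimulating the SUT, can be sketched as follows. This is a hedged toy, assuming the specification is a simple deterministic FSM over abstract message names; `SPEC`, `passive_check`, and the protocol are invented for illustration. Because a passive tester may join mid-session, the sketch tracks every state the system could currently be in.

```python
# Hedged sketch of passive trace checking: replay an observed message
# trace against the specification FSM, tracking all candidate current
# states. An empty candidate set signals a deviation from the spec.

SPEC = {  # (state, observed_message) -> next_state, toy protocol
    ("idle", "connect"): "waiting",
    ("waiting", "accept"): "open",
    ("open", "data"): "open",
    ("open", "close"): "idle",
}

def passive_check(trace):
    # The tester may start observing mid-session: any state is possible.
    candidates = {s for s, _ in SPEC} | set(SPEC.values())
    for i, msg in enumerate(trace):
        candidates = {SPEC[(s, msg)] for s in candidates if (s, msg) in SPEC}
        if not candidates:
            return f"deviation at message {i}: {msg!r}"
    return "trace consistent with specification"

print(passive_check(["connect", "accept", "data", "close"]))
print(passive_check(["connect", "data"]))   # data before accept: flagged
```

Note how the checker only ever narrows its state estimate from observations, which is exactly what makes the technique non-intrusive and suitable for operational networks.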