171.
Model-Based Testing: An Evaluation. Nordholm, Johan. January 2010.
Testing is a critical activity in the software development process in order to obtain systems of high quality. Tieto typically develops complex systems, which are currently tested through a large number of manually designed test cases. Recent development within software testing has resulted in methods and tools that can automate the test case design, the generation of test code, and the test result evaluation based on a model of the system under test. This testing approach is called model-based testing (MBT).

This thesis is a feasibility study of the model-based testing concept, performed at the Tieto office in Karlstad. The feasibility study included the use and evaluation of the model-based testing tool Qtronic, developed by Conformiq, which automatically designs test cases given a model of the system under test as input. The experiments for the feasibility study were based on the incremental development of a test object: the client protocol module of a simplified model of an ATM (Automated Teller Machine) client-server system. Since the experiments were based on incremental development, they were evaluated both individually and by comparison with the previous experiment. For each experiment, the different tasks in the process of testing with Qtronic were analyzed to document the experience gained and to identify strengths and weaknesses.

The project has shown the promise inherent in a model-based testing approach. The application of model-based testing and the project results indicate that the approach should be further evaluated, since experience will be crucial if the approach is to be adopted within Tieto's organization.
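To make the approach concrete, the sketch below shows the kind of behavioral model such a tool consumes and how abstract test cases can be enumerated from it. This is an invented Python illustration, not Qtronic's actual modeling notation, and the ATM states and actions are assumptions.

```python
# Hypothetical sketch: an ATM client-protocol model as a state machine.
# A model-based testing tool enumerates action sequences as test cases;
# here, all sequences up to a fixed depth are generated.
ATM = {
    "idle":          {"insert_card": "card_inserted"},
    "card_inserted": {"enter_pin": "authenticated", "eject": "idle"},
    "authenticated": {"withdraw": "idle", "eject": "idle"},
}

def test_cases(state="idle", depth=3, prefix=()):
    """Depth-bounded enumeration of action sequences (abstract test cases)."""
    if depth == 0:
        yield prefix
        return
    for action, nxt in ATM[state].items():
        yield from test_cases(nxt, depth - 1, prefix + (action,))

for case in test_cases():
    print(" -> ".join(case))
```

Each printed sequence is an abstract test case that a concrete test harness would execute against the real client protocol module.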
172.
Consistency techniques for test data generation. Tran Sy, Nguyen. 10 June 2005.
This thesis presents a new approach for automated test data generation for imperative programs containing integer, boolean, and/or float variables. A test program (with procedure calls) is represented by an Interprocedural Control Flow Graph (ICFG). The classical testing criteria (statement, branch, and path coverage), widely used in unit testing, are extended to the ICFG. Path coverage is the core of our approach. Given a specified path of the ICFG, a path constraint is derived and solved to obtain a test case. The constraint solving is carried out based on a consistency notion. For statement (and branch) coverage, paths reaching a specified node or branch are dynamically constructed. The search for suitable paths is guided by the interprocedural control dependences of the program and pruned by our consistency filter. Finally, test data are generated by the application of the proposed path coverage algorithm. A prototype system implements our approach for C programs. Experimental results, including complex numerical programs, demonstrate the feasibility of the method and the efficiency of the system, as well as its versatility and flexibility across different classes of problems (integer and/or float variables; arrays; procedures; path coverage; statement coverage).
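As a rough illustration of the path-coverage core (not the thesis's consistency-based solver), the sketch below derives a test case from a path constraint by naive search over a bounded integer domain; the program, path, and domain are invented.

```python
# Hypothetical sketch: a path through
#   if (x > 10) ...; if (x + y == 20) ...
# with both branches taken yields the path constraint below; solving it
# produces a test case that drives execution down that path.
from itertools import product

path_constraint = [
    lambda x, y: x > 10,       # branch 1: taken
    lambda x, y: x + y == 20,  # branch 2: taken
]

def solve(constraint, domain=range(-50, 51)):
    """Naive exhaustive search over a bounded integer domain; the thesis
    instead prunes the search using consistency (constraint-propagation)
    techniques."""
    for x, y in product(domain, repeat=2):
        if all(c(x, y) for c in constraint):
            return {"x": x, "y": y}
    return None

print(solve(path_constraint))  # e.g. {'x': 11, 'y': 9}
```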
174.
Testability of Dynamic Real-Time Systems. Lindström, Birgitta. January 2009.
This dissertation concerns the testability of event-triggered real-time systems. Real-time systems are known to be hard to test because they are required to function correctly both with respect to what the system does and when it does it. An event-triggered real-time system is directly controlled by the events that occur in the environment, as opposed to a time-triggered system, whose behavior with respect to when the system does something is constrained and therefore more predictable. The focus of this dissertation is behavior in the time domain, and it is shown how testability is affected by several factors when the system is tested for timeliness.

This dissertation presents a survey of research that focuses on software testability and the testability of real-time systems. The survey motivates both the view of testability taken in this dissertation and the metric chosen to measure testability in an experiment. We define a method to generate sets of traces from a model by using a meta-algorithm on top of a model checker. Defining such a method is a necessary step for performing the experiment; however, the trace sets generated by this method can also be used by test strategies that are based on orderings, for example execution orders.

An experimental study is presented in detail. The experiment investigates how the testability of an event-triggered real-time system is affected by constraining properties of the execution environment, specifically three different constraints regarding preemptions, observations, and process instances. All of these constraints were claimed in previous work to be significant factors for the level of testability. Our results support the claim for the first two constraints, while the third shows no impact on the level of testability. Finally, this dissertation discusses the effect on the event-triggered semantics when the constraints are applied to the execution environment. This discussion shows that the first two constraints do not change the semantics while the third one does, which indicates that a constraint on the number of process instances might be less useful for some event-triggered real-time systems.
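The trace-set generation idea can be sketched as a loop: query a checker for a trace, exclude it, and repeat. The toy transition system and the stand-in for the model-checker query below are invented; a real model checker would return each trace as a counterexample to a reachability property.

```python
from collections import deque

# Toy transition system (invented): states and their successors.
MODEL = {"idle": ["busy"], "busy": ["idle", "done"], "done": []}

def find_trace(init, goal, excluded, max_len=8):
    """Stand-in for one model-checker query: return a trace from init to
    goal that is not in `excluded`, or None. A real checker would return
    this as a counterexample to the property 'goal is never reached'."""
    queue = deque([(init,)])
    while queue:
        trace = queue.popleft()
        if trace[-1] == goal and trace not in excluded:
            return trace
        if len(trace) < max_len:
            for nxt in MODEL[trace[-1]]:
                queue.append(trace + (nxt,))
    return None

def trace_set(init, goal, n):
    """Meta-algorithm: re-query the checker with all found traces excluded."""
    found = set()
    while len(found) < n:
        t = find_trace(init, goal, found)
        if t is None:
            break
        found.add(t)
    return found

for t in sorted(trace_set("idle", "done", 3)):
    print(" -> ".join(t))
```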
175.
A Requirements-Based Partition Testing Framework Using Particle Swarm Optimization Technique. Ganjali, Afshar. January 2008.
Modern society is increasingly dependent on the quality of software systems. Software failure can cause severe consequences, including loss of human life. There are various ways of fault prevention and detection that can be deployed in different stages of software development. Testing is the most widely used approach for ensuring software quality.
Requirements-Based Testing and Partition Testing are two widely used approaches for testing software systems. Although both techniques are mature, are addressed widely in the literature, and enjoy general agreement as key techniques of functional testing, their combination lacks a systematic approach. In this thesis, we propose a framework along with a procedural process for testing a system using Requirements-Based Partition Testing (RBPT). This framework helps testers start from the requirements documents and follow a straightforward step-by-step process to generate the required test cases without losing any required data. Although many steps of the process are manual, the framework can be used as a foundation for automating the whole test case generation process.
Another issue in testing a software product is the test case selection problem. Choosing appropriate test cases is an essential part of software testing that can lead to significant improvements in efficiency, as well as reduced costs of combinatorial testing. Unfortunately, the problem of finding minimum-size test sets is NP-complete in general. Therefore, artificial intelligence-based search algorithms have been widely used for generating near-optimal solutions. In this thesis, we also propose a novel technique for test case generation using Particle Swarm Optimization (PSO), an effective optimization tool that has emerged in the last decade. Empirical studies show that in some domains particle swarm optimization performs as well as, or better than, other techniques. At the same time, a particle swarm algorithm is much simpler and easier to implement, and has just a few parameters for the user to adjust. These properties make PSO an ideal technique for test case generation. In order to have a fair comparison of our newly proposed algorithm against existing techniques, we have designed and implemented a framework for automatic evaluation of these methods. Through experiments using our evaluation framework, we illustrate how this new test case generation technique can outperform other existing methodologies.
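As an illustration of the PSO-based direction (not the thesis's own algorithm), the sketch below applies a standard binary PSO to a small invented test-selection instance: pick a minimal set of test cases that covers all requirements. The coverage data and parameter values are assumptions.

```python
import math
import random

random.seed(1)

# Hypothetical coverage matrix: COVERS[i] = requirements that test i covers.
COVERS = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {1, 3}, {0, 2}]
N_REQS = 4

def fitness(bits):
    """Penalize uncovered requirements heavily, then prefer fewer tests."""
    covered = set().union(*(COVERS[i] for i, b in enumerate(bits) if b))
    return 100 * (N_REQS - len(covered)) + sum(bits)

def binary_pso(n=len(COVERS), swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.randint(0, 1) for _ in range(n)] for _ in range(swarm)]
    vel = [[0.0] * n for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iters):
        for k in range(swarm):
            for d in range(n):
                r1, r2 = random.random(), random.random()
                vel[k][d] = (w * vel[k][d]
                             + c1 * r1 * (pbest[k][d] - pos[k][d])
                             + c2 * r2 * (gbest[d] - pos[k][d]))
                # Sigmoid maps velocity to the probability of setting the bit.
                pos[k][d] = 1 if random.random() < 1 / (1 + math.exp(-vel[k][d])) else 0
            if fitness(pos[k]) < fitness(pbest[k]):
                pbest[k] = pos[k][:]
                if fitness(pbest[k]) < fitness(gbest):
                    gbest = pbest[k][:]
    return gbest

best = binary_pso()
print("selected tests:", [i for i, b in enumerate(best) if b])
```

On this instance a two-test cover exists (for example, tests 0 and 2), and the swarm typically converges to one.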
176.
Model Based Testing for Non-Functional Requirements. Cherukuri, Vijaya Krishna; Gupta, Piyush. January 2010.
Model Based Testing (MBT) is a new-age test automation technique traditionally used for functional black-box testing. Its capability of generating test cases from a model developed through analysis of the abstract behavior of the System under Test is gaining popularity. Many commercial and open-source MBT tools are currently available on the market, but each has its own specific way of modeling and its own test case generation mechanism, suitable for varied types of systems. Ericsson, a telecommunication equipment provider, is currently adopting Model Based Testing in some of its divisions for functional testing. Those divisions have not yet attempted to adopt Model Based Testing for non-functional testing in a full-fledged manner. A comparative study between various MBT tools will help one of Ericsson's testing divisions select the best tool to adapt to its existing test environment; this also helps improve the quality of testing while reducing cost, time, and effort. This thesis work helps the Ericsson testing division select such an effective MBT tool. Based on aspects such as functionality, flexibility, adaptability, and performance, a comparative study was carried out on various available MBT tools and a few were selected among them: Qtronic, ModelJUnit, and Elvior Motes.

This thesis also helps to understand the usability of the selected tools for modeling non-functional requirements using a new method, and a brief idea of modeling non-functional requirements is suggested. A System under Test was identified and its functional behavior was modeled along with the non-functional requirements in Qtronic and ModelJUnit. An experimental analysis, backed by observations from using the newly proposed method, indicates that the method is efficient enough to carry out modeling of non-functional requirements along with modeling of functional requirements by identifying the appropriate approach.
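One way to picture modeling non-functional requirements alongside functional behavior (invented notation, not the Qtronic or ModelJUnit APIs) is to annotate each transition of a functional model with a non-functional budget that generated tests assert in addition to the expected state:

```python
import random

random.seed(7)

# Hypothetical model: functional transitions annotated with a non-functional
# requirement, here a latency budget per transition as (next_state, max_ms).
MODEL = {
    "idle":      {"connect": ("connected", 200)},
    "connected": {"request": ("waiting", 50), "disconnect": ("idle", 100)},
    "waiting":   {"response": ("connected", 500)},
}

def generate_test(length=6, state="idle"):
    """Random walk over the model; each step carries both a functional
    assertion (expected state) and a non-functional one (latency budget)."""
    steps = []
    for _ in range(length):
        action, (nxt, budget) = random.choice(sorted(MODEL[state].items()))
        steps.append(f"do {action}; expect state={nxt}; assert latency <= {budget}ms")
        state = nxt
    return steps

for step in generate_test():
    print(step)
```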
178.
An Integrated Method for Model-Based Testing. Hsu, Ling-hsin. 17 July 2008.
The main goal of testing is to find errors in the System Under Test (SUT). Prior research indicates that Model-Based Testing is indeed good at finding SUT errors and can lead to less time and effort spent on testing, provided that the time needed to write and maintain the model, plus the time spent directing the test generation, is less than the cost of manually designing and maintaining a test suite. This study proposes a methodology for Model-Based Testing in which sequence diagrams and a class diagram are used to determine the testing paths and test cases, and the Object Constraint Language (OCL) is used to specify business-logic constraints. Three real-world cases and a CASE tool are used to evaluate the usability (including the concepts, application, and advantages) of the proposed methodology. With this approach, SUT errors can be found at the systems analysis and design stage, thereby reducing the cost of software testing and enhancing the efficiency of system development.
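As a sketch of how an OCL business-logic constraint can act as a test oracle, consider the fragment below; the class, the invariant, and the seeded defect are all invented for illustration, and the OCL text is paraphrased into a Python predicate.

```python
# Hypothetical sketch: the OCL invariant
#   context Account inv: self.balance >= 0
# becomes a checkable predicate that generated tests evaluate after each step.
from dataclasses import dataclass

@dataclass
class Account:
    balance: int

def inv_non_negative_balance(acc: Account) -> bool:
    """OCL: context Account inv: self.balance >= 0"""
    return acc.balance >= 0

def withdraw(acc: Account, amount: int) -> None:
    acc.balance -= amount  # deliberately unguarded defect in the SUT

# A generated test exercises the operation, then checks the invariant.
acc = Account(balance=100)
withdraw(acc, 150)
if not inv_non_negative_balance(acc):
    print(f"defect found: invariant violated, balance={acc.balance}")
```

The check fails here by design, illustrating how a constraint written at the analysis and design stage can expose an error before implementation-level testing.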
179.
Towards tool support for phase 2 in 2G. Stefánsson, Vilhjálmur. January 2002.
When systematically adopting a CASE (Computer-Aided Software Engineering) tool, an organisation evaluates candidate tools against a framework of requirements, and selects the most suitable tool for usage. A method, called 2G, has been proposed that aims at developing such frameworks based on the needs of a specific organisation.

This method includes a pilot evaluation phase, where state-of-the-art CASE-tools are explored with the aim of gaining more understanding of the requirements that the organisation adopting CASE-tools puts on candidate tools. This exploration results in certain output data, parts of which are used in interviews to discuss the findings of the tool exploration with the organisation. This project has focused on identifying the characteristics of these data, and subsequently on hypothesising a representation of the data, with the aim of providing guidelines for future tool support for the 2G method.

The approach to reaching this aim was to conduct a case study of a new application of the pilot evaluation phase, which resulted in data that could subsequently be analysed with the aim of identifying characteristics. This resulted in a hypothesised data representation, which was found to fit the data from the conducted application well, although certain situations were identified that the representation might not be able to handle.
180.
Characterization and generation of streaming video traces. Shahbazian, John N. 2003. M.S.C.S. thesis, University of South Florida.
This thesis describes two methods, collectively called Time Series Generation (TSG), that can be used to generate time series inputs modeling packet loss to test IP-based streaming video software. The TSG methods create packet loss models that recreate the mean, variance, and autocorrelation signatures of an actual trace. The synthetic packet loss traces can have their inherent statistics altered, thus allowing for thorough testing of video software in ways that could not be done on actual networks. The two methods comprising TSG, called the primary and secondary methods, use the principle of iterated uniformity to create a time series that attempts to match mean, variance, and autocorrelation. The two methods differ in their approach to generating autocorrelation, which leads to trade-offs between them. The TSG methods are embodied in a software program called TSGen.

An evaluation of TSGen is conducted, including a comparison with the well-known Autoregressive-To-Anything Generation (ARTAGEN) method and tool. The details of capturing packets and parsing video frame counts from packet streams are explained and demonstrated. Sixteen video stream traces were collected from a variety of sources and used to evaluate TSGen. Synthetic traces were generated for the sixteen original traces, and both their summary statistics and autocorrelation signatures were compared against the originals. One of the sixteen traces was also compared against a synthetic trace generated using the ARTAGEN tool. Twelve of the sixteen synthetic traces, when compared to the actual traces, had Least Square Error (LSE) values under 0.1; three were under 0.4; and the remaining one was under 1.1.

Nine synthetic traces had percent error differences between the mean and variance of the synthetic and actual traces below 5%; one was below 7%; four were under 18%; and the two remaining were at 41%. TSGen is able to effectively model autocorrelation, mean, and variance. Additional intangible benefits of TSG include an adjustable run time for the matching process, with longer run time equating to better accuracy, and a simple theoretical model that was easily implemented.
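To illustrate the target of such trace generation, matching mean, variance, and autocorrelation, the sketch below uses a standard AR(1) process rather than the thesis's iterated-uniformity TSG methods; the target parameter values are invented.

```python
import random
import statistics

random.seed(42)

def ar1_trace(n, mean, var, rho):
    """Generate x_t = mean + rho*(x_{t-1} - mean) + eps, with
    eps ~ N(0, var*(1 - rho**2)), whose stationary distribution has the
    requested mean, variance, and lag-1 autocorrelation rho."""
    sigma_eps = (var * (1 - rho ** 2)) ** 0.5
    x = [mean]
    for _ in range(n - 1):
        x.append(mean + rho * (x[-1] - mean) + random.gauss(0, sigma_eps))
    return x

trace = ar1_trace(10_000, mean=12.0, var=9.0, rho=0.8)
m = statistics.fmean(trace)
v = statistics.pvariance(trace)
# Sample estimate of the lag-1 autocorrelation of the generated trace.
lag1 = sum((a - m) * (b - m) for a, b in zip(trace, trace[1:])) / (len(trace) * v)
print(f"mean={m:.2f}  var={v:.2f}  lag-1 autocorr={lag1:.2f}")
```

The printed statistics should land close to the targets (12.0, 9.0, 0.8), which is exactly the kind of signature-matching the TSG methods aim for on real packet-loss traces.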