181. Practical software testing for an FDA-regulated environment. Vadysirisack, Pang Lithisay. 27 February 2012.
Unlike hardware, software does not degrade over time or with use, which works in its favour. Also unlike hardware, software can be changed easily. This characteristic gives software much of its power, but it is also responsible for many failures in software applications. When software is used within medical devices, software failures may result in bodily injury or death. As a result, regulations have been imposed on the makers of medical devices to ensure their safety, including the safety of the devices’ software. The U.S. Food and Drug Administration requires the establishment of systems and control processes to ensure quality devices, and a principal part of that quality assurance effort is testing. This paper explores the role of software testing in the design, development, and release of software used in medical devices and applications, and provides practical, industry-driven guidance on medical device software testing techniques and strategies.
182. Efficient specification-based testing using incremental techniques. Uzuncaova, Engin. 10 October 2012.
As software systems grow in complexity, the need for efficient automated techniques for design, testing and verification becomes ever more critical. Specification-based testing provides an effective approach for checking the correctness of software in general. Constraint-based analysis using specifications enables checking a variety of rich properties by automating the generation of test inputs. However, as specifications get more complex, existing analyses face a scalability problem due to state explosion. This dissertation introduces a novel approach to analyze declarative specifications incrementally; presents a constraint prioritization and partitioning methodology to enable efficient incremental analyses; defines a suite of optimizations to improve the analyses further; introduces a novel approach for testing software product lines; and provides an experimental evaluation that shows the feasibility and scalability of the approach.

The key insight behind the incremental technique is declarative slicing, a new class of optimizations. The optimizations are inspired by traditional program slicing for imperative languages but are applicable to analyzable declarative languages in general, and Alloy in particular. We introduce a novel algorithm for slicing declarative models. Given an Alloy model, our fully automatic tool, Kato, partitions the model into a base slice and a derived slice using constraint prioritization. As opposed to the conventional use of the Alloy Analyzer, where models are analyzed as a whole, we perform the analysis incrementally, i.e., in several steps. A satisfying solution to the base slice is systematically extended to generate a solution for the entire model, while unsatisfiability of the base implies unsatisfiability of the entire model. We show how our incremental technique enables different analysis tools and solvers to be used in synergy to optimize our approach further. Compared to the conventional use of the Alloy Analyzer, this yields even greater overall performance improvements for solving declarative models.

Incremental analyses have a natural application in the software product line domain. A product line is a family of programs built from features that are increments in program functionality. Given properties of features as first-order logic formulas, we automatically generate test inputs for each product in a product line. We show how to map a formula that specifies a feature into a transformation that defines incremental refinement of test suites. Our experiments using different data structure product lines show that our approach can provide an order-of-magnitude speed-up over conventional techniques.
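As an illustration of the base/derived split described above, here is a minimal sketch in Python over toy finite domains; it is not the Kato tool, which operates on Alloy models and SAT solvers, and all variable names are invented:

```python
from itertools import product

# Minimal sketch of slice-based incremental constraint solving over a toy
# finite-domain setting (not Kato; the variables and domains are illustrative).

def solve(constraints, domains, fixed=None):
    """Search for an assignment over `domains` that extends `fixed` and
    satisfies every constraint (a constraint is a predicate over a dict)."""
    fixed = dict(fixed or {})
    free = [v for v in domains if v not in fixed]
    for values in product(*(domains[v] for v in free)):
        candidate = {**fixed, **dict(zip(free, values))}
        if all(c(candidate) for c in constraints):
            return candidate
    return None

def solve_incrementally(base_slice, derived_slice, base_domains, all_domains):
    # Step 1: analyse only the (cheaper) base slice.
    base_solution = solve(base_slice, base_domains)
    if base_solution is None:
        return None  # unsatisfiable base implies an unsatisfiable model
    # Step 2: extend the base solution to the entire model.
    # (A full implementation would backtrack to another base solution
    #  if this extension fails.)
    return solve(base_slice + derived_slice, all_domains, fixed=base_solution)

# Example: the base slice constrains x and y only; the derived slice adds z.
base = [lambda a: a["x"] < a["y"]]
derived = [lambda a: a["x"] + a["y"] == a["z"]]
print(solve_incrementally(base, derived,
                          base_domains={"x": range(3), "y": range(3)},
                          all_domains={"x": range(3), "y": range(3), "z": range(6)}))
```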
183. Unit testing database applications using SpecDB: A database of software specifications. Mikhail, Rana Farid. 01 June 2006.
In this dissertation I introduce SpecDB, a database created to represent and host software specifications in a machine-readable format. The specifications represented in SpecDB support the unit testing of database operations. A structured representation aids both automated software testing and software code generation based on the actual software specifications. I describe the design of SpecDB, the underlying database that holds the specifications required for unit testing database operations. Specifications can be fed directly into SpecDB or, if formal specifications are available, they can be translated into the SpecDB representation. An algorithm that translates formal specifications into the SpecDB representation is described, with the Z formal specification language chosen as the example for the translation algorithm. The outcome of the translation algorithm is a set of machine-readable formal specifications.

To demonstrate the use of SpecDB, two automated tools are presented. The first automatically generates database constraints from the business rules represented in SpecDB. This constraint generator has the advantage of enforcing some business rules at the database level for better data quality. The second automated application of SpecDB is a reverse engineering tool that logs the actual execution of the program from the code. By automatically comparing the output of this tool to the specifications in SpecDB, errors of commission are highlighted that might otherwise not be identified. Some errors of commission, such as coding unspecified behavior alongside a correct implementation of the specifications, cannot be discovered through black-box testing techniques, since these techniques cannot observe what other modifications or outputs have happened in the background. For example, black-box functional testing techniques cannot identify an error if the software under test produced the correct specified output but, in addition, sent classified data to insecure locations. Accordingly, the decision of whether a software application passed a test depends on whether it implemented all the specifications, and only the specifications, for that unit. Automated tools using the reverse engineering application introduced in this dissertation can thus automatically decide whether the software passed a test based on the provided specifications.
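To illustrate the idea behind the first tool, a hedged sketch follows; the rule dictionary, table and column names are invented and do not reflect SpecDB's actual schema. A business rule held as structured data is rendered as a database-level CHECK constraint:

```python
# Illustrative sketch only: SpecDB's actual schema and rule representation are
# defined in the dissertation; the rule fields and table names here are made up.

def rule_to_check_constraint(rule):
    """Render a stored business rule as an ALTER TABLE ... CHECK statement."""
    return (
        f'ALTER TABLE {rule["table"]} '
        f'ADD CONSTRAINT {rule["name"]} '
        f'CHECK ({rule["column"]} {rule["operator"]} {rule["value"]});'
    )

# A business rule such as "an order quantity must be positive", held as
# machine-readable data rather than prose:
rule = {"name": "chk_order_qty_positive", "table": "orders",
        "column": "quantity", "operator": ">", "value": 0}

print(rule_to_check_constraint(rule))
# ALTER TABLE orders ADD CONSTRAINT chk_order_qty_positive CHECK (quantity > 0);
```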
184. Quality of Test Design in Test Driven Development. Čaušević, Adnan. January 2013.
One of the most emphasised software testing activities in an Agile environment is the use of the Test Driven Development (TDD) approach. TDD is a development activity in which test cases are created by developers before writing the code, with the purpose of guiding the actual development process. In other words, test cases created when following TDD can be considered a by-product of software development. However, TDD is not fully adopted by industry, as indicated by respondents in our industrial survey, who pointed out that TDD is the most preferred but least practised activity. Our further research identified seven potentially limiting factors for industrial adoption of TDD, one of the most prominent being a lack of developers’ testing skills. We subsequently defined and categorised quality attributes that describe the quality of test case design when following TDD.

Through a number of empirical studies, we clearly established the effect of “positive test bias”, where participants focused mainly on the functionality while generating test cases. In other words, there were fewer “negative test cases” exercising the system beyond the specified functionality, an important requirement for high-reliability systems. On average, in our studies, around 70% of the test cases created by participants were positive and only 30% were negative. However, when measuring the defect-detecting ability of those test cases, the opposite ratio was observed: negative test cases accounted for over 70% of the defects detected, while positive test cases contributed only 30%.

We propose TDDHQ, a concept for achieving higher-quality testing in TDD by combining quality improvement aspects with test design techniques, so that unspecified requirements are considered to a greater extent during development and the positive test bias potentially inherent in TDD is minimised. In this way developers do not focus solely on verifying functionality; they can also address security, robustness, performance and many other quality improvement aspects of the given software product. An additional empirical study evaluating this method showed a noticeable improvement in the quality of test cases created by developers using the TDDHQ concept. Our research findings are expected to pave the way for further enhancements to the way TDD is performed, eventually resulting in better adoption by industry.
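The positive/negative distinction can be made concrete with a small illustrative example (the withdraw function and its rules are invented here; the studies themselves used participant-written test cases):

```python
# Illustrative positive versus negative test cases, pytest style; the
# withdraw() function and its rules are invented for this sketch.
import pytest

def withdraw(balance, amount):
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_withdraw_reduces_balance():          # positive: specified functionality
    assert withdraw(100, 30) == 70

def test_withdraw_rejects_overdraft():        # negative: beyond the happy path
    with pytest.raises(ValueError):
        withdraw(100, 150)

def test_withdraw_rejects_negative_amount():  # negative: invalid input
    with pytest.raises(ValueError):
        withdraw(100, -5)
```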
185. Design structure and iterative release analysis of scientific software. Zulkarnine, Ahmed Tahsin. January 2012.
One of the main objectives of software development in scientific computing is efficiency. Because such software focuses on a highly specialized application domain, important software quality attributes, e.g., usability and extensibility, may not be among its primary objectives. In this research, we have studied the design structures and iterative releases of scientific research software using the Design Structure Matrix (DSM). We implemented a DSM partitioning algorithm using the sparse matrix data structure Compressed Row Storage (CRS), and its timing was better than that obtained with the widely used C++ library Boost. Secondly, we computed several architectural complexity metrics and compared the releases and total release costs of a number of open source scientific research software packages. One of the important findings is the absence of circular dependencies in the studied software, which we attribute to the strong emphasis on the computational performance of the code. The iterative release analysis indicates that there might be a correspondence between the “clustering coefficient” and the “release rework cost” of the software.
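A minimal sketch of the kind of analysis described, assuming a CRS-encoded dependency matrix (this is not the thesis's partitioning algorithm): a design structure matrix stored in compressed row storage form is checked for circular dependencies with a depth-first search:

```python
# Minimal sketch, not the thesis's algorithm: a design structure matrix in
# compressed row storage (CRS) form, checked for circular dependencies with
# an iterative depth-first search.

def has_circular_dependency(row_ptr, col_idx, n):
    """row_ptr/col_idx encode a sparse adjacency matrix: module i depends on
    the modules col_idx[row_ptr[i]:row_ptr[i+1]]. Returns True if any
    dependency cycle exists."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = [WHITE] * n
    for start in range(n):
        if colour[start] != WHITE:
            continue
        colour[start] = GREY
        stack = [(start, iter(col_idx[row_ptr[start]:row_ptr[start + 1]]))]
        while stack:
            node, children = stack[-1]
            child = next(children, None)
            if child is None:
                colour[node] = BLACK
                stack.pop()
            elif colour[child] == GREY:
                return True           # back edge: a circular dependency
            elif colour[child] == WHITE:
                colour[child] = GREY
                stack.append((child, iter(col_idx[row_ptr[child]:row_ptr[child + 1]])))
    return False

# Three modules: 0 -> 1, 1 -> 2, and no edge back to 0, so no cycle.
row_ptr = [0, 1, 2, 2]   # CRS row pointers
col_idx = [1, 2]         # CRS column indices
print(has_circular_dependency(row_ptr, col_idx, n=3))  # False
```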
186. A Fault-Based Model of Fault Localization Techniques. Hays, Mark A. 01 January 2014.
Every day, ordinary people depend on software working properly. We take it for granted: from banking software to railroad switching software, flight control software, and software that controls medical devices such as pacemakers, or even gas pumps, our lives are touched by software that we expect to work. It is well known that the main technique used to ensure the quality of software is testing. Often it is the only quality assurance activity undertaken, making it that much more important.
In a typical experiment studying these techniques, a researcher will deliberately seed a fault (intentionally breaking the functionality of some source code) in the hope that the automated techniques under study will be able to identify the fault's location in the source code. These faults are picked arbitrarily, so there is potential for bias in their selection. Previous researchers have established an ontology, called fault size, for understanding and expressing this bias. This research captures the fault size ontology in the form of a probabilistic model. The results of applying this model to measure fault size suggest that many faults generated through program mutation (the systematic replacement of source code operators to create faults) are very large and easily found. Secondary measures generated in the assessment of the model suggest a new static analysis method, called testability, for predicting the likelihood that code will contain a fault in the future.
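Program mutation, as referenced above, seeds a fault by systematically replacing a source-code operator. A hedged illustration follows; the function and the mutant are invented, not drawn from the dissertation's experiments:

```python
# Illustrative operator-replacement mutant; the function is invented for this
# sketch and is not taken from the dissertation's experiments.

def total_price(unit_price, quantity):         # original program
    return unit_price * quantity

def total_price_mutant(unit_price, quantity):  # mutant: "*" replaced by "+"
    return unit_price + quantity

# Almost any test input reveals this mutant, which is the sense in which many
# mutation-generated faults are "very large" and easily found:
assert total_price(4, 3) == 12
assert total_price_mutant(4, 3) == 7   # differs from the specified 12
```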
While software testing researchers are not statisticians, they nonetheless make extensive use of statistics in their experiments to assess fault localization techniques. Researchers often select their statistical techniques without justification, which is worrisome because it can lead to incorrect conclusions about the significance of research. This research introduces an algorithm, MeansTest, which helps automate some aspects of selecting appropriate statistical techniques. An evaluation of MeansTest suggests that it performs well relative to its peers. This research then surveys recent work in software testing, using MeansTest to evaluate the significance of the researchers' work. The results of the survey indicate that software testing researchers are underreporting the significance of their work.
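The abstract does not give MeansTest's internals, so the following is only a hedged sketch of the general kind of selection it automates, assuming SciPy is available: check whether the samples look normally distributed and choose a parametric or non-parametric comparison of means accordingly. This is not the MeansTest algorithm itself:

```python
# Hedged sketch of assumption-driven selection between two means tests; this is
# NOT the MeansTest algorithm from the dissertation, only the general idea of
# justifying the choice of statistical technique rather than picking one ad hoc.
from scipy import stats

def compare_means(sample_a, sample_b, alpha=0.05):
    normal_a = stats.shapiro(sample_a).pvalue > alpha
    normal_b = stats.shapiro(sample_b).pvalue > alpha
    if normal_a and normal_b:
        # Welch's t-test: parametric, no equal-variance assumption.
        result = stats.ttest_ind(sample_a, sample_b, equal_var=False)
        chosen = "Welch t-test"
    else:
        # Non-parametric fallback when normality is doubtful.
        result = stats.mannwhitneyu(sample_a, sample_b, alternative="two-sided")
        chosen = "Mann-Whitney U"
    return chosen, result.pvalue

# Invented example data: localization scores from two hypothetical techniques.
scores_tool_a = [0.71, 0.74, 0.69, 0.80, 0.77, 0.73]
scores_tool_b = [0.61, 0.66, 0.60, 0.72, 0.65, 0.63]
print(compare_means(scores_tool_a, scores_tool_b))
```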
187. Software testing tools and productivity. Moschoglou, Georgios Moschos. January 1996.
Testing statistics indicate that testing consumes more than half of a programmer's professional life, although few programmers like testing, fewer like test design, and only about 5% of their education is devoted to testing. The main goal of this research is to evaluate the efficiency of two software testing tools. Two experiments were conducted in the Computer Science Department at Ball State University. The first experiment compares two conditions, testing software with no tool and testing software with a command-line based testing tool, in terms of the length of time and the number of test cases needed to achieve 80% statement coverage, for 22 graduate students in the Computer Science Department. The second experiment compares three conditions, testing software with no tool, testing software with a command-line based testing tool, and testing software with an interactive GUI tool with added functionality, in terms of the length of time and the number of test cases needed to achieve 95% statement coverage, for 39 graduate and undergraduate students in the same department.
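Statement coverage, the measure used in both experiments, can be sketched in a few lines of Python (illustrative only; the 1996 experiments used dedicated command-line and GUI testing tools, and the triangle_kind function is invented):

```python
# Illustrative statement-coverage counter; it only shows the metric itself,
# not the tools used in the experiments.
import dis
import sys

def triangle_kind(a, b, c):
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

def run_with_coverage(func, inputs):
    """Run func on each input tuple and record which of its lines executed."""
    executed = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        for args in inputs:
            func(*args)
    finally:
        sys.settrace(None)
    return executed

executable = {ln for _, ln in dis.findlinestarts(triangle_kind.__code__)
              if ln is not None}
covered = run_with_coverage(triangle_kind, [(3, 3, 3), (3, 3, 4)])
# The "scalene" return never executes with these two test cases, so
# statement coverage of triangle_kind stays below 100%.
print(f"statement coverage: {len(covered & executable)}/{len(executable)} lines")
```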
188. Automated test of evolving software. Shaw, Hazel Anne. January 2005.
Computers and the software they run are pervasive, yet released software is often unreliable, which has many consequences. Loss of time and earnings can be caused by application software (such as word processors) behaving incorrectly or crashing. Serious disruption can occur, as in the 14th August 2003 blackouts in the north-east USA and Canada, and serious injury or death can be caused, as in the Therac-25 overdose incidents. One way to improve the quality of software is to test it thoroughly. However, software testing is time consuming, the resources, capabilities and skills needed to carry it out are often not available, and the time available is often curtailed by pressure to meet delivery deadlines. Automation should allow more thorough testing in the time available and improve the quality of delivered software, but there are problems with automation that this research addresses. Firstly, it is difficult to determine whether the system under test (SUT) has passed or failed a test. This is known as the oracle problem and is often ignored in software testing research. Secondly, many software development organisations use an iterative and incremental process, known as evolutionary development, to write software. Following release, the software continues evolving as customers demand new features and improvements to existing ones. This evolution means that automated test suites must be maintained throughout the life of the software.

A contribution of this research is a methodology that addresses the automatic generation of test cases, the execution of those test cases and the evaluation of the outcomes from running each test. "Predecessor" software is used to solve the oracle problem: software that already exists, such as a previous version of the evolving software, or software from a different vendor that solves the same or similar problems. However, the resulting oracle is assumed not to be perfect, so rules are defined in an interface and used by the evaluator in the test evaluation stage to handle the expected differences. The interface also specifies the functional inputs and outputs of the SUT. An algorithm has been developed that creates a Markov Chain Transition Matrix (MCTM) model of the SUT from the interface. Tests are then generated automatically by making a random walk of the MCTM. This means that instead of maintaining a large suite of tests, or a large model of the SUT, only the interface needs to be maintained.
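A hedged sketch of the generation step follows (not the thesis's implementation; the states and transition probabilities are invented, whereas the thesis derives the model automatically from the interface): operations of the SUT become states of a Markov Chain Transition Matrix, and a random walk over the matrix yields a test sequence that can be run against both the SUT and its predecessor oracle:

```python
# Hedged sketch of test generation by random walk over a Markov Chain
# Transition Matrix (MCTM); the operations and probabilities are invented.
import random

STATES = ["open", "edit", "save", "close"]

# TRANSITIONS[i][j] = probability of moving from STATES[i] to STATES[j].
TRANSITIONS = [
    [0.0, 0.8, 0.0, 0.2],   # after open: usually edit, sometimes close
    [0.0, 0.5, 0.4, 0.1],   # after edit: keep editing, save, or close
    [0.0, 0.6, 0.0, 0.4],   # after save: edit again or close
    [1.0, 0.0, 0.0, 0.0],   # after close: start over
]

def generate_test(length, seed=None):
    """Random walk over the MCTM, producing a sequence of operations that
    can be executed against both the SUT and its predecessor oracle."""
    rng = random.Random(seed)
    state = 0                       # start at "open"
    steps = [STATES[state]]
    for _ in range(length - 1):
        state = rng.choices(range(len(STATES)), weights=TRANSITIONS[state])[0]
        steps.append(STATES[state])
    return steps

print(generate_test(8, seed=42))
# one possible walk: open, edit, edit, save, close, open, edit, save
```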
189. An Automated Tool For Requirements Verification. Tekin, Yasar. 01 September 2004.
In today's world, only those software organizations that consistently produce high-quality products can succeed. This situation calls for the effective use of defect prevention and detection techniques. One of the most effective defect detection techniques used in the software development life cycle is the verification of software requirements, applied at the end of the requirements engineering phase. If the existing verification techniques can be automated to meet the needs of today's work environment, their effectiveness can be increased. This study focuses on the development and implementation of a tool that automates the verification of software requirements modeled in Aris eEPC and Organizational Chart diagrams for automatically detectable defects. The application of reading techniques on a project, and a comparison of the results of the manual and automated verification techniques applied to that project, are also discussed.
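The abstract does not list which defects are automatically detectable, so the following is only an illustrative sketch with invented element names, not the thesis's tool: one generic, automatically detectable defect in an eEPC-style model is an element that no connection enters or leaves:

```python
# Illustrative sketch only: the thesis's tool works on Aris eEPC and
# Organizational Chart models, and its defect checks are defined there.
# This sketch flags one generic defect: model elements with no connections.

def dangling_elements(elements, connections):
    """Return elements that appear in the model but in no connection."""
    connected = {e for src, dst in connections for e in (src, dst)}
    return sorted(set(elements) - connected)

elements = ["Order received", "Check order", "Order approved", "Archive order"]
connections = [("Order received", "Check order"),
               ("Check order", "Order approved")]

print(dangling_elements(elements, connections))   # ['Archive order']
```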
190. Specification-based testing of interactive systems. Dr Ian Maccoll. Date unknown.
No description available.