1. Quality metrics in software engineering. Masoud, F. A., January 1987.
In the first part of this study software metrics are classified into three categories: primitive, abstract and structured. A comparative and analytical study of metrics from these categories was performed to give software developers, users and management a correct and consistent evaluation of a representative sample of the software metrics available in the literature. The aims of this analysis and comparison were to assist developers, users and management in selecting suitable quality metrics for their specific software quality requirements, and to examine the various definitions used to calculate these metrics. In the second part of this study an approach towards attaining software quality is developed. This approach is intended to help all the people concerned with evaluating software quality in the earlier stages of software systems development. It is intended to be uniform, consistent, unambiguous and comprehensive, and to make the concept of software quality more meaningful and visible. It will help developers both to understand the concepts of software quality and to apply and control it according to the expectations of users, management, customers and so on. The clear definitions provided for the software quality terms should help to prevent misinterpretation, and will also serve as a touchstone against which new ideas can be tested.
2. Inference graphs: a structural model and measures for evaluating knowledge-based systems. McNaughton, Ross, January 1995.
No description available.
3. The use of Bayesian networks to determine software inspection process efficiency. Cockram, Trevor John, January 2001.
Adherence to a defined process or standards is necessary to achieve satisfactory software quality. However, in order to judge whether practices are effective at achieving the required integrity of a software product, a measurement-based approach to the correctness of the software development is required. A defined and measurable process is a requirement for producing safe software productively. In this study the contribution of quality assurance to the software development process, and in particular the contribution that software inspections make to producing satisfactory software products, is addressed. I have defined a new model of software inspection effectiveness, which uses a Bayesian Belief Network to combine both subjective and objective data to evaluate the probability of an effective software inspection. Its performance shows an improvement over the existing published models of inspection effectiveness. These previous models made questionable assumptions about the distribution of errors and were essentially static: they could make use neither of process improvement experience nor of the increasing experience of the inspectors. A sensitivity analysis of my model showed that it is consistent with the attributes which Michael Fagan considered important in his research into the software inspection method. My model also outperforms a multiple logistic regression model formed using the same calibration data. By applying the model of software inspection effectiveness before an inspection takes place, project managers will be able to make better use of the inspection resources available. Applying the model with data collected during the inspection helps in estimating the residual errors in a product, so that decisions can then be made about whether further investigation is required to identify errors. The modelling process has been used successfully in an industrial application.
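To illustrate the general mechanism of combining evidence in a Bayesian Belief Network (the network structure, node names and probabilities below are invented for illustration and are not the calibrated model from this work), a minimal sketch in Python:

```python
# Toy Bayesian network: P(effective inspection) given two hypothetical
# parent variables. Node names and probabilities are invented for
# illustration; they are not the calibrated model from the thesis.

p_prep_adequate = 0.7      # P(preparation time was adequate) -- assumed prior
p_inspector_exp = 0.6      # P(inspector is experienced)      -- assumed prior

# Conditional probability table: P(inspection effective | prep, experience).
cpt_effective = {
    (True,  True):  0.90,
    (True,  False): 0.70,
    (False, True):  0.55,
    (False, False): 0.25,
}

def p_effective(evidence=None):
    """Marginalise over unobserved parents; `evidence` may fix either parent."""
    evidence = evidence or {}
    total, norm = 0.0, 0.0
    for prep in (True, False):
        for exp in (True, False):
            if "prep" in evidence and evidence["prep"] != prep:
                continue
            if "exp" in evidence and evidence["exp"] != exp:
                continue
            w = (p_prep_adequate if prep else 1 - p_prep_adequate) * \
                (p_inspector_exp if exp else 1 - p_inspector_exp)
            total += w * cpt_effective[(prep, exp)]
            norm += w
    return total / norm

print(p_effective())                                # prior probability of an effective inspection
print(p_effective({"exp": True}))                   # updated once inspector experience is observed
print(p_effective({"prep": False, "exp": False}))   # pessimistic scenario
```

In a real inspection model the conditional probability tables would be calibrated from historical inspection data and subjective expert judgement rather than fixed by hand.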
4. Quality Market: Design and Field Study of Prediction Market for Software Quality Control. Krishnamurthy, Janaki, 1 January 2010.
Given the increasing competition in the software industry and the critical consequences of software errors, it has become important for companies to achieve high levels of software quality. While cost reduction and timeliness of projects continue to be important measures, software companies are paying increasing attention to identifying user needs and better defining software quality from a customer perspective. Software quality goes beyond just correcting the defects that arise from deviations from the functional requirements. System engineers also have to focus on a large number of quality requirements such as security, availability, reliability, maintainability, performance and temporal correctness. The fulfillment of these run-time observable quality requirements is important for customer satisfaction and project success.
Generating early forecasts of potential quality problems can bring significant benefits to quality improvement. One approach to better software quality is to improve the overall development cycle in order to prevent the introduction of defects and improve run-time quality factors. Many methods and techniques are available for forecasting the quality of an ongoing project, such as statistical models, opinion polls and survey methods. These methods have known strengths and weaknesses, and accurate forecasting remains a major issue.
This research utilized a novel approach using prediction markets, which have proved useful in a variety of situations. In a prediction market for software quality, individual estimates from diverse project stakeholders such as project managers, developers, testers, and users were collected at various points in time during the project. Analogous to the financial futures markets, a security (or contract) was defined that represents the quality requirements, and the various stakeholders traded the securities using the prevailing market price and their private information. The equilibrium market price represents the best aggregate of the diverse opinions. Among many software quality factors, this research focused on predicting software correctness.
The goal of the study was to evaluate whether a suitably designed prediction market would generate a more accurate estimate of software quality than a survey method which polls subjects. Data were collected using a live software project in three stages: the requirements phase, an early release phase and a final release phase. The efficacy of the market was tested by (i) comparing market outcomes to the final project outcome, and (ii) comparing market outcomes to the results of an opinion poll.
Analysis of data suggests that predictions generated using the prediction market are significantly different from those generated using polls at early release and final release stages. The prediction market estimates were also closer to the actual probability estimates for quality compared to the polls. Overall, the results suggest that suitably designed prediction markets provide better forecasts of potential quality problems than polls.
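The abstract does not state which market mechanism aggregated the traders' estimates; as a hedged illustration of how such a market can work, the sketch below implements Hanson's logarithmic market scoring rule (LMSR), a common automated market maker for prediction markets, for a binary security that pays out if the quality target is met. The liquidity parameter and the trades are invented.

```python
import math

# Minimal LMSR market maker for a binary "quality target met" security.
# A generic illustration of prediction-market mechanics, not the specific
# market design used in this study.

class LMSRMarket:
    def __init__(self, b=100.0):
        self.b = b                 # liquidity parameter (assumed value)
        self.q = [0.0, 0.0]        # outstanding shares: [target met, target not met]

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome=0):
        """Current price of an outcome; interpretable as its probability."""
        exps = [math.exp(x / self.b) for x in self.q]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, shares):
        """Charge a trader for buying `shares` of `outcome`; returns the cost."""
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

market = LMSRMarket()
print(market.price(0))                              # 0.5 before any trades
cost = market.buy(0, 30)                            # an optimistic tester buys 30 shares
print(round(cost, 2), round(market.price(0), 3))    # price drifts above 0.5
```

The equilibrium price of the "target met" security then plays the role described above: an aggregate probability estimate that can be compared against poll results and the eventual project outcome.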
5. Quality aspects of software product supply and support using the Internet. Braude, Bruce Shaun, January 1998.
A dissertation submitted to the Faculty of Engineering, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Engineering. Johannesburg, 1998.
This project explores the use of the Internet to supply and support software
products within a quality management system. The Software Engineering
Applications Laboratory (SEAL) at the University of the Witwatersrand is in the
process of developing various software products that will be commercially
distributed in the near future. The SEAL has chosen to use the Internet to
supply and support these products. A system has been developed for this task
and has been named the Internet System for the Supply and Support of
Software (IS4).
The SEAL is committed to developing and supplying software within a quality
management system. Consequently an investigation was undertaken into the
quality characteristics and requirements based on the ISO 9001 standard for
quality assurance and the ISO/IEC JTC1/SC7 software engineering standards.
The investigation focused on quality requirements for processes related to
supplying and supporting software as well as on the quality characteristics of
the IS4 and the IS4 development process. These quality concerns have been
incorporated into the SEAL's quality management system, the design and
development of the IS4 and the development process for SEAL products.
Major technical issues that have influenced the design of the IS4 have been the
control of the supply and licensing of the supplied products and the transaction
processing of the on-line sales. To control the supply and licensing of the
supplied products, various mechanisms such as unlock keys, Internet-based
registration, controlled access and hardware control have been investigated.
The advantages and disadvantages of each have been weighed and a
suitable implementation has been used in the IS4. To process the on-line
transactions the IS4 will be developed to be compliant with the recently released
'Secure Electronic Transactions' (SET) standard.
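Purely as a hypothetical sketch of the unlock-key idea mentioned above (the actual IS4 key scheme is not described here; the product name, e-mail address, secret and hash choice below are all made up for illustration), a key could be derived from the product and registration details with a keyed hash:

```python
import hmac, hashlib

# Illustrative unlock-key scheme (not the actual IS4 mechanism): the vendor
# derives a short key from the product ID and the purchaser's registration
# details using a secret only the vendor knows; the installed product
# recomputes and compares it. All names and values are hypothetical.

VENDOR_SECRET = b"replace-with-vendor-secret"

def make_unlock_key(product_id: str, user_email: str) -> str:
    msg = f"{product_id}:{user_email}".encode()
    digest = hmac.new(VENDOR_SECRET, msg, hashlib.sha256).hexdigest()
    return digest[:20].upper()          # short, typeable key

def key_is_valid(product_id: str, user_email: str, key: str) -> bool:
    return hmac.compare_digest(make_unlock_key(product_id, user_email), key)

key = make_unlock_key("SEALCAD-2.1", "student@wits.ac.za")
print(key, key_is_valid("SEALCAD-2.1", "student@wits.ac.za", key))
```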
The project has been managed in accordance with the SEAL's Quality
Management System (QMS) which is ISO 9001 compliant. The system contains
a Shopper Interface for purchasing of SEAL products and a Manager Interface
for administration of the system. The Microsoft BackOffice® set of software has
formed the foundation on which the system has been developed. One of the
focuses of the project was maintainability of the IS4. Documentation and
procedures have been developed to aid in administration and perfective
maintenance in the future.
6. A system of automated tools to support control of software development through software configuration management. Walsh, Martha Geiger, January 2010.
Typescript (photocopy). Digitized by Kansas Correctional Industries. Department: Computer Science.
7. An experimental study of cost cognizant test case prioritization. Goel, Amit, 2 December 2002.
Test case prioritization techniques schedule test cases for regression testing in an order that increases their ability to meet some performance goal. One performance goal, rate of fault detection, measures how quickly faults are detected within the testing process. The APFD metric has been proposed for measuring the rate of fault detection. This metric applies, however, only in cases in which test costs and fault costs are uniform. In practice, fault costs and test costs are not uniform. For example, some faults which lead to system failures might be more costly than faults which lead to minor errors. Similarly, a test case that runs for several hours is much more costly than a test case that runs for a few seconds. Previous work has thus provided a second metric, APFD_c, for measuring rate of fault detection, that incorporates test costs and fault costs. However, studies of this metric thus far have been limited to abstract distribution models of costs. These distribution models did not represent actual fault costs and test costs for software systems.
In this thesis, we describe some practical ways to estimate real fault costs and test costs for software systems, based on operational profiles and test execution timings. Further, we define some new cost-cognizant prioritization techniques which focus on the APFD_c metric. We report results of an empirical study investigating the rate of "units-of-fault-cost-detected-per-unit-test-cost" across various cost-cognizant prioritization techniques, and the tradeoffs between techniques.
The results of our empirical study indicate that cost-cognizant test case prioritization techniques can substantially improve the rate of fault detection of test suites. The results also provide insights into the tradeoffs among various prioritization techniques. For example: (1) techniques incorporating feedback information (information from previous tests) outperformed those without any feedback information; (2) technique effectiveness differed most when faults were relatively difficult to detect; (3) in most cases, technique performance was similar at the function and statement levels; (4) surprisingly, techniques considering change location did not perform as well as expected. The study also reveals several practical issues that might arise in applying test case prioritization, as well as opportunities for future work.
Graduation date: 2003.
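For reference, a small sketch of the two metrics discussed above: APFD in its standard form, and APFD_c in the commonly cited cost-cognizant formulation that weights each fault's detection by its severity and by the test cost remaining at the point of detection. The fault matrix, test costs and severities below are invented example data.

```python
# APFD and (one common formulation of) APFD_c for a prioritized test order.
# The fault matrix, test costs and fault severities are invented purely to
# exercise the functions.

def apfd(order, detects, num_faults):
    """order: test ids in execution order; detects[t] = set of faults test t exposes."""
    n = len(order)
    first_pos = {}
    for pos, t in enumerate(order, start=1):
        for f in detects[t]:
            first_pos.setdefault(f, pos)
    # faults never detected are treated pessimistically (position n + 1)
    total = sum(first_pos.get(f, n + 1) for f in range(num_faults))
    return 1 - total / (n * num_faults) + 1 / (2 * n)

def apfd_c(order, detects, fault_cost, test_cost):
    """Cost-cognizant variant: weights by fault severity and remaining test cost."""
    numerator = 0.0
    for f, severity in enumerate(fault_cost):
        # index of the first test in `order` that detects fault f
        pos = next((i for i, t in enumerate(order) if f in detects[t]), None)
        if pos is None:
            continue
        remaining = sum(test_cost[t] for t in order[pos:]) - 0.5 * test_cost[order[pos]]
        numerator += severity * remaining
    return numerator / (sum(test_cost.values()) * sum(fault_cost))

detects = {0: {0}, 1: {1, 2}, 2: set(), 3: {2}}     # which faults each test exposes
test_cost = {0: 2.0, 1: 10.0, 2: 1.0, 3: 4.0}       # e.g. execution time
fault_cost = [5.0, 1.0, 3.0]                         # e.g. severity
print(apfd([1, 0, 3, 2], detects, 3))
print(apfd_c([1, 0, 3, 2], detects, fault_cost, test_cost))
```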
8. Test case prioritization. Chu, Chengyun, 1 June 1999.
Prioritization techniques are used to schedule test cases for execution in a specific order so as to maximize some objective function. There are a variety of possible objective functions, such as a function that measures how quickly faults can be detected within the testing process, or a function that measures how fast coverage of the program can be increased. In this paper, we describe several test case prioritization techniques, and empirical studies performed to investigate their relative abilities to improve how quickly faults can be detected by test suites. An improved rate of fault detection during regression testing can provide faster feedback about a system under regression test and let debuggers begin their work earlier than might otherwise be possible. The results of our studies indicate that test case prioritization techniques can substantially improve the rate of fault detection of test suites. The results also provide insights into the tradeoffs among various prioritization techniques. Graduation date: 2000.
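As a hedged example of one widely studied prioritization technique (not necessarily one of the specific techniques evaluated in this paper), the sketch below implements "additional" statement-coverage prioritization, which repeatedly schedules the test that covers the most statements not yet covered and resets once no remaining test adds coverage. The coverage data are invented.

```python
# "Additional" coverage-based test case prioritization: greedily pick the test
# that adds the most uncovered statements, resetting coverage once no
# remaining test adds anything new. The coverage sets are invented.

def additional_coverage_order(coverage):
    remaining = dict(coverage)                         # tests not yet scheduled
    uncovered = set().union(*coverage.values())
    order = []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] & uncovered))
        gain = remaining[best] & uncovered
        if not gain:
            uncovered = set().union(*remaining.values())   # full coverage reached: reset
            if not uncovered:                              # only zero-coverage tests left
                order.extend(remaining)
                break
            continue
        order.append(best)
        uncovered -= remaining.pop(best)
    return order

coverage = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {5, 6, 7, 8},
    "t4": {1, 2},
}
print(additional_coverage_order(coverage))   # ['t3', 't1', 't2', 't4']
```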
9. Impact of automated validation on software model quality. Tufvesson, Hampus, January 2013.
Model driven development is gaining momentum, and thus larger and more complex systems are being represented and developed with the help of modeling. Complex systems often suffer from a number of problems such as difficulties in keeping the model understandable, long compilation times and high coupling. With modeling comes the possibility to validate the models against constraints, which makes it possible to handle problems that traditional static analysis tools cannot solve. This thesis studies to what degree automatic model validation can be a useful tool for addressing some of the problems that appear in the development of complex systems. This is done by compiling a list of validation constraints based on existing problems, implementing and applying fixes for these, and measuring how a number of different aspects of the model are affected. After applying the fixes and measuring the impact on the models, it could be seen that validation of dependencies can have a significant impact by reducing build times of the generated code. Other types of validation constraints require further study to determine what impact they might have on model quality.
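As an illustrative sketch of the kind of validation constraint discussed (the thesis's actual constraints and modeling tooling are not reproduced here), the following checks a model's component dependency graph for cycles, one typical rule aimed at coupling and build-time problems; the example graph is invented.

```python
# One example of an automatic validation constraint: reject cyclic
# dependencies between model components. The dependency graph is invented.

def find_dependency_cycle(deps):
    """deps: component -> set of components it depends on. Returns a cycle or None."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {c: WHITE for c in deps}
    stack = []

    def visit(node):
        colour[node] = GREY
        stack.append(node)
        for nxt in deps.get(node, ()):
            if colour.get(nxt, WHITE) == GREY:          # back edge: cycle found
                return stack[stack.index(nxt):] + [nxt]
            if colour.get(nxt, WHITE) == WHITE:
                found = visit(nxt)
                if found:
                    return found
        stack.pop()
        colour[node] = BLACK
        return None

    for comp in deps:
        if colour[comp] == WHITE:
            cycle = visit(comp)
            if cycle:
                return cycle
    return None

model_deps = {
    "Sensor": {"Logging"},
    "Logging": {"Persistence"},
    "Persistence": {"Sensor"},     # violates the constraint
    "Ui": {"Sensor"},
}
print(find_dependency_cycle(model_deps))   # ['Sensor', 'Logging', 'Persistence', 'Sensor']
```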
10. Effective Static Debugging via Componential Set-Based Analysis. January 1997.
Sophisticated software systems are inherently complex. Understanding, debugging and maintaining such systems requires inferring high-level characteristics of the system's behavior from a myriad of low-level details. For large systems, this quickly becomes an extremely difficult task.
MrSpidey is a static debugger that augments the programmer's ability to deal with such complex systems. It statically analyzes the program and uses the results of the analysis to identify and highlight any program operation that may cause a run-time fault. The programmer can then investigate each potential fault site and, using the graphical explanation facilities of MrSpidey, determine whether the fault will really happen or whether the corresponding correctness proof is beyond the analysis's capabilities. In practice, MrSpidey has proven to be an effective tool for debugging programs under development and understanding existing programs.
The key technology underlying MrSpidey is componential set-based analysis. This is a constraint-based, whole-program analysis for object-oriented and functional programs. The analysis first processes each program component (e.g. module or package) independently, generating and simplifying a constraint system describing the data flow behavior of that component. The analysis then combines and solves these simplified constraint systems to yield invariants characterizing the run-time behavior of the entire program. This component-wise approach yields an analysis that handles significantly larger programs than previous analyses of comparable accuracy.
The simplification of constraint systems raises a number of questions. In particular, we need to ensure that simplification preserves the observable behavior, or solution space, of a constraint system. This dissertation provides a complete proof-theoretic and algorithmic characterization of the observable behavior of constraint systems, and establishes a close connection between the observable equivalence of constraint systems and the equivalence of regular tree grammars. We exploit this connection to develop a complete algorithm for deciding the observable equivalence of constraint systems, and to adapt a variety of algorithms for simplifying regular tree grammars to the problem of simplifying constraint systems. The resulting constraint simplification algorithms yield an order of magnitude reduction in the size of constraint systems for typical program expressions.
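As a greatly simplified sketch of the flavour of analysis involved (MrSpidey's actual constraint language, simplification and componential staging are far richer than this), the fragment below solves set constraints of the forms "value ∈ X" and "X ⊆ Y" by worklist propagation, computing the least sets of abstract values that can reach each program variable; the constraints shown are an invented example.

```python
from collections import defaultdict, deque

# A tiny set-based analysis: constraints are either ("in", value, var),
# meaning value ∈ var, or ("sub", x, y), meaning x ⊆ y. The least solution is
# computed by propagating values along subset edges to a fixed point.

def solve(constraints):
    sets = defaultdict(set)            # var -> abstract values it may contain
    succs = defaultdict(set)           # var -> variables it flows into
    work = deque()

    for kind, a, b in constraints:
        if kind == "in":               # a ∈ b
            work.append((b, a))
        else:                          # a ⊆ b
            succs[a].add(b)

    while work:
        var, value = work.popleft()
        if value in sets[var]:
            continue
        sets[var].add(value)
        for nxt in succs[var]:         # newly discovered value flows onward
            work.append((nxt, value))
    return dict(sets)

constraints = [
    ("in", "int:1", "x"),     # x = 1
    ("sub", "x", "y"),        # y receives whatever x holds
    ("in", "int:2", "z"),     # one branch assigns 2 to z
    ("sub", "y", "z"),        # the other branch assigns y to z
]
for var, values in solve(constraints).items():
    print(var, sorted(values))
```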