1 |
Quality aspects of software product supply and support using the Internet / Braude, Bruce Shaun. January 1998 (has links)
A dissertation submitted to the Faculty of Engineering,
University of the Witwatersrand, Johannesburg, in
fulfilment of the requirements for the degree of Master of
Science in Engineering.
Johannesburg, 1998. / This project explores the use of the Internet to supply and support software
products within a quality management system. The Software Engineering
Applications Laboratory (SEAL) at the University of the Witwatersrand is in the
process of developing various software products that will be commercially
distributed in the near future. The SEAL has chosen to use the Internet to
supply and support these products. A system has been developed for this task
and has been named the Internet System for the Supply and Support of
Software (IS4).
The SEAL is committed to developing and supplying software within a quality
management system. Consequently an investigation was undertaken into the
quality characteristics and requirements based on the ISO 9001 standard for
quality assurance and the ISO/IEC JTC1/SC7 software engineering standards.
The investigation focused on quality requirements for processes related to
supplying and supporting software as well as on the quality characteristics of
the IS4 and the IS4 development process. These quality concerns have been
incorporated into the SEAL's quality management system, the design and
development of the IS4 and the development process for SEAL products.
Major technical issues that have influenced the design of the IS4 have been the
control of the supply and licensing of the supplied products and the transaction
processing of the on-line sales. To control the supply and licensing of the
supplied products, various issues such as unlock keys, Internet based
registration, controlled access and hardware control have been investigated.
The advantages and disadvantages of each have been investigated and a
suitable implementation has been used in the IS4. To process the on-line
transactions the IS4 will be developed to be compliant with the recently released
'Secure Electronic Transactions' (SET) standard.
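To make the unlock-key idea above concrete, the sketch below derives a key from a product identifier and a machine fingerprint with a keyed hash, and verifies it on the client. This is only an illustration under assumed names (VENDOR_SECRET, issue_unlock_key, verify_unlock_key); the abstract does not describe the IS4's actual scheme.

    import hashlib
    import hmac

    # Hypothetical unlock-key scheme (illustrative only): the vendor derives a key
    # from a product ID and a machine fingerprint using a secret only the vendor
    # holds, and the installed product recomputes and compares it.
    VENDOR_SECRET = b"replace-with-vendor-secret"  # assumed value

    def issue_unlock_key(product_id: str, machine_fingerprint: str) -> str:
        """Vendor side: derive an unlock key for one product on one machine."""
        message = f"{product_id}|{machine_fingerprint}".encode()
        digest = hmac.new(VENDOR_SECRET, message, hashlib.sha256).hexdigest()
        return digest[:20].upper()  # shortened so it can be typed in manually

    def verify_unlock_key(product_id: str, machine_fingerprint: str, key: str) -> bool:
        """Product side: accept the key only if it matches this machine."""
        expected = issue_unlock_key(product_id, machine_fingerprint)
        return hmac.compare_digest(expected, key.strip().upper())

    key = issue_unlock_key("SEAL-PRODUCT-1", "MACHINE-1234")
    print(key, verify_unlock_key("SEAL-PRODUCT-1", "MACHINE-1234", key))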
The project has been managed in accordance with the SEAL's Quality
Management System (QMS) which is ISO 9001 compliant. The system contains
a Shopper Interface for purchasing of SEAL products and a Manager Interface
for administration of the system. The Microsoft BackOffice® set of software has
formed the foundation on which the system has been developed. One of the
focuses of the project was maintainability of the IS4. Documentation and
procedures have been developed to aid in administration and perfective
maintenance in the future.
|
2 |
A system of automated tools to support control of software development through software configuration management / Walsh, Martha Geiger. January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries / Department: Computer Science.
|
3 |
An experimental study of cost cognizant test case prioritization / Goel, Amit. 02 December 2002 (has links)
Test case prioritization techniques schedule test cases for regression testing
in an order that increases their ability to meet some performance goal. One performance
goal, rate of fault detection, measures how quickly faults are detected
within the testing process. The APFD metric has been proposed for measuring
the rate of fault detection. This metric applies, however, only in cases in which
test costs and fault costs are uniform. In practice, fault costs and test costs
are not uniform. For example, some faults which lead to system failures might
be more costly than faults which lead to minor errors. Similarly, a test case
that runs for several hours is much more costly than a test case that runs for
a few seconds. Previous work has thus provided a second, metric APFD[subscript c], for
measuring rate of fault detection, that incorporates test costs and fault costs.
However, studies of this metric thus far have been limited to abstract distribution
models of costs. These distribution models did not represent actual fault
costs and test costs for software systems.
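For reference, the standard APFD formula is APFD = 1 - (sum of TF_i)/(n*m) + 1/(2n), where TF_i is the position of the first test revealing fault i, n is the number of tests and m the number of faults. The sketch below computes it, along with one cost-weighted variant in the spirit of APFD_c; the exact weighting used in the thesis is not reproduced here, so treat the second function as an assumption.

    def apfd(fault_positions, num_tests):
        """APFD for uniform costs. fault_positions[i] is the 1-based position in the
        prioritized suite of the first test that reveals fault i."""
        m, n = len(fault_positions), num_tests
        return 1 - sum(fault_positions) / (n * m) + 1 / (2 * n)

    def apfd_cost_weighted(fault_positions, fault_costs, test_costs):
        """Cost-cognizant variant (assumed form of APFD_c): each fault is weighted by
        its cost and each test by its execution cost."""
        total = sum(
            f * (sum(test_costs[pos - 1:]) - test_costs[pos - 1] / 2)
            for pos, f in zip(fault_positions, fault_costs)
        )
        return total / (sum(test_costs) * sum(fault_costs))

    # Example: 5 tests; 3 faults first revealed by tests 1, 3 and 4.
    print(apfd([1, 3, 4], 5))
    print(apfd_cost_weighted([1, 3, 4], [10.0, 1.0, 1.0], [2.0, 2.0, 5.0, 1.0, 1.0]))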
In this thesis, we describe some practical ways to estimate real fault costs
and test costs for software systems, based on operational profiles and test execution timings. Further we define some new cost-cognizant prioritization techniques
which focus on the APFD_c metric. We report results of an empirical
study investigating the rate of "units-of-fault-cost-detected-per-unit-test-cost"
across various cost-cognizant prioritization techniques and tradeoffs between
techniques.
The results of our empirical study indicate that cost-cognizant test case prioritization
techniques can substantially improve the rate of fault detection of
test suites. The results also provide insights into the tradeoffs among various
prioritization techniques. For example: (1) techniques incorporating feedback
information (information from previous tests) outperformed those without any
feedback information; (2) technique effectiveness differed most when faults were
relatively difficult to detect; (3) in most cases, technique performance was similar
at function and statement level; (4) surprisingly, techniques considering
change location did not perform as well as expected. The study also reveals
several practical issues that might arise in applying test case prioritization, as
well as opportunities for future work. / Graduation date: 2003
|
4 |
Test case prioritization / Chu, Chengyun, 1974-. 01 June 1999 (has links)
Prioritization techniques are used to schedule test cases to execute in a specific order to maximize some objective function. There are a variety of possible objective functions, such as a function that measures how quickly faults can be detected within the testing process, or a function that measures how fast coverage of the program can be increased. In this paper, we describe several test case prioritization techniques, and empirical studies performed to investigate their relative abilities to improve how quickly faults can be detected by test suites. An improved rate of fault detection during regression testing can provide faster feedback about a system under regression test and let debuggers begin their work earlier than might otherwise be possible. The results of our studies indicate that test case prioritization techniques can substantially improve the rate of fault detection of test suites. The results also provide insights into the tradeoffs among various prioritization techniques. / Graduation date: 2000
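One of the simplest prioritization techniques of the kind studied here is "additional coverage" prioritization: repeatedly pick the test that covers the most not-yet-covered statements. A minimal sketch with made-up coverage data follows; the thesis evaluates several techniques, and this is just one illustrative example.

    def additional_coverage_order(coverage):
        """Greedy 'additional coverage' prioritization. coverage maps each test name
        to the set of statements (or branches) it covers."""
        remaining = dict(coverage)
        covered, order = set(), []
        while remaining:
            # Pick the test that adds the most still-uncovered statements.
            best = max(remaining, key=lambda t: len(remaining[t] - covered))
            if not remaining[best] - covered:
                order.extend(remaining)  # nothing new left; append the rest as-is
                break
            order.append(best)
            covered |= remaining.pop(best)
        return order

    coverage = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5}, "t4": {1, 2}}
    print(additional_coverage_order(coverage))  # ['t1', 't2', 't3', 't4']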
|
5 |
Effective Static Debugging via Componential Set-Based Analysis / January 1997 (has links)
Sophisticated software systems are inherently complex. Understanding, debugging
and maintaining such systems requires inferring high-level characteristics of the
system's behavior from a myriad of low-level details. For large systems, this quickly
becomes an extremely difficult task.
MrSpidey is a static debugger that augments the programmer's ability to deal
with such complex systems. It statically analyzes the program and uses the results
of the analysis to identify and highlight any program operation that may cause a run-time
fault. The programmer can then investigate each potential fault site and, using the
graphical explanation facilities of MrSpidey, determine if the fault will really happen
or whether the corresponding correctness proof is beyond the analysis's capabilities.
In practice, MrSpidey has proven to be an effective tool for debugging programs under
development and understanding existing programs.
The key technology underlying MrSpidey is componential set-based analysis. This
is a constraint-based, whole-program analysis for object-oriented and functional programs.
The analysis first processes each program component (e.g., module or package)
independently, generating and simplifying a constraint system describing the data
flow behavior of that component. The analysis then combines and solves these simplified
constraint systems to yield invariants characterizing the run-time behavior of
the entire program. This component-wise approach yields an analysis that handles
significantly larger programs than previous analyses of comparable accuracy.
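At its core, set-based analysis assigns a set variable to each program point and solves constraints such as "value v flows into X" and "everything in X flows into Y" for their least solution. The toy fixpoint solver below illustrates only that idea; it is not MrSpidey's constraint representation, and the constraint forms are simplified assumptions.

    from collections import defaultdict

    def solve(constraints):
        """Least-solution solver for simple set constraints.
        ('in', v, X)  means value v flows into set variable X;
        ('sub', X, Y) means everything in X also flows into Y."""
        sets, edges = defaultdict(set), defaultdict(set)
        for c in constraints:
            if c[0] == "in":
                sets[c[2]].add(c[1])
            else:
                edges[c[1]].add(c[2])
        changed = True
        while changed:  # iterate until a fixpoint is reached
            changed = False
            for src, dsts in edges.items():
                for dst in dsts:
                    new = sets[src] - sets[dst]
                    if new:
                        sets[dst] |= new
                        changed = True
        return dict(sets)

    # x = 1; y = "a"; z = x; z = y   ->   z may hold an int or a string
    print(solve([("in", "int", "x"), ("in", "str", "y"),
                 ("sub", "x", "z"), ("sub", "y", "z")]))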
The simplification of constraint systems raises a number of questions. In particular,
we need to ensure that simplification preserves the observable behavior, or
solution space, of a constraint system. This dissertation provides a complete proof-theoretic
and algorithmic characterization of the observable behavior of constraint
systems, and establishes a close connection between the observable equivalence of
constraint systems and the equivalence of regular tree grammars. We exploit this
connection to develop a complete algorithm for deciding the observable equivalence
of constraint systems, and to adapt a variety of algorithms for simplifying regular
tree grammars to the problem of simplifying constraint systems. The resulting constraint
simplification algorithms yield an order of magnitude reduction in the size of
constraint systems for typical program expressions.
|
6 |
Design metrics forensics: an analysis of the primitive metrics in the Zage design metrics / Kwan, Pak Leung. January 1994 (has links)
The Software Engineering Research Center (SERC) Design Metrics Research Team at Ball State University has developed a design metric D(G) of the form D(G) = De + Di, where De is the architectural design metric (external design metric) and Di is the detailed design metric (internal design metric). Questions to be investigated in this thesis are: Why can De be an indicator of the potential error modules? Why can Di be an indicator of the potential error modules? Are there any significant factors that dominate the design metrics? In this thesis, the report of the STANFINS data is evaluated by using correlation analysis, regression analysis, and several other statistical techniques. The STANFINS study is chosen because it contains approximately 532 programs, 3,000 packages and 2,500,000 lines of Ada. The design metrics study was completed on 21 programs (approximately 24,000 lines of code) which were selected by CSC development teams. Error reports were also provided by CSC personnel. / Department of Computer Science
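The abstract does not spell out the primitive counts behind De and Di. As a purely illustrative sketch, the code below assumes De is a weighted combination of a module's data inflows/outflows and fan-in/fan-out, and Di a weighted sum of internal counts, with placeholder weights of 1; the real Zage metrics may weight different primitives.

    def d_external(inflows, outflows, fan_in, fan_out, w1=1.0, w2=1.0):
        """Assumed form of the architectural (external) metric De."""
        return w1 * (inflows * outflows) + w2 * (fan_in * fan_out)

    def d_internal(central_calls, data_refs, io_ops, w3=1.0, w4=1.0, w5=1.0):
        """Assumed form of the detailed-design (internal) metric Di."""
        return w3 * central_calls + w4 * data_refs + w5 * io_ops

    def d_g(module):
        """D(G) = De + Di for one module, given its primitive counts."""
        return (d_external(module["inflows"], module["outflows"],
                           module["fan_in"], module["fan_out"])
                + d_internal(module["central_calls"], module["data_refs"],
                             module["io_ops"]))

    m = {"inflows": 3, "outflows": 2, "fan_in": 4, "fan_out": 1,
         "central_calls": 5, "data_refs": 7, "io_ops": 2}
    print(d_g(m))  # 3*2 + 4*1 + 5 + 7 + 2 = 24.0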
|
7 |
Applying design metrics to large-scale telecommunications software / Pipkin, Jeffrey A. January 1996 (has links)
The design metrics developed by the Design Metrics team at Ball State University are a suite of metrics that can be applied during the design phase of software development. The benefit of the metrics lies in the fact that they can be applied early in the software development cycle. The suite includes the external design metric De, the internal design metric Di, the composite metric D(G), the design balance metric DB, and the design connectivity metric DC. The suite of design metrics has been applied to large-scale industrial software as well as student projects. Bell Communications Research of New Jersey has made available a software system that can be used to apply design metrics to large-scale telecommunications software. This thesis presents the suite of design metrics and attempts to determine if the characteristics of telecommunications software are accurately reflected in the conventions used to compute the metrics. / Department of Computer Science
|
8 |
Software reliability prediction based on design metrics / Stineburg, Jeffrey. January 1999 (has links)
This study has presented a new model for predicting software reliability based on design metrics. An introduction to the problem of software reliability is followed by a brief overview of software reliability models. A description of the models is given, including a discussion of some of the issues associated with them. The intractability of validating life-critical software is presented. Such validation is shown to require extended periods of test time that are impractical in real world situations. This problem is also inherent in fault tolerant software systems of the type currently being implemented in critical applications today. The design metrics developed at Ball State University are proposed as the basis of a new model for predicting software reliability from information available during the design phase of development. The thesis investigates the proposition that a relationship exists between the design metric D(G) and the errors that are found in the field. A study, performed on a subset of a large defense software system, discovered evidence to support the proposition. / Department of Computer Science
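The abstract does not reproduce the prediction model itself. As a sketch of the kind of relationship investigated, one could fit a least-squares line of field-error counts against per-module D(G) values; the data below is invented for illustration.

    # Invented data: per-module D(G) value and field-error count.
    dg = [12.0, 25.0, 40.0, 55.0, 70.0]
    errors = [1.0, 2.0, 4.0, 6.0, 9.0]

    n = len(dg)
    mean_x, mean_y = sum(dg) / n, sum(errors) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(dg, errors))
             / sum((x - mean_x) ** 2 for x in dg))
    intercept = mean_y - slope * mean_x

    print(f"errors ~ {slope:.3f} * D(G) + {intercept:.3f}")
    print(slope * 50 + intercept)  # predicted field errors for a module with D(G) = 50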
|
9 |
Using the Design Metrics Analyzer to improve software quality / Wilburn, Cathy A. January 1994 (has links)
Effective software engineering techniques are needed to increase the reliability of software systems, to increase the productivity of development teams, and to reduce the costs of software development. Companies search for an effective software engineering process as they strive to reach higher process maturity levels and produce better software. To aid in this quest for better methods of software engineering, the Design Metrics Research Team at Ball State University has analyzed university and industry software to be able to detect error-prone modules. The research team has developed, tested and validated their design metrics and found them to be highly successful. These metrics were typically collected and calculated by hand. So that these metrics can be collected more consistently, more accurately and faster, the Design Metrics Analyzer for Ada (DMA) was created. The DMA collects metrics from the files submitted based on a subprogram level. The metrics results are then analyzed to yield a list of stress points, which are modules that are considered to be error-prone or difficult for developers. This thesis describes the Design Metrics Analyzer, explains its output and how it functions. Also, ways that the DMA can be used in the software development life cycle are discussed. / Department of Computer Science
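A simple way to turn per-module metric values into a list of stress points is to flag modules whose value lies well above the rest, for example more than one standard deviation above the mean. The DMA's actual criterion is not given in the abstract, so the threshold below is an assumption.

    from statistics import mean, stdev

    def stress_points(metrics, k=1.0):
        """Flag modules whose metric value is more than k standard deviations above
        the mean over all modules. metrics maps module name -> metric value."""
        values = list(metrics.values())
        threshold = mean(values) + k * stdev(values)
        return sorted(name for name, value in metrics.items() if value > threshold)

    modules = {"parse": 12.0, "report": 9.5, "update_db": 31.0, "init": 8.0}
    print(stress_points(modules))  # ['update_db']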
|
10 |
An empirical study of software design balance dynamics / Bhattrai, Gopendra R. January 1995 (has links)
The Design Metrics Research Team in the Computer Science Department at Ball State University has been engaged in developing and validating quality design metrics since 1987. Since then a number of design metrics have been developed and validated. One of the design metrics developed by the research team is design balance (DB). This thesis is an attempt to validate the metric DB. In this thesis, results of the analysis of five systems are presented. The main objective of this research is to examine if DB can be used to evaluate the complexity of a software design and hence the quality of the resulting software. Two of the five systems analyzed were student projects and the remaining three were from industry. The five systems analyzed were written in different languages, had different sizes and exhibited different error rates. / Department of Computer Science
|