1

Quality metrics in software engineering

Masoud, F. A. January 1987 (has links)
In the first part of this study, software metrics are classified into three categories: primitive, abstract and structured. A comparative and analytical study of metrics from these categories was performed to provide software developers, users and management with a correct and consistent evaluation of a representative sample of the software metrics available in the literature. This analysis and comparison was performed to assist software developers, users and management in selecting suitable quality metrics for their specific software quality requirements, and to examine the various definitions used to calculate these metrics. In the second part of this study, an approach towards attaining software quality is developed. This approach is intended to help all the people concerned with the evaluation of software quality in the earlier stages of software systems development. The approach is intended to be uniform, consistent, unambiguous and comprehensive, and to make the concept of software quality more meaningful and visible. It will help developers both to understand the concepts of software quality and to apply and control it according to the expectations of users, management, customers, etc. The clear definitions provided for the software quality terms should help to prevent misinterpretation, and will also serve as a touchstone against which new ideas can be tested.
2

Quality aspects of software product supply and support using the Internet

Braude, Bruce Shaun January 1998 (has links)
A dissertation submitted to the Faculty of Engineering, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Engineering. Johannesburg, 1998. / This project explores the use of the Internet to supply and support software products within a quality management system. The Software Engineering Applications Laboratory (SEAL) at the University of the Witwatersrand is in the process of developing various software products that will be commercially distributed in the near future. The SEAL has chosen to use the Internet to supply and support these products. A system has been developed for this task and has been named the Internet System for the Supply and Support of Software (IS4). The SEAL is committed to developing and supplying software within a quality management system. Consequently an investigation was undertaken into the quality characteristics and requirements based on the ISO 9001 standard for quality assurance and the ISO/IEC JTC1/SC7 software engineering standards. The investigation focused on quality requirements for processes related to supplying and supporting software as well as on the quality characteristics of the IS4 and the IS4 development process. These quality concerns have been incorporated into the SEAL's quality management system, the design and development of the IS4 and the development process for SEAL products. Major technical issues that have influenced the design of the IS4 have been the control of the supply and licensing of the supplied products and the transaction processing of the on-line sales. To control the supply and licensing of the supplied products, various issues such as unlock keys, Internet-based registration, controlled access and hardware control have been investigated. The advantages and disadvantages of each have been investigated and a suitable implementation has been used in the IS4. To process the on-line transactions the IS4 will be developed to be compliant with the recently released 'Secure Electronic Transactions' (SET) standard. The project has been managed in accordance with the SEAL's Quality Management System (QMS), which is ISO 9001 compliant. The system contains a Shopper Interface for purchasing of SEAL products and a Manager Interface for administration of the system. The Microsoft BackOffice® set of software has formed the foundation on which the system has been developed. One of the focuses of the project was maintainability of the IS4. Documentation and procedures have been developed to aid in administration and perfective maintenance in the future.
3

A system of automated tools to support control of software development through software configuration management

Walsh, Martha Geiger January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries / Department: Computer Science.
4

An experimental study of cost cognizant test case prioritization

Goel, Amit 02 December 2002 (has links)
Test case prioritization techniques schedule test cases for regression testing in an order that increases their ability to meet some performance goal. One performance goal, rate of fault detection, measures how quickly faults are detected within the testing process. The APFD metric had been proposed for measuring the rate of fault detection. This metric applies, however, only in cases in which test costs and fault costs are uniform. In practice, fault costs and test costs are not uniform. For example, some faults which lead to system failures might be more costly than faults which lead to minor errors. Similarly, a test case that runs for several hours is much more costly than a test case that runs for a few seconds. Previous work has thus provided a second metric, APFD_c, for measuring rate of fault detection, that incorporates test costs and fault costs. However, studies of this metric thus far have been limited to abstract distribution models of costs. These distribution models did not represent actual fault costs and test costs for software systems. In this thesis, we describe some practical ways to estimate real fault costs and test costs for software systems, based on operational profiles and test execution timings. Further, we define some new cost-cognizant prioritization techniques which focus on the APFD_c metric. We report results of an empirical study investigating the rate of "units-of-fault-cost-detected-per-unit-test-cost" across various cost-cognizant prioritization techniques and the tradeoffs between techniques. The results of our empirical study indicate that cost-cognizant test case prioritization techniques can substantially improve the rate of fault detection of test suites. The results also provide insights into the tradeoffs among various prioritization techniques. For example: (1) techniques incorporating feedback information (information from previous tests) outperformed those without any feedback information; (2) technique effectiveness differed most when faults were relatively difficult to detect; (3) in most cases, technique performance was similar at the function and statement level; (4) surprisingly, techniques considering change location did not perform as well as expected. The study also reveals several practical issues that might arise in applying test case prioritization, as well as opportunities for future work. / Graduation date: 2003
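As an editorial illustration of the metrics discussed in this abstract, the sketch below computes APFD and a cost-cognizant APFD_c for a given test order in Python. The formulas follow the definitions commonly published in the test-case-prioritization literature; the function names and toy data are assumptions made here for illustration, not material from the thesis.

```python
# Illustrative sketch (not from the thesis): APFD and cost-cognizant APFD_c.

def apfd(order, detects, num_faults):
    """APFD for a test order, assuming uniform test and fault costs.
    order      -- list of test ids in execution order
    detects    -- dict: test id -> set of fault ids that test exposes
    num_faults -- total number of faults considered
    """
    n = len(order)
    first_detect = {}  # fault id -> 1-based position of first detecting test
    for pos, test in enumerate(order, start=1):
        for fault in detects.get(test, set()):
            first_detect.setdefault(fault, pos)
    tf_sum = sum(first_detect.get(f, n + 1) for f in range(num_faults))
    return 1 - tf_sum / (n * num_faults) + 1 / (2 * n)

def apfd_c(order, detects, test_cost, fault_cost):
    """Cost-cognizant variant: each fault is weighted by its cost, and each
    delay by the cost of the tests run up to (and half of) the first
    detecting test."""
    total_test_cost = sum(test_cost[t] for t in order)
    total_fault_cost = sum(fault_cost.values())
    numerator = 0.0
    for fault, cost in fault_cost.items():
        pos = next((i for i, t in enumerate(order)
                    if fault in detects.get(t, set())), None)
        if pos is None:
            continue  # undetected faults contribute nothing in this sketch
        remaining = sum(test_cost[t] for t in order[pos:]) - 0.5 * test_cost[order[pos]]
        numerator += cost * remaining
    return numerator / (total_test_cost * total_fault_cost)

# Hypothetical example data
order = ["t1", "t2", "t3"]
detects = {"t1": {0}, "t2": {1}, "t3": {0, 1}}
print(apfd(order, detects, num_faults=2))
print(apfd_c(order, detects,
             test_cost={"t1": 2.0, "t2": 5.0, "t3": 1.0},
             fault_cost={0: 1.0, 1: 3.0}))
```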
5

Test case prioritization

Chu, Chengyun, 1974- 01 June 1999 (has links)
Prioritization techniques are used to schedule test cases to execute in a specific order to maximize some objective function. There are a variety of possible objective functions, such as a function that measures how quickly faults can be detected within the testing process, or a function that measures how fast coverage of the program can be increased. In this paper, we describe several test case prioritization techniques, and empirical studies performed to investigate their relative abilities to improve how quickly faults can be detected by test suites. An improved rate of fault detection during regression testing can provide faster feedback about a system under regression test and let debuggers begin their work earlier than might otherwise be possible. The results of our studies indicate that test case prioritization techniques can substantially improve the rate of fault detection of test suites. The results also provide insights into the tradeoffs among various prioritization techniques. / Graduation date: 2000
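For context, the sketch below shows one classic heuristic of the kind evaluated in this line of work: a greedy "additional coverage" ordering that repeatedly selects the test covering the most not-yet-covered statements, resetting once everything coverable has been covered. It is a generic illustration under assumed data structures, not one of the thesis's specific techniques.

```python
# Illustrative sketch: greedy "additional coverage" test case prioritization.

def prioritize_additional_coverage(coverage):
    """coverage: dict mapping test id -> set of statements it covers.
    Returns a test order that greedily maximizes coverage of statements not
    yet covered, resetting once every coverable statement has been covered."""
    remaining = dict(coverage)
    uncovered = set().union(*coverage.values()) if coverage else set()
    order = []
    while remaining:
        if not uncovered:                      # everything covered: reset
            uncovered = set().union(*remaining.values())
        best = max(remaining, key=lambda t: len(coverage[t] & uncovered))
        order.append(best)
        uncovered -= coverage[best]
        del remaining[best]
    return order

# Hypothetical coverage data
print(prioritize_additional_coverage({"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5}}))
```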
6

Effective Static Debugging via Componential Set-Based Analysis

January 1997 (has links)
Sophisticated software systems are inherently complex. Understanding, debugging and maintaining such systems requires inferring high-level characteristics of the system's behavior from a myriad of low-level details. For large systems, this quickly becomes an extremely difficult task. MrSpidey is a static debugger that augments the programmer's ability to deal with such complex systems. It statically analyzes the program and uses the results of the analysis to identify and highlight any program operation that may cause a run-time fault. The programmer can then investigate each potential fault site and, using the graphical explanation facilities of MrSpidey, determine if the fault will really happen or whether the corresponding correctness proof is beyond the analysis's capabilities. In practice, MrSpidey has proven to be an effective tool for debugging programs under development and understanding existing programs. The key technology underlying MrSpidey is componential set-based analysis. This is a constraint-based, whole-program analysis for object-oriented and functional programs. The analysis first processes each program component (e.g. module or package) independently, generating and simplifying a constraint system describing the data flow behavior of that component. The analysis then combines and solves these simplified constraint systems to yield invariants characterizing the run-time behavior of the entire program. This component-wise approach yields an analysis that handles significantly larger programs than previous analyses of comparable accuracy. The simplification of constraint systems raises a number of questions. In particular, we need to ensure that simplification preserves the observable behavior, or solution space, of a constraint system. This dissertation provides a complete proof-theoretic and algorithmic characterization of the observable behavior of constraint systems, and establishes a close connection between the observable equivalence of constraint systems and the equivalence of regular tree grammars. We exploit this connection to develop a complete algorithm for deciding the observable equivalence of constraint systems, and to adapt a variety of algorithms for simplifying regular tree grammars to the problem of simplifying constraint systems. The resulting constraint simplification algorithms yield an order of magnitude reduction in the size of constraint systems for typical program expressions.
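The core idea of a set-based analysis can be illustrated with a tiny subset-constraint solver, sketched below. It handles only two constraint forms (a constant flowing into a set variable, and one variable flowing into another) and computes the least solution by propagating to a fixed point; MrSpidey's componential analysis works over a much richer constraint language and simplifies each component's system before combining them, so this is a didactic sketch only.

```python
# Illustrative sketch: least solution of simple subset constraints.

def solve(memberships, subsets):
    """memberships -- iterable of (constant, var) pairs, meaning constant is in var
    subsets     -- iterable of (src, dst) pairs, meaning src is a subset of dst
    Returns the least assignment of sets to variables satisfying all constraints."""
    solution = {}
    for constant, var in memberships:
        solution.setdefault(var, set()).add(constant)
    changed = True
    while changed:                      # propagate until a fixed point
        changed = False
        for src, dst in subsets:
            extra = solution.get(src, set()) - solution.get(dst, set())
            if extra:
                solution.setdefault(dst, set()).update(extra)
                changed = True
    return solution

# Hypothetical constraints: X and Y both flow into Z
print(solve(memberships=[("int", "X"), ("str", "Y")],
            subsets=[("X", "Z"), ("Y", "Z")]))
```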
7

Design metrics forensics : an analysis of the primitive metrics in the Zage design metrics

Kwan, Pak Leung January 1994 (has links)
The Software Engineering Research Center (SERC) Design Metrics Research Team at Ball State University has developed a design metric D(G) of the form D(G) = De + Di, where De is the architectural design metric (external design metric) and Di is the detailed design metric (internal design metric). Questions to be investigated in this thesis are: Why can De be an indicator of the potential error modules? Why can Di be an indicator of the potential error modules? Are there any significant factors that dominate the design metrics? In this thesis, the STANFINS report data is evaluated using correlation analysis, regression analysis, and several other statistical techniques. The STANFINS study is chosen because it contains approximately 532 programs, 3,000 packages and 2,500,000 lines of Ada. The design metrics study was completed on 21 programs (approximately 24,000 lines of code) which were selected by CSC development teams. Error reports were also provided by CSC personnel. / Department of Computer Science
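A rough sketch of how a metric of this shape might be computed is shown below. The primitives used (inflows/outflows and fan-in/fan-out for De; central calls, data-structure manipulations and I/O operations for Di) follow published descriptions of the Zage design metrics, but the exact formula and the unit weights here are assumptions for illustration, not the calibration used in the thesis.

```python
# Illustrative sketch: a Zage-style design metric D(G) = De + Di.
# The primitive counts and the unit weights are assumptions, not the
# thesis's calibrated definition.

def external_design_metric(inflows, outflows, fan_in, fan_out, e1=1.0, e2=1.0):
    """De: based on data flowing into/out of a module and its call fan."""
    return e1 * (inflows * outflows) + e2 * (fan_in * fan_out)

def internal_design_metric(central_calls, ds_manipulations, io_ops,
                           i1=1.0, i2=1.0, i3=1.0):
    """Di: based on a module's internal complexity primitives."""
    return i1 * central_calls + i2 * ds_manipulations + i3 * io_ops

def design_metric(module):
    """D(G) = De + Di for a module described by a dict of primitive counts."""
    de = external_design_metric(module["inflows"], module["outflows"],
                                module["fan_in"], module["fan_out"])
    di = internal_design_metric(module["central_calls"],
                                module["ds_manipulations"], module["io_ops"])
    return de + di

# Hypothetical module profile
print(design_metric({"inflows": 3, "outflows": 2, "fan_in": 4, "fan_out": 1,
                     "central_calls": 5, "ds_manipulations": 7, "io_ops": 2}))
```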
8

Applying design metrics to large-scale telecommunications software

Pipkin, Jeffrey A. January 1996 (has links)
The design metrics developed by the Design Metrics team at Ball State University are a suite of metrics that can be applied during the design phase of software development. The benefit of the metrics lies in the fact that they can be applied early in the software development cycle. The suite includes the external design metric De, the internal design metric Di, the design metric D(G), the design balance metric DB, and the design connectivity metric DC. The suite of design metrics has been applied to large-scale industrial software as well as student projects. Bell Communications Research of New Jersey has made available a software system that can be used to apply design metrics to large-scale telecommunications software. This thesis presents the suite of design metrics and attempts to determine if the characteristics of telecommunications software are accurately reflected in the conventions used to compute the metrics. / Department of Computer Science
9

Software reliability prediction based on design metrics

Stineburg, Jeffrey January 1999 (has links)
This study has presented a new model for predicting software reliability based on design metrics. An introduction to the problem of software reliability is followed by a brief overview of software reliability models. A description of the models is given, including a discussion of some of the issues associated with them. The intractability of validating life-critical software is presented. Such validation is shown to require extended periods of test time that are impractical in real-world situations. This problem is also inherent in fault-tolerant software systems of the type currently being implemented in critical applications today. The design metrics developed at Ball State University are proposed as the basis of a new model for predicting software reliability from information available during the design phase of development. The thesis investigates the proposition that a relationship exists between the design metric D(G) and the errors that are found in the field. A study, performed on a subset of a large defense software system, discovered evidence to support the proposition. / Department of Computer Science
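The simplest version of the relationship investigated here, field errors as a function of the design metric D(G), can be illustrated with an ordinary least-squares fit, as in the sketch below. The data points are hypothetical and the thesis's actual model is more involved; the sketch only shows the kind of prediction being attempted.

```python
# Illustrative sketch: least-squares fit of field errors against D(G).
# The data points below are hypothetical.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

d_g    = [12.0, 45.0, 80.0, 150.0, 220.0]   # design metric per module
errors = [0, 1, 2, 5, 8]                    # field errors per module
a, b = fit_line(d_g, errors)
print(f"predicted field errors for D(G) = 100: {a + b * 100:.2f}")
```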
10

Using the Design Metrics Analyzer to improve software quality

Wilburn, Cathy A. January 1994 (has links)
Effective software engineering techniques are needed to increase the reliability of software systems, to increase the productivity of development teams, and to reduce the costs of software development. Companies search for an effective software engineering process as they strive to reach higher process maturity levels and produce better software. To aid in this quest for better methods of software engineering, the Design Metrics Research Team at Ball State University has analyzed university and industry software to detect error-prone modules. The research team has developed, tested and validated its design metrics and found them to be highly successful. These metrics were typically collected and calculated by hand. So that these metrics can be collected more consistently, more accurately and more quickly, the Design Metrics Analyzer for Ada (DMA) was created. The DMA collects metrics from the submitted files at the subprogram level. The metrics results are then analyzed to yield a list of stress points, which are modules that are considered to be error-prone or difficult for developers. This thesis describes the Design Metrics Analyzer, explains its output, and describes how it functions. Ways that the DMA can be used in the software development life cycle are also discussed. / Department of Computer Science
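To make the notion of "stress points" concrete, the sketch below shows one plausible way a tool like the DMA could flag them: rank modules by their D(G) value and report the highest-scoring fraction. The top-fraction threshold is an assumption made for illustration, not the DMA's documented selection rule.

```python
# Illustrative sketch: flagging "stress points" from per-module metric values.
# The top-fraction cutoff is an assumed rule, not the DMA's actual criterion.

def stress_points(metric_by_module, top_fraction=0.1):
    """Return modules whose metric value is in the top `top_fraction` of the
    system, ordered from highest to lowest value."""
    ranked = sorted(metric_by_module.items(), key=lambda kv: kv[1], reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    return [module for module, _ in ranked[:cutoff]]

# Hypothetical D(G) values per module
print(stress_points({"parse": 120.0, "init": 8.0, "report": 35.0,
                     "dispatch": 210.0, "log": 4.0}, top_fraction=0.2))
```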
