1

Quality metrics in software engineering

Masoud, F. A. January 1987 (has links)
In the first part of this study software metrics are classified into three categories: primitive, abstract and structured. A comparative and analytical study of metrics from these categories was performed to provide software developers, users and management with a correct and consistent evaluation of a representative sample of the software metrics available in the literature. This analysis and comparison was performed in an attempt to assist software developers, users and management in selecting suitable quality metric(s) for their specific software quality requirements, and to examine the various definitions used to calculate these metrics. In the second part of this study an approach towards attaining software quality is developed. This approach is intended to help all the people concerned with the evaluation of software quality in the earlier stages of software systems development. The approach is intended to be uniform, consistent, unambiguous and comprehensive, and to make the concept of software quality more meaningful and visible. It will help developers both to understand the concepts of software quality and to apply and control it according to the expectations of users, management, customers, etc. The clear definitions provided for the software quality terms should help to prevent misinterpretation, and they will also serve as a touchstone against which new ideas can be tested.
2

Inference graphs : a structural model and measures for evaluating knowledge-based systems

McNaughton, Ross January 1995 (has links)
No description available.
3

The use of Bayesian networks to determine software inspection process efficiency

Cockram, Trevor John January 2001 (has links)
Adherence to a defined process or standards is necessary to achieve satisfactory software quality. However, in order to judge whether practices are effective at achieving the required integrity of a software product, a measurement-based approach to the correctness of the software development is required. A defined and measurable process is a requirement for producing safe software productively. In this study the contribution of quality assurance to the software development process, and in particular the contribution that software inspections make to producing satisfactory software products, is addressed. I have defined a new model of software inspection effectiveness, which uses a Bayesian Belief Network to combine both subjective and objective data to evaluate the probability of an effective software inspection. Its performance shows an improvement over the existing published models of inspection effectiveness. These previous models made questionable assumptions about the distribution of errors and were essentially static; they could not make use of experience, both in terms of process improvement and the increased experience of the inspectors. A sensitivity analysis of my model showed that it is consistent with the attributes which were thought important by Michael Fagan in his research into the software inspection method. The performance of my model shows that it is an improvement over published models and over a multiple logistic regression model which was formed using the same calibration data. By applying my model of software inspection effectiveness before the inspection takes place, project managers will be able to make better use of the inspection resources available. Applying the model using data collected during the inspection will help in the estimation of residual errors in a product. Decisions can then be made about whether further investigation is required to identify those errors. The modelling process has been used successfully in an industrial application.
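The abstract does not reproduce the calibrated network itself, but the following minimal sketch illustrates the general idea of combining subjective and objective evidence in a small Bayesian network to obtain a probability that an inspection will be effective. The node names, structure and probability tables are hypothetical illustrations, not the model described in the thesis.

```python
# Minimal hand-rolled Bayesian-network sketch: P(effective inspection | evidence).
# All node names and probability values are invented for illustration only.

# Parent nodes: one subjective belief, one objective measurement.
P_PREPARATION_ADEQUATE = 0.7   # subjective: reviewers prepared adequately
P_RATE_WITHIN_LIMIT = 0.8      # objective: inspection rate within recommended limit

# Conditional probability table: P(effective | preparation, rate_ok).
P_EFFECTIVE = {
    (True, True): 0.90,
    (True, False): 0.60,
    (False, True): 0.45,
    (False, False): 0.15,
}

def probability_effective(rate_ok=None, prep=None):
    """Marginalize over any unobserved parents to get P(effective | evidence)."""
    total, norm = 0.0, 0.0
    for prep_state in (True, False):
        if prep is not None and prep_state != prep:
            continue
        p_prep = P_PREPARATION_ADEQUATE if prep_state else 1 - P_PREPARATION_ADEQUATE
        for rate_state in (True, False):
            if rate_ok is not None and rate_state != rate_ok:
                continue
            p_rate = P_RATE_WITHIN_LIMIT if rate_state else 1 - P_RATE_WITHIN_LIMIT
            weight = p_prep * p_rate
            total += weight * P_EFFECTIVE[(prep_state, rate_state)]
            norm += weight
    return total / norm

# Before the inspection only the planned rate is known; during it, more evidence arrives.
print(probability_effective(rate_ok=True))              # prior to inspection
print(probability_effective(rate_ok=True, prep=False))  # evidence of poor preparation
```

Used this way, such a network supports the two decision points mentioned above: estimating effectiveness before the inspection, and revising the estimate as data are collected during it.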
4

Impact of automated validation on software model quality

Tufvesson, Hampus January 2013 (has links)
Model driven development is gaining momentum, and thus larger and more complex systems are being represented and developed with the help of modeling. Complex systems often suffer from a number of problems such as difficulties in keeping the model understandable, long compilation times and high coupling. With modeling comes the possibility to validate the models against constraints, which makes it possible to handle problems that traditional static analysis tools can't solve. This thesis studies to what degree automatic model validation can be a useful tool in addressing some of the problems that appear in the development of complex systems. This is done by compiling a list of validation constraints based on existing problems, implementing and applying fixes for these, and measuring how a number of different aspects of the model are affected. After applying the fixes and measuring the impact on the models, it could be seen that validation of dependencies can have a significant impact on the models by reducing build times of the generated code. Other types of validation constraints require further study to decide what impact they might have on model quality.
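As a concrete, if simplified, illustration of the kind of automatic validation evaluated here, the sketch below checks a single hypothetical constraint over a model's dependency structure (no element in a lower layer may depend on an element in a higher layer). The model representation and the rule are assumptions for illustration; they are not the constraint catalogue compiled in the thesis.

```python
# Sketch of one automated validation constraint over a model's dependencies.
# The layering rule and the example model are illustrative assumptions.

# Hypothetical model: element -> layer number, plus dependency edges (source, target).
LAYER = {"GUI": 3, "AppLogic": 2, "Persistence": 1}
DEPENDENCIES = [("GUI", "AppLogic"), ("AppLogic", "Persistence"), ("Persistence", "GUI")]

def validate_layering(dependencies, layer):
    """Return every dependency that points from a lower layer back up to a higher one."""
    return [(src, dst) for src, dst in dependencies if layer[src] < layer[dst]]

print(validate_layering(DEPENDENCIES, LAYER))  # [('Persistence', 'GUI')]
```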
5

Effective Static Debugging via Componential Set-Based Analysis

January 1997 (has links)
Sophisticated software systems are inherently complex. Understanding, debugging and maintaining such systems requires inferring high-level characteristics of the system's behavior from a myriad of low-level details. For large systems, this quickly becomes an extremely difficult task. MrSpidey is a static debugger that augments the programmer's ability to deal with such complex systems. It statically analyzes the program and uses the results of the analysis to identify and highlight any program operation that may cause a run-time fault. The programmer can then investigate each potential fault site and, using the graphical explanation facilities of MrSpidey, determine whether the fault will really happen or whether the corresponding correctness proof is beyond the analysis's capabilities. In practice, MrSpidey has proven to be an effective tool for debugging programs under development and understanding existing programs. The key technology underlying MrSpidey is componential set-based analysis. This is a constraint-based, whole-program analysis for object-oriented and functional programs. The analysis first processes each program component (e.g. module or package) independently, generating and simplifying a constraint system describing the data flow behavior of that component. The analysis then combines and solves these simplified constraint systems to yield invariants characterizing the run-time behavior of the entire program. This component-wise approach yields an analysis that handles significantly larger programs than previous analyses of comparable accuracy. The simplification of constraint systems raises a number of questions. In particular, we need to ensure that simplification preserves the observable behavior, or solution space, of a constraint system. This dissertation provides a complete proof-theoretic and algorithmic characterization of the observable behavior of constraint systems, and establishes a close connection between the observable equivalence of constraint systems and the equivalence of regular tree grammars. We exploit this connection to develop a complete algorithm for deciding the observable equivalence of constraint systems, and to adapt a variety of algorithms for simplifying regular tree grammars to the problem of simplifying constraint systems. The resulting constraint simplification algorithms yield an order of magnitude reduction in the size of constraint systems for typical program expressions.
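To make the notion of set constraints concrete, the following toy sketch seeds abstract values at a few program points, records flow constraints between them, and computes the least solution by fixpoint iteration. It is a drastic simplification offered only as an illustration; it is not MrSpidey's constraint language, and it omits the component-wise simplification that is the dissertation's main contribution.

```python
# Toy set-based analysis: seed abstract values at some program points, record
# flow constraints ("values of src flow into dst"), and compute the least solution.
# The program points and values are invented for illustration.

SEED_VALUES = {"x": {"int"}, "z": {"str"}}          # e.g. x = 1; z = "hi"
FLOW_EDGES = [("x", "y"), ("y", "w"), ("z", "w")]   # e.g. y = x; w = y; w = z

def solve(seeds, edges):
    """Smallest value sets satisfying every subset constraint src <= dst."""
    solution = {var: set(vals) for var, vals in seeds.items()}
    changed = True
    while changed:
        changed = False
        for src, dst in edges:
            src_vals = solution.get(src, set())
            dst_vals = solution.setdefault(dst, set())
            if not src_vals <= dst_vals:
                dst_vals |= src_vals
                changed = True
    return solution

print(solve(SEED_VALUES, FLOW_EDGES))
# {'x': {'int'}, 'z': {'str'}, 'y': {'int'}, 'w': {'int', 'str'}}
```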
6

Design metrics forensics : an analysis of the primitive metrics in the Zage design metrics

Kwan, Pak Leung January 1994 (has links)
The Software Engineering Research Center (SERC) Design Metrics Research Team at Ball State University has developed a design metric D(G) of the form D(G) = De + Di, where De is the architectural design metric (external design metric) and Di is the detailed design metric (internal design metric). Questions to be investigated in this thesis are: Why can De be an indicator of the potential error modules? Why can Di be an indicator of the potential error modules? Are there any significant factors that dominate the design metrics? In this thesis, the report of the STANFINS data is evaluated by using correlation analysis, regression analysis, and several other statistical techniques. The STANFINS study is chosen because it contains approximately 532 programs, 3,000 packages and 2,500,000 lines of Ada. The design metrics study was completed on 21 programs (approximately 24,000 lines of code) which were selected by CSC development teams. Error reports were also provided by CSC personnel. / Department of Computer Science
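A minimal sketch of how a two-part metric of this shape could be computed is given below. The primitive counts chosen and the weights are illustrative assumptions only; the calibrated weights and exact primitive definitions used by the SERC team are not reproduced here.

```python
# Sketch of a two-part design metric D(G) = De + Di.
# Weights (e1, e2, i1, i2, i3) and the example module's counts are invented.

def external_design_metric(inflows, outflows, fan_in, fan_out, e1=1.0, e2=1.0):
    """De: architectural (external) metric built from data flow and module coupling."""
    return e1 * (inflows * outflows) + e2 * (fan_in * fan_out)

def internal_design_metric(central_calls, ds_manipulations, io_operations,
                           i1=1.0, i2=1.0, i3=1.0):
    """Di: detailed (internal) metric built from primitive counts inside a module."""
    return i1 * central_calls + i2 * ds_manipulations + i3 * io_operations

def design_metric(module):
    """D(G) = De + Di for one module described by its primitive counts."""
    de = external_design_metric(module["inflows"], module["outflows"],
                                module["fan_in"], module["fan_out"])
    di = internal_design_metric(module["central_calls"],
                                module["ds_manipulations"],
                                module["io_operations"])
    return de + di

# Hypothetical module; an unusually high score would flag it as a potential error module.
print(design_metric({"inflows": 4, "outflows": 3, "fan_in": 5, "fan_out": 2,
                     "central_calls": 7, "ds_manipulations": 10, "io_operations": 3}))
```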
7

Toward Improved Understanding and Management of Software Clones

Wang, Wei 18 April 2012 (has links)
The cloning of code is controversial as a development practice. Empirical studies on the long-term effects of cloning on software quality and maintainability have produced mixed results. Some studies have found that cloning has a negative impact on code readability and bug propagation, and that the presence of cloning may indicate wider problems in software design and management. At the same time, other studies have found that cloned code is less likely to have defects, and thus is arguably more stable, better designed, and better maintained. These results suggest that the effect of cloning on software quality and maintainability may be determinable only on a case-by-case basis, and this only aggravates the challenge of establishing a principled framework of clone management and understanding. This thesis aims to improve the understanding and management of clones within software systems. There are two main contributions. First, we have conducted an empirical study on cloning in one of the major device driver families of the Linux kernel. Unlike many previous empirical studies on cloning, we incorporate knowledge about the development style and the architecture of the subject system into our study; our findings address the evolution of clones; we have also found that the presence of cloning is a strong predictor (87% accuracy) of one aspect of underlying hardware similarity when compared to a vendor-based model (55% accuracy) and a randomly chosen model (9% accuracy). The effectiveness of using the presence of cloning to infer high-level similarity suggests a new perspective of using cloning information to assist program comprehension, aspect mining, and software product-line engineering. Second, we have devised a triage-oriented taxonomy of clones to aid developers in prioritizing which kinds of clones are most likely to be problematic and require attention; a preliminary validation of the utility of this taxonomy has been performed against a large open source system. The cloning-based software quality assurance (QA) framework based on our taxonomy adds a new dimension to traditional software QA processes; by exploiting clone detection results within a guided framework, the developer is able to evaluate which instances of cloning are most likely to require urgent attention.
8

Test case prioritization

Chu, Chengyun, 1974- 01 June 1999 (has links)
Prioritization techniques are used to schedule test cases to execute in a specific order to maximize some objective function. There are a variety of possible objective functions, such as a function that measures how quickly faults can be detected within the testing process, or a function that measures how fast coverage of the program can be increased. In this paper, we describe several test case prioritization techniques, and empirical studies performed to investigate their relative abilities to improve how quickly faults can be detected by test suites. An improved rate of fault detection during regression testing can provide faster feedback about a system under regression test and let debuggers begin their work earlier than might otherwise be possible. The results of our studies indicate that test case prioritization techniques can substantially improve the rate of fault detection of test suites. The results also provide insights into the tradeoffs among various prioritization techniques. / Graduation date: 2000
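The rate-of-fault-detection objective mentioned above is commonly quantified in this line of work with the APFD (Average Percentage of Faults Detected) metric: APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where n is the number of tests, m the number of faults, and TF_i the position of the first test that exposes fault i. A small sketch follows; the test-to-fault matrix is invented, and it assumes every fault is exposed by at least one test.

```python
# Sketch: APFD for a prioritized test order. The fault matrix is made up, and
# every fault is assumed to be detected by at least one test in the suite.

def apfd(test_order, faults_detected_by):
    """APFD = 1 - (sum of first-detection positions) / (n * m) + 1 / (2n)."""
    all_faults = set().union(*faults_detected_by.values())
    n, m = len(test_order), len(all_faults)
    first_position = {}
    for position, test in enumerate(test_order, start=1):
        for fault in faults_detected_by.get(test, set()):
            first_position.setdefault(fault, position)
    return 1 - sum(first_position[f] for f in all_faults) / (n * m) + 1 / (2 * n)

FAULT_MATRIX = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": set(), "t4": {"f1", "f4"}}
print(apfd(["t2", "t4", "t1", "t3"], FAULT_MATRIX))  # 0.75  (good ordering)
print(apfd(["t3", "t1", "t2", "t4"], FAULT_MATRIX))  # 0.375 (poor ordering)
```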
9

An experimental study of cost cognizant test case prioritization

Goel, Amit 02 December 2002 (has links)
Test case prioritization techniques schedule test cases for regression testing in an order that increases their ability to meet some performance goal. One performance goal, rate of fault detection, measures how quickly faults are detected within the testing process. The APFD metric has been proposed for measuring the rate of fault detection. This metric applies, however, only in cases in which test costs and fault costs are uniform. In practice, fault costs and test costs are not uniform. For example, some faults which lead to system failures might be more costly than faults which lead to minor errors. Similarly, a test case that runs for several hours is much more costly than a test case that runs for a few seconds. Previous work has thus provided a second metric, APFD_c, for measuring rate of fault detection, that incorporates test costs and fault costs. However, studies of this metric thus far have been limited to abstract distribution models of costs. These distribution models did not represent actual fault costs and test costs for software systems. In this thesis, we describe some practical ways to estimate real fault costs and test costs for software systems, based on operational profiles and test execution timings. Further, we define some new cost-cognizant prioritization techniques which focus on the APFD_c metric. We report results of an empirical study investigating the rate of "units-of-fault-cost-detected-per-unit-test-cost" across various cost-cognizant prioritization techniques and tradeoffs between techniques. The results of our empirical study indicate that cost-cognizant test case prioritization techniques can substantially improve the rate of fault detection of test suites. The results also provide insights into the tradeoffs among various prioritization techniques. For example: (1) techniques incorporating feedback information (information from previous tests) outperformed those without any feedback information; (2) technique effectiveness differed most when faults are relatively difficult to detect; (3) in most cases, technique performance was similar at function and statement level; (4) surprisingly, techniques considering change location did not perform as well as expected. The study also reveals several practical issues that might arise in applying test case prioritization, as well as opportunities for future work. / Graduation date: 2003
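For completeness, a sketch of the cost-cognizant variant follows. It uses the weighted form APFD_c = sum_i f_i * (sum_{j=TF_i..n} t_j - t_{TF_i}/2) / (sum_j t_j * sum_i f_i), where t_j is the cost of test j and f_i the cost (severity) of fault i, as defined in the cost-cognizant prioritization literature. The costs and fault matrix below are invented, and the estimation of real costs from operational profiles and execution timings described in the thesis is not modeled here.

```python
# Sketch: cost-cognizant APFD_c, weighting detection by fault cost and test cost.
# Test costs, fault costs, and the fault matrix are invented for illustration.

def apfd_c(test_order, test_cost, fault_cost, faults_detected_by):
    n = len(test_order)
    # TF_i: 1-based position of the first test exposing fault i.
    first_position = {}
    for position, test in enumerate(test_order, start=1):
        for fault in faults_detected_by.get(test, set()):
            first_position.setdefault(fault, position)
    total_test_cost = sum(test_cost[t] for t in test_order)
    total_fault_cost = sum(fault_cost.values())
    numerator = 0.0
    for fault, pos in first_position.items():
        remaining = sum(test_cost[test_order[j]] for j in range(pos - 1, n))
        numerator += fault_cost[fault] * (remaining - test_cost[test_order[pos - 1]] / 2)
    return numerator / (total_test_cost * total_fault_cost)

TEST_COST = {"t1": 2.0, "t2": 5.0, "t3": 1.0}     # e.g. execution time in minutes
FAULT_COST = {"f1": 3.0, "f2": 1.0}               # e.g. relative severity
FAULT_MATRIX = {"t1": {"f1"}, "t2": {"f2"}, "t3": set()}
print(apfd_c(["t1", "t2", "t3"], TEST_COST, FAULT_COST, FAULT_MATRIX))  # ~0.766
```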
