1

Benchmarking tests on recovery oriented computing

Raman, Nandita 09 July 2012 (has links)
Benchmarks have played a very important role in guiding the progress of computer systems in various ways, and in autonomous environments they have a particularly significant role to play. System crashes and software failures are a basic part of a software system's life-cycle, and the main purpose of recovery oriented computing is to overcome them, or rather to make systems as little vulnerable to them as possible. This is usually done by reducing downtime: automatically and efficiently recovering from a broad class of transient software failures without having to modify applications. Various types of benchmarks exist for recovery from failure, but in this paper we create a benchmark framework, called the warning benchmarks, to measure and evaluate recovery oriented systems. It consists of known and unknown failures and a few benchmark techniques, which the warning benchmarks handle with the help of various other techniques from software fault analysis.
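As an illustration of what such a benchmark measures, here is a minimal sketch in Python: it injects a simulated transient failure and times a hypothetical recovery routine. All names and the failure model are illustrative assumptions, not taken from the thesis.

```python
import random
import time

def inject_transient_failure(state):
    """Simulate a transient software failure by corrupting in-memory state."""
    state["healthy"] = False

def recover(state):
    """Hypothetical recovery routine; a real system would restart components."""
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for actual recovery work
    state["healthy"] = True

def benchmark_recovery(trials=100):
    """Measure mean downtime across repeated injected failures."""
    downtimes = []
    for _ in range(trials):
        state = {"healthy": True}
        inject_transient_failure(state)
        start = time.perf_counter()
        recover(state)
        downtimes.append(time.perf_counter() - start)
    return sum(downtimes) / len(downtimes)

if __name__ == "__main__":
    print(f"mean downtime over injected failures: {benchmark_recovery():.4f} s")
```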
2

System architecture metrics : an evaluation

Shepperd, Martin John January 1991 (has links)
The research described in this dissertation is a study of the application of measurement, or metrics, to software engineering. This is not in itself a new idea; the concept of measuring software was first mooted close on twenty years ago. However, examination of what is a considerable body of metrics work reveals that incorporating measurement into software engineering is rather less straightforward than one might presuppose, and despite the advancing years there is still a lack of maturity. The thesis commences with a dissection of three of the most popular metrics, namely Halstead's software science, McCabe's cyclomatic complexity and Henry and Kafura's information flow - all of which might be regarded as having achieved classic status. Despite their popularity these metrics are all flawed in at least three respects. First and foremost, in each case it is unclear exactly what is being measured; instead there is a preponderance of such metaphysical terms as complexity and quality. Second, each metric is theoretically doubtful in that it exhibits anomalous behaviour. Third, much of the claimed empirical support for each metric is spurious, arising from poor experimental design and inappropriate statistical analysis. It is argued that these problems are not misfortune but the inevitable consequence of the ad hoc and unstructured approach of much metrics research: in particular the scant regard paid to the role of underlying models. This research seeks to address these problems by proposing a systematic method for the development and evaluation of software metrics. The method is a goal-directed combination of formal modelling techniques and empirical evaluation. The method is applied to the problem of developing metrics to evaluate software designs - from the perspective of a software engineer wishing to minimise implementation difficulties, faults and future maintenance problems. It highlights a number of weaknesses within the original model. These are tackled in a second, more sophisticated model which is multidimensional, that is, it combines, in this case, two metrics. Both the theoretical and empirical analysis show this model to have utility in its ability to identify hard-to-implement and unreliable aspects of software designs. It is concluded that this method goes some way towards addressing the problem of introducing a little more rigour into the development, evaluation and evolution of metrics for the software engineer.
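For reference, the three classic metrics dissected above have standard textbook formulations; the following is a summary from the general literature, not quoted from the thesis:

```latex
% McCabe's cyclomatic complexity of a control-flow graph with
% E edges, N nodes and P connected components:
V(G) = E - N + 2P

% Halstead's software science: n_1 distinct operators, n_2 distinct
% operands, N_1 and N_2 total occurrences of each; program volume V:
n = n_1 + n_2, \qquad N = N_1 + N_2, \qquad V = N \log_2 n

% Henry and Kafura's information flow for a procedure of length l:
C = l \times (\text{fan-in} \times \text{fan-out})^2
```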
3

An automated approach to the measurement and evaluation of software quality during development

Dixon, Mark Brian January 1997 (has links)
No description available.
4

Defining a Software Analysis Framework

Dogan, Oguzhan January 2008 (has links)
Nowadays, assessing software quality and making reliable predictions about software is not possible. Software metrics are useful tools for assessing software quality and for making predictions, but currently the interpretation of the measured values is based on personal experience. In order to be able to assess software quality, quantitative data has to be obtained. VizzAnalyzer is a program for analyzing open-source Java projects. It can be used to collect quantitative data for defining thresholds that support the interpretation of the measurement values, and it helps to assess software quality by calculating over 20 different software metrics. I define a process for obtaining, storing and maintaining software projects, and I have used this process to analyze 60-80 software projects, delivering a large database of quantitative data.
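A minimal sketch of how such a corpus of quantitative data can support threshold definition, assuming per-class metric values have already been extracted; the quantile choices and the sample data are illustrative assumptions, not taken from the thesis:

```python
import statistics

def derive_thresholds(values, moderate_q=70, high_q=90):
    """Derive interpretation bands for one metric from corpus data.

    Values observed across many projects are split at the given
    percentiles into 'typical', 'moderate' and 'high' bands.
    """
    cuts = statistics.quantiles(values, n=100)  # 99 percentile cut points
    return {"moderate": cuts[moderate_q - 1], "high": cuts[high_q - 1]}

# e.g. hypothetical lines-of-code per class collected from analyzed projects
loc_per_class = [12, 40, 33, 210, 75, 58, 19, 388, 64, 27, 91, 150]
print(derive_thresholds(loc_per_class))
```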
5

Usability and productivity for silicon debug software: a case study

Singh, Punit 24 February 2012 (has links)
Semiconductor manufacturing is complex. Companies strive to lead their markets by delivering timely chips which are bug (a.k.a. defect) free and have very low power consumption, and new research drives new features in chips. The case study research reported here concerns the usability and productivity of silicon debug software tools: the set of software used to find bugs before delivering chips to the customer. The study's objective is to improve the usability and productivity of the tools by introducing metrics, with the results of the measurements driving a concrete plan of action. The GQM (Goal, Questions, Metrics) methodology was used to define the measurements and gather data for them. The project was developed in two parts or phases, and we took the measurements over the two phases of the tool development. The findings from phase one improved the tool usability in the second phase. The lesson learnt is that tool usability is a complex measurement: improving usability means that users need the tool's help button less, experience less downtime, and do not input incorrect data. Even though this study focused on three important tools, the same usability metrics can be applied to the remaining five tools. For defining productivity metrics, we also used the GQM methodology. A productivity measurement using historic data was done to establish a baseline, and the baseline measurements identified some existing bottlenecks in the overall silicon debug process. We link productivity to the time it takes for a debug tool user to complete the assigned task(s). The total time taken for using all the tools does not give us any actionable items for improving productivity; we would need to measure the time taken by each tool in the debug process, which is identified as future work. To improve usability we recommend making tools more robust in error handling and giving them good help features. To improve productivity we recommend gathering data on where the user spends most of the debug time, so that we can focus on improving that time-consuming part of debug and make users more productive.
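As an illustration of the GQM decomposition used in the study, a sketch of how the usability goal might break down into questions and metrics; the specific items are paraphrased from this abstract, not quoted from the study:

```python
# Goal-Question-Metric decomposition for debug-tool usability,
# following the goal -> question -> metric hierarchy of GQM.
gqm_usability = {
    "goal": "Improve usability of silicon debug tools",
    "questions": {
        "How often do users need help?": ["help-button clicks per session"],
        "How much downtime do users experience?": ["tool downtime per week"],
        "How often is incorrect data entered?": ["invalid inputs per task"],
    },
}

for question, metrics in gqm_usability["questions"].items():
    print(f"{question} -> measured by {metrics}")
```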
6

Software metrics for social capital in social media

Carmichael, Dawn January 2015 (has links)
The aim of this research was to create metrics for measuring social connectedness in social media. This thesis made use of social capital theory to inform the construction of original metrics. The methodology involved conducting a literature review into the use of social capital theory in social media, proposing new metrics, implementing them in software, validating them, evaluating them against other measures, and finally demonstrating the utility of the new metrics. A preliminary case study verified the suitability of using Facebook as a context for developing the metrics. The main practical work outlined in this thesis aimed to validate the Social Capital in Social Media (SCiSM) metrics against the Internet Social Capital Scale (ISCS) (Williams 2006). The SCiSM metrics were developed to relate to bonding social capital, bridging social capital and total social capital (Putnam 2000). The validation methodology followed Meneely (2012) and involved using two independent data sets to validate the SCiSM metrics using both correlations and linear regression. Statistical analysis found a strong positive correlation between ISCS and SCiSM, whilst regression analysis demonstrated that the relationship between SCiSM and ISCS was concerned with ranking rather than an absolute number. SCiSM was evaluated against other social capital metrics used in the literature, such as degree centrality; it was found that SCiSM had a higher number of significant correlations with the ISCS than the other measures. The SCiSM metrics were then used to analyse the two independent data sets in order to demonstrate their utility. The first data set, taken from a Facebook group, was analysed using a paired t-test; it was found that bonding social capital increased over a twelve-week period but that bridging social capital did not. The second data set, taken from Facebook status updates, was analysed using correlations. There was a positive correlation between number of Facebook friends and bonding social capital, but a negative correlation between number of Facebook friends and bridging social capital, suggesting a dilution effect in the usefulness of large friend networks for bridging social capital. In conclusion, this research addresses the problem of providing a means to improve understanding of social capital in social media.
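A minimal sketch of the validation step described above - correlating a computed metric against survey scores and fitting a linear regression - with illustrative made-up numbers standing in for the study's data:

```python
from scipy import stats

# Hypothetical paired observations: SCiSM bonding score per participant
# and the corresponding ISCS survey score.
scism = [3.1, 4.5, 2.2, 5.0, 3.8, 4.1, 2.9, 4.7]
iscs = [14.0, 19.5, 11.0, 21.0, 16.5, 17.0, 13.5, 20.0]

r, p_value = stats.pearsonr(scism, iscs)
fit = stats.linregress(scism, iscs)

print(f"Pearson r = {r:.2f} (p = {p_value:.4f})")
print(f"ISCS ~ {fit.slope:.2f} * SCiSM + {fit.intercept:.2f}, "
      f"R^2 = {fit.rvalue**2:.2f}")
```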
7

Defining and Implementing a Measurement-Based Software Maintenance Process

Henry, Joel, Blasewitz, Robert, Kettinger, David 01 January 1996 (has links)
This paper describes the measurement-based software maintenance process defined and implemented at Lockheed-Martin, Moorestown, NJ. The documented process includes extensive data collection, a tightly controlled but highly accessible database, data analysis techniques supported by software tools, and process assessment and improvement activities. The methods and techniques used are presented in a 'how to' fashion so that other organizations can leverage our efforts to define and implement a measurement-based process of their own. Our approach is an evolutionary one, rather than a revolutionary organizational upheaval. We describe the benefits gained from our process, including statistically validated metric results, and the subsequent process improvements implemented. This paper describes solutions to the 'real-world' issues faced by an organization which successfully implemented a measurement-based software maintenance process.
8

A methodology, based on a language's properties, for the selection and validation of a suite of software metrics

Bodnar, Roger P. Jr. 02 September 1997 (has links)
Software engineering has attempted to improve the software development process for over two decades. A primary strand of this effort lies in the arena of measurement: "You can't control what you can't measure" [DEMT82]. This thesis attempts to measure the development of multimedia products. Multimedia languages seem to be the direction of future languages; problem areas such as Education, Instruction, Training, and Information Systems require various media to achieve their goals. The first step in this measurement process is the placement of multimedia languages, namely Authorware, in the existing taxonomy of language paradigms. The next step involves the measurement of various distinguishing properties of the language. Finally, the measurement process itself is selected and evaluated. This evaluation gives insight into the next step towards the goal of controlling, through measurement, the multimedia software development process.
9

A Deep Learning approach to predict software bugs using micro patterns and software metrics

Brumfield, Marcus 07 August 2020 (has links)
Software bug prediction is one of the most active research areas in the software engineering community. The process of testing and debugging code proves costly during the software development life cycle. Software metrics measure the quality of source code in order to identify software bugs and vulnerabilities, and traceable code patterns can describe code at a finer level of granularity for measuring quality. Micro patterns are used in this research to mechanically describe Java code at the class level. Machine learning has also been introduced for bug prediction, to localize source code for testing and debugging; deep learning is a relatively new branch of machine learning. This research looks to improve the prediction of software bugs by utilizing micro patterns with deep learning techniques. Software bug prediction at a finer granularity level will enable developers to localize code to test and debug during the development process.
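A hedged sketch of the general approach: encode each class as a binary micro-pattern vector plus traditional metrics, then train a small neural classifier. The feature layout, the scikit-learn model, and the synthetic data are illustrative assumptions; the thesis does not prescribe this implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical dataset: each row is one Java class.
# First 27 columns: binary micro-pattern presence flags;
# last 3 columns: traditional software metrics (e.g. LOC, WMC, CBO).
X = np.hstack([rng.integers(0, 2, size=(200, 27)),
               rng.normal(size=(200, 3))])
y = rng.integers(0, 2, size=200)  # 1 = class contained a bug

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                      random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```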
10

A POT of Software Metrics: A Physiological Overturn of Technology of Software Metrics

Hingane, Amruta Laxman 20 November 2008 (has links)
No description available.
