
Benchmarking tests on recovery oriented computing

Raman, Nandita 09 July 2012 (has links)
Benchmarks have played an important role in guiding the progress of computer systems in various ways, and in autonomous environments they have a major role to play. System crashes and software failures are a routine part of a software system's life cycle, and the main purpose of recovery oriented computing is to make systems less vulnerable to them. This is usually done by reducing downtime: automatically and efficiently recovering from a broad class of transient software failures without having to modify applications. Various benchmarks exist for recovery from failure, but in this paper we create a benchmark framework, called the warning benchmarks, to measure and evaluate recovery oriented systems. It covers known and unknown failures and a set of benchmark techniques which the warning benchmarks handle with the help of other techniques from software fault analysis.

System architecture metrics : an evaluation

Shepperd, Martin John January 1991 (has links)
The research described in this dissertation is a study of the application of measurement, or metrics, to software engineering. This is not in itself a new idea; the concept of measuring software was first mooted close on twenty years ago. However, examination of what is a considerable body of metrics work reveals that incorporating measurement into software engineering is rather less straightforward than one might presuppose, and despite the advancing years there is still a lack of maturity. The thesis commences with a dissection of three of the most popular metrics, namely Halstead's software science, McCabe's cyclomatic complexity and Henry and Kafura's information flow, all of which might be regarded as having achieved classic status. Despite their popularity these metrics are all flawed in at least three respects. First and foremost, in each case it is unclear exactly what is being measured; instead there is a preponderance of such metaphysical terms as complexity and quality. Second, each metric is theoretically doubtful in that it exhibits anomalous behaviour. Third, much of the claimed empirical support for each metric is spurious, arising from poor experimental design and inappropriate statistical analysis. It is argued that these problems are not misfortune but the inevitable consequence of the ad hoc and unstructured approach of much metrics research, in particular the scant regard paid to the role of underlying models. This research seeks to address these problems by proposing a systematic method for the development and evaluation of software metrics. The method is a goal-directed combination of formal modelling techniques and empirical evaluation. The method is applied to the problem of developing metrics to evaluate software designs, from the perspective of a software engineer wishing to minimise implementation difficulties, faults and future maintenance problems. It highlights a number of weaknesses within the original model.
These are tackled in a second, more sophisticated model which is multidimensional, that is, it combines, in this case, two metrics. Both the theoretical and empirical analysis show this model to have utility in its ability to identify hard-to-implement and unreliable aspects of software designs. It is concluded that this method goes some way towards introducing a little more rigour into the development, evaluation and evolution of metrics for the software engineer.
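Of the three classic metrics dissected above, McCabe's cyclomatic complexity is simple enough to sketch. The following is a rough illustration, not the thesis's tooling: it counts decision points in Python source via the `ast` module, and as a simplification counts each boolean-operator node once rather than once per extra operand.

```python
import ast


def cyclomatic_complexity(source: str) -> int:
    """McCabe's metric, roughly: number of decision points + 1."""
    tree = ast.parse(source)
    decisions = sum(
        isinstance(node, (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler))
        for node in ast.walk(tree)
    )
    return decisions + 1


code = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
# Two `if` decisions yield three independent paths.
print(cyclomatic_complexity(code))  # → 3
```

Even this toy version exhibits the thesis's first criticism: the number says nothing about *what* is hard about the code, only how many branches it has.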

An automated approach to the measurement and evaluation of software quality during development

Dixon, Mark Brian January 1997 (has links)
No description available.

Usability and productivity for silicon debug software: a case study

Singh, Punit 24 February 2012 (has links)
Semiconductor manufacturing is complex. Companies strive to lead their markets by delivering chips on time that are free of bugs (defects) and have very low power consumption, while new research drives new features into chips. The case study reported here concerns the usability and productivity of silicon debug software tools: the set of software used to find bugs before chips are delivered to the customer. The objective of the study is to improve the usability and productivity of these tools by introducing metrics, with the measurement results driving a concrete plan of action. The GQM (Goal, Question, Metric) methodology was used to define the measurements and gather the data. The tool development proceeded in two phases, and measurements were taken in both; the findings from phase one improved tool usability in the second phase. The lesson learnt is that tool usability is a complex measurement: improving usability means the user needs the tool's help button less, experiences less downtime, and does not input incorrect data. Although this study focused on three important tools, the same usability metrics can be applied to the remaining five. The GQM methodology was also used to define productivity metrics, and a productivity measurement using historic data established a baseline, which identified existing bottlenecks in the overall silicon debug process. We link productivity to the time it takes a debug tool user to complete the assigned task(s). The total time taken across all the tools does not yield actionable items for improving productivity; measuring the time spent in each tool in the debug process would, and this is identified as future work.
To improve usability we recommend making tools more robust in their error handling and giving them good help features. To improve productivity we recommend gathering data on where users spend most of their debug time, so effort can focus on improving the most time-consuming part of debug and make users more productive.
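A GQM breakdown like the one described can be sketched as a small data structure: a goal refined into questions, each answered by metrics. The goal, questions, and metric names below are invented stand-ins mirroring the abstract's help-button, downtime, and incorrect-input concerns, not the study's actual instruments.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Metric:
    name: str


@dataclass
class Question:
    text: str
    metrics: List[Metric] = field(default_factory=list)


@dataclass
class Goal:
    purpose: str
    questions: List[Question] = field(default_factory=list)


# Hypothetical GQM tree for the usability goal.
usability = Goal(
    purpose="Improve usability of the silicon debug tools",
    questions=[
        Question("How often do users need the help button?",
                 [Metric("help_clicks_per_session")]),
        Question("How much downtime do users experience?",
                 [Metric("downtime_minutes_per_week")]),
        Question("How often is incorrect data entered?",
                 [Metric("invalid_inputs_per_session")]),
    ],
)
```

The value of the structure is traceability: every number collected answers a named question, which in turn serves the stated goal.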

Defining a Software Analysis Framework

Dogan, Oguzhan January 2008 (has links)
Assessing software quality and making predictions about software are currently difficult. Software metrics are useful tools for both, but at present the interpretation of the measured values rests on personal experience. To assess software quality objectively, quantitative data has to be obtained. VizzAnalyzer is a program for analyzing open-source Java projects. It helps to assess software quality by calculating over 20 different software metrics, and it can be used to collect quantitative data for defining thresholds that support the interpretation of the measurement values. I define a process for obtaining, storing and maintaining software projects, and I have used it to analyze 60-80 software projects, delivering a large database of quantitative data.
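One common way such quantitative data supports threshold definition is to take a high percentile of a metric's measured values across many projects as the "unusually large" cut-off, replacing personal-experience guesses with a data-derived boundary. The sketch below is an assumed approach with made-up numbers, not VizzAnalyzer's actual method.

```python
# Hypothetical measured values of one metric (say, methods per class)
# collected across analyzed projects.
measurements = [4, 6, 7, 7, 8, 9, 10, 11, 13, 25]


def percentile_threshold(values, pct):
    """Nearest-rank percentile: the value below which ~pct% of data falls."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(pct / 100 * (len(ordered) - 1))))
    return ordered[k]


# Values above the 90th percentile get flagged as outliers worth review.
threshold = percentile_threshold(measurements, 90)
print(threshold)  # → 13
```

A threshold derived this way is relative to the corpus analyzed, which is exactly why a large database of projects matters.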

Software metrics for social capital in social media

Carmichael, Dawn January 2015 (has links)
The aim of this research was to create metrics for measuring social connectedness in social media. The thesis made use of social capital theory to inform the construction of original metrics. The methodology involved a literature review of the use of social capital theory in social media, the proposal of new metrics, their implementation in software, validation, evaluation against other measures, and finally a demonstration of the new metrics' utility. A preliminary case study verified the suitability of Facebook as a context for developing the metrics. The main practical work aimed to validate the Social Capital in Social Media (SCiSM) metrics against the Internet Social Capital Scale (ISCS) (Williams 2006). The SCiSM metrics were developed to capture bonding social capital, bridging social capital and total social capital (Putnam 2000). The validation methodology followed Meneely (2012) and used two independent data sets to validate the SCiSM metrics with both correlations and linear regression. Statistical analysis found a strong positive correlation between ISCS and SCiSM, whilst regression analysis demonstrated that the relationship between SCiSM and ISCS concerned ranking rather than absolute numbers. SCiSM was evaluated against other social capital metrics used in the literature, such as degree centrality, and was found to have a higher number of significant correlations with the ISCS than the other measures. The SCiSM metrics were then used to analyse the two independent data sets in order to demonstrate their utility. The first data set, taken from a Facebook group, was analysed using a paired t-test: bonding social capital increased over a twelve-week period but bridging social capital did not. The second data set, taken from Facebook status updates, was analysed using correlations.
There was a positive correlation between number of Facebook friends and bonding social capital. However, there was a negative correlation between number of Facebook friends and bridging social capital, which suggests a dilution effect in the usefulness of large friend networks for bridging social capital. In conclusion, this research provides a means to improve understanding of social capital in social media.
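A validation like the one described, correlating a computed metric against survey scores, comes down to a Pearson correlation over paired per-user values. The scores below are invented for illustration; only the procedure, not the data, reflects the thesis.

```python
from math import sqrt

# Hypothetical paired observations: a SCiSM-style metric value per user
# versus that user's ISCS survey total.
scism = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7]
iscs = [14, 21, 12, 26, 19, 23]


def pearson(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


r = pearson(scism, iscs)
```

A high `r` says the two measures rank people similarly, which matches the regression finding above: agreement is about ranking, not about the raw numbers coinciding.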

Factors Affecting the Programming Performance of Computer Science Students

Raley, John B. 11 October 1996 (has links)
Two studies of factors affecting the programming performance of first- and second-year Computer Science students were conducted. In one study, students used GIL, a simple application framework, for their programming assignments in a second-semester programming course, and improvements in student performance were realized. In the other study, students submitted detailed logs of how time was spent on projects, along with their programs, and software metrics were computed on the students' source code. Correlations between student performance and the log data and software metric data were sought. No significant indicators of performance were found, even among factors that are commonly expected to indicate performance. However, results from previous research concerning variations in individual programmer performance and relationships between software metrics were reproduced. / Master of Science

A methodology, based on a language's properties, for the selection and validation of a suite of software metrics

Bodnar, Roger P. Jr. 02 September 1997 (has links)
Software engineering has attempted to improve the software development process for over two decades, and a primary thrust of this attempt lies in the arena of measurement: "You can't control what you can't measure" [DEMT82]. This thesis attempts to measure the development of multimedia products. Multimedia languages seem to be the trend in future languages; problem areas such as education, instruction, training, and information systems require various media to achieve their goals. The first step in this measurement process is the placement of multimedia languages, namely Authorware, in the existing taxonomy of language paradigms. The next step involves the measurement of various distinguishing properties of the language. Finally, the measurement process is selected and evaluated. This evaluation gives insight into the next step in establishing the goal of controlling, through measurement, the multimedia software development process. / Master of Science

A POT of Software Metrics: A Physiological Overturn of Technology of Software Metrics

Hingane, Amruta Laxman 20 November 2008 (has links)
No description available.


Beaver, Justin 11 January 2007 (has links)
Software practitioners lack a consistent approach to assessing and predicting quality within their products. This research proposes a software quality model that accounts for the influences of development team skill/experience, process maturity, and problem complexity throughout the software engineering life cycle. The model is structured using Bayesian Belief Networks and, unlike previous efforts, uses widely accepted software engineering standards and in-use industry techniques to quantify the indicators and measures of software quality. Data from 28 software engineering projects was acquired for this study and used for validation and comparison of the presented software quality models. Three Bayesian model structures are explored, and the structure with the highest performance in terms of accuracy of fit and predictive validity is reported. In addition, the Bayesian Belief Networks are compared to both Least Squares Regression and Neural Networks in order to identify the technique best suited to modeling software product quality. The results indicate that Bayesian Belief Networks outperform both Least Squares Regression and Neural Networks in terms of producing modeled software quality variables that fit the distribution of actual software quality values, and in accurately forecasting 25 different indicators of software quality. Among the Bayesian model structures, the simplest structure, which relates software quality variables to their correlated causal factors, was found to be the most effective in modeling software quality. In addition, the results reveal that the collective skill and experience of the development team, more than process maturity or problem complexity, has the most significant impact on the quality of software products. / Ph.D. / School of Electrical Engineering and Computer Science / Engineering and Computer Science / Computer Engineering
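The core inferential step in a Bayesian Belief Network, marginalising a quality variable over its causal factors, can be sketched in a few lines. The network below is a toy with hypothetical conditional probability tables (one skill factor, one quality variable), far simpler than the three model structures studied in the dissertation.

```python
# Prior over the causal factor: development team skill.
p_skill = {"high": 0.6, "low": 0.4}

# Hypothetical CPT: P(quality | skill).
p_quality_given_skill = {
    ("good", "high"): 0.8, ("poor", "high"): 0.2,
    ("good", "low"): 0.3, ("poor", "low"): 0.7,
}


def p_quality(q):
    """Marginal P(quality=q), summing out the skill factor."""
    return sum(p_skill[s] * p_quality_given_skill[(q, s)] for s in p_skill)


# P(quality=good) = 0.6*0.8 + 0.4*0.3 = 0.60
print(round(p_quality("good"), 2))  # → 0.6
```

Fitting such a model to project data means learning the CPT entries; the finding above that team skill dominates corresponds to the quality variable being most sensitive to that factor's table.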
