1

Hybrid approaches for measuring use, users, and usage behaviors: A paper submitted to the NSF NSDL Webmetrics Workshop, Costa Mesa, CA, Aug. 2-3, 2004.

Coleman, Anita Sundaram, Budhu, Muniram 07 1900 (has links)
This paper was submitted as part of the requirements and statement of interest for participation in the NSF-funded NSDL Webmetrics Workshop in Aug. 2004. It documents GROW's experience with the development of webmetrics software and its intention to include webmetrics strategies as part of its evaluation. GROW's evaluation strategy was articulated in conjunction with the library design and development framework (Budhu & Coleman, 2002). A digital library is a complex thing to evaluate, and the 'interactives' evaluation framework we proposed uses hybrid methods to study distinct layers and objects in the digital library (the resource itself, the interface, the search engine, etc.), to understand users, and to evaluate educational impact. Our Interactives Evaluation strategy has been shared with users and stakeholders at various venues, such as the Harvill conference and the NSDL Participant Interaction Digital Workshop, February 2004.
2

Web Metrics Bibliography

Coleman, Anita Sundaram, Neuhaus, Chris January 2004 (has links)
A review of the literature reveals that web metrics is a complex topic that can be found under many different terms and phrases: e-metrics, web site traffic measurement, web usage, web mining, and online consumer/usage behavior are just a few. Data mining, web analytics, knowledge data discovery, informetrics (bibliometrics and web-o-metrics), and business analytics are also relevant. 'Metrics' are measures and 'analytics' are measurements of information, processes, and data analysis from processes, but web analytics is also becoming synonymous with e-metrics and web metrics. 'Verticalization' is one of the newest trends in business web analytics/metrics; this means not just web traffic logging and analysis but a broader scope that includes understanding and predicting customer behavior and customer relationship management. 'Personalization' is considered fundamental to improving customer motivation, satisfaction, and retention. What is the potential of web metrics for understanding educational use of the NSDL and measuring educational impact? As Einstein said, 'Not everything that can be counted counts, and not everything that counts can be counted.' What do we want to count for the NSDL, how, and why? These are the questions that have motivated the creation of this bibliography. We hope it will be a useful starting point and reference document as we begin framing a plan of action for the NSDL in this important area.
Status: This bibliography is a work in progress. When it is completed (target date: 08/30/04) it will be a selective, annotated bibliography on web metrics. Currently, the abstracts in this bibliography are often taken directly from the source articles or websites and are not annotations. Books and journals dealing with this topic are not yet included (with one exception); we plan to include at least other texts and journals in the final version.
Acknowledgments: Chris Neuhaus jumpstarted this bibliography and Anita conducted a literature search in databases such as the ACM Digital Library, besides editing it. In addition, we found the statements of the Webmetrics Workshop participants most helpful in preparing this bibliography, and some of the references in the statements have been included here. We also acknowledge the labor of Shawn Nelson, SIRLS, University of Arizona, in locating the abstracts and articles/items listed in this bibliography. Your feedback and comments (especially critical comments and reader annotations about any of the items) will help to improve this selective, annotated bibliography and are greatly encouraged.
Version: This is version 2 of the bibliography prepared by volunteers of the Educational Impact and Evaluation Standing Committee for the NSDL Webmetrics Workshop (a joint workshop of the Technology Standing Committee and the EIESC), Aug. 2-3, Costa Mesa, CA. This version adds two tools mentioned at the workshop and includes citations to two papers that were distributed at the workshop as well. Version 1 of the bibliography, also a volunteer effort, was distributed as a paper copy to the 26 participants of the workshop. For details about the workshop, visit http://webmetrics.comm.nsdl.org/. This bibliography is being made available through DLIST, http://dlist.sir.arizona.edu/.
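For readers new to the area, here is a toy illustration of the most basic traffic measures the bibliography covers. The log fields and values are hypothetical, and real web analytics goes far beyond such counts:

```python
# Toy sketch of basic web metrics (page views, unique visitors, top pages)
# computed from a hypothetical access log. Illustration only.
from collections import Counter

log = [
    ("2004-08-02", "128.196.1.10", "/index.html"),
    ("2004-08-02", "128.196.1.10", "/search"),
    ("2004-08-02", "150.135.2.20", "/index.html"),
    ("2004-08-03", "128.196.1.10", "/index.html"),
]

page_views = len(log)                                   # every request counts
unique_visitors = len({ip for _, ip, _ in log})         # distinct client IPs
top_pages = Counter(path for _, _, path in log).most_common(2)

print(page_views, unique_visitors, top_pages)
# 4 2 [('/index.html', 3), ('/search', 1)]
```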
3

A software testing estimation and process control model

Archibald, Colin J. January 1998 (has links)
The control of the testing process and estimation of the resource required to perform testing is key to delivering a software product of target quality on budget. This thesis explores the use of testing to remove errors, the part that metrics and models play in this process, and considers an original method for improving the quality of a software product. The thesis investigates the possibility of using software metrics to estimate the testing resource required to deliver a product of target quality into deployment and also to determine, during the testing phases, the correct point in time to proceed to the next testing phase in the life-cycle. Along with the metrics Clear ratio, Churn, Error rate halving, Severity shift, and Faults per week, a new metric 'Earliest Visibility' (EV) is defined and used to control the testing process. EV is constructed upon the link between the point at which an error is made within development and the point at which it is subsequently found during testing. To increase the effectiveness of testing and reduce costs whilst maintaining quality, the model operates by targeting each test phase at the errors linked to that phase and by allowing each test phase to build upon the previous phase. EV also provides a measure of testing effectiveness and fault introduction rate by development phase. The resource estimation model is based on a gradual refinement of an estimate, which is updated following each development phase as more reliable data becomes available. Used in conjunction with the process control model, which ensures the correct testing phase is in operation, the estimation model will have accurate data for each testing phase as input. The proposed model and metrics have been developed and tested on a large-scale (4 million LOC) industrial telecommunications product written in C and C++ running within a Unix environment. It should be possible to extend this work to suit other environments and other development life-cycles.
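As a rough illustration of the kind of per-phase fault measures discussed in this abstract, the sketch below links the phase in which an error was introduced to the phase in which it was found. The phase pairing and data are invented for the example and are not the thesis's actual definitions of EV or the other metrics:

```python
# Illustrative sketch (not the thesis's formulation): per-phase testing
# measures built from fault records that link the phase where an error was
# introduced to the test phase where it was detected.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Fault:
    introduced_in: str   # development phase where the error was made
    found_in: str        # test phase where it was detected
    week_found: int

def faults_per_week(faults):
    """Count of faults detected in each calendar week."""
    return Counter(f.week_found for f in faults)

def phase_effectiveness(faults, test_phase):
    """Fraction of faults found in `test_phase` that were introduced in the
    development phase that test phase targets (hypothetical pairing)."""
    targeted = {"unit test": "code", "integration test": "design",
                "system test": "requirements"}
    in_phase = [f for f in faults if f.found_in == test_phase]
    if not in_phase:
        return 0.0
    hits = sum(1 for f in in_phase if f.introduced_in == targeted.get(test_phase))
    return hits / len(in_phase)

faults = [Fault("code", "unit test", 3), Fault("design", "integration test", 5),
          Fault("code", "integration test", 5), Fault("requirements", "system test", 8)]
print(faults_per_week(faults))                   # Counter({5: 2, 3: 1, 8: 1})
print(phase_effectiveness(faults, "unit test"))  # 1.0
```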
4

Empirical Validation of the Usefulness of Information Theory-Based Software Metrics

Gottipati, Sampath 10 May 2003 (has links)
Software designs consist of software components and their relationships. Graphs are abstractions of software designs. Graphs composed of nodes and hyperedges are attractive for depicting software designs. Measurement of abstractions quantifies relationships that exist among components. Most conventional metrics are based on counting. In contrast, this work adopts information theory because design decisions are information. The goal of this research is to show that information theory-based metrics proposed by Allen, namely size, complexity, coupling, and cohesion, can be useful in real-world software development projects, compared to the counting-based metrics. The thesis includes three case studies with the use of global variables as the abstraction. It is observed that one can use the counting metrics for the size and coupling measures and the information metrics for the complexity and cohesion measures.
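A simplified sketch of the contrast between a counting measure and an entropy-based measure over a design graph follows. It illustrates the general idea only and is not necessarily Allen's exact formulation:

```python
# Simplified sketch: a counting metric versus an entropy-based metric for a
# design graph (nodes = components, edges = relationships). Illustration of
# the idea only, not Allen's exact definitions.
import math
from collections import Counter

def counting_size(edges):
    """Counting-style metric: simply the number of relationships."""
    return len(edges)

def entropy_size(nodes, edges):
    """Information-style metric: each node contributes -log2(p) bits, where p
    is the probability of its connection pattern (set of neighbours) across
    the whole graph; rarer patterns carry more information."""
    neighbours = {n: frozenset() for n in nodes}
    for a, b in edges:
        neighbours[a] = neighbours[a] | {b}
        neighbours[b] = neighbours[b] | {a}
    pattern_counts = Counter(neighbours.values())
    total = len(nodes)
    return sum(-math.log2(pattern_counts[neighbours[n]] / total) for n in nodes)

nodes = ["parser", "lexer", "symtab", "codegen"]
edges = [("parser", "lexer"), ("parser", "symtab"), ("codegen", "symtab")]
print(counting_size(edges))                   # 3
print(entropy_size(nodes, edges))             # 8.0 bits: all patterns distinct
```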
5

Applications of computer algebra systems to general relativity theory

Joly, Gordon Charles January 1986 (has links)
No description available.
6

Cohesion prediction using information flow

Moses, John January 1997 (has links)
No description available.
7

On the fidelity of software

Counsell, Stephen J. January 2002 (has links)
No description available.
8

Functional Metrics in Axiomatic Design

Henley, Richard 31 October 2017 (has links)
The objective of this work is to study functional metrics (FMs) in Axiomatic Design (Suh 1990) and their relationship to each other within the functional domain, and to understand ways in which they add value to the design process and the design solution, as well as the variables that influence that value.
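For context, here is a minimal sketch of the standard Axiomatic Design classification of a design matrix (uncoupled, decoupled, coupled), which functional metrics build upon. The matrix values are illustrative, and the thesis's own FM definitions are not reproduced here:

```python
# Hedged sketch: classify a design matrix A from Axiomatic Design (Suh 1990),
# rows = functional requirements (FRs), columns = design parameters (DPs).
# Uses the standard uncoupled/decoupled/coupled distinction only.
def classify_design_matrix(A):
    n = len(A)
    def is_diagonal(m):
        return all(m[i][j] == 0 for i in range(n) for j in range(n) if i != j)
    def is_lower_triangular(m):
        # In general a design is decoupled if triangular after reordering
        # FRs/DPs; this check assumes the given ordering for simplicity.
        return all(m[i][j] == 0 for i in range(n) for j in range(i + 1, n))
    if is_diagonal(A):
        return "uncoupled (satisfies the Independence Axiom)"
    if is_lower_triangular(A):
        return "decoupled (independence preserved if DPs are set in order)"
    return "coupled (FRs cannot be satisfied independently)"

A = [[1, 0, 0],
     [1, 1, 0],
     [0, 1, 1]]
print(classify_design_matrix(A))   # decoupled (...)
```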
9

Benchmarking tests on recovery oriented computing

Raman, Nandita 09 July 2012 (has links)
Benchmarks have played a very important role in guiding the progress of computer science systems in various ways, and in autonomous environments they have a particularly major role to play. System crashes and software failures are a basic part of a software system's life-cycle, and overcoming them, or rather making the system as invulnerable to them as possible, is the main purpose of recovery oriented computing. This is usually done by trying to reduce downtime by automatically and efficiently recovering from a broad class of transient software failures without having to modify applications. There have been various types of benchmarks for recovering from a failure, but in this paper we intend to create a benchmark framework, called the warning benchmarks, to measure and evaluate recovery oriented systems. It consists of the known and the unknown failures and a few benchmark techniques, which the warning benchmarks handle with the help of various other techniques in software fault analysis.
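A minimal sketch of the underlying measurement idea, recovery time after an injected failure, follows. The service command and health-check endpoint are hypothetical placeholders, not part of the warning-benchmark framework itself:

```python
# Minimal recovery-benchmark sketch: inject a failure into a service process,
# then measure downtime until the service answers its health check again.
# The restart command and health URL below are hypothetical placeholders.
import subprocess, time, urllib.error, urllib.request

HEALTH_URL = "http://localhost:8080/health"   # hypothetical endpoint

def is_healthy(timeout=1.0):
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def measure_recovery(proc, poll_interval=0.5, max_wait=60.0):
    """Kill the process, restart it, and report the observed downtime."""
    proc.kill()                                            # injected failure
    t_fail = time.monotonic()
    new_proc = subprocess.Popen(["python", "service.py"])  # hypothetical restart
    while time.monotonic() - t_fail < max_wait:
        if is_healthy():
            return new_proc, time.monotonic() - t_fail     # downtime in seconds
        time.sleep(poll_interval)
    return new_proc, float("inf")                          # did not recover in time
```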
10

System architecture metrics : an evaluation

Shepperd, Martin John January 1991 (has links)
The research described in this dissertation is a study of the application of measurement, or metrics, for software engineering. This is not in itself a new idea; the concept of measuring software was first mooted close on twenty years ago. However, examination of what is a considerable body of metrics work reveals that incorporating measurement into software engineering is rather less straightforward than one might presuppose and, despite the advancing years, there is still a lack of maturity. The thesis commences with a dissection of three of the most popular metrics, namely Halstead's software science, McCabe's cyclomatic complexity, and Henry and Kafura's information flow - all of which might be regarded as having achieved classic status. Despite their popularity these metrics are all flawed in at least three respects. First and foremost, in each case it is unclear exactly what is being measured: instead there is a preponderance of such metaphysical terms as complexity and quality. Second, each metric is theoretically doubtful in that it exhibits anomalous behaviour. Third, much of the claimed empirical support for each metric is spurious, arising from poor experimental design and inappropriate statistical analysis. It is argued that these problems are not misfortune but the inevitable consequence of the ad hoc and unstructured approach of much metrics research: in particular the scant regard paid to the role of underlying models. This research seeks to address these problems by proposing a systematic method for the development and evaluation of software metrics. The method is a goal-directed combination of formal modelling techniques and empirical evaluation. The method is applied to the problem of developing metrics to evaluate software designs - from the perspective of a software engineer wishing to minimise implementation difficulties, faults and future maintenance problems. It highlights a number of weaknesses within the original model. These are tackled in a second, more sophisticated model which is multidimensional, that is, it combines, in this case, two metrics. Both the theoretical and empirical analysis show this model to have utility in its ability to identify hard-to-implement and unreliable aspects of software designs. It is concluded that this method goes some way towards introducing a little more rigour into the development, evaluation and evolution of metrics for the software engineer.
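To make the critique concrete, the two of these classic metrics with simple closed forms can be computed as follows. The formulas are the standard published ones; the input numbers are purely illustrative:

```python
# Sketch of two of the classic metrics dissected in the dissertation, using
# their standard published formulas; the inputs are illustrative numbers.
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe: V(G) = E - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * components

def henry_kafura(length, fan_in, fan_out):
    """Henry and Kafura information flow: length * (fan_in * fan_out)^2."""
    return length * (fan_in * fan_out) ** 2

print(cyclomatic_complexity(edges=9, nodes=8))        # 3
print(henry_kafura(length=120, fan_in=3, fan_out=2))  # 4320
```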
