About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

A software testing estimation and process control model

Archibald, Colin J. January 1998 (has links)
The control of the testing process and estimation of the resource required to perform testing are key to delivering a software product of target quality on budget. This thesis explores the use of testing to remove errors, the part that metrics and models play in this process, and considers an original method for improving the quality of a software product. The thesis investigates the possibility of using software metrics to estimate the testing resource required to deliver a product of target quality into deployment, and also to determine during the testing phases the correct point in time to proceed to the next testing phase in the life-cycle. Along with the metrics Clear ratio, Churn, Error rate halving, Severity shift, and Faults per week, a new metric, 'Earliest Visibility' (EV), is defined and used to control the testing process. EV is constructed upon the link between the point at which an error is made within development and the point at which it is subsequently found during testing. To increase the effectiveness of testing and reduce costs whilst maintaining quality, the model operates by targeting each test phase at the errors linked to that phase and by allowing each test phase to build upon the previous one. EV also provides a measure of testing effectiveness and fault introduction rate by development phase. The resource estimation model is based on a gradual refinement of an estimate, which is updated following each development phase as more reliable data becomes available. Used in conjunction with the process control model, which ensures the correct testing phase is in operation, the estimation model has accurate data for each testing phase as input. The proposed model and metrics have been developed and tested on a large-scale (4 million LOC) industrial telecommunications product written in C and C++ running within a Unix environment. It should be possible to extend this work to suit other environments and other development life-cycles.
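The abstract does not reproduce the thesis's formulas, but the two core ideas (a phase-lag metric linking where errors are introduced to where they are found, and an estimate refined after each phase) can be sketched illustratively. In the sketch below, the phase names, fault records, and weighting scheme are hypothetical stand-ins, not the definitions from the thesis.

```python
# Illustrative sketch only: phase names, fault records, and formulas are
# hypothetical stand-ins, not the metrics defined in the thesis.
from statistics import mean

# Each fault records the phase that introduced it and the phase that found it.
PHASES = ["design", "code", "unit_test", "integration_test", "system_test"]

faults = [
    {"introduced": "design", "found": "integration_test"},
    {"introduced": "code", "found": "unit_test"},
    {"introduced": "code", "found": "system_test"},
]

def visibility_lag(faults):
    """Average number of phases between introducing a fault and finding it.

    A smaller lag suggests each test phase is catching the errors linked to
    it, which is the intuition behind an 'Earliest Visibility'-style metric.
    """
    lags = [PHASES.index(f["found"]) - PHASES.index(f["introduced"])
            for f in faults]
    return mean(lags)

def refine_estimate(prior_estimate, phase_actual, weight=0.5):
    """Gradual refinement: blend the prior estimate with the latest
    per-phase actual as each development phase completes."""
    return (1 - weight) * prior_estimate + weight * phase_actual

print(visibility_lag(faults))   # ~2.33 phases for the sample data above
estimate = 100.0                # initial testing-effort estimate
for actual in [120.0, 110.0]:   # actuals from completed phases
    estimate = refine_estimate(estimate, actual)
print(estimate)                 # estimate updated after each phase: 110.0
```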
2

Hybrid approaches for measuring use, users, and usage behaviors: A paper submitted to the NSF NSDL Webmetrics Workshop, Costa Mesa, CA, Aug. 2-3, 2004.

Coleman, Anita Sundaram, Budhu, Muniram July 2004 (has links)
This paper was submitted as part of the requirements and statement of interest for participation in the NSF-funded NSDL Webmetrics Workshop in Aug. 2004. It documents GROW's experience with regard to the development of webmetrics software and its intention to include webmetrics strategies as part of evaluation. GROW's evaluation strategy was articulated in conjunction with the library design and development framework (Budhu & Coleman, 2002). A digital library is a complex thing to evaluate, and the 'interactives' evaluation framework we proposed uses hybrid methods to study distinct layers and objects in the digital library (the resource itself, the interface, the search engine, etc.), understand users, and evaluate educational impact. Our Interactives Evaluation strategy has been shared with users and stakeholders at various venues such as the Harvill conference and the NSDL Participant Interaction Digital Workshop, February 2004.
3

Web Metrics Bibliography

Coleman, Anita Sundaram, Neuhaus, Chris January 2004 (has links)
A review of the literature reveals that web metrics is a complex topic that can be found under many different terms and phrases: e-metrics, web site traffic measurement, web usage, web mining, and online consumer/usage behavior are just a few. Data mining, web analytics, knowledge data discovery, informetrics (bibliometrics and web-o-metrics), and business analytics are also relevant. 'Metrics' are measures, and 'analytics' are measurements of information, processes, and data analysis from processes, but web analytics is also becoming synonymous with e-metrics and web metrics. 'Verticalization' is one of the newest trends in business web analytics/metrics; it means not just web traffic logging and analysis but has been broadened to include understanding and predicting customer behavior and customer relationship management. 'Personalization' is considered fundamental to improving customer motivation, satisfaction, and retention. What is the potential of web metrics for understanding educational use of the NSDL and measuring educational impact? As Einstein said, 'Not everything that can be counted counts, and not everything that counts can be counted.' What do we want to count for the NSDL, how, and why? These are the questions that have motivated the creation of this bibliography. We hope it will be a useful starting point and reference document as we begin framing a plan of action for the NSDL in this important area.

Status: This bibliography is a work in progress. When it is completed (target date: 08/30/04) it will be a selective, annotated bibliography on web metrics. Currently, the abstracts in this bibliography are often taken directly from the source articles or websites and are not annotations. Books and journals dealing with this topic are not yet included (with one exception); we plan to include at least other texts and journals in the final version.

Acknowledgments: Chris Neuhaus jumpstarted this bibliography, and Anita conducted a literature search in databases such as the ACM Digital Library besides editing it. In addition, we found the statements of the Webmetrics Workshop participants most helpful in preparing this bibliography, and some of the references in the statements have been included here. We also acknowledge the labor of Shawn Nelson, SIRLS, University of Arizona, in locating the abstracts and articles/items listed in this bibliography. Your feedback and comments (especially critical comments and reader annotations about any of the items) will help to improve this selective, annotated bibliography and are greatly encouraged.

Version: This is version 2 of the bibliography, prepared by volunteers of the Educational Impact and Evaluation Standing Committee for the NSDL Webmetrics Workshop (a joint workshop of the Technology Standing Committee and the EIESC), Aug. 2-3, Costa Mesa, CA. This version adds two tools mentioned at the workshop and includes citations to two papers that were distributed at the workshop as well. Version 1 of the bibliography, also a volunteer effort, was distributed as a paper copy to the 26 participants of the workshop. For details about the workshop, visit http://webmetrics.comm.nsdl.org/. This bibliography is being made available through DLIST, http://dlist.sir.arizona.edu/.
4

Empirical Validation of the Usefulness of Information Theory-Based Software Metrics

Gottipati, Sampath 10 May 2003 (has links)
Software designs consist of software components and their relationships. Graphs are abstractions of software designs. Graphs composed of nodes and hyperedges are attractive for depicting software designs. Measurements of abstractions quantify the relationships that exist among components. Most conventional metrics are based on counting. In contrast, this work adopts information theory, because design decisions are information. The goal of this research is to show that the information theory-based metrics proposed by Allen, namely size, complexity, coupling, and cohesion, can be useful in real-world software development projects compared to the counting-based metrics. The thesis includes three case studies with the use of global variables as the abstraction. It is observed that one can use the counting metrics for the size and coupling measures and the information metrics for the complexity and cohesion measures.
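Allen's definitions are not reproduced in the abstract; the following is a rough illustrative sketch of the general contrast between a counting-based and an entropy-based graph measure, with the graph encoding and formulas chosen for illustration rather than taken from the thesis.

```python
# Illustrative contrast between a counting-based and an information
# theory-based measure of a design graph. The encoding below (entropy of
# node-neighborhood patterns) is a stand-in, not Allen's exact definitions.
import math
from collections import Counter

# A small design graph: component -> set of components it connects to.
graph = {
    "A": {"B", "C"},
    "B": {"A"},
    "C": {"A", "D"},
    "D": {"C"},
}

def counting_size(graph):
    """Counting-based size: number of nodes plus number of edges."""
    edges = sum(len(nbrs) for nbrs in graph.values()) // 2
    return len(graph) + edges

def pattern_entropy(graph):
    """Entropy (bits) of the distribution of node-neighborhood patterns.

    Nodes sharing the same connection pattern carry less information; many
    distinct patterns suggest a more complex design.
    """
    patterns = Counter(frozenset(nbrs) for nbrs in graph.values())
    n = len(graph)
    return -sum((c / n) * math.log2(c / n) for c in patterns.values())

print(counting_size(graph))    # 4 nodes + 3 edges = 7
print(pattern_entropy(graph))  # 2.0 bits: all four patterns are distinct
```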
5

Benchmarking tests on recovery oriented computing

Raman, Nandita 09 July 2012 (has links)
Benchmarks have played a very important role in guiding the progress of computer systems in various ways; in autonomous environments specifically, they have a major role to play. System crashes and software failures are a basic part of a software system's life-cycle, and the main purpose of recovery oriented computing is to overcome them, or rather to make the system as invulnerable to them as possible. This is usually done by trying to reduce downtime by automatically and efficiently recovering from a broad class of transient software failures without having to modify applications. There have been various types of benchmarks for recovering from a failure, but in this paper we intend to create a benchmark framework, called the warning benchmarks, to measure and evaluate recovery oriented systems. It consists of known and unknown failures and a few benchmark techniques, which the warning benchmarks handle with the help of various other techniques from software fault analysis.
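The abstract does not specify how downtime is measured; a minimal sketch of one common measurement pattern (inject a fault, poll until the service responds again, record the downtime) might look like the following, where the service object, fault injector, and health probe are all hypothetical placeholders rather than anything from the paper.

```python
# Minimal sketch of a recovery-time measurement loop. The service under
# test, fault injector, and health probe are hypothetical placeholders.
import time

def inject_fault(service):
    """Placeholder: crash a worker, kill a process, corrupt a cache, etc."""
    service.crash()

def is_healthy(service):
    """Placeholder health probe; a real harness might use an HTTP ping."""
    return service.ping()

def measure_recovery_time(service, poll_interval=0.1, timeout=30.0):
    """Inject a fault, then poll until the service responds again.

    Returns the observed downtime in seconds, or None on timeout.
    """
    inject_fault(service)
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if is_healthy(service):
            return time.monotonic() - start
        time.sleep(poll_interval)
    return None  # the service never recovered within the timeout
```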
6

Applications of computer algebra systems to general relativity theory

Joly, Gordon Charles January 1986 (has links)
No description available.
7

Cohesion prediction using information flow

Moses, John January 1997 (has links)
No description available.
8

On the fidelity of software

Counsell, Stephen J. January 2002 (has links)
No description available.
9

Marketing metrics use in South Africa

Mathare, Waweru 06 May 2010 (has links)
The marketing function has been under immense pressure to document its contribution to the performance of the firm. This pressure comes from shareholders seeking a return on their funds, from CEOs seeking savings, and from marketers' peers as marketers seek to become more relevant in the organisation. Efforts to track marketing have been hindered by, among other issues, a lack of numeracy among marketers, the primacy of financial measurements, and a laundry list of metrics from research and practice that makes it hard to choose a few pertinent ones. The use of marketing metrics has been shown to contribute to better business performance, and during recessions, when budgets are tight, it becomes even more urgent that the marketing function have and understand marketing metrics. This study aimed to evaluate the extent of marketing metrics use in South Africa, determine the levels and frequency of review, examine whether the use of metrics changes under severe economic conditions, and evaluate whether a change in the use of metrics contributes to better firm performance. The study found that the use, review, and collection of metrics is on par with other countries, but that there is no change in the level and frequency of review during a recession. Evidence was found of better firm performance linked to the change in use of metrics. / Dissertation (MBA)--University of Pretoria, 2010. / Gordon Institute of Business Science (GIBS)
10

Software Architectural Metrics for the Scania Internet of Things Platform: From a Microservice Perspective

Ulander, David January 2017 (has links)
There are limited tools to evaluate a microservice architecture and no common definition of how such an architecture should be designed. Moreover, developing systems with microservices introduces additional complexity to the software architecture. That, together with the fact that systems are becoming more complex, has led to a desire for architecture evaluation methods. In this thesis a set of quality attributes measured by structural metrics is used to evaluate Scania's IoT Offboard platform. By implementing a metrics evaluation program, the quality of the software architecture can be improved. Metrics can also assist developers and architects in becoming more efficient, since they better understand how performance is measured, i.e. which quality attributes are the most important and how these are measured. For Scania's IoT Offboard platform, the studied quality attributes are, in decreasing order of importance: flexibility, reusability, and understandability. All the microservices in the platform are loosely coupled, which results in a loosely coupled architecture. This indicates a flexible, reusable, and understandable system in terms of coupling. Furthermore, the architecture is decentralized, i.e. the system is inflexible and difficult to change. The other metrics were lacking a reference scale, hence they will act as a point of reference for future measurements as the architecture evolves. To improve the flexibility, reusability, and understandability of the architecture, the large microservices should be divided into several smaller microservices. Aggregators should also be utilized more to make the system more flexible.
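The thesis's exact structural metrics are not given in the abstract; as an illustrative sketch, a simple coupling measure over a microservice dependency graph might be computed as follows, where the service names and the normalisation formula are assumptions for illustration only.

```python
# Illustrative sketch: a simple structural coupling measure over a
# microservice dependency graph. Service names and the formula are
# assumed for illustration, not taken from the thesis.

# service -> set of services it calls
deps = {
    "ingest":    {"registry"},
    "registry":  set(),
    "dashboard": {"ingest", "registry"},
    "alerts":    {"ingest"},
}

def coupling(deps):
    """Per-service coupling: outgoing plus incoming dependencies,
    normalised by the number of other services (0 = fully decoupled)."""
    n = len(deps)
    incoming = {s: 0 for s in deps}
    for nbrs in deps.values():
        for t in nbrs:
            incoming[t] += 1
    return {s: (len(deps[s]) + incoming[s]) / (n - 1) for s in deps}

for service, c in coupling(deps).items():
    print(f"{service}: {c:.2f}")
# Lower values indicate the loose coupling the evaluation associates with
# flexibility, reusability, and understandability.
```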
