11

Software quality assurance in a remote client/contractor context

Black, Angus Hugh January 2006 (has links)
With reliance on information technology, and on the software that technology utilizes, increasing every day, it is of paramount importance that the software developed be of acceptable quality. This quality can be achieved through the use of various software engineering standards and guidelines. The question is to what extent these standards and guidelines need to be utilized, and how they are implemented. This research focuses on how guidelines developed by standardization bodies and the Unified Process developed by Rational can be integrated to achieve a suitable process and version control system within the context of a remote client/contractor small-team environment.
12

Software process management and case studies in Hong Kong.

January 2003 (has links)
by Ling Ho-Wan Howard, Ryoo Byung-Hoon. / Thesis (M.B.A.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 72-74). / Contents:
Chapter I. IT Profile of Hong Kong (p.1): IT Penetration in 2002; Government Initiatives; Software Industry of Hong Kong
Chapter II. IT Strategy (p.5): IT Strategy - 3 Check Points; Flexible Platform; Strategy vs. ROI; Outsourcing or Internal Development; Quality Management System - Instituting Best Practices; Deming's 14 Points; The Juran Trilogy; Crosby's 14 Quality Steps
Chapter III. Software Quality Management - CMM (p.16): Software Development Project; Software Project Process Model; Software Quality Management; Capability Maturity Model (CMM); Bootstrap 3.2; Trillium; ISO 9001/TickIT; SPICE
Chapter IV. CMM Practices in the World (p.29): The CMM Practices - Worldwide; Two Studies on Software Process Management in Taiwan (A Longitudinal Study of Top 1000 Companies; Software Project Process Management Maturity and Project Performance)
Chapter V. Software Process Management in Hong Kong (p.36): The CMM in Hong Kong; Case Studies on SPM in Hong Kong (Dow Chemical; Oracle Hong Kong; Bentley Systems Inc. (Hong Kong); i-Cable; SinoPac Securities (Asia) Ltd); Implications of the Statistics; Factor Comparison of Mean Values; Implications
Chapter VI. Conclusion (p.60)
Appendix (p.62) / Bibliography (p.72)
13

Design metrics analysis of the Harris ROCC project

Perera, Dinesh Sirimal January 1995 (has links)
The Design Metrics Research Team at Ball State University has developed a quality design metric, D(G), which consists of an internal design metric, Di, and an external design metric, De. This thesis discusses applying these design metrics to the ROCC (Radar On-line Command Control) project received from Harris Corporation. The main objective is to analyze the behavior of D(G) and of the primitive components of the metric. Error and change history reports are vital inputs for validating the metrics' performance, and because correct identification of the types of changes and errors is critical to the evaluation, several different analyses were performed to qualify the metric's performance in each case. The thesis covers the analysis of 666 FORTRAN modules totaling approximately 142,296 lines of code. / Department of Computer Science
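As a rough illustration of how such a composite metric can be evaluated over a set of modules, here is a minimal sketch; the module attributes and weights are assumptions for illustration, not the research team's published definitions of Di and De.

```python
# Hypothetical sketch of a composite design metric D(G) = Di + De.
# The attributes and weights below are illustrative assumptions, not
# the Ball State Design Metrics Research Team's published definitions.

def internal_metric(params: int, globals_used: int, returns: int) -> float:
    """Di: an assumed measure of a module's internal design complexity."""
    return 1.0 * params + 2.0 * globals_used + 0.5 * returns

def external_metric(fan_in: int, fan_out: int) -> float:
    """De: an assumed measure of inter-module coupling."""
    return 1.0 * fan_in + 1.5 * fan_out

def design_metric(module: dict) -> float:
    """D(G) combines the internal and external components."""
    di = internal_metric(module["params"], module["globals_used"], module["returns"])
    de = external_metric(module["fan_in"], module["fan_out"])
    return di + de

# Modules whose D(G) sits far above the project mean would be flagged
# as stress points for closer inspection.
modules = [
    {"name": "RADAR_IO", "params": 6, "globals_used": 4, "returns": 2,
     "fan_in": 9, "fan_out": 12},
    {"name": "CMD_PARSE", "params": 2, "globals_used": 0, "returns": 1,
     "fan_in": 3, "fan_out": 2},
]
for m in modules:
    print(f'{m["name"]}: D(G) = {design_metric(m):.1f}')
```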
14

Software testing tools and productivity

Moschoglou, Georgios Moschos January 1996 (has links)
Testing statistics suggest that testing consumes more than half of a programmer's professional life, even though few programmers like testing, fewer like test design, and only about 5% of their education is devoted to it. The main goal of this research is to test the efficiency of two software testing tools. Two experiments were conducted in the Computer Science Department at Ball State University. The first experiment compares two conditions - testing software with no tool and testing software with a command-line based testing tool - on the length of time and the number of test cases needed to achieve 80% statement coverage, for 22 graduate students in the department. The second experiment compares three conditions - no tool, a command-line based testing tool, and a GUI interactive tool with added functionality - on the time and number of test cases needed to achieve 95% statement coverage, for 39 graduate and undergraduate students in the same department. / Department of Computer Science
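For reference, the statement-coverage criterion both experiments targeted is simply the fraction of executable statements a test suite exercises; a minimal sketch follows, with the line sets fabricated for illustration.

```python
# Minimal sketch of the statement-coverage criterion: statements
# executed by the test suite divided by executable statements.

def statement_coverage(executable: set, executed: set) -> float:
    """Return coverage as a fraction in [0, 1]."""
    if not executable:
        return 1.0
    return len(executable & executed) / len(executable)

# A module with 10 executable statements, 8 of them exercised, has
# 0.8 coverage: enough for the first experiment's 80% threshold but
# not the second experiment's 95% threshold.
executable_lines = set(range(1, 11))
executed_lines = {1, 2, 3, 4, 5, 6, 7, 8}
print(statement_coverage(executable_lines, executed_lines))  # 0.8
```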
15

Neural networks and their application to metrics research

Lin, Burch January 1996 (has links)
In software development, time and resources are limited, so developers collect metrics in order to allocate resources more effectively and meet time constraints. For example, if one could determine from metrics, with accuracy, which modules were error-prone and which were error-free, one could assign personnel to work only on the error-prone modules. There are three concerns when using metrics. First, with the many different metrics that have been defined, one may not know which to collect. Second, the amount of metrics data collected can be staggering. Third, interpreting multiple metrics together may indicate error-proneness better than any single metric. This thesis investigated the accuracy of a neural network, an unconventional model, in determining whether a module is error-prone from an input suite of metrics. The accuracy of the neural network model was compared with that of a logistic regression model, a standard statistical model, with the same input and output; in other words, we attempted to find whether the metrics correlated with error-proneness. The metrics were gathered from three different software projects, and the suite used to build the models was a subset of a larger collection, reduced using factor analysis. The conclusion of this thesis is that, for the projects analyzed, neither the neural network model nor the logistic regression model provides acceptable accuracy for real use, and we cannot conclude that one model is more accurate than the other. / Department of Computer Science
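As a sketch of the statistical baseline described above (not the thesis's actual data or model configuration), a logistic regression can be fit to a matrix of per-module metrics with an error-prone/error-free label; the metric values here are fabricated.

```python
# Sketch of the statistical baseline: logistic regression mapping a
# suite of module metrics to an error-prone (1) / error-free (0) label.
# The metric values and labels are fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows: modules; columns: a reduced metric suite (e.g. after factor
# analysis), such as lines of code, fan-out, and reported changes.
X = np.array([[120, 14, 3], [40, 2, 1], [300, 30, 9],
              [55, 5, 2], [210, 22, 6], [35, 1, 1]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict([[150, 18, 4]]))        # predicted class for a new module
print(model.predict_proba([[150, 18, 4]]))  # class membership probabilities
```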
16

An examination of the application of design metrics to the development of testing strategies in large-scale SDL models

West, James F. January 2000 (has links)
There exist a number of well-known and validated design metrics, and the fault prediction available through these metrics has been well documented for systems developed in languages such as C and Ada. However, the mapping and application of these metrics to SDL systems have not been thoroughly explored. The aim of this project is to test the applicability of these metrics for classifying components for testing purposes in a large-scale SDL system. A new model has been developed for this purpose. This research was conducted using a number of SDL systems, most notably actual production models provided by Motorola Corporation. / Department of Computer Science
17

A knowledge approach to software testing

Mohamed, Essack 12 1900 (has links)
Thesis (MPhil)--University of Stellenbosch, 2004. / ENGLISH ABSTRACT: The effort to achieve quality is the largest component of software cost. Software testing is costly, ranging from 50% to 80% of the cost of producing a first working version. It is resource-intensive and an intensely time-consuming activity in the overall Systems Development Life Cycle (SDLC), and hence arguably the most important phase of the process. Software testing is also pervasive: it starts at the initiation of a product with non-execution-type testing and continues past the post-implementation phase to the retirement of the product. Software testing is the currency of quality delivery. To understand testing and to improve testing practice, it is essential to see the software testing process in its broadest terms, as the means by which people, methodology, tools, measurement, and leadership are integrated to test a software product. A knowledge approach recognises knowledge management (KM) enablers such as leadership, culture, technology, and measurements that act in a dynamic relationship with KM processes, namely creating, identifying, collecting, adapting, organizing, applying, and sharing. Enabling a knowledge approach is a worthy goal to encourage the sharing and blending of experience, discipline, and expertise to improve quality and add value to the software testing process. This research was developed to establish whether specific knowledge (domain subject matter or business expertise, application or technical skills, software testing competency) and the interaction of the testing team influence the degree of quality in the delivery of the application under test, or whether one of these is the dominant critical knowledge area within software testing. The research also set out to establish whether there are personal or situational factors that predispose the test engineer to knowledge sharing, again with a view to using these factors to increase the quality and success of the 'testing phase' of the SDLC. KM, although relatively youthful, is entering its fourth generation, with evidence of two paradigms emerging: mainstream thinking and complex adaptive systems theory. This research uses pertinent and relevant extracts from both paradigms to pursue quality and success in software testing.
18

Statistical causal analysis for fault localization

Baah, George Kofi 08 August 2012 (has links)
The ubiquitous nature of software demands that software be released without faults. However, developers inadvertently introduce faults during development, and removing them requires debugging, a difficult, tedious, and time-consuming process. Several semi-automated techniques, spanning experimental, statistical, and program-structure-based approaches, have been developed to reduce the burden on the developer. Most address the part of debugging concerned with finding the location of the fault, referred to as fault localization. Current fault-localization techniques have several limitations, including (1) problems with program semantics, (2) the requirement for automated oracles, which in practice are difficult if not impossible to develop, and (3) the lack of a theoretical basis for the fault-localization problem. The thesis of this dissertation is that statistical causal analysis combined with program analysis is a feasible and effective approach to finding the causes of software failures. The overall goal of this research is to significantly extend the state of the art in fault localization. To that end, a novel probabilistic model that combines program-analysis information with statistical information in a principled manner is developed. The model, known as the probabilistic program dependence graph (PPDG), is applied to the fault-localization problem. The insights gained from applying the PPDG fuel the development of a novel theoretical framework for fault localization based on established causal-inference methodology. The framework enables current statistical fault-localization metrics to be analyzed from a causal perspective. This analysis shows that the metrics are related to each other, allowing them to be unified. It also reveals that current statistical techniques do not find the causes of program failures; instead, they find the program elements most associated with failures. The fault-localization problem, however, is a causal problem, and statistical association does not imply causation. Several empirical studies conducted on several software subjects (1) confirm the analytical results and (2) demonstrate the efficacy of the causal technique for fault localization. The results demonstrate that the research in this dissertation significantly improves on the state of the art in fault localization.
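The dissertation's causal estimator is not reproduced here, but the kind of association-based scoring it critiques is easy to state. The widely used Tarantula metric, for example, ranks a statement by how disproportionately failing tests cover it; the sketch below implements that standard formula with fabricated counts.

```python
# Tarantula, a standard association-based suspiciousness metric.
# Scores like this rank statements by their correlation with failing
# tests; as argued above, such association scores are not causal
# estimates of why the program fails.

def tarantula(ef: int, ep: int, total_fail: int, total_pass: int) -> float:
    """ef/ep: number of failing/passing tests that execute the statement."""
    fail_rate = ef / total_fail if total_fail else 0.0
    pass_rate = ep / total_pass if total_pass else 0.0
    denom = fail_rate + pass_rate
    return fail_rate / denom if denom else 0.0

# A statement covered by 4 of 5 failing tests but only 10 of 95
# passing tests scores high and would be inspected early.
print(tarantula(ef=4, ep=10, total_fail=5, total_pass=95))  # ~0.88
```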
19

Analysis of multiple software releases of AFATDS using design metrics

Bhargava, Manjari January 1991 (has links)
Developing high-quality software the first time depends greatly upon the ability to judge the potential quality of the software early in the life cycle. The Software Engineering Research Center design metrics research team at Ball State University has developed a metrics approach for analyzing software designs: given a design, these metrics highlight stress points and determine overall design quality. The purpose of this study is to analyze multiple software releases of the Advanced Field Artillery Tactical Data System (AFATDS) using design metrics. The focus is on examining how the design metrics change across three releases of AFATDS, to determine their relationship to the complexity and quality of a maturing system. The software selected as a test case is the Human Interface code from Concept Evaluation Phase releases 2, 3, and 4 of AFATDS. To automate metric collection, a tool called the Design Metric Analyzer was developed. Analysis of the data showed that both the mean and the standard deviation of the metric were higher for release 2, lower for release 3, and higher again for release 4, indicating a decrease in complexity and an improvement in quality from release 2 to release 3, and an increase in complexity in release 4. Dialog with project personnel regarding the design metrics confirmed most of these observations. / Department of Computer Science
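A minimal sketch of the per-release summary described above follows: the mean and standard deviation of a design metric across a release's modules, compared release to release to read the complexity trend. The metric values are fabricated for illustration.

```python
# Per-release mean and standard deviation of a design metric, the
# summary used to compare complexity across maturing releases.
# Metric values are fabricated for illustration.
from statistics import mean, stdev

releases = {
    "CEP release 2": [14.2, 30.1, 9.8, 25.5, 41.0],
    "CEP release 3": [12.0, 18.4, 10.1, 15.2, 16.8],
    "CEP release 4": [16.5, 33.0, 12.2, 28.9, 44.7],
}
for name, values in releases.items():
    print(f"{name}: mean={mean(values):.1f}, stdev={stdev(values):.1f}")
```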
20

Continuing professional education for software quality assurance / CPE for SQA

Hammons, Rebecca L. January 2009 (has links)
This case study examined the self-directed and team-based learning activities of a software quality assurance organization in central Indiana. The skills required to assure a high level of software quality evolve rapidly, and software quality professionals must embrace ongoing technology and process changes. The thirty focus-group participants performed a variety of quality assurance tasks, including configuration management, research, automated test development, test planning and execution, and team leadership. The case study was based on semi-structured interviews with four focus groups of software quality professionals, and it explored the learning styles, preferences, and activities deployed to learn new technologies and solve complex software problems. Software products are becoming increasingly pervasive in our culture, and the study of continuing education for the software quality profession is important because of our increased reliance on the profession to meet customer expectations for high-quality software. The proliferation of software products has also increased the demand for software quality professionals, and those who have access to continuing professional education to improve and maintain their skills are better positioned to meet customer expectations. There is no mandated certification or licensing for the profession; professionals are therefore left to chart their own course of learning. This study sought to understand how software quality professionals meet their continuing professional educational needs, and it identified the key resources required to support such education both within the workplace and off the job. Future study of the role of critical self-reflection in establishing learning objectives could enhance our understanding of how software quality professionals identify and plan their learning activities. Further investigation of the value of computer programming and logic knowledge to the software quality professional would clarify baseline skill requirements for the various roles in the profession. There are also opportunities for future action research on co-location of teams, mentoring, and job-rotation strategies, since employees were found to learn effectively from peers. / Department of Educational Studies
