81
Applying metrics to object-oriented software interacting with a database to ensure its maintainability
Πέρδικα, Πολυτίμη 16 May 2007 (has links)
Software quality is a widely discussed concept today. Although there is no single definition of "software quality", its value is clearly understood, especially through its absence. Software quality assurance is closely tied to the concept of metrics: measurements considered essential for assessing the state of the products, processes and resources involved in software production. By applying metrics to a software product, the characteristics that contribute most to its quality can be measured, and conclusions can be drawn about the degree to which the software fulfils the quality criteria. This thesis presents a methodology for applying metrics to object-oriented software responsible for interacting with a database. Measuring the most important characteristics of this software leads to conclusions about its maintainability and, by extension, its reusability.
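The record above does not reproduce the thesis's metric definitions or the language of the software studied. Purely as an illustration of the kind of raw measurements such a methodology collects, the following sketch uses Python's ast module to compute two simple class-level proxies (a weighted-methods-per-class count and a response-set count) for a hypothetical data-access class; the sample class and the metric formulas are assumptions, not the thesis's actual suite.

```python
import ast

SOURCE = '''
class CustomerDAO:
    """Hypothetical data-access class interacting with a database."""
    def connect(self, dsn):
        self.dsn = dsn
    def find(self, customer_id):
        if customer_id is None:
            raise ValueError("id required")
        return {"id": customer_id}
    def save(self, customer):
        return True
'''

def class_metrics(source):
    """Return simple per-class metrics: a WMC proxy (methods plus decision
    points) and a response-set proxy (methods plus distinct calls made)."""
    tree = ast.parse(source)
    results = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            methods = [n for n in node.body if isinstance(n, ast.FunctionDef)]
            decisions = sum(
                isinstance(n, (ast.If, ast.For, ast.While, ast.ExceptHandler))
                for m in methods for n in ast.walk(m)
            )
            calls = {
                n.func.attr if isinstance(n.func, ast.Attribute) else getattr(n.func, "id", "?")
                for m in methods for n in ast.walk(m) if isinstance(n, ast.Call)
            }
            results[node.name] = {
                "wmc_proxy": len(methods) + decisions,
                "response_set_proxy": len(methods) + len(calls),
            }
    return results

print(class_metrics(SOURCE))
```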
82
Closing the Defect Reduction Gap between Software Inspection and Test-Driven Development: Applying Mutation Analysis to Iterative, Test-First Programming
Wilkerson, Jerod W. January 2008 (has links)
The main objective of this dissertation is to assist in reducing the chaotic state of the software engineering discipline by providing insights into both the effectiveness of software defect reduction methods and ways these methods can be improved. The dissertation is divided into two main parts. The first is a quasi-experiment comparing the software defect rates and initial development costs of two methods of software defect reduction: software inspection and test-driven development (TDD). Participants, consisting of computer science students at the University of Arizona, were divided into four treatment groups and were asked to complete the same programming assignment using either TDD, software inspection, both, or neither. Resulting defect counts and initial development costs were compared across groups. The study found that software inspection is more effective than TDD at reducing defects, but that it also has a higher initial cost of development. The study establishes the existence of a defect-reduction gap between software inspection and TDD and highlights the need to improve TDD because of its other benefits.

The second part of the dissertation explores a method of applying mutation analysis to TDD in order to reduce the defect-reduction gap between the two methods and to make TDD more reliable and predictable. A new change impact analysis algorithm (CHA-AS), based on CHA, is presented and evaluated for applications of software change impact analysis where a predetermined set of program entry points is not available or not known. An estimated average-case complexity analysis indicates that the algorithm's time and space complexity is linear in the size of the program under analysis, and a simulation experiment indicates that the algorithm can capitalize on the iterative nature of TDD to produce cost savings in mutation analysis applied to TDD projects. The algorithm should also be useful in other change impact analysis situations with undefined program entry points, such as code library and framework development. An enhanced TDD method that incorporates mutation analysis is proposed, along with a set of future research directions for developing tools to support mutation-analysis-enhanced TDD and for continuing to improve the TDD method.
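The dissertation's CHA-AS algorithm is not reproduced in this record. As background for the mutation-analysis idea it builds on, here is a hedged sketch that injects trivial single-token mutants into a small function and counts how many are killed by an existing test suite; the function, the mutation operators and the test cases are illustrative assumptions, not the dissertation's tooling.

```python
def source_of_max():
    # Source text of a tiny function under test.
    return "def max2(a, b):\n    return a if a >= b else b\n"

def tests_pass(code_text):
    """Exec the (possibly mutated) source and run a small test suite against it.
    Returns True if every test passes, i.e. the mutant survives."""
    namespace = {}
    exec(code_text, namespace)
    max2 = namespace["max2"]
    try:
        assert max2(3, 5) == 5
        assert max2(7, 2) == 7
        return True
    except AssertionError:
        return False

# Illustrative single-token mutation operators (not a real mutation tool's set).
MUTATIONS = [(">=", "<="), (">=", ">"), ("a if", "b if")]

original = source_of_max()
killed = 0
for old, new in MUTATIONS:
    mutant = original.replace(old, new, 1)
    if mutant != original and not tests_pass(mutant):
        killed += 1  # the test suite detected this injected fault

print(f"mutation score: {killed}/{len(MUTATIONS)} mutants killed")
```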
83
Design metrics analysis of the Harris ROCC project
Perera, Dinesh Sirimal January 1995 (has links)
The Design Metrics Research Team at Ball State University has developed a quality design metric, D(G), which consists of an internal design metric, Di, and an external design metric, De. This thesis discusses applying design metrics to the ROCC (Radar On-line Command Control) project received from Harris Corporation; its main objective is to analyze the behavior of D(G) and of the primitive components of this metric. Error and change history reports are vital inputs for validating the design metrics' performance. Since correct identification of the types of changes and errors is critical for this evaluation, several different types of analyses were performed in an attempt to qualify the metric's performance in each case. This thesis covers the analysis of 666 FORTRAN modules comprising approximately 142,296 lines of code. / Department of Computer Science
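The Ball State definitions of Di, De and D(G) are not given in this record, so the sketch below is only a loose illustration of combining an internal and an external component into one module-level score: the per-module inputs (fan-in, fan-out, parameter and local-variable counts), the component formulas and the unweighted sum are all assumptions, not the published metric.

```python
from dataclasses import dataclass

@dataclass
class ModuleDesignData:
    """Hypothetical per-module design data collected from a FORTRAN module."""
    name: str
    fan_in: int       # number of modules calling this one
    fan_out: int      # number of modules this one calls
    params: int       # parameters passed in and out
    locals_used: int  # local variables referenced

def external_metric(m: ModuleDesignData) -> float:
    # Assumed De proxy: modules heavily coupled to the rest of the design score higher.
    return m.fan_in * m.fan_out + m.params

def internal_metric(m: ModuleDesignData) -> float:
    # Assumed Di proxy: internal data traffic of the module.
    return m.locals_used + m.params

def design_metric(m: ModuleDesignData) -> float:
    # Assumed combination: a plain, unweighted sum of the two components.
    return external_metric(m) + internal_metric(m)

modules = [
    ModuleDesignData("TRKUPD", fan_in=4, fan_out=7, params=6, locals_used=15),
    ModuleDesignData("MSGFMT", fan_in=1, fan_out=2, params=3, locals_used=5),
]
# Rank modules so the highest-scoring (most change-prone) ones surface first.
for m in sorted(modules, key=design_metric, reverse=True):
    print(f"{m.name}: D(G) = {design_metric(m):.1f}")
```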
84
Software testing tools and productivity
Moschoglou, Georgios Moschos January 1996 (has links)
Testing statistics state that testing consumes more than half of a programmer's professional life, although few programmers like testing, fewer like test design, and only 5% of their education is devoted to testing. The main goal of this research is to test the efficiency of two software testing tools. Two experiments were conducted in the Computer Science Department at Ball State University. The first experiment compares two conditions - testing software using no tool and testing software using a command-line based testing tool - in terms of the length of time and the number of test cases needed to achieve 80% statement coverage, for 22 graduate students in the Computer Science Department. The second experiment compares three conditions - testing software using no tool, testing software using a command-line based testing tool, and testing software using an interactive GUI tool with added functionality - in terms of the length of time and the number of test cases needed to achieve 95% statement coverage, for 39 graduate and undergraduate students in the same department. / Department of Computer Science
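Statement coverage, the measure both experiments target, can be approximated in a few lines. The sketch below is a simplified stand-in rather than either of the tools used in the experiments: it traces which lines of a sample function execute under a given set of test cases and reports the covered fraction; the sample function and test cases are hypothetical.

```python
import dis
import sys

def triangle_kind(a, b, c):
    # Function under test.
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

def statement_coverage(func, test_cases):
    """Run func on each test case and report the fraction of its lines executed."""
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        for args in test_cases:
            func(*args)
    finally:
        sys.settrace(None)

    # Approximate the set of executable lines from the compiled code object.
    all_lines = {ln for _, ln in dis.findlinestarts(func.__code__) if ln is not None}
    return len(executed & all_lines) / len(all_lines)

# These two test cases never exercise the equilateral branch.
cov = statement_coverage(triangle_kind, [(3, 4, 5), (3, 3, 5)])
print(f"statement coverage: {cov:.0%}")
```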
85
Neural networks and their application to metrics research
Lin, Burch January 1996 (has links)
In the development of software, time and resources are limited. As a result, developers collect metrics in order to allocate resources more effectively and meet time constraints. For example, if one could collect metrics that determine, with accuracy, which modules are error-prone and which are error-free, one could allocate personnel to work only on the error-prone modules. There are three concerns when using metrics. First, with the many different metrics that have been defined, one may not know which metrics to collect. Second, the amount of metrics data collected can be staggering. Third, interpreting multiple metrics together may provide a better indication of error-proneness than any single metric. This thesis researched the accuracy of a neural network, an unconventional model, in determining whether a module is error-prone from a suite of metrics given as input. The accuracy of the neural network model was compared with the accuracy of a logistic regression model, a standard statistical model with the same input and output. In other words, we attempted to find whether the metrics correlated with error-proneness. The metrics were gathered from three different software projects. The suite of metrics used to build the models was a subset of a larger collection of metrics, reduced using factor analysis. The conclusion of this thesis is that, for the projects analyzed, neither the neural network model nor the logistic regression model provides acceptable accuracy for real use. We cannot conclude whether one model provides better accuracy than the other. / Department of Computer Science
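As a hedged illustration of the comparison described above (not the thesis's data, metric suite or network architecture), the sketch below fits a logistic regression and a small neural network to a made-up matrix of module metrics labelled error-prone or not, and reports each model's accuracy; it assumes scikit-learn and NumPy are available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical per-module metric suite: [lines of code, cyclomatic complexity, fan-out].
X = rng.normal(loc=[200.0, 10.0, 5.0], scale=[80.0, 4.0, 2.0], size=(120, 3))
# Made-up labels: bigger, more complex modules are more often error-prone.
y = (0.004 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0.0, 0.3, 120) > 1.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

logit = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)
net = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                  random_state=0)).fit(X_train, y_train)

print("logistic regression accuracy:", round(logit.score(X_test, y_test), 2))
print("neural network accuracy:     ", round(net.score(X_test, y_test), 2))
```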
86
An examination of the application of design metrics to the development of testing strategies in large-scale SDL models
West, James F. January 2000 (has links)
There exist a number of well-known and validated design metrics, and the fault prediction available through these metrics has been well documented for systems developed in languages such as C and Ada. However, the mapping and application of these metrics to SDL systems has not been thoroughly explored. The aim of this project is to test the applicability of these metrics in classifying components for testing purposes in a large-scale SDL system. A new model has been developed for this purpose. This research was conducted using a number of SDL systems, most notably actual production models provided by Motorola Corporation. / Department of Computer Science
87
Studying the impact of developer communication on the quality and evolution of a software system
Bettenburg, Nicolas 22 May 2014 (has links)
Software development is a largely collaborative effort, of which the actual encoding of program logic in source code is a relatively small part. Software developers have to collaborate effectively and communicate with their peers in order to avoid coordination problems. To date, little is known about how developer communication during software development activities impacts the quality and evolution of a software system.
In this thesis, we present and evaluate tools and techniques to recover communication data from traces of software development activities. With this data, we study the impact of developer communication on the quality and evolution of the software through an in-depth investigation of the role of developer communication during software development activities. Through multiple case studies on a broad spectrum of open-source software projects, we find that communication between developers stands in a direct relationship to the quality of the software. Our findings demonstrate that our models based on developer communication explain software defects as well as state-of-the-art models based on technical information such as code and process metrics, and that social information metrics are orthogonal to these traditional metrics, leading to a more complete and integrated view of software defects. In addition, we find that communication between developers plays an important role in maintaining a healthy contribution management process, which is one of the key factors in the successful evolution of the software. Source code contributors who are part of the community surrounding open-source projects are available for limited times, and long communication times can lead to the loss of valuable contributions.
Our thesis illustrates that software development is an intricate and complex process that is strongly influenced by the social interactions among the stakeholders involved in the development activities. A traditional view based solely on technical aspects of software development, such as source code size and complexity, while valuable, limits our understanding of software development activities. The research presented in this thesis is a first step towards gaining a more holistic view of software development activities. / Thesis (Ph.D, Computing) -- Queen's University, 2014-05-22 12:07:13.823
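The thesis's models and communication metrics are not reproduced in this record. As a hedged illustration of the modelling idea (social metrics used alongside technical ones to explain defects), the sketch below fits two linear regressions on a made-up per-file table and compares their fit; the metric names, the data and the use of scikit-learn are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_files = 200

# Hypothetical per-file measurements.
churn = rng.poisson(30, n_files)            # process metric: lines changed
complexity = rng.poisson(12, n_files)       # code metric
commenters = rng.poisson(4, n_files)        # social metric: distinct people in discussions
reply_delay = rng.gamma(2.0, 12.0, n_files) # social metric: hours until a reply

# Made-up defect counts influenced by both technical and social factors.
defects = (0.05 * churn + 0.1 * complexity
           + 0.3 * commenters + 0.02 * reply_delay
           + rng.normal(0.0, 1.0, n_files))

technical = np.column_stack([churn, complexity])
combined = np.column_stack([churn, complexity, commenters, reply_delay])

r2_technical = LinearRegression().fit(technical, defects).score(technical, defects)
r2_combined = LinearRegression().fit(combined, defects).score(combined, defects)

print(f"R^2, technical metrics only:     {r2_technical:.2f}")
print(f"R^2, technical + social metrics: {r2_combined:.2f}")
```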
88
Evaluating open source stability using metrics
Καλύβα, Δήμητρα 20 September 2010 (has links)
Nowadays, the term "software quality" is becoming more and more popular. Increasing attention is paid to what software quality is, whether and how it can be measured, and whether it is worth knowing, during the development phase, how good a program is. Moreover, open-source software development is improving and evolving at a rapid pace.

This thesis aims to draw conclusions so that the stability of an open-source program can be evaluated using metrics. The program studied was WinMerge, and the metrics of its routines were computed with the help of the Source Monitor tool. First, the routines were classified into categories according to the number of versions in which they had been modified. Then, the average metric values of the routines were computed for each category, and the corresponding diagrams (one per metric) were produced.
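The per-category averaging step described above can be illustrated with a short script. The sketch below uses hypothetical routine names and metric values rather than the thesis's actual Source Monitor measurements: it groups routines by how many versions modified them and prints the average complexity for each group.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (routine, versions_modified_in, cyclomatic_complexity) records,
# standing in for measurements exported from a metrics tool.
routines = [
    ("DiffEngine::Compare", 5, 24),
    ("FileFilter::Match",   1,  6),
    ("MergeDoc::Rescan",    4, 18),
    ("LineFilter::Apply",   1,  4),
    ("DirScan::Walk",       3, 12),
    ("Plugin::Load",        0,  3),
]

# Group routines into categories by the number of versions that touched them.
by_category = defaultdict(list)
for name, versions, complexity in routines:
    by_category[versions].append(complexity)

# Average the metric per category; in the thesis one diagram is drawn per metric.
for versions in sorted(by_category):
    avg = mean(by_category[versions])
    print(f"modified in {versions} version(s): "
          f"{len(by_category[versions])} routine(s), mean complexity {avg:.1f}")
```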
89
A knowledge approach to software testing
Mohamed, Essack 12 1900 (has links)
Thesis (MPhil)--University of Stellenbosch, 2004. / ENGLISH ABSTRACT: The effort to achieve quality is the largest component of software cost. Software testing is costly, ranging from 50% to 80% of the cost of producing a first working version. It is resource-intensive and an intensely time-consuming activity in the overall Systems Development Life Cycle (SDLC), and hence could arguably be the most important phase of the process. Software testing is pervasive: it starts at the initiation of a product with non-execution type testing and continues to the retirement of the product life cycle, beyond the post-implementation phase.

Software testing is the currency of quality delivery. To understand testing and to improve testing practice, it is essential to see the software testing process in its broadest terms: as the means by which people, methodology, tools, measurement and leadership are integrated to test a software product.

A knowledge approach recognises knowledge management (KM) enablers such as leadership, culture, technology and measurements that act in a dynamic relationship with KM processes, namely creating, identifying, collecting, adapting, organizing, applying and sharing. Enabling a knowledge approach is a worthy goal to encourage the sharing and blending of experience, discipline and expertise in order to improve quality and add value to the software testing process.

This research was developed to establish whether specific knowledge, such as domain or business expertise, application or technical skills, and software testing competency, and whether the interaction of the testing team, influences the degree of quality in the delivery of the application under test, or whether one of these is the dominant critical knowledge area within software testing. The research also set out to establish whether there are personal or situational factors that predispose the test engineer to knowledge sharing, again with a view to using these factors to increase the quality and success of the "testing phase" of the SDLC. KM, although relatively youthful, is entering its fourth generation, with evidence of two emerging paradigms: mainstream thinking and complex adaptive systems theory. This research uses pertinent and relevant extracts from both paradigms appropriate to achieving quality and success in software testing.
90
Using formal methods to give guarantees on software properties inside a distribution: the Linux kernel example
Lissy, Alexandre 26 March 2014 (has links)
In this thesis we are interested in adding to the Linux distribution produced by Mandriva a level of quality assurance that makes it possible to guarantee user-defined properties of the code that is shipped. The core work of a distribution and its producer is to create a meaningful aggregate from the software available. That software is free and open source, so it can be adapted to improve the end user's experience, but the distributor has correspondingly less control over the source code. Manual audits can of course be used to make sure the code has good properties; such properties often concern security, but one could think of others. However, more and more software is being integrated into distributions, and each component keeps growing in source code volume, so tools are needed to make quality assurance achievable.

We start with a study of the distribution itself to document its current status, and we use it to select packages that we consider critical and for which improvements are possible, giving priority to packages that are representative enough of the rest of the distribution. This leads us to concentrate on the Linux kernel: we provide a state-of-the-art overview of code verification applied to this part of the distribution, and we identify the need for a better understanding of the structure of the source code, the problem of combinatorial explosion, and the lack of integration of state-of-the-art analysis tools. To address these needs, we propose a graph representation of the source code and use it to help document and understand the architecture of the code. Community detection methods are evaluated on this case as a way of handling the combinatorial explosion. Finally, we propose an architecture, integrated into the distribution's build system, for running code analysis and verification tools and for collecting and handling the data they produce.
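The graph representation and community detection step can be sketched briefly. The example below builds a small directed call graph with networkx from hypothetical caller-callee pairs (not the thesis's extracted kernel data) and groups functions with greedy modularity maximisation, a common community detection algorithm that is not necessarily the one evaluated in the thesis.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical caller -> callee edges, standing in for a call graph
# extracted from kernel source files.
calls = [
    ("vfs_read", "rw_verify_area"), ("vfs_read", "file_pos_read"),
    ("vfs_write", "rw_verify_area"), ("vfs_write", "file_pos_write"),
    ("tcp_sendmsg", "tcp_push"), ("tcp_push", "tcp_write_xmit"),
    ("tcp_recvmsg", "tcp_cleanup_rbuf"), ("tcp_write_xmit", "tcp_transmit_skb"),
    ("sys_read", "vfs_read"), ("sys_write", "vfs_write"),
    ("sys_sendto", "tcp_sendmsg"), ("sys_recvfrom", "tcp_recvmsg"),
]

call_graph = nx.DiGraph(calls)

# Community detection works on the undirected structure of the graph;
# each community should roughly correspond to a subsystem (VFS, TCP, ...).
communities = greedy_modularity_communities(call_graph.to_undirected())

for i, community in enumerate(communities):
    print(f"community {i}: {sorted(community)}")
```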