71

Closing the Defect Reduction Gap between Software Inspection and Test-Driven Development: Applying Mutation Analysis to Iterative, Test-First Programming

Wilkerson, Jerod W. January 2008 (has links)
The main objective of this dissertation is to assist in reducing the chaotic state of the software engineering discipline by providing insights into both the effectiveness of software defect reduction methods and ways these methods can be improved. The dissertation is divided into two main parts. The first is a quasi-experiment comparing the software defect rates and initial development costs of two methods of software defect reduction: software inspection and test-driven development (TDD). Participants, consisting of computer science students at the University of Arizona, were divided into four treatment groups and were asked to complete the same programming assignment using either TDD, software inspection, both, or neither. Resulting defect counts and initial development costs were compared across groups. The study found that software inspection is more effective than TDD at reducing defects, but that it also has a higher initial cost of development. The study establishes the existence of a defect-reduction gap between software inspection and TDD and highlights the need to improve TDD because of its other benefits. The second part of the dissertation explores a method of applying mutation analysis to TDD to reduce the defect-reduction gap between the two methods and to make TDD more reliable and predictable. A new change impact analysis algorithm (CHA-AS) based on CHA is presented and evaluated for applications of software change impact analysis where a predetermined set of program entry points is not available or not known. An estimated average-case complexity analysis indicates that the algorithm's time and space complexity is linear in the size of the program under analysis, and a simulation experiment indicates that the algorithm can capitalize on the iterative nature of TDD to produce a cost savings in mutation analysis applied to TDD projects. The algorithm should also be useful in other change impact analysis situations with undefined program entry points, such as code library and framework development. An enhanced TDD method that incorporates mutation analysis is proposed, along with a set of future research directions for developing tools to support mutation-analysis-enhanced TDD and for continuing to improve the TDD method.
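The core loop of mutation analysis, which the second part of the dissertation applies to TDD, can be sketched in a few lines: mutate one operator in the code under test, re-run the unit tests, and check whether the tests "kill" the mutant. The sketch below is illustrative only; it assumes nothing about Wilkerson's actual tooling or the CHA-AS algorithm, and the function under test is invented for the example.

```python
# Minimal mutation-analysis sketch: mutate '+' to '-' and see whether
# the existing tests detect (kill) the mutant. Illustrative only.
import ast

SOURCE = """
def price_with_tax(price, rate):
    return price + price * rate
"""

class AddToSub(ast.NodeTransformer):
    """Mutation operator: replace the first '+' with '-'."""
    def __init__(self):
        self.done = False
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if not self.done and isinstance(node.op, ast.Add):
            node.op, self.done = ast.Sub(), True
        return node

def tests_pass(ns):
    # The TDD-style test suite: passes on the original; a strong suite
    # should fail on the mutant.
    return ns["price_with_tax"](100, 0.2) == 120

original, mutant = {}, {}
exec(compile(ast.parse(SOURCE), "<orig>", "exec"), original)
mutated = ast.fix_missing_locations(AddToSub().visit(ast.parse(SOURCE)))
exec(compile(mutated, "<mut>", "exec"), mutant)

assert tests_pass(original)                       # original passes
print("mutant killed:", not tests_pass(mutant))   # True -> tests caught it
```

A surviving mutant (one the tests do not kill) points at a gap in the test suite, which is exactly the feedback an enhanced TDD cycle would use.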
72

Design metrics analysis of the Harris ROCC project

Perera, Dinesh Sirimal January 1995 (has links)
The Design Metrics Research Team at Ball State University has developed a quality design metric D(G), which consists of an internal design metric Di and an external design metric De. This thesis discusses applying design metrics to the ROCC (Radar On-line Command Control) project received from Harris Corporation. The main objective of the thesis is to analyze the behavior of D(G) and of the primitive components of this metric. Error and change history reports are vital inputs for validating the metrics' performance. Since correct identification of the types of changes and errors is critical for the evaluation, several different types of analyses were performed in an attempt to qualify the metric's performance in each case. The thesis covers the analysis of 666 FORTRAN modules comprising approximately 142,296 lines of code. / Department of Computer Science
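As a rough illustration of how a composite metric of this shape is applied (the actual Ball State definitions and weightings of D(G), Di, and De are not reproduced here; the weights and threshold below are placeholders), consider:

```python
# Hedged sketch of a composite design metric in the spirit of D(G),
# combining an internal (Di) and an external (De) component. The real
# definitions and weights are not given here; these are illustrative.

def d_g(di: float, de: float, w_i: float = 0.5, w_e: float = 0.5) -> float:
    """Combine internal and external design metric scores."""
    return w_i * di + w_e * de

# Flag modules whose composite score exceeds a chosen review threshold.
modules = {"mod_a": (3.2, 1.1), "mod_b": (7.8, 4.5)}
flagged = {m for m, (di, de) in modules.items() if d_g(di, de) > 4.0}
print(flagged)  # {'mod_b'}
```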
73

Software testing tools and productivity

Moschoglou, Georgios Moschos January 1996 (has links)
Published testing statistics suggest that testing consumes more than half of a programmer's professional life, even though few programmers like testing, fewer like test design, and only about 5% of a programmer's education is devoted to testing. The main goal of this research is to test the efficiency of two software testing tools. Two experiments were conducted in the Computer Science Department at Ball State University. The first experiment compared two conditions - testing software with no tool and testing software with a command-line based testing tool - in terms of the time and number of test cases needed to achieve 80% statement coverage, for 22 graduate students in the Computer Science Department. The second experiment compared three conditions - no tool, a command-line based testing tool, and a GUI interactive tool with added functionality - in terms of the time and number of test cases needed to achieve 95% statement coverage, for 39 graduate and undergraduate students in the same department. / Department of Computer Science
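The dependent variable in both experiments, statement coverage, can be measured programmatically with today's tooling. A minimal sketch using the coverage.py library follows; it is a modern analogue for illustration, not one of the tools evaluated in this 1996 study, and the function under test is invented.

```python
# Sketch: measuring statement coverage with coverage.py (pip install coverage).
import coverage

def triangle_kind(a, b, c):
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

cov = coverage.Coverage()
cov.start()
assert triangle_kind(2, 2, 2) == "equilateral"   # only one test case so far
cov.stop()
pct = cov.report(show_missing=True)   # prints per-file coverage table
print(f"statement coverage: {pct:.0f}%")  # still short of the 80%/95% targets
```

Adding test cases for the isosceles and scalene branches would raise the reported percentage, which mirrors the effort the participants spent reaching the coverage targets.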
74

Neural networks and their application to metrics research

Lin, Burch January 1996 (has links)
In the development of software, time and resources are limited. As a result, developers collect metrics in order to allocate resources more effectively and meet time constraints. For example, if one could collect metrics that determine, with accuracy, which modules are error-prone and which are error-free, one could assign personnel to work only on the error-prone modules. There are three concerns when using metrics. First, with the many different metrics that have been defined, one may not know which metrics to collect. Second, the amount of metrics data collected can be staggering. Third, interpreting multiple metrics together may indicate error-proneness better than any single metric. This thesis investigated the accuracy of a neural network, an unconventional model, in determining whether a module is error-prone from a suite of metrics as input. The accuracy of the neural network model was compared with that of a logistic regression model, a standard statistical model, with the same inputs and output. In other words, we attempted to find out whether the metrics correlated with error-proneness. The metrics were gathered from three different software projects. The suite of metrics used to build the models was a subset of a larger collection of metrics, reduced using factor analysis. The conclusion of this thesis is that, for the projects analyzed, neither the neural network model nor the logistic regression model provides acceptable accuracy for real use, and we cannot conclude that one model is more accurate than the other. / Department of Computer Science
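The modeling setup can be sketched with today's libraries (which postdate the thesis). The sketch below uses synthetic data and assumes the metric suite has already been factor-reduced; it is not the thesis's actual data or configuration.

```python
# Sketch of the comparison: logistic regression vs. a small neural network,
# both predicting module error-proneness from a factor-reduced metric suite.
# Synthetic data; the thesis's real projects and metrics are not used.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 4))          # 4 factor-reduced metrics per module
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=300) > 0).astype(int)

for name, model in [("logistic", LogisticRegression()),
                    ("neural net", MLPClassifier(hidden_layer_sizes=(8,),
                                                 max_iter=2000,
                                                 random_state=0))]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:10s} mean accuracy: {acc:.2f}")
```

With noisy labels like these, both models land at similar, modest accuracies, which is the same qualitative outcome the thesis reports for its real projects.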
75

An examination of the application of design metrics to the development of testing strategies in large-scale SDL models

West, James F. January 2000 (has links)
A number of well-known, validated design metrics exist, and the fault prediction they provide has been well documented for systems developed in languages such as C and Ada. However, the mapping and application of these metrics to SDL systems has not been thoroughly explored. The aim of this project is to test the applicability of these metrics for classifying components for testing purposes in a large-scale SDL system. A new model has been developed for this purpose. The research was conducted using a number of SDL systems, most notably actual production models provided by Motorola Corporation. / Department of Computer Science
76

Studying the impact of developer communication on the quality and evolution of a software system

Bettenburg, Nicolas 22 May 2014 (has links)
Software development is a largely collaborative effort, of which the actual encoding of program logic in source code is a relatively small part. Software developers have to collaborate effectively and communicate with their peers to avoid coordination problems. To date, little is known about how developer communication during software development activities impacts the quality and evolution of the software. In this thesis, we present and evaluate tools and techniques to recover communication data from traces of software development activities. With this data, we study the impact of developer communication on the quality and evolution of the software through an in-depth investigation of the role of developer communication during development activities. Through multiple case studies on a broad spectrum of open-source software projects, we find that communication between developers is directly related to the quality of the software. Our findings demonstrate that models based on developer communication explain software defects as well as state-of-the-art models based on technical information such as code and process metrics, and that social information metrics are orthogonal to these traditional metrics, leading to a more complete and integrated view of software defects. In addition, we find that communication between developers plays an important role in maintaining a healthy contribution management process, one of the key factors in the successful evolution of the software. Source code contributors who are part of the community surrounding open-source projects are available for limited periods, and long communication delays can lead to the loss of valuable contributions. Our thesis illustrates that software development is an intricate and complex process that is strongly influenced by the social interactions between the stakeholders involved in the development activities. A traditional view based solely on technical aspects of software development, such as source code size and complexity, while valuable, limits our understanding of software development activities. The research presented in this thesis is a first step toward a more holistic view of software development activities. / Thesis (Ph.D, Computing) -- Queen's University, 2014
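The claim that social metrics complement technical ones can be illustrated with a toy defect-prediction setup. The sketch below uses synthetic data and assumed feature meanings; it is not Bettenburg's actual pipeline or datasets.

```python
# Toy illustration: technical metrics, social metrics, and both combined,
# each feeding a defect-prediction model. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
technical = rng.normal(size=(n, 2))   # e.g., code churn, complexity
social = rng.normal(size=(n, 2))      # e.g., message count, reply delay
# Defect label depends on one technical and one social signal plus noise.
y = (technical[:, 0] + social[:, 0] + rng.normal(size=n) > 0).astype(int)

for name, X in [("technical only", technical),
                ("social only", social),
                ("combined", np.hstack([technical, social]))]:
    acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
    print(f"{name:14s} mean accuracy: {acc:.2f}")
```

When the two feature families carry independent (orthogonal) signal, the combined model outperforms either alone, which is the structure of the thesis's finding.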
77

Evaluation of open source stability using metrics

Καλύβα, Δήμητρα 20 September 2010 (has links)
In recent years, the term "software quality" has become increasingly popular. Growing attention is being paid to what software quality is, whether and how it can be measured, and whether it is worth knowing, during the development phase, how good a program's quality is. At the same time, open source software development is improving and evolving rapidly. This thesis aims to draw conclusions that allow the stability of an open source program to be evaluated using metrics. The program studied was Win Merge, and the metrics of its routines were computed with the Source Monitor program. First, the routines were sorted into categories according to the number of versions in which they had been modified. Then the averages of the routines' metrics were computed for each category, and the corresponding diagrams were produced (one per metric).
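The grouping-and-averaging step of the analysis can be sketched as follows; the column names and values are assumed for illustration and are not taken from the thesis data.

```python
# Sketch of the thesis's analysis: categorize routines by the number of
# versions in which they changed, then average each metric per category.
import pandas as pd

routines = pd.DataFrame({
    "routine": ["a", "b", "c", "d", "e"],
    "versions_modified": [1, 1, 3, 3, 5],   # how many releases touched it
    "complexity": [4, 6, 12, 10, 20],
    "max_depth": [2, 3, 4, 5, 6],
})
per_category = (routines
                .groupby("versions_modified")[["complexity", "max_depth"]]
                .mean())
print(per_category)  # one row per category; the thesis plots one chart per metric
```

A rising trend of average complexity with modification count would suggest that less stable routines are also the more complex ones, which is the kind of relationship the diagrams are meant to reveal.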
78

A knowledge approach to software testing

Mohamed, Essack 2004 (has links)
Thesis (MPhil)--University of Stellenbosch, 2004. / ENGLISH ABSTRACT: The effort to achieve quality is the largest component of software cost. Software testing is costly, ranging from 50% to 80% of the cost of producing a first working version. It is a resource-intensive and intensely time-consuming activity in the overall Systems Development Life Cycle (SDLC), and hence arguably the most important phase of the process. Software testing is pervasive: it starts at the initiation of a product with non-execution-type testing and continues past the post-implementation phase to the retirement of the product life cycle. Software testing is the currency of quality delivery. To understand testing and to improve testing practice, it is essential to see the software testing process in its broadest terms: as the means by which people, methodology, tools, measurement and leadership are integrated to test a software product. A knowledge approach recognises knowledge management (KM) enablers such as leadership, culture, technology and measurements that act in a dynamic relationship with KM processes, namely creating, identifying, collecting, adapting, organizing, applying, and sharing. Enabling a knowledge approach is a worthy goal for encouraging the sharing and blending of experiences, discipline and expertise to achieve improvements in quality and to add value to the software testing process. This research was undertaken to establish whether specific knowledge (domain subject matter or business expertise, application or technical skills, software testing competency) and the interaction of the testing team influence the degree of quality in the delivery of the application under test, or whether one of these is the dominant critical knowledge area within software testing. The research also set out to establish whether there are personal or situational factors that predispose the test engineer to knowledge sharing, again with the view of using these factors to increase the quality and success of the 'testing phase' of the SDLC. KM, although relatively youthful, is entering its fourth generation, with evidence of two paradigms emerging: mainstream thinking and complex adaptive systems theory. This research draws pertinent and relevant elements from both paradigms to improve quality and success in software testing.
79

Utilisation de méthodes formelles pour garantir des propriétés de logiciels au sein d'une distribution : exemple du noyau Linux. / Using formal methods to guarantee software properties within a distribution: the example of the Linux kernel

Lissy, Alexandre 26 March 2014 (has links)
In this thesis we are interested in integrating into the Linux distribution produced by Mandriva a quality assurance process that can offer guarantees about properties of the shipped code. Producing a distribution involves assembling software of diverse origins into a coherent whole that adds value for the user, which means less direct control over the source code. A manual audit can ensure that the code has good properties, for example with respect to security, but the growing number of components to integrate, and the growing size of each, call for tools that make quality assurance achievable. After a study of the distribution itself, we choose to concentrate on one critical package, the Linux kernel: we survey the state of the art in verification methods applied to this particular context, and we identify the need to better understand the structure of the source code, the problem of combinatorial explosion, and the lack of integration of state-of-the-art analysis tools. To address these needs, we propose representing the source code as a graph and use this representation to help document and understand the code's architecture. Community detection methods are evaluated on this case to address the combinatorial explosion. Finally, we propose an architecture integrated into the distribution's build system for running code analysis and verification tools and for collecting and handling the data they produce.
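The idea of applying community detection to a source-code graph can be sketched with a toy call graph. The node names below are hypothetical, and the choice of algorithm (greedy modularity) is an assumption for illustration; the thesis evaluates several state-of-the-art methods.

```python
# Sketch: represent code as a graph (here, a tiny fake call graph) and
# recover its subsystem structure via community detection.
# Requires: pip install networkx
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

g = nx.Graph()
g.add_edges_from([
    ("sched.c:fork", "sched.c:wake"),
    ("sched.c:wake", "sched.c:switch"),
    ("fs.c:open", "fs.c:read"),
    ("fs.c:read", "fs.c:close"),
    ("sched.c:switch", "fs.c:open"),   # weak coupling between subsystems
])

for i, community in enumerate(greedy_modularity_communities(g)):
    print(f"community {i}: {sorted(community)}")
```

On a real kernel-sized graph, the recovered communities give analysts bounded regions to verify separately, which is how the thesis proposes to tame the combinatorial explosion.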
80

Automatização de testes em equipes ágeis: um estudo qualitativo usando teoria fundamentada. / Test automation in agile teams: a qualitative study using grounded theory.

ALVES, Gabriella Mayara Tavares. 08 August 2018 (has links)
With the increasing adoption of agile practices, testing activities must adapt to the early absorption and implementation of requirements. System test automation for web, desktop, and mobile applications is widely used to improve software quality, since automated tests can be run more frequently than manual ones. However, the cost of maintaining automated test scripts is considered high, and teams often lack people who specialize in system test automation. There are few reports in the literature on the gaps that prevent agile teams from fully exploiting the advantages of system test automation. Through an empirical study based on semi-structured interviews and Grounded Theory, this work collects and analyzes data about system test automation practices in agile teams, in order to identify practices that indicate the best moment to start writing automation scripts. It also aims to contribute to the literature and, consequently, to provide a theoretical basis for future improvement proposals. The study identified common system test automation practices in development teams, such as: starting to write automated test scripts only after a few manual executions of the test cases, and once the functional requirement has become stable; planning the creation of automated scripts to start from the features whose manual test cases were executed in the previous Sprint; and managing change requests from the client so that automation is replanned when a request impacts features already implemented. The results were structured using the principles of Grounded Theory, through analysis of the interviews conducted for data collection.
