51

Análise de mutantes em aplicações SQL de banco de dados / Mutation analysis for SQL database applications

Cabeça, Andrea Gonçalves 15 August 2018 (has links)
Advisors: Mario Jino, Plinio de Sa Leitão Junior / Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Previous issue date: 2009 / Abstract: Testing database applications is crucial for ensuring high-quality software, as undetected faults can result in unrecoverable data corruption. SQL is the most widely used interface language for relational database systems. Our approach aims to achieve better tests by selecting fault-revealing databases. We use mutation analysis on SQL statements and discuss two scenarios for applying strong and weak mutation techniques. A tool to support the automation of the technique has been developed and implemented. Experiments using real applications, real faults and real data were performed to: (i) evaluate the applicability of the approach; and (ii) compare the fault-revealing abilities of input databases / Master's / Computer Engineering / Master in Electrical Engineering
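
To make the technique concrete: mutation analysis on SQL statements generates small syntactic variants (mutants) of a query and asks whether a given test database "kills" each mutant, i.e. makes it return a different result from the original. The sketch below is a minimal illustration under stated assumptions, not the dissertation's tool; the operator set, the `kill_score` helper, and the toy schema are invented for the example, and all mutants here happen to stay syntactically valid.

```python
import sqlite3

# Hypothetical mutation operators: swap each relational operator for
# alternatives (a tiny subset of real SQL mutation operator sets).
RELOP_MUTATIONS = {">": ["<", ">=", "="], "<": [">", "<=", "="], "=": ["<", ">"]}

def generate_mutants(sql):
    """Yield mutant statements, each differing in one relational operator."""
    for i, ch in enumerate(sql):
        for replacement in RELOP_MUTATIONS.get(ch, []):
            yield sql[:i] + replacement + sql[i + 1:]

def kill_score(db, original_sql):
    """Fraction of mutants whose result differs from the original query's
    result on this database -- a proxy for the database's fault-revealing
    ability."""
    original = db.execute(original_sql).fetchall()
    mutants = list(generate_mutants(original_sql))
    killed = sum(1 for m in mutants if db.execute(m).fetchall() != original)
    return killed / len(mutants) if mutants else 0.0

# Toy database standing in for a real application database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, total REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 5.0), (2, 50.0), (3, 50.0)])
print(kill_score(db, "SELECT id FROM orders WHERE total > 10"))
```

Comparing `kill_score` across candidate input databases corresponds to the abstract's goal of selecting the most fault-revealing one.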
52

An assessment of open source promotion in addressing ICT acceptance challenges in Tanzania

Kinyondo, Josephat 02 1900 (has links)
Developing countries like Tanzania face challenges in the utilization and acceptance of ICT, calling for further research on the concept. Open Source (OS) usage is a potential strategy for addressing such challenges. However, the success of this strategy relies strongly on the strength of the promotional efforts. The study therefore aims to assess the OS promotional efforts in relation to ICT acceptance challenges in Tanzania. This study entailed descriptive, mixed-methods research. A literature analysis, document analysis and observations of OS community activities were conducted in order to list the ICT acceptance challenges. The results formed a basis for survey and interview questions. The findings obtained were triangulated to determine the existing OS promotional activities and to assess the effectiveness of the promotional efforts in addressing ICT acceptance challenges in Tanzania. The study also makes recommendations on how OS promotional efforts should be changed to improve their effectiveness. / Computing / M.Sc. (Information Systems)
53

Detect and Repair Errors for DNN-based Software

Tian, Yuchi January 2021 (has links)
Nowadays, deep neural network (DNN) based software has been widely applied in many areas, including safety-critical ones such as traffic control, medical diagnosis and malware detection. However, the software engineering techniques that are supposed to guarantee its functionality, safety and fairness are not well studied. For example, several serious crashes of DNN based autonomous cars have been reported; these crashes could have been avoided if the DNN based software had been well tested. Traditional software testing, debugging or repairing techniques do not work well on DNN based software because there is no control flow, data flow or AST (Abstract Syntax Tree) in deep neural networks. Proposing software engineering techniques targeted at DNN based software is therefore imperative. In this thesis, we first introduce the development of the SE (Software Engineering) for AI (Artificial Intelligence) area and how our work has influenced the advancement of this new area. Then we summarize related work and some important concepts in the SE for AI area. Finally, we discuss four important works of ours. Our first project, DeepTest, is one of the first few papers proposing systematic software testing techniques for DNN based software. We proposed neuron-coverage-guided image synthesis techniques for DNN based autonomous cars and leveraged domain-specific metamorphic relations to generate oracles for newly generated test cases, automatically testing DNN based software. We applied DeepTest to three top-performing self-driving car models from the Udacity self-driving car challenge, and our tool identified thousands of erroneous behaviors that may lead to potentially fatal crashes. In the DeepTest project, we found that natural variations such as spatial transformations or rain/fog effects lead to problematic corner cases for DNN based self-driving cars. In the follow-up project, DeepRobust, we studied the per-point robustness of deep neural networks under natural variation. We found that for a DNN model, some specific weak points are more likely than others to cause erroneous outputs under natural variation. We proposed a white-box approach and a black-box approach to identify these weak data points, and implemented and evaluated them on 9 DNN based image classifiers and 3 DNN based self-driving car models. Our approaches successfully detect weak points with good precision and recall for both DNN based image classifiers and self-driving cars. Most existing works in the SE for AI area, including our DeepTest and DeepRobust, focus on instance-wise errors: single inputs that result in a DNN model's erroneous outputs. Different from instance-wise errors, group-level errors reflect a DNN model's weak performance in differentiating among certain classes, or its inconsistent performance across classes. This type of error is very concerning, since it has been found to be related to many notorious real-world failures that involve no malicious attacker. In our third project, DeepInspect, we first introduced group-level errors for DNN based software and categorized them into confusion errors and bias errors based on real-world reports. We then proposed a neuron-coverage-based distance metric to detect group-level errors for DNN based software without requiring labels. We applied DeepInspect to 8 pretrained DNN models trained on 6 popular image classification datasets, including three adversarially trained models. We showed that DeepInspect can successfully detect group-level violations for both single-label and multi-label classification models with high precision. As a follow-up and more challenging research project, we proposed five WR (weighted regularization) techniques to repair group-level errors for DNN based software. These five weighted regularization techniques function at different stages of retraining or inference of DNNs, including the input, layer, loss and output phases. We compared and evaluated the five WR techniques on both single-label and multi-label classification, across five combinations of four DNN architectures and four datasets. We showed that WR can effectively fix confusion and bias errors, and that these methods each have their pros, cons and applicable scenarios. The four projects discussed in this thesis solve important problems in ensuring the functionality, safety and fairness of DNN based software and have significantly influenced the advancement of the SE for AI area.
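
The neuron coverage metric that guides test generation in DeepTest can be sketched as follows: the fraction of neurons whose activation exceeds a threshold on at least one test input. This is a toy reimplementation of the metric only, under stated assumptions; the two-layer network, the threshold of 0.5, and the random test inputs are invented stand-ins, not the thesis's models or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy fully-connected network standing in for a real DNN under test.
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def activations(x):
    """Return the post-ReLU activations of each hidden layer for input x."""
    h1 = np.maximum(0, W1 @ x + b1)
    h2 = np.maximum(0, W2 @ h1 + b2)
    return [h1, h2]

def neuron_coverage(test_inputs, threshold=0.5):
    """Fraction of neurons activated above `threshold` by at least one test
    input -- the criterion coverage-guided generation tries to increase."""
    covered = None
    for x in test_inputs:
        acts = np.concatenate(activations(x))
        fired = acts > threshold
        covered = fired if covered is None else covered | fired
    return covered.mean()

tests = [rng.normal(size=4) for _ in range(10)]
print(f"neuron coverage: {neuron_coverage(tests):.2%}")
```

A synthesized input (e.g. a rain-augmented image) is kept only if it raises this coverage, which is how the metric steers exploration toward untested network behavior.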
54

Automated GUI Tests Generation for Android Apps Using Q-learning

Koppula, Sreedevi 05 1900 (has links)
Mobile applications are growing in popularity and pose new problems in the area of software testing. In particular, mobile applications depend heavily upon user interactions and a dynamically changing environment of system events. In this thesis, we focus on user-driven events and use Q-learning, a reinforcement learning algorithm, to generate tests for Android applications under test (AUT). We implement a framework that automates the generation of GUI test cases using our Q-learning approach and compare it to a uniform random (UR) implementation. A novel feature of our approach is that we generate user-driven event sequences through the GUI, without the source code or a model of the AUT. Hence, a considerable amount of cost and time is saved by avoiding the need for model generation when generating the tests. Our results show that the systematic path exploration used by Q-learning results in higher average code coverage in comparison to the uniform random approach.
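
A minimal sketch of how a Q-learning agent can drive this kind of GUI exploration is shown below. It is an illustration under stated assumptions, not the thesis framework: the state abstraction (screens), the reward (reaching a previously unseen screen), the hyperparameters, and the `ToyApp` driver interface are all invented for the example.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

def generate_tests(env, episodes=50, max_steps=30):
    """Learn event sequences that explore new screens; each episode's event
    trace is a candidate GUI test case."""
    Q = defaultdict(float)  # Q[(screen, event)] -> estimated value
    seen, tests = set(), []
    for _ in range(episodes):
        s, trace = env.reset(), []
        for _ in range(max_steps):
            events = env.available_events(s)
            if random.random() < EPSILON:
                a = random.choice(events)                  # explore
            else:
                a = max(events, key=lambda e: Q[(s, e)])   # exploit
            s2 = env.fire(a)                               # execute event on the AUT
            reward = 1.0 if s2 not in seen else 0.0        # novelty reward
            seen.add(s2)
            best_next = max((Q[(s2, e)] for e in env.available_events(s2)),
                            default=0.0)
            Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
            trace.append(a)
            s = s2
        tests.append(trace)
    return tests

class ToyApp:
    """Stand-in for a real AUT driver: three screens linked by button events."""
    GRAPH = {"main": {"settings": "settings", "help": "help"},
             "settings": {"back": "main"},
             "help": {"back": "main"}}
    def reset(self):
        self.screen = "main"
        return self.screen
    def available_events(self, screen):
        return list(self.GRAPH[screen])
    def fire(self, event):
        self.screen = self.GRAPH[self.screen][event]
        return self.screen

print(generate_tests(ToyApp(), episodes=5, max_steps=6)[0])
```

Because the agent only observes screens and fires events, no source code or prebuilt model of the AUT is needed, matching the abstract's key claim.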
55

An Empirical Study of Software Debugging Games with Introductory Students

Reynolds, Lisa Marie 08 1900 (has links)
Bug Fixer is a web-based application that complements lectures with hands-on exercises that encourage students to think about the logic in programs. Bug Fixer presents students with code that has several bugs that they must fix. The process of fixing the bugs forces students to think conceptually about the code and reinforces their understanding of the logic behind algorithms. In this work, we conducted a study using Bug Fixer with undergraduate students in the CSCE1040 course at the University of North Texas to evaluate whether the system increases their conceptual understanding of the algorithms and improves their software testing skills. Students participated in weekly activities to fix bugs in code. Most students enjoyed Bug Fixer and recommended the system for future use. Students typically reported a better understanding of the algorithms used in class. We observed a slight increase in passing grades for students who participated in our study compared to students in other sections of the course with the same instructor who did not participate. The students who did not report a positive experience provided comments for future improvements that we plan to address in future work.
56

Using Explicit State Space Enumeration For Specification Based Regression Testing

Chakrabarti, Sujit Kumar 01 1900 (has links)
Regression testing of an evolving software system can involve significant challenges: while there is a requirement to maximise the probability of discovering whether the latest changes to the system have broken some existing feature, this needs to be done as economically as possible. A particularly important class of software systems is API libraries. Such libraries typically constitute a very important component of many software systems, and high quality requirements make it imperative to continually optimise their internal implementation without affecting the external interface. Therefore, it is preferable to guide the regression testing by some kind of formal specification of the library. The testing problem comprises three parts: computation of test data, execution of tests, and analysis of test results. Current research mostly focuses on the first part. The objective of test data computation is to maximise the probability of uncovering bugs, and to do so with as few test cases as possible. The problem of test data computation for regression testing is to select a subset of the original test suite which, when run, would suffice to test for bugs probably introduced by the modifications made after the last round of testing. A variant of this problem is the regression testing of API libraries. The regression testing of an API is usually done by making function calls in such a way that the resulting sequence of function calls satisfies a test specification, which in turn embodies some concept of completeness. In this thesis, we focus on the problem of test sequence computation for the regression testing of API libraries. At the heart of this method lies the creation of a state space model of the API library by reverse engineering it through execution of the system, with guidance from a formal API specification. Once the state space graph is obtained, it is used to compute test sequences satisfying a test specification. We analyse the theoretical complexity of the problem of test sequence computation and provide various heuristic algorithms for it. State space explosion is a classical problem encountered whenever there is an attempt to create a finite state model of a program, and our method also faces this limitation. We explore a simple and intuitive method of ameliorating this problem – simply reducing the size of the state vector – and develop theoretical insights into it. We also present experimental results indicating the practical effectiveness of this method. Finally, we bring all this together in the design and implementation of a tool called Modest.
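
Once such a state space graph is available, computing test sequences from it can be sketched as below. This is an illustrative transition-coverage heuristic under stated assumptions, not Modest itself: the thesis's actual test specifications are not given here, so "cover every transition at least once" and the toy bounded-stack API are invented for the example.

```python
from collections import deque

def test_sequences(graph, start):
    """Greedily emit API-call sequences until every transition (state, call)
    in `graph` is covered; each sequence is replayed from the start state.

    graph: dict mapping state -> {api_call: next_state}
    """
    uncovered = {(s, c) for s, calls in graph.items() for c in calls}
    sequences = []
    while uncovered:
        # Breadth-first search from `start` to the nearest uncovered transition.
        parents, queue, target = {start: None}, deque([start]), None
        while queue and target is None:
            s = queue.popleft()
            for call, s2 in graph[s].items():
                if (s, call) in uncovered:
                    target = (s, call)
                    break
                if s2 not in parents:
                    parents[s2] = (s, call)
                    queue.append(s2)
        if target is None:
            break  # remaining transitions are unreachable from the start state
        # Reconstruct the call sequence leading to and through the target.
        seq, s = [], target[0]
        while parents[s] is not None:
            prev, call = parents[s]
            seq.append(call)
            s = prev
        seq.reverse()
        seq.append(target[1])
        # Mark every transition exercised by this sequence as covered.
        s = start
        for call in seq:
            uncovered.discard((s, call))
            s = graph[s][call]
        sequences.append(seq)
    return sequences

# Toy state space of a bounded stack API (capacity 2).
stack = {"empty": {"push": "one"},
         "one": {"push": "full", "pop": "empty"},
         "full": {"pop": "one"}}
print(test_sequences(stack, "empty"))
```

The state space explosion discussed above shows up here directly: the number of (state, call) pairs to cover grows with the size of the state vector, which is why reducing that vector shrinks the testing effort.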
57

Influência da revisão de atividades executadas para melhoria da acurácia na estimativa de software utilizando planning poker / Influence of the reviewing of executed activities to improve accuracy using planning poker

Tissot, André Augusto 21 August 2015 (has links)
Abstract – Background – The software effort estimation research area aims to improve the accuracy of effort estimates for software projects and activities. Aims – This study describes the development and usage of a web application to collect data generated during the Planning Poker estimation process, and the analysis of the collected data to investigate the impact of reviewing previous estimates when conducting similar estimates in a Planning Poker context. Method – Software activities were estimated by Universidade Tecnológica Federal do Paraná (UTFPR) computer students, using Planning Poker, with and without reviewing previous similar activities, storing data regarding the decision-making process. The collected data were used to investigate the impact that reviewing similar executed activities has on the accuracy of software effort estimates. Obtained Results – The UTFPR computer students were divided into 14 groups. Eight of them showed an accuracy increase in more than half of their estimates. Three of them had almost the same accuracy in more than half of their estimates. And only three of them had a loss of accuracy in more than half of their estimates. Conclusion – Reviewing similar executed software activities when using Planning Poker led to more accurate software estimates in most cases and can therefore improve the software development process.
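
One common way to quantify the per-estimate accuracy comparison reported above is the magnitude of relative error (MRE), a standard effort-estimation metric; whether this thesis used MRE specifically is not stated here, and the numbers below are invented, not the study's data.

```python
def mre(estimate, actual):
    """Magnitude of relative error: |actual - estimate| / actual."""
    return abs(actual - estimate) / actual

# Invented example data: per-activity estimates (hours) made without and
# with review of similar executed activities, versus actual effort.
activities = [
    # (estimate without review, estimate with review, actual effort)
    (8.0, 6.0, 5.0),
    (3.0, 5.0, 6.0),
    (13.0, 9.0, 10.0),
]

improved = sum(
    mre(with_rev, actual) < mre(without, actual)
    for without, with_rev, actual in activities
)
print(f"{improved}/{len(activities)} estimates improved after review")
```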
58

Integration testing of object-oriented software

Skelton, Gordon William 08 1900 (has links)
This thesis examines integration testing of object-oriented software. The process of integrating and testing procedural programs is reviewed as a foundation for testing object-oriented software. The complexity of object-oriented software is examined, and the relationship between integration testing and the software development life cycle is presented. Scenarios are discussed which account for the introduction of defects into the software. The Unified Modeling Language (UML) is chosen for representing pre-implementation and post-implementation models of the software. A demonstration is presented of the technique of using post-implementation models, representing the logical and physical views, as an aid in integration and system testing of the software. The use of UML diagrams developed from the software is suggested as a technique for integration testing of object-oriented software. The need for automating the data collection and model building is recognized. The technique is integrated into the Revised Spiral Model for Object-Oriented Software Development developed by du Plessis and van der Walt. / Computing / D.Phil. (Computer Science)