61

Analysis of Perturbation-based Testing Methodology as applied to a Real-Time Control System Problem

Stutler, Richard A 01 January 2005
Perturbation analysis is a software analysis technique used to study the tail function of a program by inserting an error into an executing program through data-state mutation; the impact of the induced error on the output is then measured. This methodology can be used to evaluate the effectiveness of a given test set and, in fact, to derive a test set that provides coverage for a given program. Previous research has shown a "coupling effect": test sets that identify simple errors will also identify more complex errors. The research therefore indicates that this methodology facilitates the generation of test sets that detect a wide range of possible faults. This research applies a perturbation analysis technique to the Cell Pre-selection algorithm as used in the Tomahawk Weapons Control System.
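To make the technique concrete, here is a minimal Python sketch of perturbation testing via data-state mutation, under stated assumptions: compute_guidance is an invented toy computation standing in for the real control algorithm, and the perturbation scheme (one random additive delta per run) is illustrative only, not the thesis's actual procedure.

import random

def compute_guidance(waypoints):
    # Toy stand-in for a control computation: total leg length of a route.
    return sum(abs(b - a) for a, b in zip(waypoints, waypoints[1:]))

def perturb_run(waypoints, index, delta):
    # Re-run the computation with one internal data state mutated.
    mutated = list(waypoints)
    mutated[index] += delta  # inject the error into the data state
    return compute_guidance(mutated)

def test_set_sensitivity(test_set, trials=100):
    # Fraction of injected perturbations whose effect reaches the output.
    detected, total = 0, 0
    for waypoints in test_set:
        baseline = compute_guidance(waypoints)
        for _ in range(trials):
            index = random.randrange(len(waypoints))
            delta = random.choice([-1.0, 1.0]) * random.uniform(0.1, 5.0)
            total += 1
            if perturb_run(waypoints, index, delta) != baseline:
                detected += 1
    return detected / total

tests = [[0.0, 3.0, 7.0], [1.0, 1.0, 1.0, 1.0]]
print(f"perturbation sensitivity: {test_set_sensitivity(tests):.2f}")

A test set whose runs rarely propagate injected errors to the output would, on this view, be a weak candidate for fault detection.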
62

  • Porovnání komerčních a open source nástrojů pro testování softwaru / Commercial and open source software testing tool comparison

Štolc, Robin January 2010
The subject of this thesis is software testing tools, specifically tools for manual testing, automated testing, bug tracking and test management. The aim of the thesis is to introduce the reader to several testing tools from each category and to compare them. This objective relates to the secondary aim of creating a set of criteria for comparing testing tools. The contribution of this thesis is the description and comparison of the chosen testing tools and the creation of a set of criteria that can be used to compare any other testing tools. The thesis is divided into four main parts. The first part briefly describes the theoretical foundations of software testing, the second part describes the various categories of testing tools and their role in the testing process, the third part defines the comparison method and comparison criteria, and in the last, fourth part the selected testing tools are described and compared.
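As a sketch of how such a criteria-based comparison might be operationalized, the following Python fragment scores tools against weighted criteria; the criteria names, weights, and per-tool scores are invented placeholders, not the thesis's actual criteria set.

# Illustrative weighted-criteria comparison; all values are placeholders.
WEIGHTS = {"price": 0.20, "ease_of_use": 0.30, "integration": 0.25, "reporting": 0.25}

TOOLS = {
    "CommercialTool": {"price": 2, "ease_of_use": 5, "integration": 4, "reporting": 5},
    "OpenSourceTool": {"price": 5, "ease_of_use": 3, "integration": 4, "reporting": 3},
}

def weighted_score(scores):
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

for name, scores in sorted(TOOLS.items(), key=lambda item: -weighted_score(item[1])):
    print(f"{name}: {weighted_score(scores):.2f}")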
63

A framework for specifying business rules based on logic with a syntax close to natural language

Roettenbacher, Christian Wolfgang January 2017
The systematic interaction of software developers with business domain experts, who are usually not software developers themselves, is crucial to software system creation and maintenance, and has surfaced as a big challenge of modern software engineering. Existing frameworks promoting typical programming languages with artificial syntax are suitable for processing by computers but do not cater to domain experts, who are used to documents written in natural language as a means of interaction. Other frameworks that claim to be fully automated, such as those using natural language processing, are too imprecise to handle typical requirements documents written in heterogeneous flavours of natural language. In this thesis, a framework is proposed that supports the specification of business rules in a form that is, on the one hand, understandable for non-programmers and, on the other, semantically founded, enabling processing by computers. This is achieved by the novel language Adaptive Business Process and Rule Integration Language (APRIL). Specifications in APRIL can be written in a style close to natural language and are thus suitable for humans, which was empirically evaluated with a representative group of test persons. A useful and uncommon feature of APRIL is the ability to define reusable abstract mixfix operators as sentence patterns that can mimic natural language. The semantic underpinning of the mixfix operators is achieved by customizable atomic formulas, allowing APRIL to be tailored to specific domains. Atomic formulas are underpinned by a denotational semantics based on Tempura (an executable subset of Interval Temporal Logic (ITL)) to describe behaviour and the Object Constraint Language (OCL) to describe invariants and pre- and postconditions. APRIL statements can be used as the basis for automatically generating test code for software systems. An additional aspect of enhancing the quality of specification documents comes with a novel formal-method technique (ISEPI), applicable to behavioural business rules semantically based on Propositional Interval Temporal Logic (PITL) and complying with the newly discovered 2-to-1 property. This work shows how the ISE subset of ISEPI can be used to express complex behavioural business rules in a more concise and understandable way. ISE is evaluated on an example specification taken from the car industry describing system behaviour, using the tools MONA and PITL2MONA. Finally, a methodology is presented that helps guide a continuous transformation, starting from a purely natural-language business rule specification to an APRIL specification that can then be transformed into test code. The methodologies, language concepts, algorithms, tools and techniques devised in this work are part of the APRIL framework.
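To give a flavour of the sentence-pattern idea, here is a minimal Python sketch in which mixfix-style patterns with "_" argument slots are registered and matched against rules written close to natural language; the notation and the toy object model are invented for illustration and do not reproduce APRIL's actual syntax or semantics.

import re

PATTERNS = []  # hypothetical registry of (compiled pattern, semantic function)

def operator(pattern):
    # Register a sentence pattern; "_" marks an argument slot.
    # Assumes the pattern itself contains no regex metacharacters.
    regex = re.compile("^" + pattern.replace("_", "(.+?)") + "$")
    def register(fn):
        PATTERNS.append((regex, fn))
        return fn
    return register

@operator("every _ must have a _")
def every_must_have(entity, attribute):
    # Semantic function: a predicate over a toy object model.
    return lambda model: all(attribute in obj for obj in model.get(entity, []))

def evaluate(rule, model):
    for regex, fn in PATTERNS:
        match = regex.match(rule)
        if match:
            return fn(*match.groups())(model)
    raise ValueError(f"no operator matches: {rule!r}")

model = {"order": [{"customer": "Ada"}, {"customer": "Bob", "total": 12}]}
print(evaluate("every order must have a customer", model))  # True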
64

Making Software More Reliable by Uncovering Hidden Dependencies

Bell, Jonathan Schaffer January 2016
As software grows in size and complexity, it also becomes more interdependent. Multiple internal components often share state and data. Whether these dependencies are intentional or not, we have found that their mismanagement often poses several challenges to testing. This thesis seeks to make it easier to create reliable software by making testing more efficient and more effective through explicit knowledge of these hidden dependencies.
The first problem that this thesis addresses, reducing testing time, directly impacts the day-to-day work of every software developer. The frequency with which code can be built (compiled, tested, and packaged) directly impacts the productivity of developers: longer build times mean a longer wait before determining whether a change to the application being built was successful. We have discovered that in some languages, such as Java, the vast majority of build time is spent running tests. It is therefore important to focus on approaches to accelerating testing, while simultaneously making sure that we do not inadvertently cause tests to fail erratically (i.e., become flaky). Typical techniques for accelerating tests (like running only a subset of them, or running them in parallel) often can't be applied soundly, since there may be hidden dependencies between tests. While we might expect each test to be independent (i.e., that a test's outcome isn't influenced by the execution of another test), we and others have found many examples in real software projects where tests truly have such dependencies: some tests require others to run first, or else their outcome will change. Previous work has shown that these dependencies are often complicated, unintentional, and hidden from developers.
We have built several systems, VMVM and ElectricTest, that detect different sorts of dependencies between tests and use that information to soundly reduce testing time by several orders of magnitude. In our first approach, Unit Test Virtualization, we reduce the overhead of isolating each unit test with a lightweight, virtualization-like container, preventing these dependencies from manifesting. Our realization of Unit Test Virtualization for Java, VMVM, eliminates the need to run each test in its own process, reducing test suite execution time by an average of 62% in our evaluation (compared to the execution time when running each test in its own process). However, not all test suites isolate their tests: in some, dependencies are allowed to occur between tests. In these cases, common test acceleration techniques such as test selection or test parallelization are unsound in the absence of dependency information. When dependencies go unnoticed, tests can unexpectedly fail when executed out of order, causing unreliable builds. Our second approach, ElectricTest, soundly identifies data dependencies between test cases, allowing for sound test acceleration.
To enable broader use of general dependency information for testing and other analyses, we created Phosphor, the first and only portable and performant dynamic taint tracking system for the JVM. Dynamic taint tracking is a form of data flow analysis that applies labels to variables and tracks all other variables derived from those tagged variables, propagating the tags. Taint tracking has many applications to software engineering and software testing, and in addition to our own work, researchers across the world are using Phosphor to build their own systems.
Towards making testing more effective, we also created Pebbles, which makes it easy for developers to specify data-related test oracles on mobile devices by thinking in terms of high-level objects such as emails, notes, or pictures.
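The following minimal Python sketch illustrates the core idea behind detecting test-order dependencies: run each test after its predecessors and again in isolation, and flag tests whose outcome changes. It is far simpler than VMVM or ElectricTest; tests here are plain functions, and "process isolation" is simulated by resetting a shared dictionary.

shared = {}  # stand-in for global state leaked between tests

def reset_state():
    shared.clear()  # simulates running in a fresh process

def test_login():
    shared["session"] = "alice"   # leaks state to later tests

def test_profile():
    assert "session" in shared    # hidden dependency on test_login

TESTS = [test_login, test_profile]

def passes(test):
    try:
        test()
        return True
    except AssertionError:
        return False

def find_dependent_tests(tests):
    # Report tests whose outcome differs between in-order and isolated runs.
    dependent = []
    for i, test in enumerate(tests):
        reset_state()
        for prior in tests[:i]:
            passes(prior)             # execute predecessors, ignore outcome
        in_order = passes(test)
        reset_state()
        alone = passes(test)
        if in_order != alone:
            dependent.append(test.__name__)
    return dependent

print(find_dependent_tests(TESTS))  # ['test_profile']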
65

Compiler-assisted Adaptive Software Testing

Petsios, Theofilos January 2018
Modern software is becoming increasingly complex and is plagued with vulnerabilities that are constantly exploited by attackers. The vast numbers of bugs found in security-critical systems and the diversity of errors present in commercial off-the-shelf software require effective, scalable testing frameworks. Unfortunately, the current testing ecosystem is heavily fragmented, with the majority of toolchains targeting limited classes of errors and applications without offering provably strong guarantees. With software codebases continuously becoming more diverse and complex, the large-scale deployment of monolithic, non-adaptive analysis engines is likely to increase this fragmentation. Instead, modern software testing requires adaptive, hybrid techniques that target errors selectively. This dissertation argues that adopting context-aware analyses will enable us to set the foundations for retargetable testing frameworks while further increasing the accuracy and extensibility of existing toolchains. To this end, we initially examine how compiler analyses can become context-aware, prioritizing certain errors over others of the same type. As a use case of the proposed approach, we extend a state-of-the-art compiler's integer error detection pipeline to suppress reports of benign errors by up to 89% in real-world workloads, while still reporting serious errors. Subsequently, we demonstrate how compiler-based instrumentation can be utilized by feedback-driven evolutionary fuzzers to provide multifaceted analyses targeting broader classes of bugs. In this direction, we present differential diversity (δ-diversity), propose a generic methodology for offering state-aware guidance in feedback-driven frameworks, and demonstrate how to retrofit state-of-the-art fuzzers to target broader classes of errors. We provide two such prototype implementations: NEZHA, the first generic differential fuzzer capable of handling logic bugs, and SlowFuzz, the first generic fuzzer targeting complexity vulnerabilities. We applied both prototypes to production software and demonstrated their effectiveness: NEZHA discovered hundreds of logic discrepancies across a wide variety of applications (SSL/TLS libraries, parsers, etc.), while SlowFuzz successfully generated inputs triggering slowdowns in complex, real-world software, including zip parsers, regular expression libraries, and hash table implementations.
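As an illustration of the differential, feedback-driven idea (not NEZHA's actual engine), the Python sketch below mutates inputs, keeps those that exhibit new output-pair behaviors, and reports inputs on which two toy "implementations" of the same check disagree; the parsers and their deliberate bug are invented.

import random

def parser_a(data: bytes) -> bool:
    return data.startswith(b"OK")

def parser_b(data: bytes) -> bool:
    return data[:2].upper() == b"OK"   # accepts lowercase: a deliberate logic bug

def mutate(seed: bytes) -> bytes:
    data = bytearray(seed or b"A")
    data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def differential_fuzz(seeds, iterations=20000):
    corpus = list(seeds)
    seen_behaviors = set()      # coarse guidance signal: tuples of outputs
    discrepancies = []
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        behavior = (parser_a(candidate), parser_b(candidate))
        if behavior not in seen_behaviors:   # novelty: keep for further mutation
            seen_behaviors.add(behavior)
            corpus.append(candidate)
        if behavior[0] != behavior[1]:       # the two implementations disagree
            discrepancies.append(candidate)
    return discrepancies

found = differential_fuzz([b"OK data", b"no"])
print(f"{len(found)} discrepancy-inducing inputs found")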
66

Techniques for Efficient and Effective Mobile Testing

Hu, Gang January 2018
The booming mobile app market attracts a large number of developers, and as a result the competition is extremely tough. This fierce competition leads to high standards for mobile apps, which mandates efficient and effective testing. Efficient testing requires little effort to use, while effective testing checks that the app under test behaves as expected. Manual testing is highly effective but costly. Automatic testing should come to the rescue, but current automatic methods are either ineffective or inefficient. Methods using implicit specifications (for instance, "an app should not crash" for catching fail-stop errors) are ineffective because they cannot find semantic problems. Methods using explicit specifications such as test scripts are inefficient because they require huge developer effort to create and maintain the specifications. In this thesis, we present two approaches for solving these challenges. We first built the AppDoctor system, which efficiently tests mobile apps: it quickly explores an app, then slowly but accurately verifies the potential problems to identify bugs without introducing false positives, and uses dependencies discovered between actions to simplify its reports. Our second approach, implemented in the AppFlow system, leverages the ample opportunity for reusing test cases between apps to gain efficiency without losing effectiveness. It allows common UI elements to be used in test scripts, then recognizes these UI elements in real apps using a machine learning approach. The system also allows tests to be specified in reusable pieces and provides a mechanism to synthesize complete test cases from these pieces, enabling robust tests to be created and reused across apps in the same category. The combination of these two approaches enables a developer to quickly test an app on a great number of combinations of actions for fail-stop problems, and to effortlessly and efficiently test the app on most common scenarios for semantic problems. This combination covers most of her test requirements and greatly reduces her burden in testing the app.
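A minimal sketch of the recognition step, assuming invented widget features and labels (AppFlow's real feature set and learner are far richer): canonical UI elements from previously analyzed apps are matched to new widgets by nearest neighbor.

import math

def features(widget):
    # Invented feature vector: label keyword, clickability, position, label length.
    return [
        1.0 if "sign" in widget["text"].lower() else 0.0,
        1.0 if widget["clickable"] else 0.0,
        widget["y"] / widget["screen_h"],
        min(len(widget["text"]), 20) / 20.0,
    ]

TRAINING = [  # labeled widgets harvested from previously analyzed apps
    ({"text": "Sign in", "clickable": True, "y": 900, "screen_h": 1920}, "signin_button"),
    ({"text": "Terms of Service", "clickable": True, "y": 1800, "screen_h": 1920}, "other"),
]

def classify(widget):
    # 1-nearest-neighbor over the feature space (math.dist needs Python 3.8+).
    return min(TRAINING, key=lambda ex: math.dist(features(ex[0]), features(widget)))[1]

new_widget = {"text": "SIGN IN", "clickable": True, "y": 880, "screen_h": 1920}
print(classify(new_widget))  # signin_button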
67

  • Coverage-based testing strategies and reliability modeling for fault-tolerant software systems

Cai, Xia January 2006
Software permeates our modern society, and its complexity and criticality are ever increasing; thus the need to tolerate software faults, particularly in critical applications, is evident. While fault-tolerant software is seen as a necessity, it also remains a controversial technique, and there is a lack of conclusive assessment of its effectiveness.
This thesis aims at providing a quantitative assessment scheme for a comprehensive evaluation of fault-tolerant software, including reliability model comparisons and trade-off studies with software testing techniques. First of all, we propose a comprehensive procedure for assessing fault-tolerant software for software reliability engineering, composed of four tasks: modeling, experimentation, evaluation and economics. Our ultimate objective is to construct a systematic approach to predicting the achievable reliability based on the software architecture and testing evidence, through an investigation of testing and modeling techniques for fault-tolerant software.
Motivated by the lack of real-world project data for investigating software testing and fault tolerance techniques together, we conducted a real-world project, engaging multiple programming teams to independently develop program versions based on an industry-scale avionics application. Detailed experiments were conducted to study the nature, source, type, detectability, and effect of the faults uncovered in the program versions, and to learn the relationships among these faults and the correlation of their resulting failures. Coverage-based testing as well as mutation testing techniques were adopted to reproduce mutants with real faults, facilitating the investigation of the effectiveness of data flow coverage, mutation coverage, and fault coverage for design diversity.
Then, based on the preliminary experimental data, further experiments and detailed analyses of the correlations among these faults and the relation to their resulting failures were performed. The results were applied to current reliability modeling techniques for fault-tolerant software to examine their effectiveness and accuracy.
Next, we investigated the effect of code coverage on fault detection, which is the underlying intuition of coverage-based testing strategies. From our experimental data, we found that code coverage is a moderate indicator of the fault-detection capability of a whole test set, but its effect varies under different testing profiles: the correlation between the two measures is high for exceptional test cases but weak in normal testing. Moreover, our study shows that code coverage can be used as a good filter to reduce the size of the effective test set, although this is more evident for exceptional test cases.
Furthermore, to investigate some "variants" as well as "invariants" of fault-tolerant software, we performed an empirical investigation evaluating reliability features through a comprehensive comparison between two projects: our project and the NASA 4-University project. Developed from the same specification, the two projects exhibit some common as well as different features. The results of two comprehensive operational testing procedures involving hundreds of thousands of test cases were collected and compared; similar as well as dissimilar faults were observed and analyzed, indicating common problems related to the same application in both projects. The small number of coincident failures in the two projects nevertheless provides supportive evidence for N-version programming, while the observed reliability improvement suggests some trends in software development over the past twenty years.
Finally, we formulate the relationship between code coverage and fault detection. Although our two current models take simple mathematical forms, they can predict the percentage of faults detected from the code coverage achieved by a given test set. We further incorporate this formulation into traditional reliability growth models, not only for fault-tolerant software but also for general software systems. Our empirical evaluations show that the new reliability model achieves more accurate reliability assessment than the traditional non-homogeneous Poisson model.
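As an illustration only (the thesis's actual formulations are not reproduced here), the Python sketch below combines a hypothetical coverage-to-fault-detection curve with a Goel-Okumoto-style NHPP mean value function, capping the faults a test process can expose by the coverage it has achieved.

import math

def faults_detectable(coverage, total_faults=100.0, k=3.0):
    # Hypothetical curve: fraction of faults exposed grows with code coverage.
    return total_faults * (1.0 - math.exp(-k * coverage))

def nhpp_mean_failures(t, a=100.0, b=0.05):
    # Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)).
    return a * (1.0 - math.exp(-b * t))

def coverage_adjusted_mean(t, coverage_at, a=100.0, b=0.05, k=3.0):
    # Cap the NHPP estimate by what the achieved coverage can reveal.
    return min(nhpp_mean_failures(t, a, b), faults_detectable(coverage_at(t), a, k))

coverage_at = lambda t: min(1.0, t / 80.0)   # assumed coverage growth profile
for t in (10, 40, 80):
    print(t, round(nhpp_mean_failures(t), 1), round(coverage_adjusted_mean(t, coverage_at), 1))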
68

Geração automática de casos de teste executáveis a partir de casos de teste abstratos para aplicações Web / Automatic generation of executable test cases based on abstract ones for Web applications

Almeida, Érika Regina Campos de, 1986- 11 September 2018
Advisor: Eliane Martins. Master's thesis, Universidade Estadual de Campinas, Instituto de Computação, 2012.
Abstract: Automating the generation and execution of test cases addresses a major challenge in software development: doing more with fewer resources. However, to bring these two processes together, it is still necessary to bridge the gap between the specification level of the System Under Test (SUT) and its implementation. The process of generating test cases usually requires a formal representation of the SUT (based on its specifications) and produces abstract test cases, in the sense that they are at the same level of detail as the specifications. The automatic execution of test cases, on the other hand, needs executable test cases, i.e., test cases that contain implementation details of the SUT and can run without manual intervention. So, to use automation from the test design phase through test execution, it is necessary to fill the gap between abstract test cases (the output of automatic test case generation) and executable ones (the input for automatic test execution), since they are at different abstraction levels. Usually, someone with programming skills performs the transformation from the abstract level to the implementation level, spending much effort and time. In this work, we evaluate the existing literature proposals for automatically mapping abstract test cases into executable ones that were designed according to Model-Driven Testing (MDT), an approach that aims to automatically generate software testing artifacts at different levels of abstraction by applying transformation rules. In addition to evaluating the proposals, we chose one to instantiate for Web applications (the kind of application that has grown most in recent years, both in use and in complexity), showing the steps needed to transform abstract test cases into executable ones, taking into account that there are specialized libraries to support writing the latter. To evaluate the solution's applicability to real Web applications, we conducted case studies using large Web applications (two national ones and another used worldwide). In addition, we performed the whole test process (generating abstract test cases; applying the proposed instantiation of the transformation method for Web applications; and running the tests), illustrating the solution's feasibility.
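To illustrate the kind of transformation involved, the Python sketch below turns abstract test steps into executable Selenium-style code using simple transformation rules plus an application-specific locator table; the step vocabulary and locators are invented, and real MDT tool chains apply far richer rules.

LOCATORS = {  # SUT-specific mapping supplied alongside the abstract tests
    "login_field": ("id", "username"),
    "password_field": ("id", "password"),
    "submit_button": ("css selector", "button[type=submit]"),
}

TEMPLATES = {  # transformation rules from abstract actions to code
    "fill": "driver.find_element({by!r}, {sel!r}).send_keys({value!r})",
    "click": "driver.find_element({by!r}, {sel!r}).click()",
    "check_title": "assert {value!r} in driver.title",
}

def to_executable(abstract_steps):
    lines = ["from selenium import webdriver", "driver = webdriver.Firefox()"]
    for step in abstract_steps:
        by, sel = LOCATORS.get(step.get("target", ""), (None, None))
        lines.append(TEMPLATES[step["action"]].format(by=by, sel=sel, value=step.get("value")))
    return "\n".join(lines)

abstract_test = [
    {"action": "fill", "target": "login_field", "value": "alice"},
    {"action": "fill", "target": "password_field", "value": "secret"},
    {"action": "click", "target": "submit_button"},
    {"action": "check_title", "value": "Dashboard"},
]
print(to_executable(abstract_test))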
69

Apoio à automatização de oráculos de teste para programas com interfaces gráficas / Support for automated test oracles for programs with graphical interfaces

Oliveira, Rafael Alves Paes de 16 January 2012 (has links)
Strategies for automating software testing activities are well accepted by both industry and academia. Test oracles are essential elements for test automation. Oracles, which may be mechanisms, functions, parallel executions, etc., are crucial in determining whether the output of an application under test is correct. Automating oracles is critical when the system's output is manifested through non-trivial formats, for example a Graphical User Interface (GUI). For these cases, traditional testing strategies tend to be costly and require considerable effort from testers. This master's thesis proposes an alternative method for automating test oracles for systems with GUIs. To this end, we explore concepts from Content-Based Image Retrieval to compose an automated method called Graphical Oracles (Gr-O). As a contribution, we developed visual feature extractors for GUIs. The conduct and analysis of empirical studies have shown that the use of Gr-O can reduce the cost of defining test oracles for systems with GUIs. Thus, the proposed method may be an alternative or complement to traditional testing techniques found in the literature.
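A minimal sketch of a CBIR-style graphical oracle, assuming the Pillow library and invented file names and threshold (the thesis's actual extractors are more sophisticated): compare a rendered screenshot against a baseline using a normalized grayscale histogram.

from PIL import Image

def histogram_feature(path, size=(128, 128)):
    image = Image.open(path).convert("L").resize(size)
    hist = image.histogram()                  # 256 grayscale bins
    total = float(sum(hist))
    return [count / total for count in hist]  # normalized feature vector

def l1_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def graphical_oracle(baseline_png, actual_png, threshold=0.10):
    # Pass if the actual screenshot is visually close to the baseline.
    distance = l1_distance(histogram_feature(baseline_png), histogram_feature(actual_png))
    return distance <= threshold, distance

ok, distance = graphical_oracle("expected_screen.png", "actual_screen.png")
print(f"verdict={'pass' if ok else 'fail'} (distance={distance:.3f})")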
70

On adaptive random testing

Kuo, Fei-Ching January 2006
Adaptive random testing (ART) has been proposed as an enhancement to random testing for situations where failure-causing inputs are clustered together. The basic idea of ART is to spread test cases evenly throughout the input domain. Simulations and empirical analyses have shown that ART frequently outperforms random testing. However, there are some outstanding issues concerning the cost-effectiveness and practicality of ART, which are the main foci of this thesis. Firstly, this thesis examines the basic factors that have an impact on the fault-detection effectiveness of adaptive random testing, and identifies favourable and unfavourable conditions for ART. Our study concludes that favourable conditions for ART occur more frequently than unfavourable ones. Secondly, since all previous studies allowed duplicate test cases, there has been a concern that adaptive random testing performs better than random testing only because ART uses fewer duplicate test cases. This thesis confirms that it is the even spread, rather than less duplication, of test cases that makes ART perform better than random testing. Given that even spreading is the main pillar of ART's success, an investigation has been conducted into the relevance and appropriateness of several existing metrics of even spreading. Thirdly, the practicality of ART has been challenged for non-numeric or high-dimensional input domains. This thesis provides solutions that address these concerns. Finally, a new problem-solving technique, namely mirroring, has been developed; the integration of mirroring with adaptive random testing has been empirically shown to significantly increase the cost-effectiveness of ART. In summary, this thesis contributes significantly to both the foundations and the practical application of adaptive random testing.
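For readers unfamiliar with ART, the following Python sketch implements the classic fixed-size-candidate-set variant (FSCS-ART) in a two-dimensional numeric input domain: each new test case is the random candidate whose nearest already-executed test is farthest away, which spreads tests evenly. This is a textbook rendering, not the thesis's exact algorithms.

import random

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def fscs_art(n_tests, k=10, domain=(0.0, 1.0)):
    lo, hi = domain
    rand_point = lambda: (random.uniform(lo, hi), random.uniform(lo, hi))
    executed = [rand_point()]                # first test case: purely random
    while len(executed) < n_tests:
        candidates = [rand_point() for _ in range(k)]
        # choose the candidate whose nearest executed test is farthest away
        best = max(candidates, key=lambda c: min(dist(c, e) for e in executed))
        executed.append(best)
    return executed

for point in fscs_art(5):
    print(f"({point[0]:.2f}, {point[1]:.2f})")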
