  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Experiment to evaluate an Innovative Test Framework : Automation of non-functional testing

Eada, Priyanudeep January 2015
Context. Performance testing, among other types of non-functional testing, is necessary to assess software quality. Most often, a manual approach is employed to test a system's performance, an approach with several drawbacks. The existing body of knowledge lacks empirical evidence on the automation of non-functional testing and is largely focused on functional testing. Objectives. The objective of the present study is to evaluate a test framework that automates performance testing. A large-scale distributed project was selected as the context for this objective, because the proposed test framework was designed to be adapted and tailored to any project's characteristics. Methods. An experiment was conducted with 15 participants at the Ericsson R&D department in India to evaluate the automated test framework. A repeated-measures design with counterbalancing was used to measure accuracy and the time taken while using the test framework. To assess the ease of use of the proposed framework, a questionnaire was distributed among the experiment participants. Statistical techniques were used to accept or reject the hypotheses, and the data analysis was performed in Microsoft Excel. Results. The automated test framework proved superior to the traditional manual approach: the average time taken to run a test case was significantly reduced, the number of errors arising in a typical testing process was minimized, and the time a tester spends during the actual test dropped substantially. Finally, as perceived by the software testers, the automated approach is easier to use than the manual approach. Conclusions. Automating non-functional testing reduces overall project costs and improves the quality of the software tested.
This addresses important performance aspects such as system availability, durability, and uptime. It is not sufficient for software to meet its functional requirements; it must also conform to its non-functional requirements.
102

Knowledge Management in Software Testing

Garrepalli, Thrinay January 2015
Context: Software testing is a knowledge-intensive process, and applying Knowledge Management (KM) methods and principles makes it even more effective. There is therefore a need to adapt KM into the core software testing process and realize the benefits it provides in terms of cost, quality, and so on. An extensive literature has been published on KM in software testing, but the importance of KM remains unclear with respect to testing techniques as well as testing aspects, i.e., each activity that takes place during testing and the outcomes it produces, such as test artifacts. Studies are therefore needed that identify the challenges caused by a lack of KM, establish the importance of KM with respect to testing aspects and testing techniques, and recommend where applying KM would be most beneficial.   Objectives: In this thesis, we investigate the usage and implementation of KM in software testing. The major objectives are: to identify the software testing aspects that receive the most attention when applying KM; to analyze and evaluate the software testing techniques (test design, test execution, and test result analysis) and highlight which of them involve KM the most; to identify the testing techniques where tacit or explicit knowledge is currently used; and to gather the challenges industry faces due to a lack of KM initiatives in software testing.   Methods: We conducted a Systematic Literature Review (SLR) using the snowballing method, following the guidelines of Wohlin, to identify the testing aspects and techniques with the most KM involvement and the challenges faced due to a lack of KM.
A web-based survey questionnaire was then prepared from the literature results to complement and supplement them and to categorize the testing techniques by the type of knowledge they utilize. The studies were analyzed for rigor and relevance to assess the quality of the results, and the survey data were statistically analyzed using descriptive statistics and the Chi-square test of significance.   Results: We identified 35 peer-reviewed papers, of which 31 were primary and 4 were secondary studies. The literature review indicated 9 testing aspects in focus when applying KM within various adaptation contexts, and a few testing techniques were found to benefit from the application of KM. Several challenges were identified from the literature, such as improper selection and application of suitable techniques, a low reuse rate of software testing knowledge, barriers to software testing knowledge transfer, and the impossibility of quickly achieving an optimal distribution of human resources during testing. The survey received 54 complete answers and showed that KM is applied to software testing in most of the industries surveyed. Test result analysis, test case design, test planning, and testing techniques stood out as the most important testing aspects in focus when KM is applied. Regarding testing techniques, 17 test design techniques, 5 test execution techniques, and 5 test result analysis techniques receive the most attention in the context of KM, and the results suggest that tacit knowledge is utilized for most of them.
The survey also surfaced several new challenges, such as poor quality of testing results or outcomes, difficulty finding relevant information and resources during testing, spending more effort than necessary during testing, and a large loss of know-how when explicit and tacit knowledge are neglected during test design.   Conclusions: Various challenges arise from the lack of KM, and our study provides supporting evidence that applying KM in software testing is necessary, e.g., to increase test effectiveness and to improve the selection and application of suitable techniques. We also observed that perceptions differ between the literature and the practitioners' survey results regarding testing aspects and techniques: some aspects and techniques categorized as most important in the literature are not given the same priority by the respondents. We therefore provide a final list of testing aspects and techniques, and the empirical findings can help practitioners apply KM specifically where it is most needed. Moreover, most techniques require and utilize tacit knowledge, and practices such as shadowing, observing, training, and recording sessions can help capture that tacit knowledge for those who need it. Researchers can build on these findings and extend them to other software life cycle models.
103

Alinhamento de sequências na avaliação de resultados de teste de robustez / Sequence alignment algorithms applied in the evaluation of robustness testing results

Lemos, Gizelle Sandrini, 1975- 12 November 2012
Advisor: Eliane Martins / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Robustness, the ability of a system to work properly in unexpected situations, is an important characteristic, especially for critical systems. A commonly used technique for robustness testing is to inject faults during system execution and observe its behavior. A frequent problem during testing is determining an oracle, i.e., a mechanism that decides whether the system's behavior is acceptable. Oracles such as golden-run comparison (comparing against an execution without injected faults) treat every behavior that differs from the golden run as an error in the system under test (SUT); for example, the activation of exception handlers in the presence of faults could be classified as an error. Searching for safety properties is also used as an oracle and can reveal non-robustness in the SUT, but events in the SUT's execution that are semantically similar to the property are not taken into account unless they are explicitly defined in the property.
The objective of this work is to develop oracles specific to robustness testing that minimize the deficiencies of current oracles. The main difference between our solutions and existing approaches is the type of algorithm used to compare sequences: we adopt sequence alignment algorithms commonly applied in bioinformatics. These algorithms perform inexact matching, allowing some variation between the compared sequences. The first approach is based on traditional golden-run comparison, but applies global sequence alignment to compare traces collected during fault injection against traces collected without fault injection. This allows traces with small differences from the golden run to be classified as robust, and also makes the approach usable as an oracle for non-deterministic systems, which is not possible with current oracles. The second approach compares patterns derived from safety properties against traces collected during robustness testing; unlike the first approach, it uses a local sequence alignment algorithm to search for subsequences. Besides the advantages of inexact matching, these algorithms use a scoring system, based on information from the SUT specification, to guide the alignment of the sequences. We show the results of applying both approaches in case studies. / Doctorate / Computer Science / Doctor of Computer Science
104

Test Backlog : nova abordagem incremental de planejamento e execução de testes para Scrum / Test Backlog : new incremental approach for planning and execution of tests in Scrum

Patuci, Gabriela de Oliveira, 1985- 12 June 2013
Advisor: Regina Lúcia de Oliveira Moraes / Master's thesis - Universidade Estadual de Campinas, Faculdade de Tecnologia / Abstract: The increasing use of agile methodologies across national and international markets has required special attention to how traditional methodologies and their implementations must be adapted to a new software development environment. Because the distribution of tasks (previously delegated to specific roles) is unclear in this new multi-functional environment, this work presents the role of the test analyst in agile environments and identifies, through research, how testing activities can be performed so as to obtain high-quality results and overall suitability for "agile teams". The approach proposed in this work is to perform all day-to-day test tasks even more incrementally than is done today. It also proposes a new format for user stories and the project backlog: the Test Backlog. This is a more complete format for documenting a requirement, covering not only the client's business view of what the system must do, but also the tester's technical view of what should be checked before a user story is considered done. The problem for which this new approach is presented is part of the reality of many software development companies that are migrating from a traditional development model, where all roles and the process itself are well defined, to Scrum, the agile methodology studied here. Finally, the results of the case studies strongly indicate that there are ways to run tests more incrementally while keeping the quality sought by all previous methodologies and still being agile. / Master's degree / Technology and Innovation / Master in Technology
105

Effective Randomized Concurrency Testing with Partial Order Methods

Yuan, Xinhao January 2020
Modern software systems are pervasively concurrent, in order to utilize parallel hardware and perform asynchronous tasks. The correctness of concurrent programming, however, remains challenging for real-world, large systems. Because the concurrent events of a system can interleave arbitrarily, unexpected interleavings may drive the system into undefined states, resulting in denial of service, performance degradation, inconsistent data, security issues, etc. To detect such concurrency errors, concurrency testing repeatedly explores the interleavings of a system to find those that induce errors. Traditional systematic testing, however, suffers from an intractable number of interleavings due to the complexity of real-world systems. Moreover, each iteration of systematic testing adjusts the explored interleaving with a minimal change that swaps the order of two events; such exploration can waste time in large homogeneous sub-spaces that all lead to the same testing result. Thus, on real-world systems, systematic testing often fails to reveal even simple errors within a limited time budget. Randomized testing, on the other hand, samples interleavings of the system to quickly surface simple errors with substantial probability, but it may explore equivalent interleavings that do not affect the testing results; such redundancy weakens the probabilistic guarantees and performance of randomized testing. Toward effective concurrency testing, this thesis combines partial order semantics with randomized testing to find errors with strong probabilistic guarantees. First, we propose partial order sampling (POS), a new randomized testing framework that samples interleavings of a concurrent program with a novel partial order method. It effectively and simultaneously explores the orderings of all events of the program, and has high probability of manifesting errors caused by unexpected interleavings.
We formally prove that our approach has exponentially better probabilistic guarantees for sampling any partial order of the program than state-of-the-art approaches. Our evaluation over 32 known concurrency errors in public benchmarks shows that our framework performed 2.6 times better than state-of-the-art approaches at finding the errors. Second, we describe Morpheus, a new practical concurrency testing tool that applies POS to high-level distributed systems in Erlang. Morpheus leverages dynamic analysis to identify and predict critical events to reorder during testing, significantly improving the exploration effectiveness of POS. We performed a case study applying Morpheus to four popular distributed systems in Erlang, including Mnesia, the database system in the standard Erlang distribution, and RabbitMQ, the message broker service. Morpheus found 11 previously unknown errors leading to unexpected crashes, deadlocks, and inconsistent states, demonstrating the effectiveness and practicality of our approaches.
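The core idea of priority-based random interleaving sampling can be sketched as follows. This is a simplified illustration in the spirit of POS, not the thesis's algorithm: each pending event gets an independent random priority up front, and at each step the highest-priority enabled event runs. The racing-event reprioritization of the full algorithm is omitted, and the thread/event model is invented for the example.

```python
import random

def pos_sample(threads, rng):
    """Sample one interleaving of per-thread event lists.

    Simplified POS flavor: every event receives an independent random
    priority, and at each step the enabled event (the next event of some
    unfinished thread) with the highest priority executes.
    """
    pos = [0] * len(threads)  # next-event index for each thread
    prio = {(t, i): rng.random()
            for t, evs in enumerate(threads) for i in range(len(evs))}
    schedule = []
    while any(pos[t] < len(threads[t]) for t in range(len(threads))):
        enabled = [t for t in range(len(threads)) if pos[t] < len(threads[t])]
        t = max(enabled, key=lambda t: prio[(t, pos[t])])
        schedule.append(threads[t][pos[t]])
        pos[t] += 1
    return schedule

rng = random.Random(7)
threads = [["a1", "a2"], ["b1"]]
print(pos_sample(threads, rng))  # one of the 3 possible interleavings
```

Because every event's ordering is randomized independently (rather than flipping one pair of events per iteration, as in systematic exploration), repeated sampling spreads probability mass across structurally different orderings of all events at once.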
106

Understanding Test Case Design: An Exploratory Survey of the Testers’ Routine and Behavior

Esber, Jameel January 2022
Testing is an important component of every software project, since it enables the delivery of reliable solutions that meet the needs of end users. Valuable testing is represented by the results of test cases, which may provide insight into the presence of flaws in a software system. Testers create test cases because they let one check and ensure that all user requirements, and all situations users can go through, are fully covered. Test cases also make it possible to uncover design problems early, allowing testers to find solutions as soon as possible. The main goal of this thesis is to understand the routine and behavior of human testers when performing testing, and to gain a better understanding of the software testing field; it also aims to discover the challenges testers face. The report presents the results of a combined qualitative and quantitative survey answered by 38 experienced software testers and developers. The survey explores testers' cognitive processes by investigating the knowledge they bring, the activities they perform, and the challenges they face in their routine. By analyzing the survey results, we identified several main themes (related to knowledge, activities, and challenges) and shed more light on the problem-solving cycle, from understanding the test goal and planning the test strategy to executing tests and checking the results. We report a more refined test design model. The results suggest that testers use several sources of knowledge when creating and executing test cases, such as documentation, code, and their own experience. In addition, we found that the main activities of testers relate to specific tasks such as comprehending software requirements, learning as much as possible about the software, and discussing results with the development team or other testers to get feedback on the outcomes.
Finally, testers face many challenges when understanding, planning, executing, and checking tests, e.g., incomplete or ambiguous requirements; complex or highly configurable scenarios that are hard to test; lack of time and hard deadlines; and unstable environments.
107

Implementation and evaluation of a tool for automated testing of performance and resource consumption in Java / Implementering och evaluering av ett verktyg för automatiserad testning av prestanda och resurskonsumtion i Java

Johansson, Nils January 2022
Detecting performance issues in software systems is desirable and sometimes critical. In this thesis, a new tool is presented for testing performance at a smaller scale. This tool, JFRNG, is implemented as an extension to the Java testing framework TestNG, allowing developers to test the performance of Java code in unit and integration tests. The thesis also surveys current methods and approaches for automating the detection of performance issues.  JFRNG is evaluated in terms of both performance and usability, and compared to JfrUnit, a similar tool that targets the JUnit framework. As part of the usability evaluation, professional Java developers were surveyed for their opinions on the clarity of the code produced by JFRNG and JfrUnit. The participants found the code produced by JFRNG easier to read and understand, and rated JFRNG slightly higher on a number of specific aspects related to code clarity. The performance overhead observed for JFRNG during the performance evaluation is on par with JfrUnit's and does add some run time to tests, suggesting that developers should be mindful when selecting which tests should use JFRNG. The functionality is relatively extensive, allowing developers to collect and test a large number of metrics related to the tested code. Finally, the thesis concludes with suggestions for future work, such as integrating more sophisticated methods for detecting performance regressions based on the collected metrics.
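JFRNG's actual API is Java/TestNG-specific and not shown in this abstract, but the general style of a performance-asserting unit test can be sketched generically. Everything below is an illustrative assumption: the helper name, the budgets, and the use of wall-clock time and allocation tracking stand in for the much richer JDK Flight Recorder metrics that such tools collect.

```python
import time
import tracemalloc

def assert_performance(fn, max_seconds, max_kib):
    """Run fn once and fail the test if it exceeds a time or allocation budget.

    Illustrative only: tools like JFRNG gather detailed runtime metrics
    inside the test framework; this just shows the assertion style of a
    performance test embedded in a unit test.
    """
    tracemalloc.start()
    start = time.perf_counter()
    fn()
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()  # peak bytes allocated during fn
    tracemalloc.stop()
    assert elapsed <= max_seconds, f"too slow: {elapsed:.3f}s > {max_seconds}s"
    assert peak / 1024 <= max_kib, f"allocated too much: {peak / 1024:.0f} KiB"

# Example: sorting a small list should stay well inside generous budgets.
assert_performance(lambda: sorted(range(10_000), reverse=True), 5.0, 10_000)
print("performance test passed")
```

The trade-off the thesis notes applies here too: the measurement itself adds run time, so budgets should be generous and such checks reserved for tests where performance genuinely matters.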
108

Understanding the role of visual analytics for software testing

Eriksson, Nikolas, Örneholm, Max January 2021
Software development is constantly evolving, which produces many opportunities but also confusion about what the best practices are. A rather unexplored area within software development is visual analytics for software testing. The goal of this thesis is to understand what role visual analytics can play within software testing. A literature review was used to gather information about analytical needs, tools, and other vital aspects of the subject, and a survey of practitioners was used to gather information about the industry; the survey asked about visual analytics, visualizations, and their potential roles.  We conclude that visual analytics of software testing results does have a role in software testing, mainly in enabling faster understanding of test results, producing big-picture overviews, and supporting decision making.
109

Characterization and Generation of Streaming Video Traces

Shahbazian, John N 14 November 2003
This thesis describes two methods, collectively called Time Series Generation (TSG), that can be used to generate time series inputs modeling packet loss to test IP-based streaming video software. The TSG methods create packet loss models that recreate the mean, variance, and autocorrelation signature of an actual trace. The synthetic packet loss traces can have their inherent statistics altered, allowing video software to be tested thoroughly in ways that could not be done on actual networks. The two methods comprising TSG, called the primary and secondary methods, use the principle of iterated uniformity to create a time series that attempts to match the mean, variance, and autocorrelation. The two methods differ in how they generate autocorrelation, which leads to trade-offs between them. The TSG methods are embodied in a software program called TSGen. An evaluation of TSGen is conducted, including a comparison with the well-known Autoregressive-To-Anything Generation (ARTAGEN) method and tool. The details of capturing packets and parsing video frame counts from packet streams are explained and demonstrated. Sixteen video stream traces were collected from a variety of sources and used to evaluate TSGen: synthetic traces were generated for the sixteen original traces, and both their summary statistics and their autocorrelation signatures were compared against the originals. One of the sixteen traces was also compared against a synthetic trace generated using the ARTAGEN tool. Twelve of the sixteen synthetic traces, when compared to the actual traces, had Least Square Error (LSE) values under 0.1; three were under 0.4; and the remaining one was under 1.1. Nine synthetic traces had percent-error differences between the means and variances of the synthetic and actual traces below 5%, one was below 7%, four were under 18%, and the remaining two were at 41%.
TSGen is thus able to effectively model autocorrelation, mean, and variance. Additional benefits of TSG include an adjustable run time for the matching process, with longer run times yielding better accuracy, and a simple theoretical model that is easily implemented.
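The thesis's iterated-uniformity methods are not detailed in this abstract, but the underlying goal — a synthetic trace that matches a target mean, variance, and autocorrelation — can be sketched with a simple AR(1) process as a generic stand-in. The parameters below are invented for illustration and do not come from the evaluated video traces.

```python
import random
import statistics

def synth_trace(mean, var, rho, n, rng):
    """Generate n samples with target mean, variance, and lag-1
    autocorrelation rho via an AR(1) process:

        x[t] = rho * x[t-1] + e[t]

    where e[t] ~ N(0, sigma_e^2) and sigma_e is scaled so that the
    stationary variance of x equals var.
    """
    sigma_e = (var * (1 - rho ** 2)) ** 0.5
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + rng.gauss(0.0, sigma_e)
        out.append(mean + x)
    return out

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a sequence."""
    mu = statistics.fmean(xs)
    num = sum((a - mu) * (b - mu) for a, b in zip(xs, xs[1:]))
    den = sum((a - mu) ** 2 for a in xs)
    return num / den

trace = synth_trace(mean=0.02, var=0.0001, rho=0.6, n=50_000, rng=random.Random(1))
print(statistics.fmean(trace))  # close to the target mean 0.02
print(lag1_autocorr(trace))     # close to the target rho 0.6
```

An AR(1) process only pins down the lag-1 autocorrelation (higher lags decay geometrically), whereas TSG aims to reproduce the full autocorrelation signature of a captured trace; the sketch just shows what "matching summary statistics" means concretely.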
110

Testing Challenges of Mobile Augmented Reality Systems

Lehman, Sarah, 0000-0002-9466-0688 January 2022
Augmented reality (AR) systems insert virtual content into a user's view of the real world in response to environmental conditions and the user's behavior within that environment. This virtual content can take the form of visual elements such as 2D labels or 3D models, auditory cues, or even haptics; content is generated and updated based on user behavior and environmental conditions, such as the user's location, movement patterns, and the results of computer vision or machine learning operations. AR systems are used to solve problems in a range of domains, from tourism and retail, education and healthcare, to industry and entertainment. For example, apps from Lowe's [82] and Houzz [81] support retail transactions by scanning a user's environment and placing product models into the space, allowing the user to preview what the product might look like in her home. AR systems have also proven helpful in areas such as aiding industrial assembly tasks [155, 175], helping users overcome phobias [35], and reviving interest in cultural heritage sites [163]. Mobile AR systems run on portable handheld or wearable devices, such as smartphones, tablets, and head-mounted displays, so that the user is free to move around the environment without restriction. This freedom of movement and usage, combined with the application's reliance on computer vision and machine learning logic for core functionality, makes mobile AR applications very difficult to test. In addition, as the demand for and prevalence of machine learning logic increases, the availability and power of commercially available third-party vision libraries introduce new and easy ways for developers to violate usability and end-user privacy.
The goal of this dissertation, therefore, is to understand and mitigate the challenges involved in testing mobile AR systems, given the capabilities of today’s commercially available vision and machine learning libraries. We consider three related challenge areas: application behavior during unconstrained usage conditions, general usability, and end-user privacy. To address these challenge areas, we present three research efforts. The first presents a framework for collecting application performance and usability data in the wild. The second explores how commercial vision libraries can be exploited to conduct machine learning operations without user knowledge. The third presents a framework for leveraging the environment itself to enforce privacy and access control policies for mobile AR applications. / Computer and Information Science
