1 |
Analytical Scenario of Software Testing Using Simplistic Cost Model
Bathla, Rajender; Kapil, Anil (15 February 2012)
Software testing is the process of executing a program with the intention of finding errors in the code. It is the process of exercising or evaluating a system or system component by manual or automatic means to verify that it satisfies specified requirements or to identify differences between expected and actual results [4]. Software testing should not be a distinct phase in system development but should be applied throughout the design, development and maintenance phases. Software testing is often used in association with the terms verification and validation: it is the process of executing software in a controlled manner in order to answer the question "Does the software behave as specified?" One way to ensure a system's reliability is to test it extensively. Since software is a system component, it also requires a testing process.

Software can be tested either manually or automatically.
The two approaches are complementary: automated testing can execute a huge number of tests in a short time, whereas manual testing uses the knowledge of the test engineer to target the parts of the system that are assumed to be more error-prone. Despite this complementarity, the tools for manual and automatic testing are usually different, which decreases the productivity and reliability of the testing process. AutoTest is a testing tool that provides a "best of both worlds" strategy: it integrates developers' test cases into an automated process of systematic contract-driven testing.
This allows it to combine the benefits of both approaches while keeping a simple interface, and to treat the two types of tests in a unified fashion: evaluation of results is the same, coverage measures are added up, and both types of tests can be saved in the same format. The objective of this paper is to discuss the importance of automation tools in relation to software testing techniques in software engineering. The paper provides an introduction to software testing and describes CASE tools. Addressing this problem leads to a new approach to software development, known as software testing, in the IT world. Software test automation is the process of automating the steps of manual test cases using an automation tool or utility in order to shorten the testing life cycle with respect to time.
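AutoTest itself targets Eiffel and its contracts; the sketch below is only an illustration of the unified treatment the abstract describes, showing one way a single record type could hold both manual and automated test results so that result evaluation and coverage aggregation work the same way for both. All field and test names are assumptions, not AutoTest's actual format.

```python
# A hedged sketch (not AutoTest's actual format) of a unified record type for
# manual and automated test results, so evaluation and coverage aggregation
# are identical for both kinds of tests. All names are assumptions.
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    kind: str              # "manual" or "automated"
    passed: bool
    covered_branches: set  # coverage contribution of this test

def combined_coverage(results, total_branches):
    """Coverage measures are simply added up across both kinds of tests."""
    covered = set().union(*(r.covered_branches for r in results))
    return len(covered) / total_branches

results = [
    TestResult("withdraw_negative_amount", "manual", True, {1, 2}),
    TestResult("deposit_contract_check", "automated", True, {2, 3, 4}),
]
print(f"combined branch coverage: {combined_coverage(results, total_branches=6):.0%}")
```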
|
2 |
Recommender systems for manual testing
MIRANDA, Breno Alexandro Ferreira de (31 January 2011)
Software testing can be an arduous and costly activity. In the context of manual testing, any effort to reduce test execution time and increase defect containment is welcome. One possible strategy is to allocate test cases according to the tester's profile in order to maximize productivity. However, optimizing test case allocation is not a trivial task: in large companies, test managers are responsible for allocating hundreds of test cases to the available testers at the start of each new execution. In this work we propose two algorithms for automatic test case allocation and three tester profiles based on recommender systems (the same kind of system that recommends, for example, a book on Amazon.com or a movie on Netflix.com). Each allocation algorithm can be combined with the three tester profiles, resulting in six possible allocation systems: Exp-Manager, Exp-Blind, MO-Manager, MO-Blind, Eff-Manager, and Eff-Blind. Our allocation systems take into account the tester's effectiveness (valid defects found in the past) and experience (skill in executing tests with certain characteristics). In order to compare our allocation systems with the manager's allocation and with random allocations, a controlled experiment using 100 allocations with at least 50 test cases each was carried out in a real industrial setting. The allocation systems were evaluated using the metrics of precision, recall, and non-allocation rate (the percentage of test cases left unallocated). In our experiment, applying ANOVA (a statistical technique used to verify whether the samples of two or more groups come from populations with equal means) and Tukey's test (a multiple-comparison procedure for identifying which means are significantly different from one another) showed that Exp-Manager outperforms the other allocation systems with respect to precision and recall. All allocation systems proved superior to the random algorithm. Average precision (across the allocation systems) ranged from 39.32% to 64.83%, while average recall ranged from 39.19% to 64.83%; for the non-allocation metric, three allocation systems (Exp-Manager, Exp-Blind and MO-Blind) performed best, achieving a zero non-allocation rate for all test allocations. The average non-allocation rate ranged from 0% to 2.34% (for the non-allocation metric, lower is better). In the real industrial setting where our work was carried out, test managers spend 16 to 30 working days per year on the test case allocation activity. Our allocation systems can help them perform this activity faster and more effectively.
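As a rough illustration of the evaluation metrics mentioned above (precision, recall and the non-allocation rate), here is a minimal sketch that assumes each system allocation is compared pair-by-pair against a reference allocation such as the manager's. The data and the exact matching rule are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch: compare a recommender's allocation against a reference
# allocation, treating each (test case, tester) pair as an item; unallocated
# test cases map to None. Data and matching rule are assumptions.
def allocation_metrics(system, reference):
    """system/reference: dicts mapping test case id -> tester id (or None)."""
    allocated = {tc: t for tc, t in system.items() if t is not None}
    hits = sum(1 for tc, t in allocated.items() if reference.get(tc) == t)
    precision = hits / len(allocated) if allocated else 0.0
    recall = hits / len(reference) if reference else 0.0
    non_allocation = 1 - len(allocated) / len(system) if system else 0.0
    return precision, recall, non_allocation

reference = {"TC1": "ana", "TC2": "bob", "TC3": "ana", "TC4": "carl"}   # manager
system    = {"TC1": "ana", "TC2": "ana", "TC3": "ana", "TC4": None}     # recommender
p, r, n = allocation_metrics(system, reference)
print(f"precision={p:.0%} recall={r:.0%} non-allocation={n:.0%}")
```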
|
3 |
Model-based Testing of supporting testing of PAC based controlling systems in industrial plants - A case study in printing plants
Fu, Jiangbiao; Song, Jiaqi (January 2018)
Context. Testing is a critical process for evaluating whether a given function is correctly implemented in a control system. There is an upward trend toward using PAC based control systems in automated production. However, most testing of PAC based control systems is currently done manually, which has low efficiency and high complexity. Furthermore, there has been little research on systematic testing of PACs in an industrial environment. Objectives. Given this problem, this study investigates whether a model-based testing (MBT) method can overcome the challenges of manual testing and improve testing effectiveness for PAC based control systems. Methods. We take three steps to achieve this objective. The first is a systematic mapping study to find existing model-based testing methods used in industry, along with their processes and contexts. The second is a case study in a printing house to identify the real challenges of manual testing. The third is to determine whether an existing MBT method can be used in this context to overcome those challenges. Results. Through the mapping study and the case study, we found many testing methods implemented in diverse contexts, but none of them focuses on PAC based control systems. We also found an existing MBT method that can be applied to PAC based control systems to mitigate some manual testing challenges. Conclusions. In this thesis, we conducted a mapping study of 38 papers to collect data on existing model-based testing methods and gain a deeper understanding of the area. We identified five main contexts in which MBT is usually applied, and we extracted the implementation processes, advantages and disadvantages of the MBT methods, which is valuable to practitioners and researchers. We also conducted a case study in a printing house in Switzerland to observe the challenges of manual testing of a PAC based control system. We found one existing MBT method that can be used in our context and that makes the test case generation step more effective, and we propose a simulation testing model that, combined with the existing MBT method, can hopefully address all the manual testing challenges.
|
4 |
Measuring Combinatorial Coverage of Manual Testing
Fifo, Miraldi (January 2018)
Introduction: Software testing is a very important activity that assures the quality of the software under test. It becomes crucial in safety-critical systems, where unexpected software behavior can cause loss of human life or environmental disasters. However, in such complex systems it is infeasible to test all possible software scenarios for possible faults. Experience shows that software faults, which can cause unexpected software behavior, are triggered by interactions among the variables of the tests. Combinatorial testing is a technique that focuses on these variable interactions and aims to reduce the number of tests needed to cover all software scenarios while still preserving a high fault detection rate. Background: Manual testing is the technique used to assure software quality at Bombardier Transportation AB, a Swedish company focused on rail transport and the development of trains. Since this process depends on the skills of the engineers, it can leave a large portion of tests uncreated and, consequently, a large number of scenarios uncovered. Therefore, combinatorial testing is used to measure the combinatorial coverage of the tests created by experienced engineers. Many studies have compared manual testing with other testing techniques in terms of test coverage, code coverage or mutation analysis. To the best of our knowledge, no other studies in the literature have measured the combinatorial coverage of manual tests designed by experienced engineers for different interaction strengths of the test variables, nor are there other available tools that generate the number of missing tests needed to achieve full combinatorial coverage for specific interactions. Aim: The goal of this thesis is to answer two research questions: RQ1. What combinatorial coverage is achieved by tests manually created by experienced engineers in industry? RQ2. Can the effectiveness of manually created tests be improved in terms of combinatorial coverage using combinatorial testing? Method: In this thesis we investigate the combinatorial coverage of tests manually created by engineers working in industry and the implications of using combinatorial testing in practice. The NIST Combinatorial Coverage Measurement (CCM) tool is used to measure the test coverage achieved. The research questions are answered through the following steps: 1) review the scientific literature for related work, 2) refine the thesis research questions based on the previous step, 3) propose the case study design and perform the measurements needed for data analysis, and 4) analyze and discuss the results in terms of test efficiency (i.e., number of test cases) and effectiveness (i.e., achieved combinatorial coverage). Results: The combinatorial coverage achieved by the manual tests is 78.6% on average for 2-way interactions, 57% for 3-way, 40.2% for 4-way, 20.2% for 5-way and 13% for 6-way interactions. Full combinatorial coverage can be achieved for 2-way and 3-way interactions by adding, on average, eight and 66 missing tests, respectively. For 4-way interactions, full combinatorial coverage can be achieved by adding 658 missing tests. For 5-way and 6-way interactions, full combinatorial coverage can be achieved by adding, on average, 5163 and 6170 missing tests, respectively. Conclusion: Combinatorial coverage decreases as the interaction strength increases. The effectiveness of the tests can be improved for 2-way and 3-way interactions, and fully or partially improved for 4-way interactions, depending on whether engineers decide to add all missing tests or only part of them, since the number of missing tests grows significantly, resulting in a very large number of tests to be added. It is not particularly efficient to improve test effectiveness by augmenting manual test cases with combinatorial testing for higher interaction strengths, since in most of the test suites we studied one would need to generate an additional 10,000 missing tests. This is explained by the exponential growth in the number of variable combinations for such interactions.
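The NIST CCM tool performs this measurement in practice; the following is only a small sketch of the underlying t-way coverage measure, with a toy parameter model made up for illustration.

```python
# Minimal sketch of t-way combinatorial coverage (not the NIST CCM tool).
from itertools import combinations, product

def t_way_coverage(tests, domains, t=2):
    """Fraction of all t-way value combinations covered by at least one test.

    tests   -- list of dicts mapping parameter name -> chosen value
    domains -- dict mapping parameter name -> list of possible values
    t       -- interaction strength (2-way, 3-way, ...)
    """
    covered, total = 0, 0
    for params in combinations(sorted(domains), t):
        # All value combinations this parameter subset could take.
        all_combos = set(product(*(domains[p] for p in params)))
        seen = {tuple(test[p] for p in params) for test in tests}
        covered += len(all_combos & seen)
        total += len(all_combos)
    return covered / total if total else 1.0

# Toy example: three parameters, two manually written tests.
domains = {"mode": ["auto", "manual"], "door": ["open", "closed"], "speed": ["low", "high"]}
tests = [
    {"mode": "auto", "door": "open", "speed": "low"},
    {"mode": "manual", "door": "closed", "speed": "low"},
]
print(f"2-way coverage: {t_way_coverage(tests, domains, t=2):.1%}")  # 50.0%
```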
|
5 |
Performance Testing and Assessment of Various Network-Based Applications
Kondepati, Divya Naga Krishna; Mallidi, Satish Kumar Reddy (January 2019)
Performance testing is one of the crucial parts of any software life cycle. In today's world there are countless network-based applications, and manual testing and automated testing are two important ways to test them. For manual testing, a mobile application known as BlekingeTrafiken is used; for automated testing, a web application known as Edmodo is used, with Selenium as the automation tool. Each application has many users, and its performance may therefore decrease as the number of users increases. The performance of an application depends on response times and their mean, stability, speed, capacity and accuracy. It also depends on the device (memory consumption, battery, software variation), on the server/API (keeping the number of calls low) and on network performance (jitter, packet loss, network speed). There are several tools for performance testing, and by using them we can obtain accurate performance results for each request. In this thesis we performed manual testing of a mobile application while increasing the number of users under similar network conditions, automated testing of a web application under various test cases, and performance testing of an iPad application (PLANETJAKTEN), a real-time gaming application used by children to learn mathematics. Apache JMeter is the tool used for performance testing, and the interaction between JMeter and the iPad is done through an HTTP proxy. When a user starts using the application, we can measure the performance of each request sent by that user. Nagios is the tool used to monitor the various environments. The results show that, for manual testing, the time taken to connect to Wi-Fi is low compared to opening and using the application. For automated testing, the time taken to run each test case for the first time is high compared to the remaining trials. For performance testing, the experimental results show that the error percentage (the percentage of failed requests) is higher for logging into the application than for using it.
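As a small, hedged illustration of how such per-request figures can be derived, the sketch below summarizes a JMeter CSV results file, assuming the default CSV output with label, elapsed and success columns; the file name is hypothetical.

```python
# Sketch: per-request error percentage and average response time from a
# JMeter CSV (.jtl) results file (default CSV columns assumed).
import csv
from collections import defaultdict

def summarize_jtl(path):
    stats = defaultdict(lambda: {"count": 0, "errors": 0, "elapsed": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            s = stats[row["label"]]
            s["count"] += 1
            s["elapsed"] += int(row["elapsed"])
            if row["success"].lower() != "true":
                s["errors"] += 1
    for label, s in stats.items():
        print(f"{label}: {s['errors'] / s['count']:.1%} errors, "
              f"{s['elapsed'] / s['count']:.0f} ms average")

# summarize_jtl("planetjakten_results.jtl")  # hypothetical file name
```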
|
6 |
An examination of automated testing and Xray as a test management tool
Bertlin, Simon (January 2020)
Automated testing is a fast-growing requirement for many IT companies. The idea of testing is to create a better product for both the company and the customer. The goal of this study is to examine different aspects of automated testing and of Xray as a test management tool. The literature study draws on information from several scientific reports. It shows that the benefits of automated testing include increased productivity and more reliable software, but also pitfalls such as a high initial cost and a maintenance cost. Research suggests that automated testing is complementary to manual testing: manual testing is better suited for exploratory testing, while automated testing is better for regression testing. Using historical data, manual tests can be placed into prioritised clusters; the coverage of each test within a cluster determines its priority, where a test with high coverage has a high priority. A near-optimal solution for prioritising automated tests is the combination of two well-known strategies, the additional coverage strategy and the total coverage strategy: tests are prioritised based on how much of the code they uniquely cover. Code coverage is measured using statements, methods or complexity. Furthermore, this thesis demonstrates a proof of concept for how the unified algorithm can prioritise Xray tests. Xray is evaluated according to the ISO/IEC 25010:2011 standard together with a survey of Xray practitioners. The evaluation shows that Xray provides the necessary tools and functions to manage a testing suite successfully. However, no official encryption exists for the Jira server, and Xray lacks integrated documentation.
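As an illustration of the coverage-based prioritisation idea summarised above, here is a minimal sketch of a greedy additional-coverage strategy with a total-coverage tie-break; the test names and statement sets are made up, and the real unified algorithm in the thesis may differ in its details.

```python
def prioritize_by_additional_coverage(coverage):
    """Order tests greedily by how many not-yet-covered statements each adds.

    coverage -- dict mapping test name -> set of covered statement ids.
    Ties are broken by total coverage; once nothing new can be covered,
    the covered set is reset and the pass continues (a common variant of
    the additional strategy).
    """
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        # Test adding the most uncovered statements; ties -> larger total coverage.
        best = max(remaining, key=lambda t: (len(remaining[t] - covered), len(remaining[t])))
        if not (remaining[best] - covered) and covered:
            covered = set()  # restart the additional pass
            continue
        order.append(best)
        covered |= remaining.pop(best)
    return order

# Hypothetical statement coverage per test.
cov = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {1, 2}, "t4": {5}}
print(prioritize_by_additional_coverage(cov))  # ['t1', 't2', 't4', 't3']
```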
|
7 |
Comparison of GUI test automation strategies in a Clearing System: A case study at Nasdaq Stockholm AB
Idris, Asil (January 2021)
The effectiveness of GUI-based automated tests is a topic that is often discussed in the testing community. As software systems and GUIs become more advanced, testing them can be both time-consuming and repetitive. Automated tests are therefore used more frequently to enable more agile ways of working and to decrease the number of manual tests. There is an ongoing discussion about the benefits and shortcomings of implementing automated GUI tests compared to performing manual GUI tests. Increasingly, companies today implement automated GUI tests, yet relatively few analyze their choice of test strategy thoroughly. The purpose of this thesis was to explore different test strategies in a system as complex as Nasdaq's clearing system NRTC. The first strategy used manual GUI tests, while the second used a test tool called Selenium WebDriver to automate the GUI tests. Hopefully, this research will contribute to the accumulated knowledge about automated GUI tests and potentially help in choosing the right testing strategy. The findings showed that the implementation time of the manual tests was much shorter, and that implementing the automated tests would be time-consuming; the reason is that the GUI was not very testable, making it difficult to create the test scripts. In addition, there was a difference in runtime between the two strategies: the automated tests took only a couple of seconds to execute, compared to the manual tests that took a couple of minutes. After both strategies were investigated, an analysis of effectiveness, maintainability, robustness and the overall results was done to identify benefits and shortcomings. In conclusion, GUI automation cannot replace manual testing completely; without human capabilities, the testing process becomes very limited.
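For readers unfamiliar with the tool, the snippet below is a generic Selenium WebDriver script of the kind the automated strategy relies on. The URL and element locators are placeholders invented for illustration; the actual NRTC GUI and its test scripts are internal to Nasdaq.

```python
# Generic Selenium WebDriver sketch (not an actual NRTC test): log in through
# a web GUI and assert on a page element. URL and locators are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://clearing.example.com/login")           # placeholder URL
    driver.find_element(By.ID, "username").send_keys("tester")  # placeholder locators
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    # Wait for the trades view instead of sleeping, then assert on its header.
    header = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, "h1.trades-header"))
    )
    assert "Trades" in header.text
finally:
    driver.quit()
```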
|
8 |
Predicting and Estimating Execution Time of Manual Test Cases - A Case Study in Railway Domain
Ameerjan, Sharvathul Hasan (January 2017)
Testing plays a vital role in the software development life cycle by verifying and validating the software's quality. Since software testing is considered an expensive activity, and because of the limitations of budget and resources, it is necessary to know the execution time of test cases for efficient planning of test-related activities such as test scheduling, prioritizing test cases and monitoring test progress. In this thesis, an approach is proposed to predict and estimate the execution time of manual test cases written in English natural language. The method uses test specifications and historical data available from previously executed test cases. Our approach works by obtaining timing information from every step of previously executed test cases. The collected data is used to estimate the execution time of non-executed test cases by mapping them using text from their test specifications. Using natural language processing, texts are extracted from the test specification document and mapped to the obtained timing information. After estimating the time from this mapping, a linear regression analysis is used to predict the execution time of non-executed test cases. A case study has been conducted at Bombardier Transportation (BT), where the proposed method was implemented and the results validated. The obtained results show that the predicted execution times of the studied test cases are close to their actual execution times.
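As a rough sketch of the general idea (not the thesis's actual pipeline), the example below maps specification text to historical timings with TF-IDF features and fits a linear regression; the specifications and timings are invented for illustration.

```python
# Sketch: predict execution time of a non-executed test case from its
# specification text, using TF-IDF features and linear regression.
# Feature choice and toy data are assumptions, not the thesis pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression

executed_specs = [
    "press the brake button and verify the brake indicator lights up",
    "open the door panel and check the door status signal",
    "set speed to 40 km/h and verify the speed display",
]
executed_minutes = [12.0, 8.0, 15.0]   # measured execution times (hypothetical)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(executed_specs)
model = LinearRegression().fit(X, executed_minutes)

new_spec = ["press the emergency brake and check the brake status signal"]
predicted = model.predict(vectorizer.transform(new_spec))
print(f"predicted execution time: {predicted[0]:.1f} minutes")
```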
|
9 |
Mjukvarutester: En studie om när manuella respektive automatiserade tester används i praktiken (Software testing: a study of when manual and automated tests are used in practice)
Nami, Fereshta; Laurent, Lisa (January 2021)
The focus of this study is to examine when employees in the IT industry find it more favorable to use automated tests and manual tests, respectively. The purpose is to investigate how different companies use, work with and think about the two test methods in practice. Four factors that influence the choice of test method were developed as a framework, based on five articles that all discuss the requirements and criteria for the two methods. Through an interview study with semi-structured interviews, data was collected from two different companies. The data was then analyzed based on the four factors: the number of test cases/test runs, technical aspects, the functions to be tested, and resources. The analysis shows that the opinions of the respondents and of previous research are often along the same lines. The various criteria could thus be discussed, and the respondents' motivations for when they use each method could be outlined. However, it also becomes clear that, in the end, it is mainly resources, often the number of working hours and the monetary cost, that determine which testing practice is used.
|
10 |
Using Imitation Learning for Human Motion Control in a Virtual Simulation
Akrin, Christoffer (January 2022)
Test Automation is becoming a more vital part of the software development cycle, as it aims to lower the cost of testing and allow for a higher test frequency. However, automating manual tests can be difficult because they tend to require complex human interaction. In this thesis, we aim to address this by using Imitation Learning as a tool for automating manual software tests. The software under test consists of a virtual simulation connected to a physical input device in the form of a sight. The sight can rotate on two axes, yaw and pitch, which requires human motion control. Based on this, we use a Behavioral Cloning approach with a k-NN regressor trained on human demonstrations. The model's resemblance to the human is evaluated by comparing the state paths taken by the model and the human. The model's task performance is measured with a score based on the time taken to stabilize the sight pointing at a given object in the virtual world. The results show that a simple k-NN regression model using high-level states and actions, and with limited data, can imitate the human motion well. The model tends to be slightly faster than the human on the task while keeping the motion realistic. It also shows signs of human errors, such as overshooting the object at higher angular velocities. Based on the results, we conclude that using Imitation Learning for Test Automation can be practical for specific tasks where capturing human factors is important. However, further exploration is needed to identify the full potential of Imitation Learning in Test Automation.
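To make the behavioral-cloning idea concrete, here is a minimal sketch of a k-NN regressor mapping sight states to human actions; the state/action layout and the demonstration data are assumptions for illustration, not the thesis's actual setup.

```python
# Behavioral-cloning sketch: a k-NN regressor maps sight states to the actions
# a human took in recorded demonstrations. Data and state/action layout are
# assumptions for illustration only.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical demonstrations: state = [yaw_error, pitch_error],
# action = [yaw_rate, pitch_rate] commanded by the human in that state.
states = np.array([[10.0, 5.0], [4.0, 2.0], [1.0, 0.5], [-3.0, -1.0], [-8.0, -4.0]])
actions = np.array([[6.0, 3.0], [2.5, 1.2], [0.6, 0.3], [-1.8, -0.6], [-5.0, -2.5]])

policy = KNeighborsRegressor(n_neighbors=3).fit(states, actions)

# At run time, the cloned policy proposes an action for the current sight state.
current_state = np.array([[5.0, 2.5]])
yaw_rate, pitch_rate = policy.predict(current_state)[0]
print(f"commanded rates: yaw={yaw_rate:.2f}, pitch={pitch_rate:.2f}")
```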
|