1 |
Testování platformy JBoss Drools založené na modelu / Model-Based Testing of JBoss Drools. Široký, Petr. January 2014
Model-based testing (MBT) uses a model of the system's expected behavior to automatically generate a set of tests, aiming to reduce testing cost compared to traditional testing techniques. This work focuses on testing a real-world software system with the selected MBT tool OSMO. The system under test compiles business rules and is one of the main components of the Drools platform, developed by Red Hat. The work describes how MBT was introduced, taking into account its reception by the developer community, the creation of input models for the compiler, and the evaluation of the newly created test suite. Applying MBT led to the detection of five reported and three potential issues in the tested code. Using the Drools compiler as an example, the work summarizes the main strengths and weaknesses of using MBT techniques in practice.
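To make the idea concrete, here is a minimal, self-contained Java sketch of the kind of behavioural model an MBT tool walks to generate tests: guarded steps chosen at random form a test sequence. The step names and guard logic are purely illustrative assumptions; this is not the thesis's OSMO model nor OSMO's actual API.

import java.util.*;

// Hypothetical behavioural model of a rule compiler: steps with guards,
// walked at random to produce a test sequence (the core idea behind MBT).
public class CompilerModelWalk {
    // Each step has a name, a guard (is it allowed now?) and an action.
    record Step(String name, java.util.function.BooleanSupplier guard, Runnable action) {}

    static boolean ruleAdded = false;   // minimal model state

    public static void main(String[] args) {
        List<Step> steps = List.of(
            new Step("addRule", () -> true,      () -> ruleAdded = true),
            new Step("compile", () -> ruleAdded, () -> {/* call compiler, assert no errors */}),
            new Step("reset",   () -> true,      () -> ruleAdded = false)
        );
        Random rnd = new Random(42);          // fixed seed: reproducible test sequence
        List<String> testCase = new ArrayList<>();
        for (int i = 0; i < 10; i++) {        // generate one 10-step test case
            List<Step> enabled = steps.stream().filter(s -> s.guard().getAsBoolean()).toList();
            Step chosen = enabled.get(rnd.nextInt(enabled.size()));
            chosen.action().run();
            testCase.add(chosen.name());
        }
        System.out.println("Generated test case: " + testCase);
    }
}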
|
2 |
Automation of the creation and execution of system level hardware-in-loop tests through model-based testing. Almasri, Ahmed; Aronsson Karlsson, Viktor. January 2022
The automatic creation of test cases has been a well-researched area in recent years, yet industrial testing practice still relies largely on traditional manual approaches: new methods continue to be investigated, but research results have not been fully adopted. In this paper, the model-based testing (MBT) method is applied to evaluate the ability to automate the creation of hardware-in-the-loop (HIL) test cases, with tests generated by MBT tools. The tools' properties were compared in a literature study, from which the tools for a case study were selected: GraphWalker and MoMuT. The generated test cases perform similarly to their manual counterparts in that both achieve full requirements coverage. Regarding the effort needed to apply the methods, a comparable effort is required for the first iteration, while for every subsequent update MBT requires less effort than the manual process. Both methods achieve 100% requirements coverage, but since manual tests are created and executed by humans, some requirements are favoured over others due to company demands, whereas MBT tests are generated randomly. In addition, a comparison of the tools highlighted differences in model design and test case generation: GraphWalker has a more straightforward design method and is better suited to smaller systems, while MoMuT can handle more complex systems but has a more involved design method. The results of the thesis show that using MBT tools is helpful, as they cover the system requirements, can be executed in HIL, and help discover faults in the requirements and the HIL system, which satisfies the companies' demands. This thesis shows a promising improvement in automating the test process within the vehicular domain.
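As a rough illustration of how graph-based MBT tools such as GraphWalker derive test sequences, the sketch below models a system as a directed graph and walks it at random until every edge is covered. The states, actions, and stop condition are hypothetical assumptions, not taken from the thesis or from GraphWalker's API.

import java.util.*;

// Illustrative graph model (vertices = HIL system states, edges = stimuli),
// walked at random until every edge has been visited (edge coverage),
// which mirrors how graph-based MBT tools derive test sequences.
public class EdgeCoverageWalk {
    record Edge(String from, String action, String to) {}

    public static void main(String[] args) {
        List<Edge> edges = List.of(                    // hypothetical HIL model
            new Edge("Off",     "powerOn",   "Idle"),
            new Edge("Idle",    "startTest", "Running"),
            new Edge("Running", "stopTest",  "Idle"),
            new Edge("Idle",    "powerOff",  "Off")
        );
        Random rnd = new Random(1);
        Set<Edge> visited = new HashSet<>();
        String state = "Off";
        List<String> path = new ArrayList<>();
        while (visited.size() < edges.size()) {        // stop at 100% edge coverage
            final String s = state;
            List<Edge> out = edges.stream().filter(e -> e.from().equals(s)).toList();
            Edge chosen = out.get(rnd.nextInt(out.size()));
            visited.add(chosen);
            path.add(chosen.action());
            state = chosen.to();
        }
        System.out.println("Test sequence: " + path);
    }
}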
|
3 |
Abordagens para avaliação experimental de testes baseado em modelos de aplicações reativas. / Approaches for experimental evaluation of tests based on reactive application models. NASCIMENTO, Laísa Helena Oliveira do. 27 August 2018
Previous issue date: 2008-02-28
Software testing processes have been gaining ground in industry. Companies are investing in the definition and formalization of their test processes and, in this context, Model-Based Testing (MBT) appears as a promising testing technique. However, industrial adoption of MBT remains low, and researchers are focusing on how to overcome the barriers to wider adoption. The business world is driven by processes and results, so MBT must adapt to existing testing processes, and case studies that demonstrate the benefits of its use need to be conducted. In this work, the Goal Question Metric paradigm is used to define measurement models whose main focus is evaluating and monitoring the performance of MBT without impacting the existing test process. The measurement models consider metrics such as effort, percentage of testable requirements covered, percentage of modified test cases, and percentage of failures, among others. The models are not tied to the MBT process presented and can be applied in any process that allows the collection of the data needed to compute the metrics. To validate the measurement models, case studies were conducted within Motorola's testing environment.
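For a concrete sense of the measurements such a GQM model collects, the following small Java sketch computes two of the percentages mentioned above from raw counts; all figures and field names are hypothetical, not data from the thesis.

// Hypothetical raw counts collected from one test cycle, used to compute
// two of the percentages a GQM-style measurement model might track.
public class MbtMetrics {
    public static void main(String[] args) {
        int testableRequirements = 120;   // hypothetical figures
        int requirementsCovered  = 102;
        int totalTestCases       = 450;
        int modifiedTestCases    = 63;

        double requirementCoverage = 100.0 * requirementsCovered / testableRequirements;
        double modifiedRatio       = 100.0 * modifiedTestCases / totalTestCases;

        System.out.printf("Testable requirements covered: %.1f%%%n", requirementCoverage);
        System.out.printf("Test cases modified:           %.1f%%%n", modifiedRatio);
    }
}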
|
4 |
Investigation of similarity-based test case selection for specification-based regression testing. OLIVEIRA NETO, Francisco Gomes de. 10 April 2018
Previous issue date: 2014-07-30
During software maintenance, several modifications can be made to a specification model in order to satisfy new requirements. Performing regression testing on modified software is known to be a costly and laborious task. Test case selection, test case prioritization, and test suite minimisation, among other methods, aim to reduce these costs by selecting or prioritizing a subset of test cases so that less time, effort, and thus money is spent on regression testing. In this doctoral research, we explore the general problem of automatically selecting test cases in a model-based testing (MBT) process where specification models have been modified. Our technique, named Similarity Approach for Regression Testing (SART), selects the subset of test cases that traverse modified regions of a software system's specification model. The strategy relies on similarity-based test case selection, where similarities between test cases from different software versions are analysed to identify modified elements in a model. In addition, we propose an evaluation approach named Search Based Model Generation for Technology Evaluation (SBMTE), based on stochastic model generation and search-based techniques, to generate large samples of realistic models that enable experiments with model-based techniques. Based on SBMTE, researchers can develop model generator tools that create a space of models based on statistics from real industrial models and then draw samples from that space to perform experiments. Here we developed a generator that creates instances of Annotated Labelled Transition Systems (ALTS) to be used as input to our MBT process, and then performed an experiment with SART. In this experiment we concluded that SART's test suite size reduction is robust, selecting subsets with on average 92% fewer test cases while ensuring coverage of all model modifications and revealing defects linked to those modifications. Both SART and our experiment are available through the LTS-BT tool, enabling researchers to use our selection strategy and reproduce our experiment.
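The sketch below gives a simplified flavour of similarity-based selection: test cases are treated as sets of model transitions, and a new-version test case that is sufficiently dissimilar from every old-version test case is kept for regression. It is an illustrative stand-in under assumed thresholds, not the SART algorithm itself.

import java.util.*;

// Illustrative similarity-based selection: test cases are sets of model
// transitions; a test case from the new version that is dissimilar to every
// old one is assumed to exercise modified regions and is kept for regression.
public class SimilaritySelection {
    static double jaccard(Set<String> a, Set<String> b) {
        Set<String> inter = new HashSet<>(a); inter.retainAll(b);
        Set<String> union = new HashSet<>(a); union.addAll(b);
        return union.isEmpty() ? 1.0 : (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        List<Set<String>> oldSuite = List.of(
            Set.of("t1", "t2", "t3"), Set.of("t1", "t4"));
        List<Set<String>> newSuite = List.of(
            Set.of("t1", "t2", "t3"),        // unchanged path
            Set.of("t1", "t4", "t9"),        // traverses a new transition t9
            Set.of("t7", "t8"));             // entirely new region

        double threshold = 0.8;              // hypothetical cut-off
        for (Set<String> tc : newSuite) {
            double best = oldSuite.stream().mapToDouble(o -> jaccard(tc, o)).max().orElse(0);
            if (best < threshold) {
                System.out.println("Select for regression: " + tc);
            }
        }
    }
}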
|
5 |
Similarity-based test suite reduction in the context of Model-Based Testing. / Similaridade baseada em redução de suítes de teste no contexto do teste baseado em modelos. COUTINHO, Ana Emília Victor Barbosa. 04 May 2018
Previous issue date: 2015-03-20 / Capes
|
6 |
Software Testing: A Comparative Study, Model Based Testing vs Test Case Based Testing. Polamreddy, Rakesh Reddy; Irtaza, Syed Ail. January 2012
Software testing is considered one of the key phases of the software development life cycle (SDLC). The main objective of software testing is to detect faults, either through manual testing or through an automated testing approach. The most commonly adopted software testing approach in industry is test case based testing (TCBT), which is usually done manually; software testers mainly use TCBT to formalize and guide their testing activities and to set theoretical principles for testing. On the other hand, model based testing (MBT) is a widely used automated software testing technique for generating and executing tests. Both techniques show their merits in practice, each with pros and cons, yet no formal comparison between them is available. The main objective of this thesis is to find out how test cases in TCBT and MBT differ in terms of test coverage (statement, branch, and path), requirements traceability, cost, and time. To fulfil the aims of the research we conducted interviews for static validation and later ran an experiment to validate those results dynamically. The analysis of the experiment results showed that it is very hard to make MBT-generated test cases traceable to the requirements, particularly with the open-source tool ModelJUnit; this can, however, be done with commercial tools such as Microsoft Spec Explorer or Conformiq Qtronic. Furthermore, the experiment showed that MBT consumes less time and is therefore more cost-effective than TCBT, and that MBT achieves better test coverage than TCBT. In our case, however, requirements traceability was better with the traditional TCBT approach than with MBT.
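One way to mitigate the traceability gap noted above is to tag each model step with the requirement IDs it exercises, so that any generated sequence can be mapped back to requirements. The sketch below illustrates this idea; the step names and requirement IDs are hypothetical, and the approach is an illustration rather than the thesis's method.

import java.util.*;

// Illustrative way to retrofit requirements traceability onto generated tests:
// each model step is tagged with the requirement IDs it exercises, so any
// generated sequence can be mapped back to the requirements it covers.
public class TraceabilityReport {
    static final Map<String, Set<String>> STEP_TO_REQS = Map.of(
        "login",    Set.of("REQ-01"),
        "addItem",  Set.of("REQ-02", "REQ-03"),
        "checkout", Set.of("REQ-04"),
        "logout",   Set.of("REQ-01"));

    public static void main(String[] args) {
        // A test case as produced by some MBT generator (hypothetical sequence).
        List<String> generatedTest = List.of("login", "addItem", "checkout", "logout");

        Set<String> covered = new TreeSet<>();
        for (String step : generatedTest) {
            covered.addAll(STEP_TO_REQS.getOrDefault(step, Set.of()));
        }
        System.out.println("Requirements covered by this test: " + covered);
    }
}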
|
7 |
Uma abordagem dirigida por modelos para geração automática de casos de teste de integração usando padrões de teste. / A model-driven approach for automatically generating integration test cases using test patterns. MACIEL, Camila de Luna. 16 August 2018
Previous issue date: 2010-08-06 / CNPq
In software engineering, new development paradigms have been emerging in order to offer greater productivity without sacrificing the quality of the developed software. MDD (Model-Driven Development) is one of these paradigms; its main purpose is to introduce rigorous models throughout the software development process, offering, among other advantages, automatic code generation from models. However, even in development processes that follow this paradigm, the software testing activity is still essential, especially integration testing, whose purpose is to verify that software components that were implemented and tested separately provide the desired functionality when made to interact with each other. While individual components may function correctly, several new faults can arise when the components are integrated. Moreover, in integration testing, depending on the system's complexity, the number of test cases can be very large. In this context, the use of test patterns, i.e., strategies that have already been used and proved effective in software testing, can guide the choice of the most effective and appropriate test cases among a very large number of possible test cases. The main objective of this work is to propose a new integration testing approach, defined within an integrated model-driven development and testing process (MDD/MDT - Model-Driven Testing), for automatically generating test cases from models, adopting test patterns as the basis of the generation process. To automate this process, we developed a tool based on model-to-model transformations following MDA (Model-Driven Architecture) practices. Furthermore, the proposed approach uses the UML Testing Profile to document all generated test artifacts. Additionally, preliminary experimental studies were performed in order to evaluate the proposed approach and, consequently, the tool developed to support it.
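To illustrate what "generating test cases from models" can look like in its simplest form, the sketch below turns a toy interaction model into JUnit-style test skeletons via plain model-to-text generation. Real MDA tool chains operate on richer metamodels and the UML Testing Profile; all names here are illustrative assumptions, not the thesis's transformations.

import java.util.*;

// Toy model-to-text transformation: a minimal "interaction model" (caller,
// callee, operation, expected result) is turned into a JUnit-style test
// skeleton. All names are illustrative and not taken from the thesis.
public class IntegrationTestGenerator {
    record Interaction(String caller, String callee, String operation, String expected) {}

    static String toTestSkeleton(Interaction i) {
        return String.join("\n",
            "@Test",
            "public void " + i.caller() + "Calls" + i.callee() + "_" + i.operation() + "() {",
            "    // arrange: wire " + i.caller() + " to a real " + i.callee(),
            "    // act:     invoke " + i.operation() + "()",
            "    // assert:  result equals \"" + i.expected() + "\"",
            "}");
    }

    public static void main(String[] args) {
        List<Interaction> model = List.of(
            new Interaction("OrderService", "PaymentGateway", "charge", "APPROVED"),
            new Interaction("OrderService", "Inventory", "reserve", "RESERVED"));
        model.forEach(i -> System.out.println(toTestSkeleton(i) + "\n"));
    }
}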
|