31

Filtering and reduction techniques of combinatorial tests / Filtrage et réduction de tests combinatoires

Triki, Taha 04 October 2013
The main objective of this thesis is to provide solutions to several issues in combinatorial testing. Combinatorial testing consists in generating tests that cover all combinations of defined input values. The first issue is that combinatorial testing can generate a large number of tests that are invalid with respect to the specification of the System Under Test (SUT). These invalid tests are typically those that fail the precondition check of a system operation. They must be discarded from the set of tests used to evaluate the SUT, because they lead to inconclusive verdicts. As a solution, we propose to couple combinatorial testing with an animation technique that relies on a specification to filter out invalid tests. In our work, combinatorial tests are generated from a test pattern, defined essentially as a sequence of operation calls together with sets of values for their parameters.
The unfolding of a complex test pattern, in which many operation calls and/or input values are used, may be subject to combinatorial explosion, making it impossible to derive valid tests from the pattern. This is the second issue addressed by this thesis. As a solution, we propose an incremental unfolding and animation process that filters out invalid tests at an early stage (within the sequence of operation calls) and thereby masters the combinatorial explosion. Further filtering mechanisms are proposed to discard tests that do not cover certain operation behaviors or do not fulfill a given property. Finally, the test suite generated from a test pattern can be too large to execute on a SUT with limited memory and CPU resources. This is known as the test suite reduction problem, the third issue of this thesis. As a solution, we propose a new test suite reduction technique based on annotations (called tags) inserted into the source code or the specification of the SUT. Executing or animating the tests produces a trace of the covered tags. Based on this trace, a family of equivalence relations is proposed to reduce a test suite, using criteria on the order and the number of repetitions of the covered tags.
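To make the incremental unfolding idea concrete, the following sketch (a toy reconstruction, not the thesis's tooling; the bounded-counter specification and its precondition/effect pairs are invented) unfolds a test pattern one operation call at a time and animates each prefix against the specification, so invalid combinations are pruned before the full cross product is built:

```python
from itertools import product

# Hypothetical animator: checks an operation's precondition against a
# specification state and, if it holds, returns the resulting state.
def animate(state, op, args):
    precondition, effect = SPEC[op]
    if not precondition(state, args):
        return None                       # invalid call: prune this prefix
    return effect(state, args)

def unfold(pattern, state, prefix=()):
    """Incrementally unfold a test pattern [(op, value_domains), ...],
    keeping only sequences that the specification can animate."""
    if not pattern:
        yield prefix
        return
    (op, domains), rest = pattern[0], pattern[1:]
    for args in product(*domains):
        nxt = animate(state, op, args)
        if nxt is not None:               # filter at an early stage
            yield from unfold(rest, nxt, prefix + ((op, args),))

# Toy specification of a bounded counter (an assumption for the example):
# each operation maps to (precondition, effect) over an integer state.
SPEC = {
    "inc": (lambda s, a: s + a[0] <= 10, lambda s, a: s + a[0]),
    "dec": (lambda s, a: s - a[0] >= 0,  lambda s, a: s - a[0]),
}

pattern = [("inc", [[4, 8]]), ("dec", [[5, 9]])]
print(list(unfold(pattern, 0)))   # only sequences with valid preconditions
```

Of the four combinations the pattern encodes, only one survives animation here; the rest are pruned as soon as a precondition fails, rather than after full generation.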
32

Análise de cobertura de critérios de teste estruturais a partir de conjuntos derivados de especificações formais: um estudo comparativo no contexto de aplicações espaciais / Structural coverage analysis of test sets derived from formal specifications: a comparative study in the space applications context

Paula Fernanda Ramos Herculano 24 April 2007
Testing techniques can be divided, at a first level, into those based on the code (white box) and those based on the specification (black box, or functional). Neither is complete on its own, since they aim at identifying different kinds of faults, and using them together can raise the confidence level of applications. Studies that contribute to a better understanding of the relationship between functional and structural techniques, of how they complement each other, and of how they can be used together are therefore important. This work was developed in the context of the PLAVIS project (PLAtform of software Validation & Integration on Space systems), and its objective is a comparative study between functional test case generation techniques (based on formal specifications) and structural criteria based on control flow and data flow, applied to the implementations. In a specific context, this study provides data on how these two techniques (functional and structural) relate to each other, supporting their combined use. In the broader context of the PLAVIS project, it aims to establish a testing strategy based on functional and structural criteria which, together with the tools that support them, can compose a testing environment available for use in space applications at INPE.
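As a flavor of how such a comparison can be instrumented, the sketch below (a generic illustration; the triangle-classification function and the test set are invented, not taken from the PLAVIS study) runs a functional, specification-derived test set and records which lines of the implementation it exercises, exposing structural elements the functional tests leave uncovered:

```python
import sys

def triangle(a, b, c):
    """Implementation under test (a toy example, not from the study)."""
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

covered = set()
def tracer(frame, event, arg):
    # Record each executed line of the function under test.
    if event == "line" and frame.f_code.co_name == "triangle":
        covered.add(frame.f_lineno)
    return tracer

# A functional (black-box) test set derived from an input partition.
functional_tests = [(1, 1, 1), (2, 2, 3), (0, 1, 1)]

sys.settrace(tracer)
for t in functional_tests:
    triangle(*t)
sys.settrace(None)

print("lines of 'triangle' exercised:", sorted(covered))
# Lines never reached (here, the 'scalene' return) are exactly where a
# structural criterion would demand additional tests.
```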
33

Técnicas de testes aplicadas a software embarcado em redes ópticas / Tests techniques applied to embedded software in optical networks

Fadel, Aline Cristine, 1984- 19 August 2018
Advisors: Regina Lúcia de Oliveira Moraes, Eliane Martins / Master's dissertation, Universidade Estadual de Campinas, Faculdade de Tecnologia, 2011 / Abstract: This work presents the details and the results of automated and manual tests that used the fault injection technique and were applied to a GPON network. In the first experiment the test was automated and emulated physical faults based on the state machine of the network's embedded software; this test used an optical switch controlled by a test robot. The second experiment was a manual test, which injected faults into the communication messages of the network protocol in order to validate the fault tolerance mechanisms of the network's central software. This experiment used the Conformance and Fault injection methodology to prepare, execute, and report the results of the test cases. Both experiments also followed a standard test documentation intended to ease the reproduction of the tests, so that they can be applied in other environments. By applying these tests, the optical network can reach greater reliability, availability, and robustness, attributes that are essential for systems requiring high dependability.
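A minimal sketch of message-level fault injection in the spirit of the second experiment (the message layout and fault modes are invented; the dissertation's actual fault model for the GPON protocol is richer):

```python
import random

def inject_fault(message: bytes, mode: str, rng: random.Random) -> bytes:
    """Corrupt a protocol message to exercise fault-tolerance code.

    The modes are generic illustrations, not the dissertation's actual
    fault model.
    """
    data = bytearray(message)
    if mode == "bitflip":                 # flip one random bit
        i = rng.randrange(len(data))
        data[i] ^= 1 << rng.randrange(8)
    elif mode == "truncate":              # drop the message tail
        data = data[: rng.randrange(1, len(data))]
    elif mode == "duplicate":             # deliver the message twice
        data += data
    return bytes(data)

rng = random.Random(42)   # fixed seed keeps the campaign reproducible
msg = b"\x01\x10ALARM\x00\x07"
for mode in ("bitflip", "truncate", "duplicate"):
    faulty = inject_fault(msg, mode, rng)
    print(mode, faulty.hex())
    # here the faulty message would be sent to the system under test and
    # its reaction checked against the expected fault-handling behavior
```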
34

Proof-of-concept of Model-based testing based on an UML-model of a water-level measurement system

Alshekhly, Zoubida, Gill, Namra January 2020
Software testing is a very important phase in software development, as it minimizes the risks in a software system; however, it consumes time and can be very expensive. With automatic test case generation, time consumption and cost can be reduced. Model-based testing is a method of testing a software system against a model of the system's behaviour, and automatic test case generation is often considered a favorable support for it. In this work, the concept of model-based testing is explored, and the embedded part of a water-level measurement system (WLM) is tested to investigate the efficiency of model-based testing on a software system. To this end, the model-based testing tool MoMut::UML is used to generate test cases from a UML model of the WLM system built in the UML modeling environment Eclipse-Papyrus. MoMut::UML implements a special type of model-based testing, model-based mutation testing: it injects faults into the UML model and generates test data from the fault-based model. In this way the behaviour of the system under test, here only the UML model of the water-level measurement system, is tested.
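To give a flavor of model-based mutation testing, the sketch below (a generic illustration; MoMut::UML's actual mutation operators and the WLM model differ) mutates single transitions of a small state machine and checks whether a test's observed state sequence distinguishes each mutant from the original model:

```python
# The original model as {(state, event): next_state}; states and events
# stand in for the WLM's UML model and are invented here.
WLM = {
    ("idle", "start"): "measuring",
    ("measuring", "high_water"): "alarm",
    ("measuring", "stop"): "idle",
    ("alarm", "reset"): "idle",
}

def mutants(model):
    """First-order mutants: redirect one transition to a wrong target."""
    states = {s for s, _ in model} | set(model.values())
    for key, target in model.items():
        for wrong in states - {target}:
            m = dict(model)
            m[key] = wrong                # the injected model-level fault
            yield m

def run(model, events, start="idle"):
    """Observed state sequence for a test (unknown events are ignored)."""
    trace = [start]
    for e in events:
        trace.append(model.get((trace[-1], e), trace[-1]))
    return trace

# A test kills a mutant if its observed trace differs from the original's.
test = ["start", "high_water", "reset"]
expected = run(WLM, test)
killed = sum(run(m, test) != expected for m in mutants(WLM))
print(f"test kills {killed} of {sum(1 for _ in mutants(WLM))} mutants")
```

Test data generated this way is fault-based by construction: a test is kept because it demonstrably distinguishes some mutated model from the original.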
35

A Test Framework for Executing Model-Based Testing in Embedded Systems

Iyenghar, Padma 25 September 2012
Model-Driven Development (MDD) and Model-Based Testing (MBT) are individually gaining inroads in embedded software engineering projects. However, their full-fledged, integrated usage in real-life embedded software engineering projects (e.g., industrially relevant examples) and the execution of MBT in resource-constrained embedded systems (e.g., a 16-bit system with 64 KiByte of memory) are emerging fields. Addressing these gaps, this thesis proposes an integrated model-based approach and test framework for executing model-based test cases, with minimal overhead, in embedded systems. Given a chosen System Under Test (SUT) and the system design model, a test framework generation algorithm generates the artifacts (i.e., the test framework) necessary for executing the model-based test cases. The main goal of the test framework is to enable test automation and test case execution from the host computer (which runs the test harness), so that only the test input data is executed on the target. The significant overhead of interpreting the test data on the target is eliminated, as the test framework makes use of a target debugger (communication and decoding agent) on the host and a target monitor (a software-based runtime monitoring routine) in the embedded system. In the prototype implementation of the proposed approach, the corresponding standardized languages, the Unified Modeling Language (UML) and the UML Testing Profile (UTP), are used for the MDD and MBT phases respectively. The applicability of the proposed approach is demonstrated by an experimental evaluation of the prototype on real-life examples. The empirical results indicate that the total time spent executing the test cases on the target (run-time complexity) comprises only the time the target monitor spends decoding the test input data and executing it in the embedded system. Similarly, the only memory required on the target for executing the model-based test cases is that of the software-based target monitor. A quantitative comparison of the percentage change in memory overhead (run-time memory complexity) indicates that the existing approach (e.g., in the MDD/MBT tool Rhapsody) introduces approximately 150% to 350% additional memory overhead for executing the test cases. In the proposed approach, by contrast, the target monitor is independent of the number of test cases to be executed and of their complexity; hence the percentage change in memory overhead shows a declining trend as code size increases for equivalent application scenarios (approximately 17% down to 2%). Thus, the proposed test automation approach provides the essential benefit of executing model-based tests without downloading the test harness to the target. It is demonstrated that executing test cases specified at higher abstraction levels (e.g., using UML sequence diagrams) in resource-constrained embedded systems is feasible, and how this may be realized using the proposed approach. Further, as the proposed runtime monitoring mechanism is time- and memory-aware, the overhead parameters can be accommodated in earlier phases of the embedded software development cycle (if necessary), and the target monitor can be included in the final production code.
These advantages highlight the scalability, applicability, reliability, and superiority of the proposed approach over existing methodologies for executing model-based test cases in embedded systems.
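The division of labor between host and target can be illustrated as follows (a simplified sketch in Python rather than target code; the opcode table and the simulated system state are invented): the host encodes a test case into a compact byte stream, and a small, test-case-independent monitor decodes and executes it on the target:

```python
import struct

# Host side: encode a model-based test case into compact input data, so
# only this byte stream (not a test harness) is downloaded to the target.
OPS = {"set_level": 0x01, "read_level": 0x02, "expect_alarm": 0x03}

def encode(test_case):
    blob = b""
    for op, arg in test_case:
        blob += struct.pack("<BH", OPS[op], arg)   # 3 bytes per step
    return blob

# Target side: a minimal 'monitor' that decodes and executes the stream.
# Its size is independent of the number and complexity of test cases.
def target_monitor(blob, sut):
    for i in range(0, len(blob), 3):
        op, arg = struct.unpack_from("<BH", blob, i)
        if op == 0x01:
            sut["level"] = arg
        elif op == 0x02:
            print("level =", sut["level"])
        elif op == 0x03:
            assert (sut["level"] > 100) == bool(arg), "verdict: fail"

sut = {"level": 0}
blob = encode([("set_level", 120), ("read_level", 0), ("expect_alarm", 1)])
print(f"{len(blob)} bytes sent to target")   # 9 bytes of test input data
target_monitor(blob, sut)
```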
36

Model-driven development of information systems

Wang, Chen-Wei January 2012
The research presented in this thesis is aimed at developing reliable information systems through the application of model-driven and formal techniques. These are techniques in which a precise, formal model of system behaviour is exploited as source code. As such a model may be more abstract, and more concise, than source code written in a conventional programming language, it should be easier and more economical to create, to analyse, and to change. The quality of the system model can be ensured through certain kinds of formal analysis, and the model fixed accordingly if necessary. Most valuably, the model serves as the basis for the automated generation or configuration of a working system. This thesis provides four research contributions. The first involves the analysis of a proposed modelling language targeted at the model-driven development of information systems. Logical properties of the language are derived, as are properties of its compiled form, a guarded substitution notation. The second involves the extension of this language, and its semantics, to permit the description of workflows on information systems. Workflows described in this way may be analysed to determine, in advance of execution, the extent to which their concurrent execution may introduce the possibility of deadlock or blocking: a condition that, in this context, is synonymous with a failure to achieve the specified outcome. The third contribution concerns the validation of models written in this language by adapting existing techniques of software testing to the analysis of design models. A methodology is presented for checking model consistency, on the basis of a generated test suite, against the intended requirements. The fourth and final contribution is the presentation of an implementation strategy for the language, targeted at standard, relational databases, and an argument for its correctness, based on a simple, set-theoretic semantics for structure and operations.
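As a rough illustration of the workflow analysis described in the second contribution (the guarded operations and workflows are invented, and the thesis works with a formal modelling language rather than Python), the sketch below explores all interleavings of concurrent workflows over a shared state and reports runs after which an unfinished workflow can no longer proceed, i.e., blocking:

```python
# Guarded operations over a shared information-system state; names and
# guards are invented for this illustration.
OPS = {
    "reserve": (lambda s: s["stock"] > 0,
                lambda s: {**s, "stock": s["stock"] - 1}),
}

def blocked_runs(workflows, state):
    """Explore every interleaving; yield runs after which some workflow
    is unfinished but no operation's guard holds (blocking)."""
    def explore(pcs, s, run):
        enabled = []
        for i, wf in enumerate(workflows):
            if pcs[i] < len(wf):
                guard, effect = OPS[wf[pcs[i]]]
                if guard(s):
                    enabled.append((i, effect, wf[pcs[i]]))
        if not enabled:
            if any(pc < len(wf) for pc, wf in zip(pcs, workflows)):
                yield run                      # blocking detected
            return
        for i, effect, name in enabled:
            nxt = list(pcs)
            nxt[i] += 1
            yield from explore(tuple(nxt), effect(s), run + [f"wf{i}:{name}"])
    yield from explore((0,) * len(workflows), state, [])

# Two concurrent workflows compete for a stock of 2 with 3 reservations:
# every interleaving eventually blocks, which the analysis reveals
# before any real execution.
for run in blocked_runs([["reserve", "reserve"], ["reserve"]], {"stock": 2}):
    print("blocked after:", run)
```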
37

Domain-Centered Product Line Testing

Lackner, Hartmut 11 July 2017
Consumer expectations of (software) products are growing continuously. Customers demand products tailored to their individual needs, so that they receive and pay for exactly the functionality they require. Producers react to these demands by offering their products in ever more variants. Product customization has reached a level where even classically mass-produced goods, like cars, can be configured into unique items. New paradigms, such as product line engineering, facilitate the development of such variant-rich systems and reduce the costs of development and production. Yet while development and production have become more efficient, quality assurance does not profit from these gains; on the contrary, testing in particular still treats each variant as a distinct product that must be tested sufficiently prior to production, which is no longer feasible for variant-rich systems. The test design methods presented in this thesis overcome this issue by integrating variability into the test design process. Previously, after sampling variants to reduce the test effort, test cases were designed on the basis of concrete products; instead, this thesis presents two approaches that lift the test design phase to the product line level.
The resulting test cases include requirements on variants, which must be fulfilled to execute a test successfully; since multiple variants may fulfill these requirements, each test case may be applicable to more than one variant. Having test cases with requirements enables sampling subsets of variants for testing: under the assumption that each test case must be executed once, variants can be sampled to meet predefined test goals, such as testing a minimal or a particularly diverse subset of variants. In this thesis, five sampling criteria are defined and evaluated by assessing the resulting tests for their fault detection potential. For this purpose, new criteria for assessing the fault detection capability of product line tests are established, enabling quantitative as well as qualitative assessment of such test cases for the first time. The results of the presented methods and sampling criteria are compared with each other and with state-of-the-art methods for product line testing. This comparison is carried out on four examples of different sizes, from small to industry-grade.
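A simplified sketch of sampling driven by test-case requirements (feature names and the greedy minimality goal are illustrative; the thesis defines five different criteria): variants are chosen so that every test case can be executed on at least one sampled variant, using a greedy set-cover heuristic:

```python
# Product-line test cases annotated with required features (all names
# invented). A variant can execute a test case iff its feature set
# includes every required feature.
test_reqs = {
    "t1": {"heater"},
    "t2": {"heater", "remote"},
    "t3": {"eco"},
    "t4": set(),                       # applicable to every variant
}
variants = {
    "v1": {"heater"},
    "v2": {"heater", "remote"},
    "v3": {"eco", "remote"},
}

def sample_minimal(test_reqs, variants):
    """Greedy set cover: choose few variants so that every test case is
    executable on at least one sampled variant."""
    uncovered = set(test_reqs)
    chosen = []
    while uncovered:
        best = max(variants, key=lambda v: sum(
            test_reqs[t] <= variants[v] for t in uncovered))
        covered = {t for t in uncovered if test_reqs[t] <= variants[best]}
        if not covered:
            raise ValueError(f"no variant can execute: {uncovered}")
        chosen.append(best)
        uncovered -= covered
    return chosen

print(sample_minimal(test_reqs, variants))    # e.g. ['v2', 'v3']
```

Swapping the objective (maximize feature diversity instead of minimizing the sample, say) yields a different criterion over the same requirement-annotated test cases, which is the degree of freedom the thesis exploits.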
38

Evaluating finite state machine based testing methods on RBAC systems / Avaliação de métodos de teste baseado em máquinas de estados finitos em sistemas RBAC

Damasceno, Carlos Diego Nascimento 09 May 2016
Access Control (AC) is a major pillar of software security. In short, AC ensures that only intended users can access resources, and that only the access required to accomplish a given task is granted. In this context, Role-Based Access Control (RBAC) has been established as one of the most important access control paradigms. In an organization, users receive responsibilities and privileges through roles, and in AC systems implementing RBAC, permissions are granted through the roles assigned to users. Despite this apparent simplicity, mistakes can occur during the development of RBAC systems and lead to faults or even security breaches; a careful verification and validation process therefore becomes necessary. Access control testing aims at exposing divergences between the actual and the intended behavior of access control mechanisms. Model-Based Testing (MBT) is a variant of testing that relies on explicit models, such as Finite State Machines (FSMs), to automate test generation. MBT has been successfully used for testing functional requirements, but investigations are still lacking on testing non-functional requirements such as access control, especially regarding test criteria.
In this Master's dissertation, two aspects of MBT for RBAC were investigated: FSM-based testing methods applied to RBAC, and test prioritization in the RBAC domain. First, one recent (SPY) and two traditional (W and HSI) FSM-based testing methods were compared on RBAC policies specified as FSM models. An experiment analyzed the characteristics (number of resets, average test case length, and test suite length) and the effectiveness of the test suites generated by the W, HSI, and SPY methods for five different RBAC policies. Then, three test prioritization methods were compared using the test suites generated in the previous investigation: a prioritization criterion based on RBAC similarity was introduced and compared to random prioritization and simple similarity. The results showed that the SPY method outperformed the W and HSI methods in the RBAC domain, and that RBAC similarity achieved an Average Percentage of Faults Detected (APFD) higher than the other approaches.
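The APFD metric used to compare the prioritization methods can be computed as in this sketch (the fault matrix is invented toy data, not the dissertation's subjects; the formula assumes every fault is detected by some test in the suite):

```python
def apfd(ordering, faults_detected_by):
    """Average Percentage of Faults Detected for a test ordering.

    faults_detected_by maps each test id to the set of faults it
    reveals. APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where n is
    the number of tests, m the number of faults, and TF_i the 1-based
    position of the first test that detects fault i.
    """
    n = len(ordering)
    all_faults = set().union(*faults_detected_by.values())
    m = len(all_faults)
    tf = {}
    for pos, test in enumerate(ordering, start=1):
        for f in faults_detected_by.get(test, set()):
            tf.setdefault(f, pos)        # first detection only
    return 1 - sum(tf[f] for f in all_faults) / (n * m) + 1 / (2 * n)

detects = {"t1": {"f1"}, "t2": {"f1", "f2"}, "t3": set(), "t4": {"f3"}}
print(apfd(["t2", "t4", "t1", "t3"], detects))  # good order: ~0.792
print(apfd(["t3", "t1", "t2", "t4"], detects))  # poor order: 0.375
```

A similarity-based criterion reorders the suite so that dissimilar tests run first; the orderings are then compared by exactly this kind of APFD score.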
39

Automatic generation of configurable test-suites for software product lines / Geração automática de conjuntos de teste configuráveis para linhas de produto de software

Fragal, Vanderson Hafemann 28 November 2017
Software Product Line Engineering (SPLE) is an approach to the development of similar products that explores the systematic reuse of software artifacts. The SPLE process includes several activities executed to ensure software quality. Quality assurance is of vital importance for achieving and maintaining high quality in all kinds of artifacts, such as products and processes. Testing activities are widely used in industry for quality management; however, the effort of testing is usually high, and increasing testing efficiency is a major concern of all systems engineering activities. A common means of increasing efficiency is the automation of test execution and test design. Automated test design can be performed using approaches such as Model-Based Testing (MBT), in which the real behavior of a software system is compared to an abstract test model. Several techniques, processes, and strategies have been developed for SPLE testing, but many problems in this research area remain open.
The challenge in focus is the reduction of the overall test effort required to test SPLE products. Test effort can be reduced by maximizing test reuse, using models that take advantage of the similarity between products. The goal of this thesis is to automate the generation of small test suites with high fault detection and low test redundancy between products. To achieve this goal, equivalent tests are identified for a set of products using complete and configurable test suites. Two research directions are explored: one is product-centered, and the other is product-line-centered. For test design, test suites with full fault coverage were generated from state machines with and without feature constraints, and a prototype tool was developed to automate the test design. The proposed approach was evaluated using examples, experimental studies, and an industrial case study from the automotive domain. The results indicate a test effort reduction of 36% in the first research direction, for a product line with 24 products; in the second research direction, the test effort reduction grows with the number of products that require testing: 15% for 6 products (case study) and 50% for 20 random products (experimental studies).
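A much-simplified sketch of a configurable test suite (presence conditions and feature names are invented, and applicability stands in for the thesis's behavioral equivalence derived from state machines): each test carries a condition over features, products select their applicable tests, and tests shared between products need to be executed only once:

```python
# A configurable test suite: each test carries a presence condition over
# features. A test applies to a product iff the product's feature
# selection satisfies the condition.
suite = {
    "t_base":   lambda f: True,
    "t_crash":  lambda f: "abs" in f,
    "t_cruise": lambda f: "cruise" in f and "abs" in f,
    "t_manual": lambda f: "cruise" not in f,
}

products = {
    "p1": {"abs"},
    "p2": {"abs", "cruise"},
    "p3": set(),
}

def tests_for(features):
    return {t for t, cond in suite.items() if cond(features)}

per_product = {p: tests_for(f) for p, f in products.items()}
for p, ts in sorted(per_product.items()):
    print(p, sorted(ts))

# Tests applicable to every product are candidates for reuse: executing
# them once and reusing the verdict reduces the overall test effort.
shared = set.intersection(*per_product.values())
print("shared across all products:", sorted(shared))
```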
40

Verification of behaviourist multi-agent systems by means of formally guided simulations / Verificação de sistemas multi-agentes comportamentalistas através de simulações formalmente guiadas

Silva, Paulo Salem da 28 November 2011
Multi-agent systems (MASs) can be used to model phenomena that can be decomposed into several interacting agents existing within an environment. In particular, they can be used to model human and animal societies for the purpose of analysing their properties by computational means. This thesis is concerned with the automated analysis of a particular kind of such social models, namely those based on behaviourist principles, in contrast with the more dominant cognitive approaches found in the MAS literature. The hallmark of behaviourist theories is the emphasis on defining behaviour in terms of the interaction between agents and their environment; in this manner, not merely reflexive actions but also learning, drives, and emotions can be defined. More specifically, in this thesis we introduce a formal agent architecture (specified with the Z Notation) based on the Behaviour Analysis theory of B. F. Skinner, and provide a suitable formal notion of environment (based on the pi-calculus process algebra) to bring such agents together as an MAS. Simulation is often used to analyse MASs. The techniques involved typically consist in implementing an MAS and then simulating it several times, either to collect statistics or to observe what happens through animation. However, simulations can be used in a more verification-oriented manner if one considers that they are actually explorations of large state spaces. In this thesis we propose a novel verification technique based on this insight, which consists in simulating an MAS in a guided way in order to check whether some hypothesis about it holds or not. To this end, we leverage the prominent position that environments have in the MASs of this thesis: the formal specification of the environment of an MAS serves to compute the possible evolutions of the MAS as a transition system, thereby establishing the state space to be investigated. In this computation, agents are taken into account by simulating them in order to determine, at each environmental state, what their actions are. Each simulation execution is a sequence of states in this state space, which is computed on the fly as the simulation progresses. The hypothesis to be investigated, in turn, is given as another transition system, called a simulation purpose, which defines the desirable and undesirable simulations (e.g., "every time the agent does X, it will do Y later"). It is then possible to check whether the MAS satisfies the simulation purpose according to a number of precisely defined notions of satisfiability. Algorithmically, this corresponds to building a synchronous product of these two transition systems (i.e., the MAS's and the simulation purpose) on the fly and using it to operate a simulator; that is to say, the simulation purpose is used to guide the simulator, so that only the relevant states are actually simulated. By the end, such an algorithm delivers either a conclusive or an inconclusive verdict. If conclusive, it becomes known whether the MAS satisfies the simulation purpose with respect to the observations made during the simulations; if inconclusive, it is possible to perform some adjustments and try again.
In summary, this thesis provides four novel elements: (i) an agent architecture; (ii) a formal specification of the environment of these agents, so that they can be composed into an MAS; (iii) a structure to describe the property of interest, which we named simulation purpose; and (iv) a technique to formally analyse the resulting MAS with respect to a simulation purpose. These elements are implemented in a tool called Formally Guided Simulator (FGS). Case studies executable in FGS are provided to illustrate the approach.
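A minimal sketch of the guided-simulation idea (states, events, and the purpose below are invented; FGS itself computes environment evolutions from a pi-calculus specification and simulates Z-specified agents): the environment and the simulation purpose are explored as a synchronous product built on the fly, so only runs the purpose can follow are simulated:

```python
ENV = {                      # environment state -> feasible events
    "s0": [("agent_does_X", "s1"), ("idle", "s0")],
    "s1": [("agent_does_Y", "s2"), ("idle", "s1")],
    "s2": [],
}
PURPOSE = {                  # (purpose state, event) -> purpose state
    ("q0", "agent_does_X"): "q1",
    ("q1", "agent_does_Y"): "success",   # "after X, eventually Y"
}

def guided(env_state, purpose_state, depth=6, trace=()):
    """DFS over the synchronous product; event sequences the purpose
    cannot follow are pruned instead of simulated."""
    if purpose_state == "success":
        return trace                      # conclusive: purpose satisfied
    if depth == 0:
        return None                       # inconclusive on this branch
    for event, nxt in ENV[env_state]:
        p_nxt = PURPOSE.get((purpose_state, event))
        if p_nxt is None and event != "idle":
            continue                      # purpose can't follow: prune
        found = guided(nxt, p_nxt or purpose_state, depth - 1,
                       trace + (event,))
        if found is not None:
            return found
    return None

print(guided("s0", "q0"))   # a run satisfying the purpose, if one exists
```

In FGS the agents themselves are simulated at each environmental state to determine which events are feasible; here the ENV table plays that role for brevity.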
