1

Automatic test vector generation and coverage analysis in model-based software development

Andersson, Jonny January 2005 (has links)
Thorough testing of software is necessary to assure the quality of a product before it is released. The testing process requires substantial resources in software development. Model-based software development provides new possibilities to automate parts of the testing process, and automated tests can save valuable time. This thesis focuses on different ways to utilize models for automatic generation of test vectors and on how test coverage analysis can be used to assure the quality of a test suite or to find "dead code" in a model. Different test-automation techniques have been investigated and applied to a model of an adaptive cruise control (ACC) system used at Scania. Since source code was generated automatically from the model, model coverage and code coverage have also been compared. The work resulted in a new method for creating test vectors for models, based on a combinatorial test technique.
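The abstract does not spell out the combinatorial technique itself; as a hedged illustration of the general idea (not Andersson's actual method), a greedy all-pairs generator can derive a compact test-vector set from a model's input signals. The signal names and value domains below are invented for the example.

```python
from itertools import combinations, product

# Hypothetical ACC model inputs and their value domains (illustrative only).
domains = {
    "vehicle_speed": [0, 70, 120],      # km/h
    "target_distance": [10, 50, 200],   # m
    "driver_override": [False, True],
    "radar_status": ["ok", "degraded"],
}

def pairwise_vectors(domains):
    """Greedy all-pairs generation: every pair of parameter values appears
    in at least one test vector, using far fewer vectors than the full
    Cartesian product."""
    names = list(domains)
    # All (parameter, value) pairs that still need to be covered.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in domains[a]
        for vb in domains[b]
    }
    vectors = []
    for candidate in product(*(domains[n] for n in names)):
        vec = dict(zip(names, candidate))
        newly = {
            ((a, vec[a]), (b, vec[b]))
            for a, b in combinations(names, 2)
        } & uncovered
        if newly:                       # keep only vectors that add coverage
            vectors.append(vec)
            uncovered -= newly
        if not uncovered:
            break
    return vectors

if __name__ == "__main__":
    tests = pairwise_vectors(domains)
    print(f"{len(tests)} vectors instead of "
          f"{3 * 3 * 2 * 2} exhaustive combinations")
```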
2

Performance Analysis and Deployment Techniques for Wireless Sensor Networks

She, Huimin January 2012 (has links)
Recently, the wireless sensor network (WSN) has become a promising technology with a wide range of applications such as supply chain monitoring and environment surveillance. It is typically composed of multiple tiny devices equipped with limited sensing, computing and wireless communication capabilities. Design of such networks presents several technical challenges while dealing with various requirements and diverse constraints. Performance analysis and deployment techniques are required to provide insight on design parameters and system behaviors. Based on network calculus, a deterministic analysis method is presented for evaluating the worst-case delay and buffer cost of sensor networks. To this end, traffic splitting and multiplexing models are proposed and their delay and buffer bounds are derived. These models can be used in combination to characterize complex traffic flowing scenarios. Furthermore, the method integrates a variable duty cycle to allow the sensor nodes to operate at low rates and thus save power. In an attempt to balance traffic load and improve resource utilization and performance, traffic splitting mechanisms are introduced for sensor networks with general topologies. To provide reliable data delivery in sensor networks, retransmission has been one of the most popular schemes. We propose an analytical method to evaluate the maximum data transmission delay and energy consumption of two types of retransmission schemes: hop-by-hop retransmission and end-to-end retransmission. In order to validate the tightness of the bounds obtained by the analysis method, the simulation results and analytical results are compared under various input traffic loads. The results show that the analytic bounds are correct and tight. Stochastic network calculus has been developed as a useful tool for Quality of Service (QoS) analysis of wireless networks. We propose a stochastic service curve model for the Rayleigh fading channel and then provide formulas to derive the probabilistic delay and backlog bounds in the cases of deterministic and stochastic arrival curves. The simulation results verify that these bounds are also tight. Moreover, a detailed mechanism for bandwidth estimation of random wireless channels is developed. The bandwidth is derived from the measurement of statistical backlogs based on probe packet trains. It is expressed by statistical service curves that are allowed to violate a service guarantee with a certain probability. The theoretical foundation and the detailed step-by-step procedure of the estimation method are presented. One fundamental application of WSNs is event detection in a Field of Interest (FoI), where a set of sensors is deployed to monitor any ongoing events. To satisfy a certain level of detection quality in such applications, it is desirable that events in the region can be detected by a required number of sensors. Hence, an important problem is how to conduct sensor deployment to achieve certain coverage requirements. In this thesis, a probabilistic event coverage analysis method is proposed for evaluating the coverage performance of heterogeneous sensor networks with randomly deployed sensors and stochastic event occurrences. Moreover, we present a framework for analyzing node deployment schemes in terms of three performance metrics: coverage, lifetime, and cost. The method can be used to evaluate the benefits and trade-offs of different deployment schemes and thus provide guidelines for network designers.
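The deterministic network-calculus bounds mentioned here have well-known closed forms in the simplest setting; the sketch below computes the textbook delay and buffer bounds for a token-bucket-constrained flow crossing a rate-latency server. It is an illustrative baseline, not the thesis's splitting and multiplexing models, and the numbers are made up.

```python
def rate_latency_bounds(burst_b, rate_r, service_rate_R, latency_T):
    """Textbook deterministic network-calculus bounds for a flow with
    token-bucket arrival curve alpha(t) = b + r*t crossing a node with
    rate-latency service curve beta(t) = R * max(t - T, 0), assuming
    stability (r <= R):
        delay   <= T + b / R   (horizontal deviation of alpha and beta)
        backlog <= b + r * T   (vertical deviation of alpha and beta)
    """
    if rate_r > service_rate_R:
        raise ValueError("unstable: arrival rate exceeds service rate")
    delay_bound = latency_T + burst_b / service_rate_R
    backlog_bound = burst_b + rate_r * latency_T
    return delay_bound, backlog_bound

# Illustrative sensor-node numbers (not taken from the thesis).
d, q = rate_latency_bounds(burst_b=2_000, rate_r=500,
                           service_rate_R=4_000, latency_T=0.05)
print(f"worst-case delay <= {d:.3f} s, worst-case buffer <= {q:.0f} bits")
```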
3

Automated Navigation Model Extraction For Web Load Testing

Kara, Ismihan Refika 01 December 2011 (has links) (PDF)
Web pages serve a huge number of internet users in nearly every area, and adequate testing is needed to address the problems of web domains and provide more efficient and accurate services. We present an automated tool to test web applications against execution errors and the errors that occur when many users connect to the same server concurrently. Our tool, called NaMoX, extracts the clickables of the web pages and creates a navigation model using a depth-first search algorithm. NaMoX simulates a number of users, parses the developed model, and tests the model using branch coverage analysis. We have performed experiments on five web sites and report the response times observed when a click operation is performed. We found 188 errors in total. Quality metrics are extracted and applied to the case studies.
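NaMoX itself is not described beyond this abstract; a minimal sketch of the underlying idea, crawling clickables depth-first to build a navigation model, might look like the following, where the clickable-extraction callback and the toy site map are hypothetical placeholders.

```python
from collections import defaultdict

def extract_navigation_model(start_url, get_clickables, max_depth=3):
    """Depth-first exploration of a site's clickables.

    `get_clickables(url)` is a hypothetical callback returning
    (label, target_url) pairs for the page at `url`; a real tool would
    drive a browser or HTTP client. The result is a navigation model:
    a graph mapping each visited page to its outgoing clicks.
    """
    model = defaultdict(list)   # url -> list of (label, target_url)
    visited = set()

    def dfs(url, depth):
        if url in visited or depth > max_depth:
            return
        visited.add(url)
        for label, target in get_clickables(url):
            model[url].append((label, target))
            dfs(target, depth + 1)

    dfs(start_url, 0)
    return dict(model)

# Toy example with a canned site map instead of live HTTP requests.
site = {
    "/": [("login", "/login"), ("products", "/products")],
    "/login": [("home", "/")],
    "/products": [("item", "/products/1")],
    "/products/1": [],
}
print(extract_navigation_model("/", lambda url: site.get(url, [])))
```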
4

Análise da influência do uso de domínios de parâmetros sobre a eficiência da verificação funcional baseada em estimulação aleatória. / Analysis of the influence of using parameter domains on random-stimulation-based functional verification efficiency.

Castro Marquez, Carlos Ivan 10 February 2009 (has links)
One of the strongest restrictions throughout the IC design flow is the need for shorter development cycles. This, along with the constant demand for more functionality, has been the main cause for the appearance of the so-called System-on-Chip (SoC) architectures, consisting of systems that contain dozens of reusable hardware blocks (Intellectual Properties, or IPs). The increasing complexity makes it necessary to thoroughly verify such models in order to guarantee 100% functional applications. Among the current verification techniques, functional verification has received important attention, since it represents an alternative that keeps HDL validation costs low throughout the circuit's design cycle. Functional verification is based on testbenches, and it works by exploring the whole (or relevant) functionality of the model, applying test cases in a sequential fashion. This allows the model to be tested with all desired input sequences and combinations. There are different techniques concerning testbench design, random stimulation being an important approach by which a huge number of test cases can be created automatically. In order to ease the application of stimuli in circuit simulation, it is common to limit the range of the space defined by the input parameters and to group such restricted parameters into sub-spaces. In testbench development, random stimuli generators may be created containing overlapping sub-spaces (resulting in redundant stimuli) or sub-spaces containing conditions of no interest (resulting in invalid stimuli). It is possible to eliminate, or at least reduce, redundant and invalid test cases by modifying the input stimuli space, thus diminishing the time required to complete the simulation of HDL models. In this work, the application of a technique aimed at organizing the input stimuli space by means of IP parameter domains is analyzed. A verification methodology based on it is developed, including a tool for automatic coding of random stimuli generators in SystemC: GET_PRG. Results of applying this methodology are compared to cases where test vectors are generated from the complete verification space. As expected, the number of redundant and invalid test cases applied to the testbenches was always greater for random stimulation on the whole (unreduced, unorganized) input stimuli space, with a longer testbench execution time.
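As a hedged illustration of constraining random stimulus generation to parameter domains (this is not GET_PRG, and the parameter names are invented), a generator can draw values only from declared sub-spaces and discard vectors it has already produced:

```python
import random

# Hypothetical IP parameter domains: each parameter is restricted to the
# sub-ranges that are actually of interest, instead of its full range.
DOMAINS = {
    "burst_len": [range(1, 5), range(16, 33)],   # skip mid-range values
    "addr_mode": [("linear", "wrap")],           # only valid modes
    "payload":   [range(0, 256)],
}

def random_stimulus(domains, seen):
    """Draw one stimulus from the restricted input space, discarding
    duplicates so redundant test cases are not re-applied."""
    while True:
        vec = tuple(
            random.choice(random.choice(subspaces))
            for subspaces in domains.values()
        )
        if vec not in seen:          # redundant stimulus -> try again
            seen.add(vec)
            return dict(zip(domains, vec))

seen = set()
for _ in range(5):
    print(random_stimulus(DOMAINS, seen))
```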
5

Scalable data-flow testing / Teste de fluxo de dados escalável

Araujo, Roberto Paulo Andrioli de 15 September 2014 (has links)
Data-flow (DF) testing was introduced more than thirty years ago, aiming at verifying a program by extensively exploring its structure. It requires tests that traverse paths in which the assignment of a value to a variable (a definition) and its subsequent reference (a use) are verified. This relationship is called a definition-use association (dua). While control-flow (CF) testing tools have been able to tackle systems composed of large and long-running programs, DF testing tools have failed to do so. This situation is in part due to the costs associated with tracking duas at run time. Recently, an algorithm called the Bitwise Algorithm (BA), which uses bit vectors and bitwise operations for tracking intra-procedural duas at run time, was proposed. This research presents an implementation of BA for programs compiled into Java bytecode. Previous DF approaches were able to deal only with small to medium-size programs, with high penalties in terms of execution time and memory. Our experimental results show that by using BA we are able to tackle large systems with more than 250 KLOC and 300K required duas. Furthermore, for several programs the execution penalty was comparable with that imposed by a popular CF testing tool.
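The abstract gives only the intuition behind the Bitwise Algorithm; a simplified sketch of tracking definition-use associations with bit vectors for a single variable (not the authors' bytecode-level implementation) could look like this:

```python
# Simplified intra-procedural dua tracking with bit vectors (illustrative;
# a real tool instruments bytecode rather than calling these hooks by hand).

# Hypothetical duas for one variable: (dua_id, def_site, use_site)
DUAS = [(0, "d1", "u1"), (1, "d2", "u1"), (2, "d1", "u2")]

live_def = 0          # bit i set <=> def-site i is the last reaching def
covered = 0           # bit i set <=> dua i has been exercised
DEF_BIT = {"d1": 1 << 0, "d2": 1 << 1}

def on_definition(site):
    """A new definition kills previous ones for the same variable."""
    global live_def
    live_def = DEF_BIT[site]

def on_use(site):
    """A use covers every dua whose def-site bit is currently live."""
    global covered
    for dua_id, def_site, use_site in DUAS:
        if use_site == site and live_def & DEF_BIT[def_site]:
            covered |= 1 << dua_id

# Execution trace: x defined at d2, used at u1, redefined at d1, used at u2.
for event, site in [("def", "d2"), ("use", "u1"), ("def", "d1"), ("use", "u2")]:
    (on_definition if event == "def" else on_use)(site)

print(f"covered duas bitmap: {covered:03b}")   # expect 110: duas 1 and 2
```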
6

On Efficient Automated Methods for Simulation Output Data Analysis

Brozovic, Martin January 2014 (has links)
With the increase in computing power and advances in software engineering in recent years, computer-based stochastic discrete-event simulation has become a very commonly used tool for evaluating the performance of various complex stochastic systems (such as telecommunication networks). It is used when analytical methods are too complex to solve or cannot be applied at all. Stochastic simulation has also become a tool that researchers often use instead of experimentation in order to save money and time. In this thesis, we focus on the statistical correctness of the final estimated results in the context of steady-state simulations performed for the mean analysis of performance measures of stable stochastic processes. Due to various approximations, the final experimental coverage can differ greatly from the assumed theoretical level, with the final confidence intervals covering the theoretical mean at a much lower frequency than expected from the preset theoretical confidence level. We present the results of coverage analysis for the methods of dynamic partially-overlapping batch means, spectral analysis, and mean-squared-error optimal dynamic partially-overlapping batch means. The results show that the variants of dynamic partially-overlapping batch means that we propose as their modification under Akaroa2 perform acceptably well for the queueing processes, but perform very badly for the autoregressive process. We compare the results of the modified mean-squared-error optimal dynamic partially-overlapping batch means method to spectral analysis and show that the methods perform equally well.
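A minimal sketch of the underlying ideas is shown below: classical non-overlapping batch means with a Student-t confidence interval, plus an empirical coverage check against a known mean. The thesis studies dynamic partially-overlapping variants under Akaroa2, which this is not; the data here are i.i.d. uniform draws used purely for illustration.

```python
import random
from math import sqrt
from statistics import mean, stdev

def batch_means_ci(observations, n_batches=20, t_crit=2.093):
    """Classical non-overlapping batch means: split the output series into
    batches, treat the batch means as approximately i.i.d., and build a
    Student-t confidence interval (t_crit ~ t_{0.975, 19} for 20 batches)."""
    size = len(observations) // n_batches
    bmeans = [mean(observations[i*size:(i+1)*size]) for i in range(n_batches)]
    centre = mean(bmeans)
    half = t_crit * stdev(bmeans) / sqrt(n_batches)
    return centre - half, centre + half

# Coverage experiment: how often does the interval contain the true mean?
random.seed(1)
TRUE_MEAN, hits, runs = 0.5, 0, 200
for _ in range(runs):
    data = [random.random() for _ in range(4000)]   # i.i.d. U(0,1), mean 0.5
    lo, hi = batch_means_ci(data)
    hits += lo <= TRUE_MEAN <= hi
print(f"empirical coverage: {hits / runs:.2f} (nominal 0.95)")
```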
7

Recherche de vulnérabilités logicielles par combinaison d'analyses de code binaire et de frelatage (Fuzzing) / Software vulnerability research combining fuzz testing and binary code analysis

Bekrar, Sofia 10 October 2013 (has links)
Fuzz testing (a.k.a. fuzzing) is one of the most effective approaches for discovering security vulnerabilities in large, closed-source software. Despite their wide use in the software industry, traditional fuzzing techniques suffer from poor coverage, which results in a large number of false negatives. The other common drawback is the lack of knowledge about the application internals, which limits their ability to generate high-quality inputs; such techniques therefore have limited fault-detection capabilities. We present an automated smart fuzzing approach which combines advanced binary code analysis techniques. Our approach has five components: a test suite reduction technique, to optimize the ratio between the amount of covered code and the execution time; a fast and optimized code coverage measurement technique, to evaluate fuzzing effectiveness; a static analysis technique, to locate potentially sensitive sequences of code with respect to vulnerability patterns; an origin-aware dynamic taint analysis technique, to precisely identify the input fields that may trigger potential vulnerabilities; and finally, an evolutionary test generation technique, to produce relevant inputs. We implemented our approach as an integrated tool chain and evaluated it on numerous industrial case studies. The obtained results demonstrate its effectiveness in automatically discovering zero-day vulnerabilities in major closed-source applications. Our approach is relevant to both defensive and offensive security purposes.
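As a hedged sketch of the coverage-guided, evolutionary feedback loop that ties such components together (not Bekrar's tool chain; the target, instrumentation, and mutation operators are stand-ins), the core cycle is simply: mutate inputs and keep those that reach new code.

```python
import random

def run_target(data: bytes):
    """Stand-in for executing the instrumented target: returns the set of
    covered branch ids. A real fuzzer would trace the binary instead."""
    cov = {0}
    if data[:1] == b"F":
        cov.add(1)
    if data[:2] == b"FU":
        cov.add(2)
    if b"\xff" in data:
        cov.add(3)
    return cov

def mutate(data: bytes) -> bytes:
    """Byte-level mutations: flip, insert, or delete one byte."""
    data = bytearray(data or b"\x00")
    op = random.choice(("flip", "insert", "delete"))
    i = random.randrange(len(data))
    if op == "flip":
        data[i] ^= 1 << random.randrange(8)
    elif op == "insert":
        data.insert(i, random.randrange(256))
    elif len(data) > 1:
        del data[i]
    return bytes(data)

def fuzz(seed: bytes, iterations=20_000):
    corpus, global_cov = [seed], run_target(seed)
    for _ in range(iterations):
        child = mutate(random.choice(corpus))
        cov = run_target(child)
        if cov - global_cov:        # new branches -> keep as a new seed
            corpus.append(child)
            global_cov |= cov
    return corpus, global_cov

random.seed(0)
corpus, cov = fuzz(b"AAAA")
print(f"covered branches: {sorted(cov)}, corpus size: {len(corpus)}")
```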
