41

Load testing on an alarm server : The scope of the thesis project was to develop an automatic load-testing tool for the partner company.

Babic, Leo, Falkman, Sebastian January 2023 (has links)
What is crucial for an alarm server to work correctly? Companies today rely on alarm servers that must stay operational around the clock, so these servers must be tested properly. This is where load testing comes in: by load testing a server before placing it in production, the company can be confident that it stays operational even under heavy loads. Automating the process ensures consistent quality across test runs and removes human error. In this thesis, we describe our research on Locust, an open-source load-testing tool, the important aspects of testing, and the methods employed in our software. First and foremost, the research gave us valuable insights into the basic principles of load testing: which aspects must be included, and which tests should be performed? Furthermore, it led us to conclude that Locust is the most suitable open-source tool for our purpose, owing to its superior performance compared to JMeter. The software was then developed by combining guidance from the Locust documentation with instructions from the partner company. The most important aspects of the Locust-based tool are the construction of the load shape and the metrics to be recorded. The partner required that the software integrate into their CI/CD (Continuous Integration and Continuous Deployment) pipeline and accept input parameters for scaling the load test. Finally, while metrics are recorded and presented, they serve only to verify that the test itself functions correctly; the thesis does not evaluate the quality of the alarm server. The collected metrics will instead be evaluated in the company's pipeline in the future. The results fulfilled all the given tasks except for the input parameters, which were limited by Locust's functionality.
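The record does not reproduce the thesis's code, but the "load shape construction" it highlights maps onto Locust's LoadTestShape API. The sketch below is a minimal, assumed illustration: the /alarm endpoint, payload, and step parameters are invented for the example, not taken from the thesis.

```python
# Minimal Locust sketch: a stepped load shape for an assumed alarm endpoint.
from locust import HttpUser, LoadTestShape, task

class AlarmUser(HttpUser):
    @task
    def post_alarm(self):
        # Hypothetical endpoint and payload; the real alarm-server API
        # is not described in the record.
        self.client.post("/alarm", json={"severity": "high"})

class StepLoadShape(LoadTestShape):
    """Add step_users every step_time seconds, then stop at time_limit."""
    step_time = 60      # seconds per step
    step_users = 50     # users added per step
    time_limit = 600    # total test duration in seconds

    def tick(self):
        run_time = self.get_run_time()
        if run_time > self.time_limit:
            return None  # returning None ends the test
        current_step = run_time // self.step_time + 1
        return (int(current_step * self.step_users), self.step_users)
```

Run with `locust -f this_file.py --host https://alarm.example`, where the host is likewise a placeholder; the step parameters are the kind of scaling inputs the partner asked the tool to expose.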
42

An automatic test pattern generation in the logic gate level circuits and MOS transistor circuits at Ohio University

Lee, Hoon-Kyeu January 1986 (has links)
No description available.
43

Search State Extensibility based Learning Framework for Model Checking and Test Generation

Chandrasekar, Maheshwar 20 September 2010 (has links)
The increasing design complexity and shrinking feature sizes of hardware designs have made design verification and manufacturing test resource-intensive phases in the product life-cycle of a digital system. At the same time, time-to-market constraints demand faster verification and test phases; otherwise the result may be a buggy design or a defective product. This trend in the semiconductor industry has considerably increased the complexity and importance of the Design Verification, Manufacturing Test, and Silicon Diagnosis phases of a digital system's production life-cycle. In this dissertation, we present a generalized learning framework, which can be customized to the common solving technique for problems in these three phases. During Design Verification, the conformance of the final design to its specifications is verified. Simulation-based and formal verification are the two widely known techniques for design verification. Although the former technique can increase confidence in the design, only the latter can ensure the correctness of a design with respect to a given specification. Originally, design verification techniques were based on Binary Decision Diagrams (BDDs), but such techniques are now based on branch-and-bound procedures to avoid space explosion. However, branch-and-bound procedures may explode in time instead; thus efficient heuristics and intelligent learning techniques are essential. In this dissertation, we propose a novel extensibility relation between search states and a learning framework that aids in identifying non-trivial redundant search states during the branch-and-bound search procedure. Further, we propose a probability-based heuristic to guide our learning technique. First, we utilize this framework in a branch-and-bound based preimage computation engine. Next, we show that it can be used to perform an upper-approximation based state space traversal, which is essential for handling industrial-scale hardware designs. Finally, we propose a simple but elegant image extraction technique that utilizes our learning framework to compute an over-approximate image space. This image computation is later leveraged to create an abstraction-refinement based model checking framework. During Manufacturing Test, test patterns are applied to the fabricated system, in a test environment, to check for the existence of fabrication defects. Such patterns are usually generated by Automatic Test Pattern Generation (ATPG) techniques, which assume certain fault types to model arbitrary defects. The sizes of the fault list and the test set have a major impact on the economics of manufacturing test. Towards this end, we propose a fault collapsing approach to compact the target fault list for ATPG techniques. Further, ATPG techniques have from the very beginning been based on branch-and-bound procedures that model the problem in the Boolean domain. However, ATPG is a problem in the multi-valued domain; thus we propose a multi-valued ATPG framework to exploit this underlying nature. We also employ our learning technique for branch-and-bound procedures in this multi-valued framework. To improve yield in high-volume manufacturing, silicon diagnosis identifies a set of candidate defect locations in a faulty chip. Subsequently, physical failure analysis, an extremely time-consuming step, uses these candidates as an aid to locate the defects. To reduce the number of candidates passed to the physical failure analysis step, efficient diagnostic patterns are essential.
Towards this objective, we propose an incremental framework that utilizes our learning technique for a branch-and-bound procedure. Further, it learns from the ATPG phase where detection-patterns are generated and utilizes this information during diagnostic-pattern generation. Finally, we present a probability based heuristic for X-filling of detection-patterns with the objective of enhancing the diagnostic resolution of such patterns. We unify these techniques into a framework for test pattern generation with good detection and diagnostic ability. Overall, we propose a learning framework that can speed up design verification, test and diagnosis steps in the life cycle of a hardware system. / Ph. D.
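As a loose, assumed illustration of the dissertation's central idea, recording search states during branch-and-bound and pruning new states that a learned state already covers, consider the Python sketch below. The callbacks and state representation are invented for the example and are not the dissertation's actual formulation.

```python
# Illustrative sketch: branch-and-bound with learned pruning of
# redundant search states (not the dissertation's actual algorithm).

def branch_and_bound(initial_state, expand, is_solution, subsumes):
    """Depth-first branch-and-bound that records fruitless states and
    prunes any new state that a recorded state subsumes (roughly, the
    'extensibility' idea: a covered state cannot lead anywhere new)."""
    learned = []              # states already proven fruitless
    stack = [initial_state]
    while stack:
        state = stack.pop()
        if is_solution(state):
            return state
        # Learning step: skip states an already-learned state covers.
        if any(subsumes(old, state) for old in learned):
            continue
        children = expand(state)
        if not children:
            learned.append(state)   # dead end: remember it
        stack.extend(children)
    return None
```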
44

Test data generation framework for object-oriented software

Ferreira, Fernando Henrique Inocêncio Borba 13 December 2012 (has links)
Test data generation is a mandatory activity of the software testing process. In general, it is carried out by testing practitioners, which makes it costly and makes its automation necessary. Existing frameworks that support this activity are restrictive, providing only one data generation technique, a single fitness function to evaluate individuals, and a single selection algorithm. This work describes the JaBTeG (Java Bytecode Test Generation) framework for test data generation. The main characteristic of JaBTeG is that it allows data generation methods to be built by selecting the data generation technique, the fitness function, the selection algorithm, and the structural testing criterion. Using JaBTeG, new methods for test data generation can be developed and experimented with. The framework is integrated with the JaBUTi (Java Bytecode Understanding and Testing) tool to support test data creation. Four data generation techniques, two fitness functions, and four selection algorithms were developed to validate the approach proposed by the framework. In addition, five programs with different characteristics were tested with data generated using the methods provided by JaBTeG.
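JaBTeG's code is not shown in the record; the hedged Python sketch below only illustrates the framework's key idea of plugging together a generation technique, a fitness function, and a selection algorithm. All names and the toy fitness measure are invented for illustration.

```python
# Illustrative sketch of a pluggable test-data generation loop in the
# spirit of JaBTeG: the technique, fitness function and selection
# algorithm are interchangeable components (all invented here).
import random

def mutation_technique(parents):
    # One "generation technique": random mutation of numeric inputs.
    return [p + random.randint(-10, 10) for p in parents]

def coverage_fitness(candidate):
    # Stand-in fitness: pretend larger magnitudes cover more branches.
    return abs(candidate)

def tournament_selection(population, fitness, k=2):
    # One "selection algorithm": best of k randomly sampled candidates.
    return max(random.sample(population, k), key=fitness)

def generate_test_data(technique, fitness, select, generations=50):
    population = [random.randint(-100, 100) for _ in range(20)]
    for _ in range(generations):
        parents = [select(population, fitness) for _ in range(20)]
        population = technique(parents)
    return max(population, key=fitness)

best = generate_test_data(mutation_technique, coverage_fitness,
                          tournament_selection)
print("best test input:", best)
```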
46

Passive interoperability testing for communication protocols

Chen, Nanxing 24 June 2013 (has links)
In the field of networking, testing communication protocols is an important activity for validating protocol implementations before they are put into service. Generally, the services that a protocol must provide are described in its specification(s). A specification is typically a standard defined by standards bodies such as ISO (International Organization for Standardization), the IETF (Internet Engineering Task Force), or the ITU (International Telecommunication Union). The purpose of testing is to verify that protocol implementations work correctly and guarantee the quality of their services in order to meet customers' expectations. To achieve this goal, a variety of testing methods have been developed. Conformance testing verifies that a product conforms to its specification. Robustness testing determines the degree to which a system operates correctly in the presence of exceptional inputs or stressful environmental conditions. In this thesis, we focus on interoperability testing, which verifies that several network components cooperate correctly and provide the expected services. The general architecture of interoperability testing involves a system under test (SUT) consisting of at least two implementations under test (IUTs). The objectives of interoperability testing are to ensure that interconnected protocol implementations are able to interact correctly and, during their interaction, provide the services predefined in their specifications. In general, interoperability testing methods can be classified into two approaches: active and passive testing. Active testing is the more conventional technique: it tests the implementations (IUTs) by injecting a series of test messages (stimuli) and observing the corresponding outputs. However, active testing is intrusive by nature: the tester controls the IUTs and thus inevitably interrupts the normal operation of the system under test. In this sense, active testing is not well suited to interoperability testing, which is often carried out in operational networks, where it is difficult to insert arbitrary test messages without affecting the normal behavior and services of the system. Passive testing, by contrast, is a technique based only on observation. The tester does not need to interact with the SUT, so the test can be carried out without disturbing the normal operation of the system under test. Passive testing also has other advantages: for embedded systems to which the tester has no direct access, tests can still be performed by collecting the execution traces of the system and detecting errors by comparing the traces with the behavior described in the system's specification. In addition, passive testing makes it possible to monitor a system over a long period and report abnormalities at any time.
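As a loose illustration of passive testing (not the thesis's actual method), the sketch below checks an observed message trace against a toy finite-state specification without ever stimulating the SUT. The protocol states and messages are invented for the example.

```python
# Illustrative sketch of passive testing: validate an observed trace
# against a finite-state specification, observation only, no stimuli.
# The protocol (states, messages) is a toy example, not from the thesis.

SPEC = {  # (state, observed message) -> next state
    ("IDLE", "SYN"): "WAIT_ACK",
    ("WAIT_ACK", "SYN_ACK"): "CONNECTED",
    ("CONNECTED", "DATA"): "CONNECTED",
    ("CONNECTED", "FIN"): "IDLE",
}

def check_trace(trace, start="IDLE"):
    """Return (ok, state, position); ok is False at the first message
    the specification does not allow from the current state."""
    state = start
    for i, msg in enumerate(trace):
        nxt = SPEC.get((state, msg))
        if nxt is None:
            return False, state, i   # deviation from the specification
        state = nxt
    return True, state, len(trace)

ok, state, pos = check_trace(["SYN", "SYN_ACK", "DATA", "FIN"])
print(ok, state, pos)   # True IDLE 4
```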
47

Techniques to facilitate symbolic execution of real-world programs

Anand, Saswat 11 May 2012 (has links)
The overall goal of this research is to reduce the cost of software development and improve the quality of software. Symbolic execution is a program-analysis technique used to address several problems that arise in developing high-quality software. Although the technique is well understood, and performing symbolic execution on simple programs is straightforward, it is still not possible to apply it to the general class of large, real-world software. A symbolic-execution system can be effectively applied to large, real-world software if it has at least two features: efficiency and automation. However, efficient and automatic symbolic execution of real-world programs is a lofty goal, for both theoretical and practical reasons. Theoretically, achieving it requires solving an intractable problem (constraint solving). Practically, it requires an overwhelming engineering effort to implement a symbolic-execution system that can precisely and automatically execute real-world programs symbolically. This research makes three major contributions:
1. Three new techniques that address three important problems of symbolic execution. Compared to existing techniques, the new techniques
   * reduce the manual effort that may be required to symbolically execute programs that either generate complex constraints or contain parts that cannot be symbolically executed due to limitations of the system, and
   * improve the usefulness of symbolic execution (e.g., expose more bugs in a program) by enabling discovery of more feasible paths within a given time budget.
2. A novel approach that uses symbolic execution to generate test inputs for apps that run on modern mobile devices such as smartphones and tablets.
3. Implementations of the above techniques and empirical results from applying them to real-world programs, demonstrating their effectiveness.
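The record contains no code, but the core mechanism the abstract refers to, collecting path constraints and solving them for concrete inputs, can be shown on a toy example. The function, its paths, and the use of the Z3 solver below are assumptions for illustration, not the thesis's implementation.

```python
# Minimal illustration of symbolic execution (not the thesis's system):
# enumerate the path constraints of a tiny function and ask Z3 for
# concrete inputs that drive each path. Requires the z3-solver package.
from z3 import And, Int, Not, Solver, sat

x = Int("x")

# Paths of:  def f(x): if x > 10: if x % 2 == 0: BUG
paths = {
    "bug":        And(x > 10, x % 2 == 0),
    "odd branch": And(x > 10, Not(x % 2 == 0)),
    "early exit": Not(x > 10),
}

for name, constraint in paths.items():
    s = Solver()
    s.add(constraint)
    if s.check() == sat:                 # path is feasible
        print(name, "-> x =", s.model()[x])
```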
48

Proof-of-concept of Model-based testing based on an UML-model of a water-level measurement system

Alshekhly, Zoubida, Gill, Namra January 2020 (has links)
Software testing is a very important phase in software development, as it minimizes risks in a software system; however, it consumes time and can be very expensive. With automatic test case generation, time consumption and cost can be reduced. Model-based testing is a method of testing a software system using a model of the system's behaviour, and automatic test case generation is often considered a favorable support for it. In this work, the concept of model-based testing is explored, and the embedded part of a water-level measurement system (WLM) is tested to investigate the efficiency of model-based testing on a software system. To this end, a model-based testing tool, MoMut::UML, is used to generate test cases from the UML model of the WLM system, which is built in a UML modeling environment, Eclipse-Papyrus. MoMut::UML implements a special type of model-based testing technique, model-based mutation testing, which injects faults into the UML model and generates test data from the fault-injected model. In this way, the behaviour of the system under test, here only the UML model of the water-level measurement system, is tested.
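MoMut::UML's internals are not shown in the record, but the idea of model-based mutation testing, injecting a fault into the model and deriving a test that distinguishes the mutant from the original, can be sketched on a toy state machine. Everything below (the states, inputs, and mutation) is invented for illustration.

```python
# Illustrative sketch of model-based mutation testing (not MoMut::UML):
# mutate one transition of a toy state-machine model, then search for an
# input sequence on which mutant and original disagree; that sequence
# becomes a test case. The water-level model here is invented.
from itertools import product

ORIGINAL = {  # (state, input) -> next state
    ("LOW", "rise"): "OK",
    ("OK", "rise"): "HIGH",
    ("OK", "fall"): "LOW",
    ("HIGH", "fall"): "OK",
}

def run(model, seq, start="LOW"):
    state = start
    for inp in seq:
        state = model.get((state, inp), state)  # ignore undefined inputs
    return state

def find_killing_test(original, mutant, inputs, max_len=3):
    for n in range(1, max_len + 1):
        for seq in product(inputs, repeat=n):
            if run(original, seq) != run(mutant, seq):
                return seq   # this sequence "kills" the mutant
    return None

# Mutation: corrupt one transition target in the model.
mutant = dict(ORIGINAL)
mutant[("OK", "rise")] = "LOW"

print("test case:", find_killing_test(ORIGINAL, mutant, ["rise", "fall"]))
```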
49

Empirical Comparison Between Conventional and AI-based Automated Unit Test Generation Tools in Java

Gkikopouli, Marios, Bataa, Batjigdrel January 2023 (has links)
Unit testing plays a crucial role in ensuring the quality and reliability of software systems. However, manual testing can often be a slow and time-consuming process. With current advancements in artificial intelligence (AI), new tools have emerged for automated unit testing to address this issue. But how do these new AI tools compare to conventional automated unit test generation tools? To answer this question, we compared two state-of-the-art conventional unit test tools (EVOSUITE and RANDOOP) with the sole commercially available AI-based unit test tool (DIFFBLUE COVER) for Java. We tested them on 10 sample classes from 3 real-life projects provided by the Defects4J dataset to evaluate their performance regarding code coverage, mutation score, and fault detection. The results showed that EVOSUITE achieved the highest code coverage, averaging 89%, while RANDOOP and DIFFBLUE COVER achieved similar results, averaging 63%. In terms of mutation score, DIFFBLUE COVER had the lowest average score of 40%, while EVOSUITE and RANDOOP scored 67% and 50%, respectively. For fault detection, EVOSUITE and RANDOOP detected a higher number of bugs (7 out of 10 and 5 out of 10, respectively) compared to DIFFBLUE COVER, which found only 4 out of 10. Although the AI-based tool was outperformed in all three criteria, it still shows promise by being able to achieve adequate results, in some cases even surpassing the conventional tools while generating a significantly smaller number of total assertions and more comprehensive tests. Nonetheless, the study acknowledges its limitations in terms of the restricted number of AI-based tools used and the small number of projects utilized from Defects4J.
50

On Detection, Analysis and Characterization of Transient and Parametric Failures in Nano-scale CMOS VLSI

Sanyal, Alodeep 01 May 2010 (has links)
As we move deep into the nanometer regime of CMOS VLSI (the 45nm node and below), device noise margins are sharply eroded by the continuous lowering of device threshold voltages together with the ever-increasing rate of signal transitions driven by the constant demand for higher performance. Sharp erosion of noise margins vastly increases the likelihood of intermittent failures (also known as parametric failures) during device operation, as opposed to permanent failures caused by physical defects introduced during the manufacturing process. The major sources of intermittent failures are capacitive crosstalk between neighboring interconnects, abnormal drops in the power supply voltage (also known as droop), localized thermal gradients, and soft errors caused by the impact of high-energy particles on the semiconductor surface. In nanometer technology, these intermittent failures largely outnumber the permanent failures caused by physical defects. Therefore, it is of paramount importance to devise efficient test generation and test application methods to accurately detect and characterize these classes of failures. Soft error rate (SER) is an important design metric used in the semiconductor industry, represented by the number of such errors encountered per billion hours of device operation, known as the Failure-In-Time (FIT) rate. Soft errors are rare events. Traditional techniques for SER characterization involve testing multiple devices in parallel, or testing the device in a high-energy neutron bombardment chamber to artificially accelerate the occurrence of single events. Motivated by the fact that measuring SER incurs high time and cost overheads, in this thesis we propose a two-step approach: (i) a new filtering technique based on the amplitude of the noise pulse, which significantly reduces the set of soft-error-susceptible nodes to be considered for a given design; followed by (ii) an Integer Linear Program (ILP)-based pattern generation technique that accelerates the SER characterization process by one to two orders of magnitude compared to the current state of the art. During test application, it is important to distinguish between an intermittent failure and a permanent failure. Motivated by the fact that most intermittent failures are temporally sparse in nature, we present a novel design-for-testability (DFT) architecture that facilitates application of the same test vector twice in a row. The underlying assumption is that a soft fail will not manifest its effect in two consecutive test cycles, whereas the error caused by a physical defect will produce an identically corrupt output signature in both test cycles. Therefore, comparing the output signatures of two consecutive applications of the same test vector accurately distinguishes a soft fail from a hard fail. We show the application of this DFT technique in measuring soft error rate as well as other circuit-marginality-related parametric failures, such as thermal hot-spot induced delay failures. A major contribution of this thesis lies in investigating the effect of multiple sources of noise acting together to exacerbate the noise effect even further. The existing literature on signal integrity verification and test falls short of taking these combined noise effects into account. We focus in particular on capacitive crosstalk on long signal nets. A typical long net is capacitively coupled with multiple aggressors and also tends to have multiple fanout gates.
Gate leakage current that originates in the fanout receivers flows backward and terminates in the driver, causing a shift in the driver's output voltage. This effect becomes more prominent as the gate oxide is scaled more aggressively. In this thesis, we first present a dynamic simulation-based study to establish the significance of the problem, followed by an automatic test pattern generation (ATPG) solution that uses a 0-1 Integer Linear Program (ILP) to maximize the cumulative voltage noise at a given victim net due to crosstalk and gate leakage loading, in conjunction with propagating the fault effect to an observation point. Pattern pairs generated by this technique are useful both for manufacturing test application and for signal integrity verification of nanometer designs. This research opens up a new direction for studying nanometer noise effects and motivates us to extend the study to other noise sources in tandem, including voltage drop and temperature effects.
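The thesis's ILP formulation is not reproduced in this record; the sketch below only illustrates the general shape of a 0-1 ILP that selects switching aggressors to maximize coupled noise on a victim net. The noise weights and the single constraint are invented, and PuLP is used merely as a convenient solver front end.

```python
# Hedged illustration of a 0-1 ILP for noise maximization (weights and
# constraint invented, not the thesis's model): choose which aggressor
# nets switch so that the coupled noise on one victim net is maximized.
# Requires the PuLP package.
from pulp import LpMaximize, LpProblem, LpVariable, lpSum, value

# Assumed per-aggressor noise contributions on the victim (arbitrary units).
noise = [0.12, 0.35, 0.08, 0.27, 0.19]
n = len(noise)

prob = LpProblem("victim_noise", LpMaximize)
switch = [LpVariable(f"agg_{i}", cat="Binary") for i in range(n)]

# Objective: total coupled noise injected on the victim net.
prob += lpSum(noise[i] * switch[i] for i in range(n))

# Toy logic constraint standing in for ATPG justification requirements:
# e.g. aggressors 0 and 1 cannot switch in the same cycle.
prob += switch[0] + switch[1] <= 1

prob.solve()
print("switching aggressors:",
      [i for i in range(n) if value(switch[i]) == 1],
      "total noise:", value(prob.objective))
```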
