About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Automatic test generation for industrial control software

Enoiu, Eduard January 2016
Since the early days of software testing, automatic test generation has been suggested as a way of allowing tests to be created at a lower cost. However, industrially useful and applicable tools for automatic test generation are still scarce. As a consequence, the evidence regarding the applicability or feasibility of automatic test generation in industrial practice is limited. This is especially problematic if we consider the use of automatic test generation for industrial safety-critical control systems, such as those found in power plants, airplanes, or trains. In this thesis, we improve the current state of automatic test generation by developing a technique based on model checking that works with IEC 61131-3 industrial control software. We show how automatic test generation for IEC 61131-3 programs, containing both functional and timing information, can be solved as a model checking problem for both code and mutation coverage criteria. The developed technique has been implemented in the CompleteTest tool. To evaluate the potential application of our technique, we present several studies where the tool is applied to industrial control software. Results show that CompleteTest is viable for use in industrial practice; it is efficient in terms of the time required to generate tests that satisfy both code and mutation coverage, and it scales well for most of the industrial programs considered. However, our results also show that there are still challenges associated with the use of automatic test generation. In particular, we found that while automatically generated tests, based on code coverage, can exercise the logic of the software as well as tests written manually, and can do so in a fraction of the time, they do not show better fault detection compared to manually created tests. Specifically, it seems that manually created tests are able to detect more faults of certain types (i.e., logical replacement, negation insertion and timer replacement) than automatically generated tests. To tackle this issue, we propose an approach for improving fault detection by using mutation coverage as a test criterion. We implemented this approach in the CompleteTest tool and used it to evaluate automatic test generation based on mutation testing. While the resulting tests were more effective than automatic tests generated based on code coverage in terms of fault detection, they still were not better than manually created tests. In summary, our results highlight the need for improving the goals used by automatic test generation tools. Specifically, fault detection scores could be increased by considering new mutation operators as well as higher-order mutations. Our thesis suggests that automatically generated test suites are significantly less costly in terms of testing time than manually created test suites. One conclusion, strongly supported by the results of this thesis, is that automatic test generation is efficient but currently not quite as effective as manual testing. This is significant progress that needs to be studied further; we need to consider the implications and the extent to which automatic test generation can be used in the development of reliable safety-critical systems.
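
A minimal illustration of the mutation-coverage idea discussed above (not the CompleteTest implementation, which works on IEC 61131-3 code via model checking): the Python sketch below applies a hypothetical "negation insertion" mutant to a toy interlock function and searches the input space for a test that kills it. The function, the mutant, and the exhaustive search are all invented for the example.

```python
from itertools import product

def original(start, stop, level_ok):
    # hypothetical interlock: run only when started, not stopped, and the level is OK
    return start and not stop and level_ok

def mutant_negation(start, stop, level_ok):
    # "negation insertion" mutant: the level_ok condition is negated
    return start and not stop and not level_ok

def find_killing_test(program, mutant):
    """Return an input vector on which the mutant's output differs, if any."""
    for inputs in product([False, True], repeat=3):
        if program(*inputs) != mutant(*inputs):
            return inputs
    return None  # the mutant would be equivalent over this input space

if __name__ == "__main__":
    print("test that kills the mutant:", find_killing_test(original, mutant_negation))
```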
2

Fault simulation and test generation for small delay faults

Qiu, Wangqi 25 April 2007
Delay faults are an increasingly important test challenge. Traditional delay fault models are incomplete in that they model only a subset of delay defect behaviors. To solve this problem, a more realistic delay fault model has been developed which models delay faults caused by the combination of spot defects and parametric process variation. Based on the new model, a realistic delay fault coverage metric has been developed. Traditional path delay fault coverage metrics result in unrealistically low fault coverage, and the real test quality is not reflected. The new metric uses a statistical approach, and the simulation-based fault coverage is consistent with silicon data. Fast simulation algorithms are also included in this dissertation. The new metric suggests that testing the K longest paths per gate (KLPG) gives a high detection probability for small delay faults under process variation. In this dissertation, a novel automatic test pattern generation (ATPG) methodology to find the K longest testable paths through each gate for both combinational and sequential circuits is presented. Many techniques are used to reduce the search space and CPU time significantly. Experimental results show that this methodology is efficient and able to handle circuits with an exponential number of paths, such as the ISCAS85 benchmark circuit c6288. The ATPG methodology has been implemented on industrial designs. Speed binning has been done on many devices, and the silicon data has shown a significant benefit of the KLPG test compared to several traditional delay test approaches.
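
As an illustration of the "K longest paths per gate" selection described above, the sketch below enumerates the paths of a toy netlist modelled as a DAG with assumed per-edge delays and keeps the K longest paths through each gate. A real KLPG ATPG must additionally prove each path sensitizable and generate the corresponding patterns; the netlist, delays, and K value here are invented for the example.

```python
from collections import defaultdict

# hypothetical netlist: edges (driver -> load) annotated with a delay
EDGES = {
    "in1": [("g1", 2)],
    "in2": [("g1", 1), ("g2", 3)],
    "g1":  [("g3", 2)],
    "g2":  [("g3", 1)],
    "g3":  [("out", 1)],
}

def all_paths(node, prefix=(), length=0):
    """Enumerate (path, delay) pairs from `node` to the primary output."""
    fanout = EDGES.get(node, [])
    if not fanout:
        yield prefix + (node,), length
        return
    for successor, delay in fanout:
        yield from all_paths(successor, prefix + (node,), length + delay)

def k_longest_per_gate(k=2):
    """Keep only the K longest structural paths passing through each gate."""
    per_gate = defaultdict(list)
    for primary_input in ("in1", "in2"):
        for path, length in all_paths(primary_input):
            for gate in path:
                per_gate[gate].append((length, path))
    return {gate: sorted(paths, reverse=True)[:k] for gate, paths in per_gate.items()}

if __name__ == "__main__":
    for gate, paths in k_longest_per_gate().items():
        print(gate, paths)
```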
3

ANALYTICAL METHODS TO PROPAGATE AND DIAGNOSE SINGLE EVENT TRANSIENTS

Gangadhar, Sreenivas 01 August 2012
Rapidly shrinking technology nodes and aggressive voltage scaling have increased the probability of soft errors. In current deep sub-micron technology, a small inaccuracy in computing the probability of occurrence of a soft error results in an unacceptably large chip failure rate. We propose a method that considers timing information to accurately determine the probability of an SET propagating and resulting in an error. Disjoint covers of appropriately formulated functions are used for the probability computations in order to account for re-convergent paths in the circuit. The probabilities are calculated at the output gate for the different time instants at which an SET can propagate within a latching window, considering electrical attenuation. Bayes' theorem is used to model the SET injection. The method is extended to consider multiple SETs. A novel method is proposed to enhance the SET propagation probability, and it is shown how this can assist the hardening process. A method is also proposed to determine a set of patterns that must be applied at the inputs to determine propagation characteristics of the SET that are meaningful for hardening purposes. A heuristic based on the probabilistic framework for SET propagation is proposed to diagnose (on-line or off-line) the location and time of a strike based on errors observed at multiple points. The proposed diagnostic framework requires a new approach to calculating the probability of SET propagation to multiple non-independent variables. It is shown experimentally that error appearances at multiple observable points help in SET diagnosis. The time performance of the proposed diagnostic framework is compared against an alternative implementation; this is particularly important in on-line diagnosis. The proposed methods are experimentally verified on ISCAS and ITC benchmarks, considering both fixed gate delays and gate delays described by probability distribution functions.
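
The kind of quantity being computed can be conveyed by a much-simplified sketch: the probability that an SET at an internal node is observed at the output under uniformly random inputs, scaled by an assumed probability of arriving within the latching window. The thesis's actual method uses disjoint covers, Bayes' theorem for injection, and timing-accurate propagation; the toy circuit and the latching probability below are assumptions made purely for illustration.

```python
from itertools import product

def circuit(a, b, c, flip_g1=False):
    g1 = a & b          # SET injection site
    if flip_g1:
        g1 ^= 1         # the transient flips the struck node
    return g1 | c       # toy output gate

def logic_propagation_probability():
    """Fraction of input vectors for which the flipped node reaches the output."""
    sensitized = sum(
        circuit(a, b, c) != circuit(a, b, c, flip_g1=True)
        for a, b, c in product((0, 1), repeat=3)
    )
    return sensitized / 8.0  # uniform random inputs assumed

if __name__ == "__main__":
    p_logic = logic_propagation_probability()
    p_latch = 0.4  # assumed probability that the glitch arrives inside the latching window
    print("P(error observed) ~", p_logic * p_latch)
```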
4

Approaches to test set generation using binary decision diagrams

Wingfield, James 30 September 2004
This research pursues the use of powerful BDD-based functional circuit analysis to evaluate some approaches to test set generation. Functional representations of the circuit allow the measurement of information about faults that is not directly available through circuit simulation methods, such as probability of random detection and test-space overlap between faults. I have created a software tool that performs experiments to make such measurements and augments existing test generation strategies with this new information. Using this tool, I explored the relationship of fault model difficulty to test set length through fortuitous detection, and I experimented with the application of function-based methods to help reconcile the traditionally opposed goals of making test sets that are both smaller and more effective.
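
Two of the function-level measurements mentioned above, probability of random detection and test-space overlap between faults, can be illustrated directly. The thesis derives them from BDD representations of the circuit; the stand-in below simply enumerates the input space of a tiny hypothetical circuit with two single stuck-at faults.

```python
from itertools import product

def good(a, b, c):
    return (a & b) | c

def fault_a_sa0(a, b, c):
    # hypothetical fault: input a stuck-at-0
    return (0 & b) | c

def fault_b_sa0(a, b, c):
    # hypothetical fault: input b stuck-at-0
    return (a & 0) | c

def detecting_vectors(faulty):
    """Input vectors whose response differs from the fault-free circuit."""
    return {v for v in product((0, 1), repeat=3) if good(*v) != faulty(*v)}

if __name__ == "__main__":
    d1, d2 = detecting_vectors(fault_a_sa0), detecting_vectors(fault_b_sa0)
    print("P(random detection of the first fault):", len(d1) / 8.0)
    print("test-space overlap between the two faults:", d1 & d2)
```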
5

Test First Model-Driven Development

Shappee, Bartlett A 26 April 2012
Test Driven Development (TDD), Model-Driven Development (MDD), and test case generation, with their associated practices and tools, each in their own right promise to deliver robust, higher-quality code more economically than other approaches. These processes are not mutually exclusive but are not typically used together. This thesis develops a combined approach using complementary aspects of each of the three processes above. Test cases are described, generated, and then injected back into the model, which is then used to produce the test and production code. We have enhanced a model-driven tool to support the approach, adding a test case generator capable of understanding an augmented MDD software model and of utilizing the constraints captured in our test-centric language to generate model-level test cases back into the model. Our results show that, with a reduction in overall effort, one can produce a tested model-based system, with its tests and implementation generated for multiple platforms such as C and Java, using one of multiple xUnit test frameworks.
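
As a hypothetical illustration of the end product of such an approach, the sketch below shows a model-level constraint (an account balance that must never go negative) rendered as an xUnit test against generated code. The thesis targets platforms such as C and Java; Python's unittest stands in here, and the class and constraint are invented for the example.

```python
import unittest

class Account:
    """Stand-in for production code generated from the software model."""
    def __init__(self):
        self.balance = 0

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class AccountConstraintTest(unittest.TestCase):
    """The model-level constraint 'balance >= 0' expressed as an xUnit test case."""
    def test_withdraw_beyond_balance_is_rejected(self):
        account = Account()
        with self.assertRaises(ValueError):
            account.withdraw(10)
        self.assertGreaterEqual(account.balance, 0)

if __name__ == "__main__":
    unittest.main()
```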
6

Using ordered partial decision diagrams for manufacture test generation

Cobb, Bradley Douglas 30 September 2004
Because of limited tester time and memory, a primary goal of digital circuit manufacture test generation is to create compact test sets. Test generation programs that use Ordered Binary Decision Diagrams (OBDDs) as their primary functional representation excel at this task. Unfortunately, the use of OBDDs limits the application of these test generation programs to small circuits. This is because the size of the OBDD used to represent a function can be exponential in the number of the function's switching variables. Working with these functions can cause OBDD-based programs to exceed acceptable time and memory limits. This research proposes using Ordered Partial Decision Diagrams (OPDDs) instead as the primary functional representation for test generation systems. By limiting the number of vertices allowed in a single OPDD, complex functions can be partially represented in order to save time and memory. An OPDD-based test generation system is developed and techniques which improve its performance are evaluated on a small benchmark circuit. The new system is then demonstrated on larger and more complex circuits than its OBDD-based counterpart allows.
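
The core idea of a partial decision diagram, bounding memory by refusing to expand beyond a vertex budget, can be sketched as follows. Real OPDD and OBDD packages order the variables and share isomorphic subgraphs; this illustration only builds a plain Shannon-expansion tree and marks the branches it ran out of budget for as unknown.

```python
UNKNOWN = "?"

def build(f, variables, assignment, budget):
    """Shannon-expand f over `variables`, giving up once `budget` is exhausted."""
    if not variables:
        return f(**assignment)       # 0/1 terminal
    if budget[0] <= 0:
        return UNKNOWN               # partial representation: give up on this branch
    budget[0] -= 1
    var, rest = variables[0], variables[1:]
    return {var: {
        0: build(f, rest, {**assignment, var: 0}, budget),
        1: build(f, rest, {**assignment, var: 1}, budget),
    }}

if __name__ == "__main__":
    f = lambda a, b, c: (a & b) | c
    # with a budget of 5 vertices the a=1 half of the diagram is left partly unknown
    print(build(f, ["a", "b", "c"], {}, budget=[5]))
```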
7

Loginės funkcijos termų generavimo algoritmas pagrįstas programinio prototipo modeliu / Terms of logical function generation algorithm based on software prototype model

Žemaitis, Tomas 16 August 2007
Technological development is enabling the production of increasingly complex electronic systems. All of these systems must be verified and tested to guarantee correct behaviour. As complexity grows, testing is becoming one of the most significant factors contributing to the final product cost. The established low-level methods for hardware testing are no longer sufficient, and more work has to be done at abstraction levels higher than the classical gate and register-transfer levels. An algorithm has been implemented that randomly generates input stimuli, computes the response according to a software prototype model and, by distorting the value of one input signal at a time, determines possible terms of the outputs' logical functions. As further input stimuli are analysed, the determined terms are refined by eliminating partial terms. After randomly generating and analysing many input stimuli, the final terms of the outputs' logical functions are obtained. The algorithm does not guarantee that all terms of the logical functions are obtained exactly, but the derived terms can be used for test generation. Each derived term is recorded together with the input stimulus from which it was determined, and the computed terms themselves can already be used as verification tests. The obtained results can be used in further research: circuit testing, defect detection, comparison of logical function elements, and improvement of the algorithm. The main aspects of the design are introduced. The experimental accuracy of the results and the influencing factors (initial number of randomly generated test vectors, improvement coefficient, maximum... [to full text]
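
A simplified reconstruction of the described algorithm is sketched below, with a small Python function standing in for the software prototype model and with one plausible reading of "eliminating partial terms" (discarding terms whose literals are a proper subset of another derived term's). For each random input vector that produces output 1, the inputs whose individual flips change the output form a candidate term.

```python
import random

def prototype_model(v):
    """Hypothetical software prototype: one output as a logical function of 4 inputs."""
    a, b, c, d = v
    return (a and b) or (c and not d)

def candidate_term(v):
    """Literals the output actually depends on for this particular vector."""
    literals = []
    for i in range(len(v)):
        flipped = list(v)
        flipped[i] = not flipped[i]
        if prototype_model(flipped) != prototype_model(v):
            literals.append((i, v[i]))      # input i must hold the value v[i]
    return frozenset(literals)

def derive_terms(n_vectors=200, n_inputs=4, seed=0):
    random.seed(seed)
    terms = set()
    for _ in range(n_vectors):
        v = [random.random() < 0.5 for _ in range(n_inputs)]
        if prototype_model(v):              # collect terms of the on-set only
            terms.add(candidate_term(v))
    # eliminate partial terms: drop any term that is a proper subset of another
    return {t for t in terms if not any(t < other for other in terms)}

if __name__ == "__main__":
    for term in derive_terms():
        print(sorted(term))
```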
8

Software-centric and interaction-oriented system-on-chip verification.

Xu, Xiao Xi January 2009
As the complexity of very large scale integrated circuits (VLSI) soars, the complexity of verifying them increases even faster. Design verification becomes the biggest bottleneck in VLSI design, consuming around 70% of the effort and time in a typical design cycle. The problem is even more severe as the system-on-chip (SoC) design paradigm gains popularity. Unfortunately, the development of verification techniques has not kept up with the growth of design capability, and is being left further behind in the SoC era. In recent years, a new generation of hardware modelling languages, alongside best practices for using them, has emerged and evolved in an attempt to productively build an intelligent stimulation-observation environment referred to as the test-bench. Ironically, as test-benches become more powerful and sophisticated under these best practices, known as verification methodologies, the overall verification approaches of today are still officially described as ad hoc and experimental and are in great need of a methodological breakthrough. Our research was carried out to seek the desirable methodological breakthrough, and this thesis presents the research outcome: a novel and holistic methodology that brings an opportunity to address the SoC verification problems. Furthermore, our methodology is a solution completely independent of the underlying simulation technologies; therefore, it could extend its applicability into future VLSI designs. Our methodology presents two ideas. (a) We propose that system-level verification should resort to the SoC-native languages rather than the test-bench construction languages; the software native to the SoC should take more critical responsibilities than the test-benches. (b) We challenge the fundamental assumption that “objects-under-test” and “tests” are distinct entities; instead, they should be understood as one type of entity – the interactions; interactions, together with the interference between interactions, i.e., the parallelism and resource competition, should be treated as the focus in system-level verification. The above two ideas, namely software-centric verification and interaction-oriented verification, have yielded practical techniques. This thesis elaborates on these techniques, including the transfer-resource-graph based test-generation method targeting parallelism, coverage measures of concurrency completeness using Petri nets, the automation of test programs that can execute smartly in an event-driven manner, and a software observation mechanism that gives insight into system-level behaviours. / http://proxy.library.adelaide.edu.au/login?url= http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1363926 / Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 2009
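
To give a flavour of the transfer-resource-graph idea mentioned above, the rough sketch below models each data transfer by the set of resources it occupies and picks the pairs of transfers that contend for a shared resource, i.e., the situations a generated test program would schedule in parallel. The transfers and resources are hypothetical, and the thesis's actual graph construction, coverage measures, and scheduling are considerably more elaborate.

```python
from itertools import combinations

# hypothetical SoC transfers: name -> set of occupied resources
TRANSFERS = {
    "cpu_to_mem":  {"bus", "mem_ctrl"},
    "dma_to_mem":  {"dma", "bus", "mem_ctrl"},
    "cpu_to_uart": {"bus", "uart"},
    "dma_to_uart": {"dma", "uart"},
}

def competing_pairs():
    """Pairs of transfers that contend for at least one shared resource."""
    for (t1, r1), (t2, r2) in combinations(TRANSFERS.items(), 2):
        shared = r1 & r2
        if shared:
            yield t1, t2, shared

if __name__ == "__main__":
    for t1, t2, shared in competing_pairs():
        print(f"schedule {t1} || {t2}  (competing for {sorted(shared)})")
```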
