  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Using ordered partial decision diagrams for manufacture test generation

Cobb, Bradley Douglas 30 September 2004
Because of limited tester time and memory, a primary goal of digital circuit manufacture test generation is to create compact test sets. Test generation programs that use Ordered Binary Decision Diagrams (OBDDs) as their primary functional representation excel at this task. Unfortunately, the use of OBDDs limits the application of these test generation programs to small circuits. This is because the size of the OBDD used to represent a function can be exponential in the number of the function's switching variables. Working with these functions can cause OBDD-based programs to exceed acceptable time and memory limits. This research proposes using Ordered Partial Decision Diagrams (OPDDs) instead as the primary functional representation for test generation systems. By limiting the number of vertices allowed in a single OPDD, complex functions can be partially represented in order to save time and memory. An OPDD-based test generation system is developed and techniques which improve its performance are evaluated on a small benchmark circuit. The new system is then demonstrated on larger and more complex circuits than its OBDD-based counterpart allows.
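
For readers new to the idea, the sketch below shows one way a vertex-capped ("partial") diagram can be realised: a standard reduced-OBDD builder that returns a third UNKNOWN terminal once its node budget is exhausted, so a complex function is only partially represented. This is an illustrative sketch only; the class, names, and budget policy are not taken from the thesis.

    class OPDD:
        ZERO, ONE, UNKNOWN = 0, 1, 2            # terminal ids; UNKNOWN marks unresolved parts

        def __init__(self, num_vars, max_nodes=1000):
            self.num_vars = num_vars
            self.max_nodes = max_nodes
            self.unique = {}                     # (var, lo, hi) -> node id, enforces sharing
            self.nodes = [None, None, None]      # ids 0..2 reserved for the terminals

        def mk(self, var, lo, hi):
            """Create or reuse a node, applying the redundant-test reduction rule."""
            if lo == hi:
                return lo
            key = (var, lo, hi)
            if key in self.unique:
                return self.unique[key]
            if len(self.nodes) - 3 >= self.max_nodes:
                return self.UNKNOWN              # vertex budget exceeded: leave this part unresolved
            self.unique[key] = len(self.nodes)
            self.nodes.append(key)
            return self.unique[key]

        def build(self, f, var=0, assignment=()):
            """Shannon-expand a Python predicate f over num_vars ordered inputs."""
            if var == self.num_vars:
                return self.ONE if f(assignment) else self.ZERO
            lo = self.build(f, var + 1, assignment + (0,))
            hi = self.build(f, var + 1, assignment + (1,))
            return self.mk(var, lo, hi)

    # Example: 3-input majority function, capped at 8 internal vertices.
    opdd = OPDD(num_vars=3, max_nodes=8)
    root = opdd.build(lambda bits: sum(bits) >= 2)
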
2

Mining simple and complex patterns efficiently using Binary Decision Diagrams

Loekito, E. January 2009
Pattern mining is a knowledge discovery task which is useful for finding interesting data characteristics. Existing mining techniques sometimes suffer from limited performance in challenging situations, such as when finding patterns in high-dimensional datasets. Binary Decision Diagrams and their variants are compact and efficient graph data structures for representing and manipulating Boolean functions, and they are potentially attractive for solving many problems in pattern mining. This thesis explores techniques for the use of binary decision diagrams for mining both simple and complex types of patterns.

Firstly, we investigate the use of Binary Decision Diagrams for mining the fundamental types of patterns. These include frequent patterns, also known as frequent itemsets. We introduce a structure called the Weighted Zero-suppressed Binary Decision Diagram and evaluate its use on high-dimensional data. This type of decision diagram is extremely useful for re-using intermediate patterns during computation.

Secondly, we study the problem of mining patterns in sequential databases. Here, we introduce a new structure called the Sequence Binary Decision Diagram, which can be used for mining frequent subsequences. We show that our technique is competitive with the state of the art and identify situations where it is superior.

Thirdly, we show how Weighted Zero-suppressed Binary Decision Diagrams can be used for discovering new and complex types of patterns. We introduce new types of highly expressive patterns for capturing contrasts, which express disjunctions of attribute values. Moreover, to investigate the usefulness of disjunctive patterns for knowledge discovery, we employ a statistical methodology for testing their significance, and study their use for solving classification problems. Our findings show that classifiers based on significant disjunctive patterns can be more robust than those which are only based on simple patterns.

Finally, we introduce patterns for capturing second-order differences between two groups of classes, which can provide useful insights for human experts. Again, we show how binary decision diagrams can be deployed for efficiently discovering this type of knowledge.

In summary, we demonstrate that Binary Decision Diagrams are a powerful and scalable tool in pattern mining. We believe their use is very promising for a range of current and future tasks in the data mining context.
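
As background for the zero-suppressed diagrams used throughout the thesis, the sketch below shows the core reduction rule (drop any node whose high branch leads to the empty family) together with a union and a counting operation. It is an illustrative sketch only; the weighted variant and the mining algorithms themselves are not reproduced here.

    class ZDD:
        EMPTY, BASE = 0, 1                  # terminals: the empty family, and the family holding only the empty set

        def __init__(self):
            self.unique = {}                # (var, lo, hi) -> id, enforces sharing
            self.nodes = [None, None]       # ids 0 and 1 reserved for the terminals

        def mk(self, var, lo, hi):
            if hi == self.EMPTY:            # zero-suppression rule: drop the node entirely
                return lo
            key = (var, lo, hi)
            if key not in self.unique:
                self.unique[key] = len(self.nodes)
                self.nodes.append(key)
            return self.unique[key]

        def top(self, n):
            return self.nodes[n][0] if n > 1 else float("inf")

        def single(self, items):
            """The ZDD containing exactly one itemset (items given as integers)."""
            node = self.BASE
            for var in sorted(items, reverse=True):
                node = self.mk(var, self.EMPTY, node)
            return node

        def union(self, p, q):
            if p == self.EMPTY: return q
            if q == self.EMPTY: return p
            if p == q: return p
            vp, vq = self.top(p), self.top(q)
            if vp < vq:
                _, lo, hi = self.nodes[p]
                return self.mk(vp, self.union(lo, q), hi)
            if vq < vp:
                _, lo, hi = self.nodes[q]
                return self.mk(vq, self.union(p, lo), hi)
            _, plo, phi = self.nodes[p]
            _, qlo, qhi = self.nodes[q]
            return self.mk(vp, self.union(plo, qlo), self.union(phi, qhi))

        def count(self, n):
            """How many itemsets the diagram represents (sketch, no memoisation)."""
            if n <= 1:
                return n
            _, lo, hi = self.nodes[n]
            return self.count(lo) + self.count(hi)

    z = ZDD()
    family = z.union(z.single([1, 2]), z.single([1, 3]))   # the family {{1,2}, {1,3}}
    assert z.count(family) == 2
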
3

Symbolic Bidirectional Breadth-First Heuristic Search

Richards, Simon Kim 11 December 2004
A Reduced Ordered Binary Decision Diagram (BDD) is a symbolic data structure introduced by Bryant in 1986 and later adopted by the model checking community to help verify properties of systems with very large state spaces. Recently, BDDs have been used in heuristic search algorithms as an approach to representing and solving search problems with very large state spaces. However, these algorithms are still not memory efficient. This thesis presents a symbolic heuristic search algorithm that uses BDDs in a memory-efficient way by performing bidirectional breadth-first heuristic search. The approach is evaluated empirically against existing symbolic methods and is shown to provide a significant improvement in performance.
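
The control structure of a bidirectional frontier search can be sketched as follows. In the thesis the two frontiers are BDDs over state variables and expansion is an image/preimage computation, with heuristic pruning on top; here plain Python sets stand in for the symbolic sets and the heuristic is omitted, so this is only an illustrative skeleton.

    def bidirectional_reach(start_states, goal_states, image, preimage):
        """Length of a shortest path between the two sets, or None if none exists.
        image(S) / preimage(S) return the successor / predecessor set of S."""
        fwd, bwd = set(start_states), set(goal_states)
        fwd_frontier, bwd_frontier = set(fwd), set(bwd)
        steps = 0
        while fwd_frontier and bwd_frontier:
            if fwd & bwd:                           # the two searches have met
                return steps
            # Expand the smaller frontier; symbolically this corresponds to
            # expanding whichever frontier has the smaller BDD.
            if len(fwd_frontier) <= len(bwd_frontier):
                fwd_frontier = image(fwd_frontier) - fwd
                fwd |= fwd_frontier
            else:
                bwd_frontier = preimage(bwd_frontier) - bwd
                bwd |= bwd_frontier
            steps += 1
        return None

    # Tiny example: states 0..9 in a chain 0 -> 1 -> ... -> 9.
    succ = lambda S: {s + 1 for s in S if s + 1 < 10}
    pred = lambda S: {s - 1 for s in S if s > 0}
    assert bidirectional_reach({0}, {7}, succ, pred) == 7
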
4

Application of exclusive-OR logic in technology independent logic optimisation

Kozlowski, Tomasz January 1996
No description available.
5

Discrete Function Representations Utilizing Decision Diagrams and Spectral Techniques

Townsend, Whitney Jeanne 03 August 2002
All discrete function representations become exponential in size in the worst case. Binary decision diagrams have become a common method of representing discrete functions in computer-aided design applications. For many functions, binary decision diagrams do provide compact representations. This work presents a way to represent large decision diagrams as multiple smaller partial binary decision diagrams. In the Boolean domain, each truth table entry consisting of a Boolean value only provides local information about a function at that point in the Boolean space. Partial binary decision diagrams thus result in the loss of information for a portion of the Boolean space. If the function were represented in the spectral domain, however, each integer-valued coefficient would contain some global information about the function. This work also explores spectral representations of discrete functions, including the implementation of a method for transforming circuits from netlist representations directly into spectral decision diagrams.
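
To illustrate the local-versus-global contrast drawn above, the sketch below computes one common spectrum of a switching function, the Walsh-Hadamard spectrum, via the fast transform. The encoding and names are illustrative and not the implementation described in the thesis.

    def walsh_spectrum(truth_table):
        """Walsh-Hadamard spectrum of an n-input function given as its 2**n
        truth-table outputs (0/1); the +1/-1 encoding is used internally."""
        n = len(truth_table)
        assert n and n & (n - 1) == 0, "length must be a power of two"
        spec = [1 - 2 * bit for bit in truth_table]      # 0 -> +1, 1 -> -1
        h = 1
        while h < n:
            for start in range(0, n, 2 * h):
                for i in range(start, start + h):
                    a, b = spec[i], spec[i + h]
                    spec[i], spec[i + h] = a + b, a - b  # butterfly step
            h *= 2
        return spec

    # Example: 2-input AND (outputs for inputs 00, 01, 10, 11).
    print(walsh_spectrum([0, 0, 0, 1]))   # -> [2, 2, 2, -2]
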
6

Autocorrelation coefficients in the representation and classification of switching functions

Rice, Jacqueline Elsie 21 November 2018
Reductions in the cost and size of integrated circuits are allowing more and more complex functions to be included in previously simple tools such as lawn-mowers, ovens, and thermostats. Because of this, the process of synthesizing such functions from their initial representation to an optimal VLSI implementation is rarely hand-performed; instead, automated synthesis and optimization tools are a necessity. The factors such tools must take into account are numerous, including area (size), power consumption, and timing factors, to name just a few. Existing tools have traditionally focused upon optimization of two-level representations. However, new technologies such as Field Programmable Gate Arrays (FPGAs) have generated additional interest in three-level representations and structures such as Kronecker Decision Diagrams (KDDs). The reason for this is that when implementing a circuit on an FPGA, the cost of implementing exclusive-or logic is no more than that of traditional AND or OR gates. This dissertation investigates the use of the autocorrelation coefficients in logic synthesis for these types of structures; specifically, whether it is possible to pre-process a function to produce a subset of its autocorrelation coefficients and make use of this information in the choice of a three-level decomposition or of decomposition types within a KDD. This research began as a general investigation into the properties of autocorrelation coefficients of switching functions. Much work has centered around the use of a function's spectral coefficients in logic synthesis; however, very little work has used a function's autocorrelation coefficients. Their use has been investigated in the areas of testing, optimization for Programmable Logic Arrays (PLAs), identification of types of complexity measures, and in various DD-related applications, but in a limited manner. This has likely been due to the complexity in their computation. In order to investigate the uses of these coefficients, a fast computation technique was required, as well as knowledge of their basic properties. Both areas are detailed as part of this work, which demonstrates that it is feasible to quickly compute the autocorrelation coefficients. With these investigations as a foundation we further apply the autocorrelation coefficients to the development of a classification technique. The autocorrelation classes are similar to the spectral classes, but provide significantly different information. The dissertation demonstrates that some of this information highlighted by the autocorrelation classes may allow for the identification of exclusive-or logic within the function or classes of functions. In relation to this, a major contribution of this work involves the design and implementation of algorithms based on these results. The first of these algorithms is used to identify three-level decompositions for functions, and the second to determine decomposition type lists for KDD-representations. Each of these implementations compares well with existing tools, requiring on average less than one second to complete, and performing as well as the existing tools about 70% of the time.
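
For reference, the autocorrelation coefficients in question are B(tau) = sum over x of f(x) * f(x xor tau). The direct computation below is an illustrative sketch over the 0/1 truth-table encoding (the +1/-1 convention differs only by an affine rescaling); the fast computation technique developed in the dissertation is not shown.

    def autocorrelation(truth_table):
        """B(tau) = sum_x f(x) * f(x ^ tau) for a 0/1 truth table of length 2**n."""
        n = len(truth_table)
        assert n and n & (n - 1) == 0, "length must be a power of two"
        return [sum(truth_table[x] * truth_table[x ^ tau] for x in range(n))
                for tau in range(n)]

    # Example: 2-input XOR. B(0) counts the ones of the function, and
    # B(3) = B(0) reflects XOR's invariance under complementing both inputs.
    print(autocorrelation([0, 1, 1, 0]))   # -> [2, 0, 0, 2]
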
7

A Hierarchical Modelling and Evaluation Technique for Safety Critical Systems / Une technique hiérarchique pour la modélisation et l'évaluation des systèmes de sécurité fonctionnelle

Pock, Michael 30 March 2012
This thesis presents a novel approach for modelling safety-critical systems which takes into account several failure modes, both for components and for the global system. The so-called Information Flow Diagrams (IFDs) were originally developed in a previous PhD thesis. In this work, it is investigated whether the evaluation of the IFD approach can be made more efficient by using Binary Decision Diagrams (BDDs). This thesis explains why such a model is necessary and practical, followed by a detailed explanation of the IFD model. This includes its hierarchical structure and how the model can be applied. The next step is to formalise the original IFD model in order to enable more efficient evaluation techniques. It is explained why these formalisation steps were taken and what was gained by using them. Afterwards, a detailed explanation of the developed algorithms is presented. These algorithms are based on a combination of different BDD techniques: Zero-suppressed BDDs (ZBDDs) are combined with Boolean Expression Diagrams (BEDs). Furthermore, the structure of the IFDs is used to construct a large BDD out of several smaller BDDs. This increases the efficiency of the evaluation process. The presented techniques are evaluated by analysing several use cases which are explained in this work.
8

SPEA2-based safety system multi-objective optimization

Riauke, Jelena January 2009
Safety systems are designed to prevent the occurrence of certain conditions and their future development into a hazardous situation. The consequence of the failure of a safety system of a potentially hazardous industrial system or process varies from minor inconvenience and cost to personal injury, significant economic loss and death. To minimise the likelihood of a hazardous situation, safety systems must be designed to maximise their availability. Therefore, the purpose of this thesis is to propose an effective safety system design optimization scheme. A multi-objective genetic algorithm has been adopted, where the criteria catered for include unavailability, cost, spurious trip and maintenance down time. Analyses of individual system designs are carried out using the latest advances in the fault tree analysis technique and the binary decision diagram (BDD) approach. The improved Strength Pareto Evolutionary Algorithm (SPEA2) is chosen to perform the system optimization, resulting in the final design specifications. The practicality of the developed approach is demonstrated initially through application to a High Integrity Protection System (HIPS) and subsequently, to test scalability, using the more complex Firewater Deluge System (FDS). Computer code has been developed to carry out the analysis. The results for both systems are compared to those using a single-objective optimization approach (GASSOP) and exhaustive search. The overall conclusions show a number of benefits of applying the SPEA2-based technique to safety system design optimization. It is common for safety systems to feature dependency relationships between their components. To enable the use of the fault tree analysis technique and the BDD approach for such systems, the Markov method is incorporated into the optimization process. The main types of dependency which can exist between safety system component failures are identified. Markov model generation algorithms are suggested for each type of dependency. The modified optimization tool is tested on the HIPS and FDS. A comparison of the results shows the benefit of using the modified technique for safety system optimization. Finally, the effectiveness and application to general safety systems are discussed.
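
As a reminder of the BDD step this analysis relies on: once a fault tree's top event is encoded as a BDD, its exact unavailability follows from one bottom-up pass using the Shannon decomposition P(f) = q * P(f | x=1) + (1 - q) * P(f | x=0), where q is the failure probability of the basic event tested at the node. The node encoding below is an illustrative sketch, not the thesis's implementation.

    def unavailability(node, q, cache=None):
        """node is the terminal 0/1 or a tuple (basic_event, low_child, high_child)."""
        if cache is None:
            cache = {}
        if node in (0, 1):
            return float(node)
        if id(node) not in cache:
            event, low, high = node
            cache[id(node)] = (q[event] * unavailability(high, q, cache)
                               + (1 - q[event]) * unavailability(low, q, cache))
        return cache[id(node)]

    # Top event A AND (B OR C), written directly as a BDD with ordering A < B < C.
    top = ("A", 0, ("B", ("C", 0, 1), 1))
    q = {"A": 0.01, "B": 0.02, "C": 0.03}
    print(unavailability(top, q))    # 0.01 * (0.02 + 0.98 * 0.03) = 0.000494
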
9

Efficient Reasoning Techniques for Large Scale Feature Models

Mendonca, Marcilio January 2009
In Software Product Lines (SPLs), a feature model can be used to represent the similarities and differences within a family of software systems. This allows describing the systems derived from the product line as a unique combination of the features in the model. What makes feature models particularly appealing is the fact that the constraints in the model prevent incompatible features from being part of the same product. Despite the benefits of feature models, constructing and maintaining these models can be a laborious task especially in product lines with a large number of features and constraints. As a result, the study of automated techniques to reason on feature models has become an important research topic in the SPL community in recent years. Two techniques, in particular, have significant appeal for researchers: SAT solvers and Binary Decision Diagrams (BDDs). Each technique has been applied successfully for over four decades now to tackle many practical combinatorial problems in various domains. Currently, several approaches have proposed the compilation of feature models to specific logic representations to enable the use of SAT solvers and BDDs. In this thesis, we argue that several critical issues related to the use of SAT solvers and BDDs have been consistently neglected. For instance, satisfiability is a well-known NP-complete problem which means that, in theory, a SAT solver might be unable to check the satisfiability of a feature model in a feasible amount of time. Similarly, it is widely known that the size of BDDs can become intractable for large models. At the same time, we currently do not know precisely whether these are real issues when feature models, especially large ones, are compiled to SAT and BDD representations. Therefore, in our research we provide a significant step forward in the state-of-the-art by deeply examining many relevant properties of the feature modeling domain, the mechanics of SAT solvers and BDDs, and the sensitive issues related to these techniques when applied in that domain. Specifically, we provide more accurate explanations for the space and/or time (in)tractability of these techniques in the feature modeling domain, and enhance the algorithmic performance of these techniques for reasoning on feature models. The contributions of our work include the proposal of novel heuristics to reduce the size of BDDs compiled from feature models, several insights on the construction of efficient domain-specific reasoning algorithms for feature models, and empirical studies to evaluate the efficiency of SAT solvers in handling very large feature models.
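
To ground the compilation step discussed above, the sketch below encodes a toy feature model (a hypothetical car product line) as propositional clauses and checks satisfiability by brute force; in practice the clauses would be handed to a SAT solver or compiled into a BDD. The model, encoding, and names are illustrative only.

    from itertools import product

    def feature_model_clauses():
        # Features: car (root), engine (mandatory), gas/electric (or-group),
        # autopilot (optional, cross-tree constraint: requires electric).
        return [
            ["car"],                                       # the root is always selected
            ["engine", "-car"], ["car", "-engine"],        # car <-> engine (mandatory child)
            ["gas", "electric", "-engine"],                # engine -> gas or electric
            ["engine", "-gas"], ["engine", "-electric"],   # children imply their parent
            ["car", "-autopilot"],                         # optional child implies parent
            ["electric", "-autopilot"],                    # cross-tree: autopilot -> electric
        ]

    def satisfiable(clauses, variables):
        """Brute-force check (fine for a handful of features; a SAT solver or a
        BDD takes over in realistic models)."""
        for values in product([False, True], repeat=len(variables)):
            assign = dict(zip(variables, values))
            ok = all(any(not assign[l[1:]] if l.startswith("-") else assign[l]
                         for l in clause)
                     for clause in clauses)
            if ok:
                return True
        return False

    variables = ["car", "engine", "gas", "electric", "autopilot"]
    clauses = feature_model_clauses()
    # A valid product exists, but demanding autopilot without electric does not.
    print(satisfiable(clauses, variables))                                   # True
    print(satisfiable(clauses + [["autopilot"], ["-electric"]], variables))  # False
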
