31 
Mathematical models for control of probabilistic Boolean networks. Jiao, Yue (焦月). January 2008 (has links)
published_or_final_version / Mathematics / Master / Master of Philosophy

32 
Solving graph coloring and SAT problems using field programmable gate arrays. January 1999 (has links)
Chu Keung Chung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (leaves 88-92). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgments --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Motivation and Aims --- p.1 / Chapter 1.2 --- Contributions --- p.3 / Chapter 1.3 --- Structure of the Thesis --- p.4 / Chapter 2 --- Literature Review --- p.6 / Chapter 2.1 --- Introduction --- p.6 / Chapter 2.2 --- Complete Algorithms --- p.7 / Chapter 2.2.1 --- Parallel Checking --- p.7 / Chapter 2.2.2 --- Mom's --- p.8 / Chapter 2.2.3 --- Davis-Putnam --- p.9 / Chapter 2.2.4 --- Non-chronological Backtracking --- p.9 / Chapter 2.2.5 --- Iterative Logic Array (ILA) --- p.10 / Chapter 2.3 --- Incomplete Algorithms --- p.11 / Chapter 2.3.1 --- GENET --- p.11 / Chapter 2.3.2 --- GSAT --- p.12 / Chapter 2.4 --- Summary --- p.13 / Chapter 3 --- Algorithms --- p.14 / Chapter 3.1 --- Introduction --- p.14 / Chapter 3.2 --- Tree Search Techniques --- p.14 / Chapter 3.2.1 --- Depth First Search --- p.15 / Chapter 3.2.2 --- Forward Checking --- p.16 / Chapter 3.2.3 --- Davis-Putnam --- p.17 / Chapter 3.2.4 --- GRASP --- p.19 / Chapter 3.3 --- Incomplete Algorithms --- p.20 / Chapter 3.3.1 --- GENET --- p.20 / Chapter 3.3.2 --- GSAT Algorithm --- p.22 / Chapter 3.4 --- Summary --- p.23 / Chapter 4 --- Field Programmable Gate Arrays --- p.24 / Chapter 4.1 --- Introduction --- p.24 / Chapter 4.2 --- FPGA --- p.24 / Chapter 4.2.1 --- Xilinx 4000 series FPGAs --- p.26 / Chapter 4.2.2 --- Bitstream --- p.31 / Chapter 4.3 --- Giga Operations Reconfigurable Computing Platform --- p.32 / Chapter 4.4 --- Annapolis Wildforce PCI board --- p.33 / Chapter 4.5 --- Summary --- p.35 / Chapter 5 --- Implementation --- p.36 / Chapter 5.1 --- Parallel Graph Coloring Machine --- p.36 / Chapter 5.1.1 --- System Architecture --- p.38 / Chapter 5.1.2 --- Evaluator --- p.39 / Chapter 5.1.3 --- Finite State Machine (FSM) --- p.42 / Chapter 5.1.4 --- Memory --- p.43 / Chapter 5.1.5 --- Hardware Resources --- p.43 / Chapter 5.2 --- Serial Graph Coloring Machine --- p.44 / Chapter 5.2.1 --- System Architecture --- p.44 / Chapter 5.2.2 --- Input Memory --- p.46 / Chapter 5.2.3 --- Solution Store --- p.46 / Chapter 5.2.4 ---
Constraint Memory --- p.47 / Chapter 5.2.5 --- Evaluator --- p.48 / Chapter 5.2.6 --- Input Mapper --- p.49 / Chapter 5.2.7 --- Output Memory --- p.49 / Chapter 5.2.8 --- Backtrack Checker --- p.50 / Chapter 5.2.9 --- Word Generator --- p.51 / Chapter 5.2.10 --- State Machine --- p.51 / Chapter 5.2.11 --- Hardware Resources --- p.54 / Chapter 5.3 --- Serial Boolean Satisfiability Solver --- p.56 / Chapter 5.3.1 --- System Architecture --- p.58 / Chapter 5.3.2 --- Solutions --- p.59 / Chapter 5.3.3 --- Solution Generator --- p.59 / Chapter 5.3.4 --- Evaluator --- p.60 / Chapter 5.3.5 --- AND/OR --- p.62 / Chapter 5.3.6 --- State Machine --- p.62 / Chapter 5.3.7 --- Hardware Resources --- p.64 / Chapter 5.4 --- GSAT Solver --- p.65 / Chapter 5.4.1 --- System Architecture --- p.65 / Chapter 5.4.2 --- Variable Memory --- p.65 / Chapter 5.4.3 --- Flip-Bit Vector --- p.66 / Chapter 5.4.4 --- Clause Evaluator --- p.67 / Chapter 5.4.5 --- Adder --- p.70 / Chapter 5.4.6 --- Random Bit Generator --- p.71 / Chapter 5.4.7 --- Comparator --- p.71 / Chapter 5.4.8 --- Sum Register --- p.71 / Chapter 5.5 --- Summary --- p.71 / Chapter 6 --- Results --- p.73 / Chapter 6.1 --- Introduction --- p.73 / Chapter 6.2 --- Parallel Graph Coloring Machine --- p.73 / Chapter 6.3 --- Serial Graph Coloring Machine --- p.74 / Chapter 6.4 --- Serial SAT Solver --- p.74 / Chapter 6.5 --- GSAT Solver --- p.75 / Chapter 6.6 --- Summary --- p.76 / Chapter 7 --- Conclusion --- p.77 / Chapter 7.1 --- Future Work --- p.78 / Chapter A --- Software Implementation of Graph Coloring in CHIP --- p.79 / Chapter B --- Density Improvements Using Xilinx RAM --- p.81 / Chapter C --- Bit stream Configuration --- p.83 / Bibliography --- p.88 / Publications --- p.93

33 
On efficient ordered binary decision diagram minimization heuristics based on two-level logic. January 1999 (has links)
by Chun Gu. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (leaves 69-71). / Abstract also in Chinese. / Chapter 1 --- Introduction --- p.3 / Chapter 2 --- Definitions --- p.7 / Chapter 3 --- Some Previous Work on OBDD --- p.13 / Chapter 3.1 --- The Work of Bryant --- p.13 / Chapter 3.2 --- Some Variations of the OBDD --- p.14 / Chapter 3.3 --- Previous Work on Variable Ordering of OBDD --- p.16 / Chapter 3.3.1 --- The FIH Heuristic --- p.16 / Chapter 3.3.2 --- The Dynamic Variable Ordering --- p.17 / Chapter 3.3.3 --- The Interleaving Method --- p.19 / Chapter 4 --- Two-Level Logic Functions and OBDD --- p.21 / Chapter 5 --- DSCF Algorithm --- p.25 / Chapter 6 --- Thin Boolean Functions --- p.33 / Chapter 6.1 --- The Structure and Properties of Thin Boolean Functions --- p.33 / Chapter 6.1.1 --- The Construction of Thin OBDDs --- p.33 / Chapter 6.1.2 --- Properties of Thin Boolean Functions --- p.38 / Chapter 6.1.3 --- Thin Factored Functions --- p.49 / Chapter 6.2 --- The Revised DSCF Algorithm --- p.52 / Chapter 6.3 --- Experimental Results --- p.54 / Chapter 7 --- A Pattern Merging Algorithm --- p.59 / Chapter 7.1 --- Merging of Patterns --- p.60 / Chapter 7.2 --- The Algorithm --- p.62 / Chapter 7.3 --- Experiments and Conclusion --- p.65 / Chapter 8 --- Conclusions --- p.67

34 
Attribute-Based Encryption for Boolean Formulas. Kowalczyk, Lucas. January 2019 (has links)
We present attribute-based encryption (ABE) schemes for Boolean formulas that are adaptively secure under simple assumptions. Notably, our KP-ABE scheme enjoys a ciphertext size that is linear in the attribute vector length and independent of the formula size (even when attributes can be used multiple times), and we achieve an analogous result for CP-ABE. This resolves the central open problem in attribute-based encryption posed by Lewko and Waters. Along the way, we develop a theory of modular design for unbounded ABE schemes and answer an open question regarding the adaptive security of Yao's Secret Sharing scheme for NC1 circuits.
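The classical way of sharing a secret over a Boolean formula (the idea underlying the Yao-style scheme the abstract refers to; this sketch is not the paper's construction, and the data layout is illustrative) recurses over the formula: an AND gate splits its secret into two random additive shares, an OR gate hands the same secret to both children.

```python
import secrets

MOD = 2**61 - 1  # working modulus for additive sharing (illustrative choice)

def share(secret, node):
    """Share `secret` over a formula given as nested tuples:
    ('leaf', name) | ('and', left, right) | ('or', left, right).
    Returns a dict mapping each leaf name to its list of shares."""
    if node[0] == 'leaf':
        return {node[1]: [secret]}
    if node[0] == 'and':
        # AND: children's shares must be summed to recover the secret.
        r = secrets.randbelow(MOD)
        parts = [(node[1], r), (node[2], (secret - r) % MOD)]
    else:
        # OR: either child alone suffices, so both get the secret.
        parts = [(node[1], secret), (node[2], secret)]
    out = {}
    for child, s in parts:
        for leaf, shares in share(s, child).items():
            out.setdefault(leaf, []).extend(shares)
    return out
```

For the formula a AND (b OR c), the shares of `a` and `b` sum to the secret mod MOD, while `b` and `c` hold identical shares.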

35 
On construction and control of probabilistic Boolean networks. Chen, Xi (陈曦). January 2012 (has links)
Modeling gene regulation is an important problem in genomic research. The Boolean network (BN) and its generalization, the probabilistic Boolean network (PBN), have been proposed to model genetic regulatory interactions.
A BN is a deterministic model, while a PBN is stochastic. In a PBN, on one hand, the stationary distribution gives important information about the long-run behavior of the network. On the other hand, one may be interested in system synthesis, which requires constructing a network from an observed stationary distribution. This leads to the inverse problem of constructing PBNs from a given stationary distribution and a given set of BNs. The problem is ill-posed and challenging: there may be many networks, or none, with the given properties, and the problem size is huge. The inverse problem is first formulated as a constrained least squares problem. A heuristic method based on the conjugate gradient (CG) algorithm, an iterative method, is then proposed to solve the resulting least squares problem. An estimation method for the parameters of the PBNs is also discussed. Numerical examples are given to demonstrate the effectiveness of the proposed methods.
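The least-squares formulation can be sketched in a few lines (this is a minimal illustration, not the thesis' algorithm: the constituent-BN matrices, the simplex projection step, and the function names are assumed). A PBN's transition matrix is a convex combination P = Σ qᵢ Pᵢ of BN transition matrices, so fitting the selection probabilities q to a target stationary distribution π (i.e. Σ qᵢ Pᵢᵀπ ≈ π) is a linear least squares problem, solvable by CG on the normal equations:

```python
import numpy as np

def construct_pbn(bn_mats, pi, iters=200):
    """Fit BN selection probabilities q so that pi is (approximately)
    stationary for P = sum_i q_i * P_i, i.e. solve A q ≈ pi in the
    least-squares sense with A[:, i] = P_i^T pi."""
    A = np.column_stack([P.T @ pi for P in bn_mats])
    # Conjugate gradient on the normal equations A^T A q = A^T pi.
    M, b = A.T @ A, A.T @ pi
    q = np.full(len(bn_mats), 1.0 / len(bn_mats))  # initial guess: uniform
    r = b - M @ q
    d = r.copy()
    for _ in range(iters):
        Md = M @ d
        denom = d @ Md
        if denom <= 1e-15:   # converged (or degenerate direction)
            break
        alpha = (r @ r) / denom
        q += alpha * d
        r_new = r - alpha * Md
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d
        r = r_new
    # Crude feasibility step: clip to the probability simplex and renormalize.
    q = np.clip(q, 0.0, None)
    return q / q.sum()
```

As the abstract notes, the result depends on the initial guess in general; the uniform start above is one arbitrary choice.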
However, the PBNs generated by the above algorithm depend on the initial guess and are not unique. A heuristic method is therefore proposed for generating PBNs from a given transition probability matrix; in this case a unique solution can be obtained. Moreover, these algorithms are able to recover the dominant BNs, and therefore the major structure of the network.
To further evaluate the feasible solutions, a maximum entropy approach is proposed, using entropy as a measure of fitness. Newton's method in conjunction with the CG method is then applied to solve the inverse problem. The convergence rate of the proposed method is demonstrated, and numerical examples are given to show its effectiveness.
Another important problem is to find an optimal control policy for a PBN that keeps the network out of undesirable states. By applying external control, the network should be driven into a desired state within a few time steps. For PBN CONTROL, the goal is to find a control sequence such that the network terminates in the desired state with maximum probability; the problem of minimizing the maximum cost is also considered. Integer linear programming (ILP) and dynamic programming (DP) in conjunction with hard constraints are employed to solve these problems. Numerical experiments demonstrate the effectiveness of the algorithms. A hardness result suggests that PBN CONTROL is harder than BN CONTROL. In addition, deciding the steady-state probability of a specified global state in a PBN is shown to be NP-hard.
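The maximum-probability control problem admits a standard backward DP recursion, V_t(s) = max_u Σ_{s'} P_u[s, s'] V_{t+1}(s'), which can be sketched as follows (a toy illustration under the assumption that each control u selects a row-stochastic transition matrix P_u; the function name and interface are hypothetical, not the thesis' code):

```python
import numpy as np

def max_reach_prob(P_controls, target, horizon):
    """Backward DP for a controlled Markov chain abstraction of a PBN.
    P_controls[u] is the transition matrix under control u. Returns, for
    every start state, the max probability of being in `target` after
    `horizon` steps, plus the greedy policy (one control per state/step)."""
    n = P_controls[0].shape[0]
    V = np.zeros(n)
    V[target] = 1.0                                # terminal reward
    policy = []
    for _ in range(horizon):
        Q = np.stack([P @ V for P in P_controls])  # Q[u, s] = value of u in s
        policy.append(Q.argmax(axis=0))            # best control per state
        V = Q.max(axis=0)
    policy.reverse()  # policy[t][s] = control to apply at step t in state s
    return V, policy
```

The state space of a PBN with n genes has 2^n states, which is exactly why the abstract's state-reduction variant matters: the DP table above grows exponentially in n.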
However, due to the high computational complexity of PBNs, the DP method is computationally inefficient for large networks. Inspired by the state reduction strategies studied in [86], the DP method is combined with a state reduction approach to reduce its computational cost. Numerical examples demonstrate both the effectiveness and the efficiency of the proposed method. / published_or_final_version / Mathematics / Doctoral / Doctor of Philosophy

36 
Computer aided synthesis of memoryless logic circuits. Cerny, Eduard. January 1971 (has links)
No description available.

37 
An efficient algorithm for extracting Boolean functions from linear threshold gates, and a synthetic decompositional approach to extracting Boolean functions from feedforward neural networks with arbitrary transfer functions. Peh, Lawrence T. W. January 2000 (has links)
Artificial neural networks are universal function approximators that represent functions subsymbolically through weights, thresholds and network topology. Naturally, the representation remains the same regardless of the problem domain. Suppose a network is applied to a symbolic domain. It is difficult for a human to dynamically construct the symbolic function from the neural representation. It is also difficult to retrain networks on perturbed training vectors, to resume training with different training sets, to form a new neuron by combining trained neurons, and to reason with trained neurons. Even the original training set does not provide a symbolic representation of the function implemented by the trained network, because the set may be incomplete or inconsistent, and the training phase may terminate with residual errors. The symbolic information in the network would be more useful if it were available in the language of the problem domain. Algorithms that translate the subsymbolic neural representation to a symbolic representation are called extraction algorithms. I argue that extraction algorithms that operate on single-output, layered feedforward networks are sufficient to analyse the class of multiple-output networks with arbitrary connections, including recurrent networks. The translucency dimension of the ADT taxonomy for feedforward networks classifies extraction approaches as pedagogical, eclectic, or decompositional. Pedagogical and eclectic approaches typically use a symbolic learning algorithm that takes the network's input-output behaviour as its raw data. Both approaches construct a set of input patterns and observe the network's output for each pattern. Eclectic and pedagogical approaches construct the input patterns respectively with and without reference to the network's internal information.
These approaches are suitable for approximating the network's function in a probably-approximately-correct (PAC) or similar framework, but they are unsuitable for constructing the network's complete function. Decompositional approaches use internal information from the network more directly to produce its function in symbolic form. Decompositional algorithms have two components. The first is a core extraction algorithm that operates on a single neuron that is assumed to implement a symbolic function. The second provides the superstructure for the first: a decomposition rule for producing such neurons, and a recomposition rule for symbolically aggregating the extracted functions into the symbolic function of the network. This thesis makes contributions to both components for Boolean extraction. I introduce a relatively efficient core algorithm called WSX, based on a novel Boolean form called BvF. The algorithm has a worst-case complexity of O(2^n / √n) for a neuron with n inputs, but in all cases its complexity can also be expressed as O(l) with an O(n) precalculation phase, where l is the length of the extracted expression in terms of the number of symbols it contains. I extend WSX to approximate extraction (AWSX) by introducing an interval about the neuron's threshold. Assuming that input patterns far from the threshold are more symbolically significant to the neuron than those near it, AWSX ignores the neuron's mappings for the symbolically insignificant input patterns, remapping them as convenient for efficiency. In experiments, this dramatically decreased extraction time while retaining most of the neuron's mappings for the training set. Synthetic decomposition is this thesis' contribution to the second component of decompositional extraction. Classical decomposition decomposes the network into its constituent neurons.
By extracting symbolic functions from these neurons, classical decomposition assumes that the neurons implement symbolic functions, or that approximating the subsymbolic computation in the neurons with symbolic computation does not significantly affect the network’s symbolic function. I show experimentally that this assumption does not always hold. Instead of decomposing a network into its constituent neurons, synthetic decomposition uses constraints in the network that have the same functional form as neurons that implement Boolean functions; these neurons are called synthetic neurons. I present a starting point for constructing synthetic decompositional algorithms, and proceed to construct two such algorithms, each with a different strategy for decomposition and recomposition. One of the algorithms, ACX, works for networks with arbitrary monotonic transfer functions, so long as an inverse exists for the functions. It also has an elegant geometric interpretation that leads to meaningful approximations. I also show that ACX can be extended to layered networks with any number of layers.
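The core extraction task the abstract describes can be made concrete with a naive baseline (WSX itself is not reproduced in this record; this sketch is the brute-force enumeration whose O(2^n) cost motivates the thesis' more efficient algorithm, and the function name is illustrative). A linear threshold gate fires when the weighted input sum reaches its threshold, so its Boolean function is recoverable by enumerating the truth table:

```python
from itertools import product

def threshold_gate_terms(weights, theta):
    """Brute-force extraction of a threshold gate's Boolean function:
    enumerate all 2^n binary inputs and keep the true points, i.e. the
    inputs whose weighted sum meets the threshold."""
    n = len(weights)
    return [x for x in product((0, 1), repeat=n)
            if sum(w * xi for w, xi in zip(weights, x)) >= theta]
```

With weights (1, 1) and threshold 2 the gate is AND (single true point (1, 1)); lowering the threshold to 1 yields OR. Each true point corresponds to a minterm of the extracted expression.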

38 
Boolean techniques in discrete optimization and expert systems. Lu, Peng. Siddall, J.N. Unknown Date (has links)
Thesis (Ph.D.)--McMaster University (Canada), 1989. / Source: Dissertation Abstracts International, Volume: 62-13, Section: A, page: 0000.

39 
Mathematical models for control of probabilistic Boolean networks. Jiao, Yue. January 2008 (has links)
Thesis (M.Phil.)--University of Hong Kong, 2009. / Includes bibliographical references (leaves 50-56). Also available in print.

40 
Monadic bounded algebras : a thesis submitted to the Victoria University of Wellington in fulfilment of the requirements for the degree of Doctor of Philosophy in Mathematics. Akishev, Galym. January 2009 (has links)
Thesis (Ph.D.)--Victoria University of Wellington, 2009. / Includes bibliographical references and index.
