281

The Induced topology of local minima with applications to artificial neural networks.

January 1992 (has links)
by Yun Chung Chu. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1992. / Includes bibliographical references (leaves 163-[165]). / Chapter 1 --- Background --- p.1 / Chapter 1.1 --- Introduction --- p.1 / Chapter 1.2 --- Basic notations --- p.4 / Chapter 1.3 --- Object of study --- p.6 / Chapter 2 --- Review of Kohonen's algorithm --- p.22 / Chapter 2.1 --- General form of Kohonen's algorithm --- p.22 / Chapter 2.2 --- r-neighborhood by matrix --- p.25 / Chapter 2.3 --- Examples --- p.28 / Chapter 3 --- Local minima --- p.34 / Chapter 3.1 --- Theory of local minima --- p.35 / Chapter 3.2 --- Minimizing the number of local minima --- p.40 / Chapter 3.3 --- Detecting the success or failure of Kohonen's algorithm --- p.48 / Chapter 3.4 --- Local minima for different graph structures --- p.59 / Chapter 3.5 --- Formulation by geodesic distance --- p.65 / Chapter 3.6 --- Local minima and Voronoi regions --- p.67 / Chapter 4 --- Induced graph --- p.70 / Chapter 4.1 --- Formalism --- p.71 / Chapter 4.2 --- Practical way to find the induced graph --- p.88 / Chapter 4.3 --- Some examples --- p.95 / Chapter 5 --- Given mapping vs induced mapping --- p.102 / Chapter 5.1 --- Comparison between given mapping and induced mapping --- p.102 / Chapter 5.2 --- Matching the induced mapping to given mapping --- p.115 / Chapter 6 --- A special topic: application to determination of dimension --- p.131 / Chapter 6.1 --- Theory --- p.133 / Chapter 6.2 --- Advanced examples --- p.151 / Chapter 6.3 --- Special applications --- p.156 / Chapter 7 --- Conclusion --- p.159 / Bibliography --- p.163
282

A specification-based design tool for artificial neural networks.

January 1992 (has links)
Wong Wai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1992. / Includes bibliographical references (leaves 78-80). / Chapter 1. --- Introduction --- p.1 / Chapter 1.1. --- Specification Environment --- p.2 / Chapter 1.2. --- Specification Analysis --- p.2 / Chapter 1.3. --- Outline --- p.3 / Chapter 2. --- Survey --- p.4 / Chapter 2.1. --- Concurrence Specification --- p.4 / Chapter 2.1.1. --- Sequential Approach --- p.5 / Chapter 2.1.2. --- Mapping onto Concurrent Architecture --- p.6 / Chapter 2.1.3. --- Automatic Concurrence Introduction --- p.7 / Chapter 2.2. --- Specification Analysis --- p.8 / Chapter 2.2.1. --- Motivation --- p.8 / Chapter 2.2.2. --- Cyclic Dependency --- p.8 / Chapter 3. --- The Design Tool --- p.11 / Chapter 3.1. --- Specification Environment --- p.11 / Chapter 3.1.1. --- Framework --- p.11 / Chapter 3.1.1.1. --- Formal Neurons --- p.12 / Chapter 3.1.1.2. --- Configuration --- p.12 / Chapter 3.1.1.3. --- Control Neuron --- p.13 / Chapter 3.1.2. --- Dataflow Specification --- p.14 / Chapter 3.1.2.1. --- Absence of Control Information --- p.14 / Chapter 3.1.2.2. --- Single-Valued Variables & Explicit Time Indices --- p.14 / Chapter 3.1.2.3. --- Explicit Notations --- p.15 / Chapter 3.1.3. --- User Interface --- p.15 / Chapter 3.2. --- Specification Analysis --- p.16 / Chapter 3.2.1. --- Data Dependency Analysis --- p.16 / Chapter 3.2.2. --- Attribute Analysis --- p.16 / Chapter 4. --- BP-Net Specification --- p.18 / Chapter 4.1. --- BP-Net Paradigm --- p.18 / Chapter 4.1.1. --- Neurons of a BP-Net --- p.18 / Chapter 4.1.2. --- Configuration of BP-Net --- p.20 / Chapter 4.2. --- Constant Declarations --- p.20 / Chapter 4.3. --- Formal Neuron Specification --- p.21 / Chapter 4.3.1. --- Mapping the Paradigm --- p.22 / Chapter 4.3.1.1. --- Mapping Symbols onto Parameter Names --- p.22 / Chapter 4.3.1.2. --- Mapping Neuron Equations onto Internal Functions --- p.22 / Chapter 4.3.2. --- Form Entries --- p.23 / Chapter 4.3.2.1. 
--- Neuron Type Entry --- p.23 / Chapter 4.3.2.2. --- "Input, Output and Internal Parameter Entries" --- p.23 / Chapter 4.3.2.3. --- Initial Value Entry --- p.25 / Chapter 4.3.2.4. --- Internal Function Entry --- p.25 / Chapter 4.4. --- Configuration Specification --- p.28 / Chapter 4.4.1. --- Form Entries --- p.29 / Chapter 4.4.1.1. --- Neuron Label Entry --- p.29 / Chapter 4.4.1.2. --- Neuron Character Entry --- p.30 / Chapter 4.4.1.3. --- Connection Pattern Entry --- p.31 / Chapter 4.4.2. --- Characteristics of the Syntax --- p.33 / Chapter 4.5. --- Control Neuron Specification --- p.34 / Chapter 4.5.1. --- Form Entries --- p.35 / Chapter 4.5.1.1. --- "Global Input, Output, Parameter & Initial Value Entries" --- p.35 / Chapter 4.5.1.2. --- Input & Output File Entries --- p.36 / Chapter 4.5.1.3. --- Global Function Entry --- p.36 / Chapter 5. --- Data Dependency Analysis --- p.40 / Chapter 5.1. --- Graph Construction --- p.41 / Chapter 5.1.1. --- Simplification and Normalization --- p.41 / Chapter 5.1.1.1. --- Removing Non-Essential Information --- p.41 / Chapter 5.1.1.2. --- Removing File Record Parameters --- p.42 / Chapter 5.1.1.3. --- Rearranging Temporal offset --- p.42 / Chapter 5.1.1.4. --- Conservation of Temporal Relationship --- p.43 / Chapter 5.1.1.5. --- Zero/Negative Offset for Determining Parameters --- p.43 / Chapter 5.1.2. --- Internal Dependency Graphs (IDGs) --- p.43 / Chapter 5.1.3. --- IDG of Control Neuron (CnIDG) --- p.45 / Chapter 5.1.4. --- Global Dependency Graphs (GDGs) --- p.45 / Chapter 5.2. --- Cycle Detection --- p.48 / Chapter 5.2.1. --- BP-Net --- p.48 / Chapter 5.2.2. --- Other Examples --- p.49 / Chapter 5.2.2.1. --- The Perceptron --- p.50 / Chapter 5.2.2.2. --- The Boltzmann Machine --- p.51 / Chapter 5.2.3. --- Number of Cycles --- p.52 / Chapter 5.2.3.1. --- Different Number of Layers --- p.52 / Chapter 5.2.3.2. --- Different Network Types --- p.52 / Chapter 5.2.4. --- Cycle Length --- p.53 / Chapter 5.2.4.1. 
--- Different Number of Layers --- p.53 / Chapter 5.2.4.2. --- Comparison Among Different Networks --- p.53 / Chapter 5.2.5. --- Difficulties in Analysis --- p.53 / Chapter 5.3. --- Dependency Cycle Analysis --- p.54 / Chapter 5.3.1. --- Temporal Index Analysis --- p.54 / Chapter 5.3.2. --- Non-Temporal Index Analysis --- p.55 / Chapter 5.3.2.1. --- A Simple Example --- p.55 / Chapter 5.3.2.2. --- Single Parameter --- p.56 / Chapter 5.3.2.3. --- Multiple Parameters --- p.57 / Chapter 5.3.3. --- Combined Method --- p.58 / Chapter 5.3.4. --- Scheduling --- p.58 / Chapter 5.3.4.1. --- Algorithm --- p.59 / Chapter 5.3.4.2. --- Schedule for the BP-Net --- p.59 / Chapter 5.4. --- Symmetry in Graph Construction --- p.60 / Chapter 5.4.1. --- Basic Approach --- p.60 / Chapter 5.4.2. --- Construction of the BP-Net GDG --- p.61 / Chapter 5.4.3. --- Limitation --- p.63 / Chapter 6. --- Attribute Analysis --- p.64 / Chapter 6.1. --- Parameter Analysis --- p.64 / Chapter 6.1.1. --- Internal Dependency Graphs (IDGs) --- p.65 / Chapter 6.1.1.1. --- Correct Properties of Parameters in IDGs --- p.65 / Chapter 6.1.1.2. --- Example --- p.65 / Chapter 6.1.2. --- Combined Internal Dependency Graphs (CIDG) --- p.66 / Chapter 6.1.2.1. --- Tests on Parameters of CIDG --- p.66 / Chapter 6.1.2.2. --- Example --- p.67 / Chapter 6.1.3. --- Finalized Neuron Obtained --- p.67 / Chapter 6.1.4. --- CIDG of the BP-Net --- p.68 / Chapter 6.2. --- Constraint Checking --- p.68 / Chapter 6.2.1. --- "Syntactic, Semantic and Simple Checkings" --- p.68 / Chapter 6.2.1.1. --- The Syntactic & Semantic Techniques --- p.68 / Chapter 6.2.1.2. --- Simple Matching --- p.70 / Chapter 6.2.2. --- Constraints --- p.71 / Chapter 6.2.2.1. --- Constraints on Formal Neuron --- p.71 / Chapter 6.2.2.2. --- Constraints on Configuration --- p.72 / Chapter 6.2.2.3. --- Constraints on Control Neuron --- p.73 / Chapter 6.3. --- Complete Checking Procedure --- p.73 / Chapter 7. --- Conclusions --- p.75 / Chapter 7.1. 
--- Limitations --- p.76 / Chapter 7.1.1. --- Exclusive Conditional Dependency Cycles --- p.76 / Chapter 7.1.2. --- Maximum Parallelism --- p.77 / Reference --- p.78 / Appendix --- p.1 / Chapter I. --- Form Syntax --- p.1 / Chapter A. --- Syntax Conventions --- p.1 / Chapter B. --- Form Definition --- p.1 / Chapter 1. --- Form Structure --- p.1 / Chapter 2. --- Constant Declaration --- p.1 / Chapter 3. --- Formal Neuron Declaration --- p.1 / Chapter 4. --- Configuration Declaration --- p.2 / Chapter 5. --- Control Neuron --- p.2 / Chapter 6. --- Supplementary Definition --- p.3 / Chapter II. --- Algorithms --- p.4 / Chapter III. --- Deadlock & Dependency Cycles --- p.14 / Chapter A. --- Deadlock Prevention --- p.14 / Chapter 1. --- Necessary Conditions for Deadlock --- p.14 / Chapter 2. --- Resource Allocation Graphs --- p.15 / Chapter 3. --- Cycles and Blocked Requests --- p.15 / Chapter B. --- Deadlock in ANN Systems --- p.16 / Chapter 1. --- Shared resources --- p.16 / Chapter 2. --- Presence of the Necessary Conditions for Deadlocks --- p.16 / Chapter 3. --- Operation Constraint for Communication --- p.16 / Chapter 4. --- Checkings Required --- p.17 / Chapter C. --- Data Dependency Graphs --- p.17 / Chapter 1. --- Simplifying Resource Allocation Graphs --- p.17 / Chapter 2. --- Expanding into Parameter Level --- p.18 / Chapter 3. --- Freezing the Request Edges --- p.18 / Chapter 4. --- Reversing the Edge Directions --- p.18 / Chapter 5. --- Mutual Dependency Cycles --- p.18 / Chapter IV. --- Case Studies --- p.19 / Chapter A. --- BP-Net --- p.19 / Chapter 1. --- Specification Forms --- p.19 / Chapter 2. --- Results After Simple Checkings --- p.21 / Chapter 3. --- Internal Dependency Graphs Construction --- p.21 / Chapter 4. --- Results From Parameter Analysis --- p.21 / Chapter 5. --- Global Dependency Graphs Construction --- p.21 / Chapter 6. --- Cycles Detection --- p.21 / Chapter 7. --- Time Subscript Analysis --- p.21 / Chapter 8. 
--- Subscript Analysis --- p.21 / Chapter 9. --- Scheduling --- p.21 / Chapter B. --- Perceptron --- p.21 / Chapter 1. --- Specification Forms --- p.22 / Chapter 2. --- Results After Simple Checkings --- p.24 / Chapter 3. --- Internal Dependency Graphs Construction --- p.24 / Chapter 4. --- Results From Parameter Analysis --- p.25 / Chapter 5. --- Global Dependency Graph Construction --- p.25 / Chapter 6. --- Cycles Detection --- p.25 / Chapter 7. --- Time Subscript Analysis --- p.25 / Chapter 8. --- Subscript Analysis --- p.25 / Chapter 9. --- Scheduling --- p.25 / Chapter C. --- Boltzmann Machine --- p.26 / Chapter 1. --- Specification Forms --- p.26 / Chapter 2. --- Results After Simple Checkings --- p.35 / Chapter 3. --- Graphs Construction --- p.35 / Chapter 4. --- Results From Parameter Analysis --- p.36 / Chapter 5. --- Global Dependency Graphs Construction --- p.36 / Chapter 6. --- Cycle Detection --- p.36 / Chapter 7. --- Time Subscript Analysis --- p.36 / Chapter 8. --- Subscript Analysis --- p.36 / Chapter 9. --- Scheduling --- p.36
283

Restoration network design and neural network.

January 1992 (has links)
by Leung Lee. / Thesis (M.Sc.)--Chinese University of Hong Kong, 1992. / Includes bibliographical references. / Chapter SECTION 1. --- Introduction --- p.1 / Chapter SECTION 2. --- Formulation of Problem --- p.2 / Chapter 2.1 --- Problem Identification --- p.2 / Chapter 2.2 --- Network Planning Parameters and Assumptions --- p.3 / Chapter 2.3 --- Neural Network Model Transformation --- p.5 / Chapter 2.4 --- Algorithm and Implementation --- p.12 / Chapter SECTION 3. --- Simulation Results --- p.15 / Chapter 3.1 --- All Link Costs Are Same or Nearly the Same --- p.17 / Chapter 3.2 --- Fluctuated Cost in One or Two Fibre Paths --- p.18 / Chapter 3.3 --- Sudden Traffic Demand Change in Last Season --- p.19 / Chapter SECTION 4. --- Discussion --- p.20 / Chapter SECTION 5. --- Conclusion --- p.26 / GLOSSARY OF TERMS --- p.27 / BIBLIOGRAPHY --- p.29 / APPENDIX --- p.A1 / Chapter A --- Simulation Results --- p.A1 / Chapter B --- ANN Traffic Routing Example --- p.B1
284

Recurrent neural network for optimization with application to computer vision.

January 1993 (has links)
by Cheung Kwok-wai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves [146-154]). / Chapter 1 --- Introduction / Chapter 1.1 --- Programmed computing vs. neurocomputing --- p.1-1 / Chapter 1.2 --- Development of neural networks - feedforward and feedback models --- p.1-2 / Chapter 1.3 --- State of the art of applying recurrent neural network towards computer vision problem --- p.1-3 / Chapter 1.4 --- Objective of the Research --- p.1-6 / Chapter 1.5 --- Plan of the thesis --- p.1-7 / Chapter 2 --- Background / Chapter 2.1 --- Short history on development of Hopfield-like neural network --- p.2-1 / Chapter 2.2 --- Hopfield network model --- p.2-3 / Chapter 2.2.1 --- Neuron's transfer function --- p.2-3 / Chapter 2.2.2 --- Updating sequence --- p.2-6 / Chapter 2.3 --- Hopfield energy function and network convergence properties --- p.2-1 / Chapter 2.4 --- Generalized Hopfield network --- p.2-13 / Chapter 2.4.1 --- Network order and generalized Hopfield network --- p.2-13 / Chapter 2.4.2 --- Associated energy function and network convergence property --- p.2-13 / Chapter 2.4.3 --- Hardware implementation consideration --- p.2-15 / Chapter 3 --- Recurrent neural network for optimization / Chapter 3.1 --- Mapping to Neural Network formulation --- p.3-1 / Chapter 3.2 --- Network stability versus Self-reinforcement --- p.3-5 / Chapter 3.2.1 --- Quadratic problem and Hopfield network --- p.3-6 / Chapter 3.2.2 --- Higher-order case and reshaping strategy --- p.3-8 / Chapter 3.2.3 --- Numerical Example --- p.3-10 / Chapter 3.3 --- Local minimum limitation and existing solutions in the literature --- p.3-12 / Chapter 3.3.1 --- Simulated Annealing --- p.3-13 / Chapter 3.3.2 --- Mean Field Annealing --- p.3-15 / Chapter 3.3.3 --- Adaptively changing neural network --- p.3-16 / Chapter 3.3.4 --- Correcting Current Method --- p.3-16 / Chapter 3.4 --- Conclusions --- p.3-17 / Chapter 4 --- A Novel 
Neural Network for Global Optimization - Tunneling Network / Chapter 4.1 --- Tunneling Algorithm --- p.4-1 / Chapter 4.1.1 --- Description of Tunneling Algorithm --- p.4-1 / Chapter 4.1.2 --- Tunneling Phase --- p.4-2 / Chapter 4.2 --- A Neural Network with tunneling capability - Tunneling network --- p.4-8 / Chapter 4.2.1 --- Network Specifications --- p.4-8 / Chapter 4.2.2 --- Tunneling function for Hopfield network and the corresponding updating rule --- p.4-9 / Chapter 4.3 --- Tunneling network stability and global convergence property --- p.4-12 / Chapter 4.3.1 --- Tunneling network stability --- p.4-12 / Chapter 4.3.2 --- Global convergence property --- p.4-15 / Chapter 4.3.2.1 --- Markov chain model for Hopfield network --- p.4-15 / Chapter 4.3.2.2 --- Classification of the Hopfield Markov chain --- p.4-16 / Chapter 4.3.2.3 --- Markov chain model for tunneling network and its convergence towards global minimum --- p.4-18 / Chapter 4.3.3 --- Variation of pole strength and its effect --- p.4-20 / Chapter 4.3.3.1 --- Energy Profile analysis --- p.4-21 / Chapter 4.3.3.2 --- Size of attractive basin and pole strength required --- p.4-24 / Chapter 4.3.3.3 --- A new type of pole eases the implementation problem --- p.4-30 / Chapter 4.4 --- Simulation Results and Performance comparison --- p.4-31 / Chapter 4.4.1 --- Simulation Experiments --- p.4-32 / Chapter 4.4.2 --- Simulation Results and Discussions --- p.4-37 / Chapter 4.4.2.1 --- Comparisons on optimal path obtained and the convergence rate --- p.4-37 / Chapter 4.4.2.2 --- On decomposition of Tunneling network --- p.4-38 / Chapter 4.5 --- Suggested hardware implementation of Tunneling network --- p.4-48 / Chapter 4.5.1 --- Tunneling network hardware implementation --- p.4-48 / Chapter 4.5.2 --- Alternative implementation theory --- p.4-52 / Chapter 4.6 --- Conclusions --- p.4-54 / Chapter 5 --- Recurrent Neural Network for Gaussian Filtering / Chapter 5.1 --- Introduction --- p.5-1 / Chapter 5.1.1 --- 
Silicon Retina --- p.5-3 / Chapter 5.1.2 --- An Active Resistor Network for Gaussian Filtering of Image --- p.5-5 / Chapter 5.1.3 --- Motivations of using recurrent neural network --- p.5-7 / Chapter 5.1.4 --- Difference between the active resistor network model and recurrent neural network model for Gaussian filtering --- p.5-8 / Chapter 5.2 --- From Problem formulation to Neural Network formulation --- p.5-9 / Chapter 5.2.1 --- One Dimensional Case --- p.5-9 / Chapter 5.2.2 --- Two Dimensional Case --- p.5-13 / Chapter 5.3 --- Simulation Results and Discussions --- p.5-14 / Chapter 5.3.1 --- Spatial impulse response of the 1-D network --- p.5-14 / Chapter 5.3.2 --- Filtering property of the 1-D network --- p.5-14 / Chapter 5.3.3 --- Spatial impulse response of the 2-D network and some filtering results --- p.5-15 / Chapter 5.4 --- Conclusions --- p.5-16 / Chapter 6 --- Recurrent Neural Network for Boundary Detection / Chapter 6.1 --- Introduction --- p.6-1 / Chapter 6.2 --- From Problem formulation to Neural Network formulation --- p.6-3 / Chapter 6.2.1 --- Problem Formulation --- p.6-3 / Chapter 6.2.2 --- Recurrent Neural Network Model used --- p.6-4 / Chapter 6.2.3 --- Neural Network formulation --- p.6-5 / Chapter 6.3 --- Simulation Results and Discussions --- p.6-7 / Chapter 6.3.1 --- Feasibility study and Performance comparison --- p.6-7 / Chapter 6.3.2 --- Smoothing and Boundary Detection --- p.6-9 / Chapter 6.3.3 --- Convergence improvement by network decomposition --- p.6-10 / Chapter 6.3.4 --- Hardware implementation consideration --- p.6-10 / Chapter 6.4 --- Conclusions --- p.6-11 / Chapter 7 --- Conclusions and Future Researches / Chapter 7.1 --- Contributions and Conclusions --- p.7-1 / Chapter 7.2 --- Limitations and Suggested Future Researches --- p.7-3 / References --- p.R-1 / Appendix I The assignment of the boundary connection of 2-D recurrent neural network for Gaussian filtering --- p.A1-1 / Appendix II Formula for connection weight assignment of 2-D recurrent neural network for Gaussian filtering and the proof on symmetric property --- p.A2-1 / Appendix III Details on reshaping strategy --- p.A3-1
285

On implementation and applications of the adaptive-network-based fuzzy inference system.

January 1994 (has links)
by Ong Kai Hin George. / Thesis (M.Sc.)--Chinese University of Hong Kong, 1994. / Includes bibliographical references (leaves [102-104]).
286

On the Synthesis of fuzzy neural systems.

January 1995 (has links)
by Chung, Fu Lai. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 166-174). / ACKNOWLEDGEMENT --- p.iii / ABSTRACT --- p.iv / Chapter 1. --- Introduction --- p.1 / Chapter 1.1 --- Integration of Fuzzy Systems and Neural Networks --- p.1 / Chapter 1.2 --- Objectives of the Research --- p.7 / Chapter 1.2.1 --- Fuzzification of Competitive Learning Algorithms --- p.7 / Chapter 1.2.2 --- Capacity Analysis of FAM and FRNS Models --- p.8 / Chapter 1.2.3 --- Structure and Parameter Identifications of FRNS --- p.9 / Chapter 1.3 --- Outline of the Thesis --- p.9 / Chapter 2. --- A Fuzzy System Primer --- p.11 / Chapter 2.1 --- Basic Concepts of Fuzzy Sets --- p.11 / Chapter 2.2 --- Fuzzy Set-Theoretic Operators --- p.15 / Chapter 2.3 --- "Linguistic Variable, Fuzzy Rule and Fuzzy Inference" --- p.19 / Chapter 2.4 --- Basic Structure of a Fuzzy System --- p.22 / Chapter 2.4.1 --- Fuzzifier --- p.22 / Chapter 2.4.2 --- Fuzzy Knowledge Base --- p.23 / Chapter 2.4.3 --- Fuzzy Inference Engine --- p.24 / Chapter 2.4.4 --- Defuzzifier --- p.28 / Chapter 2.5 --- Concluding Remarks --- p.29 / Chapter 3. --- Categories of Fuzzy Neural Systems --- p.30 / Chapter 3.1 --- Introduction --- p.30 / Chapter 3.2 --- Fuzzification of Neural Networks --- p.31 / Chapter 3.2.1 --- Fuzzy Membership Driven Models --- p.32 / Chapter 3.2.2 --- Fuzzy Operator Driven Models --- p.34 / Chapter 3.2.3 --- Fuzzy Arithmetic Driven Models --- p.35 / Chapter 3.3 --- Layered Network Implementation of Fuzzy Systems --- p.36 / Chapter 3.3.1 --- Mamdani's Fuzzy Systems --- p.36 / Chapter 3.3.2 --- Takagi and Sugeno's Fuzzy Systems --- p.37 / Chapter 3.3.3 --- Fuzzy Relation Based Fuzzy Systems --- p.38 / Chapter 3.4 --- Concluding Remarks --- p.40 / Chapter 4. 
--- Fuzzification of Competitive Learning Networks --- p.42 / Chapter 4.1 --- Introduction --- p.42 / Chapter 4.2 --- Crisp Competitive Learning --- p.44 / Chapter 4.2.1 --- Unsupervised Competitive Learning Algorithm --- p.46 / Chapter 4.2.2 --- Learning Vector Quantization Algorithm --- p.48 / Chapter 4.2.3 --- Frequency Sensitive Competitive Learning Algorithm --- p.50 / Chapter 4.3 --- Fuzzy Competitive Learning --- p.50 / Chapter 4.3.1 --- Unsupervised Fuzzy Competitive Learning Algorithm --- p.53 / Chapter 4.3.2 --- Fuzzy Learning Vector Quantization Algorithm --- p.54 / Chapter 4.3.3 --- Fuzzy Frequency Sensitive Competitive Learning Algorithm --- p.58 / Chapter 4.4 --- Stability of Fuzzy Competitive Learning --- p.58 / Chapter 4.5 --- Controlling the Fuzziness of Fuzzy Competitive Learning --- p.60 / Chapter 4.6 --- Interpretations of Fuzzy Competitive Learning Networks --- p.61 / Chapter 4.7 --- Simulation Results --- p.64 / Chapter 4.7.1 --- Performance of Fuzzy Competitive Learning Algorithms --- p.64 / Chapter 4.7.2 --- Performance of Monotonically Decreasing Fuzziness Control Scheme --- p.74 / Chapter 4.7.3 --- Interpretation of Trained Networks --- p.76 / Chapter 4.8 --- Concluding Remarks --- p.80 / Chapter 5. --- Capacity Analysis of Fuzzy Associative Memories --- p.82 / Chapter 5.1 --- Introduction --- p.82 / Chapter 5.2 --- Fuzzy Associative Memories (FAMs) --- p.83 / Chapter 5.3 --- Storing Multiple Rules in FAMs --- p.87 / Chapter 5.4 --- A High Capacity Encoding Scheme for FAMs --- p.90 / Chapter 5.5 --- Memory Capacity --- p.91 / Chapter 5.6 --- Rule Modification --- p.93 / Chapter 5.7 --- Inference Performance --- p.99 / Chapter 5.8 --- Concluding Remarks --- p.104 / Chapter 6. 
--- Capacity Analysis of Fuzzy Relational Neural Systems --- p.105 / Chapter 6.1 --- Introduction --- p.105 / Chapter 6.2 --- Fuzzy Relational Equations and Fuzzy Relational Neural Systems --- p.107 / Chapter 6.3 --- Solving a System of Fuzzy Relational Equations --- p.109 / Chapter 6.4 --- New Solvable Conditions --- p.112 / Chapter 6.4.1 --- Max-t Fuzzy Relational Equations --- p.112 / Chapter 6.4.2 --- Min-s Fuzzy Relational Equations --- p.117 / Chapter 6.5 --- Approximate Resolution --- p.119 / Chapter 6.6 --- System Capacity --- p.123 / Chapter 6.7 --- Inference Performance --- p.125 / Chapter 6.8 --- Concluding Remarks --- p.127 / Chapter 7. --- Structure and Parameter Identifications of Fuzzy Relational Neural Systems --- p.129 / Chapter 7.1 --- Introduction --- p.129 / Chapter 7.2 --- Modelling Nonlinear Dynamic Systems by Fuzzy Relational Equations --- p.131 / Chapter 7.3 --- A General FRNS Identification Algorithm --- p.138 / Chapter 7.4 --- An Evolutionary Computation Approach to Structure and Parameter Identifications --- p.139 / Chapter 7.4.1 --- Guided Evolutionary Simulated Annealing --- p.140 / Chapter 7.4.2 --- An Evolutionary Identification (EVIDENT) Algorithm --- p.143 / Chapter 7.5 --- Simulation Results --- p.146 / Chapter 7.6 --- Concluding Remarks --- p.158 / Chapter 8. --- Conclusions --- p.159 / Chapter 8.1 --- Summary of Contributions --- p.160 / Chapter 8.1.1 --- Fuzzy Competitive Learning --- p.160 / Chapter 8.1.2 --- Capacity Analysis of FAM and FRNS --- p.160 / Chapter 8.1.3 --- Numerical Identification of FRNS --- p.161 / Chapter 8.2 --- Further Investigations --- p.162 / Appendix A Publication List of the Candidate --- p.164 / BIBLIOGRAPHY --- p.166
287

Approaches to the implementation of binary relation inference network.

January 1994 (has links)
by C.W. Tong. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1994. / Includes bibliographical references (leaves 96-98). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- The Availability of Parallel Processing Machines --- p.2 / Chapter 1.1.1 --- Neural Networks --- p.5 / Chapter 1.2 --- Parallel Processing in the Continuous-Time Domain --- p.6 / Chapter 1.3 --- Binary Relation Inference Network --- p.10 / Chapter 2 --- Binary Relation Inference Network --- p.12 / Chapter 2.1 --- Binary Relation Inference Network --- p.12 / Chapter 2.1.1 --- Network Structure --- p.14 / Chapter 2.2 --- Shortest Path Problem --- p.17 / Chapter 2.2.1 --- Problem Statement --- p.17 / Chapter 2.2.2 --- A Binary Relation Inference Network Solution --- p.18 / Chapter 3 --- A Binary Relation Inference Network Prototype --- p.21 / Chapter 3.1 --- The Prototype --- p.22 / Chapter 3.1.1 --- The Network --- p.22 / Chapter 3.1.2 --- Computational Element --- p.22 / Chapter 3.1.3 --- Network Response Time --- p.27 / Chapter 3.2 --- Improving Response --- p.29 / Chapter 3.2.1 --- Removing Feedback --- p.29 / Chapter 3.2.2 --- Selecting Minimum with Diodes --- p.30 / Chapter 3.3 --- Speeding Up the Network Response --- p.33 / Chapter 3.4 --- Conclusion --- p.35 / Chapter 4 --- VLSI Building Blocks --- p.36 / Chapter 4.1 --- The Site --- p.37 / Chapter 4.2 --- The Unit --- p.40 / Chapter 4.2.1 --- A Minimum Finding Circuit --- p.40 / Chapter 4.2.2 --- A Tri-state Comparator --- p.44 / Chapter 4.3 --- The Computational Element --- p.45 / Chapter 4.3.1 --- Network Performances --- p.46 / Chapter 4.4 --- Discussion --- p.47 / Chapter 5 --- A VLSI Chip --- p.48 / Chapter 5.1 --- Spatial Configuration --- p.49 / Chapter 5.2 --- Layout --- p.50 / Chapter 5.2.1 --- Computational Elements --- p.50 / Chapter 5.2.2 --- The Network --- p.52 / Chapter 5.2.3 --- I/O Requirements --- p.53 / Chapter 5.2.4 --- Optional Modules --- p.53 / Chapter 5.3 --- A Scalable Design --- p.54 / Chapter 6 --- The 
Inverse Shortest Paths Problem --- p.57 / Chapter 6.1 --- Problem Statement --- p.59 / Chapter 6.2 --- The Embedded Approach --- p.63 / Chapter 6.2.1 --- The Formulation --- p.63 / Chapter 6.2.2 --- The Algorithm --- p.65 / Chapter 6.3 --- Implementation Results --- p.66 / Chapter 6.4 --- Other Implementations --- p.67 / Chapter 6.4.1 --- Sequential Machine --- p.67 / Chapter 6.4.2 --- Parallel Machine --- p.68 / Chapter 6.5 --- Discussion --- p.68 / Chapter 7 --- Closed Semiring Optimization Circuits --- p.71 / Chapter 7.1 --- Transitive Closure Problem --- p.72 / Chapter 7.1.1 --- Problem Statement --- p.72 / Chapter 7.1.2 --- Inference Network Solutions --- p.73 / Chapter 7.2 --- Closed Semirings --- p.76 / Chapter 7.3 --- Closed Semirings and the Binary Relation Inference Network --- p.79 / Chapter 7.3.1 --- Minimum Spanning Tree --- p.80 / Chapter 7.3.2 --- VLSI Implementation --- p.84 / Chapter 7.4 --- Conclusion --- p.86 / Chapter 8 --- Conclusions --- p.87 / Chapter 8.1 --- Summary of Achievements --- p.87 / Chapter 8.2 --- Future Work --- p.89 / Chapter 8.2.1 --- VLSI Fabrication --- p.89 / Chapter 8.2.2 --- Network Robustness --- p.90 / Chapter 8.2.3 --- Inference Network Applications --- p.91 / Chapter 8.2.4 --- Architecture for the Bellman-Ford Algorithm --- p.91 / Bibliography --- p.92 / Appendices --- p.99 / Chapter A --- Detailed Schematic --- p.99 / Chapter A.1 --- Schematic of the Inference Network Structures --- p.99 / Chapter A.1.1 --- Unit with Self-Feedback --- p.99 / Chapter A.1.2 --- Unit with Self-Feedback Removed --- p.100 / Chapter A.1.3 --- Unit with a Compact Minimizer --- p.100 / Chapter A.1.4 --- Network Modules --- p.100 / Chapter A.2 --- Inference Network Interface Circuits --- p.100 / Chapter B --- Circuit Simulation and Layout Tools --- p.107 / Chapter B.1 --- Circuit Simulation --- p.107 / Chapter B.2 --- VLSI Circuit Design --- p.110 / Chapter B.3 --- VLSI Circuit Layout --- p.111 / Chapter C --- The Conjugate-Gradient Descent 
Algorithm --- p.113 / Chapter D --- Shortest Path Problem on MasPar --- p.115
288

Recurrent neural networks for force optimization of multi-fingered robotic hands.

January 2002 (has links)
Fok Lo Ming. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (leaves 133-135). / Abstracts in English and Chinese. / Chapter 1. --- Introduction --- p.1 / Chapter 1.1 --- Multi-fingered Robotic Hands --- p.1 / Chapter 1.2 --- Grasping Force Optimization --- p.2 / Chapter 1.3 --- Neural Networks --- p.6 / Chapter 1.4 --- Previous Work for Grasping Force Optimization --- p.9 / Chapter 1.5 --- Contributions of this work --- p.10 / Chapter 1.6 --- Organization of this thesis --- p.12 / Chapter 2. --- Problem Formulations --- p.13 / Chapter 2.1 --- Grasping Force Optimization without Joint Torque Limits --- p.14 / Chapter 2.1.1 --- Linearized Friction Cone Approach --- p.15 / Chapter i. --- Linear Formulation --- p.17 / Chapter ii. --- Quadratic Formulation --- p.18 / Chapter 2.1.2 --- Nonlinear Friction Cone as Positive Semidefinite Matrix --- p.19 / Chapter 2.1.3 --- Constrained Optimization with Nonlinear Inequality Constraint --- p.20 / Chapter 2.2 --- Grasping Force Optimization with Joint Torque Limits --- p.21 / Chapter 2.2.1 --- Linearized Friction Cone Approach --- p.23 / Chapter 2.2.2 --- Constrained Optimization with Nonlinear Inequality Constraint --- p.23 / Chapter 2.3 --- Grasping Force Optimization with Time-varying External Wrench --- p.24 / Chapter 2.3.1 --- Linearized Friction Cone Approach --- p.25 / Chapter 2.3.2 --- Nonlinear Friction Cone as Positive Semidefinite Matrix --- p.25 / Chapter 2.3.3 --- Constrained Optimization with Nonlinear Inequality Constraint --- p.26 / Chapter 3. 
--- Recurrent Neural Network Models --- p.27 / Chapter 3.1 --- Networks for Grasping Force Optimization without Joint Torque Limits / Chapter 3.1.1 --- The Primal-dual Network for Linear Programming --- p.29 / Chapter 3.1.2 --- The Deterministic Annealing Network for Linear Programming --- p.32 / Chapter 3.1.3 --- The Primal-dual Network for Quadratic Programming --- p.34 / Chapter 3.1.4 --- The Dual Network --- p.35 / Chapter 3.1.5 --- The Deterministic Annealing Network --- p.39 / Chapter 3.1.6 --- The Novel Network --- p.41 / Chapter 3.2 --- Networks for Grasping Force Optimization with Joint Torque Limits / Chapter 3.2.1 --- The Dual Network --- p.43 / Chapter 3.2.2 --- The Novel Network --- p.45 / Chapter 3.3 --- Networks for Grasping Force Optimization with Time-varying External Wrench / Chapter 3.3.1 --- The Primal-dual Network for Quadratic Programming --- p.48 / Chapter 3.3.2 --- The Deterministic Annealing Network --- p.50 / Chapter 3.3.3 --- The Novel Network --- p.52 / Chapter 4. 
--- Simulation Results --- p.54 / Chapter 4.1 --- Three-finger Grasping Example of Grasping Force Optimization without Joint Torque Limits --- p.54 / Chapter 4.1.1 --- The Primal-dual Network for Linear Programming --- p.57 / Chapter 4.1.2 --- The Deterministic Annealing Network for Linear Programming --- p.59 / Chapter 4.1.3 --- The Primal-dual Network for Quadratic Programming --- p.61 / Chapter 4.1.4 --- The Dual Network --- p.63 / Chapter 4.1.5 --- The Deterministic Annealing Network --- p.65 / Chapter 4.1.6 --- The Novel Network --- p.57 / Chapter 4.1.7 --- Network Complexity Analysis --- p.59 / Chapter 4.2 --- Four-finger Grasping Example of Grasping Force Optimization without Joint Torque Limits --- p.73 / Chapter 4.2.1 --- The Primal-dual Network for Linear Programming --- p.75 / Chapter 4.2.2 --- The Deterministic Annealing Network for Linear Programming --- p.77 / Chapter 4.2.3 --- The Primal-dual Network for Quadratic Programming --- p.79 / Chapter 4.2.4 --- The Dual Network --- p.81 / Chapter 4.2.5 --- The Deterministic Annealing Network --- p.83 / Chapter 4.2.6 --- The Novel Network --- p.85 / Chapter 4.2.7 --- Network Complexity Analysis --- p.87 / Chapter 4.3 --- Three-finger Grasping Example of Grasping Force Optimization with Joint Torque Limits --- p.90 / Chapter 4.3.1 --- The Dual Network --- p.93 / Chapter 4.3.2 --- The Novel Network --- p.95 / Chapter 4.3.3 --- Network Complexity Analysis --- p.97 / Chapter 4.4 --- Three-finger Grasping Example of Grasping Force Optimization with Time-varying External Wrench --- p.99 / Chapter 4.4.1 --- The Primal-dual Network for Quadratic Programming --- p.101 / Chapter 4.4.2 --- The Deterministic Annealing Network --- p.103 / Chapter 4.4.3 --- The Novel Network --- p.105 / Chapter 4.4.4 --- Network Complexity Analysis --- p.107 / Chapter 4.5 --- Four-finger Grasping Example of Grasping Force Optimization with Time-varying External Wrench --- p.109 / Chapter 4.5.1 --- The Primal-dual Network for Quadratic 
Programming --- p.111 / Chapter 4.5.2 --- The Deterministic Annealing Network --- p.113 / Chapter 4.5.3 --- The Novel Network --- p.115 / Chapter 4.5.4 --- Network Complexity Analysis --- p.117 / Chapter 4.6 --- Four-finger Grasping Example of Grasping Force Optimization with Nonlinear Velocity Variation --- p.119 / Chapter 4.6.1 --- The Primal-dual Network for Quadratic Programming --- p.121 / Chapter 4.6.2 --- The Deterministic Annealing Network --- p.123 / Chapter 4.6.3 --- The Novel Network --- p.125 / Chapter 4.6.4 --- Network Complexity Analysis --- p.127 / Chapter 5. --- Conclusions and Future Work --- p.129 / Publications --- p.132 / Bibliography --- p.133 / Appendix --- p.136
289

Faster Training of Neural Networks for Recommender Systems

Kogel, Wendy E. 01 May 2002 (has links)
In this project we investigate the use of artificial neural networks (ANNs) as the core prediction function of a recommender system. Past research on recommender systems that use ANNs has mainly concentrated on collaborative-based information. We look at the effects of adding content-based information and of altering the topology of the network itself on the accuracy of the recommendations generated. In particular, we investigate a mixture-of-experts topology. We create two expert clusters in the hidden layer of the ANN, one for content-based data and another for collaborative-based data. This greatly reduces the number of connections between the input and hidden layers. Our experimental evaluation shows that this new architecture produces the same recommendation accuracy as the fully connected configuration with a large decrease in the time it takes to train the network. This decrease in time is a great advantage because recommender systems must provide real-time results to the user.
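The connection savings of the two-cluster topology described in this abstract can be sketched numerically. The feature sizes, hidden-layer width, and weight initialization below are hypothetical, not taken from the thesis; the point is only that wiring each expert cluster to its own input group roughly halves the input-to-hidden weight count compared with a fully connected layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature sizes: 20 content-based inputs, 50 collaborative inputs,
# and 8 hidden units per expert cluster.
n_content, n_collab, n_hidden = 20, 50, 8

# Each expert cluster connects only to its own input group, instead of
# every hidden unit connecting to all 70 inputs.
W_content = rng.normal(0, 0.1, (n_hidden, n_content))
W_collab = rng.normal(0, 0.1, (n_hidden, n_collab))
w_out = rng.normal(0, 0.1, 2 * n_hidden)

def predict(x_content, x_collab):
    h1 = np.tanh(W_content @ x_content)      # content expert cluster
    h2 = np.tanh(W_collab @ x_collab)        # collaborative expert cluster
    return np.concatenate([h1, h2]) @ w_out  # combined rating prediction

# Input-to-hidden weight counts: fully connected vs. expert clusters.
full = (n_content + n_collab) * 2 * n_hidden
split = n_content * n_hidden + n_collab * n_hidden
print(full, split)  # 1120 vs 560: the split topology halves the weights here
```

Fewer weights mean fewer gradient computations per training step, which is consistent with the training-time reduction the abstract reports.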
290

Deep Learning Binary Neural Network on an FPGA

Redkar, Shrutika 27 April 2017 (has links)
In recent years, deep neural networks have attracted a great deal of attention in the fields of computer vision and artificial intelligence. A convolutional neural network exploits spatial correlations in an input image by performing convolution operations in local receptive fields. Compared with fully connected neural networks, convolutional neural networks have fewer weights and are faster to train. Much research has been conducted to further reduce the computational complexity and memory requirements of convolutional neural networks, to make them applicable to low-power embedded applications. This thesis focuses on a special class of convolutional neural network with only binary weights and activations, referred to as binary neural networks. Weights and activations for convolutional and fully connected layers are binarized to take only two values, +1 and -1. Therefore, the computation and memory requirements are reduced significantly. The proposed binary neural network architecture has been implemented on an FPGA as a real-time, high-speed, low-power computer vision platform. Only on-chip memories are utilized in the FPGA design. The FPGA implementation is evaluated using the CIFAR-10 benchmark and achieves a processing speed of 332,164 images per second with a classification accuracy of about 86.06%.
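The binarization described in this abstract can be sketched briefly. With weights and activations restricted to +1 and -1, a dot product (the core of a convolution) reduces to an XNOR followed by a popcount on bit-packed operands, which is what makes FPGA implementations so fast. The sign-based binarization rule and the vector length below are illustrative assumptions, not details from the thesis.

```python
import numpy as np

def binarize(x):
    # Map real values to {+1, -1}; zero is assigned +1 by convention here.
    return np.where(x >= 0, 1, -1).astype(np.int8)

rng = np.random.default_rng(1)
w = binarize(rng.normal(size=16))  # binarized weights
a = binarize(rng.normal(size=16))  # binarized activations

# Encode +1 as bit 1 and -1 as bit 0, then pack into bytes.
wb = np.packbits(w == 1)
ab = np.packbits(a == 1)

# XNOR marks positions where the signs match; each match contributes +1
# to the dot product and each mismatch -1, so dot = 2*matches - n.
matches = int(np.unpackbits(~(wb ^ ab))[:16].sum())
dot_via_popcount = 2 * matches - 16

# The bit-level result agrees with the ordinary integer dot product.
assert dot_via_popcount == int(w @ a)
print(dot_via_popcount)
```

On hardware, the packed XNOR and popcount replace multiply-accumulate units entirely, and 1-bit storage cuts memory 32-fold versus float32, which is why on-chip memory alone can suffice.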
