41. Neural network based exchange-correlation functional / Li, Xiaobo, 李曉博. January 2007 (has links)
published or final version / abstract / Chemistry / Master / Master of Philosophy
42. Fixed planar holographic interconnects for optically implemented neural networks / Keller, Paul Edwin. January 1991 (has links)
In recent years there has been great interest in neural networks, since they are capable of performing pattern recognition, classification, decision, search, and optimization tasks. A key element of most neural network systems is the massive number of weighted interconnections (synapses) used to tie relatively simple processing elements (neurons) together in a useful architecture. The inherent parallelism and interconnection capability of optics make it a likely candidate for implementing the neural network interconnection process. While there are several optical technologies worth exploring, this dissertation examines the capabilities and limitations of using fixed planar holographic interconnects in a neural network system. Although optics is well suited to the interconnection task, nonlinear processing operations are difficult to implement optically and are better suited to electronics. A hybrid architecture of planar interconnection holograms and opto-electronic neurons is therefore a sensible approach to implementing a neural network, and this architecture is analyzed. The interconnection hologram must encode synaptic weights accurately, have a high diffraction efficiency, and maximize the number of interconnections. Various computer-generated hologram techniques are tested for their ability to produce the interconnection hologram. A new technique using the Gerchberg-Saxton process followed by a random-search error minimization produces the highest interconnect accuracy and highest diffraction efficiency of the techniques tested. The analysis shows that a reasonably sized planar hologram has the capacity to connect 5000 neuron outputs to 5000 neuron inputs and that the bipolar synaptic weights can have an accuracy of approximately 5 bits. To demonstrate the concept of an opto-electronic neural network with planar holographic interconnects, a Hopfield-style associative memory is constructed and shown to perform almost as well as an ideal system.
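The abstract names its hologram-design pipeline (the Gerchberg-Saxton process followed by random-search error minimization) without detail. The following is a minimal Python sketch of the general shape of that pipeline for a phase-only hologram, not Keller's implementation: the function names, the FFT stand-in for optical propagation, the pixel-perturbation move, and all parameters are our assumptions.

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=50, seed=0):
    """Estimate a phase-only hologram whose far-field reconstruction
    approximates target_amp (Gerchberg-Saxton iteration; the FFT is a
    stand-in for the actual optical propagation)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        recon = np.fft.fft2(np.exp(1j * phase))            # hologram -> image plane
        recon = target_amp * np.exp(1j * np.angle(recon))  # impose target amplitude
        phase = np.angle(np.fft.ifft2(recon))              # back-propagate, keep phase
    return phase

def recon_error(phase, target_amp):
    """Sum-squared error between the scale-normalized reconstruction
    amplitude and the target amplitude."""
    recon = np.abs(np.fft.fft2(np.exp(1j * phase)))
    recon *= np.linalg.norm(target_amp) / np.linalg.norm(recon)
    return float(np.sum((recon - target_amp) ** 2))

def random_search_refine(phase, target_amp, n_trials=2000, step=0.2, seed=1):
    """Random-search error minimization: perturb one hologram pixel at a
    time and keep the change only if the reconstruction error drops."""
    rng = np.random.default_rng(seed)
    best = recon_error(phase, target_amp)
    for _ in range(n_trials):
        i, j = rng.integers(phase.shape[0]), rng.integers(phase.shape[1])
        old = phase[i, j]
        phase[i, j] = old + rng.uniform(-step, step)
        err = recon_error(phase, target_amp)
        if err < best:
            best = err          # accept the perturbation
        else:
            phase[i, j] = old   # reject and restore
    return phase, best
```

In this style of pipeline, the Gerchberg-Saxton stage converges quickly to a good phase estimate and the random search then trades extra computation for the last increments of interconnect accuracy and diffraction efficiency.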
43. Multiple self-organised spiking neural networks / Amin, Muhamad Kamal M. January 2009 (has links)
This thesis presents Multiple Self-Organised Spiking Neural Networks (MSOSNN), an architecture intended to be a more biologically plausible artificial neural network. Spiking neurons with synaptic delays are proposed to encode information and perform the computations, and the method is extended to unsupervised competitive and self-organising learning. It is evaluated on real-world datasets, and computer simulation results show that it functions similarly to conventional neural networks such as Kohonen Self-Organising Maps. The Self-Organised Spiking Neural Networks (SOSNN) are then combined into multiple networks: the architecture is structured into n component modules, each solving a sub-task, with the modules combined to solve the main task. Training is arranged so that one module becomes the winner at each step of the learning phase. Evaluation on different datasets, together with comparison against a single, unified network, showed that the proposed architecture is particularly useful for high-dimensional input vectors. The Multiple SOSNN architecture thus provides a guideline for complex, large-scale network solutions.
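As a rough illustration of the ingredients this abstract combines (spiking neurons that carry information in delays, plus competitive self-organising learning), here is a toy delay-based self-organising update in Python. It is our sketch of the general idea, not the thesis's MSOSNN: the delay-matching winner rule, the 1-D neighbourhood, and every parameter are assumptions.

```python
import numpy as np

def train_spiking_som(spike_times, n_neurons=10, epochs=20,
                      lr=0.3, sigma=2.0, seed=0):
    """Toy self-organised spiking network: each neuron stores one synaptic
    delay per input line; the neuron whose delays best line up with the
    input spike times receives the most coincident arrivals, fires first,
    and wins. The winner and its neighbours pull their delays toward the
    input, Kohonen-style."""
    rng = np.random.default_rng(seed)
    n_inputs = spike_times.shape[1]
    delays = rng.uniform(0.0, 1.0, (n_neurons, n_inputs))
    for _ in range(epochs):
        for t in spike_times:                      # one input spike pattern
            mismatch = np.linalg.norm(delays - t, axis=1)
            winner = np.argmin(mismatch)           # earliest coincident firing
            dist = np.abs(np.arange(n_neurons) - winner)
            h = np.exp(-dist**2 / (2 * sigma**2))  # 1-D neighbourhood function
            delays += lr * h[:, None] * (t - delays)
    return delays

# e.g. 100 patterns over 3 input lines, spike times encoding features:
# delays = train_spiking_som(np.random.rand(100, 3))
```

A modular MSOSNN in this spirit would train several such maps, with a winner-module rule deciding which map adapts at each learning step.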
44. Deep learning and SVM methods for lung diseases detection and direction recognition / Li, Lei. January 2018 (has links)
University of Macau / Faculty of Science and Technology / Department of Computer and Information Science
45. The induced topology of local minima with applications to artificial neural networks. January 1992 (has links)
by Yun Chung Chu. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1992. / Includes bibliographical references (leaves 163-[165]). / Chapter 1 --- Background --- p.1 / Chapter 1.1 --- Introduction --- p.1 / Chapter 1.2 --- Basic notations --- p.4 / Chapter 1.3 --- Object of study --- p.6 / Chapter 2 --- Review of Kohonen's algorithm --- p.22 / Chapter 2.1 --- General form of Kohonen's algorithm --- p.22 / Chapter 2.2 --- r-neighborhood by matrix --- p.25 / Chapter 2.3 --- Examples --- p.28 / Chapter 3 --- Local minima --- p.34 / Chapter 3.1 --- Theory of local minima --- p.35 / Chapter 3.2 --- Minimizing the number of local minima --- p.40 / Chapter 3.3 --- Detecting the success or failure of Kohonen's algorithm --- p.48 / Chapter 3.4 --- Local minima for different graph structures --- p.59 / Chapter 3.5 --- Formulation by geodesic distance --- p.65 / Chapter 3.6 --- Local minima and Voronoi regions --- p.67 / Chapter 4 --- Induced graph --- p.70 / Chapter 4.1 --- Formalism --- p.71 / Chapter 4.2 --- Practical way to find the induced graph --- p.88 / Chapter 4.3 --- Some examples --- p.95 / Chapter 5 --- Given mapping vs induced mapping --- p.102 / Chapter 5.1 --- Comparison between given mapping and induced mapping --- p.102 / Chapter 5.2 --- Matching the induced mapping to given mapping --- p.115 / Chapter 6 --- A special topic: application to determination of dimension --- p.131 / Chapter 6.1 --- Theory --- p.133 / Chapter 6.2 --- Advanced examples --- p.151 / Chapter 6.3 --- Special applications --- p.156 / Chapter 7 --- Conclusion --- p.159 / Bibliography --- p.163
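For readers coming to this record cold, the "general form of Kohonen's algorithm" that Chapters 2-3 study can be stated compactly. The following is the textbook formulation (our summary, with the neighbourhood function h written generically, not the thesis's r-neighborhood matrix notation); informally, the local minima of Chapter 3 are weight configurations from which no such update can further decrease the associated distortion.

```latex
% Winner selection and weight update, textbook form (notation ours):
\[
  c(x) \;=\; \arg\min_{j}\,\lVert x - w_j \rVert, \qquad
  w_j(t+1) \;=\; w_j(t) + \alpha(t)\, h\bigl(j, c(x)\bigr)\,\bigl(x - w_j(t)\bigr),
\]
% where \alpha(t) is the learning rate and h(j, c) weights neuron j's
% update by its graph distance from the winner c.
```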
46. A specification-based design tool for artificial neural networks. January 1992 (has links)
Wong Wai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1992. / Includes bibliographical references (leaves 78-80). / Chapter 1. --- Introduction --- p.1 / Chapter 1.1. --- Specification Environment --- p.2 / Chapter 1.2. --- Specification Analysis --- p.2 / Chapter 1.3. --- Outline --- p.3 / Chapter 2. --- Survey --- p.4 / Chapter 2.1. --- Concurrence Specification --- p.4 / Chapter 2.1.1. --- Sequential Approach --- p.5 / Chapter 2.1.2. --- Mapping onto Concurrent Architecture --- p.6 / Chapter 2.1.3. --- Automatic Concurrence Introduction --- p.7 / Chapter 2.2. --- Specification Analysis --- p.8 / Chapter 2.2.1. --- Motivation --- p.8 / Chapter 2.2.2. --- Cyclic Dependency --- p.8 / Chapter 3. --- The Design Tool --- p.11 / Chapter 3.1. --- Specification Environment --- p.11 / Chapter 3.1.1. --- Framework --- p.11 / Chapter 3.1.1.1. --- Formal Neurons --- p.12 / Chapter 3.1.1.2. --- Configuration --- p.12 / Chapter 3.1.1.3. --- Control Neuron --- p.13 / Chapter 3.1.2. --- Dataflow Specification --- p.14 / Chapter 3.1.2.1. --- Absence of Control Information --- p.14 / Chapter 3.1.2.2. --- Single-Valued Variables & Explicit Time Indices --- p.14 / Chapter 3.1.2.3. --- Explicit Notations --- p.15 / Chapter 3.1.3. --- User Interface --- p.15 / Chapter 3.2. --- Specification Analysis --- p.16 / Chapter 3.2.1. --- Data Dependency Analysis --- p.16 / Chapter 3.2.2. --- Attribute Analysis --- p.16 / Chapter 4. --- BP-Net Specification --- p.18 / Chapter 4.1. --- BP-Net Paradigm --- p.18 / Chapter 4.1.1. --- Neurons of a BP-Net --- p.18 / Chapter 4.1.2. --- Configuration of BP-Net --- p.20 / Chapter 4.2. --- Constant Declarations --- p.20 / Chapter 4.3. --- Formal Neuron Specification --- p.21 / Chapter 4.3.1. --- Mapping the Paradigm --- p.22 / Chapter 4.3.1.1. --- Mapping Symbols onto Parameter Names --- p.22 / Chapter 4.3.1.2. --- Mapping Neuron Equations onto Internal Functions --- p.22 / Chapter 4.3.2. --- Form Entries --- p.23 / Chapter 4.3.2.1. --- Neuron Type Entry --- p.23 / Chapter 4.3.2.2. --- Input, Output and Internal Parameter Entries --- p.23 / Chapter 4.3.2.3. --- Initial Value Entry --- p.25 / Chapter 4.3.2.4. --- Internal Function Entry --- p.25 / Chapter 4.4. --- Configuration Specification --- p.28 / Chapter 4.4.1. --- Form Entries --- p.29 / Chapter 4.4.1.1. --- Neuron Label Entry --- p.29 / Chapter 4.4.1.2. --- Neuron Character Entry --- p.30 / Chapter 4.4.1.3. --- Connection Pattern Entry --- p.31 / Chapter 4.4.2. --- Characteristics of the Syntax --- p.33 / Chapter 4.5. --- Control Neuron Specification --- p.34 / Chapter 4.5.1. --- Form Entries --- p.35 / Chapter 4.5.1.1. --- Global Input, Output, Parameter & Initial Value Entries --- p.35 / Chapter 4.5.1.2. --- Input & Output File Entries --- p.36 / Chapter 4.5.1.3. --- Global Function Entry --- p.36 / Chapter 5. --- Data Dependency Analysis --- p.40 / Chapter 5.1. --- Graph Construction --- p.41 / Chapter 5.1.1. --- Simplification and Normalization --- p.41 / Chapter 5.1.1.1. --- Removing Non-Essential Information --- p.41 / Chapter 5.1.1.2. --- Removing File Record Parameters --- p.42 / Chapter 5.1.1.3. --- Rearranging Temporal Offset --- p.42 / Chapter 5.1.1.4. --- Conservation of Temporal Relationship --- p.43 / Chapter 5.1.1.5. --- Zero/Negative Offset for Determining Parameters --- p.43 / Chapter 5.1.2. --- Internal Dependency Graphs (IDGs) --- p.43 / Chapter 5.1.3. --- IDG of Control Neuron (CnIDG) --- p.45 / Chapter 5.1.4. --- Global Dependency Graphs (GDGs) --- p.45 / Chapter 5.2. --- Cycle Detection --- p.48 / Chapter 5.2.1. --- BP-Net --- p.48 / Chapter 5.2.2. --- Other Examples --- p.49 / Chapter 5.2.2.1. --- The Perceptron --- p.50 / Chapter 5.2.2.2. --- The Boltzmann Machine --- p.51 / Chapter 5.2.3. --- Number of Cycles --- p.52 / Chapter 5.2.3.1. --- Different Number of Layers --- p.52 / Chapter 5.2.3.2. --- Different Network Types --- p.52 / Chapter 5.2.4. --- Cycle Length --- p.53 / Chapter 5.2.4.1. --- Different Number of Layers --- p.53 / Chapter 5.2.4.2. --- Comparison Among Different Networks --- p.53 / Chapter 5.2.5. --- Difficulties in Analysis --- p.53 / Chapter 5.3. --- Dependency Cycle Analysis --- p.54 / Chapter 5.3.1. --- Temporal Index Analysis --- p.54 / Chapter 5.3.2. --- Non-Temporal Index Analysis --- p.55 / Chapter 5.3.2.1. --- A Simple Example --- p.55 / Chapter 5.3.2.2. --- Single Parameter --- p.56 / Chapter 5.3.2.3. --- Multiple Parameters --- p.57 / Chapter 5.3.3. --- Combined Method --- p.58 / Chapter 5.3.4. --- Scheduling --- p.58 / Chapter 5.3.4.1. --- Algorithm --- p.59 / Chapter 5.3.4.2. --- Schedule for the BP-Net --- p.59 / Chapter 5.4. --- Symmetry in Graph Construction --- p.60 / Chapter 5.4.1. --- Basic Approach --- p.60 / Chapter 5.4.2. --- Construction of the BP-Net GDG --- p.61 / Chapter 5.4.3. --- Limitation --- p.63 / Chapter 6. --- Attribute Analysis --- p.64 / Chapter 6.1. --- Parameter Analysis --- p.64 / Chapter 6.1.1. --- Internal Dependency Graphs (IDGs) --- p.65 / Chapter 6.1.1.1. --- Correct Properties of Parameters in IDGs --- p.65 / Chapter 6.1.1.2. --- Example --- p.65 / Chapter 6.1.2. --- Combined Internal Dependency Graphs (CIDG) --- p.66 / Chapter 6.1.2.1. --- Tests on Parameters of CIDG --- p.66 / Chapter 6.1.2.2. --- Example --- p.67 / Chapter 6.1.3. --- Finalized Neuron Obtained --- p.67 / Chapter 6.1.4. --- CIDG of the BP-Net --- p.68 / Chapter 6.2. --- Constraint Checking --- p.68 / Chapter 6.2.1. --- Syntactic, Semantic and Simple Checkings --- p.68 / Chapter 6.2.1.1. --- The Syntactic & Semantic Techniques --- p.68 / Chapter 6.2.1.2. --- Simple Matching --- p.70 / Chapter 6.2.2. --- Constraints --- p.71 / Chapter 6.2.2.1. --- Constraints on Formal Neuron --- p.71 / Chapter 6.2.2.2. --- Constraints on Configuration --- p.72 / Chapter 6.2.2.3. --- Constraints on Control Neuron --- p.73 / Chapter 6.3. --- Complete Checking Procedure --- p.73 / Chapter 7. --- Conclusions --- p.75 / Chapter 7.1. --- Limitations --- p.76 / Chapter 7.1.1. --- Exclusive Conditional Dependency Cycles --- p.76 / Chapter 7.1.2. --- Maximum Parallelism --- p.77 / Reference --- p.78 / Appendix --- p.1 / Chapter I. --- Form Syntax --- p.1 / Chapter A. --- Syntax Conventions --- p.1 / Chapter B. --- Form Definition --- p.1 / Chapter 1. --- Form Structure --- p.1 / Chapter 2. --- Constant Declaration --- p.1 / Chapter 3. --- Formal Neuron Declaration --- p.1 / Chapter 4. --- Configuration Declaration --- p.2 / Chapter 5. --- Control Neuron --- p.2 / Chapter 6. --- Supplementary Definition --- p.3 / Chapter II. --- Algorithms --- p.4 / Chapter III. --- Deadlock & Dependency Cycles --- p.14 / Chapter A. --- Deadlock Prevention --- p.14 / Chapter 1. --- Necessary Conditions for Deadlock --- p.14 / Chapter 2. --- Resource Allocation Graphs --- p.15 / Chapter 3. --- Cycles and Blocked Requests --- p.15 / Chapter B. --- Deadlock in ANN Systems --- p.16 / Chapter 1. --- Shared resources --- p.16 / Chapter 2. --- Presence of the Necessary Conditions for Deadlocks --- p.16 / Chapter 3. --- Operation Constraint for Communication --- p.16 / Chapter 4. --- Checkings Required --- p.17 / Chapter C. --- Data Dependency Graphs --- p.17 / Chapter 1. --- Simplifying Resource Allocation Graphs --- p.17 / Chapter 2. --- Expanding into Parameter Level --- p.18 / Chapter 3. --- Freezing the Request Edges --- p.18 / Chapter 4. --- Reversing the Edge Directions --- p.18 / Chapter 5. --- Mutual Dependency Cycles --- p.18 / Chapter IV. --- Case Studies --- p.19 / Chapter A. --- BP-Net --- p.19 / Chapter 1. --- Specification Forms --- p.19 / Chapter 2. --- Results After Simple Checkings --- p.21 / Chapter 3. --- Internal Dependency Graphs Construction --- p.21 / Chapter 4. --- Results From Parameter Analysis --- p.21 / Chapter 5. --- Global Dependency Graphs Construction --- p.21 / Chapter 6. --- Cycles Detection --- p.21 / Chapter 7. --- Time Subscript Analysis --- p.21 / Chapter 8. --- Subscript Analysis --- p.21 / Chapter 9. --- Scheduling --- p.21 / Chapter B. --- Perceptron --- p.21 / Chapter 1. --- Specification Forms --- p.22 / Chapter 2. --- Results After Simple Checkings --- p.24 / Chapter 3. --- Internal Dependency Graphs Construction --- p.24 / Chapter 4. --- Results From Parameter Analysis --- p.25 / Chapter 5. --- Global Dependency Graph Construction --- p.25 / Chapter 6. --- Cycles Detection --- p.25 / Chapter 7. --- Time Subscript Analysis --- p.25 / Chapter 8. --- Subscript Analysis --- p.25 / Chapter 9. --- Scheduling --- p.25 / Chapter C. --- Boltzmann Machine --- p.26 / Chapter 1. --- Specification Forms --- p.26 / Chapter 2. --- Results After Simple Checkings --- p.35 / Chapter 3. --- Graphs Construction --- p.35 / Chapter 4. --- Results From Parameter Analysis --- p.36 / Chapter 5. --- Global Dependency Graphs Construction --- p.36 / Chapter 6. --- Cycle Detection --- p.36 / Chapter 7. --- Time Subscript Analysis --- p.36 / Chapter 8. --- Subscript Analysis --- p.36 / Chapter 9. --- Scheduling --- p.36
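Chapter 5 and the deadlock appendix of this thesis revolve around detecting cycles in data dependency graphs. The construction of those graphs is specific to the tool, but the detection step itself is the standard graph algorithm; purely as orientation (not the thesis's algorithm), here is a three-colour depth-first-search cycle finder in Python over an adjacency-list graph.

```python
def find_cycle(graph):
    """Return one cycle in a directed dependency graph given as
    {node: [nodes it depends on]}, or None if the graph is acyclic.
    Classic three-colour depth-first search: a GREY node found on the
    current path marks a back edge, hence a cycle."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {v: WHITE for v in graph}

    def dfs(v, path):
        colour[v] = GREY
        path.append(v)
        for w in graph.get(v, ()):
            if colour.get(w, WHITE) == GREY:       # back edge: cycle found
                return path[path.index(w):] + [w]
            if colour.get(w, WHITE) == WHITE:
                found = dfs(w, path)
                if found:
                    return found
        path.pop()
        colour[v] = BLACK
        return None

    for root in list(graph):
        if colour[root] == WHITE:
            found = dfs(root, [])
            if found:
                return found
    return None

# e.g. a mutual dependency among three neuron parameters (names ours):
# find_cycle({"w": ["delta"], "delta": ["y"], "y": ["w"]})
# -> ["w", "delta", "y", "w"]
```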
47. Restoration network design and neural network. January 1992 (has links)
by Leung Lee. / Thesis (M.Sc.)--Chinese University of Hong Kong, 1992. / Includes bibliographical references. / SECTION 1. --- Introduction --- p.1 / SECTION 2. --- Formulation of Problem --- p.2 / Chapter 2.1 --- Problem Identification --- p.2 / Chapter 2.2 --- Network Planning Parameters and Assumptions --- p.3 / Chapter 2.3 --- Neural Network Model Transformation --- p.5 / Chapter 2.4 --- Algorithm and Implementation --- p.12 / SECTION 3. --- Simulation Results --- p.15 / Chapter 3.1 --- All Link Costs Are Same or Nearly the Same --- p.17 / Chapter 3.2 --- Fluctuated Cost in One or Two Fibre Paths --- p.18 / Chapter 3.3 --- Sudden Traffic Demand Change in Last Season --- p.19 / SECTION 4. --- Discussion --- p.20 / SECTION 5. --- Conclusion --- p.26 / GLOSSARY OF TERMS --- p.27 / BIBLIOGRAPHY --- p.29 / APPENDIX --- p.A1 / Chapter A --- Simulation Results --- p.A1 / Chapter B --- ANN Traffic Routing Example --- p.B1
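Section 2.3 of this record describes transforming a restoration-network planning problem into a neural network model and then minimizing it. The record does not give the transformation itself, so the following is only a generic sketch of the Hopfield-style descent such formulations typically reduce to, with W and b standing in for whatever penalty terms encode link costs and traffic-demand constraints; the names and the binary-unit choice are our assumptions.

```python
import numpy as np

def hopfield_descent(W, b, n_sweeps=100, seed=0):
    """Asynchronous descent on the quadratic energy
        E(x) = -0.5 * x @ W @ x - b @ x,   x_i in {0, 1},
    with W symmetric. Each unit flips to whichever state lowers E,
    so E never increases and the state settles in a (possibly local)
    minimum that represents a candidate restoration plan."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, len(b)).astype(float)
    for _ in range(n_sweeps):
        for i in rng.permutation(len(b)):
            # local field on unit i, excluding any self-coupling term
            field = W[i] @ x - W[i, i] * x[i] + b[i]
            x[i] = 1.0 if field > 0 else 0.0
    return x
```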
48. Recurrent neural network for optimization with application to computer vision. January 1993 (has links)
by Cheung Kwok-wai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves [146-154]). / Chapter 1 --- Introduction / Chapter 1.1 --- Programmed computing vs. neurocomputing --- p.1-1 / Chapter 1.2 --- Development of neural networks - feedforward and feedback models --- p.1-2 / Chapter 1.3 --- State of art of applying recurrent neural network towards computer vision problem --- p.1-3 / Chapter 1.4 --- Objective of the Research --- p.1-6 / Chapter 1.5 --- Plan of the thesis --- p.1-7 / Chapter 2 --- Background / Chapter 2.1 --- Short history on development of Hopfield-like neural network --- p.2-1 / Chapter 2.2 --- Hopfield network model --- p.2-3 / Chapter 2.2.1 --- Neuron's transfer function --- p.2-3 / Chapter 2.2.2 --- Updating sequence --- p.2-6 / Chapter 2.3 --- Hopfield energy function and network convergence properties --- p.2-1 / Chapter 2.4 --- Generalized Hopfield network --- p.2-13 / Chapter 2.4.1 --- Network order and generalized Hopfield network --- p.2-13 / Chapter 2.4.2 --- Associated energy function and network convergence property --- p.2-13 / Chapter 2.4.3 --- Hardware implementation consideration --- p.2-15 / Chapter 3 --- Recurrent neural network for optimization / Chapter 3.1 --- Mapping to Neural Network formulation --- p.3-1 / Chapter 3.2 --- Network stability versus Self-reinforcement --- p.3-5 / Chapter 3.2.1 --- Quadratic problem and Hopfield network --- p.3-6 / Chapter 3.2.2 --- Higher-order case and reshaping strategy --- p.3-8 / Chapter 3.2.3 --- Numerical Example --- p.3-10 / Chapter 3.3 --- Local minimum limitation and existing solutions in the literature --- p.3-12 / Chapter 3.3.1 --- Simulated Annealing --- p.3-13 / Chapter 3.3.2 --- Mean Field Annealing --- p.3-15 / Chapter 3.3.3 --- Adaptively changing neural network --- p.3-16 / Chapter 3.3.4 --- Correcting Current Method --- p.3-16 / Chapter 3.4 --- Conclusions --- p.3-17 / Chapter 4 --- A Novel Neural Network for Global Optimization - Tunneling Network / Chapter 4.1 --- Tunneling Algorithm --- p.4-1 / Chapter 4.1.1 --- Description of Tunneling Algorithm --- p.4-1 / Chapter 4.1.2 --- Tunneling Phase --- p.4-2 / Chapter 4.2 --- A Neural Network with tunneling capability - Tunneling network --- p.4-8 / Chapter 4.2.1 --- Network Specifications --- p.4-8 / Chapter 4.2.2 --- Tunneling function for Hopfield network and the corresponding updating rule --- p.4-9 / Chapter 4.3 --- Tunneling network stability and global convergence property --- p.4-12 / Chapter 4.3.1 --- Tunneling network stability --- p.4-12 / Chapter 4.3.2 --- Global convergence property --- p.4-15 / Chapter 4.3.2.1 --- Markov chain model for Hopfield network --- p.4-15 / Chapter 4.3.2.2 --- Classification of the Hopfield Markov chain --- p.4-16 / Chapter 4.3.2.3 --- Markov chain model for tunneling network and its convergence towards global minimum --- p.4-18 / Chapter 4.3.3 --- Variation of pole strength and its effect --- p.4-20 / Chapter 4.3.3.1 --- Energy Profile analysis --- p.4-21 / Chapter 4.3.3.2 --- Size of attractive basin and pole strength required --- p.4-24 / Chapter 4.3.3.3 --- A new type of pole eases the implementation problem --- p.4-30 / Chapter 4.4 --- Simulation Results and Performance comparison --- p.4-31 / Chapter 4.4.1 --- Simulation Experiments --- p.4-32 / Chapter 4.4.2 --- Simulation Results and Discussions --- p.4-37 / Chapter 4.4.2.1 --- Comparisons on optimal path obtained and the convergence rate --- p.4-37 / Chapter 4.4.2.2 --- On decomposition of Tunneling network --- p.4-38 / Chapter 4.5 --- Suggested hardware implementation of Tunneling network --- p.4-48 / Chapter 4.5.1 --- Tunneling network hardware implementation --- p.4-48 / Chapter 4.5.2 --- Alternative implementation theory --- p.4-52 / Chapter 4.6 --- Conclusions --- p.4-54 / Chapter 5 --- Recurrent Neural Network for Gaussian Filtering / Chapter 5.1 --- Introduction --- p.5-1 / Chapter 5.1.1 --- Silicon Retina --- p.5-3 / Chapter 5.1.2 --- An Active Resistor Network for Gaussian Filtering of Image --- p.5-5 / Chapter 5.1.3 --- Motivations of using recurrent neural network --- p.5-7 / Chapter 5.1.4 --- Difference between the active resistor network model and recurrent neural network model for Gaussian filtering --- p.5-8 / Chapter 5.2 --- From Problem formulation to Neural Network formulation --- p.5-9 / Chapter 5.2.1 --- One Dimensional Case --- p.5-9 / Chapter 5.2.2 --- Two Dimensional Case --- p.5-13 / Chapter 5.3 --- Simulation Results and Discussions --- p.5-14 / Chapter 5.3.1 --- Spatial impulse response of the 1-D network --- p.5-14 / Chapter 5.3.2 --- Filtering property of the 1-D network --- p.5-14 / Chapter 5.3.3 --- Spatial impulse response of the 2-D network and some filtering results --- p.5-15 / Chapter 5.4 --- Conclusions --- p.5-16 / Chapter 6 --- Recurrent Neural Network for Boundary Detection / Chapter 6.1 --- Introduction --- p.6-1 / Chapter 6.2 --- From Problem formulation to Neural Network formulation --- p.6-3 / Chapter 6.2.1 --- Problem Formulation --- p.6-3 / Chapter 6.2.2 --- Recurrent Neural Network Model used --- p.6-4 / Chapter 6.2.3 --- Neural Network formulation --- p.6-5 / Chapter 6.3 --- Simulation Results and Discussions --- p.6-7 / Chapter 6.3.1 --- Feasibility study and Performance comparison --- p.6-7 / Chapter 6.3.2 --- Smoothing and Boundary Detection --- p.6-9 / Chapter 6.3.3 --- Convergence improvement by network decomposition --- p.6-10 / Chapter 6.3.4 --- Hardware implementation consideration --- p.6-10 / Chapter 6.4 --- Conclusions --- p.6-11 / Chapter 7 --- Conclusions and Future Researches / Chapter 7.1 --- Contributions and Conclusions --- p.7-1 / Chapter 7.2 --- Limitations and Suggested Future Researches --- p.7-3 / References --- p.R-1 / Appendix I The assignment of the boundary connection of 2-D recurrent neural network for Gaussian filtering --- p.A1-1 / Appendix II Formula for connection weight assignment of 2-D recurrent neural network for Gaussian filtering and the proof on symmetric property --- p.A2-1 / Appendix III Details on reshaping strategy --- p.A3-1
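Chapter 4's tunneling network alternates ordinary Hopfield descent with a "tunneling phase" that escapes the current local minimum via a pole placed at it. The thesis realises this in network dynamics; purely as orientation, here is the classical tunneling-function idea in Python, with a random-sampling tunneling phase. The function T, the pole strength lam, and all parameters follow the standard tunneling algorithm, not the thesis's analog realisation.

```python
import numpy as np

def tunneling_function(f, x, x_star, lam=1.0):
    """Classical tunneling transform: a pole of strength lam at the last
    local minimum x_star repels the search, and T(x) < 0 exactly where
    f(x) < f(x_star). Undefined at x == x_star itself."""
    return (f(x) - f(x_star)) / np.linalg.norm(x - x_star) ** (2 * lam)

def tunneling_phase(f, x_star, scale=1.0, n_trials=500, seed=0):
    """Look for a point in another basin with f(x) <= f(x_star); return
    it (local descent then restarts there), or None to accept x_star as
    the approximate global minimum. Random sampling stands in here for
    actually descending T."""
    rng = np.random.default_rng(seed)
    f_star = f(x_star)
    for _ in range(n_trials):
        x = x_star + scale * rng.standard_normal(x_star.shape)
        if f(x) <= f_star:
            return x
    return None
```

The key property the chapter's Markov-chain analysis formalises is that alternating descent and tunneling can only move between minima of equal or lower energy, which is what drives convergence toward the global minimum.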
49. On implementation and applications of the adaptive-network-based fuzzy inference system. January 1994 (has links)
by Ong Kai Hin George. / Thesis (M.Sc.)--Chinese University of Hong Kong, 1994. / Includes bibliographical references (leaves [102-104]).
50. On the synthesis of fuzzy neural systems. January 1995 (has links)
by Chung, Fu Lai. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 166-174). / ACKNOWLEDGEMENT --- p.iii / ABSTRACT --- p.iv / Chapter 1. --- Introduction --- p.1 / Chapter 1.1 --- Integration of Fuzzy Systems and Neural Networks --- p.1 / Chapter 1.2 --- Objectives of the Research --- p.7 / Chapter 1.2.1 --- Fuzzification of Competitive Learning Algorithms --- p.7 / Chapter 1.2.2 --- Capacity Analysis of FAM and FRNS Models --- p.8 / Chapter 1.2.3 --- Structure and Parameter Identifications of FRNS --- p.9 / Chapter 1.3 --- Outline of the Thesis --- p.9 / Chapter 2. --- A Fuzzy System Primer --- p.11 / Chapter 2.1 --- Basic Concepts of Fuzzy Sets --- p.11 / Chapter 2.2 --- Fuzzy Set-Theoretic Operators --- p.15 / Chapter 2.3 --- Linguistic Variable, Fuzzy Rule and Fuzzy Inference --- p.19 / Chapter 2.4 --- Basic Structure of a Fuzzy System --- p.22 / Chapter 2.4.1 --- Fuzzifier --- p.22 / Chapter 2.4.2 --- Fuzzy Knowledge Base --- p.23 / Chapter 2.4.3 --- Fuzzy Inference Engine --- p.24 / Chapter 2.4.4 --- Defuzzifier --- p.28 / Chapter 2.5 --- Concluding Remarks --- p.29 / Chapter 3. --- Categories of Fuzzy Neural Systems --- p.30 / Chapter 3.1 --- Introduction --- p.30 / Chapter 3.2 --- Fuzzification of Neural Networks --- p.31 / Chapter 3.2.1 --- Fuzzy Membership Driven Models --- p.32 / Chapter 3.2.2 --- Fuzzy Operator Driven Models --- p.34 / Chapter 3.2.3 --- Fuzzy Arithmetic Driven Models --- p.35 / Chapter 3.3 --- Layered Network Implementation of Fuzzy Systems --- p.36 / Chapter 3.3.1 --- Mamdani's Fuzzy Systems --- p.36 / Chapter 3.3.2 --- Takagi and Sugeno's Fuzzy Systems --- p.37 / Chapter 3.3.3 --- Fuzzy Relation Based Fuzzy Systems --- p.38 / Chapter 3.4 --- Concluding Remarks --- p.40 / Chapter 4. --- Fuzzification of Competitive Learning Networks --- p.42 / Chapter 4.1 --- Introduction --- p.42 / Chapter 4.2 --- Crisp Competitive Learning --- p.44 / Chapter 4.2.1 --- Unsupervised Competitive Learning Algorithm --- p.46 / Chapter 4.2.2 --- Learning Vector Quantization Algorithm --- p.48 / Chapter 4.2.3 --- Frequency Sensitive Competitive Learning Algorithm --- p.50 / Chapter 4.3 --- Fuzzy Competitive Learning --- p.50 / Chapter 4.3.1 --- Unsupervised Fuzzy Competitive Learning Algorithm --- p.53 / Chapter 4.3.2 --- Fuzzy Learning Vector Quantization Algorithm --- p.54 / Chapter 4.3.3 --- Fuzzy Frequency Sensitive Competitive Learning Algorithm --- p.58 / Chapter 4.4 --- Stability of Fuzzy Competitive Learning --- p.58 / Chapter 4.5 --- Controlling the Fuzziness of Fuzzy Competitive Learning --- p.60 / Chapter 4.6 --- Interpretations of Fuzzy Competitive Learning Networks --- p.61 / Chapter 4.7 --- Simulation Results --- p.64 / Chapter 4.7.1 --- Performance of Fuzzy Competitive Learning Algorithms --- p.64 / Chapter 4.7.2 --- Performance of Monotonically Decreasing Fuzziness Control Scheme --- p.74 / Chapter 4.7.3 --- Interpretation of Trained Networks --- p.76 / Chapter 4.8 --- Concluding Remarks --- p.80 / Chapter 5. --- Capacity Analysis of Fuzzy Associative Memories --- p.82 / Chapter 5.1 --- Introduction --- p.82 / Chapter 5.2 --- Fuzzy Associative Memories (FAMs) --- p.83 / Chapter 5.3 --- Storing Multiple Rules in FAMs --- p.87 / Chapter 5.4 --- A High Capacity Encoding Scheme for FAMs --- p.90 / Chapter 5.5 --- Memory Capacity --- p.91 / Chapter 5.6 --- Rule Modification --- p.93 / Chapter 5.7 --- Inference Performance --- p.99 / Chapter 5.8 --- Concluding Remarks --- p.104 / Chapter 6. --- Capacity Analysis of Fuzzy Relational Neural Systems --- p.105 / Chapter 6.1 --- Introduction --- p.105 / Chapter 6.2 --- Fuzzy Relational Equations and Fuzzy Relational Neural Systems --- p.107 / Chapter 6.3 --- Solving a System of Fuzzy Relational Equations --- p.109 / Chapter 6.4 --- New Solvable Conditions --- p.112 / Chapter 6.4.1 --- Max-t Fuzzy Relational Equations --- p.112 / Chapter 6.4.2 --- Min-s Fuzzy Relational Equations --- p.117 / Chapter 6.5 --- Approximate Resolution --- p.119 / Chapter 6.6 --- System Capacity --- p.123 / Chapter 6.7 --- Inference Performance --- p.125 / Chapter 6.8 --- Concluding Remarks --- p.127 / Chapter 7. --- Structure and Parameter Identifications of Fuzzy Relational Neural Systems --- p.129 / Chapter 7.1 --- Introduction --- p.129 / Chapter 7.2 --- Modelling Nonlinear Dynamic Systems by Fuzzy Relational Equations --- p.131 / Chapter 7.3 --- A General FRNS Identification Algorithm --- p.138 / Chapter 7.4 --- An Evolutionary Computation Approach to Structure and Parameter Identifications --- p.139 / Chapter 7.4.1 --- Guided Evolutionary Simulated Annealing --- p.140 / Chapter 7.4.2 --- An Evolutionary Identification (EVIDENT) Algorithm --- p.143 / Chapter 7.5 --- Simulation Results --- p.146 / Chapter 7.6 --- Concluding Remarks --- p.158 / Chapter 8. --- Conclusions --- p.159 / Chapter 8.1 --- Summary of Contributions --- p.160 / Chapter 8.1.1 --- Fuzzy Competitive Learning --- p.160 / Chapter 8.1.2 --- Capacity Analysis of FAM and FRNS --- p.160 / Chapter 8.1.3 --- Numerical Identification of FRNS --- p.161 / Chapter 8.2 --- Further Investigations --- p.162 / Appendix A Publication List of the Candidate --- p.164 / BIBLIOGRAPHY --- p.166
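Chapter 4 of this thesis fuzzifies competitive learning by letting every prototype share in each update according to a fuzzy membership, rather than a single crisp winner taking all. The thesis defines its own algorithms (fuzzy UCL, LVQ, and FSCL variants); the sketch below is only the generic fuzzy-membership-weighted update in the FCM style, with the fuzziness exponent m controlling how crisp the competition is. All names and parameters are ours.

```python
import numpy as np

def fuzzy_competitive_learning(X, n_units=4, m=2.0, lr=0.05,
                               epochs=30, seed=0):
    """Fuzzified unsupervised competitive learning: instead of updating
    a single winner, every prototype moves toward each input, weighted
    by a fuzzy membership derived from relative distances (m > 1; as
    m -> 1 the rule approaches crisp winner-take-all learning)."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_units, replace=False)].copy()
    for _ in range(epochs):
        for x in X:
            d = np.linalg.norm(W - x, axis=1) + 1e-12   # avoid divide-by-zero
            u = d ** (-2.0 / (m - 1.0))
            u /= u.sum()                                # memberships sum to one
            W += lr * (u ** m)[:, None] * (x - W)       # membership-weighted pull
    return W

# e.g. four prototypes fitted to 2-D data:
# W = fuzzy_competitive_learning(np.random.rand(200, 2))
```

A schedule that decreases m over training, in the spirit of the "monotonically decreasing fuzziness control scheme" the contents list, would start with soft, shared updates and end with nearly crisp competition.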