61 |
Feature matching by Hopfield type neural networks. / CUHK electronic theses & dissertations collection / January 2002 (has links)
Li Wenjing. / "April 2002." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (p. 155-167). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
62 |
A neurodynamic optimization approach to constrained pseudoconvex optimization. January 2011 (has links)
Guo, Zhishan. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (p. 71-82). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.ii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Constrained Pseudoconvex Optimization --- p.1 / Chapter 1.2 --- Recurrent Neural Networks --- p.4 / Chapter 1.3 --- Thesis Organization --- p.7 / Chapter 2 --- Literature Review --- p.8 / Chapter 2.1 --- Pseudoconvex Optimization --- p.8 / Chapter 2.2 --- Recurrent Neural Networks --- p.10 / Chapter 3 --- Model Description and Convergence Analysis --- p.17 / Chapter 3.1 --- Model Descriptions --- p.18 / Chapter 3.2 --- Global Convergence --- p.20 / Chapter 4 --- Numerical Examples --- p.27 / Chapter 4.1 --- Gaussian Optimization --- p.28 / Chapter 4.2 --- Quadratic Fractional Programming --- p.36 / Chapter 4.3 --- Nonlinear Convex Programming --- p.39 / Chapter 5 --- Real-time Data Reconciliation --- p.42 / Chapter 5.1 --- Introduction --- p.42 / Chapter 5.2 --- Theoretical Analysis and Performance Measurement --- p.44 / Chapter 5.3 --- Examples --- p.45 / Chapter 6 --- Real-time Portfolio Optimization --- p.53 / Chapter 6.1 --- Introduction --- p.53 / Chapter 6.2 --- Model Description --- p.54 / Chapter 6.3 --- Theoretical Analysis --- p.56 / Chapter 6.4 --- Illustrative Examples --- p.58 / Chapter 7 --- Conclusions and Future Works --- p.67 / Chapter 7.1 --- Concluding Remarks --- p.67 / Chapter 7.2 --- Future Works --- p.68 / Chapter A --- Publication List --- p.69 / Bibliography --- p.71
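The table of contents above outlines a recurrent-network ("neurodynamic") method for constrained pseudoconvex optimization. As a rough, generic illustration of that idea (not the thesis's specific model), a projection neural network lets the state follow dx/dt = P(x - alpha*grad f(x)) - x, where P projects onto the feasible set; the quadratic fractional instance below (cf. the Chapter 4.2 heading) is an arbitrary example with all values chosen for illustration.

```python
import numpy as np

# Hypothetical quadratic fractional program (a pseudoconvex problem class):
# minimize f(x) = (x'Qx + a'x + c) / (b'x + d) over a box, with b'x + d > 0.
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])               # positive definite numerator matrix
a = np.array([-1.0, 0.5])
c = 2.0
b = np.array([0.1, 0.2])
d = 4.0                                   # denominator stays positive on the box
lo, hi = np.zeros(2), 2.0 * np.ones(2)    # box constraints

def f(x):
    return (x @ Q @ x + a @ x + c) / (b @ x + d)

def grad_f(x):
    num, den = x @ Q @ x + a @ x + c, b @ x + d
    return (2.0 * Q @ x + a) / den - num * b / den**2

def project(x):                           # projection onto the box
    return np.clip(x, lo, hi)

# Euler-discretized projection dynamics: dx/dt = P(x - alpha*grad f(x)) - x
x = np.array([2.0, 2.0])                  # initial network state
for _ in range(5000):
    x = x + 0.05 * (project(x - 0.5 * grad_f(x)) - x)
```

At a fixed point of these dynamics, x equals its own projected gradient step, which is exactly the stationarity condition for the constrained problem; for pseudoconvex f a stationary point is a global minimizer.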
63 |
Applications of neural networks in the binary classification problem. January 1997 (has links)
by Chan Pak Kei, Bernard. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 125-127). / Chapter 1 --- Introduction --- p.10 / Chapter 1.1 --- Overview --- p.10 / Chapter 1.2 --- Classification Approaches --- p.11 / Chapter 1.3 --- The Use of Neural Network --- p.12 / Chapter 1.4 --- Motivations --- p.14 / Chapter 1.5 --- Organization of Thesis --- p.16 / Chapter 2 --- Related Work --- p.19 / Chapter 2.1 --- Overview --- p.19 / Chapter 2.2 --- Neural Network --- p.20 / Chapter 2.2.1 --- Backpropagation Feedforward Neural Network --- p.20 / Chapter 2.2.2 --- Training of a Backpropagation Feedforward Neural Network --- p.22 / Chapter 2.2.3 --- Single Hidden-layer Model --- p.27 / Chapter 2.2.4 --- Data Preprocessing --- p.27 / Chapter 2.3 --- Fuzzy Sets --- p.29 / Chapter 2.3.1 --- Fuzzy Linear Regression Analysis --- p.29 / Chapter 2.4 --- Network Architecture Altering Algorithms --- p.31 / Chapter 2.4.1 --- Pruning Algorithms --- p.32 / Chapter 2.4.2 --- Constructive/Growing Algorithms --- p.35 / Chapter 2.5 --- Summary --- p.38 / Chapter 3 --- Hybrid Classification Systems --- p.39 / Chapter 3.1 --- Overview --- p.39 / Chapter 3.2 --- Literature Review --- p.41 / Chapter 3.2.1 --- Fuzzy Linear Regression (FLR) with Fuzzy Interval Analysis --- p.41 / Chapter 3.3 --- Data Sample and Methodology --- p.44 / Chapter 3.4 --- Hybrid Model --- p.46 / Chapter 3.4.1 --- Construction of Model --- p.46 / Chapter 3.5 --- Experimental Results --- p.50 / Chapter 3.5.1 --- Experimental Results on Breast Cancer Database --- p.50 / Chapter 3.5.2 --- Experimental Results on Synthetic Data --- p.53 / Chapter 3.6 --- Conclusion --- p.55 / Chapter 4 --- Searching for Suitable Network Size Automatically --- p.59 / Chapter 4.1 --- Overview --- p.59 / Chapter 4.2 --- Literature Review --- p.61 / Chapter 4.2.1 --- Pruning Algorithm --- p.61 / Chapter 4.2.2 --- Constructive Algorithms (Growing) --- p.66 / Chapter 4.2.3 --- Integration of methods --- p.67 / Chapter 4.3 --- Methodology and Approaches --- p.68 / Chapter 4.3.1 --- Growing --- p.68 / Chapter 4.3.2 --- Combinations of Growing and Pruning --- p.69 / Chapter 4.4 --- Experimental Results --- p.75 / Chapter 4.4.1 --- Breast-Cancer Cytology Database --- p.76 / Chapter 4.4.2 --- Tic-Tac-Toe Database --- p.82 / Chapter 4.5 --- Conclusion --- p.89 / Chapter 5 --- Conclusion --- p.91 / Chapter 5.1 --- Recall of Thesis Objectives --- p.91 / Chapter 5.2 --- Summary of Achievements --- p.92 / Chapter 5.2.1 --- Data Preprocessing --- p.92 / Chapter 5.2.2 --- Network Size --- p.93 / Chapter 5.3 --- Future Works --- p.94 / Chapter A --- Experimental Results of Ch3 --- p.95 / Chapter B --- Experimental Results of Ch4 --- p.112 / Bibliography --- p.125
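This record's Chapters 2.4 and 4 concern architecture-altering algorithms: pruning (removing weights or neurons that contribute little) and growing (adding hidden units until performance suffices). As a minimal sketch of one of the simplest pruning criteria, magnitude-based weight pruning zeroes the smallest weights; the threshold policy here is an illustrative assumption, not the thesis's actual method.

```python
import numpy as np

def prune_by_magnitude(weights, fraction=0.5):
    """Zero out the smallest-magnitude `fraction` of the weights."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Hypothetical 2x2 weight matrix: the two small weights are pruned away
W = np.array([[0.9, -0.1],
              [0.05, -1.2]])
W_pruned = prune_by_magnitude(W, fraction=0.5)
```

In a pruning-then-retraining loop, this step would be followed by further backpropagation on the surviving weights, as the survey chapters describe for pruning algorithms generally.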
64 |
Radial basis function of neural network in performance attribution. January 2003 (has links)
Wong Hing-Kwok. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 34-35). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Radial Basis Function (RBF) of Neural Network --- p.5 / Chapter 2.1 --- Neural Network --- p.6 / Chapter 2.2 --- Radial Basis Function (RBF) Network --- p.8 / Chapter 2.3 --- Model Specification --- p.10 / Chapter 2.4 --- Estimation --- p.12 / Chapter 3 --- RBF in Performance Attribution --- p.17 / Chapter 3.1 --- Background of Data Set --- p.18 / Chapter 3.2 --- Portfolio Construction --- p.20 / Chapter 3.3 --- Portfolio Rebalance --- p.22 / Chapter 3.4 --- Result --- p.23 / Chapter 4 --- Comparison --- p.26 / Chapter 4.1 --- Standard Linear Model --- p.27 / Chapter 4.2 --- Fixed Additive Model --- p.28 / Chapter 4.3 --- Refined Additive Model --- p.29 / Chapter 4.4 --- Result --- p.30 / Chapter 5 --- Conclusion --- p.32 / Bibliography --- p.34
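The Chapter 2 headings above cover the radial basis function (RBF) network and its estimation. A minimal sketch of the standard construction: hidden units compute Gaussian activations around fixed centers, and since the output layer is linear, its weights can be estimated by ordinary least squares. The centers, width, and toy target below are illustrative choices, not the thesis's specification or data.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x)               # toy target function

centers = np.linspace(0.0, 1.0, 10)       # fixed hidden-unit centers
width = 0.1                               # common Gaussian width

def design(x):
    # Phi[i, j] = exp(-(x_i - c_j)^2 / (2 * width^2))
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * width ** 2))

Phi = design(x)                           # 200 x 10 design matrix
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear output weights
y_hat = Phi @ w                           # network output on the grid
```

Fixing the centers and width in advance reduces training to a single linear solve, which is the main practical appeal of RBF networks over backpropagation-trained multilayer perceptrons.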
65 |
Neural networks for optimization / Cheung, Ka Kit. 01 January 2001 (has links)
No description available.
66 |
Neurodynamic approaches to model predictive control. January 2009 (has links)
Pan, Yunpeng. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (p. 98-107). / Abstract also in Chinese. / Abstract --- p.i / p.iii / Acknowledgement --- p.iv / Chapter 1 --- Introduction --- p.2 / Chapter 1.1 --- Model Predictive Control --- p.2 / Chapter 1.2 --- Neural Networks --- p.3 / Chapter 1.3 --- Existing studies --- p.6 / Chapter 1.4 --- Thesis structure --- p.7 / Chapter 2 --- Two Recurrent Neural Networks Approaches to Linear Model Predictive Control --- p.9 / Chapter 2.1 --- Problem Formulation --- p.9 / Chapter 2.1.1 --- Quadratic Programming Formulation --- p.10 / Chapter 2.1.2 --- Linear Programming Formulation --- p.13 / Chapter 2.2 --- Neural Network Approaches --- p.15 / Chapter 2.2.1 --- Neural Network Model 1 --- p.15 / Chapter 2.2.2 --- Neural Network Model 2 --- p.16 / Chapter 2.2.3 --- Control Scheme --- p.17 / Chapter 2.3 --- Simulation Results --- p.18 / Chapter 3 --- Model Predictive Control for Nonlinear Affine Systems Based on the Simplified Dual Neural Network --- p.22 / Chapter 3.1 --- Problem Formulation --- p.22 / Chapter 3.2 --- A Neural Network Approach --- p.25 / Chapter 3.2.1 --- The Simplified Dual Network --- p.26 / Chapter 3.2.2 --- RNN-based MPC Scheme --- p.28 / Chapter 3.3 --- Simulation Results --- p.28 / Chapter 3.3.1 --- Example 1 --- p.28 / Chapter 3.3.2 --- Example 2 --- p.29 / Chapter 3.3.3 --- Example 3 --- p.33 / Chapter 4 --- Nonlinear Model Predictive Control Using a Recurrent Neural Network --- p.36 / Chapter 4.1 --- Problem Formulation --- p.36 / Chapter 4.2 --- A Recurrent Neural Network Approach --- p.40 / Chapter 4.2.1 --- Neural Network Model --- p.40 / Chapter 4.2.2 --- Learning Algorithm --- p.41 / Chapter 4.2.3 --- Control Scheme --- p.41 / Chapter 4.3 --- Application to Mobile Robot Tracking --- p.42 / Chapter 4.3.1 --- Example 1 --- p.44 / Chapter 4.3.2 --- Example 2 --- p.44 / Chapter 4.3.3 --- Example 3 --- p.46 / Chapter 4.3.4 --- Example 4 --- p.48 / 
Chapter 5 --- Model Predictive Control of Unknown Nonlinear Dynamic Systems Based on Recurrent Neural Networks --- p.50 / Chapter 5.1 --- MPC System Description --- p.51 / Chapter 5.1.1 --- Model Predictive Control --- p.51 / Chapter 5.1.2 --- Dynamical System Identification --- p.52 / Chapter 5.2 --- Problem Formulation --- p.54 / Chapter 5.3 --- Dynamic Optimization --- p.58 / Chapter 5.3.1 --- The Simplified Dual Neural Network --- p.59 / Chapter 5.3.2 --- A Recursive Learning Algorithm --- p.60 / Chapter 5.3.3 --- Convergence Analysis --- p.61 / Chapter 5.4 --- RNN-based MPC Scheme --- p.65 / Chapter 5.5 --- Simulation Results --- p.67 / Chapter 5.5.1 --- Example 1 --- p.67 / Chapter 5.5.2 --- Example 2 --- p.68 / Chapter 5.5.3 --- Example 3 --- p.76 / Chapter 6 --- Model Predictive Control for Systems With Bounded Uncertainties Using a Discrete-Time Recurrent Neural Network --- p.81 / Chapter 6.1 --- Problem Formulation --- p.82 / Chapter 6.1.1 --- Process Model --- p.82 / Chapter 6.1.2 --- Robust MPC Design --- p.82 / Chapter 6.2 --- Recurrent Neural Network Approach --- p.86 / Chapter 6.2.1 --- Neural Network Model --- p.86 / Chapter 6.2.2 --- Convergence Analysis --- p.88 / Chapter 6.2.3 --- Control Scheme --- p.90 / Chapter 6.3 --- Simulation Results --- p.91 / Chapter 7 --- Summary and future works --- p.95 / Chapter 7.1 --- Summary --- p.95 / Chapter 7.2 --- Future works --- p.96 / Bibliography --- p.97
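The Chapter 2 headings describe casting linear model predictive control as a quadratic program. A hedged sketch of that standard condensing step: stack the predictions of x_{k+1} = A x_k + B u_k over a horizon N so the cost becomes a QP in the input sequence U, then solve it here with generic projected-gradient dynamics as a stand-in for the thesis's specific recurrent network models. The plant and weights below are illustrative choices.

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])               # discretized double integrator
B = np.array([[0.005], [0.1]])
N = 20                                    # prediction horizon
u_max = 1.0                               # input bound |u_k| <= u_max
x0 = np.array([1.0, 0.0])                 # initial state

# Prediction matrices: stacked states X = Sx @ x0 + Su @ U
Sx = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
Su = np.zeros((2 * N, N))
for i in range(N):
    for j in range(i + 1):
        Su[2 * i:2 * i + 2, j:j + 1] = np.linalg.matrix_power(A, i - j) @ B

H = Su.T @ Su + 0.1 * np.eye(N)           # QP Hessian (state weight I, input weight 0.1 I)
f = Su.T @ Sx @ x0                        # QP linear term

# Euler-discretized projected gradient dynamics on the input sequence U
U = np.zeros(N)
step = 1.0 / np.linalg.norm(H, 2)         # 1 / (largest eigenvalue of H)
for _ in range(5000):
    U = np.clip(U - step * (H @ U + f), -u_max, u_max)

cost = 0.5 * U @ H @ U + f @ U            # final QP cost (negative = improvement over U = 0)
```

In receding-horizon use only the first element of U would be applied to the plant before re-solving at the next sample, which is why the per-step optimizer's convergence speed matters.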
67 |
Solutions of linear equations and a class of nonlinear equations using recurrent neural networks / Mathia, Karl. 01 January 1996 (has links)
Artificial neural networks are computational paradigms inspired by biological neural networks (the human brain). Recurrent neural networks (RNNs) are characterized by neuron connections that include feedback paths. This dissertation uses the dynamics of RNN architectures to solve linear and certain nonlinear equations. Networks with linear dynamics (variants of the well-known Hopfield network) are used to solve systems of linear equations, where the network structure is adapted to match properties of the linear system in question. Nonlinear equations, in turn, are solved using the dynamics of nonlinear RNNs, which are based on feedforward multilayer perceptrons. Neural networks are well suited to implementation on special parallel hardware, owing to their intrinsic parallelism. The RNNs developed here are implemented on a neural network processor (NNP) designed specifically for fast neural-type processing, and are applied to the inverse kinematics problem in robotics, demonstrating their superior performance over alternative approaches.
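The linear-dynamics idea in this abstract can be sketched as a gradient flow: a Hopfield-type network whose state x(t) follows dx/dt = -A^T (A x - b) descends the energy ||Ax - b||^2 and settles at a solution of Ax = b (the least-squares solution when A is rectangular). This is a generic sketch of the principle, not the dissertation's specific network variants; the system below is an arbitrary well-conditioned example.

```python
import numpy as np

# Arbitrary well-conditioned linear system Ax = b (illustrative only)
A = np.array([[3.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, -2.0, 3.0])

x = np.zeros(3)                    # initial network state
dt = 0.01                          # Euler step for the continuous-time dynamics
for _ in range(5000):
    # dx/dt = -A^T (A x - b): gradient descent on ||Ax - b||^2
    x = x - dt * A.T @ (A @ x - b)
```

Because each state component is updated from a weighted sum of the others, every component can be computed in parallel, which is the property the abstract's parallel-hardware remark relies on.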
68 |
On evolving modular neural networks / Salama, Rameri. January 2000 (has links)
The basis of this thesis is the presumption that while neural networks are useful structures that can be used to model complex, highly non-linear systems, current methods of training neural networks are inadequate in some problem domains. Genetic algorithms have been used to optimise both the weights and architectures of neural networks, but these approaches do not treat the neural network in a sensible manner. In this thesis, I define the basis of computation within a neural network as a single neuron and its associated input connections. Sets of these neurons, stored in a matrix representation, comprise the building blocks that are transferred during one or more epochs of a genetic algorithm. I develop the concept of a Neural Building Block, and two new genetic algorithms are created that utilise this concept. The first genetic algorithm utilises the micro neural building block (micro-NBB): a unit consisting of one or more neurons and their input connections. The micro-NBB is a unit that is transmitted through the process of crossover and hence requires the introduction of a new crossover operator. However, the micro-NBB cannot be stored as a reusable component and exists only as the product of the crossover operator. The macro neural building block (macro-NBB) is utilised in the second genetic algorithm, and encapsulates the idea that fit neural networks contain fit sub-networks that need to be preserved across multiple epochs. A macro-NBB is a micro-NBB that persists across multiple epochs; this persistence necessitates the use of a genetic store, and a new operator to introduce macro-NBBs back into the population at random intervals. Once the theoretical presentation is completed, the newly developed genetic algorithms are used to evolve weights for a variety of neural network architectures to demonstrate the feasibility of the approach.
Comparison of the new genetic algorithm with other approaches is very favourable on two problems: a multiplexer problem and a robot control problem.
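The neuron-level crossover suggested by this abstract can be sketched as follows: treating each hidden neuron together with its input connections as one row of a weight matrix (a "micro neural building block"), crossover exchanges whole rows between two parents rather than splicing arbitrary weight strings. Details such as the per-neuron swap probability are illustrative assumptions, not the thesis's actual operator.

```python
import numpy as np

rng = np.random.default_rng(42)

def nbb_crossover(parent_a, parent_b, swap_prob=0.5):
    """Exchange whole neuron rows (a neuron plus its input weights) between parents."""
    mask = rng.random(parent_a.shape[0]) < swap_prob   # one swap flag per neuron
    child_a = np.where(mask[:, None], parent_b, parent_a)
    child_b = np.where(mask[:, None], parent_a, parent_b)
    return child_a, child_b

# Two hypothetical hidden layers: 4 neurons, 3 inputs each.
# Constant-valued parents make it easy to see each row's origin.
pa = np.zeros((4, 3))
pb = np.ones((4, 3))
ca, cb = nbb_crossover(pa, pb)
```

Keeping a neuron's input weights together preserves the feature that neuron computes, which is the rationale the abstract gives for treating the neuron, not the individual weight, as the unit of recombination.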
69 |
Resilient modulus prediction using neural network algorithm / Hanittinan, Wichai. January 2007 (has links)
Thesis (Ph. D.)--Ohio State University, 2007. / Title from first page of PDF file. Includes bibliographical references (p. 142-149).
70 |
Advancing accelerometry-based physical activity monitors: quantifying measurement error and improving energy expenditure prediction / Rothney, Megan Pearl. January 1900 (has links)
Thesis (Ph. D. in Biomedical Engineering)--Vanderbilt University, May 2007. / Title from title screen. Includes bibliographical references.