211.
Function approximation in high-dimensional spaces using lower-dimensional Gaussian RBF networks. January 1992.
by Jones Chui. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1992. / Includes bibliographical references (leaves 62-[66]). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Fundamentals of Artificial Neural Networks --- p.2 / Chapter 1.1.1 --- Processing Unit --- p.2 / Chapter 1.1.2 --- Topology --- p.3 / Chapter 1.1.3 --- Learning Rules --- p.4 / Chapter 1.2 --- Overview of Various Neural Network Models --- p.6 / Chapter 1.3 --- Introduction to the Radial Basis Function Networks (RBFs) --- p.8 / Chapter 1.3.1 --- Historical Development --- p.9 / Chapter 1.3.2 --- Some Intrinsic Problems --- p.9 / Chapter 1.4 --- Objective of the Thesis --- p.10 / Chapter 2 --- Low-dimensional Gaussian RBF networks (LowD RBFs) --- p.13 / Chapter 2.1 --- Architecture of LowD RBF Networks --- p.13 / Chapter 2.1.1 --- Network Structure --- p.13 / Chapter 2.1.2 --- Learning Rules --- p.17 / Chapter 2.2 --- Construction of LowD RBF Networks --- p.19 / Chapter 2.2.1 --- Growing Heuristic --- p.19 / Chapter 2.2.2 --- Pruning Heuristic --- p.27 / Chapter 2.2.3 --- Summary --- p.31 / Chapter 3 --- Application examples --- p.34 / Chapter 3.1 --- Chaotic Time Series Prediction --- p.35 / Chapter 3.1.1 --- Performance Comparison --- p.39 / Chapter 3.1.2 --- Sensitivity Analysis of MSE THRESHOLDS --- p.41 / Chapter 3.1.3 --- Effects of Increased Embedding Dimension --- p.41 / Chapter 3.1.4 --- Comparison with Tree-Structured Network --- p.46 / Chapter 3.1.5 --- Overfitting Problem --- p.46 / Chapter 3.2 --- Nonlinear prediction of speech signal --- p.49 / Chapter 3.2.1 --- Comparison with Linear Predictive Coding (LPC) --- p.54 / Chapter 3.2.2 --- Performance Test in Noisy Conditions --- p.55 / Chapter 3.2.3 --- Iterated Prediction of Speech --- p.59 / Chapter 4 --- Conclusion --- p.60 / Chapter 4.1 --- Discussions --- p.60 / Chapter 4.2 --- Limitations and Suggestions for Further Research --- p.61 / Bibliography --- p.62
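
The thesis above constructs Gaussian RBF approximators with growing and pruning heuristics (Chapter 2.2). For orientation only, here is a minimal Gaussian RBF regression sketch in Python/NumPy: centers picked at random from the training inputs, a single shared width, and output weights fitted by linear least squares. The center selection, width value, and toy target function are assumptions for illustration, not the thesis's construction method.

```python
# Minimal Gaussian RBF network regression sketch (not the thesis's
# growing/pruning construction): fixed random centers, one shared width,
# and a linear least-squares solve for the output weights.
import numpy as np

def rbf_design_matrix(x, centers, width):
    # Gaussian basis phi_j(x) = exp(-||x - c_j||^2 / (2 * width^2))
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(x, y, n_centers=20, width=0.3):
    # Pick centers as a random subset of the training inputs (an assumption).
    idx = np.random.choice(len(x), n_centers, replace=False)
    centers = x[idx]
    phi = rbf_design_matrix(x, centers, width)
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return centers, width, w

def predict_rbf(x, centers, width, w):
    return rbf_design_matrix(x, centers, width) @ w

# Toy 1-D example: approximate sin(3x) on [0, 2].
x = np.random.rand(200, 1) * 2.0
y = np.sin(3.0 * x[:, 0])
centers, width, w = fit_rbf(x, y)
print("train MSE:", np.mean((predict_rbf(x, centers, width, w) - y) ** 2))
```
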
212.
An integration of hidden Markov model and neural network for phoneme recognition. January 1993.
by Patrick Shu Pui Ko. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves 77-78). / Chapter 1. --- Introduction --- p.1 / Chapter 1.1 --- Introduction to Speech Recognition --- p.1 / Chapter 1.2 --- Classifications and Constraints of Speech Recognition Systems --- p.1 / Chapter 1.2.1 --- Isolated Subword Unit Recognition --- p.1 / Chapter 1.2.2 --- Isolated Word Recognition --- p.2 / Chapter 1.2.3 --- Continuous Speech Recognition --- p.2 / Chapter 1.3 --- Objective of the Thesis --- p.3 / Chapter 1.3.1 --- What is the Problem --- p.3 / Chapter 1.3.2 --- How the Problem is Approached --- p.3 / Chapter 1.3.3 --- The Organization of this Thesis --- p.3 / Chapter 2. --- Literature Review --- p.5 / Chapter 2.1 --- Approaches to the Problem of Speech Recognition --- p.5 / Chapter 2.1.1 --- Template-Based Approaches --- p.6 / Chapter 2.1.2 --- Knowledge-Based Approaches --- p.9 / Chapter 2.1.3 --- Stochastic Approaches --- p.10 / Chapter 2.1.4 --- Connectionist Approaches --- p.14 / Chapter 3. --- Discrimination Issues of HMM --- p.16 / Chapter 3.1 --- Maximum Likelihood Estimation (MLE) --- p.16 / Chapter 3.2 --- Maximum Mutual Information (MMI) --- p.17 / Chapter 4. --- Neural Networks --- p.19 / Chapter 4.1 --- History --- p.19 / Chapter 4.2 --- Basic Concepts --- p.20 / Chapter 4.3 --- Learning --- p.21 / Chapter 4.3.1 --- Supervised Training --- p.21 / Chapter 4.3.2 --- Reinforcement Training --- p.22 / Chapter 4.3.3 --- Self-Organization --- p.22 / Chapter 4.4 --- Error Back-propagation --- p.22 / Chapter 5. --- Proposal of a Discriminative Neural Network Layer --- p.25 / Chapter 5.1 --- Rationale --- p.25 / Chapter 5.2 --- HMM Parameters --- p.27 / Chapter 5.3 --- Neural Network Layer --- p.28 / Chapter 5.4 --- Decision Rules --- p.29 / Chapter 6. --- Data Preparation --- p.31 / Chapter 6.1 --- TIMIT --- p.31 / Chapter 6.2 --- Feature Extraction --- p.34 / Chapter 6.3 --- Training --- p.43 / Chapter 7. --- Experiments and Results --- p.52 / Chapter 7.1 --- Experiments --- p.52 / Chapter 7.2 --- Experiment I --- p.52 / Chapter 7.3 --- Experiment II --- p.55 / Chapter 7.4 --- Experiment III --- p.57 / Chapter 7.5 --- Experiment IV --- p.58 / Chapter 7.6 --- Experiment V --- p.60 / Chapter 7.7 --- Computational Issues --- p.62 / Chapter 7.8 --- Limitations --- p.63 / Chapter 8. --- Conclusion --- p.64 / Chapter 9. --- Future Directions --- p.67 / Appendix / Chapter A. --- Linear Predictive Coding --- p.69 / Chapter B. --- Implementation of a Vector Quantizer --- p.70 / Chapter C. --- Implementation of HMM --- p.73 / Chapter C.1 --- Calculations Underflow --- p.73 / Chapter C.2 --- Zero-lising Effect --- p.75 / Chapter C.3 --- Training With Multiple Observation Sequences --- p.76 / References --- p.77
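
Appendix C.1 of this thesis deals with likelihood underflow when evaluating HMMs over long observation sequences. A standard remedy is the scaled forward algorithm, sketched below for a discrete-observation HMM; this is a generic textbook version, not the thesis's implementation, and the toy two-state model is an assumption.

```python
# Scaled forward algorithm for a discrete-observation HMM: normalize the
# forward variables at each step and accumulate log scale factors, so the
# log-likelihood is computed without underflow.
import numpy as np

def forward_log_likelihood(A, B, pi, obs):
    """A: (N,N) transitions, B: (N,M) emissions, pi: (N,) initial, obs: symbol ids."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    alpha /= c
    log_like = np.log(c)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict then weight by emission
        c = alpha.sum()                 # scale factor for this step
        alpha /= c
        log_like += np.log(c)
    return log_like

# Toy two-state model with two observation symbols.
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.6, 0.4])
print(forward_log_likelihood(A, B, pi, [0, 1, 1, 0]))
```
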
213.
Continuous speech phoneme recognition using neural networks and grammar correction. January 1995.
by Wai-Tat Fu. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 104-[109]). / Chapter 1 --- INTRODUCTION --- p.1 / Chapter 1.1 --- Problem of Speech Recognition --- p.1 / Chapter 1.2 --- Why continuous speech recognition? --- p.5 / Chapter 1.3 --- Current status of continuous speech recognition --- p.6 / Chapter 1.4 --- Research Goal --- p.10 / Chapter 1.5 --- Thesis outline --- p.10 / Chapter 2 --- Current Approaches to Continuous Speech Recognition --- p.12 / Chapter 2.1 --- BASIC STEPS FOR CONTINUOUS SPEECH RECOGNITION --- p.12 / Chapter 2.2 --- THE HIDDEN MARKOV MODEL APPROACH --- p.16 / Chapter 2.2.1 --- Introduction --- p.16 / Chapter 2.2.2 --- Segmentation and Pattern Matching --- p.18 / Chapter 2.2.3 --- Word Formation and Syntactic Processing --- p.22 / Chapter 2.2.4 --- Discussion --- p.23 / Chapter 2.3 --- NEURAL NETWORK APPROACH --- p.24 / Chapter 2.3.1 --- Introduction --- p.24 / Chapter 2.3.2 --- Segmentation and Pattern Matching --- p.25 / Chapter 2.3.3 --- Discussion --- p.27 / Chapter 2.4 --- MLP/HMM HYBRID APPROACH --- p.28 / Chapter 2.4.1 --- Introduction --- p.28 / Chapter 2.4.2 --- Architecture of Hybrid MLP/HMM Systems --- p.29 / Chapter 2.4.3 --- Discussions --- p.30 / Chapter 2.5 --- SYNTACTIC GRAMMAR --- p.30 / Chapter 2.5.1 --- Introduction --- p.30 / Chapter 2.5.2 --- Word formation and Syntactic Processing --- p.31 / Chapter 2.5.3 --- Discussion --- p.32 / Chapter 2.6 --- SUMMARY --- p.32 / Chapter 3 --- Neural Network As Pattern Classifier --- p.34 / Chapter 3.1 --- INTRODUCTION --- p.34 / Chapter 3.2 --- TRAINING ALGORITHMS AND TOPOLOGIES --- p.35 / Chapter 3.2.1 --- Multilayer Perceptrons --- p.35 / Chapter 3.2.2 --- Recurrent Neural Networks --- p.39 / Chapter 3.2.3 --- Self-organizing Maps --- p.41 / Chapter 3.2.4 --- Learning Vector Quantization --- p.43 / Chapter 3.3 --- EXPERIMENTS --- p.44 / Chapter 3.3.1 --- The Data Set --- p.44 / Chapter 3.3.2 --- Preprocessing of the Speech Data --- p.45 / Chapter 3.3.3 --- The Pattern Classifiers --- p.50 / Chapter 3.4 --- RESULTS AND DISCUSSIONS --- p.53 / Chapter 4 --- High Level Context Information --- p.56 / Chapter 4.1 --- INTRODUCTION --- p.56 / Chapter 4.2 --- HIDDEN MARKOV MODEL APPROACH --- p.57 / Chapter 4.3 --- THE DYNAMIC PROGRAMMING APPROACH --- p.59 / Chapter 4.4 --- THE SYNTACTIC GRAMMAR APPROACH --- p.60 / Chapter 5 --- Finite State Grammar Network --- p.62 / Chapter 5.1 --- INTRODUCTION --- p.62 / Chapter 5.2 --- THE GRAMMAR COMPILATION --- p.63 / Chapter 5.2.1 --- Introduction --- p.63 / Chapter 5.2.2 --- K-Tails Clustering Method --- p.66 / Chapter 5.2.3 --- Inference of finite state grammar --- p.67 / Chapter 5.2.4 --- Error Correcting Parsing --- p.69 / Chapter 5.3 --- EXPERIMENT --- p.71 / Chapter 5.4 --- RESULTS AND DISCUSSIONS --- p.73 / Chapter 6 --- The Integrated System --- p.81 / Chapter 6.1 --- INTRODUCTION --- p.81 / Chapter 6.2 --- POSTPROCESSING OF NEURAL NETWORK OUTPUT --- p.82 / Chapter 6.2.1 --- Activation Threshold --- p.82 / Chapter 6.2.2 --- Duration Threshold --- p.85 / Chapter 6.2.3 --- Merging of Phoneme boundaries --- p.88 / Chapter 6.3 --- THE ERROR CORRECTING PARSER --- p.90 / Chapter 6.4 --- RESULTS AND DISCUSSIONS --- p.96 / Chapter 7 --- Conclusions --- p.101 / Bibliography --- p.105
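
Chapter 5 of this thesis combines a finite state grammar inferred by k-tails clustering with error-correcting parsing of the recognizer's phoneme string. The toy sketch below shows only the error-correcting idea, under the simplifying assumption that the grammar's legal phoneme sequences can be enumerated; a real error-correcting parser works on the grammar network itself rather than on an enumerated list.

```python
# Toy error-correcting "parse": among a small set of sequences accepted by the
# grammar, return the one with the smallest edit distance to the recognized
# phoneme string. Illustration only; not the thesis's parser.
def edit_distance(a, b):
    # Classic dynamic programming over insertions, deletions, substitutions.
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)] for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a)][len(b)]

def correct(recognized, legal_sequences):
    return min(legal_sequences, key=lambda s: edit_distance(recognized, s))

# Hypothetical legal phoneme strings and a noisy recognition result.
legal = [["sil", "f", "ay", "v", "sil"], ["sil", "n", "ay", "n", "sil"]]
print(correct(["sil", "f", "ah", "v", "sil"], legal))
```
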
214.
Applying Levenberg-Marquardt algorithm with block-diagonal Hessian approximation to recurrent neural network training. January 1999.
by Chi-cheong Szeto. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (leaves 162-165). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgment --- p.ii / Table of Contents --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Time series prediction --- p.1 / Chapter 1.2 --- Forecasting models --- p.1 / Chapter 1.2.1 --- Networks using time delays --- p.2 / Chapter 1.2.1.1 --- Model description --- p.2 / Chapter 1.2.1.2 --- Limitation --- p.3 / Chapter 1.2.2 --- Networks using context units --- p.3 / Chapter 1.2.2.1 --- Model description --- p.3 / Chapter 1.2.2.2 --- Limitation --- p.6 / Chapter 1.2.3 --- Layered fully recurrent networks --- p.6 / Chapter 1.2.3.1 --- Model description --- p.6 / Chapter 1.2.3.2 --- Our selection and motivation --- p.8 / Chapter 1.2.4 --- Other models --- p.8 / Chapter 1.3 --- Learning methods --- p.8 / Chapter 1.3.1 --- First order and second order methods --- p.9 / Chapter 1.3.2 --- Nonlinear least squares methods --- p.11 / Chapter 1.3.2.1 --- Levenberg-Marquardt method - our selection and motivation --- p.13 / Chapter 1.3.2.2 --- Levenberg-Marquardt method - algorithm --- p.13 / Chapter 1.3.3 --- Batch mode, semi-sequential mode and sequential mode of updating --- p.15 / Chapter 1.4 --- Jacobian matrix calculations in recurrent networks --- p.15 / Chapter 1.4.1 --- RTBPTT-like Jacobian matrix calculation --- p.15 / Chapter 1.4.2 --- RTRL-like Jacobian matrix calculation --- p.17 / Chapter 1.4.3 --- Comparison between RTBPTT-like and RTRL-like calculations --- p.18 / Chapter 1.5 --- Computation complexity reduction techniques in recurrent networks --- p.19 / Chapter 1.5.1 --- Architectural approach --- p.19 / Chapter 1.5.1.1 --- Recurrent connection reduction method --- p.20 / Chapter 1.5.1.2 --- Treating the feedback signals as additional inputs method --- p.20 / Chapter 1.5.1.3 --- Growing network method --- p.21 / Chapter 1.5.2 --- Algorithmic approach --- p.21 / Chapter 1.5.2.1 --- History cutoff method --- p.21 / Chapter 1.5.2.2 --- Changing the updating frequency from sequential mode to semi-sequential mode method --- p.22 / Chapter 1.6 --- Motivation for using block-diagonal Hessian matrix --- p.22 / Chapter 1.7 --- Objective --- p.23 / Chapter 1.8 --- Organization of the thesis --- p.24 / Chapter 2 --- Learning with the block-diagonal Hessian matrix --- p.25 / Chapter 2.1 --- Introduction --- p.25 / Chapter 2.2 --- General form and factors of block-diagonal Hessian matrices --- p.25 / Chapter 2.2.1 --- General form of block-diagonal Hessian matrices --- p.25 / Chapter 2.2.2 --- Factors of block-diagonal Hessian matrices --- p.27 / Chapter 2.3 --- Four particular block-diagonal Hessian matrices --- p.28 / Chapter 2.3.1 --- Correlation block-diagonal Hessian matrix --- p.29 / Chapter 2.3.2 --- One-unit block-diagonal Hessian matrix --- p.35 / Chapter 2.3.3 --- Sub-network block-diagonal Hessian matrix --- p.35 / Chapter 2.3.4 --- Layer block-diagonal Hessian matrix --- p.36 / Chapter 2.4 --- Updating methods --- p.40 / Chapter 3 --- Data set and setup of experiments --- p.41 / Chapter 3.1 --- Introduction --- p.41 / Chapter 3.2 --- Data set --- p.41 / Chapter 3.2.1 --- Single sine --- p.41 / Chapter 3.2.2 --- Composite sine --- p.42 / Chapter 3.2.3 --- Sunspot --- p.43 / Chapter 3.3 --- Choices of recurrent neural network parameters and initialization methods --- p.44 / Chapter 3.3.1 --- Choices of numbers of input, hidden and output units --- p.45 / Chapter 3.3.2 --- Initial hidden states --- p.45 / Chapter 3.3.3 --- Weight initialization method --- p.45 / Chapter 3.4 --- Method of dealing with over-fitting --- p.47 / Chapter 4 --- Updating methods --- p.48 / Chapter 4.1 --- Introduction --- p.48 / Chapter 4.2 --- Asynchronous updating method --- p.49 / Chapter 4.2.1 --- Algorithm --- p.49 / Chapter 4.2.2 --- Method of study --- p.50 / Chapter 4.2.3 --- Performance --- p.51 / Chapter 4.2.4 --- Investigation on poor generalization --- p.52 / Chapter 4.2.4.1 --- Hidden states --- p.52 / Chapter 4.2.4.2 --- Incoming weight magnitudes of the hidden units --- p.54 / Chapter 4.2.4.3 --- Weight change against time --- p.56 / Chapter 4.3 --- Asynchronous updating with constraint method --- p.68 / Chapter 4.3.1 --- Algorithm --- p.68 / Chapter 4.3.2 --- Method of study --- p.69 / Chapter 4.3.3 --- Performance --- p.70 / Chapter 4.3.3.1 --- Generalization performance --- p.70 / Chapter 4.3.3.2 --- Training time performance --- p.71 / Chapter 4.3.4 --- Hidden states and incoming weight magnitudes of the hidden units --- p.73 / Chapter 4.3.4.1 --- Hidden states --- p.73 / Chapter 4.3.4.2 --- Incoming weight magnitudes of the hidden units --- p.73 / Chapter 4.4 --- Synchronous updating methods --- p.84 / Chapter 4.4.1 --- Single λ and multiple λ's synchronous updating methods --- p.84 / Chapter 4.4.1.1 --- Algorithm of single λ synchronous updating method --- p.84 / Chapter 4.4.1.2 --- Algorithm of multiple λ's synchronous updating method --- p.85 / Chapter 4.4.1.3 --- Method of study --- p.87 / Chapter 4.4.1.4 --- Performance --- p.87 / Chapter 4.4.1.5 --- Investigation on long training time: analysis of λ --- p.89 / Chapter 4.4.2 --- Multiple λ's with line search synchronous updating method --- p.97 / Chapter 4.4.2.1 --- Algorithm --- p.97 / Chapter 4.4.2.2 --- Performance --- p.98 / Chapter 4.4.2.3 --- Comparison of λ --- p.100 / Chapter 4.5 --- Comparison between asynchronous and synchronous updating methods --- p.101 / Chapter 4.5.1 --- Final training time --- p.101 / Chapter 4.5.2 --- Computation load per complete weight update --- p.102 / Chapter 4.5.3 --- Convergence speed --- p.103 / Chapter 4.6 --- Comparison between our proposed methods and the gradient descent method with adaptive learning rate and momentum --- p.111 / Chapter 5 --- Number and sizes of the blocks --- p.113 / Chapter 5.1 --- Introduction --- p.113 / Chapter 5.2 --- Performance --- p.113 / Chapter 5.2.1 --- Method of study --- p.113 / Chapter 5.2.2 --- Trend of performance --- p.115 / Chapter 5.2.2.1 --- Asynchronous updating method --- p.115 / Chapter 5.2.2.2 --- Synchronous updating method --- p.116 / Chapter 5.3 --- Computation load per complete weight update --- p.116 / Chapter 5.4 --- Convergence speed --- p.117 / Chapter 5.4.1 --- Trend of inverse of convergence speed --- p.117 / Chapter 5.4.2 --- Factors affecting the convergence speed --- p.117 / Chapter 6 --- Weight-grouping methods --- p.125 / Chapter 6.1 --- Introduction --- p.125 / Chapter 6.2 --- Training time and generalization performance of different weight-grouping methods --- p.125 / Chapter 6.2.1 --- Method of study --- p.125 / Chapter 6.2.2 --- Performance --- p.126 / Chapter 6.3 --- Degree of approximation of block-diagonal Hessian matrix with different weight-grouping methods --- p.128 / Chapter 6.3.1 --- Method of study --- p.128 / Chapter 6.3.2 --- Performance --- p.128 / Chapter 7 --- Discussion --- p.150 / Chapter 7.1 --- Advantages and disadvantages of using block-diagonal Hessian matrix --- p.150 / Chapter 7.1.1 --- Advantages --- p.150 / Chapter 7.1.2 --- Disadvantages --- p.151 / Chapter 7.2 --- Analysis of computation complexity --- p.151 / Chapter 7.2.1 --- Trend of computation complexity of each calculation --- p.154 / Chapter 7.2.2 --- Batch mode of updating --- p.155 / Chapter 7.2.3 --- Sequential mode of updating --- p.155 / Chapter 7.3 --- Analysis of storage complexity --- p.156 / Chapter 7.3.1 --- Trend of storage complexity of each set of variables --- p.157 / Chapter 7.3.2 --- Trend of overall storage complexity --- p.157 / Chapter 7.4 --- Parallel implementation --- p.158 / Chapter 7.5 --- Alternative implementation of weight change constraint --- p.158 / Chapter 8 --- Conclusions --- p.160 / References --- p.162
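
Chapters 2 and 4 of this thesis study Levenberg-Marquardt updates in which the Gauss-Newton Hessian approximation J^T J is replaced by its diagonal blocks, one per weight group, so each group's update reduces to a small linear solve. The sketch below shows one such damped block-wise step; the weight grouping, damping value, and random Jacobian are placeholders for illustration, not the thesis's asynchronous or synchronous updating schemes.

```python
# One Levenberg-Marquardt step with a block-diagonal Gauss-Newton Hessian
# approximation: keep only J_g^T J_g for each weight group g and solve the
# groups independently.
import numpy as np

def lm_block_step(J, r, groups, lam):
    """J: (n_residuals, n_weights) Jacobian, r: residual vector,
    groups: list of index arrays, lam: LM damping parameter."""
    delta = np.zeros(J.shape[1])
    g_full = J.T @ r                      # full gradient J^T r
    for idx in groups:
        Jg = J[:, idx]
        Hg = Jg.T @ Jg                    # diagonal block of the GN Hessian
        Hg += lam * np.eye(len(idx))      # LM damping on the block
        delta[idx] = np.linalg.solve(Hg, g_full[idx])
    return -delta                         # proposed weight change

# Toy usage: 50 residuals, 6 weights split into two blocks of 3.
J = np.random.randn(50, 6)
r = np.random.randn(50)
print(lm_block_step(J, r, [np.arange(0, 3), np.arange(3, 6)], lam=0.1))
```
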
215.
ForeNet: Fourier recurrent neural networks for time series prediction. January 2001.
Ying-Qian Zhang. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 115-124). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.1 / Chapter 1.2 --- Objective --- p.2 / Chapter 1.3 --- Contributions --- p.3 / Chapter 1.4 --- Thesis Overview --- p.4 / Chapter 2 --- Literature Review --- p.6 / Chapter 2.1 --- Takens' Theorem --- p.6 / Chapter 2.2 --- Linear Models for Prediction --- p.7 / Chapter 2.2.1 --- Autoregressive Model --- p.7 / Chapter 2.2.2 --- Moving Average Model --- p.8 / Chapter 2.2.3 --- Autoregressive-moving Average Model --- p.9 / Chapter 2.2.4 --- Fitting a Linear Model to a Given Time Series --- p.9 / Chapter 2.2.5 --- State-space Reconstruction --- p.10 / Chapter 2.3 --- Neural Network Models for Time Series Processing --- p.11 / Chapter 2.3.1 --- Feed-forward Neural Networks --- p.11 / Chapter 2.3.2 --- Recurrent Neural Networks --- p.14 / Chapter 2.3.3 --- Training Algorithms for Recurrent Networks --- p.18 / Chapter 2.4 --- Combining Neural Networks and other approximation techniques --- p.22 / Chapter 3 --- ForeNet: Model and Representation --- p.24 / Chapter 3.1 --- Fourier Recursive Prediction Equation --- p.24 / Chapter 3.1.1 --- Fourier Analysis of Time Series --- p.25 / Chapter 3.1.2 --- Recursive Form --- p.25 / Chapter 3.2 --- Fourier Recurrent Neural Network Model (ForeNet) --- p.27 / Chapter 3.2.1 --- Neural Networks Representation --- p.28 / Chapter 3.2.2 --- Architecture of ForeNet --- p.29 / Chapter 4 --- ForeNet: Implementation --- p.32 / Chapter 4.1 --- Improvement on ForeNet --- p.33 / Chapter 4.1.1 --- Number of Hidden Neurons --- p.33 / Chapter 4.1.2 --- Real-valued Outputs --- p.34 / Chapter 4.2 --- Parameters Initialization --- p.37 / Chapter 4.3 --- Application of ForeNet: the Process of Time Series Prediction --- p.38 / Chapter 4.4 --- Some Implications --- p.39 / Chapter 5 --- ForeNet: Initialization --- p.40 / Chapter 5.1 --- Unfolded Form of ForeNet --- p.40 / Chapter 5.2 --- Coefficients Analysis --- p.43 / Chapter 5.2.1 --- Analysis of the Coefficients Set, vn --- p.43 / Chapter 5.2.2 --- Analysis of the Coefficients Set, μn(d) --- p.44 / Chapter 5.3 --- Experiments of ForeNet Initialization --- p.47 / Chapter 5.3.1 --- Objective and Experiment Setting --- p.47 / Chapter 5.3.2 --- Prediction of Sunspot Series --- p.49 / Chapter 5.3.3 --- Prediction of Mackey-Glass Series --- p.53 / Chapter 5.3.4 --- Prediction of Laser Data --- p.56 / Chapter 5.3.5 --- Three More Series --- p.59 / Chapter 5.4 --- Some Implications on the Proposed Initialization Method --- p.63 / Chapter 6 --- ForeNet: Learning Algorithms --- p.67 / Chapter 6.1 --- Complex Real Time Recurrent Learning (CRTRL) --- p.68 / Chapter 6.2 --- Batch-mode Learning --- p.70 / Chapter 6.3 --- Time Complexity --- p.71 / Chapter 6.4 --- Property Analysis and Experimental Results --- p.72 / Chapter 6.4.1 --- Efficient initialization: compared with random initialization --- p.74 / Chapter 6.4.2 --- Complex-valued network: compared with real-valued network --- p.78 / Chapter 6.4.3 --- Simple architecture: compared with ring-structure RNN --- p.79 / Chapter 6.4.4 --- Linear model: compared with nonlinear ForeNet --- p.80 / Chapter 6.4.5 --- Small number of hidden units --- p.88 / Chapter 6.5 --- Comparison with Some Other Models --- p.89 / Chapter 6.5.1 --- Comparison with AR model --- p.91 / Chapter 6.5.2 --- Comparison with TDNN Networks and FIR Networks --- p.93 / Chapter 6.5.3 --- Comparison to a few more results --- p.94 / Chapter 6.6 --- Summarization --- p.95 / Chapter 7 --- Learning and Prediction: On-Line Training --- p.98 / Chapter 7.1 --- On-Line Learning Algorithm --- p.98 / Chapter 7.1.1 --- Advantages and Disadvantages --- p.98 / Chapter 7.1.2 --- Training Process --- p.99 / Chapter 7.2 --- Experiments --- p.101 / Chapter 7.3 --- Predicting Stock Time Series --- p.105 / Chapter 8 --- Discussions and Conclusions --- p.109 / Chapter 8.1 --- Limitations of ForeNet --- p.109 / Chapter 8.2 --- Advantages of ForeNet --- p.111 / Chapter 8.3 --- Future Works --- p.112 / Bibliography --- p.115
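
ForeNet (Chapters 3–4 above) derives a recursive Fourier prediction equation and represents it as a complex-valued recurrent network with Fourier-motivated initialization. The sketch below only captures that flavor: hidden complex states rotated by fixed factors exp(j·2πk/N) and driven by the incoming sample (a sliding-DFT-like recursion), with a linear readout fitted by least squares to predict the next value. The window length, readout, and training are assumed simplifications, not ForeNet's actual initialization or CRTRL learning.

```python
# Fourier-flavored recurrent predictor: fixed complex rotations as recurrent
# weights, driven by the new sample, plus a least-squares linear readout.
import numpy as np

def run_fourier_states(x, N):
    k = np.arange(N)
    rot = np.exp(2j * np.pi * k / N)         # fixed complex recurrent weights
    h = np.zeros(N, dtype=complex)
    states = []
    for t in range(len(x)):
        old = x[t - N] if t >= N else 0.0
        h = rot * (h + x[t] - old)            # sliding-DFT-style recursion
        states.append(h.copy())
    return np.array(states)

# Fit a real linear readout mapping the state at time t to x[t+1].
x = np.sin(0.3 * np.arange(400)) + 0.05 * np.random.randn(400)
S = run_fourier_states(x, N=32)
features = np.hstack([S.real, S.imag])[:-1]
targets = x[1:]
w, *_ = np.linalg.lstsq(features, targets, rcond=None)
pred = features @ w
print("one-step MSE:", np.mean((pred - targets) ** 2))
```
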
216.
Exploiting the GPU power for image-based relighting and neural network. January 2006.
Wei Dan. / Thesis submitted in: October 2005. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 93-101). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.1 / Chapter 1.2 --- Our applications --- p.1 / Chapter 1.3 --- Structure of the thesis --- p.2 / Chapter 2 --- The Programmable Graphics Hardware --- p.4 / Chapter 2.1 --- Introduction --- p.4 / Chapter 2.2 --- The evolution of programmable graphics hardware --- p.4 / Chapter 2.3 --- Benefit of GPU --- p.6 / Chapter 2.4 --- Architecture of programmable graphics hardware --- p.9 / Chapter 2.4.1 --- The graphics hardware pipeline --- p.9 / Chapter 2.4.2 --- Programmable graphics hardware --- p.10 / Chapter 2.5 --- Data Mapping in GPU --- p.12 / Chapter 2.6 --- Some limitations of current GPU --- p.13 / Chapter 2.7 --- Application and Related Work --- p.16 / Chapter 3 --- Image-based Relighting on GPU --- p.18 / Chapter 3.1 --- Introduction --- p.18 / Chapter 3.2 --- Image based relighting --- p.20 / Chapter 3.2.1 --- The plenoptic illumination function --- p.20 / Chapter 3.2.2 --- Sampling and Relighting --- p.21 / Chapter 3.3 --- Linear Approximation Function --- p.22 / Chapter 3.3.1 --- Spherical harmonics basis function --- p.22 / Chapter 3.3.2 --- Radial basis function --- p.23 / Chapter 3.4 --- Data Representation --- p.23 / Chapter 3.5 --- Relighting on GPU --- p.24 / Chapter 3.5.1 --- Directional light source relighting --- p.27 / Chapter 3.5.2 --- Point light source relighting --- p.28 / Chapter 3.6 --- Experiment --- p.32 / Chapter 3.6.1 --- Visual Evaluation --- p.32 / Chapter 3.6.2 --- Statistic Evaluation --- p.33 / Chapter 3.7 --- Conclusion --- p.34 / Chapter 4 --- Texture Compression on GPU --- p.40 / Chapter 4.1 --- Introduction --- p.40 / Chapter 4.2 --- The Feature of Texture Compression --- p.41 / Chapter 4.3 --- Implementation --- p.42 / Chapter 4.3.1 --- Encoding --- p.43 / Chapter 4.3.2 --- Decoding --- p.46 / Chapter 4.4 --- The Texture Compression based Relighting on GPU --- p.46 / Chapter 4.5 --- An improvement of the existing compression techniques --- p.48 / Chapter 4.6 --- Experiment Evaluation --- p.50 / Chapter 4.7 --- Conclusion --- p.51 / Chapter 5 --- Environment Relighting on GPU --- p.55 / Chapter 5.1 --- Overview --- p.55 / Chapter 5.2 --- Related Work --- p.56 / Chapter 5.3 --- Linear Approximation Algorithm --- p.58 / Chapter 5.3.1 --- Basic Architecture --- p.58 / Chapter 5.3.2 --- Relighting on SH --- p.60 / Chapter 5.3.3 --- Relighting on RBF --- p.61 / Chapter 5.3.4 --- Sampling the Environment --- p.63 / Chapter 5.4 --- Implementation on GPU --- p.64 / Chapter 5.5 --- Evaluation --- p.66 / Chapter 5.5.1 --- Visual evaluation --- p.66 / Chapter 5.5.2 --- Statistic evaluation --- p.67 / Chapter 5.6 --- Conclusion --- p.69 / Chapter 6 --- Neocognitron on GPU --- p.70 / Chapter 6.1 --- Overview --- p.70 / Chapter 6.2 --- Neocognitron --- p.72 / Chapter 6.3 --- Neocognitron on GPU --- p.75 / Chapter 6.3.1 --- Data Mapping and Connection Texture --- p.76 / Chapter 6.3.2 --- Convolution and Offset Computation --- p.77 / Chapter 6.3.3 --- Recognition Pipeline --- p.80 / Chapter 6.4 --- Experiments and Results --- p.81 / Chapter 6.4.1 --- Performance Evaluation --- p.81 / Chapter 6.4.2 --- Feature Visualization of Intermediate-Layer --- p.84 / Chapter 6.4.3 --- A Real-Time Tracking Test --- p.84 / Chapter 6.5 --- Conclusion --- p.87 / Chapter 7 --- Conclusion --- p.90 / Bibliography --- p.93
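
Chapters 3 and 5 of this thesis relight images by evaluating, per pixel, a linear combination of basis functions of the light direction (spherical harmonics or radial basis functions) in GPU fragment programs. The CPU sketch below shows the same relighting arithmetic with a low-order spherical-harmonic-like basis; the basis choice, coefficient layout, and toy data are assumptions for illustration, not the thesis's GPU implementation.

```python
# Linear-basis image relighting on the CPU: each pixel stores coefficients
# over a few basis functions of the light direction, and relighting is a
# per-pixel dot product with the basis evaluated at the new direction.
import numpy as np

def basis(light_dir):
    x, y, z = light_dir / np.linalg.norm(light_dir)
    # First four real spherical-harmonic-like terms (constant + linear).
    return np.array([0.2821, 0.4886 * y, 0.4886 * z, 0.4886 * x])

def relight(coeffs, light_dir):
    """coeffs: (H, W, 4) per-pixel basis coefficients -> (H, W) relit image."""
    return coeffs @ basis(light_dir)

# Toy coefficient volume and a new light direction.
coeffs = np.random.rand(4, 4, 4)
print(relight(coeffs, np.array([0.0, 0.5, 1.0])))
```
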
217.
Dynamical analysis of complex-valued recurrent neural networks with time-delays. / CUHK electronic theses & dissertations collection. January 2013.
Hu, Jin. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 140-153). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts also in Chinese.
218.
Stability analysis and control applications of recurrent neural networks. / CUHK electronic theses & dissertations collection. January 2001.
Hu San-qing. / "December 2001." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (p. 181-192). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
219.
Analysis and design of recurrent neural networks and their applications to control and robotic systems. / CUHK electronic theses & dissertations collection / Digital dissertation consortium. January 2002.
Zhang Yu-nong. / "November 2002." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (p. 161-176). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
220.
Design methodology and stability analysis of recurrent neural networks for constrained optimization. / CUHK electronic theses & dissertations collection. January 2000.
Xia You-sheng. / "June 2000." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (p. 152-165). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.