211 |
Applying Levenberg-Marquardt algorithm with block-diagonal Hessian approximation to recurrent neural network training. January 1999
by Chi-cheong Szeto. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (leaves 162-165). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgment --- p.ii / Table of Contents --- p.iii / Chapter Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Time series prediction --- p.1 / Chapter 1.2 --- Forecasting models --- p.1 / Chapter 1.2.1 --- Networks using time delays --- p.2 / Chapter 1.2.1.1 --- Model description --- p.2 / Chapter 1.2.1.2 --- Limitation --- p.3 / Chapter 1.2.2 --- Networks using context units --- p.3 / Chapter 1.2.2.1 --- Model description --- p.3 / Chapter 1.2.2.2 --- Limitation --- p.6 / Chapter 1.2.3 --- Layered fully recurrent networks --- p.6 / Chapter 1.2.3.1 --- Model description --- p.6 / Chapter 1.2.3.2 --- Our selection and motivation --- p.8 / Chapter 1.2.4 --- Other models --- p.8 / Chapter 1.3 --- Learning methods --- p.8 / Chapter 1.3.1 --- First order and second order methods --- p.9 / Chapter 1.3.2 --- Nonlinear least squares methods --- p.11 / Chapter 1.3.2.1 --- Levenberg-Marquardt method - our selection and motivation --- p.13 / Chapter 1.3.2.2 --- Levenberg-Marquardt method - algorithm --- p.13 / Chapter 1.3.3 --- "Batch mode, semi-sequential mode and sequential mode of updating" --- p.15 / Chapter 1.4 --- Jacobian matrix calculations in recurrent networks --- p.15 / Chapter 1.4.1 --- RTBPTT-like Jacobian matrix calculation --- p.15 / Chapter 1.4.2 --- RTRL-like Jacobian matrix calculation --- p.17 / Chapter 1.4.3 --- Comparison between RTBPTT-like and RTRL-like calculations --- p.18 / Chapter 1.5 --- Computation complexity reduction techniques in recurrent networks --- p.19 / Chapter 1.5.1 --- Architectural approach --- p.19 / Chapter 1.5.1.1 --- Recurrent connection reduction method --- p.20 / Chapter 1.5.1.2 --- Treating the feedback signals as additional inputs method --- p.20 / Chapter 1.5.1.3 --- Growing network method --- p.21 / Chapter 1.5.2 --- Algorithmic approach --- p.21 / Chapter 1.5.2.1 --- History cutoff method --- p.21 / Chapter 1.5.2.2 --- Changing the updating frequency from sequential mode to semi-sequential mode method --- p.22 / Chapter 1.6 --- Motivation for using block-diagonal Hessian matrix --- p.22 / Chapter 1.7 --- Objective --- p.23 / Chapter 1.8 --- Organization of the thesis --- p.24 / Chapter Chapter 2 --- Learning with the block-diagonal Hessian matrix --- p.25 / Chapter 2.1 --- Introduction --- p.25 / Chapter 2.2 --- General form and factors of block-diagonal Hessian matrices --- p.25 / Chapter 2.2.1 --- General form of block-diagonal Hessian matrices --- p.25 / Chapter 2.2.2 --- Factors of block-diagonal Hessian matrices --- p.27 / Chapter 2.3 --- Four particular block-diagonal Hessian matrices --- p.28 / Chapter 2.3.1 --- Correlation block-diagonal Hessian matrix --- p.29 / Chapter 2.3.2 --- One-unit block-diagonal Hessian matrix --- p.35 / Chapter 2.3.3 --- Sub-network block-diagonal Hessian matrix --- p.35 / Chapter 2.3.4 --- Layer block-diagonal Hessian matrix --- p.36 / Chapter 2.4 --- Updating methods --- p.40 / Chapter Chapter 3 --- Data set and setup of experiments --- p.41 / Chapter 3.1 --- Introduction --- p.41 / Chapter 3.2 --- Data set --- p.41 / Chapter 3.2.1 --- Single sine --- p.41 / Chapter 3.2.2 --- Composite sine --- p.42 / Chapter 3.2.3 --- Sunspot --- p.43 / Chapter 3.3 --- Choices of recurrent neural network parameters and initialization methods --- p.44 / Chapter 3.3.1 --- "Choices of numbers of input, hidden and output units" --- p.45 / Chapter 3.3.2 --- Initial hidden states --- p.45 / Chapter 3.3.3 --- Weight initialization method --- p.45 / Chapter 3.4 --- Method of dealing with over-fitting --- p.47 /
Chapter Chapter 4 --- Updating methods --- p.48 / Chapter 4.1 --- Introduction --- p.48 / Chapter 4.2 --- Asynchronous updating method --- p.49 / Chapter 4.2.1 --- Algorithm --- p.49 / Chapter 4.2.2 --- Method of study --- p.50 / Chapter 4.2.3 --- Performance --- p.51 / Chapter 4.2.4 --- Investigation on poor generalization --- p.52 / Chapter 4.2.4.1 --- Hidden states --- p.52 / Chapter 4.2.4.2 --- Incoming weight magnitudes of the hidden units --- p.54 / Chapter 4.2.4.3 --- Weight change against time --- p.56 / Chapter 4.3 --- Asynchronous updating with constraint method --- p.68 / Chapter 4.3.1 --- Algorithm --- p.68 / Chapter 4.3.2 --- Method of study --- p.69 / Chapter 4.3.3 --- Performance --- p.70 / Chapter 4.3.3.1 --- Generalization performance --- p.70 / Chapter 4.3.3.2 --- Training time performance --- p.71 / Chapter 4.3.4 --- Hidden states and incoming weight magnitudes of the hidden units --- p.73 / Chapter 4.3.4.1 --- Hidden states --- p.73 / Chapter 4.3.4.2 --- Incoming weight magnitudes of the hidden units --- p.73 / Chapter 4.4 --- Synchronous updating methods --- p.84 / Chapter 4.4.1 --- Single λ and multiple λ's synchronous updating methods --- p.84 / Chapter 4.4.1.1 --- Algorithm of single λ synchronous updating method --- p.84 / Chapter 4.4.1.2 --- Algorithm of multiple λ's synchronous updating method --- p.85 / Chapter 4.4.1.3 --- Method of study --- p.87 / Chapter 4.4.1.4 --- Performance --- p.87 / Chapter 4.4.1.5 --- Investigation on long training time: analysis of λ --- p.89 / Chapter 4.4.2 --- Multiple λ's with line search synchronous updating method --- p.97 / Chapter 4.4.2.1 --- Algorithm --- p.97 / Chapter 4.4.2.2 --- Performance --- p.98 / Chapter 4.4.2.3 --- Comparison of λ --- p.100 / Chapter 4.5 --- Comparison between asynchronous and synchronous updating methods --- p.101 / Chapter 4.5.1 --- Final training time --- p.101 / Chapter 4.5.2 --- Computation load per complete weight update --- p.102 / Chapter 4.5.3 --- Convergence speed --- p.103 / Chapter 4.6 --- Comparison between our proposed methods and the gradient descent method with adaptive learning rate and momentum --- p.111 / Chapter Chapter 5 --- Number and sizes of the blocks --- p.113 / Chapter 5.1 --- Introduction --- p.113 / Chapter 5.2 --- Performance --- p.113 / Chapter 5.2.1 --- Method of study --- p.113 / Chapter 5.2.2 --- Trend of performance --- p.115 / Chapter 5.2.2.1 --- Asynchronous updating method --- p.115 / Chapter 5.2.2.2 --- Synchronous updating method --- p.116 / Chapter 5.3 --- Computation load per complete weight update --- p.116 / Chapter 5.4 --- Convergence speed --- p.117 / Chapter 5.4.1 --- Trend of inverse of convergence speed --- p.117 / Chapter 5.4.2 --- Factors affecting the convergence speed --- p.117 /
Chapter Chapter 6 --- Weight-grouping methods --- p.125 / Chapter 6.1 --- Introduction --- p.125 / Chapter 6.2 --- Training time and generalization performance of different weight-grouping methods --- p.125 / Chapter 6.2.1 --- Method of study --- p.125 / Chapter 6.2.2 --- Performance --- p.126 / Chapter 6.3 --- Degree of approximation of block-diagonal Hessian matrix with different weight-grouping methods --- p.128 / Chapter 6.3.1 --- Method of study --- p.128 / Chapter 6.3.2 --- Performance --- p.128 / Chapter Chapter 7 --- Discussion --- p.150 / Chapter 7.1 --- Advantages and disadvantages of using block-diagonal Hessian matrix --- p.150 / Chapter 7.1.1 --- Advantages --- p.150 / Chapter 7.1.2 --- Disadvantages --- p.151 / Chapter 7.2 --- Analysis of computation complexity --- p.151 / Chapter 7.2.1 --- Trend of computation complexity of each calculation --- p.154 / Chapter 7.2.2 --- Batch mode of updating --- p.155 / Chapter 7.2.3 --- Sequential mode of updating --- p.155 / Chapter 7.3 --- Analysis of storage complexity --- p.156 / Chapter 7.3.1 --- Trend of storage complexity of each set of variables --- p.157 / Chapter 7.3.2 --- Trend of overall storage complexity --- p.157 / Chapter 7.4 --- Parallel implementation --- p.158 / Chapter 7.5 --- Alternative implementation of weight change constraint --- p.158 / Chapter Chapter 8 --- Conclusions --- p.160 / References --- p.162
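
The thesis above concerns Levenberg-Marquardt training in which the Gauss-Newton approximation of the Hessian, J^T J, is kept only in block-diagonal form, so each group of weights is updated by solving a small linear system instead of one large one. The NumPy sketch below shows one such update step under that idea; the block partition, damping value, and all names are illustrative assumptions and are not taken from the thesis.

    import numpy as np

    def lm_step_block_diagonal(J, e, blocks, lam):
        """One Levenberg-Marquardt step using only the block-diagonal part of J^T J.
        J: Jacobian of the residuals w.r.t. all weights, shape (n_samples, n_weights).
        e: residual vector, shape (n_samples,).
        blocks: list of index arrays, one per weight group (e.g. one per hidden unit).
        lam: damping factor; a large lam approaches a scaled gradient step."""
        g = J.T @ e                          # gradient of 0.5 * ||e||^2
        delta_w = np.zeros(J.shape[1])
        for idx in blocks:
            Jb = J[:, idx]                   # Jacobian columns of this block only
            Hb = Jb.T @ Jb                   # one diagonal block of the Gauss-Newton Hessian
            delta_w[idx] = -np.linalg.solve(Hb + lam * np.eye(len(idx)), g[idx])
        return delta_w

    # Toy usage: 12 weights split into 3 blocks of 4 (e.g. grouped by hidden unit).
    rng = np.random.default_rng(0)
    J = rng.standard_normal((50, 12))
    e = rng.standard_normal(50)
    blocks = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
    print(lm_step_block_diagonal(J, e, blocks, lam=0.1))

Solving per block costs on the order of the sum of the cubed block sizes rather than the cube of the total number of weights, which is the computational motivation the table of contents points to.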
212 |
ForeNet: Fourier recurrent neural networks for time series prediction. January 2001
Ying-Qian Zhang. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 115-124). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.1 / Chapter 1.2 --- Objective --- p.2 / Chapter 1.3 --- Contributions --- p.3 / Chapter 1.4 --- Thesis Overview --- p.4 / Chapter 2 --- Literature Review --- p.6 / Chapter 2.1 --- Takens' Theorem --- p.6 / Chapter 2.2 --- Linear Models for Prediction --- p.7 / Chapter 2.2.1 --- Autoregressive Model --- p.7 / Chapter 2.2.2 --- Moving Average Model --- p.8 / Chapter 2.2.3 --- Autoregressive-moving Average Model --- p.9 / Chapter 2.2.4 --- Fitting a Linear Model to a Given Time Series --- p.9 / Chapter 2.2.5 --- State-space Reconstruction --- p.10 / Chapter 2.3 --- Neural Network Models for Time Series Processing --- p.11 / Chapter 2.3.1 --- Feed-forward Neural Networks --- p.11 / Chapter 2.3.2 --- Recurrent Neural Networks --- p.14 / Chapter 2.3.3 --- Training Algorithms for Recurrent Networks --- p.18 / Chapter 2.4 --- Combining Neural Networks and other approximation techniques --- p.22 / Chapter 3 --- ForeNet: Model and Representation --- p.24 / Chapter 3.1 --- Fourier Recursive Prediction Equation --- p.24 / Chapter 3.1.1 --- Fourier Analysis of Time Series --- p.25 / Chapter 3.1.2 --- Recursive Form --- p.25 / Chapter 3.2 --- Fourier Recurrent Neural Network Model (ForeNet) --- p.27 / Chapter 3.2.1 --- Neural Networks Representation --- p.28 / Chapter 3.2.2 --- Architecture of ForeNet --- p.29 / Chapter 4 --- ForeNet: Implementation --- p.32 / Chapter 4.1 --- Improvement on ForeNet --- p.33 / Chapter 4.1.1 --- Number of Hidden Neurons --- p.33 / Chapter 4.1.2 --- Real-valued Outputs --- p.34 / Chapter 4.2 --- Parameters Initialization --- p.37 / Chapter 4.3 --- Application of ForeNet: the Process of Time Series Prediction --- p.38 / Chapter 4.4 --- Some Implications --- p.39 / Chapter 5 --- ForeNet: Initialization --- p.40 / Chapter 5.1 --- Unfolded Form of ForeNet --- p.40 / Chapter 5.2 --- Coefficients Analysis --- p.43 / Chapter 5.2.1 --- "Analysis of the Coefficients Set, vn" --- p.43 / Chapter 5.2.2 --- "Analysis of the Coefficients Set, μn(d)" --- p.44 / Chapter 5.3 --- Experiments of ForeNet Initialization --- p.47 / Chapter 5.3.1 --- Objective and Experiment Setting --- p.47 / Chapter 5.3.2 --- Prediction of Sunspot Series --- p.49 / Chapter 5.3.3 --- Prediction of Mackey-Glass Series --- p.53 / Chapter 5.3.4 --- Prediction of Laser Data --- p.56 / Chapter 5.3.5 --- Three More Series --- p.59 / Chapter 5.4 --- Some Implications on the Proposed Initialization Method --- p.63 /
Chapter 6 --- ForeNet: Learning Algorithms --- p.67 / Chapter 6.1 --- Complex Real Time Recurrent Learning (CRTRL) --- p.68 / Chapter 6.2 --- Batch-mode Learning --- p.70 / Chapter 6.3 --- Time Complexity --- p.71 / Chapter 6.4 --- Property Analysis and Experimental Results --- p.72 / Chapter 6.4.1 --- Efficient initialization: compared with random initialization --- p.74 / Chapter 6.4.2 --- Complex-valued network: compared with real-valued network --- p.78 / Chapter 6.4.3 --- Simple architecture: compared with ring-structure RNN --- p.79 / Chapter 6.4.4 --- Linear model: compared with nonlinear ForeNet --- p.80 / Chapter 6.4.5 --- Small number of hidden units --- p.88 / Chapter 6.5 --- Comparison with Some Other Models --- p.89 / Chapter 6.5.1 --- Comparison with AR model --- p.91 / Chapter 6.5.2 --- Comparison with TDNN Networks and FIR Networks --- p.93 / Chapter 6.5.3 --- Comparison to a few more results --- p.94 / Chapter 6.6 --- Summarization --- p.95 / Chapter 7 --- Learning and Prediction: On-Line Training --- p.98 / Chapter 7.1 --- On-Line Learning Algorithm --- p.98 / Chapter 7.1.1 --- Advantages and Disadvantages --- p.98 / Chapter 7.1.2 --- Training Process --- p.99 / Chapter 7.2 --- Experiments --- p.101 / Chapter 7.3 --- Predicting Stock Time Series --- p.105 / Chapter 8 --- Discussions and Conclusions --- p.109 / Chapter 8.1 --- Limitations of ForeNet --- p.109 / Chapter 8.2 --- Advantages of ForeNet --- p.111 / Chapter 8.3 --- Future Works --- p.112 / Bibliography --- p.115
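
Chapter 3 of the thesis above derives ForeNet from a recursive form of the Fourier coefficients of a time series. One standard recursion of that kind, the sliding DFT, is sketched below purely as an illustration of the idea; the thesis's exact prediction equation, complex weight initialization, and hidden-unit count are not reproduced here, so the window length and function names are assumptions.

    import numpy as np

    def sliding_dft(x, N):
        """Recursively update the N-point DFT of each length-N window of x.
        Each step is one complex multiply-add per frequency bin, which is the kind of
        recursion a recurrent network with complex-valued weights can represent."""
        K = np.arange(N)
        twiddle = np.exp(2j * np.pi * K / N)
        X = np.fft.fft(x[:N])                    # DFT of the first window
        outputs = [X.copy()]
        for t in range(N, len(x)):
            X = twiddle * (X + x[t] - x[t - N])  # add new sample, drop oldest, rotate phase
            outputs.append(X.copy())
        return np.array(outputs)

    x = np.sin(0.3 * np.arange(200))
    X = sliding_dft(x, N=16)
    print(np.allclose(X[-1], np.fft.fft(x[-16:])))   # matches the direct DFT of the last window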
213 |
Exploiting the GPU power for image-based relighting and neural network. January 2006
Wei Dan. / Thesis submitted in: October 2005. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 93-101). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.1 / Chapter 1.2 --- Our applications --- p.1 / Chapter 1.3 --- Structure of the thesis --- p.2 / Chapter 2 --- The Programmable Graphics Hardware --- p.4 / Chapter 2.1 --- Introduction --- p.4 / Chapter 2.2 --- The evolution of programmable graphics hardware --- p.4 / Chapter 2.3 --- Benefit of GPU --- p.6 / Chapter 2.4 --- Architecture of programmable graphics hardware --- p.9 / Chapter 2.4.1 --- The graphics hardware pipeline --- p.9 / Chapter 2.4.2 --- Programmable graphics hardware --- p.10 / Chapter 2.5 --- Data Mapping in GPU --- p.12 / Chapter 2.6 --- Some limitations of current GPU --- p.13 / Chapter 2.7 --- Application and Related Work --- p.16 / Chapter 3 --- Image-based Relighting on GPU --- p.18 / Chapter 3.1 --- Introduction --- p.18 / Chapter 3.2 --- Image based relighting --- p.20 / Chapter 3.2.1 --- The plenoptic illumination function --- p.20 / Chapter 3.2.2 --- Sampling and Relighting --- p.21 / Chapter 3.3 --- Linear Approximation Function --- p.22 / Chapter 3.3.1 --- Spherical harmonics basis function --- p.22 / Chapter 3.3.2 --- Radial basis function --- p.23 / Chapter 3.4 --- Data Representation --- p.23 / Chapter 3.5 --- Relighting on GPU --- p.24 / Chapter 3.5.1 --- Directional light source relighting --- p.27 / Chapter 3.5.2 --- Point light source relighting --- p.28 / Chapter 3.6 --- Experiment --- p.32 / Chapter 3.6.1 --- Visual Evaluation --- p.32 / Chapter 3.6.2 --- Statistic Evaluation --- p.33 / Chapter 3.7 --- Conclusion --- p.34 / Chapter 4 --- Texture Compression on GPU --- p.40 / Chapter 4.1 --- Introduction --- p.40 / Chapter 4.2 --- The Feature of Texture Compression --- p.41 / Chapter 4.3 --- Implementation --- p.42 / Chapter 4.3.1 --- Encoding --- p.43 / Chapter 4.3.2 --- Decoding --- p.46 / Chapter 4.4 --- The Texture Compression based Relighting on GPU --- p.46 / Chapter 4.5 --- An improvement of the existing compression techniques --- p.48 / Chapter 4.6 --- Experiment Evaluation --- p.50 / Chapter 4.7 --- Conclusion --- p.51 / Chapter 5 --- Environment Relighting on GPU --- p.55 / Chapter 5.1 --- Overview --- p.55 / Chapter 5.2 --- Related Work --- p.56 / Chapter 5.3 --- Linear Approximation Algorithm --- p.58 / Chapter 5.3.1 --- Basic Architecture --- p.58 / Chapter 5.3.2 --- Relighting on SH --- p.60 / Chapter 5.3.3 --- Relighting on RBF --- p.61 / Chapter 5.3.4 --- Sampling the Environment --- p.63 / Chapter 5.4 --- Implementation on GPU --- p.64 / Chapter 5.5 --- Evaluation --- p.66 / Chapter 5.5.1 --- Visual evaluation --- p.66 / Chapter 5.5.2 --- Statistic evaluation --- p.67 / Chapter 5.6 --- Conclusion --- p.69 / Chapter 6 --- Neocognitron on GPU --- p.70 / Chapter 6.1 --- Overview --- p.70 / Chapter 6.2 --- Neocognitron --- p.72 / Chapter 6.3 --- Neocognitron on GPU --- p.75 / Chapter 6.3.1 --- Data Mapping and Connection Texture --- p.76 / Chapter 6.3.2 --- Convolution and Offset Computation --- p.77 / Chapter 6.3.3 --- Recognition Pipeline --- p.80 / Chapter 6.4 --- Experiments and Results --- p.81 / Chapter 6.4.1 --- Performance Evaluation --- p.81 / Chapter 6.4.2 --- Feature Visualization of Intermediate-Layer --- p.84 / Chapter 6.4.3 --- A Real-Time Tracking Test --- p.84 / Chapter 6.5 --- Conclusion --- p.87 / Chapter 7 --- Conclusion --- p.90 / Bibliography --- p.93
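
Chapters 3 and 5 of the thesis above treat relighting as a linear approximation over a small basis in the light direction (spherical harmonics or radial basis functions); in such schemes each pixel stores basis coefficients and relighting a novel light reduces to a per-pixel weighted sum. A CPU-side NumPy sketch of that idea with an RBF basis follows; the basis centres, width, and array shapes are assumptions for illustration and this is not the thesis's GPU implementation.

    import numpy as np

    def rbf_basis(light_dir, centers, sigma=0.5):
        """Radial basis function responses for one light direction (unit vector)."""
        d2 = np.sum((centers - light_dir) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def relight(coeff_images, light_dir, centers):
        """coeff_images: (H, W, n_basis) per-pixel coefficients fitted from sampled images.
        Returns the relit (H, W) image for a novel light direction as a weighted sum."""
        return coeff_images @ rbf_basis(light_dir, centers)

    # Toy usage with random coefficients standing in for fitted data.
    rng = np.random.default_rng(1)
    n_basis = 8
    centers = rng.standard_normal((n_basis, 3))
    centers /= np.linalg.norm(centers, axis=1, keepdims=True)
    coeff_images = rng.random((4, 4, n_basis))
    print(relight(coeff_images, np.array([0.0, 0.0, 1.0]), centers).shape)   # (4, 4)

On the GPU, the coefficient images would live in textures and the per-pixel weighted sum would run in a fragment shader, which is the part the thesis moves onto graphics hardware.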
214 |
Dynamical analysis of complex-valued recurrent neural networks with time-delays. / CUHK electronic theses & dissertations collection. January 2013
Hu, Jin. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 140-153). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts also in Chinese.
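
This record carries no table of contents, but the class of systems named in the title is commonly written as a differential equation with a delayed, complex-valued state. The sketch below simulates one generic member of that class by Euler integration, only as an illustration; the network form, activation, weights, and delay are assumptions and are not claimed to match the thesis.

    import numpy as np

    def simulate_delayed_cvrnn(W, u, tau, f, T=20.0, dt=0.01, z0=None):
        """Euler simulation of dz/dt = -z(t) + W f(z(t - tau)) + u with complex state z."""
        steps, delay = int(T / dt), int(tau / dt)
        z = np.zeros((steps + 1, W.shape[0]), dtype=complex)
        if z0 is not None:
            z[0] = z0
        for t in range(steps):
            z_del = z[t - delay] if t >= delay else z[0]   # constant initial history
            z[t + 1] = z[t] + dt * (-z[t] + W @ f(z_del) + u)
        return z

    split_tanh = lambda z: np.tanh(z.real) + 1j * np.tanh(z.imag)   # one common split activation
    W = np.array([[0.1 + 0.2j, -0.3 + 0.0j], [0.2 + 0.0j, 0.1 - 0.1j]])
    u = np.array([0.5 + 0.0j, -0.2 + 0.1j])
    traj = simulate_delayed_cvrnn(W, u, tau=0.5, f=split_tanh, z0=np.array([1.0 + 1.0j, -0.5j]))
    print(traj[-1])   # the state settles for this small-gain example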
215 |
Stability analysis and control applications of recurrent neural networks. / CUHK electronic theses & dissertations collection. January 2001
Hu San-qing. / "December 2001." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (p. 181-192). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
216 |
Analysis and design of recurrent neural networks and their applications to control and robotic systems. / CUHK electronic theses & dissertations collection / Digital dissertation consortium. January 2002
Zhang Yu-nong. / "November 2002." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (p. 161-176). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
217 |
Design methodology and stability analysis of recurrent neural networks for constrained optimization. / CUHK electronic theses & dissertations collection. January 2000
Xia You-sheng. / "June 2000." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (p. 152-165). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
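
The networks this thesis designs belong to a family of recurrent networks whose equilibria solve constrained optimization problems. One widely cited member of that family, a projection network for bound-constrained quadratic programming, is sketched below by Euler integration; it is given only as an example of the idea, and the specific networks and stability conditions developed in the thesis are not reproduced.

    import numpy as np

    def projection_network(Q, c, lo, hi, alpha=0.5, dt=0.01, steps=5000):
        """Euler simulation of dx/dt = P(x - alpha*(Qx + c)) - x, where P clips to [lo, hi].
        Equilibria solve: minimize 0.5 x'Qx + c'x subject to lo <= x <= hi (Q positive definite)."""
        x = np.zeros_like(c, dtype=float)
        for _ in range(steps):
            x = x + dt * (np.clip(x - alpha * (Q @ x + c), lo, hi) - x)
        return x

    Q = np.array([[2.0, 0.5], [0.5, 1.0]])     # positive definite
    c = np.array([-1.0, -2.0])
    lo, hi = np.zeros(2), np.ones(2)
    print(projection_network(Q, c, lo, hi))    # approaches the constrained minimizer, about (0.25, 1.0)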
218 |
Neural network based control for nonlinear systems. / CUHK electronic theses & dissertations collection. January 2001
Wang Dan. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (p. 128-138). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
219 |
Neural network with multiple-valued activation function. / CUHK electronic theses & dissertations collection. January 1996
by Chen, Zhong-Yu. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1996. / Includes bibliographical references (p. 146-[154]). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web.
220 |
Phone-based speech synthesis using neural network with articulatory control. January 1996
by Lo Wai Kit. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1996. / Includes bibliographical references (leaves 151-160). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Applications of Speech Synthesis --- p.2 / Chapter 1.1.1 --- Human Machine Interface --- p.2 / Chapter 1.1.2 --- Speech Aids --- p.3 / Chapter 1.1.3 --- Text-To-Speech (TTS) system --- p.4 / Chapter 1.1.4 --- Speech Dialogue System --- p.4 / Chapter 1.2 --- Current Status in Speech Synthesis --- p.6 / Chapter 1.2.1 --- Concatenation Based --- p.6 / Chapter 1.2.2 --- Parametric Based --- p.7 / Chapter 1.2.3 --- Articulatory Based --- p.7 / Chapter 1.2.4 --- Application of Neural Network in Speech Synthesis --- p.8 / Chapter 1.3 --- The Proposed Neural Network Speech Synthesis --- p.9 / Chapter 1.3.1 --- Motivation --- p.9 / Chapter 1.3.2 --- Objectives --- p.9 / Chapter 1.4 --- Thesis outline --- p.11 / Chapter 2 --- Linguistic Basics for Speech Synthesis --- p.12 / Chapter 2.1 --- Relations between Linguistic and Speech Synthesis --- p.12 / Chapter 2.2 --- Basic Phonology and Phonetics --- p.14 / Chapter 2.2.1 --- Phonology --- p.14 / Chapter 2.2.2 --- Phonetics --- p.15 / Chapter 2.2.3 --- Prosody --- p.16 / Chapter 2.3 --- Transcription Systems --- p.17 / Chapter 2.3.1 --- The Employed Transcription System --- p.18 / Chapter 2.4 --- Cantonese Phonology --- p.20 / Chapter 2.4.1 --- Some Properties of Cantonese --- p.20 / Chapter 2.4.2 --- Initial --- p.21 / Chapter 2.4.3 --- Final --- p.23 / Chapter 2.4.4 --- Lexical Tone --- p.25 / Chapter 2.4.5 --- Variations --- p.26 / Chapter 2.5 --- The Vowel Quadrilaterals --- p.29 / Chapter 3 --- Speech Synthesis Technology --- p.32 / Chapter 3.1 --- The Human Speech Production --- p.32 / Chapter 3.2 --- Important Issues in Speech Synthesis System --- p.34 / Chapter 3.2.1 --- Controllability --- p.34 / Chapter 3.2.2 --- Naturalness --- p.34 / Chapter 3.2.3 --- Complexity --- p.35 / Chapter 3.2.4 --- Information Storage --- p.35 / Chapter 3.3 --- Units for Synthesis --- p.37 / Chapter 3.4 --- Type of Synthesizer --- p.40 / Chapter 3.4.1 --- Copy Concatenation --- p.40 / Chapter 3.4.2 --- Vocoder --- p.41 / Chapter 3.4.3 --- Articulatory Synthesis --- p.44 /
Chapter 4 --- Neural Network Speech Synthesis with Articulatory Control --- p.47 / Chapter 4.1 --- Neural Network Approximation --- p.48 / Chapter 4.1.1 --- The Approximation Problem --- p.48 / Chapter 4.1.2 --- Network Approach for Approximation --- p.49 / Chapter 4.2 --- Artificial Neural Network for Phone-based Speech Synthesis --- p.53 / Chapter 4.2.1 --- Network Approximation for Speech Signal Synthesis --- p.53 / Chapter 4.2.2 --- Feed forward Backpropagation Neural Network --- p.56 / Chapter 4.2.3 --- Radial Basis Function Network --- p.58 / Chapter 4.2.4 --- Parallel Operating Synthesizer Networks --- p.59 / Chapter 4.3 --- Template Storage and Control for the Synthesizer Network --- p.61 / Chapter 4.3.1 --- Implicit Template Storage --- p.61 / Chapter 4.3.2 --- Articulatory Control Parameters --- p.61 / Chapter 4.4 --- Summary --- p.65 / Chapter 5 --- Prototype Implementation of the Synthesizer Network --- p.66 / Chapter 5.1 --- Implementation of the Synthesizer Network --- p.66 / Chapter 5.1.1 --- Network Architectures --- p.68 / Chapter 5.1.2 --- Spectral Templates for Training --- p.74 / Chapter 5.1.3 --- System requirement --- p.76 / Chapter 5.2 --- Subjective Listening Test --- p.79 / Chapter 5.2.1 --- Sample Selection --- p.79 / Chapter 5.2.2 --- Test Procedure --- p.81 / Chapter 5.2.3 --- Result --- p.83 / Chapter 5.2.4 --- Analysis --- p.86 / Chapter 5.3 --- Summary --- p.88 / Chapter 6 --- Simplified Articulatory Control for the Synthesizer Network --- p.89 / Chapter 6.1 --- Coarticulatory Effect in Speech Production --- p.90 / Chapter 6.1.1 --- Acoustic Effect --- p.90 / Chapter 6.1.2 --- Prosodic Effect --- p.91 / Chapter 6.2 --- Control in various Synthesis Techniques --- p.92 / Chapter 6.2.1 --- Copy Concatenation --- p.92 / Chapter 6.2.2 --- Formant Synthesis --- p.93 / Chapter 6.2.3 --- Articulatory synthesis --- p.93 / Chapter 6.3 --- Articulatory Control Model based on Vowel Quad --- p.94 / Chapter 6.3.1 --- Modeling of Variations with the Articulatory Control Model --- p.95 / Chapter 6.4 --- Voice Correspondence --- p.97 / Chapter 6.4.1 --- For Nasal Sounds - Inter-Network Correspondence --- p.98 / Chapter 6.4.2 --- In Flat-Tongue Space - Intra-Network Correspondence --- p.101 / Chapter 6.5 --- Summary --- p.108 /
Chapter 7 --- Pause Duration Properties in Cantonese Phrases --- p.109 / Chapter 7.1 --- The Prosodic Feature - Inter-Syllable Pause --- p.110 / Chapter 7.2 --- Experiment for Measuring Inter-Syllable Pause of Cantonese Phrases --- p.111 / Chapter 7.2.1 --- Speech Material Selection --- p.111 / Chapter 7.2.2 --- Experimental Procedure --- p.112 / Chapter 7.2.3 --- Result --- p.114 / Chapter 7.3 --- Characteristics of Inter-Syllable Pause in Cantonese Phrases --- p.117 / Chapter 7.3.1 --- Pause Duration Characteristics for Initials after Pause --- p.117 / Chapter 7.3.2 --- Pause Duration Characteristic for Finals before Pause --- p.119 / Chapter 7.3.3 --- General Observations --- p.119 / Chapter 7.3.4 --- Other Observations --- p.121 / Chapter 7.4 --- Application of Pause-duration Statistics to the Synthesis System --- p.124 / Chapter 7.5 --- Summary --- p.126 / Chapter 8 --- Conclusion and Further Work --- p.127 / Chapter 8.1 --- Conclusion --- p.127 / Chapter 8.2 --- Further Extension Work --- p.130 / Chapter 8.2.1 --- Regularization Network Optimized on ISD --- p.130 / Chapter 8.2.2 --- Incorporation of Non-Articulatory Parameters to Control Space --- p.130 / Chapter 8.2.3 --- Experiment on Other Prosodic Features --- p.131 / Chapter 8.2.4 --- Application of Voice Correspondence to Cantonese Coda Discrimination --- p.131 / Chapter A --- Cantonese Initials and Finals --- p.132 / Chapter A.1 --- Tables of All Cantonese Initials and Finals --- p.132 / Chapter B --- Using Distortion Measure as Error Function in Neural Network --- p.135 / Chapter B.1 --- Formulation of Itakura-Saito Distortion Measure for Neural Network Error Function --- p.135 / Chapter B.2 --- Formulation of a Modified Itakura-Saito Distortion (MISD) Measure for Neural Network Error Function --- p.137 / Chapter C --- Orthogonal Least Square Algorithm for RBFNet Training --- p.138 / Chapter C.1 --- Orthogonal Least Squares Learning Algorithm for Radial Basis Function Network Training --- p.138 / Chapter D --- Phrase Lists --- p.140 / Chapter D.1 --- Two-Syllable Phrase List for the Pause Duration Experiment --- p.140 / Chapter D.1.1 --- Two-Syllable Phrases (兩字詞) --- p.140 / Chapter D.2 --- Three/Four-Syllable Phrase List for the Pause Duration Experiment --- p.144 / Chapter D.2.1 --- Phrases (片語) --- p.144
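
Appendix B of the thesis above formulates the Itakura-Saito distortion as a neural-network error function. The standard per-bin form of that distortion is sketched below as a reference for what the error measures; the thesis's modified version (MISD) and its gradient derivation are not reproduced, and the toy spectra are assumptions.

    import numpy as np

    def itakura_saito(P_ref, P_syn, eps=1e-12):
        """Itakura-Saito distortion between two power spectra (nonnegative arrays).
        Zero when the spectra match; with the reference in the numerator it penalizes
        underestimated spectral peaks more heavily than overestimated ones."""
        r = (P_ref + eps) / (P_syn + eps)
        return np.mean(r - np.log(r) - 1.0)

    # Toy check on spectra of a sinusoid and a perturbed copy.
    rng = np.random.default_rng(2)
    x = np.sin(0.2 * np.arange(256))
    y = 0.8 * x + 0.05 * rng.standard_normal(256)
    P_x = np.abs(np.fft.rfft(x)) ** 2
    P_y = np.abs(np.fft.rfft(y)) ** 2
    print(itakura_saito(P_x, P_x), itakura_saito(P_x, P_y))   # 0.0 and a positive value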