201 |
Adaptive pattern recognition in a real-world environment. Bairaktaris, Dimitrios. January 1991.
This thesis introduces and explores the notion of a real-world environment with respect to adaptive pattern recognition and neural network systems. It then examines the individual properties of a real-world environment and proposes Continuous Adaptation, Persistence of information and Context-sensitive recognition as the major design criteria that a neural network system in a real-world environment should satisfy. Based on these criteria, it assesses the performance of Hopfield networks and Associative Memory systems and identifies their operational limitations. This leads to the introduction of Randomized Internal Representations, a novel class of neural network systems which stores information in a fully distributed way yet is capable of encoding and utilizing context. The thesis then assesses the performance of Competitive Learning and Adaptive Resonance Theory systems and, having again identified their operational weaknesses, describes the Dynamic Adaptation Scheme, which satisfies all three design criteria for a real-world environment.
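For readers less familiar with the models assessed above: a Hopfield network stores bipolar patterns in a symmetric weight matrix and recalls them by iterated thresholded updates. The NumPy sketch below shows only that standard storage/recall loop; it does not implement the thesis's Randomized Internal Representations or Dynamic Adaptation Scheme, and all parameters are illustrative.

```python
import numpy as np

def hopfield_store(patterns):
    """Hebbian outer-product rule; patterns are +/-1 row vectors."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)           # no self-connections
    return W

def hopfield_recall(W, probe, steps=10):
    """Synchronous sign updates until the state stops changing."""
    x = probe.copy()
    for _ in range(steps):
        nxt = np.sign(W @ x)
        nxt[nxt == 0] = 1
        if np.array_equal(nxt, x):
            break
        x = nxt
    return x

# Store two random bipolar patterns and recall one from a noisy probe.
rng = np.random.default_rng(0)
pats = rng.choice([-1.0, 1.0], size=(2, 64))
W = hopfield_store(pats)
noisy = pats[0].copy()
noisy[:8] *= -1                        # flip a few bits
print(np.mean(hopfield_recall(W, noisy) == pats[0]))
```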
|
202 |
Enhancement of Deep Neural Networks and Their Application to Text Mining. Unknown Date.
Many current application domains of machine learning and artificial intelligence involve knowledge discovery from text, such as sentiment analysis, document ontology, and spam detection. Humans have years of experience and training with language, enabling them to understand complicated, nuanced text passages with relative ease. A text classifier attempts to emulate or replicate this knowledge so that computers can discriminate between concepts encountered in text; however, learning high-level concepts from text, such as those found in many applications of text classification, is a challenging task because of the many difficulties associated with text mining and classification. Recently, classifiers trained using artificial neural networks have been shown to be effective for a variety of text mining tasks. Convolutional neural networks have been trained to classify text from character-level input, automatically learning high-level abstract representations and avoiding the need for human-engineered features.
This dissertation proposes two new techniques for character-level learning: log(m) character embedding and convolutional window classification. Log(m) embedding is a new character-vector representation for text data that is more compact and memory efficient than previous embedding vectors. Convolutional window classification is a technique for classifying long documents, i.e., documents with lengths exceeding the input dimension of the neural network. Additionally, we investigate the performance of convolutional neural networks combined with long short-term memory networks, explore how document length impacts classification performance, and compare the performance of neural networks against non-neural-network-based learners in text classification tasks. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
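The log(m) embedding is described only at a high level in the abstract above; one common way to make character vectors more compact than one-hot encoding is a binary code of length ceil(log2(m)) over an alphabet of m characters. The sketch below assumes that reading and is purely illustrative, not the dissertation's exact scheme; the alphabet, `max_len`, and function names are made up for the example.

```python
import math
import numpy as np

def logm_embed(alphabet):
    """Map each of m characters to a ceil(log2(m+1))-bit binary vector
    instead of an m-dimensional one-hot vector (an assumed reading of
    the 'log(m) embedding' idea, not the dissertation's exact scheme)."""
    m = len(alphabet)
    bits = max(1, math.ceil(math.log2(m + 1)))      # index 0 reserved for padding
    table = {}
    for i, ch in enumerate(alphabet, start=1):
        table[ch] = np.array([(i >> b) & 1 for b in range(bits)], dtype=np.float32)
    return table, bits

def embed_text(text, table, bits, max_len=128):
    """Fixed-length character-level input matrix for a convolutional net."""
    out = np.zeros((max_len, bits), dtype=np.float32)   # zeros = padding/unknown
    for i, ch in enumerate(text[:max_len]):
        if ch in table:
            out[i] = table[ch]
    return out

alphabet = "abcdefghijklmnopqrstuvwxyz0123456789 .,!?"
table, bits = logm_embed(alphabet)
x = embed_text("character-level input", table, bits)
print(x.shape)   # (128, 6) versus (128, 41) for a one-hot representation
```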
|
203 |
Parallel Distributed Deep Learning on Cluster Computers. Unknown Date.
Deep Learning is an increasingly important subdomain of artificial intelligence. Deep Learning architectures, artificial neural networks characterized by having both a large breadth of neurons and a large depth of layers, benefit from training on Big Data. The size and complexity of the model, combined with the size of the training data, make the training procedure very computationally and temporally expensive. Accelerating the training procedure of Deep Learning using cluster computers faces many challenges, ranging from distributed optimizers to the large communication overhead specific to a system with off-the-shelf networking components. In this thesis, we present a novel synchronous data-parallel distributed Deep Learning implementation on HPCC Systems, a cluster computer system. We discuss research that has been conducted on the distribution and parallelization of Deep Learning, as well as the concerns relating to cluster environments. Additionally, we provide case studies that evaluate and validate our implementation. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
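As background for the synchronous data-parallel scheme described above: each worker computes a gradient on its own shard of the data, and the averaged gradient updates a shared model each step. The NumPy sketch below simulates that pattern serially on a toy least-squares problem; it does not use HPCC Systems or any real networking layer, and the shard count and learning rate are arbitrary.

```python
import numpy as np

def local_gradient(w, X, y):
    """Least-squares gradient on one worker's shard: grad of ||Xw - y||^2 / n."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def synchronous_step(w, shards, lr=0.1):
    """Each worker computes a gradient on its shard; the averaged
    gradient is applied to the shared model (simulated serially here)."""
    grads = [local_gradient(w, X, y) for X, y in shards]
    return w - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(1)
true_w = np.array([2.0, -3.0, 0.5])
X = rng.normal(size=(600, 3))
y = X @ true_w
shards = [(X[i::4], y[i::4]) for i in range(4)]   # 4 simulated workers

w = np.zeros(3)
for _ in range(200):
    w = synchronous_step(w, shards)
print(np.round(w, 3))   # approaches true_w
```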
|
204 |
A BCU scalable sensory acquisition system for EEG embedded applications. Unknown Date.
Electroencephalogram (EEG) recording has undergone many changes and modifications since it was first introduced in 1929, driven by new technologies and advances in signal processing. The EEG data acquisition stage is the first and most valuable component in any EEG recording system: it gathers and conditions its input and outputs reliable data that can be effectively analyzed and studied by digital signal processors using sophisticated algorithms, which serve numerous medical and consumer applications. We have designed a low-noise, low-power EEG data acquisition system that can be configured as a standalone mobile EEG data processing unit providing data preprocessing functions; it can also serve as a very reliable high-speed data acquisition interface to an EEG processing unit. / by Sherif S. Fathalla. / Thesis (M.S.C.S.)--Florida Atlantic University, 2010. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2010. Mode of access: World Wide Web.
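On the software side, an acquisition front end like the one described above is typically followed by digital preprocessing such as band-pass filtering of the raw channels. The SciPy sketch below shows such a conditioning step under assumed parameters (256 Hz sampling, a 0.5-40 Hz pass band); it is generic EEG preprocessing, not part of the thesis's hardware design.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(signal, fs, low=0.5, high=40.0, order=4):
    """Zero-phase Butterworth band-pass, a common EEG conditioning step."""
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)

fs = 256.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)  # 10 Hz EEG-like tone + 60 Hz hum
clean = bandpass_eeg(raw, fs)                # hum attenuated, 10 Hz component kept
print(raw.shape, clean.shape)
```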
|
205 |
Improved recurrent neural networks for convex optimization. / CUHK electronic theses & dissertations collection. January 2008.
Constrained optimization problems arise widely in scientific research and engineering applications. In the past two decades, solving optimization problems using recurrent neural network methods has been extensively investigated because of the advantages of massively parallel operation and rapid convergence. In real applications, neural networks with simple architectures and good performance are desired. However, most existing neural networks have limitations and disadvantages in their convergence conditions or architectural complexity. This thesis concentrates on the analysis and design of recurrent neural networks with simplified architectures for solving more general convex optimization problems. It proposes several improved recurrent neural networks for solving smooth and non-smooth convex optimization problems and applies them to selected applications. / In Part I, we first propose a one-layer recurrent neural network for solving linear programming problems. Compared with other neural networks for linear programming, the proposed neural network has a simpler architecture and better convergence properties. Second, a one-layer recurrent neural network is proposed for solving quadratic programming problems. The global convergence of the neural network can be guaranteed if the objective function of the programming problem is convex on the equality constraints, and not necessarily convex everywhere. Compared with other neural networks for quadratic programming, such as the Lagrangian network and the projection neural network, the proposed neural network has a simpler architecture, with a number of neurons equal to the dimension of the optimization problem. Third, combining the projection and penalty parameter methods, a one-layer recurrent neural network is proposed for solving general convex optimization problems with linear constraints. / In Part II, some improved recurrent neural networks are proposed for solving non-smooth convex optimization problems. We first propose a one-layer recurrent neural network for solving non-smooth convex programming problems with only equality constraints. This neural network simplifies the Lagrangian network and extends it to non-smooth convex optimization problems. Then, a two-layer recurrent neural network is proposed for non-smooth convex optimization subject to linear equality and bound constraints. / In Part III, some selected applications of the proposed neural networks are discussed. The k-winners-take-all (kWTA) operation is first converted to equivalent linear and quadratic optimization problems, and two kWTA network models are tailored to perform the kWTA operation. The proposed neural networks are then applied to other problems, such as linear assignment, support vector machine learning and curve fitting. / Liu, Qingshan. / Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3606. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 133-145). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
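Projection-type recurrent neural networks of the kind compared above are usually written as an ordinary differential equation whose equilibria are the optima, and are simulated by numerical integration. The sketch below is a minimal Euler simulation of the classical projection network for a box-constrained quadratic program; it illustrates the general idea only and is not the thesis's one-layer or two-layer models, whose dynamics and constraint handling differ.

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection onto the box constraint set [lo, hi]."""
    return np.clip(x, lo, hi)

def projection_network_qp(Q, c, lo, hi, alpha=0.1, dt=0.05, steps=4000):
    """Euler simulation of the projection neural network
    dx/dt = -x + P(x - alpha*(Q x + c)), whose equilibria solve
    min 0.5 x'Qx + c'x subject to lo <= x <= hi (Q symmetric PSD)."""
    x = np.zeros_like(c)
    for _ in range(steps):
        x = x + dt * (-x + project_box(x - alpha * (Q @ x + c), lo, hi))
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
lo, hi = np.zeros(2), np.ones(2)
print(np.round(projection_network_qp(Q, c, lo, hi), 4))   # approx. [0.0909, 0.6364]
```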
|
206 |
Associative neural networks: properties, learning, and applications. January 1994.
by Chi-sing Leung. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1994. / Includes bibliographical references (leaves 236-244). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background of Associative Neural Networks --- p.1 / Chapter 1.2 --- A Distributed Encoding Model: Bidirectional Associative Memory --- p.3 / Chapter 1.3 --- A Direct Encoding Model: Kohonen Map --- p.6 / Chapter 1.4 --- Scope and Organization --- p.9 / Chapter 1.5 --- Summary of Publications --- p.13 / Chapter I --- Bidirectional Associative Memory: Statistical Properties and Learning --- p.17 / Chapter 2 --- Introduction to Bidirectional Associative Memory --- p.18 / Chapter 2.1 --- Bidirectional Associative Memory and its Encoding Method --- p.18 / Chapter 2.2 --- Recall Process of BAM --- p.20 / Chapter 2.3 --- Stability of BAM --- p.22 / Chapter 2.4 --- Memory Capacity of BAM --- p.24 / Chapter 2.5 --- Error Correction Capability of BAM --- p.28 / Chapter 2.6 --- Chapter Summary --- p.29 / Chapter 3 --- Memory Capacity and Statistical Dynamics of First Order BAM --- p.31 / Chapter 3.1 --- Introduction --- p.31 / Chapter 3.2 --- Existence of Energy Barrier --- p.34 / Chapter 3.3 --- Memory Capacity from Energy Barrier --- p.44 / Chapter 3.4 --- Confidence Dynamics --- p.49 / Chapter 3.5 --- Numerical Results from the Dynamics --- p.63 / Chapter 3.6 --- Chapter Summary --- p.68 / Chapter 4 --- Stability and Statistical Dynamics of Second order BAM --- p.70 / Chapter 4.1 --- Introduction --- p.70 / Chapter 4.2 --- Second order BAM and its Stability --- p.71 / Chapter 4.3 --- Confidence Dynamics of Second Order BAM --- p.75 / Chapter 4.4 --- Numerical Results --- p.82 / Chapter 4.5 --- Extension to higher order BAM --- p.90 / Chapter 4.6 --- Verification of the conditions of Newman's Lemma --- p.94 / Chapter 4.7 --- Chapter Summary --- p.95 / Chapter 5 --- Enhancement of BAM --- p.97 / Chapter 5.1 --- Background --- p.97 / Chapter 5.2 --- Review on Modifications of BAM --- p.101 / Chapter 5.2.1 --- Change of the encoding method --- p.101 / Chapter 5.2.2 --- Change of the topology --- p.105 / Chapter 5.3 --- Householder Encoding Algorithm --- p.107 / Chapter 5.3.1 --- Construction from Householder Transforms --- p.107 / Chapter 5.3.2 --- Construction from iterative method --- p.109 / Chapter 5.3.3 --- Remarks on HCA --- p.111 / Chapter 5.4 --- Enhanced Householder Encoding Algorithm --- p.112 / Chapter 5.4.1 --- Construction of EHCA --- p.112 / Chapter 5.4.2 --- Remarks on EHCA --- p.114 / Chapter 5.5 --- Bidirectional Learning --- p.115 / Chapter 5.5.1 --- Construction of BL --- p.115 / Chapter 5.5.2 --- The Convergence of BL and the memory capacity of BL --- p.116 / Chapter 5.5.3 --- Remarks on BL --- p.120 / Chapter 5.6 --- Adaptive Ho-Kashyap Bidirectional Learning --- p.121 / Chapter 5.6.1 --- Construction of AHKBL --- p.121 / Chapter 5.6.2 --- Convergent Conditions for AHKBL --- p.124 / Chapter 5.6.3 --- Remarks on AHKBL --- p.125 / Chapter 5.7 --- Computer Simulations --- p.126 / Chapter 5.7.1 --- Memory Capacity --- p.126 / Chapter 5.7.2 --- Error Correction Capability --- p.130 / Chapter 5.7.3 --- Learning Speed --- p.157 / Chapter 5.8 --- Chapter Summary --- p.158 / Chapter 6 --- BAM under Forgetting Learning --- p.160 / Chapter 6.1 --- Introduction --- p.160 / Chapter 6.2 --- Properties of Forgetting Learning --- p.162 / Chapter 6.3 --- Computer Simulations --- p.168 / Chapter 6.4 --- Chapter Summary --- p.168 / Chapter II --- Kohonen Map: Applications in Data compression and Communications --- p.170 / Chapter 7 --- Introduction to Vector Quantization and Kohonen Map --- p.171 / Chapter 7.1 --- Background on Vector quantization --- p.171 / Chapter 7.2 --- Introduction to LBG algorithm --- p.173 / Chapter 7.3 --- Introduction to Kohonen Map --- p.174 / Chapter 7.4 --- Chapter Summary --- p.179 / Chapter 8 --- Applications of Kohonen Map in Data Compression and Communications --- p.181 / Chapter 8.1 --- Use Kohonen Map to design Trellis Coded Vector Quantizer --- p.182 / Chapter 8.1.1 --- Trellis Coded Vector Quantizer --- p.182 / Chapter 8.1.2 --- Trellis Coded Kohonen Map --- p.188 / Chapter 8.1.3 --- Computer Simulations --- p.191 / Chapter 8.2 --- Kohonen Map: Combined Vector Quantization and Modulation --- p.195 / Chapter 8.2.1 --- Impulsive Noise in the received data --- p.195 / Chapter 8.2.2 --- Combined Kohonen Map and Modulation --- p.198 / Chapter 8.2.3 --- Computer Simulations --- p.200 / Chapter 8.3 --- Error Control Scheme for the Transmission of Vector Quantized Data --- p.213 / Chapter 8.3.1 --- Motivation and Background --- p.214 / Chapter 8.3.2 --- Trellis Coded Modulation --- p.216 / Chapter 8.3.3 --- "Combined Vector Quantization, Error Control, and Modulation" --- p.220 / Chapter 8.3.4 --- Computer Simulations --- p.223 / Chapter 8.4 --- Chapter Summary --- p.226 / Chapter 9 --- Conclusion --- p.232 / Bibliography --- p.236
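For context on the chapters above: the standard bidirectional associative memory encodes bipolar pattern pairs with a correlation (outer-product) matrix and recalls by alternating thresholded passes through the matrix and its transpose. The NumPy sketch below shows only that baseline encoding and recall, not the Householder or adaptive Ho-Kashyap learning schemes developed in Chapter 5.

```python
import numpy as np

def bam_encode(X, Y):
    """Standard correlation (outer-product) encoding of bipolar pairs (x_k, y_k)."""
    return sum(np.outer(x, y) for x, y in zip(X, Y))

def bam_recall(W, x, iters=10):
    """Bidirectional recall: alternate y = sgn(W'x) and x = sgn(W y) until stable."""
    sgn = lambda v: np.where(v >= 0, 1, -1)
    y = sgn(W.T @ x)
    for _ in range(iters):
        x_new = sgn(W @ y)
        y_new = sgn(W.T @ x_new)
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break
        x, y = x_new, y_new
    return x, y

rng = np.random.default_rng(2)
X = rng.choice([-1, 1], size=(3, 32))   # three stored x patterns
Y = rng.choice([-1, 1], size=(3, 16))   # their associated y patterns
W = bam_encode(X, Y)
x_rec, y_rec = bam_recall(W, X[0])
print(np.mean(y_rec == Y[0]))           # fraction of correctly recalled bits
```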
|
207 |
Soft self-organizing map. January 1995.
by John Pui-fai Sum. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 99-104). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Motivation --- p.1 / Chapter 1.2 --- Idea of SSOM --- p.3 / Chapter 1.3 --- Other Approaches --- p.3 / Chapter 1.4 --- Contribution of the Thesis --- p.4 / Chapter 1.5 --- Outline of Thesis --- p.5 / Chapter 2 --- Self-Organizing Map --- p.7 / Chapter 2.1 --- Introduction --- p.7 / Chapter 2.2 --- Algorithm of SOM --- p.8 / Chapter 2.3 --- Illustrative Example --- p.10 / Chapter 2.4 --- Property of SOM --- p.14 / Chapter 2.4.1 --- Convergence property --- p.14 / Chapter 2.4.2 --- Topological Order --- p.15 / Chapter 2.4.3 --- Objective Function of SOM --- p.15 / Chapter 2.5 --- Conclusion --- p.17 / Chapter 3 --- Algorithms for Soft Self-Organizing Map --- p.18 / Chapter 3.1 --- Competitive Learning and Soft Competitive Learning --- p.19 / Chapter 3.2 --- How does SOM generate ordered map? --- p.21 / Chapter 3.3 --- Algorithms of Soft SOM --- p.23 / Chapter 3.4 --- Simulation Results --- p.25 / Chapter 3.4.1 --- One dimensional map under uniform distribution --- p.25 / Chapter 3.4.2 --- One dimensional map under Gaussian distribution --- p.27 / Chapter 3.4.3 --- Two dimensional map in a unit square --- p.28 / Chapter 3.5 --- Conclusion --- p.30 / Chapter 4 --- Application to Uncover Vowel Relationship --- p.31 / Chapter 4.1 --- Experiment Set Up --- p.32 / Chapter 4.1.1 --- Network structure --- p.32 / Chapter 4.1.2 --- Training procedure --- p.32 / Chapter 4.1.3 --- Relationship Construction Scheme --- p.34 / Chapter 4.2 --- Results --- p.34 / Chapter 4.2.1 --- Hidden-unit labeling for SSOM2 --- p.34 / Chapter 4.2.2 --- Hidden-unit labeling for SOM --- p.35 / Chapter 4.3 --- Conclusion --- p.37 / Chapter 5 --- Application to vowel data transmission --- p.42 / Chapter 5.1 --- Introduction --- p.42 / Chapter 5.2 --- Simulation --- p.45 / Chapter 5.2.1 --- Setup --- p.45 / Chapter 5.2.2 --- Noise model and demodulation scheme --- p.46 / Chapter 5.2.3 --- Performance index --- p.46 / Chapter 5.2.4 --- Control experiment: random coding scheme --- p.46 / Chapter 5.3 --- Results --- p.47 / Chapter 5.3.1 --- Null channel noise (σ = 0) --- p.47 / Chapter 5.3.2 --- Small channel noise (0 ≤ σ ≤ 1) --- p.49 / Chapter 5.3.3 --- Large channel noise (1 ≤ σ ≤ 7) --- p.49 / Chapter 5.3.4 --- Very large channel noise (σ > 7) --- p.49 / Chapter 5.4 --- Conclusion --- p.50 / Chapter 6 --- Convergence Analysis --- p.53 / Chapter 6.1 --- Kushner and Clark Lemma --- p.53 / Chapter 6.2 --- Condition for the Convergence of Jou's Algorithm --- p.54 / Chapter 6.3 --- Alternative Proof on the Convergence of Competitive Learning --- p.56 / Chapter 6.4 --- Convergence of Soft SOM --- p.58 / Chapter 6.5 --- Convergence of SOM --- p.60 / Chapter 7 --- Conclusion --- p.61 / Chapter 7.1 --- Limitations of SSOM --- p.62 / Chapter 7.2 --- Further Research --- p.63 / Chapter A --- Proof of Corollary 1 --- p.65 / Chapter A.1 --- Mean Average Update --- p.66 / Chapter A.2 --- Case 1: Uniform Distribution --- p.68 / Chapter A.3 --- Case 2: Logconcave Distribution --- p.70 / Chapter A.4 --- Case 3: Loglinear Distribution --- p.72 / Chapter B --- Different Senses of neighborhood --- p.79 / Chapter B.1 --- Static neighborhood: Kohonen's sense --- p.79 / Chapter B.2 --- Dynamic neighborhood --- p.80 / Chapter B.2.1 --- Mou-Yeung Definition --- p.80 / Chapter B.2.2 --- Martinetz et al. Definition --- p.81 / Chapter B.2.3 --- Tsao-Bezdek-Pal Definition --- p.81 / Chapter B.3 --- Example --- p.82 / Chapter B.4 --- Discussion --- p.84 / Chapter C --- Supplementary to Chapter 4 --- p.86 / Chapter D --- Quadrature Amplitude Modulation --- p.92 / Chapter D.1 --- Amplitude Modulation --- p.92 / Chapter D.2 --- QAM --- p.93 / Bibliography --- p.99
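As background for the outline above: the standard Kohonen self-organizing map moves the best-matching unit and its neighbours toward each input, with a shrinking learning rate and neighbourhood width. The sketch below is a generic one-dimensional SOM in NumPy; the soft variant studied in the thesis would replace the hard winner with graded responsibilities, as noted in the comment, and the exact SSOM update rules are not reproduced here.

```python
import numpy as np

def train_som_1d(data, n_units=10, epochs=50, lr0=0.5, sigma0=3.0):
    """One-dimensional Kohonen map: move the winner and its neighbours
    toward each sample, shrinking learning rate and neighbourhood width."""
    rng = np.random.default_rng(0)
    w = rng.uniform(data.min(), data.max(), size=n_units)
    idx = np.arange(n_units)
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        sigma = sigma0 * (1 - e / epochs) + 0.5
        for x in rng.permutation(data):
            winner = np.argmin(np.abs(w - x))
            h = np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))  # neighbourhood weights
            w += lr * h * (x - w)
        # a "soft" variant would replace the hard argmin winner with
        # responsibilities proportional to exp(-|w - x|^2 / T)
    return np.sort(w)

data = np.random.default_rng(1).uniform(0, 1, size=500)
print(np.round(train_som_1d(data), 2))   # ordered codebook spread over [0, 1]
```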
|
208 |
Function approximation in high-dimensional spaces using lower-dimensional Gaussian RBF networks. January 1992.
by Jones Chui. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1992. / Includes bibliographical references (leaves 62-[66]). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Fundamentals of Artificial Neural Networks --- p.2 / Chapter 1.1.1 --- Processing Unit --- p.2 / Chapter 1.1.2 --- Topology --- p.3 / Chapter 1.1.3 --- Learning Rules --- p.4 / Chapter 1.2 --- Overview of Various Neural Network Models --- p.6 / Chapter 1.3 --- Introduction to the Radial Basis Function Networks (RBFs) --- p.8 / Chapter 1.3.1 --- Historical Development --- p.9 / Chapter 1.3.2 --- Some Intrinsic Problems --- p.9 / Chapter 1.4 --- Objective of the Thesis --- p.10 / Chapter 2 --- Low-dimensional Gaussian RBF networks (LowD RBFs) --- p.13 / Chapter 2.1 --- Architecture of LowD RBF Networks --- p.13 / Chapter 2.1.1 --- Network Structure --- p.13 / Chapter 2.1.2 --- Learning Rules --- p.17 / Chapter 2.2 --- Construction of LowD RBF Networks --- p.19 / Chapter 2.2.1 --- Growing Heuristic --- p.19 / Chapter 2.2.2 --- Pruning Heuristic --- p.27 / Chapter 2.2.3 --- Summary --- p.31 / Chapter 3 --- Application examples --- p.34 / Chapter 3.1 --- Chaotic Time Series Prediction --- p.35 / Chapter 3.1.1 --- Performance Comparison --- p.39 / Chapter 3.1.2 --- Sensitivity Analysis of MSE THRESHOLDS --- p.41 / Chapter 3.1.3 --- Effects of Increased Embedding Dimension --- p.41 / Chapter 3.1.4 --- Comparison with Tree-Structured Network --- p.46 / Chapter 3.1.5 --- Overfitting Problem --- p.46 / Chapter 3.2 --- Nonlinear prediction of speech signal --- p.49 / Chapter 3.2.1 --- Comparison with Linear Predictive Coding (LPC) --- p.54 / Chapter 3.2.2 --- Performance Test in Noisy Conditions --- p.55 / Chapter 3.2.3 --- Iterated Prediction of Speech --- p.59 / Chapter 4 --- Conclusion --- p.60 / Chapter 4.1 --- Discussions --- p.60 / Chapter 4.2 --- Limitations and Suggestions for Further Research --- p.61 / Bibliography --- p.62
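A Gaussian radial-basis-function network of the kind outlined above approximates a function as a weighted sum of localized Gaussian units. The sketch below fits a 1-D toy function with fixed, evenly spaced centers and least-squares output weights; the thesis's contribution (growing and pruning lower-dimensional units for high-dimensional inputs) is not implemented here, and the width and unit count are arbitrary.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian activations phi_ij = exp(-||x_i - c_j||^2 / (2*width^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

# Fit y = sin(2*pi*x) with 12 Gaussian units on a 1-D input.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.normal(size=200)

centers = np.linspace(0, 1, 12)[:, None]
Phi = rbf_design(X, centers, width=0.1)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # output-layer weights

X_test = np.linspace(0, 1, 5)[:, None]
pred = rbf_design(X_test, centers, width=0.1) @ w
print(np.round(pred, 2), np.round(np.sin(2 * np.pi * X_test[:, 0]), 2))
```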
|
209 |
An integration of hidden Markov model and neural network for phoneme recognition. January 1993.
by Patrick Shu Pui Ko. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves 77-78). / Chapter 1. --- Introduction --- p.1 / Chapter 1.1 --- Introduction to Speech Recognition --- p.1 / Chapter 1.2 --- Classifications and Constraints of Speech Recognition Systems --- p.1 / Chapter 1.2.1 --- Isolated Subword Unit Recognition --- p.1 / Chapter 1.2.2 --- Isolated Word Recognition --- p.2 / Chapter 1.2.3 --- Continuous Speech Recognition --- p.2 / Chapter 1.3 --- Objective of the Thesis --- p.3 / Chapter 1.3.1 --- What is the Problem --- p.3 / Chapter 1.3.2 --- How the Problem is Approached --- p.3 / Chapter 1.3.3 --- The Organization of this Thesis --- p.3 / Chapter 2. --- Literature Review --- p.5 / Chapter 2.1 --- Approaches to the Problem of Speech Recognition --- p.5 / Chapter 2.1.1 --- Template-Based Approaches --- p.6 / Chapter 2.1.2 --- Knowledge-Based Approaches --- p.9 / Chapter 2.1.3 --- Stochastic Approaches --- p.10 / Chapter 2.1.4 --- Connectionist Approaches --- p.14 / Chapter 3. --- Discrimination Issues of HMM --- p.16 / Chapter 3.1 --- Maximum Likelihood Estimation (MLE) --- p.16 / Chapter 3.2 --- Maximum Mutual Information (MMI) --- p.17 / Chapter 4. --- Neural Networks --- p.19 / Chapter 4.1 --- History --- p.19 / Chapter 4.2 --- Basic Concepts --- p.20 / Chapter 4.3 --- Learning --- p.21 / Chapter 4.3.1 --- Supervised Training --- p.21 / Chapter 4.3.2 --- Reinforcement Training --- p.22 / Chapter 4.3.3 --- Self-Organization --- p.22 / Chapter 4.4 --- Error Back-propagation --- p.22 / Chapter 5. --- Proposal of a Discriminative Neural Network Layer --- p.25 / Chapter 5.1 --- Rationale --- p.25 / Chapter 5.2 --- HMM Parameters --- p.27 / Chapter 5.3 --- Neural Network Layer --- p.28 / Chapter 5.4 --- Decision Rules --- p.29 / Chapter 6. --- Data Preparation --- p.31 / Chapter 6.1 --- TIMIT --- p.31 / Chapter 6.2 --- Feature Extraction --- p.34 / Chapter 6.3 --- Training --- p.43 / Chapter 7. --- Experiments and Results --- p.52 / Chapter 7.1 --- Experiments --- p.52 / Chapter 7.2 --- Experiment I --- p.52 / Chapter 7.3 --- Experiment II --- p.55 / Chapter 7.4 --- Experiment III --- p.57 / Chapter 7.5 --- Experiment IV --- p.58 / Chapter 7.6 --- Experiment V --- p.60 / Chapter 7.7 --- Computational Issues --- p.62 / Chapter 7.8 --- Limitations --- p.63 / Chapter 8. --- Conclusion --- p.64 / Chapter 9. --- Future Directions --- p.67 / Appendix / Chapter A. --- Linear Predictive Coding --- p.69 / Chapter B. --- Implementation of a Vector Quantizer --- p.70 / Chapter C. --- Implementation of HMM --- p.73 / Chapter C.1 --- Calculations Underflow --- p.73 / Chapter C.2 --- Zero-lising Effect --- p.75 / Chapter C.3 --- Training With Multiple Observation Sequences --- p.76 / References --- p.77
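In hybrid HMM/neural-network recognizers of the sort this thesis builds on, a network estimates per-frame state posteriors, which are converted to scaled likelihoods (posterior divided by state prior) and decoded with the Viterbi algorithm. The sketch below shows only that generic decoding step, with synthetic posteriors standing in for a trained network; it is not the discriminative neural network layer proposed in Chapter 5, and the uniform transition matrix is an assumption.

```python
import numpy as np

def viterbi(log_obs, log_trans, log_init):
    """Most likely state path given per-frame log observation scores."""
    T, S = log_obs.shape
    delta = log_init + log_obs[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans            # (from, to)
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_obs[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

rng = np.random.default_rng(0)
n_states, n_frames = 3, 12
posteriors = rng.dirichlet(np.ones(n_states), size=n_frames)   # stand-in for NN outputs
priors = np.full(n_states, 1.0 / n_states)
log_obs = np.log(posteriors / priors)          # scaled likelihoods p(o|s) ~ p(s|o)/p(s)
log_trans = np.log(np.full((n_states, n_states), 1.0 / n_states))
log_init = np.log(priors)
print(viterbi(log_obs, log_trans, log_init))   # decoded state sequence
```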
|
210 |
Continuous speech phoneme recognition using neural networks and grammar correction. January 1995.
by Wai-Tat Fu. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 104-[109]). / Chapter 1 --- INTRODUCTION --- p.1 / Chapter 1.1 --- Problem of Speech Recognition --- p.1 / Chapter 1.2 --- Why continuous speech recognition? --- p.5 / Chapter 1.3 --- Current status of continuous speech recognition --- p.6 / Chapter 1.4 --- Research Goal --- p.10 / Chapter 1.5 --- Thesis outline --- p.10 / Chapter 2 --- Current Approaches to Continuous Speech Recognition --- p.12 / Chapter 2.1 --- BASIC STEPS FOR CONTINUOUS SPEECH RECOGNITION --- p.12 / Chapter 2.2 --- THE HIDDEN MARKOV MODEL APPROACH --- p.16 / Chapter 2.2.1 --- Introduction --- p.16 / Chapter 2.2.2 --- Segmentation and Pattern Matching --- p.18 / Chapter 2.2.3 --- Word Formation and Syntactic Processing --- p.22 / Chapter 2.2.4 --- Discussion --- p.23 / Chapter 2.3 --- NEURAL NETWORK APPROACH --- p.24 / Chapter 2.3.1 --- Introduction --- p.24 / Chapter 2.3.2 --- Segmentation and Pattern Matching --- p.25 / Chapter 2.3.3 --- Discussion --- p.27 / Chapter 2.4 --- MLP/HMM HYBRID APPROACH --- p.28 / Chapter 2.4.1 --- Introduction --- p.28 / Chapter 2.4.2 --- Architecture of Hybrid MLP/HMM Systems --- p.29 / Chapter 2.4.3 --- Discussions --- p.30 / Chapter 2.5 --- SYNTACTIC GRAMMAR --- p.30 / Chapter 2.5.1 --- Introduction --- p.30 / Chapter 2.5.2 --- Word formation and Syntactic Processing --- p.31 / Chapter 2.5.3 --- Discussion --- p.32 / Chapter 2.6 --- SUMMARY --- p.32 / Chapter 3 --- Neural Network As Pattern Classifier --- p.34 / Chapter 3.1 --- INTRODUCTION --- p.34 / Chapter 3.2 --- TRAINING ALGORITHMS AND TOPOLOGIES --- p.35 / Chapter 3.2.1 --- Multilayer Perceptrons --- p.35 / Chapter 3.2.2 --- Recurrent Neural Networks --- p.39 / Chapter 3.2.3 --- Self-organizing Maps --- p.41 / Chapter 3.2.4 --- Learning Vector Quantization --- p.43 / Chapter 3.3 --- EXPERIMENTS --- p.44 / Chapter 3.3.1 --- The Data Set --- p.44 / Chapter 3.3.2 --- Preprocessing of the Speech Data --- p.45 / Chapter 3.3.3 --- The Pattern Classifiers --- p.50 / Chapter 3.4 --- RESULTS AND DISCUSSIONS --- p.53 / Chapter 4 --- High Level Context Information --- p.56 / Chapter 4.1 --- INTRODUCTION --- p.56 / Chapter 4.2 --- HIDDEN MARKOV MODEL APPROACH --- p.57 / Chapter 4.3 --- THE DYNAMIC PROGRAMMING APPROACH --- p.59 / Chapter 4.4 --- THE SYNTACTIC GRAMMAR APPROACH --- p.60 / Chapter 5 --- Finite State Grammar Network --- p.62 / Chapter 5.1 --- INTRODUCTION --- p.62 / Chapter 5.2 --- THE GRAMMAR COMPILATION --- p.63 / Chapter 5.2.1 --- Introduction --- p.63 / Chapter 5.2.2 --- K-Tails Clustering Method --- p.66 / Chapter 5.2.3 --- Inference of finite state grammar --- p.67 / Chapter 5.2.4 --- Error Correcting Parsing --- p.69 / Chapter 5.3 --- EXPERIMENT --- p.71 / Chapter 5.4 --- RESULTS AND DISCUSSIONS --- p.73 / Chapter 6 --- The Integrated System --- p.81 / Chapter 6.1 --- INTRODUCTION --- p.81 / Chapter 6.2 --- POSTPROCESSING OF NEURAL NETWORK OUTPUT --- p.82 / Chapter 6.2.1 --- Activation Threshold --- p.82 / Chapter 6.2.2 --- Duration Threshold --- p.85 / Chapter 6.2.3 --- Merging of Phoneme boundaries --- p.88 / Chapter 6.3 --- THE ERROR CORRECTING PARSER --- p.90 / Chapter 6.4 --- RESULTS AND DISCUSSIONS --- p.96 / Chapter 7 --- Conclusions --- p.101 / Bibliography --- p.105
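Chapter 6 above postprocesses frame-level network outputs with an activation threshold, a duration threshold, and boundary merging before error-correcting parsing. The sketch below illustrates that frame-to-segment step on synthetic scores; the thresholds, labels, and function name are illustrative, and the finite-state grammar correction itself is not shown.

```python
import numpy as np

def frames_to_phonemes(scores, labels, act_thresh=0.5, min_frames=3):
    """Collapse per-frame classifier scores into a phoneme sequence:
    keep the winning label only if it clears the activation threshold,
    merge identical neighbours, and drop segments shorter than min_frames."""
    winners = []
    for frame in scores:
        k = int(np.argmax(frame))
        winners.append(labels[k] if frame[k] >= act_thresh else None)

    segments, prev, run = [], None, 0
    for w in winners + [object()]:          # sentinel flushes the last run
        if w == prev:
            run += 1
        else:
            if prev is not None and run >= min_frames:
                segments.append(prev)
            prev, run = w, 1
    return segments

labels = ["sil", "ae", "t"]
scores = np.vstack([np.tile([0.9, 0.05, 0.05], (5, 1)),   # "sil"
                    np.tile([0.1, 0.8, 0.1], (6, 1)),      # "ae"
                    np.tile([0.2, 0.2, 0.6], (2, 1))])     # "t", too short to keep
print(frames_to_phonemes(scores, labels))   # ['sil', 'ae']
```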
|