521

Fuzzy rule base identification via singular value decomposition. / CUHK electronic theses & dissertations collection / Digital dissertation consortium

January 1999
by Stephen Chi-tin Yang. / "Sept. 28, 1999." / Thesis (Ph.D.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (p. 158-163). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
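
Note: the record above carries only bibliographic detail. As a generic illustration of the technique named in the title (not the algorithm developed in the thesis), the singular values of a rule firing-strength matrix can indicate how many fuzzy rules are actually significant; the matrix shape, energy threshold, and variable names below are illustrative assumptions.

```python
import numpy as np

# Generic sketch: columns of P are candidate fuzzy rules evaluated on N samples.
rng = np.random.default_rng(0)
true_rank = 12
P = rng.standard_normal((200, true_rank)) @ rng.standard_normal((true_rank, 40))

# SVD of the firing-strength matrix.
U, s, Vt = np.linalg.svd(P, full_matrices=False)

# Keep enough singular directions to capture 99% of the energy; that count is
# a rough proxy for the number of significant rules.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99) + 1)
print(f"significant rules (estimated): {r} of {P.shape[1]} candidates")
```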
522

Large vocabulary Cantonese speech recognition using neural networks.

January 1994
Tsik Chung Wai Benjamin. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1994. / Includes bibliographical references (leaves 67-70). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Automatic Speech Recognition --- p.1 / Chapter 1.2 --- Cantonese Speech Recognition --- p.3 / Chapter 1.3 --- Neural Networks --- p.4 / Chapter 1.4 --- About this Thesis --- p.5 / Chapter 2 --- The Phonology of Cantonese --- p.6 / Chapter 2.1 --- The Syllabic Structure of Cantonese Syllable --- p.7 / Chapter 2.2 --- The Tone System of Cantonese --- p.9 / Chapter 3 --- Review of Automatic Speech Recognition Systems --- p.12 / Chapter 3.1 --- Hidden Markov Model Approach --- p.12 / Chapter 3.2 --- Neural Networks Approach --- p.13 / Chapter 3.2.1 --- Multi-Layer Perceptrons (MLP) --- p.13 / Chapter 3.2.2 --- Time-Delay Neural Networks (TDNN) --- p.15 / Chapter 3.2.3 --- Recurrent Neural Networks --- p.17 / Chapter 3.3 --- Integrated Approach --- p.18 / Chapter 3.4 --- Mandarin and Cantonese Speech Recognition Systems --- p.19 / Chapter 4 --- The Speech Corpus and Database --- p.21 / Chapter 4.1 --- Design of the Speech Corpus --- p.21 / Chapter 4.2 --- Speech Database Acquisition --- p.23 / Chapter 5 --- Feature Parameters Extraction --- p.24 / Chapter 5.1 --- Endpoint Detection --- p.25 / Chapter 5.2 --- Speech Processing --- p.26 / Chapter 5.3 --- Speech Segmentation --- p.27 / Chapter 5.4 --- Phoneme Feature Extraction --- p.29 / Chapter 5.5 --- Tone Feature Extraction --- p.30 / Chapter 6 --- The Design of the System --- p.33 / Chapter 6.1 --- Towards Large Vocabulary System --- p.34 / Chapter 6.2 --- Overview of the Isolated Cantonese Syllable Recognition System --- p.36 / Chapter 6.3 --- The Primary Level: Phoneme Classifiers and Tone Classifier --- p.38 / Chapter 6.4 --- The Intermediate Level: Ending Corrector --- p.42 / Chapter 6.5 --- The Secondary Level: Syllable Classifier --- p.43 / Chapter 6.5.1 --- Concatenation with Correction Approach --- p.44 / Chapter 6.5.2 --- Fuzzy ART Approach --- p.45 / Chapter 7 --- Computer Simulation --- p.49 / Chapter 7.1 --- Experimental Conditions --- p.49 / Chapter 7.2 --- Experimental Results of the Primary Level Classifiers --- p.50 / Chapter 7.3 --- Overall Performance of the System --- p.57 / Chapter 7.4 --- Discussions --- p.61 / Chapter 8 --- Further Works --- p.62 / Chapter 8.1 --- Enhancement on Speech Segmentation --- p.62 / Chapter 8.2 --- Towards Speaker-Independent System --- p.63 / Chapter 8.3 --- Towards Speech-to-Text System --- p.64 / Chapter 9 --- Conclusions --- p.65 / Bibliography --- p.67 / Appendix A. Cantonese Syllable Full Set List --- p.71
523

Locally connected recurrent neural networks.

January 1993
by Evan, Fung-yu Young. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves 161-166). / List of Figures --- p.vi / List of Tables --- p.vii / List of Graphs --- p.viii / Abstract --- p.ix / Chapter Part I --- Learning Algorithms / Chapter 1 --- Representing Time in Connectionist Models --- p.1 / Chapter 1.1 --- Introduction --- p.1 / Chapter 1.2 --- Temporal Sequences --- p.2 / Chapter 1.2.1 --- Recognition Tasks --- p.2 / Chapter 1.2.2 --- Reproduction Tasks --- p.3 / Chapter 1.2.3 --- Generation Tasks --- p.4 / Chapter 1.3 --- Discrete Time v.s. Continuous Time --- p.4 / Chapter 1.4 --- Time Delay Neural Network (TDNN) --- p.4 / Chapter 1.4.1 --- Delay Elements in the Connections --- p.5 / Chapter 1.4.2 --- NETtalk: An Application of TDNN --- p.7 / Chapter 1.4.3 --- Drawbacks of TDNN --- p.8 / Chapter 1.5 --- Networks with Context Units --- p.8 / Chapter 1.5.1 --- Jordan's Network --- p.9 / Chapter 1.5.2 --- Elman's Network --- p.10 / Chapter 1.5.3 --- Other Architectures --- p.14 / Chapter 1.5.4 --- Drawbacks of Using Context Units --- p.15 / Chapter 1.6 --- Recurrent Neural Networks --- p.16 / Chapter 1.6.1 --- Hopfield Models --- p.17 / Chapter 1.6.2 --- Fully Recurrent Neural Networks --- p.20 / Chapter A. --- EXAMPLES OF USING RECURRENT NETWORKS --- p.22 / Chapter 1.7 --- Our Objective --- p.25 / Chapter 2 --- Learning Algorithms for Recurrent Neural Networks --- p.27 / Chapter 2.1 --- Introduction --- p.27 / Chapter 2.2 --- Gradient Descent Methods --- p.29 / Chapter 2.2.1 --- Backpropagation Through Time (BPTT) --- p.29 / Chapter 2.2.2 --- Real Time Recurrent Learning Rule (RTRL) --- p.30 / Chapter A. --- RTRL WITH TEACHER FORCING --- p.32 / Chapter B. --- TERMINAL TEACHER FORCING --- p.33 / Chapter C. --- CONTINUOUS TIME RTRL --- p.33 / Chapter 2.2.3 --- Variants of RTRL --- p.34 / Chapter A. --- SUB GROUPED RTRL --- p.34 / Chapter B. 
--- A FIXED SIZE STORAGE 0(n3) TIME COMPLEXITY LEARNGING RULE --- p.35 / Chapter 2.3 --- Non-Gradient Descent Methods --- p.37 / Chapter 2.3.1 --- Neural Bucket Brigade (NBB) --- p.37 / Chapter 2.3.2 --- Temporal Driven Method (TO) --- p.38 / Chapter 2.4 --- Comparison between Different Approaches --- p.39 / Chapter 2.5 --- Conclusion --- p.41 / Chapter 3 --- Locally Connected Recurrent Networks --- p.43 / Chapter 3.1 --- Introduction --- p.43 / Chapter 3.2 --- Locally Connected Recurrent Networks --- p.44 / Chapter 3.2.1 --- Network Topology --- p.44 / Chapter 3.2.2 --- Subgrouping --- p.46 / Chapter 3.2.3 --- Learning Algorithm --- p.47 / Chapter 3.2.4 --- Continuous Time Learning Algorithm --- p.50 / Chapter 3.3 --- Analysis --- p.51 / Chapter 3.3.1 --- Time Complexity --- p.51 / Chapter 3.3.2 --- Space Complexity --- p.51 / Chapter 3.3.3 --- Local Computations in Time and Space --- p.51 / Chapter 3.4 --- Running on Parallel Architectures --- p.52 / Chapter 3.4.1 --- Mapping the Algorithm to Parallel Architectures --- p.52 / Chapter 3.4.2 --- Parallel Learning Algorithm --- p.53 / Chapter 3.4.3 --- Analysis --- p.54 / Chapter 3.5 --- Ring-Structured Recurrent Network (RRN) --- p.55 / Chapter 3.6 --- Comparison between RRN and RTRL in Sequence Recognition --- p.55 / Chapter 3.6.1 --- Training Sets and Testing Sequences --- p.56 / Chapter 3.6.2 --- Comparison in Training Speed --- p.58 / Chapter 3.6.3 --- Comparison in Recalling Power --- p.59 / Chapter 3.7 --- Comparison between RRN and RTRL in Time Series Prediction --- p.59 / Chapter 3.7.1 --- Comparison in Training Speed --- p.62 / Chapter 3.7.2 --- Comparison in Predictive Power --- p.63 / Chapter 3.8 --- Conclusion --- p.65 / Chapter Part II --- Applications / Chapter 4 --- Sequence Recognition by Ring-Structured Recurrent Networks --- p.67 / Chapter 4.1 --- Introduction --- p.67 / Chapter 4.2 --- Related Works --- p.68 / Chapter 4.2.1 --- Feedback Multilayer Perceptron (FMLP) --- p.68 / Chapter 4.2.2 --- Back Propagation Unfolded Recurrent Rule (BURR) --- p.69 / Chapter 4.3 --- Experimental Details --- p.71 / Chapter 4.3.1 --- Network Architecture --- p.71 / Chapter 4.3.2 --- Input/Output Representations --- p.72 / Chapter 4.3.3 --- Training Phase --- p.73 / Chapter 4.3.4 --- Recalling Phase --- p.73 / Chapter 4.4 --- Experimental Results --- p.74 / Chapter 4.4.1 --- Temporal Memorizing Power --- p.74 / Chapter 4.4.2 --- Time Warping Performance --- p.80 / Chapter 4.4.3 --- Fault Tolerance --- p.85 / Chapter 4.4.4 --- Learning Rate --- p.87 / Chapter 4.5 --- Time Delay --- p.88 / Chapter 4.6 --- Conclusion --- p.91 / Chapter 5 --- Time Series Prediction --- p.92 / Chapter 5.1 --- Introduction --- p.92 / Chapter 5.2 --- Modelling in Feedforward Networks --- p.93 / Chapter 5.3 --- Methodology with Recurrent Networks --- p.94 / Chapter 5.3.1 --- Network Structure --- p.94 / Chapter 5.3.2 --- Model Building - Training --- p.95 / Chapter 5.3.3 --- Model Diagnosis - Testing --- p.95 / Chapter 5.4 --- Training Paradigms --- p.96 / Chapter 5.4.1 --- A Quasiperiodic Series with White Noise --- p.96 / Chapter 5.4.2 --- A Chaotic Series --- p.97 / Chapter 5.4.3 --- Sunspots Numbers --- p.98 / Chapter 5.4.4 --- Hang Seng Index --- p.99 / Chapter 5.5 --- Experimental Results and Discussions --- p.99 / Chapter 5.5.1 --- A Quasiperiodic Series with White Noise --- p.101 / Chapter 5.5.2 --- Logistic Map --- p.103 / Chapter 5.5.3 --- Sunspots Numbers --- p.105 / Chapter 5.5.4 --- Hang Seng Index --- p.109 / Chapter 5.6 --- Conclusion --- p.112 / 
Chapter 6 --- Chaos in Recurrent Networks --- p.114 / Chapter 6.1 --- Introduction --- p.114 / Chapter 6.2 --- Important Features of Chaos --- p.115 / Chapter 6.2.1 --- First Return Map --- p.115 / Chapter 6.2.2 --- Long Term Unpredictability --- p.117 / Chapter 6.2.3 --- Sensitivity to Initial Conditions (SIC) --- p.118 / Chapter 6.2.4 --- Strange Attractor --- p.119 / Chapter 6.3 --- Chaotic Behaviour in Recurrent Networks --- p.120 / Chapter 6.3.1 --- Network Structure --- p.121 / Chapter 6.3.2 --- Dynamics in Training --- p.121 / Chapter 6.3.3 --- Dynamics in Testing --- p.122 / Chapter 6.4 --- Experiments and Discussions --- p.123 / Chapter 6.4.1 --- Henon Model --- p.123 / Chapter 6.4.2 --- Lorenz Model --- p.127 / Chapter 6.5 --- Conclusion --- p.134 / Chapter 7 --- Conclusion --- p.135 / Appendix A Series 1 Sine Function with White Noise --- p.137 / Appendix B Series 2 Logistic Map --- p.138 / Appendix C Series 3 Sunspots Numbers from 1700 to 1979 --- p.139 / Appendix D A Quasiperiodic Series with White Noise --- p.141 / Appendix E Hang Seng Daily Closing Index in 1991 --- p.142 / Appendix F Network Model for the Quasiperiodic Series with White Noise --- p.143 / Appendix G Network Model for the Logistic Map --- p.144 / Appendix H Network Model for the Sunspots Numbers --- p.145 / Appendix I Network Model for the Hang Seng Index --- p.146 / Appendix J Henon Model --- p.147 / Appendix K Network Model for the Henon Map --- p.150 / Appendix L Lorenz Model --- p.151 / Appendix M Network Model for the Lorenz Map --- p.159 / Bibliography --- p.161
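
Note: this record repeatedly benchmarks against RTRL. For reference, the standard Williams-Zipser real-time recurrent learning recursion (a textbook statement, not the locally connected variant proposed in the thesis) maintains sensitivities p^k_{ij} = ∂y_k/∂w_{ij}:

\[ p^{k}_{ij}(t+1) = f'\big(s_k(t)\big)\Big[\sum_{l} w_{kl}\,p^{l}_{ij}(t) + \delta_{ik}\,z_j(t)\Big], \qquad p^{k}_{ij}(0)=0, \]
\[ \Delta w_{ij}(t) = \alpha \sum_{k} e_k(t)\,p^{k}_{ij}(t), \]

where z(t) concatenates the external inputs and unit outputs and e_k(t) is the output error. Storing every p^k_{ij} costs O(n^3) memory and O(n^4) operations per time step, which is what motivates the subgrouped and fixed-size-storage variants listed under Chapter 2.2.3 above.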
524

On the training of feedforward neural networks.

January 1993
by Hau-san Wong. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves [178-183]). / Chapter 1 --- INTRODUCTION / Chapter 1.1 --- Learning versus Explicit Programming --- p.1-1 / Chapter 1.2 --- Artificial Neural Networks --- p.1-2 / Chapter 1.3 --- Learning in ANN --- p.1-3 / Chapter 1.4 --- Problems of Learning in BP Networks --- p.1-5 / Chapter 1.5 --- Dynamic Node Architecture for BP Networks --- p.1-7 / Chapter 1.6 --- Incremental Learning --- p.1-10 / Chapter 1.7 --- Research Objective and Thesis Organization --- p.1-11 / Chapter 2 --- THE FEEDFORWARD MULTILAYER NEURAL NETWORK / Chapter 2.1 --- The Perceptron --- p.2-1 / Chapter 2.2 --- The Generalization of the Perceptron --- p.2-4 / Chapter 2.3 --- The Multilayer Feedforward Network --- p.2-5 / Chapter 3 --- SOLUTIONS TO THE BP LEARNING PROBLEM / Chapter 3.1 --- Introduction --- p.3-1 / Chapter 3.2 --- Attempts in the Establishment of a Viable Hidden Representation Model --- p.3-5 / Chapter 3.3 --- Dynamic Node Creation Algorithms --- p.3-9 / Chapter 3.4 --- Concluding Remarks --- p.3-15 / Chapter 4 --- THE GROWTH ALGORITHM FOR NEURAL NETWORKS / Chapter 4.1 --- Introduction --- p.4-2 / Chapter 4.2 --- The Radial Basis Function --- p.4-6 / Chapter 4.3 --- The Additional Input Node and the Modified Nonlinearity --- p.4-9 / Chapter 4.4 --- The Initialization of the New Hidden Node --- p.4-11 / Chapter 4.5 --- Initialization of the First Node --- p.4-15 / Chapter 4.6 --- Practical Considerations for the Growth Algorithm --- p.4-18 / Chapter 4.7 --- The Convergence Proof for the Growth Algorithm --- p.4-20 / Chapter 4.8 --- The Flow of the Growth Algorithm --- p.4-21 / Chapter 4.9 --- Experimental Results and Performance Analysis --- p.4-21 / Chapter 4.10 --- Concluding Remarks --- p.4-33 / Chapter 5 --- KNOWLEDGE REPRESENTATION IN NEURAL NETWORKS / Chapter 5.1 --- An Alternative Perspective to Knowledge Representation in Neural Network: The Temporal Vector (T-Vector) Approach --- p.5-1 / Chapter 5.2 --- Prior Research Works in the T-Vector Approach --- p.5-2 / Chapter 5.3 --- Formulation of the T-Vector Approach --- p.5-3 / Chapter 5.4 --- Relation of the Hidden T-Vectors to the Output T-Vectors --- p.5-6 / Chapter 5.5 --- Relation of the Hidden T-Vectors to the Input T-Vectors --- p.5-10 / Chapter 5.6 --- An Inspiration for a New Training Algorithm from the Current Model --- p.5-12 / Chapter 6 --- THE DETERMINISTIC TRAINING ALGORITHM FOR NEURAL NETWORKS / Chapter 6.1 --- Introduction --- p.6-1 / Chapter 6.2 --- The Linear Independency Requirement for the Hidden T-Vectors --- p.6-3 / Chapter 6.3 --- Inspiration of the Current Work from the Barmann T-Vector Model --- p.6-5 / Chapter 6.4 --- General Framework of Dynamic Node Creation Algorithm --- p.6-10 / Chapter 6.5 --- The Deterministic Initialization Scheme for the New Hidden Nodes / Chapter 6.5.1 --- Introduction --- p.6-12 / Chapter 6.5.2 --- Determination of the Target T-Vector / Chapter 6.5.2.1 --- Introduction --- p.6-15 / Chapter 6.5.2.2 --- Modelling of the Target Vector βQhQ --- p.6-16 / Chapter 6.5.2.3 --- Near-Linearity Condition for the Sigmoid Function --- p.6-18 / Chapter 6.5.3 --- Preparation for the BP Fine-Tuning Process --- p.6-24 / Chapter 6.5.4 --- Determination of the Target Hidden T-Vector --- p.6-28 / Chapter 6.5.5 --- Determination of the Hidden Weights --- p.6-29 / Chapter 6.5.6 --- Determination of the Output Weights --- p.6-30 / Chapter 6.6 --- Linear Independency Assurance for the New Hidden T-Vector --- 
p.6-30 / Chapter 6.7 --- Extension to the Multi-Output Case --- p.6-32 / Chapter 6.8 --- Convergence Proof for the Deterministic Algorithm --- p.6-35 / Chapter 6.9 --- The Flow of the Deterministic Dynamic Node Creation Algorithm --- p.6-36 / Chapter 6.10 --- Experimental Results and Performance Analysis --- p.6-36 / Chapter 6.11 --- Concluding Remarks --- p.6-50 / Chapter 7 --- THE GENERALIZATION MEASURE MONITORING SCHEME / Chapter 7.1 --- The Problem of Generalization for Neural Networks --- p.7-1 / Chapter 7.2 --- Prior Attempts in Solving the Generalization Problem --- p.7-2 / Chapter 7.3 --- The Generalization Measure --- p.7-4 / Chapter 7.4 --- The Adoption of the Generalization Measure to the Deterministic Algorithm --- p.7-5 / Chapter 7.5 --- Monitoring of the Generalization Measure --- p.7-6 / Chapter 7.6 --- Correspondence between the Generalization Measure and the Generalization Capability of the Network --- p.7-8 / Chapter 7.7 --- Experimental Results and Performance Analysis --- p.7-12 / Chapter 7.8 --- Concluding Remarks --- p.7-16 / Chapter 8 --- THE ESTIMATION OF THE INITIAL HIDDEN LAYER SIZE / Chapter 8.1 --- The Need for an Initial Hidden Layer Size Estimation --- p.8-1 / Chapter 8.2 --- The Initial Hidden Layer Estimation Scheme --- p.8-2 / Chapter 8.3 --- The Extension of the Estimation Procedure to the Multi-Output Network --- p.8-6 / Chapter 8.4 --- Experimental Results and Performance Analysis --- p.8-6 / Chapter 8.5 --- Concluding Remarks --- p.8-16 / Chapter 9 --- CONCLUSION / Chapter 9.1 --- Contributions --- p.9-1 / Chapter 9.2 --- Suggestions for Further Research --- p.9-3 / REFERENCES --- p.R-1 / APPENDIX --- p.A-1
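
Note: Chapter 4.2 of this record builds new hidden nodes on the radial basis function. For reference only (the thesis's modified nonlinearity and node initialization are not reproduced here), a standard Gaussian RBF network computes

\[ \phi_j(\mathbf{x}) = \exp\!\Big(-\frac{\lVert \mathbf{x}-\mathbf{c}_j\rVert^{2}}{2\sigma_j^{2}}\Big), \qquad f(\mathbf{x}) = \sum_{j=1}^{M} w_j\,\phi_j(\mathbf{x}), \]

with centre c_j, width σ_j, and output weight w_j for each of the M hidden nodes.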
525

Integrating artificial neural networks and constraint logic programming.

January 1995
by Vincent Wai-leuk Tam. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 74-80). / Chapter 1 --- Introduction and Summary --- p.1 / Chapter 1.1 --- The Task --- p.1 / Chapter 1.2 --- The Thesis --- p.2 / Chapter 1.2.1 --- Thesis --- p.2 / Chapter 1.2.2 --- Antithesis --- p.3 / Chapter 1.2.3 --- Synthesis --- p.5 / Chapter 1.3 --- Results --- p.6 / Chapter 1.4 --- Contributions --- p.6 / Chapter 1.5 --- Chapter Summaries --- p.7 / Chapter 1.5.1 --- Chapter 2: An ANN-Based Constraint-Solver --- p.8 / Chapter 1.5.2 --- Chapter 3: A Theoretical Framework of PROCLANN --- p.8 / Chapter 1.5.3 --- Chapter 4: The Prototype Implementation --- p.8 / Chapter 1.5.4 --- Chapter 5: Benchmarking --- p.9 / Chapter 1.5.5 --- Chapter 6: Conclusion --- p.9 / Chapter 2 --- An ANN-Based Constraint-Solver --- p.10 / Chapter 2.1 --- Notations --- p.11 / Chapter 2.2 --- Criteria for ANN-based Constraint-solver --- p.11 / Chapter 2.3 --- A Generic Neural Network: GENET --- p.13 / Chapter 2.3.1 --- Network Structure --- p.13 / Chapter 2.3.2 --- Network Convergence --- p.17 / Chapter 2.3.3 --- Energy Perspective --- p.22 / Chapter 2.4 --- Properties of GENET --- p.23 / Chapter 2.5 --- Incremental GENET --- p.27 / Chapter 3 --- A Theoretical Framework of PROCLANN --- p.29 / Chapter 3.1 --- Syntax and Declarative Semantics --- p.30 / Chapter 3.2 --- Unification in PROCLANN --- p.33 / Chapter 3.3 --- PROCLANN Computation Model --- p.38 / Chapter 3.4 --- Soundness and Weak Completeness of the PROCLANN Compu- tation Model --- p.40 / Chapter 3.5 --- Probabilistic Non-determinism --- p.46 / Chapter 4 --- The Prototype Implementation --- p.48 / Chapter 4.1 --- Prototype Design --- p.48 / Chapter 4.2 --- Implementation Issues --- p.52 / Chapter 5 --- Benchmarking --- p.58 / Chapter 5.1 --- N-Queens --- p.59 / Chapter 5.1.1 --- Benchmarking --- p.59 / Chapter 5.1.2 --- Analysis --- p.59 / Chapter 5.2 --- Graph-coloring --- p.63 / Chapter 5.2.1 --- Benchmarking --- p.63 / Chapter 5.2.2 --- Analysis --- p.64 / Chapter 5.3 --- Exceptionally Hard Problem --- p.66 / Chapter 5.3.1 --- Benchmarking --- p.67 / Chapter 5.3.2 --- Analysis --- p.67 / Chapter 6 --- Conclusion --- p.68 / Chapter 6.1 --- Contributions --- p.68 / Chapter 6.2 --- Limitations --- p.70 / Chapter 6.3 --- Future Work --- p.71 / Chapter 6.3.1 --- Parallel Implementation --- p.71 / Chapter 6.3.2 --- General Constraint Handling --- p.72 / Chapter 6.3.3 --- Other ANN Models --- p.73 / Chapter 6.3.4 --- Other Domains --- p.73 / Bibliography --- p.74 / Appendix A The Hard Graph-coloring Problems --- p.81 / Appendix B An Exceptionally Hard Problem (EHP) --- p.182
526

Motion detection: a neural network approach.

January 1992
by Yip Pak Ching. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1992. / Includes bibliographical references (leaves 97-100). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.1 / Chapter 1.2 --- The Objective of Machine Vision --- p.3 / Chapter 1.3 --- Our Goal --- p.4 / Chapter 1.4 --- Previous Works and Current Research --- p.5 / Chapter 1.5 --- Organization of the Thesis --- p.8 / Chapter 2 --- Human Movement Perception --- p.11 / Chapter 2.1 --- Basic Mechanisms of Vision --- p.11 / Chapter 2.2 --- Functions of Movement Perception --- p.12 / Chapter 2.3 --- Five Ways to Make a Spot of Light Appear to Move --- p.14 / Chapter 2.4 --- Real Movement --- p.15 / Chapter 2.5 --- Mechanisms for the Perception of Real Movement --- p.16 / Chapter 2.6 --- Apparent Motion --- p.18 / Chapter 3 --- Machine Movement Perception --- p.21 / Chapter 3.1 --- Introduction --- p.21 / Chapter 3.2 --- Perspective Transformation --- p.21 / Chapter 3.3 --- Motion Detection by Difference Image --- p.22 / Chapter 3.4 --- Accumulative Difference --- p.24 / Chapter 3.5 --- Establishing a Reference Image --- p.26 / Chapter 3.6 --- Optical Flow --- p.27 / Chapter 4 --- Neural Networks for Machine Vision --- p.30 / Chapter 4.1 --- Introduction --- p.30 / Chapter 4.2 --- Perceptron --- p.30 / Chapter 4.3 --- The Back-Propagation Training Algorithm --- p.33 / Chapter 4.4 --- Object Identification --- p.34 / Chapter 4.5 --- Special Technique for Improving the Learning Time and Recognition Rate --- p.36 / Chapter 5 --- Neural Networks by Supervised Learning for Motion Detection --- p.39 / Chapter 5.1 --- Introduction --- p.39 / Chapter 5.2 --- Three-Level Network Architecture --- p.40 / Chapter 5.3 --- Four-Level Network Architecture --- p.45 / Chapter 6 --- Rough Motion Detection --- p.50 / Chapter 6.1 --- Introduction --- p.50 / Chapter 6.2 --- The Rough Motion Detection Network --- p.51 / Chapter 6.3 --- The Correlation Network --- p.54 / Chapter 6.4 --- Modified Rough Motion Detection Network --- p.56 / Chapter 7 --- Moving Object Extraction --- p.59 / Chapter 7.1 --- Introduction --- p.59 / Chapter 7.2 --- Three Types of Images for Moving Object Extraction --- p.59 / Chapter 7.3 --- Edge Enhancement Network --- p.62 / Chapter 7.4 --- Background Remover --- p.63 / Chapter 8 --- Motion Parameter Extraction --- p.66 / Chapter 8.1 --- Introduction --- p.66 / Chapter 8.2 --- 2-D Motion Detection --- p.66 / Chapter 8.3 --- Normalization Network --- p.67 / Chapter 8.4 --- 3-D Motion Parameter Extraction --- p.70 / Chapter 8.5 --- Object Identification --- p.70 / Chapter 9 --- Motion Parameter Extraction from Overlapped Object Images --- p.72 / Chapter 9.1 --- Introduction --- p.72 / Chapter 9.2 --- Decision Network --- p.72 / Chapter 9.3 --- Motion Direction Extraction from Overlapped Object Images by Three-Level Network Model with Supervised Learning --- p.75 / Chapter 9.4 --- Readjustment Network for Motion Parameter Extraction from Overlapped Object Images --- p.79 / Chapter 9.5 --- Reconstruction of the Overlapped object Image --- p.82 / Chapter 10 --- The Integrated Motion Detection System --- p.87 / Chapter 10.1 --- Introduction --- p.87 / Chapter 10.2 --- System Architecture --- p.88 / Chapter 10.3 --- Results and Concluding Remarks --- p.91 / Chapter 11 --- Conclusion --- p.93 / References --- p.97
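
Note: Chapter 3.3 of this record covers motion detection by difference image. The following is only a generic sketch of that classical operation (the threshold and frame sizes are arbitrary assumptions, and it is not the neural-network detector the thesis develops); the accumulative difference of Chapter 3.4 extends it by summing such masks over a frame sequence.

```python
import numpy as np

def difference_image(prev_frame, curr_frame, threshold=25):
    """Flag pixels whose grayscale intensity (0-255) changed by more than
    `threshold` between two consecutive frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)  # 1 = changed (moving), 0 = static

# Toy usage: a bright square shifts two pixels to the right between frames.
prev = np.zeros((64, 64), dtype=np.uint8)
curr = np.zeros((64, 64), dtype=np.uint8)
prev[20:30, 20:30] = 200
curr[20:30, 22:32] = 200
print("changed pixels:", int(difference_image(prev, curr).sum()))
```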
527

Automatic recognition of isolated Cantonese syllables using neural networks =: 利用神經網絡識別粤語單音節 / Li yong shen jing wang luo shi bie yue yu dan yin jie.

January 1996
by Tan Lee. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1996. / Includes bibliographical references. / by Tan Lee. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Conventional Pattern Recognition Approaches for Speech Recognition --- p.3 / Chapter 1.2 --- A Review on Neural Network Applications in Speech Recognition --- p.6 / Chapter 1.2.1 --- Static Pattern Classification --- p.7 / Chapter 1.2.2 --- Hybrid Approaches --- p.9 / Chapter 1.2.3 --- Dynamic Neural Networks --- p.12 / Chapter 1.3 --- Automatic Recognition of Cantonese Speech --- p.16 / Chapter 1.4 --- Organization of the Thesis --- p.18 / References --- p.20 / Chapter 2 --- Phonological and Acoustical Properties of Cantonese Syllables --- p.29 / Chapter 2.1 --- Phonology of Cantonese --- p.29 / Chapter 2.1.1 --- Basic Phonetic Units --- p.30 / Chapter 2.1.2 --- Syllabic Structure --- p.32 / Chapter 2.1.3 --- Lexical Tones --- p.33 / Chapter 2.2 --- Acoustical Properties of Cantonese Syllables --- p.35 / Chapter 2.2.1 --- Spectral Features --- p.35 / Chapter 2.2.2 --- Energy and Zero-Crossing Rate --- p.39 / Chapter 2.2.3 --- Pitch --- p.40 / Chapter 2.2.4 --- Duration --- p.41 / Chapter 2.3 --- Acoustic Feature Extraction for Speech Recognition of Cantonese --- p.42 / References --- p.46 / Chapter 3 --- Tone Recognition of Isolated Cantonese Syllables --- p.48 / Chapter 3.1 --- Acoustic Pre-processing --- p.48 / Chapter 3.1.1 --- Voiced Portion Detection --- p.48 / Chapter 3.1.2 --- Pitch Extraction --- p.51 / Chapter 3.2 --- Supra-Segmental Feature Parameters for Tone Recognition --- p.53 / Chapter 3.2.1 --- Pitch-Related Feature Parameters --- p.53 / Chapter 3.2.2 --- Duration and Energy Drop Rate --- p.55 / Chapter 3.2.3 --- Normalization of Feature Parameters --- p.57 / Chapter 3.3 --- An MLP Based Tone Classifier --- p.58 / Chapter 3.4 --- Simulation Experiments --- p.59 / Chapter 3.4.1 --- Speech Data --- p.59 / Chapter 3.4.2 --- Feature Extraction and Normalization --- p.61 / Chapter 3.4.3 --- Experimental Results --- p.61 / Chapter 3.5 --- Discussion and Conclusion --- p.64 / References --- p.65 / Chapter 4 --- Recurrent Neural Network Based Dynamic Speech Models --- p.67 / Chapter 4.1 --- Motivations and Rationales --- p.68 / Chapter 4.2 --- RNN Speech Model (RSM) --- p.71 / Chapter 4.2.1 --- Network Architecture and Dynamic Operation --- p.71 / Chapter 4.2.2 --- RNN for Speech Modeling --- p.72 / Chapter 4.2.3 --- Illustrative Examples --- p.75 / Chapter 4.3 --- Training of RNN Speech Models --- p.78 / Chapter 4.3.1 --- Real-Time-Recurrent-Learning (RTRL) Algorithm --- p.78 / Chapter 4.3.2 --- Iterative Re-segmentation Training of RSM --- p.80 / Chapter 4.4 --- Several Practical Issues in RSM Training --- p.85 / Chapter 4.4.1 --- Combining Adjacent Segments --- p.85 / Chapter 4.4.2 --- Hypothesizing Initial Segmentation --- p.86 / Chapter 4.4.3 --- Improving Temporal State Dependency --- p.89 / Chapter 4.5 --- Simulation Experiments --- p.90 / Chapter 4.5.1 --- Experiment 4.1 - Training with a Single Utterance --- p.91 / Chapter 4.5.2 --- Experiment 4.2 - Effect of Augmenting Recurrent Learning Rate --- p.93 / Chapter 4.5.3 --- Experiment 4.3 - Training with Multiple Utterances --- p.96 / Chapter 4.5.4 --- Experiment 4.4 一 Modeling Performance of RSMs --- p.99 / Chapter 4.6 --- Conclusion --- p.104 / References --- p.106 / Chapter 5 --- Isolated Word Recognition Using RNN Speech Models --- p.107 / Chapter 5.1 --- A Baseline System --- p.107 / Chapter 5.1.1 --- System Description --- p.107 / Chapter 5.1.2 
--- Simulation Experiments --- p.110 / Chapter 5.1.3 --- Discussion --- p.117 / Chapter 5.2 --- Incorporating Duration Information --- p.118 / Chapter 5.2.1 --- Duration Screening --- p.118 / Chapter 5.2.2 --- Determination of Duration Bounds --- p.120 / Chapter 5.2.3 --- Simulation Experiments --- p.120 / Chapter 5.2.4 --- Discussion --- p.124 / Chapter 5.3 --- Discriminative Training --- p.125 / Chapter 5.3.1 --- The Minimum Classification Error Formulation --- p.126 / Chapter 5.3.2 --- Generalized Probabilistic Descent Algorithm --- p.127 / Chapter 5.3.3 --- Determination of Training Parameters --- p.128 / Chapter 5.3.4 --- Simulation Experiments --- p.129 / Chapter 5.3.5 --- Discussion --- p.133 / Chapter 5.4 --- Conclusion --- p.134 / References --- p.135 / Chapter 6 --- An Integrated Speech Recognition System for Cantonese Syllables --- p.137 / Chapter 6.1 --- System Architecture and Recognition Scheme --- p.137 / Chapter 6.2 --- Speech Corpus and Data Pre-processing --- p.140 / Chapter 6.3 --- Recognition Experiments and Results --- p.140 / Chapter 6.4 --- Discussion and Conclusion --- p.144 / References --- p.146 / Chapter 7 --- Conclusions and Suggestions for Future Work --- p.147 / Chapter 7.1 --- Conclusions --- p.147 / Chapter 7.2 --- Suggestions for Future Work --- p.151
528

Applications and implementation of neuro-connectionist architectures.

January 1996
by H.S. Ng. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1996. / Includes bibliographical references (leaves 91-97). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Introduction --- p.1 / Chapter 1.2 --- Neuro-connectionist Network --- p.2 / Chapter 2 --- Related Works --- p.5 / Chapter 2.1 --- Introduction --- p.5 / Chapter 2.1.1 --- Kruskal's Algorithm --- p.5 / Chapter 2.1.2 --- Prim's algorithm --- p.6 / Chapter 2.1.3 --- Sollin's algorithm --- p.7 / Chapter 2.1.4 --- Bellman-Ford algorithm --- p.8 / Chapter 2.1.5 --- Floyd-Warshall algorithm --- p.9 / Chapter 3 --- Binary Relation Inference Network and Path Problems --- p.11 / Chapter 3.1 --- Introduction --- p.11 / Chapter 3.2 --- Topology --- p.12 / Chapter 3.3 --- Network structure --- p.13 / Chapter 3.3.1 --- Single-destination BRIN architecture --- p.14 / Chapter 3.3.2 --- Comparison between all-pair BRIN and single-destination BRIN --- p.18 / Chapter 3.4 --- Path Problems and BRIN Solution --- p.18 / Chapter 3.4.1 --- Minimax path problems --- p.18 / Chapter 3.4.2 --- BRIN solution --- p.19 / Chapter 4 --- Analog and Voltage-mode Approach --- p.22 / Chapter 4.1 --- Introduction --- p.22 / Chapter 4.2 --- Analog implementation --- p.24 / Chapter 4.3 --- Voltage-mode approach --- p.26 / Chapter 4.3.1 --- The site function --- p.26 / Chapter 4.3.2 --- The unit function --- p.28 / Chapter 4.3.3 --- The computational unit --- p.28 / Chapter 4.4 --- Conclusion --- p.29 / Chapter 5 --- Current-mode Approach --- p.32 / Chapter 5.1 --- Introduction --- p.32 / Chapter 5.2 --- Current-mode approach for analog VLSI Implementation --- p.33 / Chapter 5.2.1 --- Site and Unit output function --- p.33 / Chapter 5.2.2 --- Computational unit --- p.34 / Chapter 5.2.3 --- A complete network --- p.35 / Chapter 5.3 --- Conclusion --- p.37 / Chapter 6 --- Neural Network Compensation for Optimization Circuit --- p.40 / Chapter 6.1 --- Introduction --- p.40 / Chapter 6.2 --- A Neuro-connectionist Architecture for error correction --- p.41 / Chapter 6.2.1 --- Linear Relationship --- p.42 / Chapter 6.2.2 --- Output Deviation of Computational Unit --- p.44 / Chapter 6.3 --- Experimental Results --- p.46 / Chapter 6.3.1 --- Training Phase --- p.46 / Chapter 6.3.2 --- Generalization Phase --- p.48 / Chapter 6.4 --- Conclusion --- p.50 / Chapter 7 --- Precision-limited Analog Neural Network Compensation --- p.51 / Chapter 7.1 --- Introduction --- p.51 / Chapter 7.2 --- Analog Neural Network hardware --- p.53 / Chapter 7.3 --- Integration of analog neural network compensation of connectionist net- work for general path problems --- p.54 / Chapter 7.4 --- Experimental Results --- p.55 / Chapter 7.4.1 --- Convergence time --- p.56 / Chapter 7.4.2 --- The accuracy of the system --- p.57 / Chapter 7.5 --- Conclusion --- p.58 / Chapter 8 --- Transitive Closure Problems --- p.60 / Chapter 8.1 --- Introduction --- p.60 / Chapter 8.2 --- Different ways of implementation of BRIN for transitive closure --- p.61 / Chapter 8.2.1 --- Digital Implementation --- p.61 / Chapter 8.2.2 --- Analog Implementation --- p.61 / Chapter 8.3 --- Transitive Closure Problem --- p.63 / Chapter 8.3.1 --- A special case of maximum spanning tree problem --- p.64 / Chapter 8.3.2 --- Analog approach solution for transitive closure problem --- p.65 / Chapter 8.3.3 --- Current-mode approach solution for transitive closure problem --- p.67 / Chapter 8.4 --- Comparisons between the different forms of implementation of BRIN for transitive closure --- p.71 / Chapter 8.4.1 --- 
Convergence Time --- p.71 / Chapter 8.4.2 --- Circuit complexity --- p.72 / Chapter 8.5 --- Discussion --- p.73 / Chapter 9 --- Critical path problems --- p.74 / Chapter 9.1 --- Introduction --- p.74 / Chapter 9.2 --- Problem statement and single-destination BRIN solution --- p.75 / Chapter 9.3 --- Analog implementation --- p.76 / Chapter 9.3.1 --- Separated building block --- p.78 / Chapter 9.3.2 --- Combined building block --- p.79 / Chapter 9.4 --- Current-mode approach --- p.80 / Chapter 9.4.1 --- "Site function, unit output function and a completed network" --- p.80 / Chapter 9.5 --- Conclusion --- p.83 / Chapter 10 --- Conclusions --- p.85 / Chapter 10.1 --- Summary of Achievements --- p.85 / Chapter 10.2 --- Future development --- p.88 / Chapter 10.2.1 --- Application for financial problems --- p.88 / Chapter 10.2.2 --- Fabrication of VLSI Implementation --- p.88 / Chapter 10.2.3 --- Actual prototyping of Analog Integrated Circuits for critical path and transitive closure problems --- p.89 / Chapter 10.2.4 --- Other implementation platform --- p.89 / Chapter 10.2.5 --- On-line update of routing table inside the router for network com- munication using BRIN --- p.89 / Chapter 10.2.6 --- Other BRIN's applications --- p.90 / Bibliography --- p.91
529

Correlation basis function network and application to financial decision making.

January 1999
by Kwok-Fai Cheung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (leaves 100-103). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.4 / Chapter 1.1 --- Summary of Contributions --- p.5 / Chapter 1.2 --- Organization of the Thesis --- p.6 / Chapter 2 --- Current Methods and Problems --- p.8 / Chapter 2.1 --- Statisticians --- p.8 / Chapter 2.1.1 --- ARMA --- p.8 / Chapter 2.1.1.1 --- Moving Average models --- p.8 / Chapter 2.1.1.2 --- Autoregressive models --- p.9 / Chapter 2.1.1.3 --- Stationary Process --- p.10 / Chapter 2.1.1.4 --- Autoregressive-Moving Average model --- p.10 / Chapter 2.1.1.5 --- Parameter Estimation --- p.11 / Chapter 2.2 --- Financial Researchers --- p.11 / Chapter 2.2.1 --- Efficient Market Theory --- p.11 / Chapter 2.3 --- Computer Scientists --- p.12 / Chapter 2.3.1 --- Expert System --- p.12 / Chapter 2.3.2 --- Neural Network --- p.14 / Chapter 2.3.2.1 --- Multilayer Perceptron --- p.14 / Chapter 2.3.2.2 --- Radial Basis Function Network (RBF) --- p.19 / Chapter 2.4 --- Research Apart from Prediction and Trading in Finance --- p.22 / Chapter 2.4.1 --- Derivatives Valuation and Hedging --- p.22 / Chapter 2.4.1.1 --- Volatility --- p.22 / Chapter 2.4.2 --- Pricing of Initial Public Offering --- p.24 / Chapter 2.4.3 --- Credit Rating --- p.25 / Chapter 2.4.4 --- Financial Health Assessment --- p.26 / Chapter 2.5 --- Discussion --- p.27 / Chapter 3 --- Correlation Basis Function Network --- p.28 / Chapter 3.1 --- Formulation of CBF network --- p.31 / Chapter 3.2 --- First Order Learning Algorithm --- p.32 / Chapter 3.3 --- Summary --- p.35 / Chapter 4 --- Applications of CBF Network in Stock trading --- p.36 / Chapter 4.1 --- Data Representation --- p.36 / Chapter 4.2 --- Data Pre-processing --- p.38 / Chapter 4.2.1 --- Input data pre-processing --- p.38 / Chapter 4.2.2 --- Output data pre-processing --- p.38 / Chapter 4.3 --- Multiple CBF Networks for Generation of Trading Signals --- p.41 / Chapter 4.4 --- Output Data Post-processing --- p.41 / Chapter 4.5 --- Trader's Interpretation --- p.43 / Chapter 4.6 --- Maximum profit trading system --- p.45 / Chapter 4.7 --- Performance Evaluation --- p.46 / Chapter 5 --- Applications of CBF Network in Warrant trading --- p.48 / Chapter 5.1 --- Option Model --- p.48 / Chapter 5.2 --- Warrant Model --- p.49 / Chapter 5.3 --- Black-Scholes Pricing Formula --- p.51 / Chapter 5.4 --- Using CBF Network for choosing warrants --- p.53 / Chapter 5.5 --- Trading System --- p.53 / Chapter 5.5.1 --- Trading System by Black-Scholes Model --- p.54 / Chapter 5.5.2 --- Trading System by Warrant Sensitivity --- p.55 / Chapter 5.6 --- Learning of Parameters in Warrant Sensitivity Model by Hierarchi- cal CBF Network --- p.57 / Chapter 5.7 --- Experimental Results --- p.59 / Chapter 5.7.1 --- Aggregate profit --- p.62 / Chapter 5.8 --- Summary and Discussion --- p.69 / Chapter 6 --- Analysis of CBF Network and other models --- p.72 / Chapter 6.1 --- Time and Space Complexity --- p.72 / Chapter 6.1.1 --- RBF Network --- p.72 / Chapter 6.1.2 --- CBF Network --- p.74 / Chapter 6.1.3 --- Black-Scholes Pricing Formula --- p.74 / Chapter 6.1.4 --- Warrant Sensitivity Model --- p.75 / Chapter 6.2 --- "Model Confidence, Prediction Confidence and Model Stability" --- p.76 / Chapter 6.2.1 --- Model and Prediction Confidence --- p.77 / Chapter 6.2.2 --- Model Stability --- p.77 / Chapter 6.2.3 --- Linear Model Analysis --- p.79 / Chapter 6.2.4 --- CBF Network Analysis --- p.82 / 
Chapter 6.2.5 --- Black-Scholes Pricing Formula Analysis --- p.84 / Chapter 7 --- Conclusion --- p.93 / Chapter 7.1 --- Neural Network and Statistical Modeling --- p.95 / Chapter 7.2 --- Financial Markets --- p.95 / Chapter A --- RBF Network Parameters Estimation --- p.101 / Chapter A.1 --- Least Squares --- p.101 / Chapter A.2 --- Gradient Descent Algorithm --- p.103 / Chapter B --- Further study on Black-Scholes Model --- p.104
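
Note: the Black-Scholes pricing formula cited in Chapters 5.3 and 6.1.3 of this record is the standard European call valuation, stated here for reference only (the warrant-sensitivity adjustments the thesis trains with a hierarchical CBF network are not reproduced):

\[ C = S\,N(d_1) - K e^{-rT} N(d_2), \qquad d_1 = \frac{\ln(S/K) + (r + \sigma^{2}/2)\,T}{\sigma\sqrt{T}}, \qquad d_2 = d_1 - \sigma\sqrt{T}, \]

where S is the spot price, K the strike, r the risk-free rate, σ the volatility, T the time to expiry, and N(·) the standard normal CDF.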
530

Adaptive supervised learning decision network with low downside volatility.

January 1999
Kei-Keung Hung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. / Includes bibliographical references (leaves 127-128). / Abstract also in Chinese. / Abstract --- p.i / Acknowledgments --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Static Portfolio Techniques --- p.1 / Chapter 1.2 --- Neural Network Approach --- p.2 / Chapter 1.3 --- Contributions of this Thesis --- p.3 / Chapter 1.4 --- Application of this Research --- p.4 / Chapter 1.5 --- Organization of this Thesis --- p.4 / Chapter 2 --- Literature Review --- p.6 / Chapter 2.1 --- Standard Markowian Portfolio Optimization (SMPO) and Sharpe Ratio --- p.6 / Chapter 2.2 --- Downside Risk --- p.9 / Chapter 2.3 --- Augmented Lagrangian Method --- p.10 / Chapter 2.4 --- Adaptive Supervised Learning Decision (ASLD) System --- p.13 / Chapter I --- Static Portfolio Optimization Techniques --- p.19 / Chapter 3 --- Modified Portfolio Sharpe Ratio Maximization (MPSRM) --- p.20 / Chapter 3.1 --- Experiment Setting --- p.21 / Chapter 3.2 --- Downside Risk and Upside Volatility --- p.22 / Chapter 3.3 --- Investment Diversification --- p.24 / Chapter 3.4 --- Analysis of the Parameters H and B of MPSRM --- p.27 / Chapter 3.5 --- Risk Minimization with Control of Expected Return --- p.30 / Chapter 3.6 --- Return Maximization with Control of Expected Downside Risk --- p.32 / Chapter 4 --- Variations of Modified Portfolio Sharpe Ratio Maximization --- p.34 / Chapter 4.1 --- Soft-max Version of Modified Portfolio Sharpe Ratio Maximization (SMP- SRM) --- p.35 / Chapter 4.1.1 --- Applying Soft-max Technique to Modified Portfolio Sharpe Ratio Maximization (SMPSRM) --- p.35 / Chapter 4.1.2 --- Risk Minimization with Control of Expected Return --- p.37 / Chapter 4.1.3 --- Return Maximization with Control of Expected Downside Risk --- p.38 / Chapter 4.2 --- Soft-max Version of MPSRM with Entropy-like Regularization Term (SMPSRM-E) --- p.39 / Chapter 4.2.1 --- Using Entropy-like Regularization term in Soft-max version of Modified Portfolio Sharpe Ratio Maximization (SMPSRM-E) --- p.39 / Chapter 4.2.2 --- Risk Minimization with Control of Expected Return --- p.41 / Chapter 4.2.3 --- Return Maximization with Control of Expected Downside Risk --- p.43 / Chapter 4.3 --- Analysis of Parameters in SMPSRM and SMPSRM-E --- p.44 / Chapter II --- Neural Network Approach --- p.48 / Chapter 5 --- Investment on a Foreign Exchange Market using ASLD system --- p.49 / Chapter 5.1 --- Investment on A Foreign Exchange Portfolio --- p.49 / Chapter 5.2 --- Two Important Issues Revisited --- p.51 / Chapter 6 --- Investment on Stock market using ASLD System --- p.54 / Chapter 6.1 --- Investment on Hong Kong Hang Seng Index --- p.54 / Chapter 6.1.1 --- Performance of the Original ASLD System --- p.54 / Chapter 6.1.2 --- Performances After Adding Several Heuristic Strategies --- p.55 / Chapter 6.2 --- Investment on Six Different Stock Indexes --- p.61 / Chapter 6.2.1 --- Structure and Operation of the New System --- p.62 / Chapter 6.2.2 --- Experimental Results --- p.63 / Chapter III --- Combination of Static Portfolio Optimization techniques with Neural Network Approach --- p.67 / Chapter 7 --- Combining the ASLD system with Different Portfolio Optimizations --- p.68 / Chapter 7.1 --- Structure and Operation of the New System --- p.69 / Chapter 7.2 --- Combined with the Standard Markowian Portfolio Optimization (SMPO) --- p.70 / Chapter 7.3 --- Combined with the Modified Portfolio Sharpe Ratio Maximization (MP- SRM) --- p.72 / Chapter 7.4 --- Combined with 
the MPSRM - Risk Minimization with Control of Expected Return --- p.74 / Chapter 7.5 --- Combined with the MPSRM - Return Maximization with Control of Expected Downside Risk --- p.76 / Chapter 7.6 --- Combined with the Soft-max Version of MPSRM (SMPSRM) --- p.77 / Chapter 7.7 --- Combined with the SMPSRM - Risk Minimization with Control of Expected Return --- p.79 / Chapter 7.8 --- Combined with the SMPSRM - Return Maximization with Control of Expected Downside Risk --- p.80 / Chapter 7.9 --- Combined with the Soft-max Version of MPSRM with Entropy-like Regularization Term (SMPSRM-E) --- p.82 / Chapter 7.10 --- Combined with the SMPSRM-E - Risk Minimization with Control of Expected Return --- p.84 / Chapter 7.11 --- Combined with the SMPSRM-E - Return Maximization with Control of Expected Downside Risk --- p.86 / Chapter IV --- Software Developed --- p.93 / Chapter 8 --- Windows Application Developed --- p.94 / Chapter 8.1 --- Decision on Platform and Programming Language --- p.94 / Chapter 8.2 --- System Design --- p.96 / Chapter 8.3 --- Operation of our program --- p.97 / Chapter 9 --- Conclusion --- p.103 / Chapter A --- Algorithm for Portfolio Sharpe Ratio Maximization (PSRM) --- p.105 / Chapter B --- Algorithm for Improved Portfolio Sharpe Ratio Maximization (ISRM) --- p.107 / Chapter C --- Proof of Regularization Term Y --- p.109 / Chapter D --- Algorithm for Modified Portfolio Sharpe Ratio Maximization (MPSRM) --- p.111 / Chapter E --- Algorithm for MPSRM with Control of Expected Return --- p.113 / Chapter F --- Algorithm for MPSRM with Control of Expected Downside Risk --- p.115 / Chapter G --- Algorithm for SMPSRM with Control of Expected Return --- p.117 / Chapter H --- Algorithm for SMPSRM with Control of Expected Downside Risk --- p.119 / Chapter I --- Proof of Entropy-like Regularization Term --- p.121 / Chapter J --- Algorithm for SMPSRM-E with Control of Expected Return --- p.123 / Chapter K --- Algorithm for SMPSRM-E with Control of Expected Downside Risk --- p.125 / Bibliography --- p.127
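
Note: for context on the objectives this record optimizes, the Sharpe ratio discussed in Chapter 2.1 and the downside-risk idea of Chapter 2.2 are, in their textbook forms (the thesis's MPSRM variants are not reproduced here),

\[ \text{Sharpe} = \frac{\mathbb{E}[R_p] - R_f}{\sigma_p}, \qquad \sigma_{\text{down}} = \sqrt{\mathbb{E}\big[\min(R_p - \tau,\,0)^{2}\big]}, \]

where R_p is the portfolio return, R_f the risk-free rate, σ_p the return standard deviation, and τ a target return; downside-volatility criteria replace σ_p with σ_down so that only below-target returns are penalized.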
