About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
301

A neurodynamic optimization approach to constrained pseudoconvex optimization.

January 2011 (has links)
Guo, Zhishan. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (p. 71-82). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.ii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Constrained Pseudoconvex Optimization --- p.1 / Chapter 1.2 --- Recurrent Neural Networks --- p.4 / Chapter 1.3 --- Thesis Organization --- p.7 / Chapter 2 --- Literature Review --- p.8 / Chapter 2.1 --- Pseudoconvex Optimization --- p.8 / Chapter 2.2 --- Recurrent Neural Networks --- p.10 / Chapter 3 --- Model Description and Convergence Analysis --- p.17 / Chapter 3.1 --- Model Descriptions --- p.18 / Chapter 3.2 --- Global Convergence --- p.20 / Chapter 4 --- Numerical Examples --- p.27 / Chapter 4.1 --- Gaussian Optimization --- p.28 / Chapter 4.2 --- Quadratic Fractional Programming --- p.36 / Chapter 4.3 --- Nonlinear Convex Programming --- p.39 / Chapter 5 --- Real-time Data Reconciliation --- p.42 / Chapter 5.1 --- Introduction --- p.42 / Chapter 5.2 --- Theoretical Analysis and Performance Measurement --- p.44 / Chapter 5.3 --- Examples --- p.45 / Chapter 6 --- Real-time Portfolio Optimization --- p.53 / Chapter 6.1 --- Introduction --- p.53 / Chapter 6.2 --- Model Description --- p.54 / Chapter 6.3 --- Theoretical Analysis --- p.56 / Chapter 6.4 --- Illustrative Examples --- p.58 / Chapter 7 --- Conclusions and Future Works --- p.67 / Chapter 7.1 --- Concluding Remarks --- p.67 / Chapter 7.2 --- Future Works --- p.68 / Chapter A --- Publication List --- p.69 / Bibliography --- p.71
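This record gives only the table of contents, but the model class it names is well defined: a recurrent network whose state flows toward the constrained optimum. Below is a minimal sketch of a projection-type neurodynamic system of that kind, applied to a quadratic fractional program like the class treated in Chapter 4.2; the objective, box bounds, gain, and Euler discretization are illustrative assumptions, not the thesis's model.

```python
import numpy as np

# One-layer projection neurodynamics for min f(x) s.t. lo <= x <= hi:
#   dx/dt = -x + P_box(x - alpha * grad_f(x)),
# a standard model class for constrained pseudoconvex optimization.
def grad_f(x):
    # Pseudoconvex quadratic fractional objective f(x) = (x'Qx + 1) / (c'x),
    # valid on the region where c'x > 0; Q and c are illustrative choices.
    Q = np.array([[2.0, 0.5], [0.5, 1.0]])
    c = np.array([1.0, 1.0])
    num = x @ Q @ x + 1.0
    den = c @ x
    return 2.0 * (Q @ x) / den - num * c / den**2

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

lo, hi = np.array([0.5, 0.5]), np.array([5.0, 5.0])
x = np.array([4.0, 4.0])                       # initial network state
alpha, dt = 0.5, 0.05                          # gain and Euler step (assumed)
for _ in range(2000):
    x = x + dt * (-x + project_box(x - alpha * grad_f(x), lo, hi))
print(x)                                       # state settles at a KKT point
```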
302

Applications of neural networks in the binary classification problem.

January 1997 (has links)
by Chan Pak Kei, Bernard. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 125-127). / Chapter 1 --- Introduction --- p.10 / Chapter 1.1 --- Overview --- p.10 / Chapter 1.2 --- Classification Approaches --- p.11 / Chapter 1.3 --- The Use of Neural Network --- p.12 / Chapter 1.4 --- Motivations --- p.14 / Chapter 1.5 --- Organization of Thesis --- p.16 / Chapter 2 --- Related Work --- p.19 / Chapter 2.1 --- Overview --- p.19 / Chapter 2.2 --- Neural Network --- p.20 / Chapter 2.2.1 --- Backpropagation Feedforward Neural Network --- p.20 / Chapter 2.2.2 --- Training of a Backpropagation Feedforward Neural Network --- p.22 / Chapter 2.2.3 --- Single Hidden-layer Model --- p.27 / Chapter 2.2.4 --- Data Preprocessing --- p.27 / Chapter 2.3 --- Fuzzy Sets --- p.29 / Chapter 2.3.1 --- Fuzzy Linear Regression Analysis --- p.29 / Chapter 2.4 --- Network Architecture Altering Algorithms --- p.31 / Chapter 2.4.1 --- Pruning Algorithms --- p.32 / Chapter 2.4.2 --- Constructive/Growing Algorithms --- p.35 / Chapter 2.5 --- Summary --- p.38 / Chapter 3 --- Hybrid Classification Systems --- p.39 / Chapter 3.1 --- Overview --- p.39 / Chapter 3.2 --- Literature Review --- p.41 / Chapter 3.2.1 --- Fuzzy Linear Regression (FLR) with Fuzzy Interval Analysis --- p.41 / Chapter 3.3 --- Data Sample and Methodology --- p.44 / Chapter 3.4 --- Hybrid Model --- p.46 / Chapter 3.4.1 --- Construction of Model --- p.46 / Chapter 3.5 --- Experimental Results --- p.50 / Chapter 3.5.1 --- Experimental Results on Breast Cancer Database --- p.50 / Chapter 3.5.2 --- Experimental Results on Synthetic Data --- p.53 / Chapter 3.6 --- Conclusion --- p.55 / Chapter 4 --- Searching for Suitable Network Size Automatically --- p.59 / Chapter 4.1 --- Overview --- p.59 / Chapter 4.2 --- Literature Review --- p.61 / Chapter 4.2.1 --- Pruning Algorithm --- p.61 / Chapter 4.2.2 --- Constructive Algorithms (Growing) --- p.66 / Chapter 4.2.3 --- Integration of methods --- p.67 / Chapter 4.3 --- Methodology and Approaches --- p.68 / Chapter 4.3.1 --- Growing --- p.68 / Chapter 4.3.2 --- Combinations of Growing and Pruning --- p.69 / Chapter 4.4 --- Experimental Results --- p.75 / Chapter 4.4.1 --- Breast-Cancer Cytology Database --- p.76 / Chapter 4.4.2 --- Tic-Tac-Toe Database --- p.82 / Chapter 4.5 --- Conclusion --- p.89 / Chapter 5 --- Conclusion --- p.91 / Chapter 5.1 --- Recall of Thesis Objectives --- p.91 / Chapter 5.2 --- Summary of Achievements --- p.92 / Chapter 5.2.1 --- Data Preprocessing --- p.92 / Chapter 5.2.2 --- Network Size --- p.93 / Chapter 5.3 --- Future Works --- p.94 / Chapter A --- Experimental Results of Ch3 --- p.95 / Chapter B --- Experimental Results of Ch4 --- p.112 / Bibliography --- p.125
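The baseline model in this record is a single hidden-layer backpropagation network for binary classification (Chapters 2.2.2-2.2.3), the structure on which the later growing and pruning schemes operate. A minimal sketch of such a network follows, assuming a sigmoid output trained on squared error; the toy XOR-style data, layer width, and learning rate are invented for illustration, not the thesis's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single hidden-layer network trained by plain backpropagation for
# binary classification: sigmoid units, squared-error loss, batch updates.
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)     # XOR-like class labels

n_hidden = 6
W1 = rng.normal(scale=0.5, size=(2, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=n_hidden)
b2 = 0.0
eta = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                  # forward pass
    p = sigmoid(h @ W2 + b2)
    d2 = (p - y) * p * (1 - p)                # output-layer delta
    d1 = np.outer(d2, W2) * h * (1 - h)       # backpropagated hidden deltas
    W2 -= eta * h.T @ d2 / len(X)
    b2 -= eta * d2.mean()
    W1 -= eta * X.T @ d1 / len(X)
    b1 -= eta * d1.mean(axis=0)

print(((p > 0.5) == y).mean())                # training accuracy
```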
303

Radial basis function of neural network in performance attribution.

January 2003 (has links)
Wong Hing-Kwok. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 34-35). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Radial Basis Function (RBF) of Neural Network --- p.5 / Chapter 2.1 --- Neural Network --- p.6 / Chapter 2.2 --- Radial Basis Function (RBF) Network --- p.8 / Chapter 2.3 --- Model Specification --- p.10 / Chapter 2.4 --- Estimation --- p.12 / Chapter 3 --- RBF in Performance Attribution --- p.17 / Chapter 3.1 --- Background of Data Set --- p.18 / Chapter 3.2 --- Portfolio Construction --- p.20 / Chapter 3.3 --- Portfolio Rebalance --- p.22 / Chapter 3.4 --- Result --- p.23 / Chapter 4 --- Comparison --- p.26 / Chapter 4.1 --- Standard Linear Model --- p.27 / Chapter 4.2 --- Fixed Additive Model --- p.28 / Chapter 4.3 --- Refined Additive Model --- p.29 / Chapter 4.4 --- Result --- p.30 / Chapter 5 --- Conclusion --- p.32 / Bibliography --- p.34
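Chapter 2 of this record covers RBF model specification and estimation. As an orientation sketch: an RBF network places Gaussian units at a set of centres and fits the linear output layer, here by least squares, which is one common estimation route for RBF models. The centres, width, and synthetic data below are assumptions, not the thesis's performance-attribution setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative RBF network: Gaussian units on a grid of centres, with the
# output layer fitted by linear least squares.
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)

centres = np.linspace(-3, 3, 10).reshape(-1, 1)
width = 1.0

def design(X):
    sq_dist = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dist / (2 * width ** 2))   # one Gaussian basis per centre

Phi = design(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # linear output weights
print(np.abs(Phi @ w - y).mean())                # mean absolute fit residual
```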
304

Neural networks for optimization

Cheung, Ka Kit 01 January 2001 (has links)
No description available.
305

Neurodynamic approaches to model predictive control.

January 2009 (has links)
Pan, Yunpeng. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (p. 98-107). / Abstract also in Chinese. / Abstract --- p.i / p.iii / Acknowledgement --- p.iv / Chapter 1 --- Introduction --- p.2 / Chapter 1.1 --- Model Predictive Control --- p.2 / Chapter 1.2 --- Neural Networks --- p.3 / Chapter 1.3 --- Existing studies --- p.6 / Chapter 1.4 --- Thesis structure --- p.7 / Chapter 2 --- Two Recurrent Neural Networks Approaches to Linear Model Predictive Control --- p.9 / Chapter 2.1 --- Problem Formulation --- p.9 / Chapter 2.1.1 --- Quadratic Programming Formulation --- p.10 / Chapter 2.1.2 --- Linear Programming Formulation --- p.13 / Chapter 2.2 --- Neural Network Approaches --- p.15 / Chapter 2.2.1 --- Neural Network Model 1 --- p.15 / Chapter 2.2.2 --- Neural Network Model 2 --- p.16 / Chapter 2.2.3 --- Control Scheme --- p.17 / Chapter 2.3 --- Simulation Results --- p.18 / Chapter 3 --- Model Predictive Control for Nonlinear Affine Systems Based on the Simplified Dual Neural Network --- p.22 / Chapter 3.1 --- Problem Formulation --- p.22 / Chapter 3.2 --- A Neural Network Approach --- p.25 / Chapter 3.2.1 --- The Simplified Dual Network --- p.26 / Chapter 3.2.2 --- RNN-based MPC Scheme --- p.28 / Chapter 3.3 --- Simulation Results --- p.28 / Chapter 3.3.1 --- Example 1 --- p.28 / Chapter 3.3.2 --- Example 2 --- p.29 / Chapter 3.3.3 --- Example 3 --- p.33 / Chapter 4 --- Nonlinear Model Predictive Control Using a Recurrent Neural Network --- p.36 / Chapter 4.1 --- Problem Formulation --- p.36 / Chapter 4.2 --- A Recurrent Neural Network Approach --- p.40 / Chapter 4.2.1 --- Neural Network Model --- p.40 / Chapter 4.2.2 --- Learning Algorithm --- p.41 / Chapter 4.2.3 --- Control Scheme --- p.41 / Chapter 4.3 --- Application to Mobile Robot Tracking --- p.42 / Chapter 4.3.1 --- Example 1 --- p.44 / Chapter 4.3.2 --- Example 2 --- p.44 / Chapter 4.3.3 --- Example 3 --- p.46 / Chapter 4.3.4 --- Example 4 --- p.48 / Chapter 5 --- Model Predictive Control of Unknown Nonlinear Dynamic Systems Based on Recurrent Neural Networks --- p.50 / Chapter 5.1 --- MPC System Description --- p.51 / Chapter 5.1.1 --- Model Predictive Control --- p.51 / Chapter 5.1.2 --- Dynamical System Identification --- p.52 / Chapter 5.2 --- Problem Formulation --- p.54 / Chapter 5.3 --- Dynamic Optimization --- p.58 / Chapter 5.3.1 --- The Simplified Dual Neural Network --- p.59 / Chapter 5.3.2 --- A Recursive Learning Algorithm --- p.60 / Chapter 5.3.3 --- Convergence Analysis --- p.61 / Chapter 5.4 --- RNN-based MPC Scheme --- p.65 / Chapter 5.5 --- Simulation Results --- p.67 / Chapter 5.5.1 --- Example 1 --- p.67 / Chapter 5.5.2 --- Example 2 --- p.68 / Chapter 5.5.3 --- Example 3 --- p.76 / Chapter 6 --- Model Predictive Control for Systems With Bounded Uncertainties Using a Discrete-Time Recurrent Neural Network --- p.81 / Chapter 6.1 --- Problem Formulation --- p.82 / Chapter 6.1.1 --- Process Model --- p.82 / Chapter 6.1.2 --- Robust MPC Design --- p.82 / Chapter 6.2 --- Recurrent Neural Network Approach --- p.86 / Chapter 6.2.1 --- Neural Network Model --- p.86 / Chapter 6.2.2 --- Convergence Analysis --- p.88 / Chapter 6.2.3 --- Control Scheme --- p.90 / Chapter 6.3 --- Simulation Results --- p.91 / Chapter 7 --- Summary and future works --- p.95 / Chapter 7.1 --- Summary --- p.95 / Chapter 7.2 --- Future works --- p.96 / Bibliography --- p.97
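Chapter 2.1.1 of this record formulates linear MPC as a quadratic program, which a recurrent network then solves at each sampling instant. A minimal sketch of that reduction and a projection-dynamics solver follows, assuming a double-integrator plant with box-bounded inputs; the model, weights, and horizon are illustrative, not taken from the thesis.

```python
import numpy as np

# Condensed linear MPC: stacking predictions x = F x0 + G u over a horizon N
# turns the finite-horizon cost into a box-constrained QP in u, which simple
# projection neurodynamics can solve online.
A = np.array([[1.0, 0.1], [0.0, 1.0]])        # illustrative double integrator
B = np.array([[0.005], [0.1]])
N = 10
Q = np.eye(2)                                 # state weight
R = 0.1 * np.eye(1)                           # input weight

F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
G = np.zeros((2 * N, N))
for i in range(N):
    for j in range(i + 1):
        G[2*i:2*i+2, j:j+1] = np.linalg.matrix_power(A, i - j) @ B

Qbar = np.kron(np.eye(N), Q)
Rbar = np.kron(np.eye(N), R)
x0 = np.array([1.0, 0.0])                     # current state; regulate to origin

H = G.T @ Qbar @ G + Rbar                     # QP Hessian
g = G.T @ Qbar @ (F @ x0)                     # QP linear term

u, dt, umax = np.zeros(N), 0.2, 0.5
for _ in range(2000):                         # du/dt = -u + P_box(u - (Hu + g))
    u = u + dt * (-u + np.clip(u - (H @ u + g), -umax, umax))
print(u[0])                                   # apply the first move, then re-solve
```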
306

Solutions of linear equations and a class of nonlinear equations using recurrent neural networks

Mathia, Karl 01 January 1996 (has links)
Artificial neural networks are computational paradigms inspired by biological neural networks (the human brain). Recurrent neural networks (RNNs) are characterized by neuron connections that include feedback paths. This dissertation uses the dynamics of RNN architectures for solving linear and certain nonlinear equations. Neural networks with linear dynamics (variants of the well-known Hopfield network) are used to solve systems of linear equations, where the network structure is adapted to match properties of the linear system in question. Nonlinear equations, in turn, are solved using the dynamics of nonlinear RNNs, which are based on feedforward multilayer perceptrons. Neural networks are well suited to implementation on special parallel hardware, due to their intrinsic parallelism. The RNNs developed here are implemented on a neural network processor (NNP) designed specifically for fast neural-type processing, and are applied to the inverse kinematics problem in robotics, demonstrating their superior performance over alternative approaches.
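The linear-dynamics network described in this abstract can be stated in a few lines: for Ax = b, the state follows the least-squares gradient flow dx/dt = -Aᵀ(Ax - b), whose equilibrium solves the system. A small sketch under that reading; the example matrix and Euler step are assumptions, and nothing here reflects the NNP hardware implementation.

```python
import numpy as np

# Hopfield-style linear dynamics for Ax = b: the state follows the
# least-squares gradient flow dx/dt = -A.T @ (A @ x - b), whose unique
# equilibrium is the solution (or least-squares solution) of the system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.zeros(2)
dt = 0.05                                     # small Euler step for stability
for _ in range(2000):
    x = x + dt * (-A.T @ (A @ x - b))
print(x, np.linalg.solve(A, b))               # flow equilibrium vs direct solve
```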
307

Speech recognition using hybrid system of neural networks and knowledge sources.

Darjazini, Hisham, University of Western Sydney, College of Health and Science, School of Engineering January 2006 (has links)
In this thesis, a novel hybrid Speech Recognition (SR) system called RUST (Recognition Using Syntactical Tree) is developed. RUST combines Artificial Neural Networks (ANN) with a Statistical Knowledge Source (SKS) for a small, topic-focused database. The hypothesis of this research was that the inclusion of syntactic knowledge, represented as probabilities of occurrence of phones in words and sentences, improves the performance of an ANN-based SR system. The lexicon of the first version of RUST (RUST-I) was developed with 1357 words, of which 549 were unique. These words were extracted from three topics (finance, physics and general reading material), and could be expanded or reduced (specialised). Experiments on RUST showed that including basic statistical phonemic/syntactic knowledge with an ANN phone recognisor increased the phone recognition rate to 87% and the word recognition rate to 78%. The first implementation of RUST was not optimal, so a second version (RUST-II) was implemented with an incremental learning algorithm, which has been shown to improve the phone recognition rate to 94%. The introduction of incremental learning to ANN-based speech recognition can be considered the most innovative feature of this research. In conclusion, this work has proved the hypothesis that the inclusion of probabilistic phonemic-syntactic knowledge and topic-related statistical data, using an adaptive phone recognisor based on neural networks, has the potential to improve the performance of a speech recognition system. / Doctor of Philosophy (PhD)
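The combination RUST describes (an ANN phone recognisor rescored by topic-specific phone statistics) can be illustrated schematically. Below is a hedged sketch of one such Bayes-style rescoring; the phone set and all probabilities are invented placeholders, not RUST's actual knowledge source or scoring rule.

```python
import numpy as np

# Schematic hybrid rescoring: ANN phone posteriors are combined with
# topic-specific phone occurrence statistics before the best phone is
# chosen. The phone set and all probabilities are invented placeholders.
phones = ["ah", "s", "t", "iy"]
ann_posterior = np.array([0.30, 0.28, 0.22, 0.20])  # from the phone recognisor
topic_prior = np.array([0.15, 0.40, 0.30, 0.15])    # from topic corpus counts

score = ann_posterior * topic_prior                 # Bayes-style combination
score /= score.sum()
print(phones[int(np.argmax(score))], score)         # "s" wins after rescoring
```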
308

On evolving modular neural networks

Salama, Rameri January 2000 (has links)
The basis of this thesis is the presumption that while neural networks are useful structures for modelling complex, highly non-linear systems, current methods of training them are inadequate in some problem domains. Genetic algorithms have been used to optimise both the weights and architectures of neural networks, but these approaches do not treat the neural network in a principled manner. In this thesis, I define the basis of computation within a neural network as a single neuron and its associated input connections. Sets of these neurons, stored in a matrix representation, comprise the building blocks that are transferred during one or more epochs of a genetic algorithm. I develop the concept of a Neural Building Block (NBB), and two new genetic algorithms are created that utilise this concept. The first genetic algorithm utilises the micro neural building block (micro-NBB): a unit consisting of one or more neurons and their input connections. The micro-NBB is transmitted through the process of crossover and hence requires the introduction of a new crossover operator; however, it cannot be stored as a reusable component and exists only as the product of that operator. The macro neural building block (macro-NBB) is utilised in the second genetic algorithm and encapsulates the idea that fit neural networks contain fit sub-networks that need to be preserved across multiple epochs. A macro-NBB is a micro-NBB that persists across multiple epochs, which necessitates the use of a genetic store and a new operator to reintroduce macro-NBBs into the population at random intervals. Once the theoretical presentation is complete, the newly developed genetic algorithms are used to evolve weights for a variety of neural network architectures to demonstrate the feasibility of the approach. Comparison of the new genetic algorithms with other approaches is very favourable on two problems: a multiplexer problem and a robot control problem.
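The micro-NBB defined in this abstract treats each neuron and its input connections as one row of a weight matrix, with crossover exchanging those rows intact. A minimal sketch of such a row-wise crossover operator; the function name, matrix sizes, and 50/50 donor choice are invented for illustration, not the thesis's code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative micro-NBB-style crossover: each hidden neuron and its input
# connections form one row of the weight matrix, and crossover exchanges
# whole rows, so a neuron's incoming weights always travel as one unit.
def neuron_block_crossover(parent_a, parent_b):
    child = parent_a.copy()
    donor = rng.random(parent_a.shape[0]) < 0.5   # choose a donor per neuron
    child[donor] = parent_b[donor]                # swap intact neuron rows
    return child

W_a = rng.normal(size=(4, 3))    # 4 hidden neurons, 3 input connections each
W_b = rng.normal(size=(4, 3))
print(neuron_block_crossover(W_a, W_b))
```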
309

Improving Time Efficiency of Feedforward Neural Network Learning

Batbayar, Batsukh, S3099885@student.rmit.edu.au January 2009 (has links)
Feedforward neural networks have been widely studied and used in many applications in science and engineering. The training of this type of network is mainly undertaken using the well-known backpropagation-based learning algorithms. One major problem with these algorithms is their slow training convergence speed, which hinders their application. Many researchers have developed improvements and enhancements to speed up training convergence, but the slow convergence problem has not been fully addressed. This thesis makes several contributions by proposing new backpropagation learning algorithms, based on the terminal attractor concept, to improve existing algorithms such as gradient descent and Levenberg-Marquardt. These new algorithms enable fast convergence both at a distance from and in close range of the ideal weights. In particular, a new fast convergence mechanism is proposed based on the fast terminal attractor concept. Comprehensive simulation studies demonstrate the effectiveness of the proposed backpropagation algorithms with terminal attractors. Finally, three practical applications (time series forecasting, character recognition and image interpolation) are chosen to show the practicality and usefulness of the proposed learning algorithms, with comprehensive comparative studies against existing algorithms.
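The terminal attractor concept this record builds on has a compact statement: if the training error obeys dE/dt = -kE^β with 0 < β < 1, it reaches zero in finite time, unlike the exponential tail of ordinary gradient descent. Below is a hedged sketch of one textbook way to impose such dynamics on a toy least-squares fit; the scaling rule shown is a generic terminal-attractor form, not necessarily the algorithms proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy least-squares fit with a terminal-attractor-style update: scaling the
# gradient step by E**beta / ||grad||**2 enforces dE/dt = -k * E**beta with
# 0 < beta < 1, so the continuous error dynamics reach E = 0 in finite time
# instead of decaying exponentially as under plain gradient descent.
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)
k, beta, dt = 5.0, 0.5, 0.01
for _ in range(5000):
    r = X @ w - y
    E = 0.5 * (r @ r)                         # squared training error
    if E < 1e-12:                             # guard: stop before grad vanishes
        break
    grad = X.T @ r
    w = w - dt * k * (E ** beta) * grad / (grad @ grad)
print(E)
```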
310

Resilient modulus prediction using neural network algorithm

Hanittinan, Wichai. January 2007 (has links)
Thesis (Ph. D.)--Ohio State University, 2007. / Title from first page of PDF file. Includes bibliographical references (p. 142-149).
