
Optimal and adaptive radial basis function neural networks

The optimisation and adaptation of single-hidden-layer feed-forward neural networks employing radial basis activation functions (RBFNs) was investigated. Previous work on RBFNs has focused mainly on problems with large data sets, and the training algorithms developed for large data sets prove unreliable when only a small number of observations is available, a situation frequently encountered in process engineering. The primary objective of this study was therefore the development of efficient and reliable learning algorithms for training RBFNs with small and noisy data sets. It was demonstrated that regularisation is essential in order to filter out the noise and prevent over-fitting. Selecting the appropriate level of regularisation, lambda*, from a small data set presents a major challenge. Leave-one-out cross-validation was considered as a potential means for automatic selection of lambda*, and the computational burden of this selection was significantly reduced by a novel application of the generalised singular value decomposition.

The exact solution of the multivariate linear regularisation problem can be represented as a single-hidden-layer neural network, the Regularisation Network, with one neurone for each distinct exemplar. A new formula was developed for automatic selection of the regularisation level of a Regularisation Network with given non-linearities. It was shown that the performance of a Regularisation Network depends critically on the non-linear parameters of the activation function employed, a point which has received surprisingly little attention. It was demonstrated that a measure of the effective degrees of freedom, df(lambda*, alpha), of a Regularisation Network can be used to select the appropriate width alpha of the local radial basis functions from the data alone.

The one-to-one correspondence between the number of exemplars and the number of hidden neurones of a Regularisation Network may prove computationally prohibitive. The remedy is to use a network with a smaller number of neurones, the Generalised Radial Basis Function Network (GRBFN). Training a GRBFN ultimately reduces to a large-scale non-linear optimisation problem. A novel sequential back-fit algorithm was developed for training GRBFNs, which allows the optimisation to proceed one neurone at a time. The new algorithm was tested with very promising results, and its application to a simple chemical engineering process was demonstrated.

In some applications the overall response is composed of sharp localised features superimposed on a gently varying global background. Existing multivariate regression techniques, as well as conventional neural networks, aim at filtering the noise and recovering the overall response. An initial attempt was made at developing an Adaptive GRBFN to separate the local and global features. An efficient algorithm was obtained simply by insisting that all the activation functions responsible for capturing the global trend should lie in the null space of the differential operator that generates the activation function of the kernel-based neurones. It was demonstrated that the proposed algorithm performs extremely well in the absence of strong global input interactions.
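To make the regularised fitting and automatic selection of lambda* concrete, the sketch below builds a Gaussian Regularisation Network with one neurone per exemplar and scores candidate regularisation levels by exact leave-one-out cross-validation, also reporting the effective degrees of freedom as the trace of the hat matrix. This is a minimal Python/NumPy sketch under stated assumptions, not the thesis's algorithm: it uses an ordinary singular value decomposition rather than the generalised SVD, and the function names (rbf_design, loocv_ridge), the test data, and the grid of candidate lambdas are all illustrative.

```python
import numpy as np

def rbf_design(X, centres, alpha):
    """Gaussian radial basis design matrix; alpha is the common width."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * alpha ** 2))

def loocv_ridge(H, y, lambdas):
    """Exact leave-one-out CV score and effective degrees of freedom of the
    regularised least-squares fit, for each candidate lambda, from one SVD."""
    U, s, _ = np.linalg.svd(H, full_matrices=False)
    Uty = U.T @ y
    scores, dofs = [], []
    for lam in lambdas:
        f = s ** 2 / (s ** 2 + lam)       # per-mode shrinkage factors
        y_hat = U @ (f * Uty)             # fitted values
        h = (U ** 2) @ f                  # diagonal of the hat matrix
        e_loo = (y - y_hat) / (1.0 - h)   # closed-form LOO residuals
        scores.append(float(np.mean(e_loo ** 2)))
        dofs.append(float(f.sum()))       # df(lambda, alpha) = trace of hat matrix
    return np.array(scores), np.array(dofs)

# A small, noisy data set of the kind the thesis targets (25 exemplars).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(25, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(25)

H = rbf_design(X, X, alpha=0.4)           # one neurone per exemplar
lambdas = np.logspace(-8, 1, 60)
scores, dofs = loocv_ridge(H, y, lambdas)
best = int(np.argmin(scores))
print(f"lambda* = {lambdas[best]:.3g}, df = {dofs[best]:.2f}")
```

One plausible reading of how df(lambda*, alpha) guides the choice of width is to repeat this selection over a grid of alpha values and prefer the width whose effective degrees of freedom, at the selected lambda*, match the complexity the data can support; the abstract does not spell out the thesis's exact criterion.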
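The sequential back-fit idea for GRBFNs can likewise be sketched generically: hold all neurones but one fixed, refit that neurone against the current residual, and sweep over the neurones until convergence. The code below, which continues from the sketch above (it reuses np, X and y), is a hedged illustration of that general scheme rather than the algorithm developed in the thesis; the width grid search, the small ridge penalty lam, and the helper name backfit_grbfn are all assumptions.

```python
def backfit_grbfn(X, y, m=6, alpha0=0.4, n_sweeps=20, lam=1e-3, seed=0):
    """Generic one-neurone-at-a-time (back-fit) training of a Gaussian GRBFN
    with m << n neurones. A sketch only, not the thesis's algorithm."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=m, replace=False)].copy()
    widths = np.full(m, alpha0)
    w = np.zeros(m)

    def phi(c, a):
        return np.exp(-((X - c) ** 2).sum(axis=1) / (2.0 * a ** 2))

    for _ in range(n_sweeps):
        for j in range(m):
            # Residual with neurone j removed.
            r = y - sum(w[k] * phi(centres[k], widths[k])
                        for k in range(m) if k != j)
            # Refit neurone j: closed-form ridge weight plus a crude
            # three-point search over its width.
            best = (np.inf, w[j], widths[j])
            for a in widths[j] * np.array([0.5, 1.0, 2.0]):
                p = phi(centres[j], a)
                wj = (p @ r) / (p @ p + lam)
                sse = float(((r - wj * p) ** 2).sum())
                if sse < best[0]:
                    best = (sse, wj, a)
            _, w[j], widths[j] = best
    return centres, widths, w

centres, widths, w = backfit_grbfn(X, y)
```

Because each inner step touches only one neurone, every update is a cheap low-dimensional fit, which is the appeal of a back-fit decomposition of the otherwise large-scale non-linear optimisation problem.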

Identifier: oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:326880
Date: January 2000
Creators: Shahsavand, Akbar
Publisher: University of Surrey
Source Sets: Ethos UK
Detected Language: English
Type: Electronic Thesis or Dissertation
Source: http://epubs.surrey.ac.uk/844452/
