
Optimisation of the predictive ability of artificial neural network (ANN) models: A comparison of three ANN programs and four classes of training algorithm.

The purpose of this study was to determine whether artificial neural network (ANN) programs implementing different backpropagation algorithms and default settings are capable of generating equivalent highly predictive models. Three ANN packages were used: INForm, CAD/Chem and MATLAB. Twenty variants of gradient descent, conjugate gradient, quasi-Newton and Bayesian regularisation algorithms were used to train networks containing a single hidden layer of 3-12 nodes.
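As a rough illustration of the kind of experiment described above (a single hidden layer of 3-12 nodes, trained with different classes of algorithm), the following Python/scikit-learn sketch trains one network per available solver. The synthetic data, the node count and the solver names are assumptions for illustration only; scikit-learn's solvers (L-BFGS, a quasi-Newton method; stochastic gradient descent; Adam) overlap only partially with the twenty algorithm variants compared in the study, and Bayesian regularisation is not among them.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    # Synthetic formulation-style data (assumption): 4 inputs, 1 response.
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(120, 4))
    y = X @ np.array([1.5, -2.0, 0.7, 0.3]) + 0.05 * rng.normal(size=120)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    # One single-hidden-layer network per training-algorithm class available here.
    for solver in ("lbfgs", "sgd", "adam"):          # quasi-Newton, gradient descent, adaptive GD
        net = MLPRegressor(hidden_layer_sizes=(6,),  # 6 hidden nodes, within the 3-12 range studied
                           solver=solver, max_iter=5000, random_state=0)
        net.fit(X_tr, y_tr)
        print(solver, mean_squared_error(y_te, net.predict(X_te)))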

All INForm and CAD/Chem models trained satisfactorily for tensile strength, disintegration time and percentage dissolution at 15, 30, 45 and 60 min. Similarly, acceptable training was obtained for MATLAB models using Bayesian regularisation. Training of MATLAB models with other algorithms was erratic. This effect was attributed to a tendency for the MATLAB implementation of the algorithms to attenuate training in local minima of the error surface. Predictive models for tablet capping and friability could not be generated.

The most predictive models from each ANN package varied with respect to the optimum network architecture and training algorithm. No significant differences were found in the predictive ability of these models. It is concluded that comparable models are obtainable from different ANN programs provided that both the network architecture and training algorithm are optimised. A broad strategy for optimisation of the predictive ability of an ANN model is proposed.
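The proposed optimisation strategy is not spelled out in this record. A minimal sketch of one plausible reading, jointly searching over hidden-layer size and training algorithm and keeping the combination with the lowest cross-validated error, is given below; the grid values, solvers, data and scoring metric are assumptions.

    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.neural_network import MLPRegressor

    # Synthetic data standing in for the tablet-formulation responses (assumption).
    rng = np.random.default_rng(1)
    X = rng.uniform(size=(120, 4))
    y = X @ np.array([1.5, -2.0, 0.7, 0.3]) + 0.05 * rng.normal(size=120)

    # Joint search over architecture (3-12 hidden nodes) and training algorithm.
    param_grid = {
        "hidden_layer_sizes": [(n,) for n in range(3, 13)],
        "solver": ["lbfgs", "sgd", "adam"],   # stand-ins for the algorithm classes compared
    }
    search = GridSearchCV(MLPRegressor(max_iter=5000, random_state=0),
                          param_grid, cv=5, scoring="neg_mean_squared_error")
    search.fit(X, y)
    print(search.best_params_, -search.best_score_)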

Identifier oai:union.ndltd.org:BRADFORD/oai:bradscholars.brad.ac.uk:10454/3011
Date January 2005
Creators Rowe, Raymond C., Plumb, A.P., York, Peter, Brown, M.
Source Sets Bradford Scholars
Language English
Detected Language English
Type Article, No full-text available in the repository
