
Negative feedback as an organising principle for artificial neural networks

We investigate the properties of an unsupervised neural network that uses simple Hebbian learning and negative feedback of activation in order to self-organise. The negative feedback circumvents a well-known difficulty of Hebbian learning systems: positive feedback causes a network's weights to grow without bound. We show, both analytically and experimentally, that the weights of networks with this architecture not only converge but converge to values which give the networks important information-processing properties: linear versions of the model are shown to perform a Principal Component Analysis of the input data, while a non-linear version is shown to be capable of Exploratory Projection Pursuit. While there is no claim that the networks described herein capture the complexity found in biological networks, we believe the networks investigated are not incompatible with known neurobiology. The main thrust of the thesis, however, is a mathematical analysis of the emergent properties of the network; this analysis is supported by empirical evidence throughout.
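The mechanism the abstract describes can be sketched numerically. Below is a minimal single-output illustration, assuming the simplest form of the idea: the output is a weighted sum of the inputs, the output is fed back negatively to the inputs, and simple Hebbian learning is applied to the residual. All names and parameter values (the learning rate, the toy data) are illustrative choices, not taken from the thesis; with one output neuron this residual rule keeps the weight vector bounded and aligns it with the first principal component of the data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: zero-mean 2-D inputs with most variance along the first axis
X = rng.normal(size=(5000, 2)) * np.array([3.0, 0.5])

w = rng.normal(size=2) * 0.1   # small random initial weights
eta = 0.005                    # illustrative learning rate

for x in X:
    y = w @ x          # feedforward activation of the output neuron
    e = x - w * y      # negative feedback: subtract the output's
                       # contribution back at the inputs
    w += eta * y * e   # simple Hebbian update on the residual e

# Without the feedback term, the update eta * y * x would make
# ||w|| diverge; with it, ||w|| self-stabilises near 1 and w aligns
# with the direction of greatest variance.
```

The feedback term `x - w * y` is what prevents the runaway growth mentioned in the abstract: once the output fully accounts for the variance along `w`, the residual along that direction vanishes and the update stops increasing the weight.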

Identifier: oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:363303
Date: January 1995
Creators: Fyfe, Colin
Publisher: University of Strathclyde
Source Sets: Ethos UK
Detected Language: English
Type: Electronic Thesis or Dissertation
Source: http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=21390