Phase space techniques in neural network models

We present two calculations based on the phase-space-of-interactions treatment of neural network models. By way of introduction we first discuss the type of neural network models we wish to study, and the analytical techniques made available to us by the statistical mechanics of disordered systems. We then detail a neural network which models a content addressable memory, and sketch the mathematical methods we shall use. The model is a mathematical realisation of a neural network whose synaptic efficacies are optimised in its phase space of interactions through some training function. The first calculation examines how the basin of attraction of such a content addressable memory can be enlarged by the use of noisy external fields. These fields are applied separately during the training and retrieval phases, and their influences compared. Measured by the number of memory patterns which the network's dynamics can retrieve from a microscopic initial overlap, we shall show that content addressability can be substantially improved. The second calculation concerns the use of dual distribution functions for two networks with different constraints on their synapses, but required to store the same set of memory patterns. This technique allows us to see how the two networks accommodate the demands imposed on them, and whether they arrive at radically different solutions. The problem we choose is aimed at resolving a paradox in the sign-constrained model, and eventually succeeds in doing so.
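To make the basin-of-attraction picture concrete, the following is a minimal sketch (not taken from the thesis) of retrieval in a binary attractor network: the state is initialised with a small overlap m0 with one stored pattern, and zero-temperature dynamics either pull it into that memory or fail to. The Hebbian couplings and the parameter values N, P and m0 are illustrative assumptions only; the thesis instead optimises the couplings in the phase space of interactions.

    import numpy as np

    rng = np.random.default_rng(0)

    N, P = 500, 25                          # neurons, stored patterns (alpha = P/N = 0.05)
    xi = rng.choice([-1, 1], size=(P, N))   # random binary memory patterns

    # Hebbian couplings as a runnable stand-in; the thesis optimises J in the
    # phase space of interactions rather than using this rule.
    J = (xi.T @ xi) / N
    np.fill_diagonal(J, 0.0)

    def overlap(S, pattern):
        """Overlap m = (1/N) * sum_i xi_i S_i between state and a memory."""
        return float(S @ pattern) / N

    def retrieve(S, sweeps=20):
        """Zero-temperature asynchronous dynamics: S_i <- sign(sum_j J_ij S_j)."""
        for _ in range(sweeps):
            for i in rng.permutation(N):
                S[i] = 1.0 if J[i] @ S >= 0 else -1.0
        return S

    # Start from pattern 0 corrupted so that the initial overlap is roughly m0:
    # flipping each spin with probability (1 - m0)/2 gives expected overlap m0.
    m0 = 0.2
    flip = rng.random(N) < (1 - m0) / 2
    S = np.where(flip, -xi[0], xi[0]).astype(float)

    print(f"initial overlap: {overlap(S, xi[0]):.3f}")
    S = retrieve(S)
    print(f"final overlap:   {overlap(S, xi[0]):.3f}")

Whether the final overlap reaches one depends on whether the initial state lies inside the pattern's basin of attraction; the thesis's first calculation asks how noisy external fields, applied during training or retrieval, enlarge that basin so that even a microscopic initial overlap suffices.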

Identifier: oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:664121
Date: January 1992
Creators: Yau, Hon Wah
Publisher: University of Edinburgh
Source Sets: Ethos UK
Detected Language: English
Type: Electronic Thesis or Dissertation
Source: http://hdl.handle.net/1842/14713