Most artificial neural networks suffer from catastrophic forgetting, where previously learnt information is suddenly and completely lost when new information is learnt. Memory in real neural systems does not appear to exhibit this behaviour. In this thesis we discuss the problem of catastrophic forgetting in Hopfield networks and investigate several potential solutions. We extend the pseudorehearsal solution of Robins (1995), enabling it to work in this attractor network, and compare the results with the unlearning procedure proposed by Crick and Mitchison (1983). We then explore a familiarity measure based on the energy profile of the learnt patterns. By using the ratio of high-energy to low-energy parts of the network, we can robustly distinguish the learnt patterns from the large number of spurious "fantasy" patterns that are common in these networks. This energy-ratio measure is then used to improve the pseudorehearsal solution so that it can store 0.3N patterns in a Hopfield network of N units, significantly more than previously proposed solutions to catastrophic forgetting allow. Finally, we explore links between the mechanisms investigated in this thesis and the consolidation of newly learnt material during sleep.
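To make the energy-ratio idea concrete, the sketch below stores +/-1 patterns in a small Hopfield network with the standard Hebbian rule and scores a probe state by the ratio of its high-energy to low-energy units. This is a minimal illustration, not the thesis's implementation: the per-unit decomposition of the energy, the zero threshold, and all function names are assumptions made for the example, since the abstract does not give the exact formulation.

import numpy as np

rng = np.random.default_rng(42)

def train_hebbian(patterns):
    # Standard Hebbian outer-product storage for +/-1 patterns,
    # with the self-connections zeroed as usual.
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def unit_energies(W, s):
    # Per-unit contributions to the Hopfield energy E = -1/2 s^T W s.
    return -0.5 * s * (W @ s)

def energy_ratio(W, s, threshold=0.0):
    # Familiarity score: ratio of high-energy to low-energy units.
    # A learnt pattern sits in a deep, uniform energy minimum, so nearly
    # all units contribute low (negative) energy and the ratio is near 0;
    # a spurious or random state mixes high- and low-energy units.
    e = unit_energies(W, s)
    high = int(np.sum(e > threshold))
    low = int(np.sum(e <= threshold))
    return high / max(low, 1)

# Demo: learnt patterns score low; an unrelated random state scores high.
n, p = 100, 10
patterns = rng.choice([-1, 1], size=(p, n))
W = train_hebbian(patterns)
print(energy_ratio(W, patterns[0]))                  # close to 0
print(energy_ratio(W, rng.choice([-1, 1], size=n)))  # close to 1

Under these assumptions, thresholding the ratio gives a cheap familiarity test that could, for example, be used to filter the pseudopatterns rehearsed during pseudorehearsal; the thesis develops the measure and its use in detail.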
Identifier | oai:union.ndltd.org:ADTP/217851 |
Date | January 2007 |
Creators | McCallum, Simon |
Publisher | University of Otago. Department of Computer Science |
Source Sets | Australasian Digital Theses Program |
Language | English |
Detected Language | English |
Rights | http://policy01.otago.ac.nz/policies/FMPro?-db=policies.fm&-format=viewpolicy.html&-lay=viewpolicy&-sortfield=Title&Type=Academic&-recid=33025&-find, Copyright Simon McCallum