The modelling of strictly sequential experimental tasks, such as serial list learning, has underscored a potential problem for connectionism: namely, the inability of connectionist networks to retain old information during the acquisition of new material (McCloskey & Cohen, 1989; Ratcliff, 1990). While humans also suffer from interference, connectionist networks experience a much greater loss of old material; this excessive retroactive interference is termed the sequential learning problem. This paper reviews two papers arguing that connectionist networks are unable to overcome the sequential learning problem, and five papers offering potential solutions. Simulations exploring issues arising from these reviews are described in the later part of the paper. It is true that connectionist models do suffer from the sequential learning problem. However, it appears that the problem is found only with simulations employing a strictly sequential training regime and involving small, unstructured item sets. Hence, there is no reason to believe that more realistic simulations of large, structured domains, such as language, will suffer from the sequential learning problem.
Identifier | oai:union.ndltd.org:LACETR/oai:collectionscanada.gc.ca:QMM.60076
Date | January 1991
Creators | Hetherington, Phil A. (Phillip Alan)
Publisher | McGill University
Source Sets | Library and Archives Canada ETDs Repository / Centre d'archives des thèses électroniques de Bibliothèque et Archives Canada
Language | English
Detected Language | English
Type | Electronic Thesis or Dissertation
Format | application/pdf
Coverage | Master of Arts (Department of Psychology)
Rights | All items in eScholarship@McGill are protected by copyright with all rights reserved unless otherwise indicated.
Relation | alephsysno: 001226377, proquestno: AAIMM67811, Theses scanned by UMI/ProQuest