Storing information through complex dynamics in recurrent neural networks
Molter, Colin (20 May 2005). Doctorate in applied sciences.
The neural net computer simulations presented here rest on a set of assumptions that have been voiced over the last twenty years in information processing, neurophysiology and cognitive science. First, neural networks and their attractor dynamics are the natural way the brain encodes information: any item to be stored should be coded, in one way or another, in one of the brain's dynamical attractors, and retrieved by stimulating the net so as to trap its dynamics in the desired item's basin of attraction. Second, learning of the synaptic matrix should rest on a local Hebbian mechanism. Third, chaos is present and beneficial: although very simply produced, a chaotic regime inherently embeds an infinite number of unstable cyclic regimes that can be exploited for coding information. Moreover, the network spontaneously and erratically wanders among these unstable regimes, rapidly proposing alternative responses to external stimuli and switching easily from one potential attractor to another in response to any incoming stimulus.
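To make these assumptions concrete, the following is a minimal sketch of the kind of simulation the abstract describes: a fully recurrent net iterated in discrete time, a purely local Hebbian weight update, and retrieval by letting a stimulus fall into an attractor's basin. The tanh activation, the learning rate, and all function names are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np

def step(W, x):
    """One synchronous update of the recurrent network state."""
    return np.tanh(W @ x)

def hebbian_update(W, x_pre, x_post, lr=0.01):
    """Local Hebbian rule: each weight changes only as a function of
    the activities of the two neurons it connects."""
    return W + lr * np.outer(x_post, x_pre)

def retrieve(W, stimulus, n_steps=200):
    """Stimulate the net and let the dynamics run; if the stimulus lies
    in an attractor's basin, the trajectory is trapped by that attractor."""
    x = np.asarray(stimulus, dtype=float)
    trajectory = [x]
    for _ in range(n_steps):
        x = step(W, x)
        trajectory.append(x)
    return np.array(trajectory)
```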
In this thesis, it is shown experimentally that the more information is stored in robust cyclic attractors, the more chaos appears as a background regime, itinerating erratically among brief appearances of these attractors. Chaos thus appears not as the cause but as the consequence of learning, albeit a helpful one that widens the net's encoding capacity. To learn the information to be stored, an unsupervised Hebbian learning algorithm is introduced. By leaving unprescribed which attractor is to be associated with each item of the feeding data, promising results are obtained in terms of storage capacity.
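Given such a simulation, one can at least check whether a retrieval trajectory has been trapped in a cyclic attractor or is still wandering in the chaotic background regime. The sketch below is a hypothetical diagnostic building on the functions above; the tolerance, transient length, and maximal period tested are arbitrary assumptions.

```python
def find_cycle(trajectory, tol=1e-6, max_period=50):
    """Crude diagnostic: discard a transient, then look for the smallest
    period at which the final state recurs. Returns the period of the
    cyclic attractor, or None if the tail looks aperiodic (chaos-like)."""
    tail = trajectory[len(trajectory) // 2:]
    last = tail[-1]
    for p in range(1, min(max_period, len(tail) - 1) + 1):
        if np.linalg.norm(tail[-1 - p] - last) < tol:
            return p
    return None

# Example: probe a random net with a random stimulus and classify the
# asymptotic regime (network size, scaling, and seed are hypothetical).
rng = np.random.default_rng(0)
n = 20
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
traj = retrieve(W, rng.normal(size=n), n_steps=500)
period = find_cycle(traj)
print(f"cyclic attractor, period {period}" if period else "aperiodic tail (chaos-like)")
```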