1

Some computational aspects of attractor memory

Rehn, Martin, January 2005
In this thesis I present novel mechanisms for certain computational capabilities of the cerebral cortex, building on the established notion of attractor memory. A sparse binary coding network for generating efficient representations of sensory input is presented. It is demonstrated that this network model closely reproduces receptive field shapes seen in primary visual cortex and that its representations are efficient with respect to storage in associative memory. I show how an autoassociative memory, augmented with dynamical synapses, can function as a general sequence learning network. I demonstrate how an abstract attractor memory system may be realized on the microcircuit level, and how it may be analyzed using tools similar to those used experimentally. I outline some predictions from the hypothesis that the macroscopic connectivity of the cortex is optimized for attractor memory function. I also discuss methodological aspects of modelling in computational neuroscience.
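The abstract does not give the model equations, but the flavour of an attractor memory operating on sparse binary codes can be illustrated in a few lines of NumPy. The sketch below is a generic Hopfield-style network with a covariance learning rule and a k-winners-take-all update; the network size, coding level, and retrieval procedure are illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch (not the thesis's actual model): a sparse binary
# attractor memory. Patterns with a small fraction of active units are
# stored with a covariance rule, and noisy cues are cleaned up by
# iterating a k-winners-take-all update.
import numpy as np

rng = np.random.default_rng(0)

N = 400          # number of units
a = 0.05         # coding level: fraction of active units per pattern
K = int(a * N)   # active units per pattern
M = 10           # number of stored patterns

# Random sparse binary patterns, exactly K active units each.
patterns = np.zeros((M, N))
for mu in range(M):
    patterns[mu, rng.choice(N, K, replace=False)] = 1.0

# Covariance (Hopfield-style) learning rule for sparse binary patterns.
W = (patterns - a).T @ (patterns - a) / N
np.fill_diagonal(W, 0.0)

def recall(cue, steps=10):
    """Iterate the network from a cue; k-WTA keeps exactly K units active."""
    s = cue.copy()
    for _ in range(steps):
        h = W @ s                       # synaptic input to every unit
        winners = np.argsort(h)[-K:]    # the K most excited units stay active
        s = np.zeros(N)
        s[winners] = 1.0
    return s

# Degrade a stored pattern: drop half of its active units, then recall.
target = patterns[0]
cue = target.copy()
active = np.flatnonzero(cue)
cue[rng.choice(active, len(active) // 2, replace=False)] = 0.0

out = recall(cue)
overlap = out @ target / K              # 1.0 means perfect retrieval
print(f"overlap with stored pattern after recall: {overlap:.2f}")
```

With this low memory load the degraded cue is pulled back to the stored pattern within a few updates, which is the basic pattern-completion behaviour that the thesis builds on.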
2

Aspects of memory and representation in cortical computation

Rehn, Martin, January 2006
In this thesis I take a modular approach to cortical function. I investigate how the cerebral cortex may realise a number of basic computational tasks, within the framework of its generic architecture. I present novel mechanisms for certain assumed computational capabilities of the cerebral cortex, building on the established notions of attractor memory and sparse coding. A sparse binary coding network for generating efficient representations of sensory input is presented. It is demonstrated that this network model closely reproduces the simple cell receptive field shapes seen in the primary visual cortex and that its representations are efficient with respect to storage in associative memory. I show how an autoassociative memory, augmented with dynamical synapses, can function as a general sequence learning network. I demonstrate how an abstract attractor memory system may be realised on the microcircuit level, and how it may be analysed using tools similar to those used experimentally. I outline some predictions from the hypothesis that the macroscopic connectivity of the cortex is optimised for attractor memory function. I also discuss methodological aspects of modelling in computational neuroscience.
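For the sequence-learning claim, the thesis augments an autoassociative memory with dynamical synapses. The toy sketch below substitutes a much older stand-in for the same effect, a delayed asymmetric weight term in the spirit of Sompolinsky and Kanter (1986) and Kleinfeld (1986); the parameters and the transition rule are illustrative assumptions, not the model actually analysed in the thesis.

```python
# Illustrative only: sequence replay from an attractor network using a
# delayed asymmetric ("next pattern") weight term instead of the
# dynamical synapses used in the thesis. Pattern generation and the
# k-WTA update follow the previous sketch.
import numpy as np

rng = np.random.default_rng(1)

N, a, M = 400, 0.05, 6
K = int(a * N)
TAU = 3          # delay (in update steps) of the asymmetric drive
LAM = 1.5        # gain of the asymmetric (sequence) weights

patterns = np.zeros((M, N))
for mu in range(M):
    patterns[mu, rng.choice(N, K, replace=False)] = 1.0

# Symmetric autoassociative weights: each pattern is a fixed point.
W_auto = (patterns - a).T @ (patterns - a) / N
np.fill_diagonal(W_auto, 0.0)

# Asymmetric weights: project each pattern onto its successor.
W_seq = (patterns[1:] - a).T @ (patterns[:-1] - a) / N

def kwta(h, k=K):
    s = np.zeros_like(h)
    s[np.argsort(h)[-k:]] = 1.0
    return s

# Start in the first pattern; the delayed asymmetric drive then walks the
# network through the stored sequence.
history = [patterns[0].copy()]
for _ in range(6 * TAU):
    delayed = history[max(0, len(history) - 1 - TAU)]   # state TAU steps ago
    h = W_auto @ history[-1] + LAM * W_seq @ delayed
    history.append(kwta(h))

# Index of the closest stored pattern at every step: climbs from 0 to M-1,
# dwelling a few updates on each pattern before the asymmetric drive tips it on.
overlaps = np.array([[s @ p / K for p in patterns] for s in history])
print(overlaps.argmax(axis=1))
```

Because the autoassociative term holds the current pattern while the asymmetric term only sees the state TAU steps back, the network lingers on each memory for a few updates before moving on, a crude analogue of the dwell times that synaptic dynamics provide in the thesis model.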
3

Connectionist modelling in cognitive science: an exposition and appraisal

Janeke, Hendrik Christiaan, 28 February 2003
This thesis explores the use of artificial neural networks for modelling cognitive processes. It presents an exposition of the neural network paradigm, and evaluates its viability in relation to the classical, symbolic approach in cognitive science. Classical researchers have approached the description of cognition by concentrating mainly on an abstract, algorithmic level of description in which the information processing properties of cognitive processes are emphasised. The approach is founded on seminal ideas about computation, and about algorithmic description emanating, amongst others, from the work of Alan Turing in mathematical logic. In contrast to the classical conception of cognition, neural network approaches are based on a form of neurocomputation in which the parallel distributed processing mechanisms of the brain are highlighted. Although neural networks are generally accepted to be more neurally plausible than their classical counterparts, some classical researchers have argued that these networks are best viewed as implementation models, and that they are therefore not of much relevance to cognitive researchers because information processing models of cognition can be developed independently of considerations about implementation in physical systems. In the thesis I argue that the descriptions of cognitive phenomena deriving from neural network modelling cannot simply be reduced to classical, symbolic theories. The distributed representational mechanisms underlying some neural network models have interesting properties such as similarity-based representation, content-based retrieval, and coarse coding which do not have straightforward equivalents in classical systems. Moreover, by placing emphasis on how cognitive processes are carried out by brain-like mechanisms, neural network research has not only yielded a new metaphor for conceptualising cognition, but also a new methodology for studying cognitive phenomena. Neural network simulations can be lesioned to study the effect of such damage on the behaviour of the system, and these systems can be used to study the adaptive mechanisms underlying learning processes. For these reasons, neural network modelling is best viewed as a significant theoretical orientation in the cognitive sciences, instead of just an implementational endeavour. / Psychology / D. Litt. et Phil. (Psychology)
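The lesioning methodology mentioned above is easy to demonstrate on any distributed associative memory. The sketch below is an illustrative Hopfield network, not a model from the thesis: it stores a handful of patterns, silences an increasing fraction of connections, and shows that recall quality degrades gradually rather than failing outright, which is the graceful-degradation property attributed here to distributed representations.

```python
# Illustrative lesioning experiment on a toy distributed memory
# (not a model from the thesis): knock out random connections and
# measure how recall of a stored pattern degrades.
import numpy as np

rng = np.random.default_rng(2)

N, M = 200, 10
patterns = rng.choice([-1.0, 1.0], size=(M, N))    # distributed +-1 patterns

# Standard Hopfield learning rule.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def recall(W, cue, steps=20):
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0                             # break ties consistently
    return s

target = patterns[0]
cue = target.copy()
flip = rng.choice(N, N // 10, replace=False)        # 10% noisy cue
cue[flip] *= -1

for lesion in [0.0, 0.2, 0.4, 0.6, 0.8]:
    W_lesioned = W.copy()
    mask = rng.random(W.shape) < lesion             # silence this fraction of weights
    W_lesioned[mask] = 0.0
    overlap = recall(W_lesioned, cue) @ target / N  # 1.0 = perfect recall
    print(f"lesion {lesion:.0%}: overlap {overlap:+.2f}")
```

The same simulation style underlies the "lesion the model and compare with patient data" methodology the thesis discusses: because the stored information is spread across many weights, performance falls off smoothly with damage instead of collapsing at a single point of failure.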