1

An Attractor Memory Model of Neocortex

Johansson, Christopher, January 2006
This thesis presents an abstract model of the mammalian neocortex. The model was constructed by taking a top-down view of the cortex, where it is assumed that the cortex, to a first approximation, works as a system with attractor dynamics. The model deals with the processing of static inputs from the perspectives of biological mapping, algorithmic formulation, and physical implementation, but it does not consider the temporal aspects of these inputs. The purpose of the model is twofold: firstly, it is an abstract model of the cortex and as such it can be used to evaluate hypotheses about cortical function and structure; secondly, it forms the basis of a general information processing system that may be implemented in computers. The characteristics of this model are studied both analytically and by simulation experiments, and we also discuss its parallel implementation on cluster computers as well as in digital hardware.

The basic design of the model is based on a thorough literature study of the anatomy and physiology of the mammalian cortex. We review both the layered and columnar structure of the cortex and the long- and short-range connectivity between neurons. Characteristics of the cortex that define its computational complexity, such as the time-scales of the cellular processes that transport ions in and out of neurons and give rise to electrical signals, are also investigated. In particular, we study the size of the cortex in terms of neuron and synapse numbers in five mammals: mouse, rat, cat, macaque, and human.

The cortical model is implemented as a connectionist network in which the functional units correspond to cortical minicolumns, and these are in turn grouped into hypercolumn modules. The learning rules used in the model are local in space and time, which makes them biologically plausible and also allows for efficient parallel implementation. We study the implemented model both as a single- and as a multi-layered network. Instances of the model with sizes up to that of a rat-cortex equivalent are implemented and run on cluster computers at 23% of real time.

We demonstrate on tasks involving image data that the cortical model can be used for meaningful computations such as noise reduction, pattern completion, prototype extraction, hierarchical clustering, classification, and content-addressable memory, and we show that even the largest cortex-equivalent instances of the model can perform these types of computations. Important characteristics of the model are that it is insensitive to limited errors in the computational hardware and to noise in the input data. Furthermore, it can learn from examples and is self-organizing to some extent. The proposed model contributes to the quest to understand the cortex, and it is also a first step towards a brain-inspired computing system that can be implemented in the molecular-scale computers of tomorrow.

The main contributions of this thesis are: (i) a review of the size, modularization, and computational structure of the mammalian neocortex; (ii) an abstract generic connectionist network model of the mammalian cortex; (iii) a framework for a brain-inspired self-organizing information processing system; (iv) theoretical work on the properties of the model when used as an autoassociative memory; (v) theoretical insights on the anatomy and physiology of the cortex; (vi) efficient implementation techniques and simulations of cortex-sized instances; and (vii) a fixed-point arithmetic implementation of the model that can be used in digital hardware.
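To make the architecture concrete, the following sketch (Python/NumPy) illustrates units grouped into hypercolumn modules, a local Hebbian-style outer-product rule standing in for the model's actual learning rules, and attractor dynamics used for pattern completion. It is an illustration under stated assumptions, not the thesis implementation: the network sizes, the number of stored patterns, and the hard winner-take-all update are choices made for brevity.

```python
import numpy as np

# Illustrative sketch only (not the thesis code): an attractor network whose units
# play the role of minicolumns, grouped into hypercolumn modules, trained with a
# local Hebbian-style outer-product rule and used for pattern completion.

H, M = 10, 8                     # assumed: 10 hypercolumns, 8 minicolumn units each
N = H * M
rng = np.random.default_rng(0)

def random_pattern():
    """A sparse, modular code: exactly one active unit per hypercolumn."""
    p = np.zeros(N)
    for h in range(H):
        p[h * M + rng.integers(M)] = 1.0
    return p

patterns = [random_pattern() for _ in range(5)]

# Local learning: each weight depends only on its own pre- and post-synaptic
# activity ("local in space and time"), here a simple Hebbian outer product.
W = np.zeros((N, N))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0.0)

def recall(cue, steps=10):
    """Attractor dynamics: iterate the recurrent drive and keep only the most
    strongly driven unit in each hypercolumn active (winner-take-all)."""
    s = cue.copy()
    for _ in range(steps):
        drive = W @ s
        s = np.zeros(N)
        for h in range(H):
            block = drive[h * M:(h + 1) * M]
            s[h * M + int(np.argmax(block))] = 1.0
    return s

# Pattern completion: corrupt a stored pattern in three hypercolumns, then recover it.
cue = patterns[0].copy()
for h in range(3):
    cue[h * M:(h + 1) * M] = 0.0
    cue[h * M + rng.integers(M)] = 1.0
print("fraction of hypercolumns correct after recall:",
      recall(cue) @ patterns[0] / H)
```

Cueing the recall loop with a partially corrupted pattern should recover the stored pattern in most hypercolumns, which is the pattern-completion behaviour described in the abstract.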
2

Effects of Network Size in a Recurrent Bayesian Confidence Propagating Neural Network with Two Synaptic Traces

Laius Lundgren, William; Karlsson, Ludwig, January 2021
A modular Recurrent Bayesian Confidence Propagating Neural Network (BCPNN) with two synaptic time traces is a computational neural network that can serve as a model of biological short-term memory. The units in the network are grouped into modules called hypercolumns, within which there is a competitive winner-takes-all mechanism.

In this work, the network's capacity to store sequential memories is investigated while varying the size and number of hypercolumns in the network. The network is trained on sets of temporal sequences, where each sequence consists of a set of symbols represented as semi-stable attractor-state patterns in the network, and it is evaluated by its ability to later recall the sequences.

For a given distribution of training sequences, the network's ability to store and recall sequences was seen to increase significantly with the size of the hypercolumns. As the number of hypercolumns was increased, the storage capacity increased up to a clear level in most cases; beyond this point it remained constant and did not improve with the addition of more hypercolumns (for a given sequence distribution). The storage capacity was also seen to depend strongly on the distribution of the sequences. / Bachelor's degree project in electrical engineering 2021, KTH, Stockholm
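As a rough illustration of the mechanism described above, the sketch below (Python/NumPy) uses two exponentially filtered activity traces with different time constants, a simplified BCPNN-style log-odds weight computed from running probability estimates, and winner-takes-all competition within each hypercolumn to replay a short cyclic training sequence from its first symbol. It is an assumption-laden illustration rather than the network studied in the report: all sizes, time constants, learning rates, and the exact weight expression are illustrative choices.

```python
import numpy as np

# Illustrative sketch only (not the report's implementation): a recurrent network
# whose units are grouped into hypercolumns with winner-take-all competition, and
# whose weights are learned from two exponentially filtered activity traces with
# different time constants, in the spirit of a BCPNN with two synaptic traces.

H, M = 4, 5                               # assumed: 4 hypercolumns, 5 units (symbols) each
N = H * M
dt, tau_fast, tau_slow = 1.0, 1.0, 2.0    # assumed trace time constants
lr = 0.05                                 # assumed learning rate for the estimates

def encode(symbol):
    """A symbol activates one unit in every hypercolumn."""
    a = np.zeros(N)
    for h in range(H):
        a[h * M + symbol] = 1.0
    return a

z_fast = np.zeros(N)            # fast trace: tracks the current activity
z_slow = np.zeros(N)            # slow trace: lags behind, remembers recent symbols
p_i = np.full(N, 1e-3)          # running estimate of unit activation probability
p_ij = np.full((N, N), 1e-6)    # running estimate of (past i, present j) coactivation

for symbol in [0, 1, 2, 3, 4] * 20:            # an assumed cyclic training sequence
    a = encode(symbol)
    z_fast += dt / tau_fast * (a - z_fast)     # with tau_fast = dt this follows a exactly
    p_i += lr * (z_fast - p_i)
    p_ij += lr * (np.outer(z_slow, z_fast) - p_ij)   # past (slow) vs. present (fast)
    z_slow += dt / tau_slow * (a - z_slow)     # updated last, so it lags the activity

# Simplified BCPNN-style log-odds weights from the probability estimates.
W = np.log(p_ij / np.outer(p_i, p_i))

def step(state):
    """One recall step: propagate, then winner-take-all inside each hypercolumn."""
    drive = W.T @ state         # W[i, j] connects past unit i to present unit j
    out = np.zeros(N)
    for h in range(H):
        block = drive[h * M:(h + 1) * M]
        out[h * M + int(np.argmax(block))] = 1.0
    return out

# Cue with the first symbol and let the network replay the rest of the sequence.
state = encode(0)
for _ in range(4):
    state = step(state)
    print(int(np.argmax(state[:M])))    # expected to step through 1, 2, 3, 4
```

Because the slow trace still carries the previous symbol while the fast trace encodes the current one, the learned weights are asymmetric; this asymmetry is what lets the recurrent dynamics step forward through the sequence instead of settling into a single fixed attractor.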
