1

An Attractor Memory Model of Neocortex

Johansson, Christopher, January 2006
This thesis presents an abstract model of the mammalian neocortex. The model was constructed by taking a top-down view of the cortex, assuming that cortex, to a first approximation, works as a system with attractor dynamics. The model deals with the processing of static inputs from the perspectives of biological mapping, algorithms, and physical implementation, but it does not consider the temporal aspects of these inputs. The purpose of the model is twofold: firstly, it is an abstract model of the cortex and as such can be used to evaluate hypotheses about cortical function and structure; secondly, it forms the basis of a general information processing system that may be implemented in computers. The characteristics of this model are studied both analytically and by simulation experiments, and we also discuss its parallel implementation on cluster computers as well as in digital hardware.

The basic design of the model is based on a thorough literature study of the anatomy and physiology of the mammalian cortex. We review both the layered and columnar structure of cortex and the long- and short-range connectivity between neurons. We also investigate characteristics of cortex that define its computational complexity, such as the time-scales of the cellular processes that transport ions in and out of neurons and give rise to electric signals. In particular, we study the size of cortex in terms of neuron and synapse numbers in five mammals: mouse, rat, cat, macaque, and human.

The cortical model is implemented as a connectionist network in which the functional units correspond to cortical minicolumns, and these are in turn grouped into hypercolumn modules. The learning rules used in the model are local in space and time, which makes them biologically plausible and also allows for efficient parallel implementation. We study the implemented model both as a single- and a multi-layered network. Instances of the model with sizes up to that of a rat-cortex equivalent are implemented and run on cluster computers at 23% of real time. We demonstrate on tasks involving image data that the cortical model can be used for meaningful computations such as noise reduction, pattern completion, prototype extraction, hierarchical clustering, classification, and content-addressable memory, and we show that even the largest cortex-equivalent instances of the model can perform these types of computations. Important characteristics of the model are that it is insensitive to limited errors in the computational hardware and to noise in the input data. Furthermore, it can learn from examples and is, to some extent, self-organizing. The proposed model contributes to the quest to understand the cortex, and it is also a first step towards a brain-inspired computing system that could be implemented in the molecular-scale computers of tomorrow.

The main contributions of this thesis are: (i) a review of the size, modularization, and computational structure of the mammalian neocortex; (ii) an abstract, generic connectionist network model of the mammalian cortex; (iii) a framework for a brain-inspired, self-organizing information processing system; (iv) theoretical work on the properties of the model when used as an autoassociative memory; (v) theoretical insights into the anatomy and physiology of the cortex; (vi) efficient implementation techniques and simulations of cortical-sized instances; (vii) a fixed-point arithmetic implementation of the model suitable for digital hardware.
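The modular design described above can be sketched compactly. The following minimal Python example is illustrative only: the sizes, the plain Hebbian outer-product rule, and the hard winner-take-all update are assumptions standing in for the thesis's actual BCPNN formulation. It shows the two ingredients the abstract emphasizes, minicolumn units grouped into hypercolumn modules with one winner per module, and pattern completion from a partial cue, i.e. content-addressable memory.

```python
import numpy as np

rng = np.random.default_rng(0)
H, M = 10, 8                       # hypercolumns, minicolumns per hypercolumn
N = H * M                          # total number of minicolumn units

def random_pattern():
    """A binary pattern with exactly one active minicolumn per hypercolumn."""
    p = np.zeros(N)
    for h in range(H):
        p[h * M + rng.integers(M)] = 1.0
    return p

def store(patterns):
    """Hebbian outer-product learning: local in space and time."""
    W = np.zeros((N, N))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)       # no self-connections
    return W / len(patterns)

def recall(W, cue, steps=20):
    """Attractor dynamics with a winner-take-all step in every module."""
    x = cue.astype(float)
    for _ in range(steps):
        s = W @ x                  # summed input to every minicolumn
        x = np.zeros(N)
        for h in range(H):
            block = slice(h * M, (h + 1) * M)
            x[block][np.argmax(s[block])] = 1.0   # one winner per hypercolumn
    return x

# Pattern completion as content-addressable memory: erase three of the ten
# hypercolumns from a stored pattern and let the dynamics restore them.
patterns = [random_pattern() for _ in range(5)]
W = store(patterns)
cue = patterns[0].copy()
cue[: 3 * M] = 0.0
print(np.array_equal(recall(W, cue), patterns[0]))  # usually True
```

Local learning of this kind, where each weight update depends only on the activity of the two units it connects, is what makes this style of model straightforward to partition across cluster nodes.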
2

Hierarchical Clustering using Brain-like Recurrent Attractor Neural Networks

Kühn, Hannah, January 2023
Hierarchical clustering is a family of machine learning methods with many applications, among them data science and data mining. This thesis belongs to the research area of brain-like computing and introduces a novel approach to hierarchical clustering using a brain-like recurrent neural network. An attractor network can cluster samples by letting those that converge to the same network state share a cluster. We modulate the network behaviour by varying a parameter in the activity propagation rule such that the granularity of the resulting clustering changes, and we then create a hierarchical clustering by combining multiple levels of granularity. The method is developed on two different datasets and evaluated with a variety of clustering metrics. Its performance is compared to standard clustering algorithms, and the structure and composition of the clusterings are inspected. We show that the method can produce clusterings at different levels of granularity and for new data without retraining. As a novel clustering method, it is relevant to machine learning applications; as a model for hierarchical recall in a memory model, it is relevant to computational neuroscience and neuromorphic computing.
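As a concrete, loosely assumed sketch of the mechanism: a Hopfield-style network stores the samples, each query is relaxed to a fixed point, and samples landing on the same attractor share a cluster. The tanh propagation rule and the gain parameter `beta` as the granularity knob are assumptions made for this example; the thesis's actual recurrent network and modulated parameter are not reproduced here.

```python
import numpy as np

def converge(W, x0, beta, steps=200):
    """Relax the activity x -> tanh(beta * W @ x) to a fixed point."""
    x = x0.astype(float)
    for _ in range(steps):
        x = np.tanh(beta * (W @ x))
    return np.sign(x)              # attractor identity = final sign pattern

def cluster(W, samples, beta):
    """Label each sample by the attractor it converges to."""
    attractors, labels = [], []
    for s in samples:
        key = tuple(converge(W, s, beta))
        if key not in attractors:
            attractors.append(key)
        labels.append(attractors.index(key))
    return labels

def hierarchy(W, samples, betas=(0.05, 0.2, 1.0)):
    """One partition per gain level; coarse (low beta) above fine (high)."""
    return {beta: cluster(W, samples, beta) for beta in betas}

# Hebbian weights from noisy copies of three prototypes, then clusterings
# at three granularities from the same weights, with no retraining.
rng = np.random.default_rng(1)
protos = np.sign(rng.standard_normal((3, 64)))
base = protos[np.repeat(np.arange(3), 10)]          # 30 samples, 10 per class
flips = rng.random(base.shape) < 0.1                # 10% bit noise
samples = np.where(flips, -base, base)
W = samples.T @ samples / len(samples)
np.fill_diagonal(W, 0.0)
print(hierarchy(W, samples))
```

Because only the gain changes between levels, the same trained weights serve every granularity, which mirrors the abstract's point that new clusterings are obtained without retraining.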
3

Modelling synaptic rewiring in brain-like neural networks for representation learning

Bhatnagar, Kunal, January 2023
This research investigated a sparsity method inspired by the principles of structural plasticity in the brain, with the goal of creating a sparse model of the Bayesian Confidence Propagation Neural Network (BCPNN) during the training phase. This was done by extending the structural plasticity mechanism in the BCPNN implementation. While the initial algorithm provided two synaptic states (Active and Silent), this research extended it to three synaptic states (Active, Silent, and Absent), with the aim of enhancing sparsity configurability and emulating a more brain-like algorithm, drawing parallels with synaptic states observed in the brain. Benchmarking was conducted on the MNIST and Fashion-MNIST datasets, where the proposed three-state model was compared against the previous two-state model in terms of representation learning. The findings suggest that the three-state model not only provides added configurability but also, in certain low-sparsity settings, shows representation-learning abilities similar to those of the two-state model. Moreover, in high-sparsity settings, the three-state model strikes a commendable balance in the accuracy-sparsity trade-off.
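The three synaptic states can be made concrete with a short sketch. The state encoding, the ranking of connections by weight magnitude, and the regrowth step below are all assumptions for illustration, not the thesis's transition rules; the sketch only shows how Active, Silent, and Absent synapses might partition a weight matrix, and how only Active synapses would take part in activity propagation.

```python
import numpy as np

ABSENT, SILENT, ACTIVE = 0, 1, 2   # assumed encoding of the three states

def rewire(weights, k_active, k_silent, n_regrow, rng):
    """Reassign states: the strongest connections become Active (transmit
    and learn), the next strongest Silent (learn but do not transmit),
    the rest Absent (pruned); a few Absent synapses are regrown as Silent
    so the network keeps exploring new wiring."""
    order = np.argsort(np.abs(weights).ravel())[::-1]
    state = np.full(weights.size, ABSENT)
    state[order[:k_active]] = ACTIVE
    state[order[k_active:k_active + k_silent]] = SILENT
    absent = np.flatnonzero(state == ABSENT)
    if absent.size:
        regrow = rng.choice(absent, size=min(n_regrow, absent.size),
                            replace=False)
        state[regrow] = SILENT
    return state.reshape(weights.shape)

def forward(weights, state, x):
    """Only Active synapses contribute to activity propagation, which is
    what makes the trained model sparse."""
    return (weights * (state == ACTIVE)) @ x

# Usage: keep the top 10% of synapses Active and the next 10% Silent.
rng = np.random.default_rng(2)
W = rng.standard_normal((100, 100))
state = rewire(W, k_active=1000, k_silent=1000, n_regrow=50, rng=rng)
y = forward(W, state, rng.standard_normal(100))
```

The third state adds the configurability the abstract refers to: the Active/Silent split fixes how sparse the forward computation is, while the Absent pool controls how much of the weight matrix is pruned outright.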
