Neural Tabula Rasa: Foundations for Realistic Memories and Learning

Understanding how neural systems perform memorization and inductive learning tasks is of key interest in computational neuroscience. Inductive learning is likewise a central focus of machine learning, a field that has seen rapid growth and innovation built on feedforward neural networks. However, there have also been concerns about the precipitous nature of such efforts, particularly in deep learning. We therefore revisit the foundation of the artificial neural network to better incorporate current knowledge of the brain from computational neuroscience. Specifically, a random graph was chosen to model a neural system. This random graph structure was implemented along with an algorithm for storing information, allowing the network to form memories as subgraphs of the network. This implementation was derived from a proposed neural computation system, the Neural Tabula Rasa, by Leslie Valiant. Contributions of this work include a new approximation of memory size, several algorithms implementing aspects of the Neural Tabula Rasa, and empirical evidence of the functional form of the system's memory capacity. This thesis intends to benefit the foundations of learning systems, as the ability to form memories is a prerequisite for a system to learn inductively.
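The abstract describes memories as subgraphs stored within a random graph that models a neural system. The following is a minimal illustrative sketch of that idea only: the graph is a simple Erdős–Rényi random digraph, and a "memory" is the subgraph induced by a random set of neurons. All function names and parameters here are hypothetical, and this deliberately simplifies the storage algorithms of Valiant's Neural Tabula Rasa.

```python
import random

def random_graph(n, p, seed=0):
    """Erdos-Renyi G(n, p): include each directed edge with probability p."""
    rng = random.Random(seed)
    return {(u, v) for u in range(n) for v in range(n)
            if u != v and rng.random() < p}

def store_memory(n, edges, size, rng):
    """Toy memory: a random set of `size` neurons together with the edges
    the random graph induces among them (a subgraph of the network).
    Illustrative stand-in only, not Valiant's actual storage algorithm."""
    neurons = frozenset(rng.sample(range(n), size))
    sub_edges = {(u, v) for (u, v) in edges
                 if u in neurons and v in neurons}
    return neurons, sub_edges

# Build a small random network and store one memory in it.
rng = random.Random(1)
n, p = 100, 0.1
edges = random_graph(n, p)
memory, memory_edges = store_memory(n, edges, size=10, rng=rng)
print(len(memory))          # 10 neurons in the stored memory
print(memory_edges <= edges)  # the memory is a subgraph of the network
```

Under this toy model, memory capacity questions become questions about how many such subgraphs can coexist before they interfere, which is the quantity the thesis studies empirically.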

Identifier: oai:union.ndltd.org:CALPOLY/oai:digitalcommons.calpoly.edu:theses-4339
Date: 01 June 2023
Creators: Perrine, Patrick R
Publisher: DigitalCommons@CalPoly
Source Sets: California Polytechnic State University
Detected Language: English
Type: text
Format: application/pdf
Source: Master's Theses