
Towards biologically plausible mechanisms of predictive learning

Animals perform a myriad of behaviors, such as object tracking and spatial navigation, largely in the absence of explicit target signals. Without external targets, neural circuits must construct their own objective. One prominent theory of self-supervised learning is predictive learning, in which a system predicts its feedforward signals over time and internal representations emerge that carry longer-term structural information. While such theories are inspired by neural properties, they often lack direct links to low-level neural mechanisms.

The first study presents a model of the formation of internal representations. I first describe the canonical microcircuit of cortical structures, including its general connectivity and the distinctive physiological properties of its neural subpopulations. I then introduce a learning rule based on contrasting the feedforward potentials of pyramidal neurons with their feedback-controlled burst rates. Using these two signals, the learning rule implements feedback-gated minimization of a temporal error. Combined with a set of feedforward-only units and organized hierarchically, the model learns to track the dynamics of external stimuli with high accuracy, and successive regions are shown to encode temporal derivatives of their feedforward inputs.
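The learning rule described above can be caricatured in a few lines. This is a minimal sketch under my own simplifying assumptions (a single linear unit, the feedback-controlled burst rate proxied directly by the next input delivered top-down), not the dissertation's actual circuit model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                    # input dimensionality (arbitrary choice)
W = rng.normal(scale=0.01, size=(n, n))  # feedforward weights, near zero at start

def burst_gated_step(W, x_t, x_next, lr=0.05):
    """One update of a hypothetical burst-gated temporal prediction rule."""
    v = W @ x_t      # feedforward (basal) potential: prediction of the next input
    burst = x_next   # stand-in for the feedback-controlled burst rate (the target)
    err = burst - v  # contrast of the two signals: the temporal error
    W = W + lr * np.outer(err, x_t)  # Hebbian-style update gated by that error
    return W, float(np.mean(err ** 2))

# Drive the unit with a slowly drifting sinusoidal stimulus; the temporal
# error shrinks as the weights learn to predict the upcoming frame.
phases = np.linspace(0.0, np.pi, n)
xs = [np.sin(phases + 0.1 * t) for t in range(300)]
errors = []
for t in range(len(xs) - 1):
    W, e = burst_gated_step(W, xs[t], xs[t + 1])
    errors.append(e)
```

In this toy form the rule reduces to a delta rule on the next-step prediction; the dissertation's contribution is grounding the error signal in burst-rate physiology and stacking such units hierarchically.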

The second study presents an electrophysiological experiment that revealed a novel functional cell type in the retrosplenial cortex of behaving Long-Evans rats. Through rigorous statistical analysis, we show that these neurons carry an egocentric representation of boundary locations. Combined with their position in the cortical hierarchy, this suggests that retrosplenial neurons provide a mechanism for translating self-centered sensory information into the map-like representations present in subcortical structures.

In the final study, I integrate the basic modular architecture of the first study with the specific afferent stimuli and macroscale connectivity patterns involved in spatial navigation. I simulate an agent in a simple virtual environment and compare the learned representations to tuning curves from experiments such as the second study. I find the expected development of neural responses corresponding to egocentric sensory representations (retrosplenial cortex), self-oriented allocentric coding (postrhinal cortex), and allocentric spatial representations (hippocampus).

Together, these modeling results show how self-gated and guided learning in pyramidal ensembles can form useful and stable internal representations depending on the task at hand.

Identifier: oai:union.ndltd.org:bu.edu/oai:open.bu.edu:2144/48507
Date: 26 March 2024
Creators: Chapman IV, G. William
Contributors: Hasselmo, Michael E.
Source Sets: Boston University
Language: en_US
Detected Language: English
Type: Thesis/Dissertation
Rights: Attribution 4.0 International, http://creativecommons.org/licenses/by/4.0/
