
The geometry of neural representational spaces and the trade-off between generalization and separation

To make decisions, plan, and act appropriately in a complex world, the brain formulates internal models of how the world works. As we change our goals, shift to novel environments, or as the world simply evolves around us, these models must flexibly adapt. Impairments in the formation of such flexible models are present in numerous disorders, including schizophrenia, Parkinson's disease, and Alzheimer's disease, and plague even the most sophisticated artificial agents. Understanding how the brain structures efficient internal models is therefore of critical importance.
Previous findings indicate that the information extracted from past experiences is organized and encoded into relationally structured knowledge by the prefrontal cortex (PFC) and the medial temporal lobe (MTL). This process requires balancing two complementary computations in response to overlap. The first is to generalize the commonalities shared across overlapping experiences. The second is to separate overlapping experiences that fundamentally differ along a critical dimension, such as their outcomes, required actions, or timing. How the brain balances these functions when faced with overlap remains poorly understood.
In this thesis, I proposed that the analysis of the geometry of neural representational spaces offers valuable insights into the brain’s solution to optimally disambiguate and generalize overlapping experiences. I tested this proposal through the analysis of three functional magnetic resonance imaging (fMRI) experiments leveraging data-driven multivariate techniques to probe the dimensionality, structure, and content of these spaces.
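To make the notion of representational dimensionality concrete, one common data-driven measure is the participation ratio of the eigenvalue spectrum of the condition-by-condition covariance of activity patterns. The sketch below is illustrative only (it is not the thesis's actual analysis pipeline, and the simulated patterns are hypothetical): it shows how a compressed, low-dimensional set of patterns yields a lower participation ratio than a high-dimensional, orthogonalized set.

```python
import numpy as np

def participation_ratio(patterns):
    """Effective dimensionality of a set of activity patterns.

    patterns: (n_conditions, n_voxels) array of response patterns.
    Returns (sum(lambda))^2 / sum(lambda^2) over the eigenvalues of the
    condition covariance matrix; ranges from 1 (all variance on one axis)
    up to n_conditions (variance spread evenly across all axes).
    """
    centered = patterns - patterns.mean(axis=0)
    cov = centered @ centered.T / patterns.shape[1]
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0, None)
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

rng = np.random.default_rng(0)
# Compressed code: 8 simulated conditions lying near a single axis.
low_d = rng.normal(size=(8, 1)) @ rng.normal(size=(1, 100))
# Orthogonalized code: 8 independent random condition patterns.
high_d = rng.normal(size=(8, 100))
print(participation_ratio(low_d) < participation_ratio(high_d))  # True
```

The participation ratio is only one of several dimensionality estimates; it is used here because it requires nothing beyond an eigendecomposition of the pattern covariance.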
The first experiment (chapter 2) explored how PFC subregions respond to partially overlapping spatial environments during goal-directed virtual navigation. Based on previous research conducted in our lab that showed prefrontal activity in response to spatial overlap, we analyzed the dimensionality, structure, and content of prefrontal representations while participants learned a virtual navigation task. These analyses demonstrated compressed and highly orthogonalized codes early in learning that shifted over time towards more integrated and schematic codes. Critically, both prospective and retrospective information were bound to the representations of overlapping routes, with greater weight given to prospective information early in learning to help separate overlap. Based on these results, I advanced the idea that PFC subregions tune the geometry of their representations based on task demands and argued that prefrontal attention acts as a filter to promote both the separation and generalization of overlap.
Building on the first experiment, a second study (chapter 3) focused on the re-analysis of a high-resolution fMRI study centered on the MTL that examined how MTL subregions handle prospective spatial overlap when planning routes in the same task. Based on previous research in our lab, we knew that the parahippocampal cortex (PHC) and hippocampal subfields CA1 and CA3/DG likely play a role in disambiguating overlapping routes during planning. We probed the geometry of their representations using the same methods as in the first experiment. The results demonstrated a segregation of roles between compressed schematic codes in PHC and expanded orthogonalized codes in CA3/DG that formed over the course of learning in response to overlap. Importantly, the degree of pattern separation observed in CA3/DG depended on the amount of initial overlap. These findings led to the conclusion that generalization and separation are balanced in the MTL by distributing these functions to different subregions. Furthermore, the results suggest that MTL integration is supported by compression, whereas separation is achieved via expansion.
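Pattern separation of the kind described above is often quantified as a drop in the pairwise similarity of activity patterns evoked by overlapping conditions. The snippet below is a minimal, hypothetical sketch (not the study's pipeline): simulated "early" patterns are dominated by a shared component, while "late" patterns add a large unique component, so their Pearson correlation falls.

```python
import numpy as np

def pattern_correlation(a, b):
    """Pearson correlation between two voxel activity patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
shared = rng.normal(size=200)                    # overlapping (shared) component
early_a = shared + 0.2 * rng.normal(size=200)    # early: shared signal dominates
early_b = shared + 0.2 * rng.normal(size=200)
late_a = shared + 2.0 * rng.normal(size=200)     # late: unique signal dominates
late_b = shared + 2.0 * rng.normal(size=200)

# Lower correlation between the "late" patterns indicates greater separation.
print(pattern_correlation(early_a, early_b) > pattern_correlation(late_a, late_b))
```

In practice such similarity analyses are computed across many condition pairs and compared against a learning-stage or overlap factor, but the core quantity is this pairwise pattern correlation.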
The third experiment (chapter 4) further examined how PFC and MTL regions balance generalization and separation processes when faced with abstract overlap between context-dependent rules. An analysis of the geometry of representational spaces in a context-dependent rule learning task found that successful rule learning was characterized by maintaining a balance between high and low dimensional spaces over learning. This equilibrium likely enabled the formation of relational knowledge representations that captured the latent structure of the task rules. Importantly, the only level of abstraction observed was one that perfectly matched the maximal amount of abstraction necessary to perform the task, and this structure only appeared later in learning in the hippocampus relative to extra-hippocampal regions. These results suggest that the brain employs an efficient and flexible coding scheme to respond to task demands. The results also suggest an important interplay between prefrontal and hippocampal codes over the course of learning.
These three experiments demonstrate the promise that representational geometries offer in understanding the computations of the brain. Specifically, the results show that the flexible equilibrium between generalization and separation is accompanied by the fine-tuning of the dimensionality, structure, and content of representational spaces across a distributed network of MTL and PFC subregions. In the conclusion chapter, I discuss how these insights fit into existing frameworks regarding efficient and distributed codes.

Identifier: oai:union.ndltd.org:bu.edu/oai:open.bu.edu:2144/47958
Date: 25 January 2024
Creators: Liapis, Stamatios
Contributors: Stern, Chantal E.; McGuire, Joseph T.
Source Sets: Boston University
Language: en_US
Detected Language: English
Type: Thesis/Dissertation
