
Flexible Computation in Neural Circuits

This dissertation presents two lines of research that sit at superficially opposite ends of the computational neuroscience spectrum. While models of adaptive motion detection in fruit flies and simulations inspired by monkeys learning to control brain-machine interfaces may seem to have little in common, both projects address the broad question of how real neural circuits flexibly compute.

Sensory systems flexibly adapt their processing properties across a wide range of environmental and behavioral conditions. Such variable processing complicates attempts to extract a mechanistic understanding of sensory computations. This is evident in the highly constrained, canonical Drosophila motion detection circuit, where the core computation underlying direction selectivity is still debated despite extensive study.

The first part of this dissertation analyzes the filtering properties of four neural inputs to the OFF motion-detecting T5 cell in Drosophila. These four neurons, Tm1, Tm2, Tm4, and Tm9, exhibit state- and stimulus-dependent changes in the shape of their temporal responses, which become more biphasic under specific conditions. Summing these inputs within a connectome-constrained model of the circuit demonstrates that these response shapes are sufficient to explain T5 responses to a range of motion stimuli. The stimulus- and state-dependent measurements thus reconcile motion computation with the anatomy of the circuit, providing a clear example of how a basic circuit supports flexible sensory computation.
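The flavor of this summation-based account can be conveyed with a toy linear-nonlinear sketch: two spatially offset inputs are passed through temporal filters of different shapes (one monophasic, one biphasic), summed, and rectified, yielding a direction-selective mean response. The filters and parameters below are illustrative stand-ins, not the measured Tm1/Tm2/Tm4/Tm9 responses or the dissertation's connectome-constrained model.

```python
import numpy as np

dt = 0.001                      # time step (s)
t = np.arange(0.0, 0.3, dt)     # filter time axis

f_mono = np.exp(-t / 0.05)      # monophasic low-pass filter
f_mono /= f_mono.sum()
g1 = np.exp(-t / 0.02); g1 /= g1.sum()
g2 = np.exp(-t / 0.06); g2 /= g2.sum()
f_bi = g1 - g2                  # biphasic (band-pass) filter: fast positive lobe,
                                # slower negative lobe

def response(direction, freq=2.0, offset=np.pi / 2):
    """Mean rectified response to a drifting sinusoid sampled at two points."""
    ts = np.arange(0.0, 4.0, dt)
    s1 = np.sin(2 * np.pi * freq * ts)                       # first spatial sample
    s2 = np.sin(2 * np.pi * freq * ts + direction * offset)  # offset sample
    # Sum the two differently filtered inputs, then rectify.
    lin = (np.convolve(s1, f_mono, mode="same")
           + np.convolve(s2, f_bi, mode="same"))
    return np.maximum(lin, 0.0).mean()

r_plus, r_minus = response(+1), response(-1)
dsi = (r_plus - r_minus) / (r_plus + r_minus)   # direction-selectivity index
```

Because the two filters impose different phase shifts, the branches add constructively for one motion direction and destructively for the other, so the rectified mean response differs between directions even though the combination step is a simple sum.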

The most flexible neural circuits are circuits that can learn. Despite extensive theoretical work on biologically plausible learning rules, however, it has been difficult to obtain clear evidence about whether and how such rules are implemented in the brain. In the second part of this dissertation, I consider biologically plausible supervised- and reinforcement-learning rules and ask whether biased changes in network activity during learning can be used to determine which learning rule is being used.

Supervised learning requires a credit-assignment model that estimates the mapping from neural activity to behavior, and, in a biological organism, this model will inevitably be an imperfect approximation of the true mapping, biasing the direction of the weight updates relative to the true gradient. Reinforcement learning, by contrast, requires no credit-assignment model and makes weight updates that follow the true gradient direction on average. I derive a metric that distinguishes between learning rules by detecting biased changes in network activity during learning, provided that the mapping from brain to behavior is known to the experimenter. Because brain-machine interface (BMI) experiments give the experimenter perfect knowledge of this mapping, I model a cursor-control BMI task using recurrent neural networks and show that learning rules can be distinguished in simulated experiments using only observations that a neuroscience experimenter would plausibly have access to.
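The core distinguishing idea can be illustrated in a minimal sketch, assuming a known linear decoder `D` (as in a BMI), a hypothetical imperfect internal credit-assignment model `D_tilde`, and updates applied directly to activity rather than to recurrent weights; this is a toy illustration, not the dissertation's actual metric or network model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                                  # number of recorded units
D = rng.standard_normal((2, n))         # experimenter-imposed decoder (known)
D_tilde = rng.standard_normal((2, n))   # imperfect internal credit model
r = rng.standard_normal(n)              # current activity
y_star = np.array([1.0, -1.0])          # cursor target

err = y_star - D @ r                    # error fed back through the true decoder
g = D.T @ err                           # true gradient direction on activity

# Supervised update: error is real, but credit is assigned through the
# (wrong) internal model, so the update is biased away from g.
dr_sl = D_tilde.T @ err

# Reinforcement update: average node-perturbation update over many trials,
# which follows g in expectation without any credit-assignment model.
sigma, trials = 0.1, 50_000
loss = lambda a: np.sum((D @ a - y_star) ** 2)
dr_rl = np.zeros(n)
for _ in range(trials):
    xi = sigma * rng.standard_normal(n)
    dr_rl += (loss(r) - loss(r + xi)) * xi   # reward-weighted perturbation
dr_rl /= trials

cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
cos_sl, cos_rl = cos(dr_sl, g), cos(dr_rl, g)  # alignment with true gradient
```

With the decoder known to the experimenter, the alignment of observed activity changes with the true gradient separates the two regimes: the averaged perturbation-based update points close to `g`, while the supervised update inherits a systematic bias from the mismatch between `D_tilde` and `D`.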

Identifier: oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/h0nh-fa20
Date: January 2022
Creators: Portes, Jacob
Source Sets: Columbia University
Language: English
Type: Theses
