
Recurrent computation in brains and machines

There are more neurons in the human brain than seconds in a lifetime. Given this incredible number, how can we hope to understand the computations carried out by the full ensemble of neural firing patterns? And neural activity is not the only substrate available for computation. The incredible diversity of function found within biological organisms is matched by an equally rich set of substrates available for computation. If we are interested in the metamorphosis of a caterpillar into a butterfly, we could study how changes in gene expression transform the cell. If we are interested in developing therapeutic drugs, we could study receptors and ion channels. And if we are interested in how humans and other animals interpret incoming streams of sensory information and process them to make moment-by-moment decisions, then perhaps we can understand much of this behavior by studying the firing rates of neurons. This is the level of description, and the approach, we take in this thesis.
Given this diversity of potential substrates for computation, combined with limitations in recording technologies, it can be difficult to conclude with confidence that we are studying the full set of neural dynamics involved in a particular task. To overcome this limitation, we augment the study of neural activity with the study of artificial recurrent neural networks (RNNs) trained to mimic the behavior of humans and other animals performing experimental tasks. The inputs to the RNN are time-varying signals representing experimental stimuli, and we adjust the parameters of the RNN so that its time-varying outputs match the desired behavioral responses. In these artificial RNNs we have complete information about the network connectivity and the moment-by-moment firing patterns, and we know, by design, that these are the only computational mechanisms being used to solve the tasks. If the artificial RNN and electrode recordings of real neurons exhibit the same dynamics, we can be more confident that we are studying a set of biological dynamics sufficient for the task. This matters if we want to make claims about the types of dynamics required, and observed, for various computational tasks, as is the case in Chapter 2 of this thesis.
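To make this setup concrete, here is a minimal sketch of the training scheme described above: a vanilla rate RNN trained with backpropagation through time so that its time-varying outputs match target behavioral responses. The architecture, dimensions, task data, and hyperparameters below are placeholders for illustration, not the exact networks used in the thesis.

    import torch
    import torch.nn as nn

    class TaskRNN(nn.Module):
        # Vanilla RNN mapping time-varying stimuli to behavioral outputs.
        def __init__(self, n_inputs, n_hidden, n_outputs):
            super().__init__()
            self.rnn = nn.RNN(n_inputs, n_hidden, nonlinearity="tanh", batch_first=True)
            self.readout = nn.Linear(n_hidden, n_outputs)

        def forward(self, stimuli):
            # stimuli: (batch, time, n_inputs); rates: (batch, time, n_hidden)
            rates, _ = self.rnn(stimuli)
            return self.readout(rates), rates

    model = TaskRNN(n_inputs=3, n_hidden=100, n_outputs=1)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Placeholder task data: stimuli and the desired behavioral responses.
    stimuli, targets = torch.randn(64, 200, 3), torch.randn(64, 200, 1)
    for step in range(1000):
        outputs, _ = model(stimuli)
        loss = loss_fn(outputs, targets)     # penalize deviations from the desired output
        optimizer.zero_grad()
        loss.backward()                      # backpropagation through time
        optimizer.step()

Once trained, the hidden activity (rates) plays the role of the population firing patterns that can then be compared against electrode recordings.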
In Chapter 2 we develop tests to identify several classes of neural dynamics. The specific dynamical regimes we focus on are interesting because they each have different computational capabilities, including the ability to keep track of time or to preserve information robustly against the flow of time (working memory). We then apply these tests to electrode recordings from nonhuman primates and to artificial RNNs to understand how neural networks are able to simultaneously keep track of time and remember previous experiences in working memory. To accomplish both computational goals, the brain is thought to use distinct neural dynamics: stable neural trajectories can serve as a clock to coordinate cognitive activity, whereas attractor dynamics provide a stable mechanism for memory storage at the cost of losing all timing information. To identify these regimes, we decode the passage of time from neural data. Additionally, to encode the passage of time, stabilized neural trajectories can be either high-dimensional, as is the case for randomly connected recurrent networks (chaotic reservoir networks), or low-dimensional, as is the case for artificial RNNs trained with backpropagation through time. To disambiguate these models, we compute the cumulative dimensionality of the neural trajectory as it evolves over time.
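One plausible way to operationalize the cumulative dimensionality measure mentioned above (the exact definition used in the thesis may differ) is to count, at each time point, how many principal components are needed to explain most of the variance of the population trajectory up to that point:

    import numpy as np

    def cumulative_dimensionality(rates, var_threshold=0.90):
        # rates: (n_timepoints, n_neurons) trial-averaged firing rates.
        # For each time t, count the principal components needed to explain
        # var_threshold of the variance of the trajectory from time 0 to t.
        dims = []
        for t in range(2, rates.shape[0] + 1):
            segment = rates[:t] - rates[:t].mean(axis=0)          # center the trajectory so far
            sing_vals = np.linalg.svd(segment, compute_uv=False)
            var_explained = np.cumsum(sing_vals**2) / np.sum(sing_vals**2)
            dims.append(int(np.searchsorted(var_explained, var_threshold) + 1))
        return np.array(dims)

A low-dimensional trajectory, as expected for RNNs trained with backpropagation through time, keeps this count small as time unfolds, whereas a high-dimensional, reservoir-like trajectory keeps recruiting new dimensions.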
Recurrent neural networks can also be used to generate hypotheses about neural computation. In Chapter 3 we use RNNs to generate hypotheses about the diverse set of neural response properties seen during spatial navigation, in particular grid cells, as well as other spatial correlates such as border cells and band-like cells. The approach we take is to 1) pick a task that requires navigation (spatial or mental), 2) create an RNN to solve the task, and 3) adjust the task, or the constraints on the neural network, so that grid cells and other spatial response patterns emerge naturally as the network learns to perform the task. We trained RNNs to perform navigation tasks in 2D arenas based on velocity inputs. We find that grid-like spatial response patterns emerge in trained networks, along with units that exhibit other spatial correlates, including border cells and band-like cells. Surprisingly, the order in which grid-like and border cells emerge during network training is also consistent with observations from developmental studies. Together, our results suggest that grid cells, border cells, and other spatial correlates observed in the entorhinal cortex of the mammalian brain may be a natural solution for representing space efficiently, given the predominance of recurrent connections in these neural circuits.
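A minimal sketch of the kind of navigation (path-integration) task described above: the network receives a stream of velocity inputs and must report its position in a 2D arena. The arena geometry, movement statistics, and output encoding here are simplified placeholders and may differ from the tasks used in the thesis.

    import numpy as np

    def make_navigation_trial(n_steps=500, arena_size=2.0, speed=0.05, rng=None):
        # One simulated trajectory: RNN input is velocity, RNN target is position.
        rng = np.random.default_rng() if rng is None else rng
        heading = rng.uniform(0, 2 * np.pi)
        pos = np.zeros((n_steps, 2))
        vel = np.zeros((n_steps, 2))
        for t in range(1, n_steps):
            heading += rng.normal(0, 0.2)                           # smooth random turning
            step = speed * np.array([np.cos(heading), np.sin(heading)])
            new_pos = np.clip(pos[t - 1] + step, 0, arena_size)     # stay inside the square arena
            vel[t] = new_pos - pos[t - 1]
            pos[t] = new_pos
        return vel, pos

Training an RNN (as in the Chapter 2 sketch) to map the velocity stream onto position forces the hidden units to integrate velocity over time; it is in networks trained on tasks of this kind that the grid-like, border, and band-like responses emerge.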
All the tasks we have considered so far in this thesis require memory, but in Chapter 4 we explicitly explore the interactions between multiple memories in a recurrent neural network. Memory is the hallmark of recurrent neural networks, in contrast to standard feedforward neural networks, where all signals travel in one direction from inputs to outputs and the network retains no memory of previous experiences. A recurrent neural network, as the name suggests, contains feedback loops, and these loops give the network the computational capacity for memory. In this chapter we train an RNN to perform a human psychophysics experiment and find that, in order to reproduce human behavior, noise must be added to the network, causing the RNN to use more stable discrete memories to constrain less stable continuous memories.
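For concreteness, "adding noise to the network" can be sketched as injecting independent Gaussian noise into the recurrent state at every time step, as below; the noise model, magnitude, and parameterization used in the thesis may differ.

    import torch

    def noisy_rnn_step(W_rec, W_in, b, h, x, noise_std=0.1):
        # One update of a rate RNN with private Gaussian noise on each unit.
        # W_rec: (n_hidden, n_hidden), W_in: (n_hidden, n_inputs),
        # b, h: (n_hidden,), x: (n_inputs,).  Placeholder parameterization.
        noise = noise_std * torch.randn_like(h)
        return torch.tanh(W_rec @ h + W_in @ x + b + noise)

Repeated over a trial, this noise degrades a less stable continuous memory more quickly than it degrades discrete attractor states, which is one way to understand why the trained network uses the discrete memories to constrain the continuous ones.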

Identifier: oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/D8CN8MXK
Date: January 2019
Creators: Cueva, Christopher
Source Sets: Columbia University
Language: English
Type: Theses
