
Neural computation of visual motion in macaque area MT

How does the visual system determine the direction and speed of moving objects? In the primate brain, visual motion is processed at several stages. Neurons in primary visual cortex (V1) filter incoming signals to extract the motion of oriented edges at a fine spatial scale. V1 neurons send these measurements to the extrastriate visual area MT, where neurons are selective for direction and speed in a manner that is invariant to whether the moving stimulus is a simple or a complex pattern.
Previous theoretical work proposed that MT neurons achieve selectivity to pattern motion by combining V1 inputs consistent with a common velocity. Here, we performed two sets of experiments to test this hypothesis. In the first experiment, we recorded single-unit V1 and MT responses to drifting sinusoidal gratings and plaids (two gratings superimposed). These stimuli either had jointly varying direction and drift rate (consistent with a constant velocity) or independently varying direction and drift rate. In the second experiment, we presented arbitrary, randomly chosen combinations of gratings in rapid succession, to sample as widely as possible the space of stimuli that could excite or suppress neural responses.
Responses to single gratings alone were insufficient to uniquely identify the organization of MT selectivity. To account for MT responses to both simple and compound stimuli, we developed new variants of an existing cascaded linear-nonlinear model in which each MT neuron pools inputs from V1, and we fit these models to our data. By comparing the performance of the model variants and examining their best-fitting parameters, we showed that MT responses are best described when selectivity is organized along a common velocity. This confirms previous predictions that MT neurons are selective for the motion of arbitrary objects, independent of object shape or texture. These studies show that characterizing sensory computation requires stimuli complex enough to engage the nonlinear aspects of neural selectivity. By exploring different linear-nonlinear model architectures, we identified the essential components of MT computation. Together, these results provide an effective framework for characterizing changes in selectivity between connected sensory areas.
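
For readers unfamiliar with this class of model, the following is a minimal, hypothetical sketch in Python/NumPy of a cascaded linear-nonlinear (LN-LN) architecture of the kind described above: a bank of rectified, normalized V1-like channels is pooled by MT-like weights, followed by a second rectification and normalization. All function names, parameters, and the cosine weight profile are illustrative assumptions, not the fitted models from the thesis.

```python
# Hypothetical LN-LN cascade sketch (illustrative only, not the author's fitted model).
import numpy as np

def v1_stage(stimulus_energy, exponent=2.0, sigma=0.1):
    """Rectify and divisively normalize a bank of V1-like channel responses.

    stimulus_energy: shape (n_v1,), each channel's linear drive to the stimulus
    (e.g., motion energy per direction/temporal-frequency channel).
    """
    rectified = np.maximum(stimulus_energy, 0.0) ** exponent
    return rectified / (sigma ** exponent + rectified.sum())

def mt_stage(v1_responses, weights, exponent=2.0, sigma=0.1):
    """Pool V1 responses with signed weights, then rectify and normalize."""
    drive = weights @ v1_responses                # linear pooling over V1 channels
    rectified = np.maximum(drive, 0.0) ** exponent
    return rectified / (sigma ** exponent + rectified)

# Toy example: 16 V1 direction channels; MT weights are excitatory near the
# preferred direction and suppressive opposite it (a stand-in for the
# velocity-based organization described above).
directions = np.linspace(0, 2 * np.pi, 16, endpoint=False)
preferred = 0.0
weights = np.cos(directions - preferred)

stimulus_energy = np.exp(np.cos(directions - np.pi / 8))  # toy grating drive
mt_response = mt_stage(v1_stage(stimulus_energy), weights)
print(mt_response)
```

In the actual modeling described in the thesis, the pooling weights and nonlinearity parameters are free parameters fit to the recorded V1 and MT responses; the sketch above only illustrates the cascade structure.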
Supplementary materials: figures 3.4(a-e), 3.10(a-e), and 3.14(a-e) are rendered as movies.

Identifier: oai:union.ndltd.org:PROQUEST/oai:pqdtoai.proquest.com:10192285
Date: 15 December 2016
Creators: Zaharia, Andrew D.
Publisher: New York University
Source Sets: ProQuest.com
Language: English
Detected Language: English
Type: thesis
