1 |
A Search For Principles of Basal Ganglia Function. Tripp, Bryan. January 2008.
The basal ganglia are a group of subcortical nuclei that contain about 100
million neurons in humans. Different modes of basal ganglia dysfunction lead to
Parkinson's disease and Huntington's disease, which have debilitating motor and
cognitive symptoms. However, despite intensive study, both the internal computational
mechanisms of the basal ganglia, and their contribution to normal brain
function, have been elusive. The goal of this thesis is to identify basic principles that
underlie basal ganglia function, with a focus on signal representation, computation,
dynamics, and plasticity.
This process begins with a review of two current hypotheses of normal basal
ganglia function, one being that they automatically select actions on the basis of
past reinforcement, and the other that they compress cortical signals that tend to
occur in conjunction with reinforcement. It is argued that a wide range of experimental
data are consistent with these mechanisms operating in series, and that in
this configuration, compression makes selection practical in natural environments.
Although experimental work is outside the present scope, an experimental means
of testing this proposal in the future is suggested.
The remainder of the thesis builds on Eliasmith & Anderson's Neural Engineering
Framework (NEF), which provides an integrated theoretical account of computation,
representation, and dynamics in large neural circuits. The NEF provides
considerable insight into basal ganglia function, but its explanatory power is potentially
limited by two assumptions that the basal ganglia violate. First, like most
large-network models, the NEF assumes that neurons integrate multiple synaptic
inputs in a linear manner. However, synaptic integration in the basal ganglia is
nonlinear in several respects. Three modes of nonlinearity are examined, including
nonlinear interactions between dendritic branches, nonlinear integration within terminal
branches, and nonlinear conductance-current relationships. The first mode
is shown to affect neuron tuning. The other two modes are shown to enable alternative
computational mechanisms that facilitate learning, and make computation
more flexible, respectively.
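The contrast between linear and branch-wise nonlinear dendritic integration can be sketched in a few lines; the sigmoidal branch nonlinearity, gain, and threshold below are illustrative assumptions, not the models developed in the thesis:

```python
import numpy as np

def linear_soma(inputs, weights):
    """Point-neuron assumption: one weighted sum over all synapses."""
    return float(np.dot(weights, inputs))

def branch_nonlinear_soma(inputs, weights, branches, gain=4.0, theta=0.5):
    """Each dendritic branch sums its own synapses and applies a saturating
    sigmoid before the soma sums the branch outputs."""
    total = 0.0
    for branch in branches:  # branch = list of synapse indices
        drive = sum(weights[i] * inputs[i] for i in branch)
        total += 1.0 / (1.0 + np.exp(-gain * (drive - theta)))
    return total

w = np.full(8, 0.5)
branches = [[0, 1, 2, 3], [4, 5, 6, 7]]
clustered = np.array([1, 1, 1, 1, 0, 0, 0, 0], float)
spread = np.array([1, 0, 1, 0, 1, 0, 1, 0], float)
# Same total input, so the linear soma cannot tell the patterns apart,
# but the branch-nonlinear soma responds differently to each.
print(linear_soma(clustered, w), linear_soma(spread, w))
print(branch_nonlinear_soma(clustered, w, branches),
      branch_nonlinear_soma(spread, w, branches))
```

The point of the sketch is that a neuron with nonlinear branches is sensitive to *where* its inputs land, which changes its effective tuning.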
Second, while the NEF assumes that the feedforward dynamics of individual
neurons are dominated by the dynamics of post-synaptic current, many basal
ganglia neurons also exhibit prominent spike-generation dynamics, including adaptation,
bursting, and hysteresis. Of these, it is shown that the NEF theory of
network dynamics applies fairly directly to certain cases of firing-rate adaptation.
However, more complex dynamics, including nonlinear dynamics that are diverse
across a population, can be described using the NEF equations for representation.
In particular, a neuron's response can be characterized in terms of a more complex
function that extends over both present and past inputs. It is therefore straightforward
to apply NEF methods to interpret the effects of complex cell dynamics at
the network level.
The role of spike timing in basal ganglia function is also examined. Although
the basal ganglia have traditionally been interpreted as computing with mean
firing rates (over windows of tens or hundreds of milliseconds), it has recently
become clear that patterns of spikes on finer timescales are also functionally
relevant. Past work has shown that precise spike times in sensory systems carry
stimulus-related information, but there has been little study of how post-synaptic
neurons might use it. It is shown that essentially any neuron can use this
information to perform flexible computations, and that these computations do not
require very precise spike timing. As a consequence, irregular, highly variable
firing patterns can drive behaviour with which they have no detectable correlation.
Most of the projection neurons in the basal ganglia are inhibitory, and the effect
of one nucleus on another is classically interpreted as subtractive or divisive. Theoretically, very flexible computations can be performed within a projection if each
presynaptic neuron can both excite and inhibit its targets, but this is hardly ever
the case physiologically. However, it is shown here that equivalent computational flexibility is supported by inhibitory projections in the basal ganglia, as a simple consequence of inhibitory collaterals in the target nuclei.
Finally, the relationship between population coding and synaptic plasticity is
discussed. It is shown that Hebbian plasticity, in conjunction with lateral connections, determines both the dimension of the population code and the tuning of
neuron responses within the coded space. These results permit a straightforward
interpretation of the effects of synaptic plasticity on information processing at the
network level.
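The link between Hebbian plasticity and the structure of a population code can be illustrated with Oja's rule, a normalized Hebbian rule that converges to the leading principal component of its input; the covariance matrix, learning rate, and sample count below are assumptions for illustration, not the thesis's own simulations:

```python
import numpy as np

rng = np.random.default_rng(1)
# 2-D inputs whose variance is concentrated along the (1, 1) direction
C = np.array([[3.0, 2.5], [2.5, 3.0]])
X = rng.standard_normal((2000, 2)) @ np.linalg.cholesky(C).T

w = rng.standard_normal(2)
eta = 0.01
for x in X:
    y = w @ x                   # post-synaptic response
    w += eta * y * (x - y * w)  # Hebbian term with implicit normalization

w_hat = w / np.linalg.norm(w)
print(w_hat)  # close to ±(0.707, 0.707), the leading principal component
```

With lateral connections added to decorrelate neurons, variants of this rule recover multiple components, which is one way Hebbian learning can fix both the dimension of a population code and the tuning of cells within it.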
Together with the NEF, these new results provide a rich set of theoretical principles
through which the dominant physiological factors that affect basal ganglia
function can be more clearly understood.
|
3 |
A neurocomputational model of the mammalian fear conditioning circuit. Kolbeck, Carter. January 2013.
In this thesis, I present a computational neural model that reproduces the high-level behavioural results of well-known fear conditioning experiments: first-order conditioning, second-order conditioning, sensory preconditioning, context conditioning, blocking, first-order extinction and renewal (AAB, ABC, ABA), and extinction and renewal after second-order conditioning and sensory preconditioning. The simulated neural populations used to account for the behaviour observed in these experiments correspond to known anatomical regions of the mammalian brain. Parts of the amygdala, periaqueductal gray, cortex and thalamus, and hippocampus are included and are connected to each other in a biologically plausible manner.
The model was built using the principles of the Neural Engineering Framework (NEF): a mathematical framework that allows information to be encoded and manipulated in populations of neurons. Each population represents information via the spiking activity of simulated neurons, and is connected to one or more other populations; these connections allow computations to be performed on the information being represented. By specifying which populations are connected to which, and what functions these connections perform, I developed an information processing system that behaves analogously to the fear conditioning circuit in the brain.
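A minimal rate-based sketch of the NEF's encode/decode cycle follows; the rectified-linear tuning curves, random gains, and target function are illustrative assumptions, not the parameters of this model:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 60
encoders = rng.choice([-1.0, 1.0], N)  # preferred directions in 1-D
gains = rng.uniform(0.5, 2.0, N)
biases = rng.uniform(-1.0, 1.0, N)

def rates(x):
    """Rectified-linear tuning curves: firing rates for a scalar stimulus x."""
    return np.maximum(0.0, gains * encoders * x + biases)

# Solve for linear decoders that read f(x) = x^2 out of the population;
# these decoders become the connection weights to the next population.
xs = np.linspace(-1.0, 1.0, 200)
A = np.array([rates(x) for x in xs])            # activity matrix
d = np.linalg.lstsq(A, xs ** 2, rcond=None)[0]  # least-squares decoders

print(rates(0.5) @ d)  # approximately 0.25
```

This is the sense in which connections between populations "perform computations": the decoders are chosen so the downstream population receives a function of the represented value.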
|
5 |
Computational modeling of neuronal circuits: heterogeneous connectivity and nonlinear transformation in olfactory processing. Chou, Wen-Chuang. 07 May 2014.
No description available.
|
6 |
Neural Computation and Time. Nieters, Pascal. 01 June 2022.
Time is not only the fundamental organizing principle of the universe; it is also the primary organizer of information about the world we perceive. Our brain encodes these perceptions in sequential patterns of spiking activity. But different stimuli lead to different information encoded on different timescales; sometimes the same stimulus carries information pertaining to different perceptions on different timescales. The orders of time are many, and the computational circuits of the brain must disentangle these interwoven threads to decode the underlying structure. This thesis deals with solutions to this disentanglement problem implemented not at the network level, but in smaller systems and single neurons that represent the past by clever use of internal mechanisms. Often, these solutions involve the intricate machinery of the neural dendrite or other peculiar aspects of neural circuits that are well known to physiologists and biologists but disregarded in favor of more homogeneous models by many theoreticians. It is at the intersection of the diverse biological reality of the brain and the difficulty of the computational problem of disentangling the threads of temporal order that we find new and powerful computational principles: symbolic computation at the level of single neurons via dendritic plateau potentials, embedding history in delayed feedback dynamics or consecutive filter responses, or the idea that learning a generalized differential description of a system can largely forgo the need to remember the past; instead, patterns can freely be generated. Together, the varied challenges posed by information ordered on different, asynchronous timescales require a diverse palette of solutions. At the same time, computation and the structure imposed by time are
deeply connected.
|
7 |
Cellular dynamics and stable chaos in balanced networks. Puelma Touzel, Maximilian. 30 January 2015.
No description available.
|
8 |
On the processing of vowels in the mammalian auditory system. Honey, Christian. January 2013.
The mammalian auditory system generates representations of the physical world in terms of auditory objects. To decide which object class a particular sound belongs to, the auditory system must recognise the patterns of acoustic components that form the acoustic “fingerprint” of the sound’s auditory class. Where in the central auditory system such patterns are detected, and what form the neural processing that underlies their detection takes, are unanswered questions in sensory neurophysiology. In the research conducted for this thesis, I used artificial vowel sounds to explore the neural and perceptual characteristics of auditory object recognition in rats. I recorded cortical responses from the primary auditory cortex (A1) in anaesthetised rats and determined how well the spiking responses evoked by artificial vowels resolve the spectral components that define vowel classes in human perception. The recognition of an auditory class rests on the ability to detect the combination of spectral components that all member sounds of the class share. I generated and evaluated models of how A1 responses integrate the acoustic components that define human vowel classes. The hippocampus is a candidate area for neural responses that are specific to particular object classes. In this thesis, I also report the results of a collaboration in which we investigated how the hippocampus responds to vowels in awake, behaving animals. Finally, I explored the processing of vowels behaviourally, testing the perceptual ability of rats to discriminate and classify vowels, and in particular whether rats use combinations of spectral components to recognise members of vowel classes. For the behavioural training I built a novel integrated housing and training cage that allows rats to train themselves on auditory recognition tasks. Combining the results and methods presented in this thesis will help reveal how the mammalian auditory system recognises auditory objects.
|
9 |
Changes in functional connectivity due to modulation by task and disease. Madugula, Sasidhar. January 2013.
Soon after the advent of signal-recording techniques in the brain, functional connectivity (FC), a measure of interregional neural interactions, became an important tool for assessing brain function and its relation to structure. It was discovered that certain groups of brain regions corresponding to behavioural domains are organized into intrinsic connectivity networks (ICNs). These networks were shown to exhibit high FC during rest and also during task. ICNs are not only delineated by areas that correspond to various behaviours, but can also be modulated, in the long and short term, by disease, learning, and task performance. The significance of changes in FC, permanent and transient, is poorly understood with respect to even the simplest ICNs, those corresponding to motor and visual regions. A better grasp of how to interpret these changes could elucidate the mechanisms and implications of patterns of FC change during therapy and basic tasks. The aim of this work is to examine long-term changes in the connectivity of several ICNs as a result of modulation by stroke and rehabilitation, and to assess short-term changes due to simple, continuous task performance in healthy volunteers. To explore long-term changes in ICN connectivity, fifteen hemiparetic stroke patients underwent resting-state scanning and behavioural testing before and after a two-week session of Constraint-Induced Movement Therapy (CIMT). It was found that therapy led to localized increases in FC within the sensorimotor ICN. To assess transient changes in FC with task, sixteen healthy volunteers underwent a series of scans during rest, continuous performance of a non-demanding finger-tapping task, viewing of a continuous visual stimulus, and a combined (but uncoupled) visual and motor task.
Group Independent Component Analysis (ICA) revealed that canonical ICNs remained robustly connected during task conditions as well as during rest, and dual regression/seed analyses showed that visual and sensorimotor ICNs showed divergent patterns of changes in FC, with the former showing increased intra-network connectivity and the latter decreased intra-network connectivity. Additionally, it was found that task activation within ICNs has a relationship to these changes in FC. Overall, these results suggest that modulation of functional connectivity is a valuable and informative tool in the study of disease recovery and task performance.
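Functional connectivity of the kind measured here is commonly computed as the Pearson correlation between regional time series; a toy sketch with synthetic signals (the shared driving signal and noise levels are assumptions for illustration, not this study's fMRI data):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 300
shared = rng.standard_normal(T)                # common driving signal
roi_a = shared + 0.5 * rng.standard_normal(T)  # two coupled regions
roi_b = shared + 0.5 * rng.standard_normal(T)
roi_c = rng.standard_normal(T)                 # an unrelated region

def fc(x, y):
    """Functional connectivity as the Pearson correlation of two time series."""
    return float(np.corrcoef(x, y)[0, 1])

print(fc(roi_a, roi_b))  # high: the regions share a driving signal
print(fc(roi_a, roi_c))  # near zero: no shared signal
```

Increases or decreases in such pairwise correlations, computed within or between ICNs, are the "changes in FC" tracked across therapy and task conditions.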
|
10 |
The computational neuroscience of head direction cells. Walters, Daniel Matthew. January 2011.
Head direction cells signal the orientation of the head in the horizontal plane. This thesis shows how some of the known head direction cell response properties might develop through learning. The research methodology employed is the computer simulation of neural network models of head direction cells that self-organize through learning. The preferred firing directions of head direction cells will change in response to the manipulation of distal visual cues, but not in response to the manipulation of proximal visual cues. Simulation results are presented of neural network models that learn to form separate representations of distal and proximal visual cues that are presented simultaneously as visual input to the network. These results demonstrate the computation required for a subpopulation of head direction cells to learn to preferentially respond to distal visual cues. Within a population of head direction cells, the angular distance between the preferred firing directions of any two cells is maintained across different environments. It is shown how a neural network model can learn to maintain the angular distance between the learned preferred firing directions of head direction cells across two different visual training environments. A population of head direction cells can update the population representation of the current head direction, in the absence of visual input, using internal idiothetic (self-generated) motion signals alone. This is called the path integration of head direction. It is important that the head direction cell system updates its internal representation of head direction at the same speed as the animal is rotating its head. Neural network models are simulated that learn to perform the path integration of head direction, using solely idiothetic signals, at the same speed as the head is rotating.
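A minimal sketch of idiothetic path integration with a population readout follows; the cosine tuning, cell count, and population-vector decoder are illustrative assumptions, not the trained networks simulated in the thesis:

```python
import numpy as np

N = 32
preferred = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

def activity(theta):
    """Rectified cosine tuning: each cell fires most at its preferred direction."""
    return np.maximum(0.0, np.cos(preferred - theta))

def decode(r):
    """Population-vector readout of the represented heading (radians)."""
    return float(np.angle(np.sum(r * np.exp(1j * preferred))))

# Integrate an idiothetic angular-velocity signal for 1 s at 90 deg/s,
# with no visual input, then read the heading back from the population.
theta, dt = 0.0, 0.01
for _ in range(100):
    theta = (theta + (np.pi / 2.0) * dt) % (2.0 * np.pi)

print(np.degrees(decode(activity(theta))))  # approximately 90
```

The key requirement discussed above is that this update run at the same speed as the head actually rotates; here that is guaranteed by construction, whereas the thesis's networks must learn it.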
|