21

A three-dimensional coupled microelectrode and microfluidic array for neuronal interfacing

Choi, Yoonsu. January 2005 (has links)
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2006. / Michaels, Thomas E., Committee Member ; LaPlaca, Michelle, Committee Member ; Frazier, A. Bruno, Committee Member ; DeWeerth, Stephen P., Committee Member ; Allen, Mark G., Committee Chair.
22

Flexible Computation in Neural Circuits

Portes, Jacob January 2022 (has links)
This dissertation presents two lines of research that are superficially at opposite ends of the computational neuroscience spectrum. While models of adaptive motion detection in fruit flies and simulations inspired by monkeys that learn to control brain machine interfaces might seem like they have little in common, these projects both attempt to address the broad question of how real neural circuits flexibly compute.

Sensory systems flexibly adapt their processing properties across a wide range of environmental and behavioral conditions. Such variable processing complicates attempts to extract mechanistic understanding of sensory computations. This is evident in the highly constrained, canonical Drosophila motion detection circuit, where the core computation underlying direction selectivity is still debated despite extensive studies. The first part of this dissertation analyzes the filtering properties of four neural inputs to the OFF motion-detecting T5 cell in Drosophila. These four neurons, Tm1, Tm2, Tm4 and Tm9, exhibit state- and stimulus-dependent changes in the shape of their temporal responses, which become more biphasic under specific conditions. Summing these inputs within the framework of a connectome-constrained model of the circuit demonstrates that these shapes are sufficient to explain T5 responses to various motion stimuli. Thus, the stimulus- and state-dependent measurements reconcile motion computation with the anatomy of the circuit. These findings provide a clear example of how a basic circuit supports flexible sensory computation.

The most flexible neural circuits are circuits that can learn. Despite extensive theoretical work on biologically plausible learning rules, however, it has been difficult to obtain clear evidence about whether and how such rules are implemented in the brain. In the second part of this dissertation, I consider biologically plausible supervised- and reinforcement-learning rules and ask whether biased changes in network activity during learning can be used to determine which learning rule is being used. Supervised learning requires a credit-assignment model estimating the mapping from neural activity to behavior, and, in a biological organism, this model will inevitably be an imperfect approximation of the ideal mapping, leading to a bias in the direction of the weight updates relative to the true gradient. Reinforcement learning, on the other hand, requires no credit-assignment model and tends to make weight updates following the true gradient direction. I derive a metric to distinguish between learning rules by observing biased changes in the network activity during learning, given that the mapping from brain to behavior is known by the experimenter. Because brain-machine interface (BMI) experiments allow for perfect knowledge of this mapping, I focus on modeling a cursor-control BMI task using recurrent neural networks, and show that learning rules can be distinguished in simulated experiments using only observations that a neuroscience experimenter would plausibly have access to.
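The learning-rule comparison described above rests on a geometric contrast: supervised updates routed through an imperfect credit-assignment model are consistently tilted away from the true gradient, whereas reward-driven (node-perturbation) updates are noisy but follow the true gradient on average. The sketch below illustrates that contrast for a linear readout; the decoder D, the mismatched internal model D_hat, the REINFORCE-style estimator, and all sizes are illustrative assumptions, not the dissertation's networks or metric.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 20, 50, 2                   # toy sizes (assumptions, not from the thesis)

W = rng.standard_normal((n_hid, n_in)) * 0.1     # plastic weights being learned
D = rng.standard_normal((n_out, n_hid)) * 0.1    # BMI decoder: the known activity-to-cursor mapping
D_hat = D + 0.05 * rng.standard_normal(D.shape)  # the learner's imperfect credit-assignment model

x = rng.standard_normal(n_in)                    # a single input/target pair, for clarity
target = rng.standard_normal(n_out)
err = D @ (W @ x) - target

true_grad = np.outer(D.T @ err, x)               # exact gradient of 0.5*||D W x - target||^2
sup_update = -np.outer(D_hat.T @ err, x)         # supervised update through the imperfect model

def reinforce_update(sigma=0.1):
    """Node-perturbation update: noisy, but unbiased with respect to the true gradient."""
    r = W @ x
    xi = sigma * rng.standard_normal(n_hid)
    d_reward = -np.sum((D @ (r + xi) - target) ** 2) + np.sum((D @ r - target) ** 2)
    return d_reward / sigma ** 2 * np.outer(xi, x)

def cosine(a, b):
    return np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b))

n_samples = 20000
single = reinforce_update()                      # one noisy update, kept for comparison
updates_sum = single.copy()
for _ in range(n_samples - 1):
    updates_sum += reinforce_update()
mean_rl = updates_sum / n_samples

print("cos(supervised update, -true gradient):      ", round(cosine(sup_update, -true_grad), 3))
print("cos(mean REINFORCE update, -true gradient):  ", round(cosine(mean_rl, -true_grad), 3))
print("cos(single REINFORCE update, -true gradient):", round(cosine(single, -true_grad), 3))
```

With these assumptions, the supervised update stays imperfectly aligned with the true gradient no matter how many trials are averaged, while the averaged perturbation-based updates align almost perfectly even though any single one barely does.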
23

World Wide Web based layout synthesis for analogue modules

Nalbantis, Dimitris January 2001 (has links)
No description available.
24

The Role of the Clustered Protocadherins in the Assembly of Olfactory Neural Circuits

Mountoufaris, George January 2016 (has links)
The clustered protocadherins (Pcdh α, β & γ) provide individual neurons with cell surface diversity. However, the importance of Pcdh-mediated diversity in neural circuit assembly, and how it may promote neuronal connectivity, remains largely unknown. Moreover, to date, Pcdh in vivo function has been studied at the level of individual gene clusters; function across the whole cluster has not been addressed. Here I examine the role of all three Pcdh gene clusters in olfactory sensory neurons (OSNs), a neuronal type that expresses all three types of Pcdhs, and in addition I address the role of Pcdh-mediated diversity in their wiring. When OSNs share a dominant single Pcdh identity (α, β & γ), their axons fail to form distinct glomeruli, suggestive of inappropriate self-recognition of neighboring axons (loss of non-self discrimination). By contrast, deletion of the entire α, β, γ Pcdh gene cluster, but not of each individual cluster alone, leads to loss of self-recognition and self-avoidance; thus, OSN axons fail to arborize properly. I conclude that Pcdh expression is necessary for self-recognition in OSNs, whereas its diversity allows distinction between self and non-self. Both of these functions are required for OSNs to connect and assemble into functional circuits in the olfactory bulb. My results also reveal neuron-type-specific differences in the requirement for specific Pcdh gene clusters and demonstrate significant redundancy between Pcdh isoforms in the olfactory system.
25

A Novel Circuit Model of Contextual Modulation and Normalization in Primary Visual Cortex

Rubin, Daniel Brett January 2012 (has links)
The response of a neuron encoding information about a sensory stimulus is influenced by the context in which that information is presented. In the primary visual cortex (area V1), neurons respond selectively to stimuli presented to a relatively constrained region of visual space known as the classical receptive field (CRF). These responses are influenced by stimuli in a much larger region of visual space known as the extra-classical receptive field (eCRF). Because they cannot directly evoke a response from the neuron, surround stimuli in the eCRF provide the context for the input to the CRF. Though the past few decades of research have revealed many details of the complex and nuanced interactions between the CRF and eCRF, the circuit mechanisms underlying these interactions are still unknown.

In this thesis, we present a simple, novel cortical circuit model that can account for a surprisingly diverse array of eCRF properties. This model relies on extensive recurrent interactions between excitatory and inhibitory neurons, connectivity that is strongest between neurons with similar stimulus preferences, and an expansive input-output neuronal nonlinearity. There is substantial evidence for all of these features in V1. Through analytical and computational modeling techniques, we demonstrate how and why this circuit is able to account for such a comprehensive array of contextual modulations.

In a linear network model, we demonstrate how surround suppression of both excitatory and inhibitory neurons is achieved through the selective amplification of spatially-periodic patterns of activity. This amplification relies on the network operating as an inhibition-stabilized network, a dynamic regime previously shown to account for the paradoxical decrease in inhibition during surround suppression (Ozeki et al., 2009). With the addition of nonlinearity, effective connectivity strength scales with firing rate, and the network can transition between different dynamic regimes as a function of input strength. By moving into and out of the inhibition-stabilized state, the model can reproduce a number of contrast-dependent changes in the eCRF without requiring any asymmetry in the intrinsic contrast-response properties of the cells. This same model also provides a biologically plausible mechanism for cortical normalization, an operation that has been shown to be ubiquitous in V1. Through a winner-take-all population response, we demonstrate how this network undergoes a strong reduction in trial-to-trial variability at stimulus onset. We also propose a novel mechanism for attentional modulation in visual cortex. We then go on to test several of the critical predictions of the model using single unit electrophysiology. From these experiments, we find ample evidence for the spatially-periodic patterns of activity predicted by the model. Lastly, we show how this same circuit motif may underlie behavior in a higher cortical region, the lateral intraparietal area.
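The inhibition-stabilized behavior invoked above can be illustrated with a two-unit excitatory/inhibitory rate model with an expansive power-law nonlinearity: because the effective recurrent gain grows with firing rate, weak drive leaves the network outside the inhibition-stabilized regime, while strong drive pushes it inside, where extra input to the inhibitory unit paradoxically lowers its steady-state rate. This is a minimal sketch with illustrative parameter values, not the circuit model developed in the thesis.

```python
import numpy as np

# Two-unit E/I rate model with an expansive power-law nonlinearity.
# All parameter values here are illustrative assumptions.
k, n = 0.04, 2.0                            # gain and exponent of the input-output nonlinearity
W = np.array([[1.25, -0.65],                # [W_EE, W_EI]
              [1.20, -0.50]])               # [W_IE, W_II]

def steady_state(h, r0=np.zeros(2), dt=0.005, steps=8000):
    """Euler-integrate dr/dt = -r + k*[W r + h]_+^n to an approximate steady state."""
    r = r0.copy()
    for _ in range(steps):
        r = r + dt * (-r + k * np.maximum(W @ r + h, 0.0) ** n)
    return r

for c in (1.0, 5.0, 20.0):                  # feedforward drive ("contrast")
    r = steady_state(np.array([c, c]))
    r_probe = steady_state(np.array([c, c + 1.0]), r0=r)   # extra input to the I unit only
    print(f"drive={c:5.1f}  rE={r[0]:7.2f}  rI={r[1]:7.2f}  "
          f"change in rI after exciting I: {r_probe[1] - r[1]:+7.3f}")
```

At low drive the inhibitory rate rises when the I unit receives extra input; at high drive, where recurrent excitation alone would be unstable, the same probe lowers the inhibitory rate, which is the paradoxical signature of the inhibition-stabilized regime.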
26

Modeling the impact of internal state on sensory processing

Lindsay, Grace Wilhelmina January 2018 (has links)
Perception is the result of more than just the unbiased processing of sensory stimuli. At each moment in time, sensory inputs enter a circuit already impacted by signals of arousal, attention, and memory. This thesis aims to understand the impact of such internal states on the processing of sensory stimuli. To do so, computational models meant to replicate known biological circuitry and activity were built and analyzed. Part one aims to replicate the neural activity changes observed in auditory cortex when an animal is passively versus actively listening. In part two, the impact of selective visual attention on performance is probed in two models: a large-scale abstract model of the visual system and a smaller, more biologically-realistic one. Finally, in part three, a simplified model of Hebbian learning is used to explore how task context comes to impact prefrontal cortical activity. While the models used in this thesis range in scale and represent diverse brain areas, they are all designed to capture the physical processes by which internal brain states come to impact sensory processing.
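For the Hebbian component mentioned in part three, a minimal point of reference is Oja's rule, a normalized Hebbian update, applied to a single linear unit whose inputs switch between two statistical "contexts"; the patterns, sizes, and learning rate below are illustrative assumptions, not the simplified model used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in = 10
w = 0.01 * rng.standard_normal(n_in)       # incoming weights of a single model unit
eta = 0.01

# Two "task contexts", each characterized by a different dominant input pattern
# (purely illustrative assumptions).
pattern = {"A": np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0], dtype=float),
           "B": np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1], dtype=float)}

for context in ["A"] * 3000 + ["B"] * 3000:
    x = pattern[context] + 0.1 * rng.standard_normal(n_in)
    y = w @ x                              # linear response
    w += eta * y * (x - y * w)             # Oja's rule: Hebbian growth with implicit normalization

print(np.round(w, 2))  # weights align (up to sign) with the dominant pattern of the most recent context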
27

Sparse algorithms for decoding and identification of neural circuits

Ukani, Nikul January 2018 (has links)
The brain, as an information processing machine, surpasses any man-made computational device, both in terms of its capabilities and its efficiency. Neuroscience research has made great strides since the foundational works of Cajal and Golgi. However, we still have very little understanding of the algorithmic underpinnings of the brain as an information processor. Identifying mechanistic models of the functional building blocks of the brain will have significant impact not just on neuroscience, but also on artificial computational systems. This provides the main motivation for the work presented in this thesis: i) biologically-inspired algorithms that can be efficiently implemented in silico, ii) functional identification of the processing in certain types of neural circuits, and iii) a collaborative ecosystem for brain research in a model organism, all towards the synergistic goal of understanding the functional mechanisms employed by the brain.

First, this thesis provides a highly parallelizable, biologically-inspired motion detection algorithm that is based upon the temporal processing of the local (spatial) phase of a visual stimulus. The relation of the phase-based motion detector to the widely studied Reichardt detector model is discussed. Examples are provided comparing the performance of the proposed algorithm with the Reichardt detector as well as the optic flow algorithm, which is the workhorse for motion detection in computer vision. Further, it is shown through examples that the phase-based motion detection model provides intuitive explanations for reverse-phi based illusory motion percepts.

Then, tractable algorithms are presented for decoding with, and identification of, neural circuits whose processing can be described by a second-order Volterra kernel (quadratic filter). It is shown that the Reichardt detector, as well as models of cortical complex cells, can be described by this structure. Examples are provided for decoding of visual stimuli encoded by a population of Reichardt detector cells and complex cells, as well as their identification from observed spike times. Further, the phase-based motion detection model is shown to be equivalent to a second-order Volterra kernel acting on two normalized inputs.

Subsequently, a general model that computes the ratio of two non-linear functionals, each comprising linear (first-order Volterra kernel) and quadratic (second-order Volterra kernel) filters, is proposed. It is shown that, even under these highly non-linear operations, a population of cells can encode stimuli faithfully using a number of measurements that is proportional to the bandwidth of the input stimulus. Tractable algorithms are devised to identify the divisive normalization model, and examples of identification are provided for both simulated and biological data. Additionally, an extended framework, comprising parallel channels of divisively normalized cells each subjected to further divisive normalization from lateral feedback connections, is proposed. An algorithm is formulated for identifying all the components in this extended framework from controlled stimulus presentations and observed output samples.

Finally, the thesis puts forward the Fruit Fly Brain Observatory (FFBO), an initiative to enable a collaborative ecosystem for fruit fly brain research. Key applications in FFBO, and the software and computational infrastructure enabling them, are described along with case studies.
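For readers unfamiliar with the Reichardt detector that the phase-based model is compared against, the sketch below implements a basic Hassenstein-Reichardt correlator on a drifting grating. It is a textbook construction with illustrative filter and stimulus parameters, not the thesis's phase-based detector or its Volterra-kernel formulation.

```python
import numpy as np

def reichardt_detector(stimulus, tau=5.0, dt=1.0):
    """Hassenstein-Reichardt correlator over a 1-D stimulus (time x space).

    Each output sample correlates the low-pass-filtered signal at one point with the
    unfiltered signal at its right-hand neighbour, minus the mirror-symmetric term;
    with this arrangement, positive output indicates rightward motion.
    """
    alpha = dt / (tau + dt)                     # first-order low-pass coefficient
    lp = np.zeros_like(stimulus)
    for t in range(1, stimulus.shape[0]):       # causal temporal low-pass filter
        lp[t] = lp[t - 1] + alpha * (stimulus[t] - lp[t - 1])
    left, right = stimulus[:, :-1], stimulus[:, 1:]
    lp_left, lp_right = lp[:, :-1], lp[:, 1:]
    return lp_left * right - left * lp_right    # mirror-antisymmetric correlation

# A rightward-drifting sinusoidal grating should give a positive mean response.
t = np.arange(200)[:, None]
x = np.arange(40)[None, :]
grating = np.sin(2 * np.pi * (0.1 * x - 0.05 * t))      # drifts rightward over time
print("mean HRC output (positive = rightward):",
      round(float(reichardt_detector(grating).mean()), 3))
```

Reversing the drift direction (e.g. using 0.1 * x + 0.05 * t in the grating) flips the sign of the mean output.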
28

Learning and generalization in cerebellum-like structures

Dempsey, Conor January 2019 (has links)
The study of cerebellum-like circuits allows many points of entry. These circuits are often involved in very specific systems not found in all animals (for example, electrolocation in weakly electric fish) and thus can be studied with a neuroethological approach in mind. There are many cerebellum-like circuits found across the animal kingdom, and so studies of these systems allow us to make interesting comparative observations. Cerebellum-like circuits are involved in computations that touch many domains of theoretical interest - the formation of internal predictions, adaptive filtering, cancellation of self-generated sensory inputs. The latter is linked both conceptually and historically to philosophical questions about the nature of perception and the distinction between the self and the outside world. The computation thought to be performed in cerebellum-like structures is further related, especially through studies of the cerebellum, to theories of motor control and cognition. The cerebellum itself is known to be involved in much more than motor learning, its traditionally assumed function, with particularly interesting links to schizophrenia and to autism. The particular advantage of studying cerebellum-like structures is that they sit at such a rich confluence of interests while being involved in well-defined computations and being accessible at the synaptic, cellular, and circuit levels. In this thesis we present work on two cerebellum-like structures: the electrosensory lobe (ELL) of mormyrid fish and the dorsal cochlear nucleus (DCN) of mice. Recent work in ELL has shown that a temporal basis of granule cells allows the formation of predictions of the sensory consequences of a simple motor act - the electric organ discharge (EOD). Here we demonstrate that such predictions generalize between electric organ discharge rates - an ability crucial to the ethological relevance of such predictions. We develop a model of how such generalization is made possible at the circuit level. In a second section we show that the DCN is able to adaptively cancel self-generated sounds. In the conclusion we discuss some differences between DCN and ELL and suggest future studies of both structures motivated by a reading of different aspects of the machine learning literature.
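The negative-image computation described for the ELL can be caricatured as adaptive filtering: weights from a temporal basis of granule-cell-like signals are adjusted, anti-Hebbian fashion, until their summed output cancels the self-generated input. The sketch below is a generic LMS-style toy with an assumed Gaussian-bump basis and an assumed reafferent waveform, not the circuit model developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

T, n_granule = 100, 30                 # time bins after each EOD command; number of granule cells
# Temporal basis: granule cells active at different delays after the motor command
# (Gaussian bumps tiling the interval -- an illustrative stand-in, not measured responses).
centers = np.linspace(0, T - 1, n_granule)
basis = np.exp(-0.5 * ((np.arange(T)[:, None] - centers[None, :]) / 4.0) ** 2)

# Assumed self-generated sensory consequence of the discharge that should be cancelled.
self_generated = np.exp(-np.arange(T) / 15.0) * np.sin(np.arange(T) / 5.0)

w = np.zeros(n_granule)                # granule-cell-to-principal-cell synaptic weights
eta = 0.05

for trial in range(500):
    sensory = self_generated + 0.05 * rng.standard_normal(T)   # reafference plus sensory noise
    negative_image = basis @ w
    residual = sensory + negative_image                        # what the principal cell outputs
    # Anti-Hebbian / LMS-style plasticity: each weight moves opposite to the correlation
    # between its granule cell's activity and the residual response.
    w -= eta * basis.T @ residual / T

print("fraction of the self-generated signal remaining:",
      round(float(np.linalg.norm(self_generated + basis @ w) / np.linalg.norm(self_generated)), 3))
```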
29

Food for thought : examining the neural circuitry regulating food choices

Medic, Nenad January 2015 (has links)
No description available.
30

Subtype diversification and synaptic specificity of stem cell-derived spinal inhibitory interneurons

Hoang, Phuong Thi January 2017 (has links)
During nervous system development, thousands of distinct neuronal cell types are generated and assembled into highly precise circuits. The proper wiring of these circuits requires that developing neurons recognize their appropriate synaptic partners. Analysis of a vertebrate spinal circuit that controls motor behavior reveals distinct synaptic connections of two types of inhibitory interneurons, a ventral V1 class that synapses with motor neurons and a dorsal dI4 class that selectively synapses with proprioceptive sensory neuron terminals that are located on or in close proximity to motor neurons. What are the molecular and cellular programs that instruct this remarkable synaptic specificity? Are only subsets of these interneurons capable of integrating into this circuit, or do all neurons within the same class behave similarly? The ability to answer such questions, however, is hampered both by the complexity of the spinal cord, where many different neuronal cell types can be found synapsing in the same area, and by the challenge of obtaining enough neurons of a particular subtype for analysis. Meanwhile, pluripotent stem cells have emerged as powerful tools for studying neural development, particularly because they can be differentiated to produce large numbers of diverse neuronal populations. Mouse embryonic stem cell-derived neurons can thus be used in a simplified in vitro system to study the development of specific neuronal cell types as well as the interactions between defined cell types in a controlled environment. Using stem cell-derived neurons, I investigated how the V1 and dI4 cardinal spinal classes differentiate into molecularly distinct subtypes and acquire cell type-specific functional properties, including synaptic connectivity. In Chapter Two, I describe the production of lineage-based reporter stem cell lines and optimized differentiation protocols for generating V1 and dI4 interneurons from mouse embryonic stem cells, including confirming that they have molecular and functional characteristics of their in vivo counterparts. In Chapter Three, I show that a well-known V1 interneuron subtype, the Renshaw cell, which mediates recurrent inhibition of motor neurons, can be efficiently generated from stem cell differentiation. Importantly, manipulation of the Notch signaling pathway in V1 progenitors impinges on V1 subtype differentiation and greatly enhances the generation of Renshaw cells. I further show that sustained retinoic acid signaling is critical for the specific development of the Renshaw cell subtype, suggesting that interneuron progenitor domain diversification may also be regulated by spatially-restricted cues during embryonic development. In Chapter Four, using a series of transplantation, rabies virus-based transsynaptic tracing, and optogenetics combined with whole-cell patch-clamp recording approaches, I demonstrate that stem cell-derived Renshaw cells exhibit significant differences in physiology and connectivity compared to other V1 subpopulations, suggesting that synaptic specificity of the Renshaw cell-motor neuron circuit can be modeled and studied in a simplified in vitro co-culture preparation. Finally, in Chapter Five, I describe ongoing investigations into molecular mechanisms of dI4 interneuron subtype diversification, as well as approaches to studying their synaptic specificity with proprioceptive sensory neurons.
Overall, my results suggest that our stem cell-based system is well-positioned to probe the functional diversity of molecularly-defined cell types. This work represents a novel use of embryonic stem cell-derived neurons for studying inhibitory spinal circuit assembly and will contribute to further understanding of neural circuit formation and function during normal development and potentially in diseased states.
