601

Adaptively-Halting RNN for Tunable Early Classification of Time Series

Hartvigsen, Thomas 11 November 2018 (has links)
Early time series classification is the task of predicting the class label of a time series before it is observed in its entirety. In time-sensitive domains where information is collected over time, it is worth sacrificing some classification accuracy in favor of earlier predictions, ideally early enough for actions to be taken. However, since accuracy and earliness are contradictory objectives, a solution to this problem must find a task-dependent trade-off. There are two common state-of-the-art methods. The first involves an analyst selecting a timestep at which all predictions must be made. This does not capture earliness on a case-by-case basis: if the selected timestep is too early, all later signals are missed, and if a signal happens early, the classifier still waits to generate a prediction. The second method is the exhaustive search for signals, which encodes no timing information and is not scalable to high dimensions or long time series. We design EARLIEST, the first neural network-based early classification model, to tackle this multi-objective optimization problem, jointly learning (1) at which time step to halt and generate predictions and (2) how to classify the time series. Each of these is learned based on the task and data features. We achieve an analyst-controlled balance between the goals of earliness and accuracy by pairing a recurrent neural network that learns to classify time series as a supervised learning task with a stochastic controller network that learns a halting policy as a reinforcement learning task. The halting policy dictates sequential decisions, one per timestep, of whether or not to halt the recurrent neural network and classify the time series early. This pairing of networks optimizes a global objective function that incorporates both earliness and accuracy.
We validate our method via critical clinical prediction tasks in the MIMIC III database from the Beth Israel Deaconess Medical Center, along with another publicly available time series classification dataset. We show that EARLIEST outperforms two state-of-the-art LSTM-based early classification methods. Additionally, we dig deeper into our model's performance using a synthetic dataset, which shows that EARLIEST learns to halt when it observes signals without having explicit access to signal locations. The contributions of this work are threefold. First, our method is the first neural network-based solution to early classification of time series, bringing the recent successes of deep learning to this problem. Second, we present the first reinforcement learning-based solution to the unsupervised nature of early classification, learning the underlying distributions of signals without access to this information, through trial and error. Third, we propose the first joint optimization of earliness and accuracy, allowing learning of complex relationships between these contradictory goals.
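As a rough structural sketch of the pairing described above, a recurrent network updates a hidden state while a stochastic controller reads that state and samples a halt-or-continue action at each timestep. All names, dimensions, and weights below are invented and untrained; the actual EARLIEST model and its reinforcement learning updates are not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def earliest_forward(series, Wh, Wx, w_halt, W_cls, lam=0.05):
    """Run an RNN over a time series; at each step a stochastic
    controller reads the hidden state and decides whether to halt.
    `lam` biases halting earlier as time passes, standing in for the
    analyst-set earliness/accuracy trade-off knob described in the text."""
    h = np.zeros(Wh.shape[0])
    t = 0
    for t, x in enumerate(series):
        h = np.tanh(Wh @ h + Wx @ x)                      # recurrent update
        p_halt = 1.0 / (1.0 + np.exp(-(w_halt @ h + lam * t)))
        if rng.random() < p_halt:                         # sample halting action
            break
    return int(np.argmax(W_cls @ h)), t                   # (class, halting step)

# Toy usage with random (untrained) weights
d, n = 8, 4
Wh = rng.normal(0, 0.5, (n, n))
Wx = rng.normal(0, 0.5, (n, d))
w_halt = rng.normal(0, 0.5, n)
W_cls = rng.normal(0, 0.5, (2, n))
series = rng.normal(0, 1, (50, d))
label, halted_at = earliest_forward(series, Wh, Wx, w_halt, W_cls)
```

In the actual model the halting decisions would be trained with a policy-gradient objective that rewards both accuracy and earliness; here the loop only illustrates the control flow.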
602

Extended Kalman filter based pruning algorithms and several aspects of neural network learning. / CUHK electronic theses & dissertations collection

January 1998 (has links)
by John Pui-Fai Sum. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (p. 155-[163]). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web.
603

Development and Application of pH-sensitive Fluorescent Probes to Study Synaptic Activity in the Brain

Dunn, Matthew R. January 2015 (has links)
This thesis describes efforts at the interface of chemistry and neuroscience to design and characterize fluorescent probes capable of tracing neurotransmitters from individual release sites in brain tissue. As part of the Fluorescent False Neurotransmitters (FFNs) program, small organic fluorophores have been developed that undergo uptake into specific presynaptic release sites and synaptic vesicles by utilizing the native protein machinery, and can then be released during neuronal firing. The most advanced generation of FFNs is pH-sensitive, displaying an increase in fluorescence when released from the acidic vesicular lumen into the extracellular space, called an “FFN flash.” Chapter 2 explores the utility of the dopamine-selective and pH-sensitive functionality of FFN102 to study the mechanisms that regulate changes in presynaptic plasticity, a critical component of neurotransmission. This included using the FFN flash to quantitatively trace dopamine release, changes in the release probability of individual release sites, and changes in vesicular loading that can affect quantal size. The second goal of this thesis research, as detailed in Chapters 3 and 4, was to expand the substrate scope of the FFN program to neurotransmitter systems other than dopamine. Chapter 3 describes the identification of a fluorescent phenylpyridinium, APP+, with excellent labeling of dopamine, norepinephrine, and serotonin neurons; however, the properties of the probe were found to be ill-suited for measuring neurotransmitter release. As a result, it was concluded that this class of compounds was not suitable for generating viable FFN leads. In contrast, Chapter 4 highlights the design, synthesis, and screening that generated the novel noradrenergic-specific FFN, FFN270. This probe was further tested in acute murine brain slices, where it labeled noradrenergic neurons and was demonstrated to release upon stimulation.
This chapter also describes the application of this compound in a series of in vivo experiments, where the ability to measure norepinephrine release from individual release sites was demonstrated in a living animal for the first time. This work opens the possibility for many exciting future FFN experiments studying the presynaptic regulation of neurotransmission in vivo.
604

The Role of the Clustered Protocadherins in the Assembly of Olfactory Neural Circuits

Mountoufaris, George January 2016 (has links)
The clustered protocadherins (Pcdh α, β & γ) provide individual neurons with cell surface diversity. However, the importance of Pcdh-mediated diversity in neural circuit assembly, and how it may promote neuronal connectivity, remains largely unknown. Moreover, to date, Pcdh in vivo function has been studied at the level of individual gene clusters; whole cluster-wide function has not been addressed. Here I examine the role of all three Pcdh gene clusters in olfactory sensory neurons (OSNs), a neuronal type that expresses all three types of Pcdhs, and in addition I address the role of Pcdh-mediated diversity in their wiring. When OSNs share a dominant single Pcdh identity (α, β & γ), their axons fail to form distinct glomeruli, suggestive of inappropriate self-recognition of neighboring axons (loss of non-self discrimination). By contrast, deletion of the entire α, β, γ Pcdh gene cluster, but not of each individual cluster alone, leads to loss of self-recognition and self-avoidance; thus, OSN axons fail to properly arborize. I conclude that Pcdh expression is necessary for self-recognition in OSNs, whereas its diversity allows distinction between self and non-self. Both of these functions are required for OSNs to connect and assemble into functional circuits in the olfactory bulb. My results also reveal neuron-type-specific differences in the requirement for specific Pcdh gene clusters and demonstrate significant redundancy between Pcdh isoforms in the olfactory system.
605

Methods for Building Network Models of Neural Circuits

DePasquale, Brian David January 2016 (has links)
Artificial recurrent neural networks (RNNs) are powerful models for understanding and modeling dynamic computation in neural circuits. As such, RNNs that have been constructed to perform tasks analogous to typical behaviors studied in systems neuroscience are useful tools for understanding the biophysical mechanisms that mediate those behaviors. There has been significant progress in recent years developing gradient-based learning methods to construct RNNs. However, the majority of this progress has been restricted to network models that transmit information through continuous state variables since these methods require the input-output function of individual neuronal units to be differentiable. Overwhelmingly, biological neurons transmit information by discrete action potentials. Spiking model neurons are not differentiable and thus gradient-based methods for training neural networks cannot be applied to them. This work focuses on the development of supervised learning methods for RNNs that do not require the computation of derivatives. Because the methods we develop do not rely on the differentiability of the neural units, we can use them to construct realistic RNNs of spiking model neurons that perform a variety of benchmark tasks, and also to build networks trained directly from experimental data. Surprisingly, spiking networks trained with these non-gradient methods do not require significantly more neural units to perform tasks than their continuous-variable model counterparts. The crux of the method draws a direct correspondence between the dynamical variables of more abstract continuous-variable RNNs and spiking network models. The relationship between these two commonly used model classes has historically been unclear and, by resolving many of these issues, we offer a perspective on the appropriate use and interpretation of continuous-variable models as they relate to understanding network computation in biological neural circuits. 
Although the main advantage of these methods is their ability to construct realistic spiking network models, they can equally well be applied to continuous-variable network models. As an example, we construct continuous-variable RNNs whose performance and computational cost are competitive with those of traditional derivative-based training methods, and which outperform previous non-gradient-based network training approaches. Collectively, this thesis presents efficient methods for constructing realistic neural network models that can be used to understand computation in biological neural networks, and provides a unified perspective on how the dynamic quantities in these models relate to each other and to quantities that can be observed and extracted from experimental recordings of neurons.
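The abstract does not spell out the training procedure, but recursive least squares (RLS) is a standard derivative-free way to fit a linear readout of a recurrent network, which gives the flavor of gradient-free training. Everything below, including the fixed random network and the teacher-generated target, is an invented toy, not the thesis's method.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, dt = 50, 300, 0.1
J = 0.9 * rng.normal(0, 1 / np.sqrt(N), (N, N))  # fixed recurrent weights
b = rng.normal(0, 1, N)                          # fixed input weights
w_true = rng.normal(0, 1, N) / np.sqrt(N)        # "teacher" defining the target
u = np.sin(0.1 * np.arange(T))                   # input drive

w = np.zeros(N)   # learned readout: the only trained parameters
P = np.eye(N)     # running inverse correlation matrix for RLS
errors = []
x = np.zeros(N)
for t in range(T):
    r = np.tanh(x)
    x = x + dt * (-x + J @ r + b * u[t])   # driven leaky rate dynamics
    target = w_true @ r                     # teacher-generated target signal
    e = w @ r - target                      # readout error before the update
    errors.append(abs(e))
    # Recursive least squares update: no derivatives of the network needed
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w -= e * k
```

Because the update touches only the linear readout, the same recipe applies whether the units are differentiable rate neurons (as here) or non-differentiable spiking neurons, which is the property the thesis exploits.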
606

A Novel Circuit Model of Contextual Modulation and Normalization in Primary Visual Cortex

Rubin, Daniel Brett January 2012 (has links)
The response of a neuron encoding information about a sensory stimulus is influenced by the context in which that information is presented. In the primary visual cortex (area V1), neurons respond selectively to stimuli presented to a relatively constrained region of visual space known as the classical receptive field (CRF). These responses are influenced by stimuli in a much larger region of visual space known as the extra-classical receptive field (eCRF). Although they cannot directly evoke a response from the neuron, surround stimuli in the eCRF provide the context for the input to the CRF. Though the past few decades of research have revealed many details of the complex and nuanced interactions between the CRF and eCRF, the circuit mechanisms underlying these interactions are still unknown. In this thesis, we present a simple, novel cortical circuit model that can account for a surprisingly diverse array of eCRF properties. This model relies on extensive recurrent interactions between excitatory and inhibitory neurons, connectivity that is strongest between neurons with similar stimulus preferences, and an expansive input-output neuronal nonlinearity. There is substantial evidence for all of these features in V1. Through analytical and computational modeling techniques, we demonstrate how and why this circuit is able to account for such a comprehensive array of contextual modulations. In a linear network model, we demonstrate how surround suppression of both excitatory and inhibitory neurons is achieved through the selective amplification of spatially-periodic patterns of activity. This amplification relies on the network operating as an inhibition-stabilized network, a dynamic regime previously shown to account for the paradoxical decrease in inhibition during surround suppression (Ozeki et al., 2009).
With the addition of nonlinearity, effective connectivity strength scales with firing rate, and the network can transition between different dynamic regimes as a function of input strength. By moving into and out of the inhibition-stabilized state, the model can reproduce a number of contrast-dependent changes in the eCRF without requiring any asymmetry in the intrinsic contrast-response properties of the cells. This same model also provides a biologically plausible mechanism for cortical normalization, an operation that has been shown to be ubiquitous in V1. Through a winner-take-all population response, we demonstrate how this network undergoes a strong reduction in trial-to-trial variability at stimulus onset. We also propose a novel mechanism for attentional modulation in visual cortex. We then go on to test several of the critical predictions of the model using single unit electrophysiology. From these experiments, we find ample evidence for the spatially-periodic patterns of activity predicted by the model. Lastly, we show how this same circuit motif may underlie behavior in a higher cortical region, the lateral intraparietal area.
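The inhibition-stabilized regime invoked above can be illustrated with a minimal linear two-population rate model (the weights below are invented for illustration, not taken from the thesis): when recurrent excitation is strong enough that the excitatory subnetwork would be unstable on its own, adding input to the inhibitory cell paradoxically lowers its steady-state rate.

```python
import numpy as np

# Connection weights onto [E, I] from [E, I].  W_EE = 2.0 > 1 means the
# excitatory subnetwork is unstable alone and must be stabilized by
# inhibition: the defining property of an inhibition-stabilized network.
W = np.array([[2.0, -1.0],
              [2.5, -0.5]])

def steady_state(h):
    """Fixed point of tau dr/dt = -r + W r + h (linear rate model)."""
    return np.linalg.solve(np.eye(2) - W, h)

r_base = steady_state(np.array([1.0, 0.5]))   # baseline rates [rE, rI]
r_more = steady_state(np.array([1.0, 1.0]))   # extra drive to the I cell
# Paradoxically, the inhibitory rate *drops* when I cells receive more
# input, because the withdrawn excitatory drive overcompensates.
```

With these numbers the baseline rates are [1.0, 2.0] and the rates with extra inhibitory drive are [0.5, 1.5]: the inhibitory rate falls from 2.0 to 1.5, the paradoxical effect that Ozeki et al. used as a signature of the inhibition-stabilized regime.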
607

Modeling the impact of internal state on sensory processing

Lindsay, Grace Wilhelmina January 2018 (has links)
Perception is the result of more than just the unbiased processing of sensory stimuli. At each moment in time, sensory inputs enter a circuit already impacted by signals of arousal, attention, and memory. This thesis aims to understand the impact of such internal states on the processing of sensory stimuli. To do so, computational models meant to replicate known biological circuitry and activity were built and analyzed. Part one aims to replicate the neural activity changes observed in auditory cortex when an animal is passively versus actively listening. In part two, the impact of selective visual attention on performance is probed in two models: a large-scale abstract model of the visual system and a smaller, more biologically-realistic one. Finally in part three, a simplified model of Hebbian learning is used to explore how task context comes to impact prefrontal cortical activity. While the models used in this thesis range in scale and represent diverse brain areas, they are all designed to capture the physical processes by which internal brain states come to impact sensory processing.
608

Sparse algorithms for decoding and identification of neural circuits

Ukani, Nikul January 2018 (has links)
The brain, as an information processing machine, surpasses any man-made computational device, both in terms of its capabilities and its efficiency. Neuroscience research has made great strides since the foundational works of Cajal and Golgi. However, we still have very little understanding of the algorithmic underpinnings of the brain as an information processor. Identifying mechanistic models of the functional building blocks of the brain will have significant impact not just on neuroscience, but also on artificial computational systems. This provides the main motivation for the work presented in this thesis, namely: i) biologically-inspired algorithms that can be efficiently implemented in silico, ii) functional identification of the processing in certain types of neural circuits, and iii) a collaborative ecosystem for brain research in a model organism, towards the synergistic goal of understanding functional mechanisms employed by the brain. First, this thesis provides a highly parallelizable, biologically-inspired motion detection algorithm that is based upon the temporal processing of the local (spatial) phase of a visual stimulus. The relation of the phase-based motion detector to the widely studied Reichardt detector model is discussed. Examples are provided comparing the performance of the proposed algorithm with the Reichardt detector as well as the optic flow algorithm, which is the workhorse for motion detection in computer vision. Further, it is shown through examples that the phase-based motion detection model provides intuitive explanations for reverse-phi based illusory motion percepts. Then, tractable algorithms are presented for decoding stimuli encoded by, and identifying, neural circuits whose processing can be described by a second-order Volterra kernel (quadratic filter). It is shown that the Reichardt detector, as well as models of cortical complex cells, can be described by this structure.
Examples are provided for decoding of visual stimuli encoded by a population of Reichardt detector cells and complex cells, as well as their identification from observed spike times. Further, the phase-based motion detection model is shown to be equivalent to a second-order Volterra kernel acting on two normalized inputs. Subsequently, a general model that computes the ratio of two non-linear functionals, each comprising linear (first-order Volterra kernel) and quadratic (second-order Volterra kernel) filters, is proposed. It is shown that, even under these highly non-linear operations, a population of cells can encode stimuli faithfully using a number of measurements proportional to the bandwidth of the input stimulus. Tractable algorithms are devised to identify the divisive normalization model, and examples of identification are provided for both simulated and biological data. Additionally, an extended framework, comprising parallel channels of divisively normalized cells each subjected to further divisive normalization from lateral feedback connections, is proposed. An algorithm is formulated for identifying all the components in this extended framework from controlled stimulus presentation and observed output samples. Finally, the thesis puts forward the Fruit Fly Brain Observatory (FFBO), an initiative to enable a collaborative ecosystem for fruit fly brain research. Key applications in FFBO, and the software and computational infrastructure enabling them, are described along with case studies.
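As a rough sketch of the Reichardt correlator discussed above (the parameters and the sinusoidal stimulus are invented for illustration), the opponent stage multiplies a delayed copy of one input with the undelayed signal from a neighboring point and subtracts the mirror-image product; the sign of the time-averaged output then distinguishes the two motion directions.

```python
import numpy as np

def reichardt(a, b, delay):
    """Opponent Reichardt correlator: the delayed signal from one point is
    multiplied with the undelayed signal from its neighbor, and the
    mirror-symmetric product is subtracted before time-averaging."""
    return np.mean(a[:-delay] * b[delay:] - b[:-delay] * a[delay:])

t = np.arange(4000)
w, phase = 2 * np.pi / 100, np.pi / 4   # temporal frequency, spatial offset
a = np.sin(w * t)                        # signal at position x1
b_toward = np.sin(w * t - phase)         # wave moving from x1 toward x2
b_away = np.sin(w * t + phase)           # wave moving the opposite way

r_toward = reichardt(a, b_toward, delay=10)
r_away = reichardt(a, b_away, delay=10)
```

For a drifting sinusoid the mean opponent output works out to sin(phase) * sin(w * delay), so it is positive for one direction and flips sign when the motion reverses, which is exactly the direction selectivity the quadratic (second-order Volterra) structure captures.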
609

Learning and generalization in cerebellum-like structures

Dempsey, Conor January 2019 (has links)
The study of cerebellum-like circuits allows many points of entry. These circuits are often involved in very specific systems not found in all animals (for example, electrolocation in weakly electric fish) and thus can be studied with a neuroethological approach in mind. There are many cerebellum-like circuits found across the animal kingdom, and so studies of these systems allow us to make interesting comparative observations. Cerebellum-like circuits are involved in computations that touch many domains of theoretical interest: the formation of internal predictions, adaptive filtering, and the cancellation of self-generated sensory inputs. The latter is linked both conceptually and historically to philosophical questions about the nature of perception and the distinction between the self and the outside world. The computation thought to be performed in cerebellum-like structures is further related, especially through studies of the cerebellum, to theories of motor control and cognition. The cerebellum itself is known to be involved in much more than motor learning, its traditionally assumed function, with particularly interesting links to schizophrenia and to autism. The particular advantage of studying cerebellum-like structures is that they sit at such a rich confluence of interests while being involved in well-defined computations and being accessible at the synaptic, cellular, and circuit levels. In this thesis we present work on two cerebellum-like structures: the electrosensory lobe (ELL) of mormyrid fish and the dorsal cochlear nucleus (DCN) of mice. Recent work in ELL has shown that a temporal basis of granule cells allows the formation of predictions of the sensory consequences of a simple motor act, the electric organ discharge (EOD). Here we demonstrate that such predictions generalize between electric organ discharge rates, an ability crucial to the ethological relevance of such predictions.
We develop a model of how such generalization is made possible at the circuit level. In a second section we show that the DCN is able to adaptively cancel self-generated sounds. In the conclusion we discuss some differences between DCN and ELL and suggest future studies of both structures motivated by a reading of different aspects of the machine learning literature.
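The core computation described above, building a negative image of a predictable self-generated input from a temporal basis of granule-cell-like signals, can be sketched with a simple delta rule. The basis, the shape of the sensory consequence, and the learning rate below are all invented for illustration; this is not the thesis's circuit model.

```python
import numpy as np

T, n_basis, n_trials = 100, 100, 200
# Granule-cell-like temporal basis: each basis element fires at a
# different delay after the motor command (here, unit delta pulses).
basis = np.eye(n_basis, T)
# Self-generated sensory consequence of the motor act (to be cancelled),
# modeled as a smooth bump some time after the command.
consequence = np.exp(-((np.arange(T) - 30) ** 2) / 50.0)

w = np.zeros(n_basis)
residual_power = []
for trial in range(n_trials):
    prediction = w @ basis                # negative image built from the basis
    residual = consequence - prediction   # what the circuit actually senses
    w += 0.1 * basis @ residual           # anti-Hebbian-style delta rule
    residual_power.append(np.sum(residual ** 2))
```

Over trials the learned prediction converges to the stereotyped consequence and the sensed residual shrinks toward zero, the hallmark of adaptive cancellation of self-generated inputs in ELL and DCN.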
610

Food for thought : examining the neural circuitry regulating food choices

Medic, Nenad January 2015 (has links)
No description available.
