211

Spatiotemporal patterns of neural fields in a spherical cortex with general connectivity

Unknown Date (has links)
The human brain consists of billions of neurons, and these neurons pool together into groups at different scales. On one hand these neural entities tend to behave as single units; on the other hand they show collective macroscopic patterns of activity. The neural units communicate with each other and process information over time. This communication occurs through small electrical impulses which, at the macroscopic scale, are measurable as brain waves. The electric field produced collectively by macroscopic groups of neurons within the brain can be measured on the surface of the skull via a brain imaging modality called electroencephalography (EEG). The brain as a neural system has a heterogeneous connection topology, in which an area is not only connected homogeneously to its adjacent neighbors but can also transfer activity directly to distant areas [16]. The timing of these communications between different neural units gives rise to emergent spatiotemporal patterns. The dynamics of these patterns and the formation of neural activity on the cortical surface are influenced by the presence of long-range connections between heterogeneous neural units. Large-scale brain activity is thought to be involved in information processing and the implementation of the brain's cognitive functions. This research aims to determine how spatiotemporal pattern formation in the brain depends on its connection topology, which consists of homogeneous connections within local cortical areas alongside couplings between distant functional units as heterogeneous connections. The homogeneous connectivity, or synaptic weight distribution, representing the large-scale anatomy of cortex is assumed to depend on the Euclidean distance between interacting neural units. Altering the characteristics of the inhomogeneous pathways, used as control parameters, guides pattern formation through phase transitions at critical points. In this research, linear stability analysis is applied to a macroscopic neural field on a one-dimensional circular and a two-dimensional spherical model of the brain in order to find the destabilization mechanism and the subsequently emerging patterns. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
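For orientation only (a generic sketch, not taken from the dissertation; the kernel, firing-rate function, delay term, and the single heterogeneous pathway shown are illustrative assumptions), an Amari-type neural field with a distance-dependent homogeneous kernel and one long-range heterogeneous connection on the cortical surface Ω might be written as follows, with the linear stability analysis sketched in the comments:

```latex
% Sketch: homogeneous kernel w_h depending on surface distance d(x,y),
% axonal propagation speed c, and one heterogeneous pathway x_1 -> x_2.
\begin{equation}
  \tau \frac{\partial V(x,t)}{\partial t}
    = -V(x,t)
    + \int_{\Omega} w_h\big(d(x,y)\big)\, S\!\left(V\!\left(y,\, t - \tfrac{d(x,y)}{c}\right)\right) dy
    + w_{\mathrm{het}}\, \delta(x - x_2)\, S\!\left(V\!\left(x_1,\, t - \tfrac{d(x_1,x_2)}{c}\right)\right)
\end{equation}
% Linear stability on the sphere: perturb a stationary state V_0 with
% spherical harmonics, V(x,t) = V_0 + \epsilon\, e^{\lambda t}\, Y_l^m(x),
% and locate the phase transition where Re(\lambda) crosses zero as the
% heterogeneous coupling strength w_het (or its delay) is varied.
```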
212

Innovative microelectronic signal processing techniques for the recording and analysis of the human electroneurogram

Metcalfe, Benjamin January 2016 (has links)
Injuries involving the nervous system are among the most devastating and life-altering of all neurological disorders. The resulting loss of sensation and voluntary muscle control represents a drastic change in the individual's lifestyle and independence. Spinal cord injury affects over two hundred thousand people within the United States alone. While there have been many attempts to develop neural interfaces that can be used as part of a prosthetic device to improve the quality of life of such patients and contribute to the reduction of ongoing health care costs, the design of such a device has proved elusive. Direct access to the spinal cord requires potentially life-threatening surgery during which the dura, the protective covering surrounding the cord, must be opened, with a resulting high risk of infection. For this reason, research has focussed on the stimulation of and recording from the peripheral nerves in an attempt to restore the functionality that has been lost through spinal cord injury. This thesis is concerned with the current status and limitations of peripheral nerve interfaces that are designed for recording electrical signals directly from the nervous system using a technique called velocity selective recording. This technique exploits the relationship between axonal diameter, which is linked via anatomy to function, and the speed with which the axon conducts excitation. New techniques are developed that improve current methods for identifying and simulating neural signals, and power-efficient implementations of these methods are presented on modern microelectronic platforms. Results are presented from pioneering experiments in rat and pig that, for the first time, demonstrate the recording and analysis of the physiological electroneurogram using velocity-based methods. New methods are developed that enable the extraction of neuronal firing rates and thus the extraction of the information encoded within the nervous system.
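As an illustration of the delay-and-add idea underlying velocity selective recording (a minimal sketch under assumed electrode geometry and sampling parameters, not the signal-processing implementation developed in this thesis): signals from successive electrodes along a nerve cuff are artificially delayed to match a candidate conduction velocity and then summed, so action potentials travelling at that velocity add coherently while others do not.

```python
import numpy as np

def velocity_spectrum(channels, fs, spacing, velocities):
    """Delay-and-add estimate of activity per candidate conduction velocity.

    channels   : (n_electrodes, n_samples) array from a multi-electrode cuff
    fs         : sampling rate in Hz
    spacing    : inter-electrode spacing in metres (assumed uniform)
    velocities : candidate conduction velocities in m/s
    """
    n_elec, n_samp = channels.shape
    spectrum = []
    for v in velocities:
        summed = np.zeros(n_samp)
        for i in range(n_elec):
            # Delay (in samples) that aligns an action potential travelling at v.
            shift = int(round(i * spacing / v * fs))
            summed += np.roll(channels[i], -shift)
        # Coherent addition peaks when v matches the true conduction velocity.
        spectrum.append(np.max(np.abs(summed)))
    return np.array(spectrum)

# Hypothetical usage: 10 electrodes, 1 mm pitch, 100 kHz sampling (assumed values).
# rec = np.load("eng_recording.npy")   # shape (10, n_samples), assumed file
# vs = velocity_spectrum(rec, 100e3, 1e-3, np.linspace(5, 80, 60))
```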
213

Regulation of Synapse Development by Miniature Neurotransmission in vivo

Choi, Benjamin Jiwon January 2015 (has links)
Miniature neurotransmission is the trans-synaptic process where single synaptic vesicles spontaneously released from presynaptic neurons induce miniature postsynaptic potentials. Since their discovery over 60 years ago, miniature events have been found at every chemical synapse studied. However, the in vivo necessity for these small-amplitude events has remained enigmatic. In this thesis, I show that miniature neurotransmission is required for the normal structural maturation of Drosophila glutamatergic synapses in a developmental role that is not shared by evoked neurotransmission. Conversely, I find that increasing miniature events is sufficient to induce synaptic terminal growth. I show that miniature neurotransmission acts locally at terminals to regulate synapse maturation via a Trio guanine nucleotide exchange factor (GEF) and Rac1 GTPase molecular signaling pathway. My thesis study establishes that miniature neurotransmission, a universal but often-overlooked feature of synapses, has unique and essential functions in vivo.
214

Neural circuits mediating innate and learned behavior

Gore, Felicity May January 2015 (has links)
For many organisms the sense of smell is critical to survival. Some olfactory stimuli elicit innate responses that are mediated through hardwired circuits that have developed over long periods of evolutionary time. Most olfactory stimuli, however, have no inherent meaning. Instead, meaning must be imposed by learning during the lifetime of an organism. Despite the dominant influence of olfactory stimuli on animal behavior, the mechanisms by which odorants elicit learned behavioral responses remain poorly understood. All odor-evoked behaviors are initiated by the binding of an odorant to olfactory receptors located on sensory neurons in the nasal epithelium. Olfactory sensory neurons transmit this information to the olfactory bulb via spatially organized axonal projections such that individual odorants evoke a stereotyped map of bulbar activity. A subset of bulbar neurons, the mitral and tufted cells, relay olfactory information to higher brain structures that have been implicated in the generation of innate and learned behavioral responses, including the cortical amygdala and piriform cortex. Anatomical studies have demonstrated that the spatial stereotypy of the olfactory bulb is maintained in projections to the posterolateral cortical amygdala, a structure that is involved in the generation of innate odor-evoked responses. The projections of mitral and tufted cells to piriform cortex, however, appear to discard the spatial order of the olfactory bulb: each glomerulus sends spatially diffuse, apparently random projections across the entire cortex. This anatomy appears to constrain odor-evoked responses in piriform cortex: electrophysiological and imaging studies demonstrate that individual odorants activate sparse ensembles that are distributed across the extent of cortex, and individual piriform neurons exhibit discontinuous receptive fields such that they respond to structurally and perceptually similar and dissimilar odorants. It is therefore unlikely that olfactory representations in piriform have inherent meaning. Instead, these representations have been proposed to mediate olfactory learning. In accord with this, lesions of posterior piriform cortex prevent the expression of a previously acquired olfactory fear memory, and photoactivation of a random ensemble of piriform neurons can become entrained to both appetitive and aversive outcomes. Piriform cortex therefore plays a central role in olfactory fear learning. However, how meaning is imparted on olfactory representations in piriform remains largely unknown. We developed a strategy to manipulate the neural activity of representations of conditioned and unconditioned stimuli in the basolateral amygdala (BLA), a downstream target of piriform cortex that has been implicated in the generation of learned responses. This strategy allowed us to demonstrate that distinct neural ensembles represent an appetitive and an aversive unconditioned stimulus (US) in the BLA. Moreover, the activity of these representations can elicit innate responses as well as direct Pavlovian and instrumental learning. Finally, activity of an aversive US representation in the basolateral amygdala is required for learned olfactory and auditory fear responses. These data suggest that both olfactory and auditory stimuli converge on US representations in the BLA to generate learned behavioral responses.
Having identified a US representation in the BLA that receives convergent olfactory information to generate learned fear responses, we were then able to step back into the olfactory system and demonstrate that the BLA receives olfactory input via the monosynaptic projection from piriform cortex. These data suggest that aversive meaning is imparted on an olfactory representation in piriform cortex via reinforcement of its projections onto a US representation in the BLA. The work described in this thesis has identified mechanisms by which sensory stimuli generate appropriate behavioral responses. Manipulations of representations of unconditioned stimuli have identified a central role for US representations in the BLA in connecting sensory stimuli to both innate and learned behavioral responses. In addition, these experiments have suggested local mechanisms by which fear learning might be implemented in the BLA. Finally, we have identified a fundamental transformation through which a disordered olfactory representation in piriform cortex acquires meaning. Strikingly, this transformation appears to occur within three synapses of the periphery. These data, and the techniques we employ, therefore have the potential to significantly impact our understanding of the neural origins of motivated behavior.
215

Non-overlapping neural networks in Hydra vulgaris

Dupre, Christophe January 2018 (has links)
To understand the emergent properties of neural circuits it would be ideal to record the activity of every neuron in a behaving animal and decode how it relates to behavior. We have achieved this with the cnidarian Hydra vulgaris, using calcium imaging of genetically engineered animals to measure the activity of essentially all of its neurons. While the nervous system of Hydra is traditionally described as a simple nerve net, we surprisingly find instead a series of functional networks that are anatomically non-overlapping and are associated with specific behaviors. Three major functional networks extend through the entire animal and are activated selectively during longitudinal contractions, elongations in response to light, and radial contractions, while an additional network is located near the hypostome and is active during nodding. Additionally, we show that the behavior of Hydra includes regularly occurring radial contractions, which expel the contents of the gastric cavity approximately every 45 minutes. These results demonstrate the functional sophistication of apparently simple nerve nets, and the potential of Hydra and other basal metazoans as a model system for neural circuit studies.
216

Transient Dynamics in Neural Networks

Schaffer, Evan Shuman January 2011 (has links)
The motivation for this thesis is to devise a simple model of transient dynamics in neural networks. Neural circuits are capable of performing many computations without reaching an equilibrium, but instead through transient changes in activity. Thus, having a good model for transient activity is important. In particular, this thesis focuses on a firing-rate description of neural activity. Firing rates offer a convenient simplification of neural activity, and have been shown experimentally to convey information about stimuli and behavior. This work begins by reviewing the philosophy of modeling firing rates, as well as the problems that go with it. It examines traditional approaches to modeling firing rates, and in particular how common assumptions lead to a model that fails to capture transient dynamics. Chapter 2 applies a traditional model of firing rates in order to gain insight into properties of cortical circuitry. In collaboration with the lab of David Ferster at Northwestern University, we found that surround suppression in cat primary visual cortex is mediated by a withdrawal of excitation in the cortical circuit. In theoretical work, we find that this behavior can only arise if excitatory recurrence alone is strong enough to destabilize visual responses but feedback inhibition maintains stability. Chapter 3 reviews concepts and literature related to the dynamics of large networks of spiking neurons. Population density approaches are common for describing the dynamics of networks of spiking neurons; they provide a rigorous way to relate the dynamics of individual neurons to the population firing rate. Chapter 4 explores a method for accurately approximating the firing-rate dynamics of a population of spiking neurons. We describe the population by the probability density of membrane potentials, so the dynamics are governed by a Fokker-Planck equation. Using a spiking model with periodic boundary conditions, we write the Fokker-Planck dynamics in a Fourier basis. We find that the lowest Fourier modes dominate the dynamics. Chapter 5 presents a novel rate model that successfully captures synchronous dynamics. As in the previous chapter, we invoke an approximation to the dynamics of a population of spiking neurons in order to develop a firing-rate model. Our approach derives from an eigenfunction expansion of a Fokker-Planck equation, which is a common way of solving such problems. We find that a very simple approximation turns out to be surprisingly accurate. This approximation allows us to write a closed-form expression for the firing rate that resembles the equations for a damped harmonic oscillator. Finally, chapter 6 uses the formalism derived in the previous chapter to analyze activity in a large randomly-connected network of neurons. Comparing this large spiking network to a network of two coupled rate units, we find that the firing-rate network gives a good approximation to the time-varying activity of a spiking network across a wide range of parameters. Perhaps most surprisingly, we also find that the rate network can approximate the phase diagram of the spiking network, predicting the bifurcation line between synchronous and asynchronous states.
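For orientation (a generic sketch, not the thesis's exact formulation; the drift and diffusion terms and the oscillator form are illustrative assumptions): the population-density approach evolves the probability density p(v, t) of membrane potentials under a Fokker-Planck equation, with the population firing rate given by the probability flux through the spiking threshold, and a rate equation that "resembles a damped harmonic oscillator" generically takes the form of the second line.

```latex
% Fokker-Planck equation for the membrane-potential density p(v,t)
% with drift \mu(v,t) and noise amplitude \sigma:
\[
  \frac{\partial p(v,t)}{\partial t}
    = -\frac{\partial}{\partial v}\!\Big[\mu(v,t)\, p(v,t)\Big]
    + \frac{\sigma^{2}}{2}\, \frac{\partial^{2} p(v,t)}{\partial v^{2}},
  \qquad r(t) = \text{probability flux through threshold}.
\]
% A damped-oscillator-style rate approximation (illustrative form only):
\[
  \ddot{r} + 2\gamma\, \dot{r} + \omega^{2} r = \omega^{2}\, r_{\infty}\!\big(I(t)\big).
\]
```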
217

Real time Spaun on SpiNNaker : functional brain simulation on a massively-parallel computer architecture

Mundy, Andrew January 2017 (has links)
Model building is a fundamental scientific tool. Increasingly there is interest in building neurally-implemented models of cognitive processes with the intention of modelling brains. However, simulation of such models can be prohibitively expensive in both the time and energy required. For example, Spaun - "the world's first functional brain model", comprising 2.5 million neurons - required 2.5 hours of computation for every second of simulation on a large compute cluster. SpiNNaker is a massively parallel, low-power architecture specifically designed for the simulation of large neural models in biological real time. Ideally, SpiNNaker could be used to facilitate rapid simulation of models such as Spaun. However, the Neural Engineering Framework (NEF), with which Spaun is built, maps poorly to the architecture - to the extent that models such as Spaun would consume vast portions of SpiNNaker machines and still not run as fast as biology. This thesis investigates whether real-time simulation of Spaun on SpiNNaker is at all possible. Three techniques which facilitate such a simulation are presented. The first reduces the memory, compute and network loads consumed by the NEF. Consequently, it is demonstrated that only a twentieth as many cores are required to simulate a core component of the Spaun network as would otherwise have been needed. The second technique uses a small number of additional cores to significantly reduce the network traffic required to simulate this core component. As a result, simulation in real time is shown to be feasible. The final technique is a novel logic minimisation algorithm which reduces the size of the routing tables which are used to direct information around the SpiNNaker machine. This last technique is necessary to allow the routing of models of the scale and complexity of Spaun. Together these provide the ability to simulate the Spaun model in biological real time - representing a speed-up of 9000 times over previously reported results - with room for much larger models on full-scale SpiNNaker machines.
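To indicate why a naive NEF mapping is resource-hungry (a minimal sketch of standard NEF encoding and regularised least-squares decoding, not the SpiNNaker-specific techniques developed in this thesis; the tuning-curve form and parameter ranges are assumptions): each represented value is encoded by a population's tuning curves and recovered with decoders, and an unfactored connection between populations requires transmitting a full decoder-encoder weight matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 50, 200

# Assumed rectified-linear tuning curves a_i(x) = max(0, alpha_i * e_i * x + b_i).
encoders = rng.choice([-1.0, 1.0], n_neurons)   # e_i, preferred directions (1-D example)
gains = rng.uniform(0.5, 2.0, n_neurons)        # alpha_i
biases = rng.uniform(-1.0, 1.0, n_neurons)      # b_i

xs = np.linspace(-1, 1, n_samples)              # samples of the represented value
A = np.maximum(0.0, gains * encoders * xs[:, None] + biases)   # (n_samples, n_neurons)

# Regularised least-squares decoders d that reconstruct x from the activities.
reg = 0.1 * A.max()
decoders = np.linalg.solve(A.T @ A + reg**2 * n_samples * np.eye(n_neurons), A.T @ xs)

x_hat = A @ decoders                            # decoded estimate of x at each sample
# A naive connection between two such populations uses the full outer-product
# weight matrix W = encoders_post[:, None] * decoders_pre[None, :]; avoiding the
# storage and transmission of such matrices is the kind of saving at stake here.
```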
218

The Isolation and Identification of the Definitive Adult Neural Stem Cell Following Ablation of the Neurogenic GFAP Expressing Subependymal Cell

Doherty, James Patrick 14 July 2009 (has links)
Neural stem cells (NSCs) in the adult forebrain are thought to comprise a subpopulation of cells that express glial fibrillary acidic protein (GFAP), termed B cells. These GFAP+ cells generate proliferating neuroblasts that migrate from the lateral ventricle subependyma along the rostral migratory stream to become olfactory bulb interneurons. Based on this lineage, we set out to create an NSC-deficient mouse through targeted ablation of dividing GFAP+ cells in vivo. We successfully depleted the GFAP+ cells, as seen using an in vitro colony-forming assay, in multiple kill paradigms; however, we were unable to permanently eliminate the multipotent, self-renewing colony-forming cells. Instead, the targeted ablation of GFAP+ cells revealed an upstream, GFAP- cell that was induced to proliferate in the presence of leukemia inhibitory factor (LIF). These findings support the hypothesis that a population of GFAP-, LIF-responsive cells is the definitive adult NSC upstream of GFAP+ cells.
220

Learning in large-scale spiking neural networks

Bekolay, Trevor January 2011 (has links)
Learning is central to the exploration of intelligence. Psychology and machine learning provide high-level explanations of how rational agents learn. Neuroscience provides low-level descriptions of how the brain changes as a result of learning. This thesis attempts to bridge the gap between these two levels of description by solving problems using machine learning ideas, implemented in biologically plausible spiking neural networks with experimentally supported learning rules. We present three novel neural models that contribute to the understanding of how the brain might solve the three main problems posed by machine learning: supervised learning, in which the rational agent has a fine-grained feedback signal; reinforcement learning, in which the agent gets sparse feedback; and unsupervised learning, in which the agent has no explicit environmental feedback. In supervised learning, we argue that previous models of supervised learning in spiking neural networks solve a problem that is less general than the supervised learning problem posed by machine learning. We use an existing learning rule to solve the general supervised learning problem with a spiking neural network. We show that the learning rule can be mapped onto the well-known backpropagation rule used in artificial neural networks. In reinforcement learning, we augment an existing model of the basal ganglia to implement a simple actor-critic model that has a direct mapping to brain areas. The model is used to recreate behavioural and neural results from an experimental study of rats performing a simple reinforcement learning task. In unsupervised learning, we show that the BCM rule, a common learning rule used in unsupervised learning with rate-based neurons, can be adapted to a spiking neural network. We recreate the effects of STDP, a learning rule with strict time dependencies, using BCM, which does not explicitly remember the times of previous spikes. The simulations suggest that BCM is a more general rule than STDP. Finally, we propose a novel learning rule that can be used in all three of these simulations. The existence of such a rule suggests that the three types of learning examined separately in machine learning may not be implemented with separate processes in the brain.
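As a hedged sketch of the rate-based BCM rule referred to above (parameter values and variable names are illustrative assumptions, not taken from the thesis): the weight update is proportional to presynaptic activity and to the postsynaptic rate's deviation from a sliding modification threshold that tracks a running average of the squared postsynaptic rate.

```python
import numpy as np

def bcm_step(w, pre, post, theta, lr=1e-3, tau_theta=100.0, dt=1.0):
    """One Euler step of the BCM rule with a sliding modification threshold.

    w     : synaptic weights, shape (n_pre,)
    pre   : presynaptic rates, shape (n_pre,)
    post  : postsynaptic rate (scalar)
    theta : sliding threshold, tracking a running average of post**2
    """
    # Potentiate when post > theta, depress when post < theta.
    w = w + lr * pre * post * (post - theta) * dt
    theta = theta + (post**2 - theta) * dt / tau_theta
    return w, theta

# Hypothetical single update with toy values (all numbers assumed).
w, theta = np.full(5, 0.2), 1.0
pre = np.array([2.0, 0.0, 4.0, 1.0, 3.0])
post = float(w @ pre)            # = 2.0 with these toy numbers
w, theta = bcm_step(w, pre, post, theta)
```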
