  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

A solver for sets of linear systems for neural network simulations in CUDA

Shariati, Saeed January 2014 (has links)
Advisor: Prof. Raphael Yokoingawa de Camargo / Master's dissertation - Universidade Federal do ABC, Programa de Pós-Graduação em Neurociência e Cognição, 2014. / Nowadays, the use of co-processors, accelerators, and especially GPGPU computation is widely accepted as a new paradigm of High Performance Computing (HPC). However, developing software that can exploit the available resources remains a challenging task. On the other hand, scientists have used legacy CPU-based simulators for decades, and many of them are still the main tools in different fields of science. Any approach that combines these legacy simulators with powerful co-processor devices is therefore of great interest. In this project, we design and develop a simulation engine, the Parallel Neural Network Simulator (PN2S), that communicates with the MOOSE simulator (a tool well known among neuroscientists) and provides CUDA-based execution for simulating realistic neural network models. The simulation engine maps the voltage distribution in a neuron's body to sets of linear systems and solves them on the GPU. To provide usable functionality, we also developed a solver for active channels that supports the Hodgkin-Huxley model of ionic channels. We compared the engine with the CPU version for both homogeneous simple models and randomly generated heterogeneous networks. The evaluation focused on performance and also covered the accuracy of the simulation. The experimental results showed that the PN2S engine significantly increases simulation performance, while its execution remains largely transparent to users and to major parts of the host simulator.
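The abstract above describes mapping each neuron's voltage distribution to an independent linear system and solving the whole set in batch on the GPU. In compartmental neuron models such systems are typically tridiagonal, so a set of them can be solved with the Thomas algorithm applied across a batch. A minimal CPU sketch of that idea (illustrative only; the thesis engine implements this in CUDA, and the function name and shapes here are assumptions, not the PN2S API):

```python
import numpy as np

def solve_tridiagonal_batch(lower, diag, upper, rhs):
    """Solve a batch of independent tridiagonal systems A_i x_i = b_i
    (e.g. one per neuron's compartment chain) with the Thomas algorithm.
    lower/diag/upper/rhs all have shape (batch, n); lower[:, 0] and
    upper[:, -1] are unused."""
    batch, n = diag.shape
    c = np.empty_like(diag)   # modified upper diagonal
    d = np.empty_like(rhs)    # modified right-hand side
    c[:, 0] = upper[:, 0] / diag[:, 0]
    d[:, 0] = rhs[:, 0] / diag[:, 0]
    for i in range(1, n):     # forward elimination over all systems at once
        denom = diag[:, i] - lower[:, i] * c[:, i - 1]
        c[:, i] = upper[:, i] / denom
        d[:, i] = (rhs[:, i] - lower[:, i] * d[:, i - 1]) / denom
    x = np.empty_like(rhs)
    x[:, -1] = d[:, -1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[:, i] = d[:, i] - c[:, i] * x[:, i + 1]
    return x
```

Because the per-system work is independent, the batch dimension maps naturally onto GPU threads, which is the parallelism the abstract exploits.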
82

Optogenetic investigation of the neural network underlying the oxygen modulation of C. elegans locomotion

Soltesz, Zoltan January 2014 (has links)
No description available.
83

Optimization Of Fed-Batch Fermentation Processes With Neural Networks

Chaudhuri, Bodhisattwa 12 1900 (has links) (PDF)
No description available.
84

Functional and Categorical Analysis of Waveshapes Recorded on Microelectrode Arrays

Schwartz, Jacob C. 05 1900 (has links)
Dissociated neuronal cell cultures grown on substrate-integrated microelectrode arrays (MEAs) generate spontaneous activity that can be recorded for up to several weeks. The signature wave shapes from extracellular recording of neuronal activity display a great variety of shapes, with triphasic signals predominating. I characterized extracellular recordings from over 600 neuronal signals. I performed a categorical study by dividing wave shapes into two major classes: (type 1) signals in which the large positive peak follows the negative spike, and (type 2) signals in which the large positive peak precedes the negative spike. The former are hypothesized to reflect active signal propagation, which can occur in the axon and possibly in the soma or dendrites. The latter are hypothesized to be passive signals, which are generally confined to the soma or dendrites. To test these hypotheses, I pharmacologically targeted ion channels with tetrodotoxin (TTX), tetraethylammonium (TEA), 4-aminopyridine (4-AP), and monensin.
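The type 1 / type 2 distinction above comes down to the temporal order of the dominant positive and negative peaks, which can be sketched in a few lines (a toy classifier on an already-extracted waveform; the function name is illustrative and this is not the thesis's analysis code):

```python
def classify_waveshape(waveform):
    """Classify an extracellular spike waveform by peak order.
    Type 1: the large positive peak follows the negative spike
    (hypothesized active propagation). Type 2: the positive peak
    precedes the negative spike (hypothesized passive signal)."""
    trough = min(range(len(waveform)), key=lambda i: waveform[i])
    peak = max(range(len(waveform)), key=lambda i: waveform[i])
    return "type 1" if peak > trough else "type 2"
```

For example, a negative spike followed by a positive rebound such as `[0, 1, -3, 2, 0]` is type 1, while `[0, 2, -3, 1, 0]` is type 2.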
85

Medial Medulla Networks in Culture: a Multichannel Electrophysiologic and Pharmacological Study

Keefer, Edward W. (Edward Wesley) 08 1900 (has links)
Spontaneously active primary cultures obtained from dissociated embryonic medial medulla tissue were grown on microelectrode arrays for investigating burst patterns and pharmacological responses of respiratory-related neurons. Multichannel burst rates and spike production were used as primary variables for analysis. Pacemaker-like neurons were identified by continued spiking under low Ca++/high Mg++ conditions. The number of pacemakers increased with time under synaptic blocking medium. Sensitivity to CO2 levels was found in some neurons. Acetylcholine changed activity in a complex fashion. Curare, atropine and gallamine modified ACh effects. Eserine alone was ineffective, but potentiated ACh-induced responses. Norepinephrine caused channel-specific increases or decreases, whereas dopamine and serotonin had little effect at 30 μM. GABA and glycine stopped most spiking at 70 μM. Developmental changes in glycine sensitivity (increasing with age) were also observed. It is concluded that pacemaker and chemosensitive neurons develop in medial medulla cultures, and that these cultures are pharmacologically histiotypic.
86

Exploration of hierarchical leadership and connectivity in neural networks in vitro.

Ham, Michael I. 12 1900 (has links)
Living neural networks are capable of processing information much faster than a modern computer, despite running at significantly lower clock speeds. Understanding the mechanisms neural networks utilize is therefore an issue of substantial importance. Neuronal interaction dynamics were studied using histiotypic networks growing on microelectrode arrays in vitro. Hierarchical relationships were explored using burst dynamics (bursts being events in which many neurons fire within a short time frame), pairwise neuronal activation, and information-theoretic measures. Together, these methods reveal that global network activity results from ignition by a small group of burst-leader neurons, which form a primary circuit responsible for initiating most network-wide burst events. Phase delays between leaders and followers reveal information about the nature of the connection between the two. Physical distance from a burst leader appears to be an important factor in follower response dynamics. Information theory reveals that mutual information between neuronal pairs is also a function of physical distance. Activation relationships in developing networks were studied, and plating density was found to play an important role in the development of network connectivity. These measures provide unique views of network connectivity and hierarchical relationships in vitro, which should be included in biologically meaningful models of neural networks.
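The burst-leader analysis described above can be caricatured with a simple sliding-window sketch: detect network bursts as short windows in which many channels fire, and tally which channel fired first in each. All names and parameter values are illustrative assumptions, not the dissertation's actual method:

```python
from collections import Counter

def find_burst_leaders(spikes, window=0.1, min_channels=3):
    """Identify burst-leader channels. A network burst is a window of
    `window` seconds in which at least `min_channels` distinct channels
    fire; the channel whose spike opens the window is counted as leader.
    `spikes` maps channel id -> sorted list of spike times (seconds)."""
    events = sorted((t, ch) for ch, ts in spikes.items() for t in ts)
    leaders = Counter()
    i = 0
    while i < len(events):
        t0, ch0 = events[i]
        j = i
        chans = set()
        while j < len(events) and events[j][0] - t0 <= window:
            chans.add(events[j][1])
            j += 1
        if len(chans) >= min_channels:
            leaders[ch0] += 1  # first spiker in the burst leads it
            i = j              # skip past this burst
        else:
            i += 1
    return leaders
```

A channel that consistently accumulates the leadership counts would correspond to a member of the "primary circuit" the abstract describes.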
87

Acute Effects of the Antibiotic Streptomycin on Neural Network Activity and Pharmacological Responses

Zeng, Wei Rong 12 1900 (has links)
The purpose of this study is to determine whether the antibiotic streptomycin decreases neuronal network activity or affects pharmacological responses. The experiments were conducted with MEA (multi-electrode array) technology, which records neuronal activity from devices bearing multiple small electrodes that serve as neural interfaces connecting neurons to electronic circuitry. The results show that streptomycin lowered the spike production of the neuronal networks, and that sensitization was seen when networks were pre-exposed to streptomycin.
88

A COMPARISON OF TASK RELEVANT NODE IDENTIFICATION TECHNIQUES AND THEIR IMPACT ON NETWORK INFERENCES: GROUP-AGGREGATED, SUBJECT-SPECIFIC, AND VOXEL WISE APPROACHES

Unknown Date (has links)
The dissertation discusses various node identification techniques as well as their downstream effects on network characteristics, using task-activated fMRI data from two working memory paradigms: a verbal n-back task and a visual n-back task. The three node identification techniques examined within this work are: a group-aggregated approach, a subject-specific approach, and a voxel-wise approach. The first chapters highlight crucial differences between group-aggregated and subject-specific methods of isolating nodes prior to undirected functional connectivity analysis. Results show that the two techniques yield significantly different network interactions and local network characteristics, despite having their network nodes restricted to the same anatomical regions. Prior to the introduction of the third technique, a chapter is dedicated to explaining the differences between a priori approaches (like the previously introduced group-aggregated and subject-specific techniques) and approaches without a priori node definitions (like the voxel-wise approach). The chapter also discusses two ways to aggregate signal for node representation within a network: using the signal from a single voxel or aggregating signal across a group of neighboring voxels. Subsequently, a chapter is dedicated to introducing a novel processing pipeline that uses a data-driven voxel-wise approach to identify network nodes. The novel pipeline defines nodes using spatiotemporal features generated by a deep learning algorithm and is validated by an analysis showing that the isolated nodes are condition- and subject-specific. The dissertation concludes by summarizing the main takeaways from each of the three analyses as well as highlighting the advantages and disadvantages of each of the three node identification techniques. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2020. / FAU Electronic Theses and Dissertations Collection
89

Mechanistic Models of Neural Computation in the Fruit Fly Brain

Yeh, Chung-Heng January 2019 (has links)
Understanding the operating principles of brain function is the key to building novel computing architectures that mimic human intelligence. Neural activities at different scales give rise to different levels of brain function. For example, cellular functions, such as sensory transduction, occur through molecular reactions, while cognitive functions, such as recognition, emerge in neural systems spanning multiple brain regions. To bridge the gap between neuroscience and artificial computation, we need systematic development of mechanistic models for neural computation across multiple scales. Existing models of neural computation are often developed independently for a specific scale and hence are not compatible with one another. In this thesis, we investigate the neural computations in the fruit fly brain and devise mechanistic models at different scales in a systematic manner, so that models at one scale constitute functional building blocks for the next. Our study spans from the molecular and circuit computations of the olfactory system to the system-level computation of the central complex. First, we study how the two key aspects of an odorant, identity and concentration, are encoded by the odorant transduction process at the molecular scale. We mathematically quantify the odorant space and propose a biophysical model of the olfactory sensory neuron (OSN). To validate our modeling approach, we examine the OSN model with a multitude of odorant waveforms and demonstrate that the model output reproduces the temporal responses of OSNs obtained from in vivo electrophysiology recordings. In addition, we evaluate the model at the OSN population level and quantify the combinatorial complexity of the transformation taking place between the odorant space and the OSNs. The resulting concentration-dependent combinatorial code determines the complexity of the input space driving olfactory processing in the downstream neuropil, the antennal lobe.
Second, we investigate neural information processing in the antennal lobe across the molecular and circuit scales. The antennal lobe re-encodes the output of the OSN population from a concentration-dependent code into a concentration-independent combinatorial code. To study this transformation, we construct a computational model of the antennal lobe consisting of two subcircuits, a predictive coding circuit and an on-off circuit, realized by two distinct local neuron networks. By examining the entire circuit model with both monomolecular odorants and odorant mixtures, we demonstrate that the predictive coding circuit encodes odorant identity into a concentration-invariant code and the on-off circuit encodes the onset and offset of a unique odorant identity. Third, we investigate the odorant representation inherent in Kenyon cell activities in the mushroom body. The Kenyon cells encode the output of the antennal lobe into a high-dimensional, sparse neural code that is immediately used for learning and memory formation. We model the Kenyon cell circuitry as a real-time feedback normalization circuit converting odorant information into time-dependent hash codes. The resulting real-time hash code represents odorants, pure and mixed alike, in a way conducive to classification, and suggests an intrinsic partition of the odorant space in which similar odorants receive similar hash codes. Fourth, we study at the system scale the neural coding of the central complex, a set of neuropils in the center of the fly brain that integrates multiple sensory modalities and plays an important role in locomotor control. We create an application that enables simultaneous graphical querying and construction of executable models of the central complex neural circuitry. By reconfiguring the circuitry and generating different executable models, we compare the model responses of wild-type and mutant fly strains.
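The sparse Kenyon-cell "hash code" described above is reminiscent of a random high-dimensional expansion followed by winner-take-all sparsification. A toy sketch under that assumption (this is not the thesis's feedback-normalization circuit; the function name, dimensions, and sparsity level are all illustrative):

```python
import numpy as np

def sparse_hash(odor, projection, k):
    """Expand an odorant feature vector into a high-dimensional space
    via a fixed random projection, then keep only the k most active
    units, yielding a sparse binary tag usable for classification."""
    activity = projection @ odor
    code = np.zeros(projection.shape[0], dtype=bool)
    code[np.argsort(activity)[-k:]] = True  # winner-take-all: top-k units
    return code
```

Because the projection is fixed, the same odorant always maps to the same sparse tag, and nearby odorants tend to share active units, giving the partition of odorant space the abstract alludes to.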
Finally, we show that this multi-scale study of the fruit fly brain is made possible by the Fruit Fly Brain Observatory (FFBO), an open-source platform supporting open, collaborative fruit fly neuroscience research. The software architecture of the FFBO and its key applications are highlighted along with several examples.
90

Synaptic plasticity and memory addressing in biological and artificial neural networks

Tyulmankov, Danil January 2024 (has links)
Biological brains are composed of neurons, interconnected by synapses to create large complex networks. Learning and memory occur, in large part, due to synaptic plasticity -- modifications in the efficacy of information transmission through these synaptic connections. Artificial neural networks model these with neural "units" which communicate through synaptic weights. Models of learning and memory propose synaptic plasticity rules that describe and predict the weight modifications. An equally important but under-evaluated question is the selection of which synapses should be updated in response to a memory event. In this work, we attempt to separate the questions of synaptic plasticity from that of memory addressing. Chapter 1 provides an overview of the problem of memory addressing and a summary of the solutions that have been considered in computational neuroscience and artificial intelligence, as well as those that may exist in biology. Chapter 2 presents in detail a solution to memory addressing and synaptic plasticity in the context of familiarity detection, suggesting strong feedforward weights and anti-Hebbian plasticity as the respective mechanisms. Chapter 3 proposes a model of recall, with storage performed by addressing through local third factors and neo-Hebbian plasticity, and retrieval by content-based addressing. In Chapter 4, we consider the problem of concurrent memory consolidation and memorization. Both storage and retrieval are performed by content-based addressing, but the plasticity rule itself is implemented by gradient descent, modulated according to whether an item should be stored in a distributed manner or memorized verbatim. However, the classical method for computing gradients in recurrent neural networks, backpropagation through time, is generally considered unbiological. In Chapter 5 we suggest a more realistic implementation through an approximation of recurrent backpropagation.
Taken together, these results propose a number of potential mechanisms for memory storage and retrieval, each of which separates the mechanism of synaptic updating -- plasticity -- from that of synapse selection -- addressing. Explicit studies of memory addressing may find applications not only in artificial intelligence but also in biology. In artificial networks, for example, selectively updating memories in large language models can help improve user privacy and security. In biological ones, understanding memory addressing can help improve health outcomes and the treatment of memory-based illnesses such as Alzheimer's disease or PTSD.
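The pairing of strong feedforward weights with anti-Hebbian plasticity for familiarity detection, as summarized for Chapter 2 above, can be caricatured in a few lines: each presentation of a stimulus weakens exactly the weights that responded to it, so familiar inputs evoke progressively smaller responses while novel ones remain large. A toy sketch of the mechanism, not the thesis model itself:

```python
import numpy as np

def familiarity_response(w, x):
    """Scalar readout: large for novel inputs, suppressed for familiar ones."""
    return float(w @ x)

def anti_hebbian_update(w, x, lr=0.1):
    """Anti-Hebbian storage: weaken each weight in proportion to the
    product of its presynaptic input x and the postsynaptic response y,
    so repeated presentations of x drive its response toward zero."""
    y = w @ x
    return w - lr * y * x
```

Presenting the same input repeatedly shrinks its response geometrically (by a factor of roughly 1 - lr·|x|² per presentation), which is the novelty/familiarity signal the chapter exploits.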
