  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Implementation Of Associative Memory With Online Learning into a Spiking Neural Network On Neuromorphic Hardware

Hampo, Michael J. January 2020 (has links)
No description available.
82

Spiking Neuromorphic Architecture for Associative Learning

Jones, Alexander January 2020 (has links)
No description available.
83

Adapting Neural Network Learning Algorithms for Neuromorphic Implementations

Jason M Allred (11197680) 29 July 2021 (has links)
Computing with Artificial Neural Networks (ANNs) is a branch of machine learning that has seen substantial growth over the last decade, significantly increasing the accuracy and capability of machine learning systems. ANNs are connected networks of computing elements inspired by the neuronal connectivity in the brain. Spiking Neural Networks (SNNs) are a type of ANN that operate with event-driven computation, inspired by the "spikes" or firing events of individual neurons in the brain. Neuromorphic computing, the implementation of neural networks in hardware, seeks to improve the energy efficiency of these machine learning systems either by computing directly with device physical primitives, by bypassing the software layer of logical implementations, or by operating with SNN event-driven computation. Such implementations may, however, have added restrictions, including weight-localized learning and hard-wired connections. Further obstacles, such as catastrophic forgetting, the lack of supervised error signals, and storage and energy constraints, are encountered when these systems need to perform autonomous online, real-time learning in an unknown, changing environment.
Adapting neural network learning algorithms for these constraints can help address these issues. Specifically, corrections to Spike Timing-Dependent Plasticity (STDP) can stabilize local, unsupervised learning; accounting for the statistical firing properties of spiking neurons may improve conversions from non-spiking to spiking networks; biologically-inspired dopaminergic and habituation adjustments to STDP can limit catastrophic forgetting; convolving temporally instead of spatially can provide for localized weight sharing with direct synaptic connections; and explicitly training for spiking sparsity can significantly reduce computational energy consumption.
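The pair-based STDP rule at the heart of such local, unsupervised learning can be sketched in a few lines. The parameter values and the hard weight clipping below are illustrative choices, not the stabilizing corrections proposed in the thesis:

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP. dt = t_post - t_pre in milliseconds.

    Pre-before-post (dt > 0) potentiates the synapse; post-before-pre
    (dt < 0) depresses it, each with an exponential dependence on the
    timing difference. Clipping to [w_min, w_max] is a simple stand-in
    for a proper weight-stabilization scheme.
    """
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau_plus)
    else:
        dw = -a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w + dw, w_min, w_max))

w = 0.5
w_pot = stdp_update(w, dt=5.0)    # pre fires 5 ms before post -> potentiation
w_dep = stdp_update(w, dt=-5.0)   # post fires 5 ms before pre -> depression
```

Repeated application of such a rule, driven only by local spike timing, is what makes it attractive for weight-localized neuromorphic hardware.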
84

Information processing in the Striatum : a computational study

Hjorth, Johannes January 2006 (has links)
The basal ganglia form an important structure centrally placed in the brain. They receive input from motor, associative and limbic areas, and produce output mainly to the thalamus and the brain stem. The basal ganglia have been implicated in cognitive and motor functions. One way to understand the basal ganglia is to look at the diseases that affect them. Both Parkinson's disease and Huntington's disease with their motor problems are results of malfunctioning basal ganglia. There are also indications that these diseases affect cognitive functions. Drug addiction is another example that involves this structure, which is also important for motivation and selection of behaviour. In this licentiate thesis I am laying the groundwork for a detailed model of the striatum, which is the input stage of the basal ganglia. The striatum receives glutamatergic input from the cortex and thalamus, as well as dopaminergic input from substantia nigra. The majority of the neurons in the striatum are medium spiny (MS) projection neurons that project mainly to globus pallidus but also to other neurons in the striatum and to both dopamine-producing and GABAergic neurons in substantia nigra. In addition to the MS neurons there are fast spiking (FS) interneurons that are in a position to regulate the firing of the MS neurons. These FS neurons are few, but connected into large networks through electrical synapses that could synchronise their effect. By forming strong inhibitory synapses on the MS neurons, the FS neurons have a powerful influence on the striatal output. The inhibitory output of the basal ganglia on the thalamus is believed to keep prepared motor commands on hold; once one of them is disinhibited, the selected motor command is executed. This disinhibition is initiated in the striatum by the MS neurons. Both MS and FS neurons are active during so-called up-states, which are periods of elevated cortical input to the striatum. 
Here I have studied the FS neurons and their ability to detect such up-states. This is important because FS neurons can delay spikes in MS neurons, and the time between up-state onset and the first spike in an MS neuron is correlated with the amount of calcium entering the MS neuron, which in turn might have implications for plasticity and learning of new behaviours. The effect of different combinations of electrical couplings between two FS neurons has been tested, where the location, number and strength of these gap junctions have been varied. I studied both the ability of the FS neurons to fire action potentials during the up-state, and the synchronisation between neighbouring FS neurons due to electrical coupling. I found that both proximal and distal gap junctions synchronised the firing, but the distal gap junctions did not have the same temporal precision. The ability of the FS neurons to detect an up-state was affected by whether the neighbouring FS neuron also received up-state input or not. This effect was more pronounced for distal gap junctions than proximal ones, due to a stronger shunting effect of distal gap junctions when the dendrites were synaptically activated. We have also performed initial stochastic simulations of the Ca2+-calmodulin-dependent protein kinase II (CaMKII). The purpose here is to build the knowledge as well as the tools necessary for biochemical simulations of intracellular processes that are important for plasticity in the MS neurons. The simulated biochemical pathways will then be integrated into an existing model of a full MS neuron. Another avenue to explore is to build striatal network models consisting of MS and FS neurons, using experimental data on the striatal microcircuitry. With these different approaches we will improve our understanding of striatal information processing. / QC 20101116
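The gap-junction coupling studied here can be illustrated with a minimal, hypothetical model: two leaky integrate-and-fire cells joined by an ohmic coupling current, a dimensionless stand-in for the detailed multicompartment FS models used in the thesis. All parameter values are illustrative:

```python
import numpy as np

def simulate_coupled_pair(g_gap, i_ext=(1.5, 1.2), t_stop=200.0, dt=0.1,
                          tau=10.0, v_th=1.0, v_reset=0.0):
    """Two leaky integrate-and-fire cells coupled by an ohmic gap junction.

    The gap-junction current g_gap * (V_other - V_self) pulls the two
    membrane potentials toward each other, the basic mechanism behind
    both synchronisation and shunting between electrically coupled cells.
    Returns the spike times of each cell.
    """
    v = np.array([0.0, 0.0])
    spikes = [[], []]
    for step in range(int(t_stop / dt)):
        t = step * dt
        gap = g_gap * (v[::-1] - v)                     # coupling current per cell
        v += dt / tau * (-v + np.array(i_ext)) + dt * gap
        for i in range(2):
            if v[i] >= v_th:                            # threshold crossing
                v[i] = v_reset
                spikes[i].append(t)
    return spikes

uncoupled = simulate_coupled_pair(g_gap=0.0)   # independent cells
coupled = simulate_coupled_pair(g_gap=0.5)     # electrically coupled pair
```

With no coupling, the cell receiving stronger drive simply fires faster; turning the gap junction on lets each cell's membrane potential influence its neighbour's firing, as in the synchronisation and shunting effects described above.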
85

Noise Robustness of CNN and SNN for EEG Motor imagery classification / Robusthet mot störning hos CNN och SNN vid klassificering av EEG motor imagery

Sewina, Merlin January 2023 (has links)
For an able-bodied human, understanding what someone says during a phone call with a lot of background noise is usually quite easy: we know which information we want to hear, e.g. the voice of the person we are talking to, and which information is noise, e.g. music or ambient sound in the background. While most humans handle noise of all kinds with ease, dealing with noisy data is a very hard task for algorithms. Unfortunately, in some beneficial and interesting applications, such as Brain-Computer Interfaces (BCIs) based on Electroencephalography (EEG) data, noise is a very prevalent problem that greatly hinders progress towards BCIs for real-life applications. In this thesis, we investigate the effect that noise added to EEG data has on the classification accuracy of one Spiking Neural Network (SNN) and one Convolutional Neural Network (CNN) based classifier for a motor imagery classification task. The thesis shows that even relatively small amounts of noise (10% of the original data) can have strong effects on the classification accuracy of the chosen classifiers. It also provides evidence that SNN-based models have a more stable classification accuracy for low amounts of noise, but that their accuracy then declines more rapidly, while CNN-based classifiers show a more linear decline in classification accuracy.
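One simple way to realize "noise as a fraction of the original data" is zero-mean Gaussian noise scaled to a fraction of the signal's own standard deviation. This is an assumed reading of the 10% figure; the thesis's exact noise model may differ:

```python
import numpy as np

def add_relative_noise(signal, fraction, rng=None):
    """Add zero-mean Gaussian noise whose standard deviation equals
    `fraction` times the standard deviation of `signal`.

    A hypothetical noise-injection scheme for illustration; real EEG
    noise studies may use channel-specific or colored noise instead.
    """
    rng = np.random.default_rng(rng)
    sigma = fraction * signal.std()
    return signal + rng.normal(0.0, sigma, size=signal.shape)

t = np.linspace(0, 1, 250)              # 1 s of fake single-channel EEG at 250 Hz
clean = np.sin(2 * np.pi * 10 * t)      # 10 Hz alpha-band tone as a toy signal
noisy = add_relative_noise(clean, fraction=0.10, rng=0)
```

Sweeping `fraction` and re-evaluating each classifier on the noisy copies is the kind of robustness curve the comparison above describes.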
86

INVESTIGATION OF THE ELECTROPHYSIOLOGICAL PROPERTIES OF THE MAJOR CELL TYPES IN THE RAT OLFACTORY TUBERCLE

Chiang, Elizabeth C. January 2008 (has links)
No description available.
87

Mechanisms Underlying Subthreshold and Suprathreshold Responses in Dorsal Cochlear Nucleus Cartwheel Cells

Tong, Mingjie January 2005 (has links)
No description available.
88

Energy Efficient Deep Spiking Recurrent Neural Networks: A Reservoir Computing-Based Approach

Hamedani, Kian 18 June 2020 (has links)
Recurrent neural networks (RNNs) have been widely used for supervised pattern recognition and for exploring underlying spatio-temporal correlations. However, due to the vanishing/exploding gradient problem, training a fully connected RNN is in many cases very difficult or even impossible. The difficulties of training traditional RNNs led us to reservoir computing (RC), which has recently attracted a lot of attention due to its simple training methods and fixed weights in its recurrent layer. There are three categories of RC systems: echo state networks (ESNs), liquid state machines (LSMs), and delayed feedback reservoirs (DFRs). In this dissertation, a novel RNN structure inspired by dynamic delayed feedback loops is introduced. In the reservoir (recurrent) layer of a DFR, only one neuron is required, which makes DFRs extremely suitable for hardware implementation. The main motivation of this dissertation is to introduce an energy-efficient, easy-to-train RNN that achieves high performance on different tasks compared to the state of the art. To improve the energy efficiency of our model, we propose to adopt spiking neurons as the information processing units of the DFR. Spiking neural networks (SNNs) are the most biologically plausible and energy-efficient class of artificial neural networks (ANNs). Traditional analog ANNs bear only marginal similarity to brain-like information processing: biological neurons communicate through spikes, and artificial SNNs have been introduced to mimic them. Moreover, hardware implementations of SNNs have been shown to be extremely energy efficient. Towards this overarching goal, this dissertation presents a spiking DFR (SDFR) with novel encoding schemes and defense mechanisms against adversarial attacks. 
To verify the effectiveness and performance of the SDFR, it is adopted in three different applications in which significant spatio-temporal correlations exist: attack detection in smart grids, spectrum sensing in multi-input-multi-output (MIMO)-orthogonal frequency division multiplexing (OFDM) Dynamic Spectrum Sharing (DSS) systems, and video-based face recognition. The performance of the SDFR is first verified in cyber-attack detection in smart grids. Smart grids are a new generation of power grids that promise more reliable and efficient transmission and delivery of power to customers, realized through the integration of internet, telecommunication, and energy technologies. The convergence of different technologies brings opportunities, but challenges are also inevitable. One of the major threats to smart grids is cyber-attacks, and a novel method is developed to detect false data injection (FDI) attacks in them. The second application of the SDFR is spectrum sensing in MIMO-OFDM DSS systems. DSS is being implemented in the fifth generation of wireless communication systems (5G) to improve spectrum efficiency. In a MIMO-OFDM system, not all subcarriers are utilized simultaneously by the primary user (PU); it is therefore essential to sense idle frequency bands and assign them to the secondary user (SU). The effectiveness of the SDFR in capturing the spatio-temporal correlation of MIMO-OFDM time series and predicting the availability of frequency bands in future time slots is studied as well. In the third application, the SDFR is modified for video-based face recognition, where it is leveraged to recognize the identities of different subjects as they rotate their heads through different angles. 
Another contribution of this dissertation is a novel encoding scheme for spiking neurons inspired by cognitive studies of rats. For the first time, the multiplexing of multiple neural codes is introduced, and it is shown that the robustness and resilience of the spiking neurons are increased against noisy data and adversarial attacks, respectively. Adversarial attacks are small, imperceptible perturbations of the input data that have been shown to fool deep learning (DL) models. Many adversarial attack and defense mechanisms have been introduced for DL models, and compromising the security and reliability of artificial intelligence (AI) systems is a major concern of government, industry and cyber-security researchers, in that insufficient protections can compromise the security and privacy of everyone in society. Finally, a defense mechanism to protect spiking neurons against adversarial attacks is introduced for the first time. In a nutshell, this dissertation presents a novel energy-efficient deep spiking recurrent neural network inspired by delayed dynamic loops. The effectiveness of the introduced model is verified in several different applications, and novel encoding and defense mechanisms are introduced that improve the robustness of the model against noise and adversarial attacks. / Doctor of Philosophy / The ultimate goal of artificial intelligence (AI) is to mimic the human brain, and artificial neural networks (ANNs) are an attempt to realize that goal. However, traditional ANNs are very far from mimicking biological neurons. It is well known that biological neurons communicate with one another through signals in the form of spikes. Artificial spiking neural networks (SNNs) have therefore been introduced, which behave more similarly to biological neurons. Moreover, SNNs are very energy efficient, which makes them a suitable choice for hardware implementations of ANNs (neuromorphic computing). 
Despite the many benefits brought about by SNNs, they still lag behind traditional ANNs in terms of performance. In this dissertation, a new structure of SNN is therefore introduced which outperforms traditional ANNs in three different applications. This new structure is inspired by the delayed dynamic loops that exist in biological brains. Its main objective is to capture the spatio-temporal correlation that exists in time series while reducing training overhead and power consumption. Another contribution of this dissertation is a set of novel encoding schemes for spiking neurons. Biological neurons clearly use spikes, but the language they use to communicate is not well understood; the spikes must therefore be encoded in a particular language, called a neural spike encoding scheme. Inspired by cognitive studies of rats, a novel encoding scheme is presented. Lastly, it is shown that the introduced encoding scheme increases the robustness of SNNs against noisy data and adversarial attacks. AI models, including SNNs, have been shown to be vulnerable to adversarial attacks: minor perturbations of the input data that can cause an AI model to misclassify it. For the first time, a defense mechanism is introduced which can protect SNNs against such attacks.
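The delayed-feedback-loop idea behind a DFR can be sketched in rate-based (non-spiking) form as a single nonlinear node whose output is cycled through a delay line of "virtual" neurons. All parameter names here are illustrative, and the dissertation's SDFR replaces the tanh node with a spiking neuron:

```python
import numpy as np

def dfr_states(u, n_virtual=50, gain=0.8, eta=0.5):
    """Delayed-feedback reservoir sketch: one tanh node plus a delay line.

    Each of the n_virtual slots mixes its own delayed value with the
    current input sample, so the delay line itself serves as the
    high-dimensional reservoir state read out once per input sample.
    Only the linear readout (not shown) would be trained.
    """
    delay = np.zeros(n_virtual)          # delay line = reservoir state
    states = []
    for x in u:
        for i in range(n_virtual):
            # single physical node revisits each virtual node in turn
            delay[i] = np.tanh(gain * delay[i] + eta * x)
        states.append(delay.copy())
    return np.array(states)

u = np.sin(np.linspace(0, 8 * np.pi, 100))   # toy input time series
S = dfr_states(u)                            # (100, 50) state matrix
```

A ridge regression from `S` to the targets would complete the RC pipeline; only that readout is trained, which is the "easy to train" property the abstract emphasizes.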
89

Spike Processing Circuit Design for Neuromorphic Computing

Zhao, Chenyuan 13 September 2019 (has links)
The von Neumann bottleneck, the limited throughput between the CPU and memory, has become the major factor hindering technical advances in computing systems. In recent years, neuromorphic systems have started to gain increasing attention as compact and energy-efficient computing platforms. Spike-based neuromorphic computing systems require high-performance, low-power neural encoders and decoders to emulate the spiking behavior of neurons. These two interfaces, which convert between spike and analog signals, determine the performance of the whole spiking neuromorphic computing system, and in particular its peak performance. Many state-of-the-art neuromorphic systems typically operate in the frequency range between 10^0 kHz and 10^2 kHz due to the limitation of encoding/decoding speed. In this dissertation, the popular encoding and decoding schemes, i.e. rate encoding, latency encoding, and ISI encoding, together with related hardware implementations, are discussed and analyzed. The contributions of this dissertation fall into three main parts: neuron improvement, three kinds of ISI encoder designs, and two types of ISI decoder designs. A two-path-leakage LIF neuron has been fabricated, and a modular design methodology has been invented. Three ISI encoding schemes are discussed: parallel signal encoding, full signal iteration encoding, and partial signal encoding. The first two ISI encoders have been fabricated successfully, and the last will be taped out by the end of 2019. The two ISI decoders adopt different techniques: a sample-and-hold-based mixed-signal design and a spike-timing-dependent-plasticity (STDP)-based analog design, respectively. Both decoders have been evaluated successfully through post-layout simulations, and the STDP-based decoder will be taped out by the end of 2019. 
A test bench based on correlation inspection has been built to evaluate the information-recovery capability of the proposed spike processing link. / Doctor of Philosophy / Neuromorphic computing refers to a class of electronic systems that mimic the behavior of biological nervous systems. In most cases, a neuromorphic computing system is built with analog circuits, which offer benefits in power efficiency and low thermal radiation. One of the most important components of a neuromorphic computing system is the signal processing interface, i.e. the encoder/decoder. To increase the whole system's performance, novel encoders and decoders are proposed in this dissertation: three kinds of temporal encoders, one rate encoder, one latency encoder, one temporal decoder, and one general spike decoder. These designs can be combined to build a highly efficient spike-based data link that guarantees the processing performance of the whole neuromorphic computing system.
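The ISI scheme discussed above can be sketched behaviourally: each normalized sample becomes the gap between two consecutive spikes, and decoding inverts the map. This is a software sketch of the idea with made-up interval bounds, not the dissertation's circuits:

```python
import numpy as np

def isi_encode(values, t_min=1.0, t_max=10.0):
    """Inter-spike-interval (ISI) encoding: map each normalized sample to
    the gap between consecutive spikes; larger values -> shorter gaps.
    Returns absolute spike times starting at t = 0."""
    v = np.asarray(values, dtype=float)
    v = (v - v.min()) / (v.max() - v.min())               # normalize to [0, 1]
    intervals = t_max - v * (t_max - t_min)               # 1 -> t_min, 0 -> t_max
    return np.concatenate([[0.0], np.cumsum(intervals)])  # spike time train

def isi_decode(spike_times, t_min=1.0, t_max=10.0):
    """Invert the mapping: recover normalized samples from spike gaps."""
    intervals = np.diff(spike_times)
    return (t_max - intervals) / (t_max - t_min)

x = [0.2, 0.9, 0.5, 0.0, 1.0]
spikes = isi_encode(x)          # analog samples -> spike train
recovered = isi_decode(spikes)  # spike train -> samples, lossless here
```

The correlation-based test bench mentioned above amounts to checking how well `recovered` tracks `x` after the link's hardware non-idealities are included.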
90

Spiking Neural Network with Memristive Based Computing-In-Memory Circuits and Architecture

Nowshin, Fabiha January 2021 (has links)
In recent years, neuromorphic computing systems have achieved considerable success due to their ability to process data much faster and with much less power than traditional von Neumann computing architectures. There are two main types of Artificial Neural Networks (ANNs): Feedforward Neural Networks (FNNs) and Recurrent Neural Networks (RNNs). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs are an improved version of ANNs that mimic biological neurons closely through the emission of spikes, which offers significant power and energy advantages for data-intensive applications by allowing spatio-temporal information processing. On the other hand, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computation in neuromorphic hardware. One particular eNVM technology, the memristor, has received wide attention due to its scalability, compatibility with CMOS technology, and low power consumption. In this work we develop a spiking neural network that incorporates an inter-spike-interval encoding scheme to convert the incoming input signal to spikes and uses a memristive crossbar to carry out in-memory computing operations. We develop a novel input and output processing engine for our network and demonstrate its spatio-temporal information processing capability. We demonstrate an accuracy of 100% with our design through a small-scale hardware simulation for digit recognition and an accuracy of 87% in software through MNIST simulations. / M.S. / In recent years, neuromorphic computing systems have achieved considerable success due to their ability to process data much faster and with much less power than traditional von Neumann computing architectures. 
Artificial Neural Networks (ANNs) are models that mimic biological neurons: artificial neurons, or neurodes, are connected together via synapses, similar to the nervous system in the human body. There are two main types of ANNs: Feedforward Neural Networks (FNNs) and Recurrent Neural Networks (RNNs). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs are an improved version of ANNs that mimic biological neurons closely through the emission of spikes, which offers significant power and energy advantages for data-intensive applications by allowing spatio-temporal information processing. On the other hand, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computation in neuromorphic hardware. One particular eNVM technology, the memristor, has received wide attention due to its scalability, compatibility with CMOS technology, and low power consumption. In this work we develop a spiking neural network that incorporates an inter-spike-interval encoding scheme to convert the incoming input signal to spikes and uses a memristive crossbar to carry out in-memory computing operations. We demonstrate the accuracy of our design through a small-scale hardware simulation for digit recognition and achieve an accuracy of 87% in software through MNIST simulations.
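The in-memory computing step on a memristive crossbar amounts to a matrix-vector multiply by Ohm's and Kirchhoff's laws: input voltages on the rows produce summed column currents weighted by the device conductances. The idealized sketch below stores each signed weight as a conductance pair and ignores wire resistance and device non-idealities:

```python
import numpy as np

def crossbar_mvm(g_pos, g_neg, v_in):
    """Idealized memristive-crossbar matrix-vector multiply.

    Each signed weight w is stored as a pair of non-negative conductances
    (G+, G-) with w = G+ - G-. Applying the input voltage vector along the
    rows yields, per column, the current sum I_j = sum_i V_i * G_ij
    (Ohm's law per device, Kirchhoff's current law per column wire).
    """
    return v_in @ g_pos - v_in @ g_neg   # differential column currents

rng = np.random.default_rng(1)
w = rng.normal(size=(4, 3))                              # target signed weights
g_pos, g_neg = np.clip(w, 0, None), np.clip(-w, 0, None) # conductance pairs
v = rng.normal(size=4)                                   # input voltage vector
i_out = crossbar_mvm(g_pos, g_neg, v)                    # equals v @ w ideally
```

In a spiking design like the one above, `v` would be replaced by spike-driven read pulses, but the column-wise current summation is the same in-memory operation.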
