1

Design and Optimization of Temporal Encoders using Integrate-and-Fire and Leaky Integrate-and-Fire Neurons

Anderson, Juliet Graciela 05 October 2022
As Moore's law nears its limit, a new form of signal processing is needed. Neuromorphic computing draws inspiration from biology to provide one, mimicking biological neural networks with electrical components. It requires less signal preprocessing than digital systems since it can encode signals directly using analog temporal encoders from Spiking Neural Networks (SNNs). These encoders receive an analog signal as input and generate a spike or spike train as output. The proposed temporal encoders use latency and Inter-Spike Interval (ISI) encoding and are expected to produce a highly sensitive hardware implementation of time encoding to preprocess signals for dynamic neural processors. Two ISI and two latency encoders were designed using Integrate-and-Fire (IF) and Leaky Integrate-and-Fire (LIF) neurons and optimized for low area. The IF and LIF neurons were designed in the Global Foundries 180 nm CMOS process and occupy 186 µm² and 182 µm², respectively. All four encoders have a sampling frequency of 50 kHz. The latency encoders achieved an average energy consumption per spike of 277 nJ and 316 pJ for the IF-based and LIF-based designs, respectively; the ISI encoders achieved 1.07 µJ and 901 nJ for the IF-based and LIF-based designs, respectively. Power consumption is proportional to the number of neurons in the encoder, and the potential to reduce it through layout-level simulations is presented. The LIF neuron achieves operability similar to the IF neuron with a smaller membrane capacitance and consumes less area despite having more components, demonstrating that capacitor size is the main limitation on miniaturizing spiking neurons for SNNs. An overview of the design and layout process of the two neurons is given, with tips for overcoming the problems encountered. The proposed designs enable fast neuromorphic processing by operating at frequencies above 10 kHz and by providing a hardware implementation that is efficient across sectors such as machine learning, medical applications, and security systems, where hardware is less vulnerable to attack. / Master of Science / As Moore's law nears its limit, a new form of signal processing is needed. Moore's law anticipated that transistor sizes would shrink exponentially over the years, but CMOS technology is reaching physical limits that could end that trend. Neuromorphic computing draws inspiration from biology, mimicking biological neural networks with electrical components. Biological neural networks communicate through interconnected neurons that transmit signals across synapses. Neuromorphic computing uses a subdivision of Artificial Neural Networks (ANNs) called Spiking Neural Networks (SNNs) to encode input signals into voltage spikes, mimicking biological neurons. It reduces the preprocessing needed to handle data in the digital domain since it can encode signals directly using analog temporal encoders from SNNs. These encoders receive an analog signal as input and generate a spike or spike train as output. The proposed temporal encoders use latency and Inter-Spike Interval (ISI) encoding and are expected to produce a highly sensitive hardware implementation of time encoding to preprocess signals for dynamic neural processors.
Two ISI and two latency encoders were designed using Integrate-and-Fire (IF) and Leaky Integrate-and-Fire (LIF) neurons and optimized for low area. All four encoders have a sampling frequency of 50 kHz. The latency encoders achieved an average energy consumption per spike of 277 nJ and 316 pJ for the IF-based and LIF-based designs, respectively; the ISI encoders achieved 1.07 µJ and 901 nJ. Power consumption is proportional to the number of neurons in the encoder, and the potential to reduce it through layout-level simulations is presented. The LIF neuron achieves operability similar to the IF neuron with a smaller membrane capacitance and consumes less area despite having more components, demonstrating that capacitor size is the main limitation on miniaturizing spiking neurons. An overview of the design and layout process of the two neurons is given, with tips for overcoming the problems encountered. The proposed designs enable fast neuromorphic processing by operating at frequencies above 10 kHz and by providing a hardware implementation that is efficient across sectors such as machine learning, medical applications, and security systems, where hardware is less vulnerable to attack.
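The two encoding schemes can be illustrated with a short behavioral model. The sketch below is a minimal discrete-time simulation, not the thesis's transistor-level circuits: the time step matches the stated 50 kHz sampling frequency, but the threshold, leak time constant, and input amplitude are illustrative assumptions.

    import numpy as np

    def spike_times(signal, dt=2e-5, tau=5e-3, v_th=1.0, leak=True):
        # Behavioral (leaky) integrate-and-fire encoder: integrate the
        # input, fire when the membrane potential crosses v_th, then reset.
        # leak=False gives a plain IF neuron. dt = 1/50 kHz matches the
        # encoders' sampling frequency; tau and v_th are illustrative.
        v, times = 0.0, []
        for i, x in enumerate(signal):
            v += ((-v / tau if leak else 0.0) + x) * dt
            if v >= v_th:
                times.append(i * dt)
                v = 0.0
        return np.asarray(times)

    stimulus = np.full(2000, 300.0)      # 40 ms of constant-amplitude input

    # Latency encoding: a stronger input crosses threshold sooner, so the
    # time to the FIRST spike carries the amplitude information.
    latency = spike_times(stimulus)[0]

    # ISI encoding: the gaps BETWEEN successive spikes carry the
    # information; larger inputs shorten the inter-spike interval.
    isi = np.diff(spike_times(stimulus))
    print(f"latency {latency*1e3:.2f} ms, mean ISI {isi.mean()*1e3:.2f} ms")

Note that in this model the leaky membrane potential saturates at x·τ, so only inputs above v_th/τ produce spikes, whereas the IF variant eventually fires for any positive input; this is one reason leak and membrane capacitance must be co-designed.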
2

Spiking Neural Network with Memristive Based Computing-In-Memory Circuits and Architecture

Nowshin, Fabiha January 2021
In recent years neuromorphic computing systems have achieved considerable success due to their ability to process data much faster and with much less power than traditional Von Neumann computing architectures. There are two main types of Artificial Neural Networks (ANNs): Feedforward Neural Networks (FNNs) and Recurrent Neural Networks (RNNs). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs are an improved version of ANNs that mimic biological neurons closely through the emission of spikes, which offers significant power and energy advantages in data-intensive applications by allowing spatio-temporal information processing. On the other hand, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computation in neuromorphic hardware. One eNVM technology in particular, the memristor, has received wide attention due to its scalability, compatibility with CMOS technology, and low power consumption. In this work we develop a spiking neural network that incorporates an inter-spike interval encoding scheme to convert the incoming input signal to spikes and uses a memristive crossbar to carry out in-memory computing operations. We develop a novel input and output processing engine for our network and demonstrate its spatio-temporal information processing capability. Our design achieves 100% accuracy in a small-scale hardware simulation for digit recognition and 87% accuracy in software on MNIST simulations. / M.S. / In recent years neuromorphic computing systems have achieved considerable success due to their ability to process data much faster and with much less power than traditional Von Neumann computing architectures. Artificial Neural Networks (ANNs) are models that mimic biological neurons, with artificial neurons (nodes) connected via synapses, similar to the nervous system in the human body. There are two main types of ANNs: Feedforward Neural Networks (FNNs) and Recurrent Neural Networks (RNNs). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs are an improved version of ANNs that mimic biological neurons closely through the emission of spikes, which offers significant power and energy advantages in data-intensive applications by allowing spatio-temporal information processing. On the other hand, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computation in neuromorphic hardware. One eNVM technology in particular, the memristor, has received wide attention due to its scalability, compatibility with CMOS technology, and low power consumption. In this work we develop a spiking neural network that incorporates an inter-spike interval encoding scheme to convert the incoming input signal to spikes and uses a memristive crossbar to carry out in-memory computing operations. We demonstrate the accuracy of our design through a small-scale hardware simulation for digit recognition and achieve 87% accuracy in software on MNIST simulations.
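The in-memory computing step at the heart of such a design — applying spike voltages to the rows of a memristive crossbar and reading conductance-weighted current sums from its columns — amounts to an analog matrix-vector multiplication. The sketch below is a simplified behavioral model under assumed conductance bounds and read voltage, not the thesis's circuit-level design.

    import numpy as np

    rng = np.random.default_rng(0)

    # Each cross-point stores a synaptic weight as a memristor conductance
    # G[i, j]; by Ohm's and Kirchhoff's laws, driving the rows with spike
    # voltages V[i] yields column currents I[j] = sum_i V[i] * G[i, j].
    G_MIN, G_MAX = 1e-6, 1e-4                      # assumed conductance range (S)
    weights = rng.uniform(0.0, 1.0, (16, 10))      # trained weights in [0, 1]
    G = G_MIN + weights * (G_MAX - G_MIN)          # weights mapped to conductances

    V_READ = 0.2                                   # assumed read voltage per spike (V)
    spikes = rng.integers(0, 2, 16).astype(float)  # binary input spike vector
    column_currents = (spikes * V_READ) @ G        # one analog multiply-accumulate
    predicted_class = int(np.argmax(column_currents))  # e.g. recognized digit

Because the inter-spike-interval-encoded spike trains simply gate the read voltage in time, the same array evaluates the weighted sums at every time step without ever moving the weights out of memory.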
3

Nonlinear signal processing by noisy spiking neurons

Voronenko, Sergej Olegovic 12 February 2018
Neurons are excitable cells that communicate with one another via electrical signals. In general, incoming signals are processed by nerve cells in a nonlinear fashion. How this processing can be described mathematically in a comprehensive and exact way remains unresolved and is the subject of current research. In this thesis we investigate the nonlinear transmission and processing of signals by stochastic nerve cells, taking two different approaches. In the first part of the thesis we address the question of how a signal with a known time dependence modulates the rate of neural activity. In the second part we turn to the reconstruction of input signals from the neural activity they evoke and to the estimation of the amount of transmitted information. The results of this thesis demonstrate how the established linear theories for modeling the neural firing rate and for reconstructing signals can be extended by higher-order contributions. An equally important contribution of this work is demonstrating the significance of the nonlinear theories: the nonlinear contributions prove to be not merely weak corrections to the established linear theories but describe novel effects that the linear theories cannot capture. These effects include, for example, the excitation of harmonic oscillations in the neural firing rate and the encoding of signals in the signal-dependent variance of a response variable. / Neurons are excitable cells which communicate with each other via electrical signals. In general, these signals are processed by the neurons in a nonlinear fashion, the exact mathematical description of which is still an open problem in neuroscience. In this thesis, the broad topic of nonlinear signal processing is approached from two directions. The first part of the thesis is devoted to the question of how input signals modulate the neural response. The second part is concerned with the nonlinear reconstruction of input signals from the neural output and with the estimation of the amount of transmitted information. The results of this thesis demonstrate how existing linear theories can be extended to capture nonlinear contributions of the signal to the neural response or to incorporate nonlinear correlations into the estimation of the transmitted information. More importantly, however, our analysis demonstrates that these extensions do not merely provide small corrections to the existing linear theories but can account for qualitatively novel effects which are completely missed by the linear theories. These effects include, for example, the excitation of harmonic oscillations in the neural firing rate or the estimation of information for systems with a signal-dependent output variance.
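The extension of linear response theory that the abstract describes can be summarized by a second-order (Volterra-type) expansion of the instantaneous firing rate around the background rate r_0; this generic form and kernel notation are standard in the field and are given here for orientation, not quoted from the thesis:

    r(t) \approx r_0 + \int_0^\infty K_1(\tau)\, s(t-\tau)\, \mathrm{d}\tau
               + \int_0^\infty \int_0^\infty K_2(\tau_1, \tau_2)\, s(t-\tau_1)\, s(t-\tau_2)\, \mathrm{d}\tau_1\, \mathrm{d}\tau_2

The linear kernel K_1 reproduces the established linear response theory; the second-order kernel K_2 captures the qualitatively new effects mentioned above — for example, a weak cosine signal at frequency f can excite a rate oscillation at the harmonic 2f, which no purely linear theory can produce.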
