11

Adapting Neural Network Learning Algorithms for Neuromorphic Implementations

Jason M Allred (11197680) 29 July 2021 (has links)
Computing with Artificial Neural Networks (ANNs) is a branch of machine learning that has seen substantial growth over the last decade, significantly increasing the accuracy and capability of machine learning systems. ANNs are connected networks of computing elements inspired by the neuronal connectivity in the brain. Spiking Neural Networks (SNNs) are a type of ANN that operate with event-driven computation, inspired by the “spikes” or firing events of individual neurons in the brain. Neuromorphic computing—the implementation of neural networks in hardware—seeks to improve the energy efficiency of these machine learning systems either by computing directly with device physical primitives, by bypassing the software layer of logical implementations, or by operating with SNN event-driven computation. Such implementations may, however, have added restrictions, including weight-localized learning and hard-wired connections. Further obstacles, such as catastrophic forgetting, the lack of supervised error signals, and storage and energy constraints, are encountered when these systems need to perform autonomous online, real-time learning in an unknown, changing environment.

Adapting neural network learning algorithms for these constraints can help address these issues. Specifically, corrections to Spike Timing-Dependent Plasticity (STDP) can stabilize local, unsupervised learning; accounting for the statistical firing properties of spiking neurons may improve conversions from non-spiking to spiking networks; biologically-inspired dopaminergic and habituation adjustments to STDP can limit catastrophic forgetting; convolving temporally instead of spatially can provide for localized weight sharing with direct synaptic connections; and explicitly training for spiking sparsity can significantly reduce computational energy consumption.
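The pair-based STDP rule referenced above can be sketched in a few lines. The following is a generic illustration, not the dissertation's actual algorithm; the trace time constants, learning rates, and soft weight bounds (one common way to stabilize local, unsupervised STDP learning) are assumed for the example.

```python
import numpy as np

# Minimal pair-based STDP sketch (illustrative; all parameters are
# assumed, not taken from the dissertation). Pre- and postsynaptic
# spike traces decay exponentially; the weight is nudged at each spike.
TAU_PRE, TAU_POST = 20.0, 20.0   # trace time constants (ms)
A_PLUS, A_MINUS = 0.01, 0.012    # potentiation / depression rates
W_MAX = 1.0

def stdp_step(w, pre_spike, post_spike, x_pre, x_post, dt=1.0):
    """One timestep of pair-based STDP with soft weight bounds."""
    x_pre += dt * (-x_pre / TAU_PRE) + pre_spike      # presynaptic trace
    x_post += dt * (-x_post / TAU_POST) + post_spike  # postsynaptic trace
    if post_spike:   # pre-before-post ordering -> potentiate
        w += A_PLUS * x_pre * (W_MAX - w)
    if pre_spike:    # post-before-pre ordering -> depress
        w -= A_MINUS * x_post * w
    return np.clip(w, 0.0, W_MAX), x_pre, x_post
```

The soft bounds (the `W_MAX - w` and `w` factors) keep weights from running away, which is one simple instance of the kind of stabilizing correction the abstract alludes to.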
12

Implementation of bioinspired algorithms on the neuromorphic VLSI system SpiNNaker 2

Yan, Yexin 29 June 2023 (has links)
It is believed that neuromorphic hardware will accelerate neuroscience research and enable the next generation edge AI. On the other hand, brain-inspired algorithms are supposed to work efficiently on neuromorphic hardware. But both processes don't happen automatically. To efficiently bring together hardware and algorithm, optimizations are necessary based on the understanding of both sides. In this work, software frameworks and optimizations for efficient implementation of neural network-based algorithms on SpiNNaker 2 are proposed, resulting in optimized power consumption, memory footprint and computation time. In particular, first, a software framework including power management strategies is proposed to apply dynamic voltage and frequency scaling (DVFS) to the simulation of spiking neural networks, which is also the first-ever software framework running a neural network on SpiNNaker 2. The result shows the power consumption is reduced by 60.7% in the synfire chain benchmark. Second, numerical optimizations and data structure optimizations lead to an efficient implementation of reward-based synaptic sampling, which is one of the most complex plasticity algorithms ever implemented on neuromorphic hardware. The results show a reduction of computation time by a factor of 2 and energy consumption by 62%. Third, software optimizations are proposed which effectively exploit the efficiency of the multiply-accumulate array and the flexibility of the ARM core, which results in, when compared with Loihi, 3 times faster inference speed and 5 times lower energy consumption in a keyword spotting benchmark, and faster inference speed and lower energy consumption for adaptive control benchmark in high dimensional cases. The results of this work demonstrate the potential of SpiNNaker 2, explore its range of applications and also provide feedback for the design of the next generation neuromorphic hardware.
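To make the DVFS idea concrete, here is a hypothetical sketch of the kind of policy such a framework might apply: pick the lowest frequency/voltage level whose cycle budget still covers the predicted spike workload of the coming timestep. The levels, per-event cost, and workload model below are invented for illustration and are not SpiNNaker 2's actual operating points.

```python
# Hypothetical DVFS policy sketch for an SNN simulation timestep.
# Performance levels and the cost model are invented for illustration;
# they are not SpiNNaker 2's actual operating points.
LEVELS = [  # (frequency in MHz, core voltage in V), low to high
    (100, 0.50),
    (200, 0.60),
    (400, 0.80),
]
CYCLES_PER_SYN_EVENT = 30   # assumed processing cost per synaptic event
TIMESTEP_US = 1000          # 1 ms simulation timestep

def pick_level(pending_events):
    """Lowest (f, V) level that can process the pending events in time."""
    for f_mhz, v in LEVELS:
        budget_cycles = f_mhz * TIMESTEP_US  # cycles available this step
        if pending_events * CYCLES_PER_SYN_EVENT <= budget_cycles:
            return f_mhz, v
    return LEVELS[-1]  # saturate at the highest level

print(pick_level(500))    # light load  -> low frequency and voltage
print(pick_level(10000))  # heavy load -> higher level
```

Because dynamic power scales with frequency and roughly with the square of voltage, dropping to the lowest sufficient level during light timesteps is where the reported savings would come from.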
13

Use and Application of 2D Layered Materials-Based Memristors for Neuromorphic Computing

Alharbi, Osamah 01 February 2023 (has links)
This work presents a step forward in the use of 2D layered materials (2DLM), specifically hexagonal boron nitride (h-BN), for the fabrication of memristors. We fabricate, characterize, and use h-BN based memristors with an Ag/few-layer h-BN/Ag structure to implement a fully functioning artificial leaky integrate-and-fire neuron in hardware. The devices exhibit volatile resistive switching with no electroforming step required, a relatively low V_SET, and endurance beyond 1.5 million cycles. We also present some of the failure mechanisms in these devices, with statistical analyses of their causes and of both cycle-to-cycle and device-to-device variability across 20 devices. We then use these devices to implement a functioning artificial leaky integrate-and-fire neuron analogous to a biological neuron in the brain, providing a SPICE simulation and a hardware implementation of the artificial neuron that are in full agreement and show that our device is suitable for such applications. Finally, we study the use of these devices as activation functions for spiking neural networks (SNNs) through a SPICE simulation of a fully trained network in which the artificial spiking neuron is connected to the output terminal of a crossbar array, providing a proof of concept for using h-BN based memristors as activation functions for SNNs.
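For reference, the leaky integrate-and-fire dynamics that such a circuit emulates can be sketched numerically. The parameters below are generic textbook values, not measurements from the fabricated devices.

```python
import numpy as np

# Generic leaky integrate-and-fire (LIF) dynamics of the kind the
# memristive neuron circuit emulates. Parameters are textbook-style
# assumptions, not values from the fabricated Ag/h-BN/Ag devices.
TAU_M = 10.0      # membrane time constant (ms)
V_TH = 1.0        # firing threshold (arbitrary units)
V_RESET = 0.0
R_M = 1.0         # membrane resistance

def simulate_lif(i_in, dt=0.1):
    """Integrate input current; emit a spike and reset at threshold."""
    v, spikes = V_RESET, []
    for t, i in enumerate(i_in):
        v += dt / TAU_M * (-v + R_M * i)  # leaky integration
        if v >= V_TH:
            spikes.append(t * dt)         # record spike time (ms)
            v = V_RESET                   # reset after firing
    return spikes

print(simulate_lif(np.full(1000, 1.5)))   # constant drive -> regular spiking
```

In the hardware version described above, the volatile switching of the Ag/h-BN/Ag device plays the role of the threshold-and-reset step, while a capacitor provides the leaky integration.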
14

Application and Simulation of Neuromorphic Devices for use in Neural Networks

Wenke, Sam 28 September 2018 (has links)
No description available.
15

Neuromorphic electronics with Mott insulators

Michael Taejoon Park (11896016) 25 July 2022 (has links)
The traditional semiconductor device scaling based on Moore's law is reaching its physical limits. New materials hosting rich physical phenomena, such as correlated electronic behavior, may be essential to identifying novel approaches to information processing. The tunable band structures in such systems enable the design of hardware for neuromorphic computing. Strongly correlated perovskite nickelates (ReNiO3) are a class of quantum materials that possess exotic electronic properties such as metal-to-insulator transitions. In this thesis, detailed studies of NdNiO3 thin films, from wafer-scale synthesis to structural characterization to electronic device demonstration, are discussed.

Atomic layer deposition (ALD) of correlated oxide thin films is essential for emerging electronic technologies and industry. We report the scalable ALD growth of neodymium nickelate (NdNiO3) with high crystal quality using Nd(iPrCp)3, Ni(tBu2-amd)2, and ozone (O3) as precursors. By controlling growth parameters such as precursor dose time and reactor temperature, we optimized the ALD conditions for the perovskite phase of NdNiO3. We studied the structure and electrical properties of ALD NdNiO3 films epitaxially grown on LaAlO3 and confirmed that their properties are comparable to those of films synthesized by physical vapor deposition.

ReNiO3 undergoes a dramatic phase transition upon hydrogen doping through catalytic electrodes, independent of temperature. The electrons donated by hydrogen occupy Ni 3d orbitals and create a strongly correlated insulating state with resistance changes of up to eight orders of magnitude. At room temperature, protons remain localized in the lattice near the catalytic electrodes and, owing to their charge, can be moved by electric fields. The effect of high-speed voltage pulses on proton migration in NdNiO3 devices is discussed: voltage pulses of varying magnitude were applied on nanosecond timescales, and the resulting resistance changes of the nickelate devices were investigated.

Reconfigurable perovskite nickelate devices were demonstrated in which a single device can switch between multiple electronic functions, such as neuron, synapse, resistor, and capacitor, controlled by a single electrical pulse. Raman spectroscopy showed that differences in the local proton distribution near the Pd electrode lead to the different functions. This body of results motivates the search for novel materials in which subtle compositional or structural differences can enable different gaps that host neuromorphic functions.
16

Spike Processing Circuit Design for Neuromorphic Computing

Zhao, Chenyuan 13 September 2019 (has links)
The von Neumann bottleneck, the limited throughput between CPU and memory, has become the major factor hindering technical advances in computing systems. In recent years, neuromorphic systems have gained increasing attention as compact and energy-efficient computing platforms. Spike-based neuromorphic computing systems require high-performance, low-power neural encoders and decoders to emulate the spiking behavior of neurons. These two spike/analog conversion interfaces determine the performance of the whole spiking neuromorphic computing system, especially its peak performance. Many state-of-the-art neuromorphic systems typically operate in the frequency range between 10^0 kHz and 10^2 kHz due to the limited encoding/decoding speed. In this dissertation, the popular encoding and decoding schemes, i.e., rate encoding, latency encoding, and inter-spike interval (ISI) encoding, together with related hardware implementations, are discussed and analyzed. The contributions of this dissertation fall into three main parts: neuron improvement, three ISI encoder designs, and two ISI decoder designs. A two-path-leakage LIF neuron was fabricated, and a modular design methodology was developed. Three ISI encoding schemes are discussed: parallel signal encoding, full signal iteration encoding, and partial signal encoding. The first two ISI encoders were fabricated successfully, and the third will be taped out by the end of 2019. The two ISI decoders adopt different techniques: a sample-and-hold-based mixed-signal design and a spike-timing-dependent-plasticity (STDP)-based analog design. Both decoders were evaluated successfully through post-layout simulations, and the STDP-based decoder will be taped out by the end of 2019. A test bench based on correlation inspection was built to evaluate the information recovery capability of the proposed spike processing link. / Doctor of Philosophy / Neuromorphic computing refers to electronic systems that mimic the behavior of biological nervous systems. In most cases, a neuromorphic computing system is built with analog circuits, which offer benefits in power efficiency and low thermal radiation. One of the most important components of a neuromorphic computing system is the signal processing interface, i.e., the encoder/decoder. To increase the whole system's performance, novel encoders and decoders are proposed in this dissertation: three kinds of temporal encoders, one rate encoder, one latency encoder, one temporal decoder, and one general spike decoder. These designs can be combined to build a highly efficient spike-based data link that guarantees the processing performance of the whole neuromorphic computing system.
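The latency and ISI encoding schemes discussed above can be illustrated at a behavioral level. The sketch below is a generic software analogue, not the dissertation's circuit-level implementations; input values are normalized to [0, 1] and the timing constants are assumed.

```python
import numpy as np

# Behavioral sketch of two temporal spike-encoding schemes (generic
# illustration; not the dissertation's circuit implementations).
T_WINDOW = 10.0  # encoding window (ms), assumed

def latency_encode(x):
    """Latency coding: a stronger input fires an earlier single spike."""
    return T_WINDOW * (1.0 - x)  # spike time within the window

def isi_encode(values, t_min=1.0):
    """ISI coding: each value sets the gap to the next spike."""
    gaps = t_min + T_WINDOW * (1.0 - np.asarray(values))
    return np.cumsum(gaps)       # absolute spike times

print(latency_encode(0.8))          # -> 2.0 ms (early spike)
print(isi_encode([0.8, 0.2, 0.5]))  # -> [3., 12., 18.]: value-coded gaps
```

A decoder inverts the mapping, recovering each value from a measured spike time or inter-spike gap, which is what the sample-and-hold and STDP-based decoder designs do in hardware.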
17

Spiking Neural Network with Memristive Based Computing-In-Memory Circuits and Architecture

Nowshin, Fabiha January 2021 (has links)
In recent years, neuromorphic computing systems have achieved considerable success due to their ability to process data much faster and with much less power than traditional von Neumann computing architectures. There are two main types of Artificial Neural Networks (ANNs): Feedforward Neural Networks (FNNs) and Recurrent Neural Networks (RNNs). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs mimic biological neurons more closely through the emission of spikes, which offers significant power and energy advantages in data-intensive applications by enabling spatio-temporal information processing. Meanwhile, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computation in neuromorphic hardware. One particular eNVM technology, the memristor, has received wide attention due to its scalability, compatibility with CMOS technology, and low power consumption. In this work we develop a spiking neural network that incorporates an inter-spike interval encoding scheme to convert the incoming input signal into spikes and uses a memristive crossbar to carry out in-memory computing operations. We develop a novel input and output processing engine for our network and demonstrate its spatio-temporal information processing capability, achieving 100% accuracy in a small-scale hardware simulation for digit recognition and 87% accuracy in software through MNIST simulations. / M.S. / In recent years, neuromorphic computing systems have achieved considerable success due to their ability to process data much faster and with much less power than traditional von Neumann computing architectures. Artificial Neural Networks (ANNs) are models that mimic biological neurons: artificial neurons, or neurodes, are connected via synapses, similar to the nervous system in the human body. There are two main types of ANNs: Feedforward Neural Networks (FNNs) and Recurrent Neural Networks (RNNs). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs mimic biological neurons more closely through the emission of spikes, which offers significant power and energy advantages in data-intensive applications by enabling spatio-temporal information processing. Meanwhile, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computation in neuromorphic hardware. One particular eNVM technology, the memristor, has received wide attention due to its scalability, compatibility with CMOS technology, and low power consumption. In this work we develop a spiking neural network that incorporates an inter-spike interval encoding scheme to convert the incoming input signal into spikes and uses a memristive crossbar to carry out in-memory computing operations. We demonstrate the accuracy of our design through a small-scale hardware simulation for digit recognition and achieve 87% accuracy in software through MNIST simulations.
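The in-memory computing step on a memristive crossbar is, at its core, an analog matrix-vector multiplication: Ohm's law gives each cell's current and Kirchhoff's current law sums the contributions along each column. A minimal idealized sketch (ignoring wire resistance, sneak paths, and device variability; the conductance values are made up):

```python
import numpy as np

# Idealized memristive-crossbar matrix-vector multiply: each cell's
# conductance G[i, j] stores a weight, input voltages drive the rows,
# and each column current sums the per-cell currents (Ohm's law plus
# Kirchhoff's current law). Non-idealities are ignored.
G = np.array([[0.8, 0.1, 0.5],    # conductances, arbitrary units
              [0.2, 0.9, 0.3]])   # (values made up for illustration)
v_in = np.array([1.0, 0.5])       # row input voltages

i_out = v_in @ G                  # column currents = analog dot products
print(i_out)                      # -> [0.9, 0.55, 0.65]
```

In the network described above, the ISI-encoded spikes set the row inputs and the column currents feed the output processing engine, so the multiply-accumulate happens in the memory array itself rather than in a separate processor.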
18

Design and Optimization of Temporal Encoders using Integrate-and-Fire and Leaky Integrate-and-Fire Neurons

Anderson, Juliet Graciela 05 October 2022 (has links)
As Moore's law nears its limit, a new form of signal processing is needed. Neuromorphic computing draws inspiration from biology to produce a new form of signal processing by mimicking biological neural networks with electrical components. Neuromorphic computing requires less signal preprocessing than digital systems because it can encode signals directly using analog temporal encoders from Spiking Neural Networks (SNNs). These encoders receive an analog signal as input and generate a spike or spike train as output. The proposed temporal encoders use latency and inter-spike interval (ISI) encoding and are expected to yield a highly sensitive hardware implementation of time encoding for preprocessing signals for dynamic neural processors. Two ISI and two latency encoders were designed using Integrate-and-Fire (IF) and Leaky Integrate-and-Fire (LIF) neurons and optimized for low area. The IF and LIF neurons were designed in the GlobalFoundries 180 nm CMOS process and achieved areas of 186 µm² and 182 µm², respectively. All four encoders have a sampling frequency of 50 kHz. The latency encoders achieved an average energy consumption per spike of 277 nJ and 316 pJ for the IF-based and LIF-based designs, respectively; the ISI encoders achieved 1.07 µJ and 901 nJ for the IF-based and LIF-based designs, respectively. Power consumption is proportional to the number of neurons employed in the encoder, and the potential to reduce power consumption through layout-level simulations is presented. The LIF neuron can use a smaller membrane capacitance to achieve operability similar to the IF neuron's and occupies less area despite having more components, demonstrating that capacitor size is the main limitation on miniaturizing spiking neurons for SNNs. An overview of the design and layout process of the two neurons is given, with tips for overcoming problems encountered. The proposed designs enable fast neuromorphic processing by operating above 10 kHz and by providing a hardware implementation useful in multiple sectors, such as machine learning, medical applications, and security systems, where hardware is less vulnerable to attacks. / Master of Science / As Moore's law nears its limit, a new form of signal processing is needed. Moore's law anticipated that transistor sizes would decrease exponentially over the years, but CMOS technology is reaching physical limitations that could mean an end to Moore's prediction. Neuromorphic computing draws inspiration from biology to produce a new form of signal processing by mimicking biological neural networks with electrical components. Biological neural networks communicate through interconnected neurons that transmit signals across synapses. Neuromorphic computing uses a subdivision of Artificial Neural Networks (ANNs) called Spiking Neural Networks (SNNs) to encode input signals into voltage spikes, mimicking biological neurons. Neuromorphic computing reduces the preprocessing needed to process data in the digital domain because it can encode signals directly using analog temporal encoders from SNNs. These encoders receive an analog signal as input and generate a spike or spike train as output. The proposed temporal encoders use latency and inter-spike interval (ISI) encoding and are expected to yield a highly sensitive hardware implementation of time encoding for preprocessing signals for dynamic neural processors.
Two ISI and two latency encoders were designed using Integrate-and-Fire (IF) and Leaky Integrate-and-Fire (LIF) neurons and optimized for low area. All four encoders have a sampling frequency of 50 kHz. The latency encoders achieved an average energy consumption per spike of 277 nJ and 316 pJ for the IF-based and LIF-based designs, respectively; the ISI encoders achieved 1.07 µJ and 901 nJ for the IF-based and LIF-based designs, respectively. Power consumption is proportional to the number of neurons employed in the encoder, and the potential to reduce power consumption through layout-level simulations is presented. The LIF neuron can use a smaller membrane capacitance to achieve similar operability and occupies less area despite having more components than the IF neuron, demonstrating that capacitor size is the main limitation on miniaturizing neurons for spiking neural networks. An overview of the design and layout process of the two neurons is given, with tips for overcoming problems encountered. The proposed designs enable fast neuromorphic processing by operating above 10 kHz and by providing a hardware implementation useful in multiple sectors, such as machine learning, medical applications, and security systems, where hardware is less vulnerable to attacks.
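The functional difference between the two neuron models comes down to a single leak term, which pulls the LIF membrane back toward rest between inputs. A generic sketch of the two update rules (textbook dynamics with assumed parameters, not the fabricated 180 nm designs):

```python
# Generic IF vs. LIF membrane updates (textbook dynamics; parameters
# are assumed, not taken from the fabricated designs). The only
# difference is the leak term in the LIF update.
DT, TAU_M, V_TH = 0.1, 5.0, 1.0

def if_step(v, i_in):
    return v + DT * i_in                # pure integration, no leak

def lif_step(v, i_in):
    return v + DT * (i_in - v / TAU_M)  # integration with leak

v_if = v_lif = 0.0
for _ in range(200):                    # weak constant drive
    v_if, v_lif = if_step(v_if, 0.15), lif_step(v_lif, 0.15)
print(v_if >= V_TH, v_lif >= V_TH)      # IF fires; LIF saturates below threshold
```

The leak makes the LIF neuron selective for sufficiently strong or well-timed inputs rather than integrating everything indefinitely, which is why it can trade membrane capacitance for the extra leak circuitry.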
19

Leveraging Biological Mechanisms in Machine Learning

Rogers, Kyle J. 10 June 2024 (has links) (PDF)
This thesis integrates biologically-inspired mechanisms into machine learning to develop novel tuning algorithms, gradient abstractions for depth-wise parallelism, and an original bias neuron design. We introduce neuromodulatory tuning, which uses neurotransmitter-inspired bias adjustments to enhance transfer learning in spiking and non-spiking neural networks, significantly reducing parameter usage while maintaining performance. Additionally, we propose a novel approach that decouples the backward pass of backpropagation using layer abstractions, inspired by feedback loops in biological systems, enabling depth-wise training parallelization. We further extend neuromodulatory tuning by designing spiking bias neurons that mimic dopamine neuron mechanisms, leading to the development of volumetric tuning. This method enhances the fine-tuning of a small spiking neural network for EEG emotion classification, outperforming previous bias tuning methods. Overall, this thesis demonstrates the potential of leveraging neuroscience discoveries to improve machine learning.
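Neuromodulatory tuning, as described, adjusts bias-like parameters while leaving weights fixed; the closest conventional analogue is bias-only fine-tuning. A sketch of that baseline idea in PyTorch follows (the model, data, and hyperparameters are placeholders, and this is not the thesis's exact method):

```python
import torch
import torch.nn as nn

# Bias-only fine-tuning: a conventional analogue of the bias-centric
# "neuromodulatory tuning" described above. The model and training
# details are placeholders, not the thesis's actual setup.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

for name, p in model.named_parameters():
    p.requires_grad = name.endswith("bias")   # train biases only

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)

x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))  # dummy batch
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()  # only bias parameters move; weights stay frozen
```

The appeal is parameter efficiency: here only 138 of roughly 9,600 parameters are updated, which mirrors the abstract's claim of significantly reduced parameter usage during transfer learning.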
20

Organic electrochemical networks for biocompatible and implantable machine learning: Organic bioelectronic beyond sensing

Cucchi, Matteo 31 January 2022 (has links)
How can the brain be such a good computer? Part of the answer lies in the astonishing number of neurons and synapses that process electrical impulses in parallel. Part of it must be found in the ability of the nervous system to evolve in response to external stimuli, growing, sharpening, and depressing synaptic connections. Yet we are far from understanding even the basic mechanisms that allow us to think, be aware, recognize patterns, and imagine. The brain does all this while consuming only around 20 watts, outcompeting any human-made processor in energy efficiency. This question is of particular interest at a historical and technological stage where phrases like machine learning and artificial intelligence are ever more widespread, thanks to recent advances in computer science. Brain-inspired computation, however, still relies on algorithms that run on traditional silicon-based digital processors; building brain-like hardware, in which the substrate itself performs the computation and can dynamically update its electrical pathways, remains challenging. In this work, I employed organic semiconductors that operate in electrolytic solutions, called organic mixed ionic-electronic conductors (OMIECs), to build hardware capable of computation. By exploiting an electropolymerization technique, I could form conducting connections in response to electrical spikes, in analogy to how synapses evolve when a neuron fires. After demonstrating artificial synapses as a potential building block for neuromorphic chips, I shifted my attention to implementing such synapses in fully operational networks. In doing so, I borrowed the mathematical framework of a machine learning approach known as reservoir computing, which allows computation with random (neural) networks. The centerpiece of this work is the demonstration that such networks can be used in vivo for the recognition and classification of dangerous and healthy heartbeats: the first demonstration of machine learning carried out in a biological environment with a biocompatible substrate. The implications are straightforward: constant monitoring of biological signals and fluids, accompanied by active recognition of malignant patterns, may lead to timely, targeted, and early diagnosis of potentially fatal conditions. Finally, in attempting to simulate the random neural networks, I encountered difficulties modeling the devices with state-of-the-art approaches. I therefore explored a new way to describe OMIECs and OMIEC-based devices, starting from thermodynamic axioms. The results of this model shed light on the mechanism behind the operation of organic electrochemical transistors, revealing the importance of the entropy of mixing and suggesting new pathways for device optimization for targeted applications.
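Reservoir computing, as used above, feeds inputs through a fixed random recurrent network and trains only a linear readout. A minimal echo state network sketch follows (generic, with assumed sizes and scaling; a software illustration of the framework, not a model of the electropolymerized OMIEC networks):

```python
import numpy as np

# Minimal echo state network: a fixed random reservoir plus a trained
# linear readout (ridge regression). Sizes and scaling are assumed for
# illustration; this is not a model of the OMIEC networks.
rng = np.random.default_rng(0)
N_RES, N_IN = 100, 1
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1 (echo state)

def run_reservoir(u_seq):
    x, states = np.zeros(N_RES), []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))  # reservoir update
        states.append(x.copy())
    return np.array(states)

u = np.sin(np.linspace(0, 20, 500))         # toy input signal
X, y = run_reservoir(u)[:-1], u[1:]         # task: one-step-ahead prediction
ridge = 1e-6                                # regularization strength
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ y)
print(np.mean((X @ W_out - y) ** 2))        # training MSE of the readout
```

Only `W_out` is trained, which is what makes the approach attractive for physical substrates: the random, hard-to-characterize network can stay as it is, and all learning happens in a simple linear layer outside it.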
