1 |
THE DESIGN, FABRICATION AND CHARACTERIZATION OF SILICON OXIDE NITRIDE OXIDE SEMICONDUCTOR THIN FILM GATES FOR USE IN MODELING SPIKING ANALOG NEURAL CIRCUITS. Wood, Richard P. 04 1900
This thesis details the design, fabrication and characterization of organic semiconductor field effect transistors with silicon oxide-nitride-oxide-semiconductor (SONOS) gates for use in spiking analog neural circuits. The results are divided into two main sections. First, the SONOS structures, parallel plate capacitors and field effect transistors, were designed, fabricated and characterized. Second, these results were used to model spiking analog neural circuits with PSPICE-based software.

The initial design work begins with an analysis of the basic SONOS structure. The existence of the ultrathin layers of the SONOS structure is confirmed with Transmission Electron Microscopy (TEM) and Energy Dispersive Spectroscopy (EDS) scans of device stacks. Parallel plate capacitors were fabricated before complete transistors because they require significantly less processing. The structure and behaviour of these capacitors is similar to that of the transistor gates, which allows the structures to be optimized before the transistors are fabricated. These capacitors were fabricated using the following semiconductor materials: crystalline silicon, amorphous silicon, zinc oxide, copper phthalocyanine (CuPc) and tris(8-hydroxyquinolinato)aluminium (AlQ3). The devices were then subjected to standard capacitance-voltage (C-V) analysis. The results demonstrate that the inclusion of SONOS structures in the capacitors (and transistors) produces a hysteresis caused by charge accumulation in the nitride layer of the SONOS structure. This effect can be used as an embedded memory. Standard control devices were fabricated and analysed, and no significant hysteresis was observed. The hysteresis appears only after the SONOS devices are subjected to high voltages (approximately 14 volts), which allow tunneling through a thin oxide layer into traps in the silicon nitride layer. This analysis confirms that the memory effect is caused by the SONOS structure, not by interface states that can be charged and discharged.

The next step was to design and fabricate amorphous semiconductor field effect transistors with and without the SONOS structure. First, FETs without SONOS gates were fabricated using the amorphous semiconductor materials zinc oxide, CuPc and AlQ3, and then characterized. This initial step confirmed the functionality of these basic devices and the ability to fabricate working control samples. Next, SONOS-gate TFTs were fabricated using CuPc as the semiconductor material. The characterization of these devices confirmed the ability to shift the transfer characteristics of the devices through a read and write mechanism similar to that used to shift the C-V characteristics of the parallel plate capacitors. Split-gate FETs were also produced to examine the feasibility of individual transistors with multiple gates.

The results of these characterizations were used to model spiking analog neural circuits. This modeling was carried out in four parts. First, representative transfer and output characteristics were used to replicate analog spiking neural circuits, using standard PSPICE software with discrete TFT device characteristics modified to represent the amorphous CuPc organic transistors. The results were comparable to circuits using crystalline silicon transistors. Second, the SONOS structures were modeled, closely matching the characterized results for charge and voltage shift. Third, a simple Hebbian learning circuit was designed and modeled, demonstrating the potential for embedded memories. Lastly, split-gate devices were modeled using the device characterizations. / Doctor of Philosophy (PhD)
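The embedded-memory mechanism described above can be illustrated with a first-order parallel-plate estimate: charge trapped in the nitride shifts the threshold (flat-band) voltage by roughly Q/C of the blocking oxide. A minimal sketch, with all numbers illustrative assumptions rather than measured values from the thesis:

```python
# Hypothetical first-order model of the SONOS memory effect: trapped nitride
# charge shifts the threshold voltage by dV = Q_t / C_block, where C_block
# is the blocking-oxide capacitance per unit area.
EPS0 = 8.854e-12          # vacuum permittivity, F/m
K_SIO2 = 3.9              # relative permittivity of SiO2

def blocking_capacitance(t_ox_m: float) -> float:
    """Capacitance per unit area (F/m^2) of a blocking oxide of thickness t_ox_m."""
    return EPS0 * K_SIO2 / t_ox_m

def threshold_shift(q_trapped_c_per_m2: float, t_ox_m: float) -> float:
    """Threshold-voltage shift (V) caused by trapped nitride charge."""
    return q_trapped_c_per_m2 / blocking_capacitance(t_ox_m)

# Example: 2e-3 C/m^2 of trapped charge over an assumed 5 nm blocking oxide
shift = threshold_shift(2e-3, 5e-9)   # roughly 0.29 V
```

The shift corresponds to the hysteresis window read out in the C-V sweeps.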
|
2 |
Deep spiking neural networks. Liu, Qian January 2018
Neuromorphic Engineering (NE) has led to the development of biologically-inspired computer architectures whose long-term goal is to approach the performance of the human brain in terms of energy efficiency and cognitive capabilities. Although there are a number of neuromorphic platforms available for large-scale Spiking Neural Network (SNN) simulations, the problem of programming these brain-like machines to be competent in cognitive applications still remains unsolved. On the other hand, Deep Learning has emerged in Artificial Neural Network (ANN) research to dominate state-of-the-art solutions for cognitive tasks. Thus the main research problem is to understand how to operate and train biologically-plausible SNNs to close the gap in cognitive capabilities between SNNs and ANNs. SNNs can be trained by first training an equivalent ANN and then transferring the tuned weights to the SNN. This method is called "off-line" training, since it does not take place on the SNN directly, but rather on an ANN. However, previous off-line training methods have struggled with poor modelling accuracy of the spiking neurons and high computational complexity. In this thesis we propose a simple and novel activation function, Noisy Softplus (NSP), to closely model the response firing activity of biologically-plausible spiking neurons, and introduce a generalised off-line training method using the Parametric Activation Function (PAF) to map the abstract numerical values of the ANN to concrete physical units, such as current and firing rate, in the SNN. Based on this generalised training method and its fine tuning, we achieve state-of-the-art accuracy on the MNIST classification task using spiking neurons, 99.07%, on a deep spiking convolutional neural network (ConvNet). We then take a step forward to "on-line" training methods, where Deep Learning modules are trained purely on SNNs in an event-driven manner.
Existing work has failed to provide SNNs with recognition accuracy equivalent to ANNs due to the lack of mathematical analysis. Thus we propose a formalised Spike-based Rate Multiplication (SRM) method which transforms the product of firing rates into the number of coincident spikes of a pair of rate-coded spike trains. Moreover, these coincident spikes can be captured by the Spike-Timing-Dependent Plasticity (STDP) rule to update the weights between the neurons in an on-line, event-based, and biologically-plausible manner. Furthermore, we put forward solutions to reduce correlations between spike trains, thereby addressing the performance drop observed in on-line SNN training. The promising results of spiking Autoencoders (AEs) and spiking Restricted Boltzmann Machines (SRBMs) exhibit equivalent, sometimes even superior, classification and reconstruction capabilities compared to their non-spiking counterparts. To provide meaningful comparisons between these proposed SNN models and other existing methods within this rapidly advancing field of NE, we propose a large dataset of spike-based visual stimuli and a corresponding evaluation methodology to estimate the overall performance of SNN models and their hardware implementations.
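The Noisy Softplus idea described above can be sketched as a softplus whose sharpness scales with the neuron's noise level; the scaling constant `k` below is an assumed placeholder, since the thesis fits its parameters to simulated LIF responses:

```python
import math

def noisy_softplus(x: float, sigma: float, k: float = 0.17) -> float:
    """Sketch of a Noisy Softplus activation: a softplus whose curvature is
    controlled by the input noise level sigma. The constant k is an
    illustrative assumption, not the thesis's fitted value."""
    s = k * sigma
    return s * math.log1p(math.exp(x / s))

# For large positive x the function approaches the identity (ReLU-like);
# for strongly negative x it decays smoothly to zero.
y0 = noisy_softplus(0.0, 1.0)    # smooth response around threshold
y5 = noisy_softplus(5.0, 1.0)    # close to 5.0
```

A PAF-style mapping would then scale this abstract value into physical units such as firing rate.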
|
3 |
Exploring Optical Devices for Neuromorphic Applications. Rhim, Seon-Young 30 April 2024
In recent years, electronic-based artificial neural networks (ANNs) have been dominant in computer engineering. However, as tasks grow more complex, conventional electronic architectures reach their limits. Optical approaches therefore offer solutions through analog computation, using materials whose control of optical signals provides synaptic plasticity. This study explores photo- and electrochromic materials for synaptic functions in ANNs.
The switching behavior of the molecule diarylethene (DAE) affecting Surface Plasmon Polaritons (SPPs) is studied in the Kretschmann configuration. Optical pulse sequences enable synaptic plasticity like long-term potentiation and depression. DAE modulation and information transfer at distinct wavelengths allow simultaneous read and write processes, demonstrating non-volatile information storage in plasmonic waveguides.
Integrating the DAE into a Y-branch waveguide forms a fully optical 2x1 neural network. Synaptic functions, reflected in DAE switching, can thus be applied in waveguide transmission. Network training for logic gates is achieved using a gradient descent method to adapt AND or OR gate functions based on the learning set.
Electrochromic materials in waveguides enable optoelectronic modulation. Combining gel-like polymer electrolyte PS-PMMA-PS:[EMIM][TFSI] with PEDOT:PSS allows electrical modulation, demonstrating binary complementary control of transmissions and optical multiplexing in Y-branch waveguides. The solid polymer electrolyte PEG:NaOtf enables optical signal modulation for neuromorphic computing, thereby facilitating the adaptation of linear classification in Y-branch waveguides without the need for additional storage or processing units.
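The gradient-based logic-gate training described for the 2x1 Y-waveguide networks can be illustrated with a plain two-input sigmoid unit trained by gradient descent; the unit's weights play the role of the two branch transmissions, and everything below is a software stand-in for the optical setup, not the experiment itself:

```python
import math
import random

def train_gate(targets, epochs=5000, lr=0.5):
    """Train a 2-input, 1-output sigmoid unit by gradient descent on the
    four truth-table points (illustrative analogue of adapting two DAE
    branch transmissions)."""
    random.seed(0)
    w = [random.uniform(-0.5, 0.5) for _ in range(2)]
    b = 0.0
    data = [((0, 0), targets[0]), ((0, 1), targets[1]),
            ((1, 0), targets[2]), ((1, 1), targets[3])]
    for _ in range(epochs):
        for (x1, x2), t in data:
            y = 1.0 / (1.0 + math.exp(-(w[0]*x1 + w[1]*x2 + b)))
            err = (y - t) * y * (1.0 - y)   # gradient of squared error w.r.t. pre-activation
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1.0 / (1.0 + math.exp(-(w[0]*x1 + w[1]*x2 + b))) > 0.5

w_and, b_and = train_gate([0, 0, 0, 1])   # AND truth table
w_or, b_or = train_gate([0, 1, 1, 1])     # OR truth table
```

Both gates are linearly separable, so a single unit suffices, just as a single Y-junction suffices optically.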
|
4 |
The Role of Heterogeneity in Rhythmic Networks of Neurons. Reid, Michael Steven 02 January 2007
Engineers often view variability as undesirable and seek to minimize it, such as when they employ transistor-matching techniques to improve circuit and system performance. Biology, however, makes no discernible attempt to avoid this variability, which is particularly evident in biological nervous systems whose neurons exhibit marked variability in their cellular properties. In previous studies, this heterogeneity has been shown to have mixed consequences on network rhythmicity, which is essential to locomotion and other oscillatory neural behaviors. The systems that produce and control these stereotyped movements have been optimized to be energy efficient and dependable, and one particularly well-studied rhythmic network is the central pattern generator (CPG), which is capable of generating a coordinated, rhythmic pattern of motor activity in the absence of phasic sensory input. Because they are ubiquitous in biological preparations and reveal a variety of physiological behaviors, these networks provide a platform for studying a critical set of biological control paradigms and inspire research into engineered systems that exploit these underlying principles. We are directing our efforts toward the implementation of applicable technologies and modeling to better understand the combination of these two concepts: the role of heterogeneity in rhythmic networks of neurons. The central engineering theme of our work is to use digital and analog platforms to design and build Hodgkin-Huxley conductance-based neuron models that will be used to implement a half-center oscillator (HCO) model of a CPG. The primary scientific question that we will address is to what extent this heterogeneity affects the rhythmicity of a network of neurons.
To do so, we first analyzed the locations, continuities, and sizes of bursting regions using single-neuron models, and then used an FPGA model neuron to study parametric and topological heterogeneity in a fully-connected 36-neuron HCO. We found that heterogeneity can lead to more robust rhythmic networks of neurons, but the type and quantity of heterogeneity, and the population-level metric used to analyze bursting, are critical in determining when this occurs.
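The alternating activity of a half-center oscillator can be sketched with a rate-based Matsuoka model: two units coupled by mutual inhibition with slow self-adaptation. This is a simplified stand-in for the Hodgkin-Huxley conductance-based neurons the thesis uses, with illustrative parameters:

```python
def matsuoka_hco(steps=8000, dt=0.05):
    """Rate-based Matsuoka half-center oscillator: mutual inhibition (w)
    plus slow adaptation (b, tau_a) yields anti-phase alternation.
    Parameters are illustrative assumptions, not the thesis's values."""
    tau_r, tau_a, b, w, s = 1.0, 12.0, 2.5, 2.5, 1.0
    x = [0.1, 0.0]          # membrane-like states (asymmetric start breaks symmetry)
    v = [0.0, 0.0]          # slow adaptation states
    y1_hist, y2_hist = [], []
    for _ in range(steps):
        y = [max(0.0, xi) for xi in x]          # rectified outputs
        for i, j in ((0, 1), (1, 0)):
            x[i] += dt * (-x[i] - w * y[j] - b * v[i] + s) / tau_r
            v[i] += dt * (-v[i] + y[i]) / tau_a
        y1_hist.append(max(0.0, x[0]))
        y2_hist.append(max(0.0, x[1]))
    return y1_hist, y2_hist

y1, y2 = matsuoka_hco()
```

The two outputs alternate in anti-phase, the defining rhythm of an HCO.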
|
5 |
Spike-Based Bayesian-Hebbian Learning in Cortical and Subcortical Microcircuits. Tully, Philip January 2017
Cortical and subcortical microcircuits are continuously modified throughout life. Despite ongoing changes, these networks stubbornly maintain their functions, which persist although destabilizing synaptic and nonsynaptic mechanisms should ostensibly propel them towards runaway excitation or quiescence. What dynamical phenomena act together to balance such learning with information processing? What types of activity patterns do they underpin, and how do these patterns relate to our perceptual experiences? What enables learning and memory operations to occur despite such massive and constant neural reorganization? Progress towards answering many of these questions can be pursued through large-scale neuronal simulations. In this thesis, a Hebbian learning rule for spiking neurons inspired by statistical inference is introduced. The spike-based version of the Bayesian Confidence Propagation Neural Network (BCPNN) learning rule involves changes in both synaptic strengths and intrinsic neuronal currents. The model is motivated by molecular cascades whose functional outcomes are mapped onto biological mechanisms such as Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability. Temporally interacting memory traces enable spike-timing dependence, a stable learning regime that remains competitive, postsynaptic activity regulation, spike-based reinforcement learning and intrinsic graded persistent firing levels. The thesis seeks to demonstrate how multiple interacting plasticity mechanisms can coordinate reinforcement, auto- and hetero-associative learning within large-scale, spiking, plastic neuronal networks. Spiking neural networks can represent information in the form of probability distributions, and a biophysical realization of Bayesian computation can help reconcile disparate experimental observations.
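The probability-tracing core of the BCPNN rule can be sketched in a rate-based form: exponential traces estimate pre-, post- and coincidence activation probabilities, and the weight is their log-odds ratio. The time constant and epsilon floor below are illustrative assumptions, not the thesis's spike-based formulation:

```python
import math

class BcpnnSynapse:
    """Minimal rate-based sketch of the BCPNN idea: traces estimate P(pre),
    P(post) and P(pre, post); the weight is log P(pre,post)/(P(pre)P(post))
    and the bias is the log prior of the postsynaptic unit."""

    def __init__(self, tau=100.0, eps=0.01):
        self.tau, self.eps = tau, eps
        self.pi = self.pj = eps          # unit activation traces
        self.pij = eps * eps             # coincidence trace

    def update(self, pre, post, dt=1.0):
        a = dt / self.tau                # exponential smoothing factor
        self.pi += a * (pre - self.pi)
        self.pj += a * (post - self.pj)
        self.pij += a * (pre * post - self.pij)

    @property
    def weight(self):
        e2 = self.eps ** 2               # floor avoids log(0)
        return math.log((self.pij + e2) / (self.pi * self.pj + e2))

    @property
    def bias(self):
        return math.log(self.pj + self.eps)
```

Correlated pre/post activity drives the weight positive; anticorrelated activity drives it negative, the Hebbian/anti-Hebbian signature.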
|
6 |
Cellular models of dynamic neural fields. Chappet de Vangel, Benoît 14 November 2016
In the constant search for designs going beyond the limits of the von Neumann architecture, non-conventional computing offers various solutions such as neuromorphic engineering and cellular computing. Like von Neumann, who originally took inspiration from the brain to design computer architecture, neuromorphic engineering takes its inspiration directly from neurons and synapses, using an analog substrate closer to biology. Cellular computing draws its influence from natural computing substrates (chemical, physical or biological) that impose a locality of interaction from which organisation and computation emerge.
Research on neural mechanisms has demonstrated several emergent properties of neurons and synapses. One of them is the attractor dynamics described in different frameworks by Amari with dynamic neural fields (DNFs), and by Amit and Zhang with continuous attractor neural networks. These neural fields have various computing properties and are particularly relevant for spatial representations and early stages of visual cortex processing. They have been used, for instance, in autonomous robotics, classification and clustering. Like many neuronal computing models, they are robust to noise and faults and are thus good candidates for noisy hardware computation models, which would make it possible to keep up with or surpass Moore's law. Indeed, shrinking transistor dimensions lead to more and more noise, and relaxing the requirement of approximately 0% faults during the production and operation of integrated circuits would yield tremendous savings. Furthermore, progress towards many-core circuits with more and more cores creates difficulties due to the centralised computation mode of most parallel algorithms and their communication bottleneck. Cellular computing is a natural answer to these problems. Based on these arguments, the goal of this thesis is to enable rich computations and applications of dynamic neural fields on hardware substrates with neuro-cellular models ensuring true locality, decentralization and scalability of the computations. This work is an attempt to go beyond von Neumann architectures by using cellular and neuronal computing principles. However, we stay within the digital framework by exploring the performance of the proposed architectures on FPGAs. Analog hardware (VLSI) would also be very interesting but is not studied here.
The main contributions of this work are: 1) neuromorphic DNF computation; 2) local DNF computation with randomly spiking dynamic neural fields (the RSDNF model); 3) local and asynchronous DNF computation with cellular arrays of stochastic asynchronous spiking DNFs (the CASAS-DNF model).
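The dynamic neural fields these models target can be sketched with Amari's one-dimensional field equation, Euler-integrated with local excitation and global inhibition: a transient input bump leaves a self-sustained activity bump, the working-memory behaviour the thesis builds on. All parameters below are illustrative assumptions:

```python
import math

def simulate_dnf(n=60, steps=300, dt=1.0, tau=10.0, h=-2.0):
    """1-D Amari dynamic neural field: tau*du/dt = -u + h + (w * f(u)) + s,
    with a difference kernel (Gaussian excitation minus constant global
    inhibition) and a sigmoid output f. A Gaussian input is applied for the
    first half of the run, then removed."""
    c_exc, sigma, c_inh, beta = 1.5, 3.0, 0.5, 4.0
    u = [h] * n
    kernel = [[c_exc * math.exp(-((i - j) ** 2) / (2 * sigma ** 2)) - c_inh
               for j in range(n)] for i in range(n)]
    for t in range(steps):
        f = [1.0 / (1.0 + math.exp(-beta * ui)) for ui in u]
        stim = [4.0 * math.exp(-((i - 30) ** 2) / 8.0) if t < steps // 2 else 0.0
                for i in range(n)]
        u = [ui + dt / tau * (-ui + h + sum(k * fj for k, fj in zip(row, f)) + s)
             for ui, row, s in zip(u, kernel, stim)]
    return u

field = simulate_dnf()   # bump around position 30 persists without input
```

The RSDNF and CASAS-DNF models replace the global kernel sum with purely local, spike-propagated communication while preserving this bump dynamics.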
|
7 |
Learning in silicon: a floating-gate based, biophysically inspired, neuromorphic hardware system with synaptic plasticity. Brink, Stephen Isaac 24 August 2012
The goal of neuromorphic engineering is to create electronic systems that model the behavior of biological neural systems. Neuromorphic systems can leverage a combination of analog and digital circuit design techniques to enable computational modeling with orders-of-magnitude reductions in size, weight, and power consumption compared to the traditional modeling approach based upon numerical integration. These benefits of neuromorphic modeling have the potential to facilitate neural modeling in resource-constrained research environments. Moreover, they will make it practical to use neural computation in the design of intelligent machines, including portable, battery-powered, and energy-harvesting applications. Floating-gate transistor technology is a powerful tool for neuromorphic engineering because it allows dense implementation of synapses with nonvolatile storage of synaptic weights, cancellation of process mismatch, and reconfigurable system design. A novel neuromorphic hardware system, featuring compact and efficient channel-based model neurons and floating-gate transistor synapses, was developed. This system was used to model a variety of network topologies with up to 100 neurons. The networks were shown to possess computational capabilities such as spatio-temporal pattern generation and recognition, winner-take-all competition, bistable activity implementing a "volatile memory", and wavefront-based robotic path planning. Some canonical features of synaptic plasticity, such as potentiation of high-frequency inputs and potentiation of correlated inputs in the presence of uncorrelated noise, were demonstrated. Preliminary results regarding the formation of receptive fields were obtained. Several advances in enabling technologies were also made, including methods for floating-gate transistor array programming and the creation of a reconfigurable system for studying adaptation in floating-gate transistor circuits.
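The winner-take-all competition listed among these capabilities can be illustrated with an abstract discrete iteration in which each unit is suppressed in proportion to its competitors' summed activity; this is a software stand-in for mutual-inhibition dynamics, not the thesis's floating-gate silicon implementation:

```python
def winner_take_all(activities, alpha=0.2, iterations=50):
    """Discrete winner-take-all sketch: each unit loses activity in
    proportion (alpha) to the summed activity of its competitors, clamped
    at zero, until only the strongest unit survives."""
    u = list(activities)
    for _ in range(iterations):
        total = sum(u)
        u = [max(0.0, ui - alpha * (total - ui)) for ui in u]
    return u

final = winner_take_all([0.5, 0.8, 1.0])   # only the last unit stays active
```

Once the competitors reach zero, the winner receives no further inhibition and its activity stabilizes.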
|
8 |
Monte Carlo Optimization of Neuromorphic Cricket Auditory Feature Detection Circuits in the Dynap-SE Processor. Nilsson, Mattias January 2018
Neuromorphic information processing systems mimic the dynamics of neurons and synapses, and the architecture of biological nervous systems. By using a combination of sub-threshold analog circuits, and fast programmable digital circuits, spiking neural networks with co-localized memory and computation can be implemented, enabling more energy-efficient information processing than conventional von Neumann digital computers. When configuring such a spiking neural network, the variability caused by device mismatch of the analog electronic circuits must be managed and exploited. While pre-trained spiking neural networks have been approximated in neuromorphic processors in previous work, configuration methods and tools need to be developed that make efficient use of the high number of inhomogeneous analog neuron and synapse circuits in a systematic manner. The aim of the work presented here is to investigate such automatic configuration methods, focusing in particular on Monte Carlo methods, and to develop software for training and configuration of the Dynap-SE neuromorphic processor, which is based on the Dynamic Neuromorphic Asynchronous Processor (DYNAP) architecture. A Monte Carlo optimization method enabling configuration of spiking neural networks on the Dynap-SE is developed and tested with the Metropolis-Hastings algorithm in the low-temperature limit. The method is based on a hardware-in-the-loop setup where a PC performs online optimization of a Dynap-SE, and the resulting system is tested by reproducing properties of small neural networks in the auditory system of field crickets. It is shown that the system successfully configures two different auditory neural networks, consisting of three and four neurons respectively. 
However, appropriate bias parameter values defining the dynamic properties of the analog neuron and synapse circuits must be manually defined prior to optimization, which is time consuming and should be included in the optimization protocol in future work.
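The hardware-in-the-loop optimization described above can be sketched as a Metropolis-Hastings loop in the low-temperature limit, where worse proposals are almost never accepted. Here a simple software cost function stands in for measurements from the Dynap-SE, and all names and parameters are illustrative assumptions:

```python
import math
import random

def metropolis_optimize(cost, init, propose, temperature=1e-3, steps=4000, seed=1):
    """Metropolis-Hastings search in the low-temperature limit: accept any
    improvement, accept a worsening of delta only with probability
    exp(-delta / temperature). In the hardware-in-the-loop setting, `cost`
    would query the neuromorphic processor; here it is pure software."""
    rng = random.Random(seed)
    x, c = list(init), cost(init)
    best_x, best_c = list(x), c
    for _ in range(steps):
        cand = propose(x, rng)
        cc = cost(cand)
        if cc <= c or rng.random() < math.exp(-(cc - c) / temperature):
            x, c = cand, cc
            if c < best_c:
                best_x, best_c = list(x), c
    return best_x, best_c

# Stand-in cost: squared distance of a 4-parameter vector from a target,
# loosely mimicking a mismatch between measured and desired spike behaviour.
target = [0.2, -0.5, 1.0, 0.7]
def cost(p):
    return sum((a - b) ** 2 for a, b in zip(p, target))

def propose(p, rng):
    q = list(p)
    i = rng.randrange(len(q))      # perturb one randomly chosen parameter
    q[i] += rng.gauss(0.0, 0.1)
    return q

best, err = metropolis_optimize(cost, [0.0] * 4, propose)
```

With a tiny temperature the loop behaves almost greedily, which matches the low-temperature limit tested in the thesis.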
|