11

Scalable event-driven modelling architectures for neuromimetic hardware

Rast, Alexander Douglas January 2011 (has links)
Neural networks present a fundamentally different model of computation from the conventional sequential digital model. Dedicated hardware may thus be more suitable for executing them. Given that there is no clear consensus on the model of computation in the brain, model flexibility is at least as important a characteristic of neural hardware as is performance acceleration. The SpiNNaker chip is an example of the emerging 'neuromimetic' architecture, a universal platform that specialises the hardware for neural networks but allows flexibility in model choice. It integrates four key attributes: native parallelism, event-driven processing, incoherent memory and incremental reconfiguration, in a system combining an array of general-purpose processors with a configurable asynchronous interconnect. Making such a device usable in practice requires an environment for instantiating neural models on the chip that allows the user to focus on model characteristics rather than on hardware details. The central part of this system is a library of predesigned, 'drop-in' event-driven neural components that encapsulate their implementation on SpiNNaker. Three exemplar models, two spiking networks and a multilayer perceptron network, illustrate techniques that provide a basis for the library and demonstrate a reference methodology that can be extended to support third-party library components not only on SpiNNaker but on any configurable neuromimetic platform. Experiments demonstrate the capability of the library model to implement efficient on-chip neural networks, but also reveal important hardware limitations, particularly with respect to communications, that require careful design. The ultimate goal is the creation of a library-based development system that allows neural modellers to work in the high-level environment of their choice, using an automated tool chain to create the appropriate SpiNNaker instantiation. Such a system would enable the use of the hardware to explore abstractions of biological neurodynamics that underpin a functional model of neural computation.
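As a rough illustration of the event-driven style such library components embody (a minimal generic sketch, not SpiNNaker's API or the thesis's component code; all parameter names and values are assumptions), a leaky integrate-and-fire neuron can be updated only when an input spike arrives, decaying its state analytically over the idle interval:

```python
import math

def lif_event_driven(events, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Process (time, weight) spike events for one LIF neuron.

    The membrane potential decays analytically between events, so no work is
    done while the neuron is idle -- the essence of event-driven processing.
    Parameter names and values are illustrative only.
    """
    v, t_last, out_spikes = 0.0, 0.0, []
    for t, w in sorted(events):
        v *= math.exp(-(t - t_last) / tau)   # decay over the idle interval
        v += w                               # apply the incoming spike
        t_last = t
        if v >= v_thresh:                    # emit a spike and reset
            out_spikes.append(t)
            v = v_reset
    return out_spikes

print(lif_event_driven([(1.0, 0.6), (2.0, 0.6), (30.0, 0.6)]))
```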
12

LowPy: Simulation Platform for Machine Learning Algorithm Realization in Neuromorphic RRAM-Based Processors

Ford, Andrew J. 28 June 2021 (has links)
No description available.
13

Optimum Microarchitectures for Neuromorphic Algorithms

Wang, Shu January 2011 (has links)
No description available.
14

FPGA Based Multi-core Architectures for Deep Learning Networks

Chen, Hua January 2015 (has links)
No description available.
15

The Design, Fabrication and Characterization of Silicon Oxide Nitride Oxide Semiconductor Thin Film Gates for Use in Modeling Spiking Analog Neural Circuits

Wood, Richard P. 04 1900 (has links)
This thesis details the design, fabrication and characterization of organic semiconductor field-effect transistors with silicon-oxide-nitride-oxide-semiconductor (SONOS) gates for use in spiking analog neural circuits. The results are divided into two main sections. First, the SONOS structures (parallel plate capacitors and field-effect transistors) were designed, fabricated and characterized. Second, these results are used to model spiking analog neural circuits. The modeling is achieved using PSPICE-based software.

The initial design work begins with an analysis of the basic SONOS structure. The existence of the ultrathin layers of the SONOS structure is confirmed with Transmission Electron Microscopy (TEM) and Energy Dispersive Spectroscopy (EDS) scans of device stacks. Parallel plate capacitors were fabricated prior to complete transistors because they require significantly less processing. The structure and behaviour of these capacitors are similar to those of the transistor gates, which allows the structures to be optimized prior to the fabrication of the transistors. These capacitors were fabricated using the semiconductor materials crystalline silicon, amorphous silicon, zinc oxide, copper phthalocyanine (CuPc) and tris(8-hydroxyquinolinato)aluminium (AlQ3). These devices are then subjected to standard capacitance-voltage (C-V) analysis. The results of this analysis demonstrate that the inclusion of SONOS structures in the capacitors (and transistors) produces a hysteresis caused by charge accumulation in the nitride layer of the SONOS structure. This effect can be utilized as an embedded memory. Standard control devices were fabricated and analysed, and no significant hysteresis effect was observed. The hysteresis effect is only observed after the SONOS devices are subjected to high voltages (approximately 14 volts), which allows tunneling through a thin oxide layer into traps in the silicon nitride layer. This analysis was conducted to confirm that the SONOS structure causes the memory effect, rather than the existence of interface states that can be charged and discharged.

The next step was to design and fabricate amorphous semiconductor field-effect transistors with and without the SONOS structure. First, FETs without SONOS gates were fabricated using the amorphous semiconductor materials zinc oxide, CuPc and AlQ3, and then the devices were characterized. This initial step confirmed the functionality of these basic devices and the ability to fabricate working control samples. Next, SONOS-gate TFTs were fabricated using CuPc as the semiconductor material. The characterization of these devices confirmed the ability to shift the transfer characteristics of the devices through a read and write mechanism similar to that used to shift the C-V characteristics of the parallel plate capacitors. Split-gate FETs were also produced to examine the feasibility of individual transistors with multiple gates.

The results of these characterizations were used to model spiking analog neural circuits. This modeling was carried out in four parts. First, representative transfer and output characteristics were used to replicate analog spiking neural circuits. This was carried out using standard PSPICE software with the modification of the discrete TFT device characteristics to represent the amorphous CuPc organic transistors. The results were found to be comparable to circuits using crystalline silicon transistors. Second, the SONOS structures were modeled, closely matching the characterized results for charge and voltage shift. Third, a simple Hebbian learning circuit was designed and modeled, demonstrating the potential for embedded memories. Lastly, split-gate devices were modeled using the device characterizations. / Doctor of Philosophy (PhD)
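As a hedged sketch of the embedded-memory idea described above (purely illustrative; the parameter values and the programming window are assumptions, not the characterised device data), a Hebbian update can be expressed as a shift in a SONOS gate threshold voltage:

```python
def hebbian_update(v_th_shift, pre, post, eta=0.05,
                   shift_min=-2.0, shift_max=2.0):
    """Hebbian learning with the weight stored as a threshold-voltage shift.

    `pre` and `post` are pre- and post-synaptic activities (e.g. firing
    rates); the shift is clipped to an assumed programming window of the
    SONOS gate. All numbers are illustrative, not measured device values.
    """
    v_th_shift += eta * pre * post          # Hebbian: strengthen on co-activity
    return max(shift_min, min(shift_max, v_th_shift))

shift = 0.0
for pre, post in [(1.0, 1.0), (1.0, 0.0), (1.0, 1.0)]:
    shift = hebbian_update(shift, pre, post)
print(shift)   # 0.1 after the two coincident events
```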
16

Deep spiking neural networks

Liu, Qian January 2018 (has links)
Neuromorphic Engineering (NE) has led to the development of biologically-inspired computer architectures whose long-term goal is to approach the performance of the human brain in terms of energy efficiency and cognitive capabilities. Although there are a number of neuromorphic platforms available for large-scale Spiking Neural Network (SNN) simulations, the problem of programming these brain-like machines to be competent in cognitive applications still remains unsolved. On the other hand, Deep Learning has emerged in Artificial Neural Network (ANN) research to dominate state-of-the-art solutions for cognitive tasks. Thus the main research problem emerges of understanding how to operate and train biologically-plausible SNNs to close the gap in cognitive capabilities between SNNs and ANNs. SNNs can be trained by first training an equivalent ANN and then transferring the tuned weights to the SNN. This method is called 'off-line' training, since it does not take place on the SNN directly, but rather on an ANN instead. However, previous off-line training methods have struggled with poor modelling accuracy of the spiking neurons and high computational complexity. In this thesis we propose a simple and novel activation function, Noisy Softplus (NSP), to closely model the response firing activity of biologically-plausible spiking neurons, and introduce a generalised off-line training method using the Parametric Activation Function (PAF) to map the abstract numerical values of the ANN to concrete physical units, such as current and firing rate, in the SNN. Based on this generalised training method and its fine-tuning, we achieve state-of-the-art accuracy on the MNIST classification task using spiking neurons, 99.07%, on a deep spiking convolutional neural network (ConvNet). We then take a step forward to 'on-line' training methods, where Deep Learning modules are trained purely on SNNs in an event-driven manner. Existing work has failed to provide SNNs with recognition accuracy equivalent to ANNs due to the lack of mathematical analysis. Thus we propose a formalised Spike-based Rate Multiplication (SRM) method which transforms the product of firing rates into the number of coincident spikes of a pair of rate-coded spike trains. Moreover, these coincident spikes can be captured by the Spike-Timing-Dependent Plasticity (STDP) rule to update the weights between the neurons in an on-line, event-based, and biologically-plausible manner. Furthermore, we put forward solutions to reduce correlations between spike trains, thereby addressing the performance drop they cause in on-line SNN training. The promising results of spiking Autoencoders (AEs) and spiking Restricted Boltzmann Machines (SRBMs) exhibit equivalent, sometimes even superior, classification and reconstruction capabilities compared with their non-spiking counterparts. To provide meaningful comparisons between these proposed SNN models and other existing methods within this rapidly advancing field of NE, we propose a large dataset of spike-based visual stimuli and a corresponding evaluation methodology to estimate the overall performance of SNN models and their hardware implementations.
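A schematic sketch of the idea behind Noisy Softplus and the parametric mapping to physical units is given below; the functional form, the parameters sigma and k, and the rate scale are illustrative assumptions, not the thesis's exact definitions:

```python
import numpy as np

def noisy_softplus_like(x, sigma=1.0, k=0.2):
    """Softplus-style response whose smoothness grows with input noise sigma.

    A schematic stand-in for Noisy Softplus, not its exact definition:
    k and sigma jointly set the curvature of the response.
    """
    s = k * sigma
    return s * np.log1p(np.exp(x / s))

def to_firing_rate(a, rate_scale=100.0):
    """Map the abstract activation to a firing rate in Hz.

    The scale factor is an assumed calibration constant, standing in for the
    Parametric Activation Function's unit conversion.
    """
    return rate_scale * a

x = np.linspace(-1.0, 1.0, 5)
print(to_firing_rate(noisy_softplus_like(x, sigma=0.5)))
```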
17

A neuromorphic approach for edge user allocation

Petersson Steenari, Kim January 2022 (has links)
This paper introduces a new way of solving the edge user allocation problem using a network of spiking neurons. This network should quickly, and at low energy cost, solve the optimization problem of allocating users to servers while minimizing the number of servers hired, and thus the associated hiring cost. The demonstrated method is a simulation of an approach that could be implemented on neuromorphic hardware. It is written in Python using the Brian2 spiking neural network simulator. The core of the method involves simulating an energy function through the use of circuit motifs. The dynamics of these circuit motifs mimic a search for the lowest energy point in an energy landscape, corresponding to a valid solution to the edge user allocation problem. The paper also shows the results of testing this network within the Brian2 environment.
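A hedged illustration of this kind of Brian2 setup is sketched below: a generic mutual-inhibition motif whose spiking dynamics settle on the most strongly driven option. It is not the paper's actual circuit motifs or energy function, and all parameters are assumptions:

```python
from brian2 import NeuronGroup, Synapses, SpikeMonitor, run, ms

tau = 10 * ms                      # assumed membrane time constant

# Three candidate "allocation" neurons driven by constant input currents.
eqs = '''
dv/dt = (I - v) / tau : 1
I : 1 (constant)
'''
G = NeuronGroup(3, eqs, threshold='v > 1', reset='v = 0', method='exact')
G.I = [1.4, 1.2, 1.1]              # illustrative drive, e.g. allocation benefit

# Mutual inhibition: whichever neuron spikes first suppresses the others,
# nudging the network toward a single, lower-"energy" configuration.
S = Synapses(G, G, on_pre='v_post -= 0.3')
S.connect(condition='i != j')

mon = SpikeMonitor(G)
run(100 * ms)
print(mon.count[:])                # the most strongly driven neuron dominates
```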
18

Place and Route Algorithms for a Neuromorphic Communication Network Simulator

Pettersson, Fredrik January 2021 (has links)
In recent years, neural networks have seen increased interest from both the cognitive computing and computational neuroscience fields. Neuromorphic computing systems simulate neural networks efficiently, but have not yet reached the number of neurons that a mammal has. Increasing this quantity is an aspiration, but more neurons will also increase the traffic load of the system. The placement of the neurons onto the neuromorphic computing system has a significant effect on the network load. This thesis introduces algorithms for placing a large number of neurons in an efficient and agile way. First, placement algorithms for very-large-scale integration design are analysed, showing that the computational complexity of these algorithms is high. By exploiting the predefined underlying structure of the neural network, faster algorithms can be used. The results show that the population placement algorithm achieves high computing speed while providing excellent results.
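A hedged sketch of the general idea of population-level placement follows (an illustrative greedy heuristic on a 2-D core mesh, not the thesis's algorithm; the mesh size and cost metric are assumptions):

```python
from itertools import product

def place_populations(populations, edges, mesh=(4, 4)):
    """Greedy population placement on a 2-D core mesh.

    `populations` is an ordered list of population names, `edges` a list of
    (src, dst) connections. Each population goes to the free core nearest the
    centroid of its already-placed neighbours, keeping heavy traffic local.
    This is an illustrative heuristic, not the thesis's algorithm.
    """
    free = set(product(range(mesh[0]), range(mesh[1])))
    placed = {}
    for pop in populations:
        neighbours = [placed[a if b == pop else b]
                      for a, b in edges
                      if pop in (a, b) and (a if b == pop else b) in placed]
        if neighbours:
            cx = sum(x for x, _ in neighbours) / len(neighbours)
            cy = sum(y for _, y in neighbours) / len(neighbours)
        else:
            cx, cy = mesh[0] / 2, mesh[1] / 2      # start near the centre
        core = min(free, key=lambda c: abs(c[0] - cx) + abs(c[1] - cy))
        placed[pop] = core
        free.remove(core)
    return placed

print(place_populations(['input', 'hidden', 'output'],
                        [('input', 'hidden'), ('hidden', 'output')]))
```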
19

Exploring Optical Devices for Neuromorphic Applications

Rhim, Seon-Young 30 April 2024 (has links)
In recent years, electronic-based artificial neural networks (ANNs) have been dominant in computer engineering. However, as tasks grow more complex, conventional electronic architectures reach their limits. Optical approaches therefore offer solutions through analog computation, using materials that control optical signals to provide synaptic plasticity. This study explores photo- and electrochromic materials for synaptic functions in ANNs. The switching behavior of the molecule diarylethene (DAE) acting on Surface Plasmon Polaritons (SPPs) is studied in the Kretschmann configuration. Optical pulse sequences enable synaptic plasticity such as long-term potentiation and depression. DAE modulation and information transfer at distinct wavelengths allow simultaneous read and write processes, demonstrating non-volatile information storage in plasmonic waveguides. Integrating DAE into a Y-branch waveguide forms a fully optical 2x1 neural network. Synaptic functions, reflected in the DAE switching, can thus be applied to waveguide transmission. Network training for logic gates is achieved using a gradient descent method to adapt AND or OR gate functions based on the learning set. Electrochromic materials in waveguides enable optoelectronic modulation. Combining the gel-like polymer electrolyte PS-PMMA-PS:[EMIM][TFSI] with PEDOT:PSS allows electrically controlled absorption modulation, demonstrating binary complementary control of transmission and optical multiplexing in Y-branch waveguides. The solid polymer electrolyte PEG:NaOtf enables optical signal modulation for neuromorphic computing: linear classification can be trained in a Y-branch waveguide by an analog-controlled gradient descent, without the need for additional storage or processing units.
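As a hedged illustration of the training step described (plain numerical gradient descent on a 2-input, 1-output node for AND/OR gates; the sigmoid readout, learning rate, and epoch count are assumptions, and no waveguide physics is modelled):

```python
import math

def train_gate(targets, lr=0.5, epochs=2000):
    """Train a 2x1 node by gradient descent to realise a logic gate.

    The two weights play the role of the adjustable branch transmissions;
    the sigmoid readout and parameters are illustrative assumptions only.
    """
    w1, w2, b = 0.1, 0.1, 0.0
    data = [(0, 0), (0, 1), (1, 0), (1, 1)]
    for _ in range(epochs):
        for (x1, x2), t in zip(data, targets):
            y = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))
            grad = (y - t) * y * (1.0 - y)       # d(MSE)/d(pre-activation)
            w1 -= lr * grad * x1
            w2 -= lr * grad * x2
            b -= lr * grad
    return w1, w2, b

print(train_gate([0, 0, 0, 1]))   # AND truth table
print(train_gate([0, 1, 1, 1]))   # OR truth table
```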
20

Can my chip behave like my brain?

George, Suma 27 May 2016 (has links)
Many decades ago, Carver Mead established the foundations of neuromorphic systems. Neuromorphic systems are analog circuits that emulate biology. These circuits utilize the subthreshold dynamics of CMOS transistors to mimic the behavior of neurons. The objective is not only to simulate the human brain, but also to build useful applications using these bio-inspired circuits for ultra-low-power speech processing, image processing, and robotics. This can be achieved using reconfigurable hardware, such as field-programmable analog arrays (FPAAs), which allow different applications to be configured on a cross-platform system. As digital systems saturate in terms of power efficiency, this alternative approach has the potential to improve computational efficiency by approximately eight orders of magnitude. These systems, which combine analog, digital, and neuromorphic elements, result in a very powerful reconfigurable processing machine.
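A hedged numerical sketch of the principle follows: the exponential subthreshold current expression is the standard textbook form, but the parameter values and the simple integrate-and-fire wrapper are illustrative, not a description of any FPAA circuit:

```python
import math

def subthreshold_current(v_gs, i0=1e-15, kappa=0.7, u_t=0.0258):
    """Approximate subthreshold MOS drain current: I = I0 * exp(kappa*Vgs/Ut)."""
    return i0 * math.exp(kappa * v_gs / u_t)

def integrate_and_fire(v_gs=0.45, c_mem=1e-12, v_thresh=0.5, dt=1e-5, steps=2000):
    """Membrane capacitor charged by a subthreshold current; spike on threshold."""
    v_mem, spikes = 0.0, 0
    for _ in range(steps):
        v_mem += subthreshold_current(v_gs) * dt / c_mem   # dV = I*dt/C
        if v_mem >= v_thresh:       # fire and reset
            spikes += 1
            v_mem = 0.0
    return spikes

print(integrate_and_fire())
```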
