  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Powering Next-Generation Artificial Intelligence by Designing Three-dimensional High-Performance Neuromorphic Computing System with Memristors

An, Hongyu 17 September 2020 (has links)
Human brains can complete numerous intelligent tasks, such as pattern recognition, reasoning, control, and movement, with remarkable energy efficiency (20 W). In contrast, a typical computer can recognize only 1,000 different objects yet consumes about 250 W of power [1]. These significant performance differences stem from the intrinsically different structures of human brains and digital computers. The latest discoveries in neuroscience indicate that the capabilities of human brains are attributed to three unique features: (1) neural network structure; (2) spike-based signal representation; and (3) synaptic plasticity and associative memory learning [1, 2]. In this dissertation, the next-generation platform of artificial intelligence is explored by utilizing memristors to design a three-dimensional high-performance neuromorphic computing system. The low-variation memristors (fabricated by Virginia Tech) improve the learning accuracy of the system significantly through the addition of heat dissipation layers. Moreover, three emerging neuromorphic architectures are proposed, showing a path to realizing the next-generation platform of artificial intelligence with self-learning capability and high energy efficiency. Finally, an Associative Memory Learning System is exhibited that reproduces associative memory learning by remembering and correlating two concurrent events (the pronunciation and shape of digits). / Doctor of Philosophy
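The associative memory learning described above links two concurrent events through correlation-based weight updates. A minimal software sketch of the underlying Hebbian idea (random bipolar patterns stand in for the pronunciation and shape of a digit; this is an illustration, not the dissertation's memristor hardware):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two concurrent "events" as bipolar (+1/-1) patterns, standing in for
# the pronunciation and the shape of a digit.
pronunciation = rng.choice([-1.0, 1.0], size=16)
shape = rng.choice([-1.0, 1.0], size=16)

# Hebbian learning: co-activation strengthens the cross-coupling weights,
# analogous to conductance updates in a memristor crossbar.
W = np.outer(shape, pronunciation)

# Recall: presenting the pronunciation alone retrieves the shape.
recalled = np.sign(W @ pronunciation)
assert np.array_equal(recalled, shape)
```

Presenting either event alone then retrieves its partner through the learned weights, mirroring the remember-and-correlate behavior the abstract describes.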
2

Neuromorphic Computing for Autonomous Racing

Patton, Robert, Schuman, Catherine, Kulkarni, Shruti, Parsa, Maryam, Mitchell, J. P., Haas, N. Q., Stahl, Christopher, Paulissen, Spencer, Date, Prasanna, Potok, Thomas, Sneider, Shay 27 July 2021 (has links)
Neuromorphic computing has many opportunities in future autonomous systems, especially those that will operate at the edge. However, there are relatively few demonstrations of neuromorphic implementations on real-world applications, partly because of the limited availability of neuromorphic hardware and software, but also because of the lack of an accessible demonstration platform. In this work, we propose utilizing the F1Tenth platform as an evaluation task for neuromorphic computing. F1Tenth is a competition wherein one-tenth-scale cars compete in an autonomous racing task; there are significant open-source resources, in both software and hardware, for realizing this task. We present a workflow with neuromorphic hardware, software, and training that can be used to develop a spiking neural network for neuromorphic hardware deployment to perform autonomous racing. We present initial results on utilizing this approach for this small-scale, real-world autonomous vehicle task.
3

LowPy: Simulation Platform for Machine Learning Algorithm Realization in Neuromorphic RRAM-Based Processors

Ford, Andrew J. 28 June 2021 (has links)
No description available.
4

Exploration of Energy Efficient Hardware and Algorithms for Deep Learning

Syed Sarwar (6634835) 14 May 2019 (has links)
Deep Neural Networks (DNNs) have emerged as the state-of-the-art technique for a wide range of machine learning tasks in analytics and computer vision for the next generation of embedded (mobile, IoT, wearable) devices. Despite their success, they suffer from high energy requirements in both inference and training. In recent years, the inherent error resiliency of DNNs has been exploited by introducing approximations at either the algorithmic or the hardware level (individually) to obtain energy savings while incurring tolerable accuracy degradation. We perform a comprehensive analysis to determine the effectiveness of cross-layer approximations for the energy-efficient realization of large-scale DNNs. Our experiments on recognition benchmarks show that cross-layer approximation provides substantial improvements in energy efficiency for different accuracy/quality requirements. Furthermore, we propose a synergistic framework for combining the approximation techniques.
To reduce the training complexity of Deep Convolutional Neural Networks (DCNNs), we replace certain weight kernels of the convolutional layers with Gabor filters. The convolutional layers use the Gabor filters, which extract intrinsic features, as fixed weight kernels alongside regular trainable weight kernels. This combination creates a balanced system that gives better training performance in terms of energy and time than the standalone deep CNN (without any Gabor kernels), in exchange for tolerable accuracy degradation. We also explore an efficient training methodology for incrementally growing a DCNN that allows new classes to be learned while sharing part of the base network. Our approach is an end-to-end learning framework, in which we focus on reducing the incremental training complexity while achieving accuracy close to the upper bound without using any of the old training samples. We have also explored spiking neural networks for energy efficiency. Training deep spiking neural networks from direct spike inputs is difficult, since their temporal dynamics are not well suited to the standard supervision-based training algorithms used to train DNNs. We propose a spike-based backpropagation training methodology for state-of-the-art deep Spiking Neural Network (SNN) architectures. This methodology enables real-time training in deep SNNs while achieving comparable inference accuracies on standard image recognition tasks.
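Gabor filters used as fixed, non-trainable kernels can be generated directly. The following sketch builds a small orientation bank (the kernel size and parameter values are illustrative assumptions, not the dissertation's configuration):

```python
import numpy as np

def gabor_kernel(size=5, theta=0.0, sigma=2.0, lam=4.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel usable as a fixed (non-trainable) conv filter."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)  # sinusoid along xr
    return envelope * carrier

# A bank of four orientations; in the hybrid layer described above, such
# kernels would sit alongside regular trainable weight kernels.
bank = [gabor_kernel(theta=t) for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
assert all(k.shape == (5, 5) for k in bank)
```

Because these kernels are fixed, they contribute no gradient computation or weight updates during training, which is the source of the energy and time savings the abstract describes.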
5

Understanding Security Threats of Emerging Computing Architectures and Mitigating Performance Bottlenecks of On-Chip Interconnects in Manycore NTC System

Rajamanikkam, Chidhambaranathan 01 May 2019 (has links)
Emerging computing architectures, such as neuromorphic computing and third-party intellectual property (3PIP) cores, have attracted significant attention in the recent past. Neuromorphic computing introduces an unorthodox non-von Neumann architecture that mimics the abstract behavior of neuron activity in the human brain. Such architectures can execute complex applications, such as image processing and object recognition, more efficiently in terms of performance and energy than traditional microprocessors. However, the hardware security aspects of neuromorphic computing have received little attention at this nascent stage. 3PIP cores, on the other hand, may contain covertly inserted malicious functionality that can inflict a range of harms at the system/application level. This dissertation examines the impact of various threat models that emerge from neuromorphic architectures and 3PIP cores. Near-Threshold Computing (NTC) serves as an energy-efficient paradigm by aggressively operating all computing resources at a supply voltage close to the threshold voltage, at the cost of performance. Therefore, the NTC system is scaled to a many-core NTC system to reclaim the lost performance. However, interconnect performance in the many-core NTC system poses a significant bottleneck that hinders overall system performance. This dissertation analyzes the interconnect performance and, further, proposes a novel technique to boost the interconnect performance of many-core NTC systems.
6

Multilevel Resistance Programming in Conductive Bridge Resistive Memory

January 2015 (has links)
abstract: This work focuses on the existence of multiple resistance states in a type of emerging non-volatile resistive memory device known commonly as the Programmable Metallization Cell (PMC) or Conductive Bridge Random Access Memory (CBRAM), which can be important for applications such as multi-bit memory as well as non-volatile logic and neuromorphic computing. First, experimental data from small-signal, quasi-static, and pulsed-mode electrical characterization of such devices are presented, which clearly demonstrate the inherent multi-level resistance programmability of CBRAM devices. A physics-based analytical CBRAM compact model is then presented, which simulates the ion-transport dynamics and filamentary growth mechanism that cause resistance change in such devices. Simulation results from the model are fitted to experimental dynamic resistance-switching characteristics. The model, designed in the Verilog-A language, is computationally efficient and can be integrated with industry-standard circuit simulation tools for the design and analysis of hybrid circuits involving both CMOS and CBRAM devices. Three main circuit applications for CBRAM devices are explored in this work. First, the susceptibility of CBRAM memory arrays to single-event-induced upsets is analyzed via compact-model simulation and experimental heavy-ion testing data, which show the possibility of both high-to-low-resistance and low-to-high-resistance transitions due to ion strikes. Next, a non-volatile sense-amplifier-based flip-flop architecture is proposed, which can make leakage power consumption negligible by allowing complete shutdown of the power supply while retaining output data in CBRAM devices. The reliability and energy consumption of the flip-flop circuit for different CBRAM low-resistance levels and supply-voltage values are analyzed and compared to CMOS designs.
A possible extension of this architecture to threshold-logic function computation, using the CBRAM devices as reconfigurable resistive weights, is also discussed. Lastly, spike-timing-dependent plasticity (STDP) based gradual resistance-change behavior in a CBRAM device, fabricated in the back end of line on a CMOS die containing integrate-and-fire CMOS neuron circuits, is demonstrated for the first time, indicating the feasibility of using CBRAM devices as electronic synapses in spiking neural network hardware implementations for non-Boolean neuromorphic computing. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2015
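The filament-growth mechanism behind multi-level programming can be caricatured in a few lines. The following is a toy Python sketch under assumed parameters, not the dissertation's calibrated Verilog-A compact model: a normalized filament state grows at an exponentially voltage-dependent rate, and identical pulses step the cell through distinct resistance levels.

```python
import numpy as np

# Toy filament-growth state model (illustrative only): the state w in [0, 1]
# is the normalized filament length; a voltage pulse grows it at a rate with
# an ion-migration-like sinh() nonlinearity, and resistance interpolates
# log-linearly between Roff and Ron. All constants are assumptions.
Ron, Roff = 1e3, 1e6          # assumed bounding resistances (ohms)
k_growth, v0 = 5e3, 0.25      # assumed rate constant and voltage scale (V)

def apply_pulse(w, v, dt=1e-6):
    dw = k_growth * np.sinh(v / v0) * dt
    return float(np.clip(w + dw, 0.0, 1.0))

def resistance(w):
    return Roff * (Ron / Roff) ** w

# Identical pulses step the cell through multiple resistance levels.
w, levels = 0.0, []
for _ in range(4):
    w = apply_pulse(w, 0.6)
    levels.append(resistance(w))
assert all(a > b for a, b in zip(levels, levels[1:]))  # R decreases per pulse
```

The monotone sequence of levels is the software analogue of the multi-level resistance programmability that the characterization data demonstrate.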
7

Spike Processing Circuit Design for Neuromorphic Computing

Zhao, Chenyuan 13 September 2019 (has links)
The von Neumann bottleneck, which refers to the limited throughput between the CPU and memory, has become the major factor hindering technical advances in computing systems. In recent years, neuromorphic systems have started to gain increasing attention as compact and energy-efficient computing platforms. Spike-based neuromorphic computing systems require high-performance, low-power neural encoders and decoders to emulate the spiking behavior of neurons. These two spike/analog signal-converting interfaces determine the performance of the whole spiking neuromorphic computing system, and in particular its peak performance. Many state-of-the-art neuromorphic systems typically operate in the frequency range between 10^0 kHz and 10^2 kHz due to the limitation of encoding/decoding speed. In this dissertation, the popular encoding and decoding schemes, i.e., rate encoding, latency encoding, and inter-spike-interval (ISI) encoding, together with related hardware implementations, are discussed and analyzed. The contributions of this dissertation fall into three main parts: neuron improvement, three kinds of ISI encoder design, and two types of ISI decoder design. A two-path leakage LIF neuron has been fabricated, and a modular design methodology is introduced. Three ISI encoding schemes are discussed: parallel signal encoding, full signal iteration encoding, and partial signal encoding. The first two ISI encoders have been fabricated successfully, and the last will be taped out by the end of 2019. The two types of ISI decoders adopt different techniques: a sample-and-hold-based mixed-signal design and a spike-timing-dependent plasticity (STDP) based analog design, respectively. Both decoders have been evaluated successfully through post-layout simulations. The STDP-based ISI decoder will be taped out by the end of 2019.
A test bench based on correlation inspection has been built to evaluate the information-recovery capability of the proposed spike processing link. / Doctor of Philosophy / Neuromorphic computing refers to electronic systems that mimic the behavior of biological nervous systems. In most cases, a neuromorphic computing system is built with analog circuits, which offer benefits in power efficiency and low thermal radiation. One of the most important components of a neuromorphic computing system is the signal processing interface, i.e., the encoder/decoder. To increase the whole system's performance, novel encoders and decoders are proposed in this dissertation: three kinds of temporal encoders, one rate encoder, one latency encoder, one temporal decoder, and one general spike decoder. These designs can be combined to build a highly efficient spike-based data link that guarantees the processing performance of the whole neuromorphic computing system.
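ISI encoding, as described above, maps a signal amplitude to the gap between two spikes. A minimal software sketch of the principle (the interval bounds are assumptions, not the fabricated circuit's parameters):

```python
# Inter-spike-interval (ISI) encoding sketch: an analog sample in [0, 1] is
# mapped to the gap between two spikes (larger values give shorter gaps),
# and the decoder inverts the mapping.
T_MIN, T_MAX = 1e-3, 10e-3   # assumed interval range, seconds

def isi_encode(x):
    """Map amplitude x in [0, 1] to an inter-spike interval."""
    return T_MAX - x * (T_MAX - T_MIN)

def isi_decode(interval):
    """Recover the amplitude from the measured interval."""
    return (T_MAX - interval) / (T_MAX - T_MIN)

samples = [0.0, 0.25, 0.8, 1.0]
recovered = [isi_decode(isi_encode(s)) for s in samples]
assert all(abs(a - b) < 1e-9 for a, b in zip(samples, recovered))
```

A correlation check between the input samples and the decoded values is essentially what the correlation-inspection test bench evaluates for the hardware link.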
8

Evaluating Online Learning Anomaly Detection on Intel Neuromorphic Chip and Memristor Characterization Tool

Jaoudi, Yassine 09 August 2021 (has links)
No description available.
9

Jonctions tunnel magnétiques stochastiques pour le calcul bioinspiré / Stochastic magnetic tunnel junctions for bioinspired computing

Mizrahi, Alice 11 January 2017 (has links)
Magnetic tunnel junctions are promising candidates for computing applications, but when they are reduced to nanoscale dimensions, maintaining their stability becomes an issue. Unstable magnetic tunnel junctions undergo random switches of the magnetization between their two stable states and thus behave as stochastic oscillators. However, the stochastic nature of these superparamagnetic tunnel junctions is not a liability but an asset that can be used for the implementation of bio-inspired computing schemes. Indeed, our brain has evolved to function in a noisy environment and with unstable components. In this thesis, we show several possible applications of superparamagnetic tunnel junctions.
We demonstrate how a superparamagnetic tunnel junction can be frequency- and phase-locked to a weak oscillating voltage. Counterintuitively, our experiment shows that this is achieved by injecting noise into the system. We develop a theoretical model to understand this phenomenon and predict that it allows a hundred-fold energy gain over the synchronization of traditional dc-driven spin-torque oscillators. Furthermore, we leverage our model to study the synchronization of several coupled junctions. Many theoretical schemes have been proposed that use the synchronization of oscillators to perform cognitive tasks such as pattern recognition and classification; using the noise-induced synchronization of superparamagnetic tunnel junctions would allow implementing these tasks at low energy.
We draw an analogy between superparamagnetic tunnel junctions and sensory neurons, which fire voltage pulses at random time intervals. Pushing this analogy, we demonstrate that populations of junctions can represent probability distributions and perform Bayesian inference. Furthermore, we demonstrate that interconnected populations can perform computing tasks such as learning, coordinate transformations, and sensory fusion. Such a system is realistically implementable and could allow for intelligent sensory processing at low energy cost.
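The noise-induced locking described above can be caricatured with a two-state telegraph model: thermally activated switching with Néel-Arrhenius rates, where a weak periodic drive modulates the energy barrier. The following Monte Carlo sketch uses illustrative parameters, not the thesis's measured device values:

```python
import numpy as np

# Toy Monte Carlo of a superparamagnetic junction as a two-state telegraph
# process. The barrier for leaving the current state is raised when state
# and drive agree, lowered when they disagree, so thermal noise drives the
# state to follow the weak drive. All parameters are assumptions.
rng = np.random.default_rng(1)
f0, dt = 1e3, 1e-3         # attempt frequency (Hz), time step (s)
barrier, eps = 2.0, 1.5    # barrier and drive amplitude, in units of kT
f_drive = 5.0              # drive frequency (Hz)

t = np.arange(0, 2.0, dt)
drive = np.sin(2 * np.pi * f_drive * t)
state, trace = -1.0, []
for d in drive:
    # Neel-Arrhenius escape probability for the current state in this step.
    p_switch = f0 * dt * np.exp(-(barrier + state * eps * d))
    if rng.random() < p_switch:
        state = -state
    trace.append(state)

locking = float(np.mean(np.array(trace) * drive))
assert locking > 0.1   # the state follows the weak drive on average
```

The positive state-drive correlation appears only because of the random switching: with the thermal noise removed, the sub-barrier drive alone could never flip the magnetization, which is the counterintuitive point of the experiment.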
10

Organic electrochemical networks for biocompatible and implantable machine learning: Organic bioelectronic beyond sensing

Cucchi, Matteo 31 January 2022 (has links)
How can the brain be such a good computer? Part of the answer lies in the astonishing number of neurons and synapses that process electrical impulses in parallel. Part of it must be found in the ability of the nervous system to evolve in response to external stimuli and to grow, sharpen, and depress synaptic connections. However, we are far from understanding even the basic mechanisms that allow us to think, be aware, recognize patterns, and imagine. The brain can do all this while consuming only around 20 watts, out-competing any human-made processor in terms of energy efficiency. This question is of particular interest in a historical era and technological stage where phrases like machine learning and artificial intelligence are more and more widespread, thanks to recent advances in the field of computer science. However, brain-inspired computation today still relies on algorithms that run on traditional silicon-made digital processors. The making of brain-like hardware, where the substrate itself can be used for computation and can dynamically update its electrical pathways, remains challenging. In this work, I employed organic semiconductors that work in electrolytic solutions, called organic mixed ionic-electronic conductors (OMIECs), to build hardware capable of computation. Moreover, by exploiting an electropolymerization technique, I could form conducting connections in response to electrical spikes, in analogy to how synapses evolve when a neuron fires. After demonstrating artificial synapses as a potential building block for neuromorphic chips, I shifted my attention to the implementation of such synapses in fully operational networks. In doing so, I borrowed the mathematical framework of a machine learning approach known as reservoir computing, which allows computation with random (neural) networks.
I capitalized on this work by demonstrating the possibility of using such networks in vivo for the recognition and classification of dangerous and healthy heartbeats. This is the first demonstration of machine learning carried out in a biological environment with a biocompatible substrate. The implications of this technology are straightforward: constant monitoring of biological signals and fluids, accompanied by active recognition of malignant patterns, may lead to a timely, targeted, and early diagnosis of potentially fatal conditions. Finally, in attempting to simulate the random neural networks, I faced difficulties in modeling the devices with the state-of-the-art approach. I therefore explored a new way to describe OMIECs and OMIEC-based devices, starting from thermodynamic axioms. The results of this model shed light on the mechanism behind the operation of organic electrochemical transistors, revealing the importance of the entropy of mixing and suggesting new pathways for device optimization for targeted applications.
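The reservoir-computing framework mentioned above trains only a linear readout on top of a fixed random network. A minimal echo-state-network sketch in software (sizes, scalings, and the one-step memory task are illustrative assumptions, unrelated to the OMIEC hardware):

```python
import numpy as np

# Minimal echo state network: a fixed random recurrent network is driven by
# the input; only the linear readout is trained, here by ridge regression
# on a one-step memory task (recall the previous input sample).
rng = np.random.default_rng(42)
n_res, T = 100, 500
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1
W_in = rng.uniform(-0.5, 0.5, n_res)

u = rng.uniform(-1, 1, T)                   # random input signal
target = np.concatenate(([0.0], u[:-1]))    # task: previous input

x = np.zeros(n_res)
states = []
for ut in u:
    x = np.tanh(W @ x + W_in * ut)          # reservoir update
    states.append(x)
X = np.array(states)

# Ridge-regression readout, trained after a 50-step washout.
lam = 1e-6
A = X[50:]
w_out = np.linalg.solve(A.T @ A + lam * np.eye(n_res), A.T @ target[50:])
mse = float(np.mean((A @ w_out - target[50:]) ** 2))
assert mse < 0.1   # readout recovers the delayed input from the reservoir
```

Because the recurrent weights are never updated, the random network itself can be any physical substrate with rich dynamics, which is what makes the framework a natural fit for randomly grown electropolymerized networks.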
