91

Optimizing Reservoir Computing Architecture for Dynamic Spectrum Sensing Applications

Sharma, Gauri 25 April 2024 (has links)
Spectrum sensing in wireless communications serves as a crucial binary classification tool in cognitive radios, facilitating the detection of available radio spectrum for secondary users, especially in scenarios with high Signal-to-Noise Ratio (SNR). Liquid State Machines (LSMs), which emulate spiking neural networks like those in the human brain, prove highly effective for real-time data monitoring in such temporal tasks. The inherent advantages of LSM-based recurrent neural networks, such as low complexity, high power efficiency, and accuracy, surpass those of traditional deep learning and conventional spectrum sensing methods. The architecture of the liquid state machine processor and its training methods are crucial to the performance of an LSM accelerator. This thesis presents one such LSM-based accelerator that explores novel architectural improvements for LSM hardware. Through the adoption of triplet-based Spike-Timing-Dependent Plasticity (STDP) and various spike encoding schemes on the spectrum dataset within the LSM, we investigate the advantages offered by these proposed techniques compared to traditional LSM models on the FPGA. FPGA boards, known for their power efficiency and low latency, are well suited for time-critical machine learning applications. The thesis explores these novel onboard learning methods, shares the results of the suggested architectural changes, explains the trade-offs involved, and shows how the improved LSM model's accuracy can benefit different classification tasks. Additionally, we outline future research directions aimed at further enhancing the accuracy of these models. / Master of Science / Machine Learning (ML) and Artificial Intelligence (AI) have significantly shaped various applications in recent years. One notable domain experiencing substantial positive impact is spectrum sensing within wireless communications, particularly in cognitive radios. In light of spectrum scarcity and the underutilization of RF spectrum, accurately classifying spectrum bands as occupied or unoccupied becomes crucial for enabling secondary users to efficiently utilize available resources. Liquid State Machines (LSMs), built from spiking neural networks resembling the human brain, prove effective in real-time data monitoring for this classification task. By exploiting temporal operations, LSM accelerators and processors facilitate higher-performance and more accurate spectrum monitoring than conventional spectrum sensing methods. The architecture of the liquid state machine processor and its training and learning methods play a pivotal role in the performance of an LSM accelerator. This thesis delves into various architectural enhancements aimed at spectrum classification using a liquid state machine accelerator implemented on an FPGA board. FPGA boards, known for their power efficiency and low latency, are well suited for time-critical machine learning applications. The thesis explores onboard learning methods, such as employing a targeted encoder and incorporating Triplet Spike-Timing-Dependent Plasticity (Triplet STDP) in the learning reservoir. These enhancements improve the accuracy of conventional LSM models. The discussion concludes by presenting the results of the architectural implementations, highlighting trade-offs, and shedding light on avenues for further enhancing the accuracy of conventional liquid state machine-based models.
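The triplet STDP named in this abstract extends the classic pairwise rule with a slow second trace on each side of the synapse, letting the rule capture spike-rate effects that pair-based STDP misses. The following is a minimal discrete-time sketch of the Pfister–Gerstner triplet rule, not the thesis's FPGA implementation; all time constants and amplitudes are illustrative assumptions.

```python
import numpy as np

# A minimal discrete-time sketch of triplet STDP (Pfister & Gerstner, 2006).
# Constants are illustrative, not the thesis's hardware parameters.
dt = 1.0                                   # time step (ms)
tau_r1, tau_r2 = 16.8, 101.0               # fast/slow presynaptic traces (ms)
tau_o1, tau_o2 = 33.7, 114.0               # fast/slow postsynaptic traces (ms)
a2_plus, a3_plus = 5e-3, 6e-3              # pair/triplet LTP amplitudes
a2_minus, a3_minus = 7e-3, 2e-3            # pair/triplet LTD amplitudes

rng = np.random.default_rng(0)
steps = 1000
pre = rng.random(steps) < 0.02             # Bernoulli stand-in for Poisson spikes
post = rng.random(steps) < 0.02

w = 0.5
r1 = r2 = o1 = o2 = 0.0
for t in range(steps):
    if pre[t]:
        # LTD: scaled by the fast post trace, boosted by the slow pre trace
        w = max(0.0, w - o1 * (a2_minus + a3_minus * r2))
    if post[t]:
        # LTP: scaled by the fast pre trace, boosted by the slow post trace
        w = min(1.0, w + r1 * (a2_plus + a3_plus * o2))
    # exponential decay of all traces, then increment on spikes
    r1 += -dt * r1 / tau_r1 + pre[t]
    r2 += -dt * r2 / tau_r2 + pre[t]
    o1 += -dt * o1 / tau_o1 + post[t]
    o2 += -dt * o2 / tau_o2 + post[t]

print(f"final weight: {w:.3f}")
```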
92

Pattern recognition with spiking neural networks and the ROLLS low-power online learning neuromorphic processor

Ternstedt, Andreas January 2017 (has links)
Online monitoring applications requiring advanced pattern recognition capabilities implemented in resource-constrained wireless sensor systems are challenging to construct using standard digital computers. An interesting alternative solution is to use a low-power neuromorphic processor like the ROLLS, with subthreshold mixed analog/digital circuits and online learning capabilities that approximate the behavior of real neurons and synapses. This requires that the monitoring algorithm be implemented with spiking neural networks, which in principle are efficient computational models for tasks such as pattern recognition. In this work, I investigate how spiking neural networks can be used as a pre-processing and feature learning system in a condition monitoring application where the vibration of a machine with healthy and faulty rolling-element bearings is considered. Pattern recognition with spiking neural networks is investigated using simulations with Brian -- a Python-based open source toolbox -- and an implementation is developed for the ROLLS neuromorphic processor. I analyze the learned feature-response properties of individual neurons. When pre-processing the input signals with a neuromorphic cochlea known as the AER-EAR system, the ROLLS chip learns to classify the resulting spike patterns with a training error of less than 1%, at a combined power consumption of approximately 30 mW. Thus, the neuromorphic hardware system can potentially be realized in a resource-constrained wireless sensor for online monitoring applications. However, further work is needed for testing and cross-validation of the feature learning and pattern recognition networks.
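Since Brian is the named simulation toolbox, a minimal Brian 2 sketch of a Poisson-driven LIF layer is shown below to illustrate the simulation style; the network size, rates, and weights are illustrative stand-ins, not the thesis's bearing-monitoring model.

```python
from brian2 import (NeuronGroup, PoissonGroup, Synapses, SpikeMonitor,
                    run, ms, Hz)

# A minimal Brian 2 sketch: a LIF layer driven by Poisson input.
tau = 10 * ms
inputs = PoissonGroup(100, rates=15 * Hz)          # stand-in for encoded sensor data
neurons = NeuronGroup(10, 'dv/dt = -v / tau : 1',
                      threshold='v > 1', reset='v = 0', method='exact')
syn = Synapses(inputs, neurons, 'w : 1', on_pre='v += w')
syn.connect(p=0.2)                                 # sparse random connectivity
syn.w = '0.3 + 0.2*rand()'                         # random initial weights
spikes = SpikeMonitor(neurons)

run(200 * ms)
print(f"output spikes: {spikes.num_spikes}")
```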
93

Systèmes neuromorphiques : étude et implantation de fonctions d'apprentissage et de plasticité / Neuromorphic systems: study and implementation of learning and plasticity functions

Daouzli, Adel Mohamed 18 June 2009 (has links)
In this work, we investigated the influence of synaptic noise on synaptic plasticity in a network of biophysically realistic neurons. The study was carried out on a neuromorphic hardware simulation system. We implemented a conductance-based neuron model following the Hodgkin-Huxley formalism, together with a biophysical model of plasticity. The work included configuring the system, developing software tools to operate it and to analyze experimental results, and setting up a platform that makes the system accessible to the scientific community over the Internet through PyNN scripts (PyNN is a simulation description language commonly used in computational neuroscience).
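For reference, the Hodgkin-Huxley formalism mentioned above can be integrated directly in software; the sketch below steps the classic squid-axon equations with forward Euler. It illustrates the model family the hardware implements, with textbook parameters rather than the project's configuration.

```python
import numpy as np

# Forward-Euler integration of the classic Hodgkin-Huxley model -- a software
# illustration of the conductance-based formalism, not the neuromorphic system.
C = 1.0                           # membrane capacitance (uF/cm^2)
gNa, gK, gL = 120.0, 36.0, 0.3    # maximal conductances (mS/cm^2)
ENa, EK, EL = 50.0, -77.0, -54.4  # reversal potentials (mV)

def rates(V):
    """Voltage-dependent gating rate constants (1/ms)."""
    an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    bn = 0.125 * np.exp(-(V + 65) / 80)
    am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    bm = 4.0 * np.exp(-(V + 65) / 18)
    ah = 0.07 * np.exp(-(V + 65) / 20)
    bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
    return an, bn, am, bm, ah, bh

dt, T = 0.01, 50.0                # time step and duration (ms)
V, n, m, h = -65.0, 0.32, 0.05, 0.6
for step in range(int(T / dt)):
    an, bn, am, bm, ah, bh = rates(V)
    I_inj = 10.0 if step * dt > 5.0 else 0.0   # step current (uA/cm^2)
    I_ion = (gNa * m**3 * h * (V - ENa)
             + gK * n**4 * (V - EK) + gL * (V - EL))
    V += dt * (I_inj - I_ion) / C
    n += dt * (an * (1 - n) - bn * n)
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)

print(f"V after {T} ms: {V:.1f} mV")
```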
94

Spike-Based Bayesian-Hebbian Learning in Cortical and Subcortical Microcircuits

Tully, Philip January 2017 (has links)
Cortical and subcortical microcircuits are continuously modified throughout life. Despite ongoing changes these networks stubbornly maintain their functions, which persist although destabilizing synaptic and nonsynaptic mechanisms should ostensibly propel them towards runaway excitation or quiescence. What dynamical phenomena exist to act together to balance such learning with information processing? What types of activity patterns do they underpin, and how do these patterns relate to our perceptual experiences? What enables learning and memory operations to occur despite such massive and constant neural reorganization? Progress towards answering many of these questions can be pursued through large-scale neuronal simulations. In this thesis, a Hebbian learning rule for spiking neurons inspired by statistical inference is introduced. The spike-based version of the Bayesian Confidence Propagation Neural Network (BCPNN) learning rule involves changes in both synaptic strengths and intrinsic neuronal currents. The model is motivated by molecular cascades whose functional outcomes are mapped onto biological mechanisms such as Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability. Temporally interacting memory traces enable spike-timing dependence, a stable learning regime that remains competitive, postsynaptic activity regulation, spike-based reinforcement learning and intrinsic graded persistent firing levels. The thesis seeks to demonstrate how multiple interacting plasticity mechanisms can coordinate reinforcement, auto- and hetero-associative learning within large-scale, spiking, plastic neuronal networks. Spiking neural networks can represent information in the form of probability distributions, and a biophysical realization of Bayesian computation can help reconcile disparate experimental observations.
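As a rough illustration of the spike-based BCPNN idea, the sketch below runs the usual cascade of exponential traces (spikes → fast z-traces → slow p-traces) and reads out a log-odds weight and a bias, following the published form of the rule; the time constants, rates, and correlation scheme are illustrative assumptions, not the thesis's parameterization.

```python
import numpy as np

# A compact sketch of the spike-based BCPNN trace cascade:
# spikes -> fast z-traces -> slow p-traces -> log-odds weight and bias.
dt = 1.0                     # time step (ms)
tau_z, tau_p = 10.0, 1000.0  # fast and slow trace time constants (ms)
eps = 1e-4                   # keeps the logarithms finite at low rates

rng = np.random.default_rng(1)
steps = 5000
pre = rng.random(steps) < 0.02                       # presynaptic spikes
post = ((rng.random(steps) < 0.5) & pre) | (rng.random(steps) < 0.01)

zi = zj = 0.0
pi = pj = pij = eps
for t in range(steps):
    zi += dt * (pre[t] - zi) / tau_z      # presynaptic activity trace
    zj += dt * (post[t] - zj) / tau_z     # postsynaptic activity trace
    pi += dt * (zi - pi) / tau_p          # running estimate of P(pre)
    pj += dt * (zj - pj) / tau_p          # running estimate of P(post)
    pij += dt * (zi * zj - pij) / tau_p   # running estimate of P(pre, post)

w = np.log(pij / (pi * pj + eps) + eps)   # synaptic weight (log odds ratio)
bias = np.log(pj + eps)                   # intrinsic excitability term
print(f"w = {w:.3f}, bias = {bias:.3f}")
```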
95

Modèles cellulaires de champs neuronaux dynamiques / Cellular model of dynamic neural fields

Chappet de Vangel, Benoît 14 November 2016 (has links)
In the constant search for designs going beyond the limits of the von Neumann architecture, non-conventional computing offers various solutions such as neuromorphic engineering and cellular computing. Like von Neumann, who originally took inspiration from the brain to design computer architecture, neuromorphic engineering draws directly on neurons and synapses, using an analog substrate. Cellular computing takes its inspiration from natural computing substrates (chemical, physical, or biological) that impose a locality of interaction from which organisation and computation emerge. Research on neural mechanisms has demonstrated several emergent properties of neurons and synapses. One of them is the attractor dynamics described in different frameworks by Amari (dynamic neural fields, or DNF) and by Amit and Zhang (continuous attractor neural networks). These neural fields have various computing properties and are particularly relevant for spatial representations and the early stages of visual cortex processing. They have been used, for instance, in autonomous robotics, classification, and clustering. Like many neuronal computing models, they are robust to noise and faults and are thus good candidates for noisy hardware computation, which could help keep up with or surpass Moore's law. Indeed, shrinking transistors leads to more and more noise, and relaxing the ~0% fault constraint during production and operation of integrated circuits would yield enormous savings. Furthermore, the current evolution towards increasingly distributed many-core circuits raises difficulties tied to the still-centralised computation mode of most parallel algorithms and to their communication bottleneck. Cellular computing is a natural answer to these problems. Based on these observations, the goal of this thesis is to enable the rich computations and applications of dynamic neural fields on hardware substrates through neuro-cellular models that guarantee true locality, decentralisation, and scalability of the computations. This work is thus a reasoned proposal for going beyond von Neumann architectures by using cellular and neuronal computing principles. We nevertheless remain in the digital domain, exploring the performance of the proposed architectures on FPGA. Analog (VLSI) circuits would be just as interesting but are not studied here. The main contributions are: 1) DNF computation in a neuromorphic environment; 2) DNF computation with purely local communication: the RSDNF model (randomly spiking DNF); 3) DNF computation with purely local and asynchronous communication: the CASAS-DNF model (cellular array of stochastic asynchronous spiking DNF).
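A dynamic neural field of the Amari type can be summarized as tau·du/dt = -u + K*f(u) + input + h, with a lateral kernel K of local excitation and surround inhibition. The sketch below simulates such a field in NumPy and shows a self-sustained activity bump that persists after the input is removed; it illustrates the underlying DNF equation only, with illustrative parameters unrelated to the RSDNF or CASAS-DNF hardware models.

```python
import numpy as np

# A minimal NumPy sketch of an Amari-type dynamic neural field on a ring:
#   tau * du/dt = -u + K @ f(u) + stimulus + h
# with a difference-of-Gaussians lateral kernel.
n, tau, dt, h = 100, 10.0, 1.0, -0.5
x = np.arange(n)
d = np.minimum(np.abs(x[:, None] - x[None, :]),
               n - np.abs(x[:, None] - x[None, :]))      # ring distance
K = 1.0 * np.exp(-d**2 / (2 * 3.0**2)) - 0.5 * np.exp(-d**2 / (2 * 9.0**2))

f = lambda v: 1.0 / (1.0 + np.exp(-8.0 * (v - 0.5)))     # rate nonlinearity
stim = 1.5 * np.exp(-(x - 50.0)**2 / (2 * 4.0**2))       # localized input

u = np.zeros(n)
for step in range(600):
    inp = stim if step < 300 else 0.0                    # remove input halfway
    u += dt / tau * (-u + K @ f(u) + inp + h)

print(f"bump persists, centred at x = {np.argmax(u)}")   # expect ~50
```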
96

Contribution à la conception d'architecture de calcul auto-adaptative intégrant des nanocomposants neuromorphiques et applications potentielles / Adaptive Computing Architectures Based on Nano-fabricated Components

Bichler, Olivier 14 November 2012 (has links)
In this thesis, we study the potential applications of emerging memory nano-devices in computing architectures. More precisely, we show that neuro-inspired architectural paradigms could provide the efficiency and adaptability required in some complex image/audio processing and classification applications, at a much lower cost in terms of power consumption and silicon area than current von Neumann-derived architectures, thanks to a synaptic-like usage of these memory nano-devices. This work focuses on memristive nano-devices, recently (re-)introduced with the discovery of the memristor in 2008, and their use as synapses in spiking neural networks. This includes most of the emerging memory technologies: Phase-Change Memory (PCM), Conductive-Bridging RAM (CBRAM), Resistive RAM (RRAM)... These devices are particularly suitable for implementing unsupervised learning algorithms drawn from neuroscience, such as Spike-Timing-Dependent Plasticity (STDP), requiring very little control circuitry. The integration of memristive devices in crossbar arrays could furthermore provide the huge density required by this type of architecture (several thousand synapses per neuron), which is impossible to match with a CMOS-only implementation. This can be seen as one of the main factors that hindered the rise of CMOS-based neural network computing architectures in the nineties, along with the relative complexity and inefficiency of the back-propagation learning algorithm, despite all the promising aspects of such neuro-inspired architectures, like adaptability and fault tolerance. In this work, we propose synaptic models for memristive devices and simulation methodologies for architectural designs exploiting them. Novel neuro-inspired architectures are introduced and simulated for natural data processing. They exploit the synaptic characteristics of memristive nano-devices, along with the latest progress in neuroscience. Finally, we propose hardware implementations for several device types. We assess their scalability and power-efficiency potential, and their robustness to variability and faults, which are unavoidable at the nanometric scale of these devices. This last point is of prime importance, as it still constitutes the main difficulty for integrating these emerging technologies in digital memories.
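As an illustration of the simplified STDP typically mapped onto such crossbars, the sketch below applies a single coincidence-window rule to a column of bounded conductances: a presynaptic spike shortly before the postsynaptic one potentiates, anything else depresses. The device constants and window are illustrative assumptions; real PCM/CBRAM/RRAM updates are nonlinear and device-specific.

```python
import numpy as np

# A sketch of simplified STDP on a memristive crossbar: each synapse is a
# conductance G in [Gmin, Gmax], potentiated when the presynaptic spike
# falls within a causal window before the postsynaptic one, else depressed.
Gmin, Gmax = 1e-6, 1e-4       # conductance bounds (S)
dG_pot, dG_dep = 2e-6, 1e-6   # programming-pulse step sizes
window = 20.0                 # STDP coincidence window (ms)

rng = np.random.default_rng(2)
G = rng.uniform(Gmin, Gmax, size=(4, 3))    # 4 input rows x 3 output columns

def stdp_update(col, t_pre, t_post):
    """Apply one pre/post pairing to a whole crossbar column (in place)."""
    for i, tp in enumerate(t_pre):
        if 0 < t_post - tp <= window:
            col[i] = min(Gmax, col[i] + dG_pot)   # causal pairing -> potentiate
        else:
            col[i] = max(Gmin, col[i] - dG_dep)   # otherwise -> depress
    return col

t_pre = np.array([5.0, 12.0, 30.0, 14.0])   # row spike times (ms)
stdp_update(G[:, 0], t_pre, t_post=15.0)    # postsynaptic spike at 15 ms
print(G[:, 0])
```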
97

Utilisation des nano-composants électroniques dans les architectures de traitement associées aux imageurs / Integration of memory nano-devices in image sensors processing architecture

Roclin, David 16 December 2014 (has links)
By using learning mechanisms drawn from recent discoveries in neuroscience, spiking neural networks have demonstrated their ability to efficiently analyze the large amounts of data coming from our environment. Implementing such circuits on conventional processors does not allow their parallelism to be exploited efficiently. Using digital memory to implement the synaptic weights allows neither parallel reading nor parallel programming of the synapses, and it is limited by the bandwidth of the connection between the memory and the processing unit. Emerging memristive memory technologies could allow this parallelism to be implemented directly in the heart of the memory. In this thesis, we consider the development of an embedded spiking neural network based on emerging memory devices. First, we analyze a spiking network to optimize its different components — the neuron, the synapse, and the STDP learning mechanism — with a view to digital implementation. Then, we consider implementing the synaptic memory with emerging memristive devices. Finally, we present the development of a neuromorphic chip co-integrating CMOS neurons with CBRAM synapses.
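Optimizing a neuron model for digital implementation typically means replacing floating-point dynamics with integer arithmetic, for example a leak implemented as a bit shift. The sketch below shows a fixed-point LIF step of that general kind; the threshold, shift, and input values are illustrative assumptions, not the thesis's design.

```python
# A sketch of an integer-arithmetic LIF neuron for digital hardware:
# weights, membrane potential, and threshold are integers, and the leak
# is a right shift (a cheap approximation of exponential decay).
THRESHOLD = 1 << 11      # firing threshold (illustrative integer value)
LEAK_SHIFT = 4           # leak: v -= v >> 4 each step, no multiplier needed

def lif_step(v, weighted_spikes):
    """One time step: integrate integer synaptic input, leak, fire, reset."""
    v += sum(weighted_spikes)     # accumulate incoming integer weights
    v -= v >> LEAK_SHIFT          # shift-based leak
    if v >= THRESHOLD:
        return 0, True            # reset to 0 and emit a spike
    return v, False

v = 0
for t in range(20):
    inputs = [300, 150] if t % 2 == 0 else [80]    # toy weighted input spikes
    v, fired = lif_step(v, inputs)
    if fired:
        print(f"spike at t={t}")
```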
98

Integration of Ferroelectricity into Advanced 3D Germanium MOSFETs for Memory and Logic Applications

Wonil Chung (7887626) 20 November 2019 (has links)
Germanium-based MOS devices, considered among the promising alternative channel materials, have been studied with well-known FinFET and nanowire structures and HKMG (high-k metal gate) stacks. The recent introduction of ferroelectric (FE) Zr-doped HfO2 (HfxZr1-xO2, HZO) has opened various possibilities in both memory and logic applications.

First, the integration of FE HZO into the conventional Ge platform was studied to demonstrate a Ge FeFET. The FE oxide was deposited with an optimized atomic layer deposition (ALD) recipe by intermixing HfO2 and ZrO2. The HZO film was characterized with an FE tester, XRD, and AR-XPS. It was then integrated into the conventional gate stack of Ge devices to demonstrate Ge FeFETs. Polarization switching was measured with an ultrafast measurement set-up down to 100 ps.

Next, the HZO layer was engineered for the first demonstration of hysteresis-free Ge negative-capacitance (NC) CMOS FinFETs with sub-60 mV/dec subthreshold swing in both sweep directions at room temperature, towards possible logic applications. Short-channel effects in Ge NCFETs were compared with our previously reported work to show superior robustness. For fin widths too small to be written directly by the e-beam lithography tool, digital etching of the Ge fins was optimized.

Lastly, a Ge FeFET-based synaptic device for neuromorphic computing was demonstrated. Optimized pulsing schemes were tested for both potentiation and depression, resulting in highly linear and symmetric conductance profiles. Simulations were performed to analyze the Ge FeFET's role as a synaptic device for deep neural networks.
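Synaptic-device linearity of the kind reported here is commonly evaluated with a generic nonlinear conductance-update model, in which identical pulses produce saturating conductance changes governed by a nonlinearity parameter. The sketch below implements that generic model with illustrative values; it is not fitted to the Ge FeFET data.

```python
import numpy as np

# A sketch of the generic saturating conductance-update model used to
# compare potentiation/depression linearity of synaptic devices.
Gmin, Gmax, n_pulses = 0.0, 1.0, 32

def pulse_response(nl):
    """Normalized conductance vs. pulse number for nonlinearity parameter nl."""
    A = (Gmax - Gmin) / (1 - np.exp(-n_pulses / nl))   # scale so G spans range
    p = np.arange(n_pulses + 1)
    G_pot = Gmin + A * (1 - np.exp(-p / nl))           # potentiation sweep
    G_dep = Gmax - A * (1 - np.exp(-p / nl))           # depression sweep
    return G_pot, G_dep

G_pot, _ = pulse_response(nl=3.0)        # strongly nonlinear device
lin_pot, _ = pulse_response(nl=300.0)    # nearly linear (ideal-like) device
print(f"G at mid-sweep -- nonlinear: {G_pot[16]:.2f}, near-linear: {lin_pot[16]:.2f}")
```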
99

An Analog Architecture for Auditory Feature Extraction and Recognition

Smith, Paul Devon 22 November 2004 (has links)
Speech recognition systems have been implemented using a wide range of signal processing techniques, including neuromorphic/biologically inspired and digital signal processing (DSP) techniques. Neuromorphic/biologically inspired techniques, such as silicon cochlea models, are based on fairly simple yet highly parallel computations and/or computational units, while DSP is based on block transforms and statistical or error-minimization methods. Essential to each of these techniques is the first stage of extracting meaningful information from the speech signal, known as feature extraction. This can be done using biologically inspired techniques such as silicon cochlea models, or techniques that begin with a model of speech production and then try to separate the vocal tract response from an excitation signal. Even within each of these approaches there are multiple techniques, including cepstrum filtering, which falls under the class of homomorphic signal processing, and FFT-based predictive approaches. The underlying reality is that multiple techniques have attacked the speech recognition problem, but the problem is still far from solved. The techniques shown to have the best recognition rates use cepstrum coefficients for feature extraction and Hidden Markov Models for pattern recognition. The presented research develops an analog system based on programmable analog array technology that can perform the initial stages of auditory feature extraction and recognition before passing information to a digital signal processor, the goal being a low-power system that can be fully contained on one or more integrated circuit chips. Results show that it is possible to realize advanced filtering techniques such as cepstrum filtering and vector quantization in analog circuitry. Prior to this work, applications of analog signal processing had focused on vision, cochlea models, anti-aliasing filters, and other single-component uses. Furthermore, classic designs have relied heavily on op-amps as the basic core building block. This research also shows a novel design for a Hidden Markov Model (HMM) decoder utilizing circuits that take advantage of the inherent properties of subthreshold transistors and floating-gate technology to create low-power computational blocks.
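The cepstrum filtering named above is homomorphic: taking the log of the magnitude spectrum turns the convolution of excitation and vocal-tract response into a sum, and an inverse FFT then separates the two by quefrency. A minimal sketch on a synthetic frame is given below; the signal and the number of coefficients kept are illustrative.

```python
import numpy as np

# A minimal sketch of real-cepstrum feature extraction on one frame.
fs = 8000
t = np.arange(0, 0.032, 1 / fs)                    # one 32 ms frame (256 samples)
signal = np.sin(2 * np.pi * 200 * t) * (1 + 0.5 * np.sin(2 * np.pi * 35 * t))
frame = signal * np.hamming(len(signal))           # taper the frame edges

spectrum = np.fft.rfft(frame)
log_mag = np.log(np.abs(spectrum) + 1e-10)         # log: convolution -> sum
cepstrum = np.fft.irfft(log_mag)                   # back to the quefrency domain

# Low-quefrency coefficients approximate the vocal-tract envelope; these are
# the features that would be handed to the recognizer (e.g., HMM) stage.
features = cepstrum[:13]
print(np.round(features, 3))
```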
100

Learning in silicon: a floating-gate based, biophysically inspired, neuromorphic hardware system with synaptic plasticity

Brink, Stephen Isaac 24 August 2012 (has links)
The goal of neuromorphic engineering is to create electronic systems that model the behavior of biological neural systems. Neuromorphic systems can leverage a combination of analog and digital circuit design techniques to enable computational modeling, with orders of magnitude of reduction in size, weight, and power consumption compared to the traditional modeling approach based upon numerical integration. These benefits of neuromorphic modeling have the potential to facilitate neural modeling in resource-constrained research environments. Moreover, they will make it practical to use neural computation in the design of intelligent machines, including portable, battery-powered, and energy harvesting applications. Floating-gate transistor technology is a powerful tool for neuromorphic engineering because it allows dense implementation of synapses with nonvolatile storage of synaptic weights, cancellation of process mismatch, and reconfigurable system design. A novel neuromorphic hardware system, featuring compact and efficient channel-based model neurons and floating-gate transistor synapses, was developed. This system was used to model a variety of network topologies with up to 100 neurons. The networks were shown to possess computational capabilities such as spatio-temporal pattern generation and recognition, winner-take-all competition, bistable activity implementing a "volatile memory", and wavefront-based robotic path planning. Some canonical features of synaptic plasticity, such as potentiation of high frequency inputs and potentiation of correlated inputs in the presence of uncorrelated noise, were demonstrated. Preliminary results regarding formation of receptive fields were obtained. Several advances in enabling technologies, including methods for floating-gate transistor array programming, and the creation of a reconfigurable system for studying adaptation in floating-gate transistor circuits, were made.
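One of the network motifs listed, winner-take-all competition, can be sketched with LIF neurons under strong global inhibition: the first neuron to reach threshold fires and resets all competitors. The toy model below illustrates that behavior; its parameters are illustrative and unrelated to the floating-gate circuit implementation.

```python
import numpy as np

# A sketch of hard winner-take-all among LIF neurons: firing triggers a
# global reset, mimicking strong shared inhibition.
n, tau, dt, threshold = 5, 20.0, 1.0, 1.0
drive = np.array([0.9, 1.2, 0.8, 1.1, 0.7])    # neuron 1 has the largest input

v = np.zeros(n)
spike_counts = np.zeros(n, dtype=int)
for _ in range(500):
    v += dt / tau * (-v + drive)               # leaky integration toward drive
    winner = np.argmax(v)
    if v[winner] >= threshold:
        spike_counts[winner] += 1
        v[:] = 0.0                             # global reset: inhibit everyone

print(f"spike counts: {spike_counts}")         # dominated by neuron 1
```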
