11

Exploring Column Update Elimination Optimization for Spike-Timing-Dependent Plasticity Learning Rule

Singh, Ojasvi January 2022 (has links)
When Hebbian learning rules for neural networks are implemented in hardware, the synaptic weights are stored as a two-dimensional matrix, which demands large memory bandwidth and storage. While memory units are optimized for row-wise access only, Hebbian learning rules such as spike-timing-dependent plasticity require both row- and column-wise access to memory. This dual access pattern accounts for the dominant latency and energy cost of realizing large-scale spiking neural networks in hardware. To reduce the memory access cost of Hebbian learning rules, a Column Update Elimination optimization has previously been implemented, with great efficacy, on the Bayesian Confidence Propagation neural network, which faces a similar dual-access challenge. This thesis explores extending the column update elimination optimization to spike-timing-dependent plasticity by simulating the learning rule on a two-layer network of leaky integrate-and-fire neurons performing an image classification task. The spike times of each neuron in the network are recorded in order to derive a suitable probability distribution of per-neuron spike rates, which is then used to derive an appropriate postsynaptic spike history buffer size for the given algorithm. The associated reductions in memory accesses are analysed on the basis of these data to assess the feasibility of applying the optimization to the learning rule.
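To make the access-pattern problem concrete, the sketch below implements pair-based STDP on a weight matrix, then the deferred-update idea behind column update elimination. It is a minimal illustration assuming an exponential STDP window; the constants and the buffer size are invented for the demo, not taken from the thesis.

```python
import numpy as np

# Pair-based STDP on a weight matrix W[i, j] (pre neuron i -> post neuron j).
# A pre spike depresses one row (row-wise memory access); a post spike
# potentiates one column (column-wise access) -- the costly dual pattern.
rng = np.random.default_rng(0)
n_pre, n_post = 4, 3
W = rng.uniform(0.0, 1.0, (n_pre, n_post))
last_pre = np.full(n_pre, -np.inf)        # last spike time of each pre neuron
last_post = np.full(n_post, -np.inf)      # last spike time of each post neuron
A_PLUS, A_MINUS, TAU = 0.010, 0.012, 20.0  # illustrative constants (ms)

def on_pre_spike(i, t):
    """Row-wise update: depress synapses leaving pre neuron i (LTD)."""
    W[i, :] -= A_MINUS * np.exp(-(t - last_post) / TAU)
    last_pre[i] = t

def on_post_spike(j, t):
    """Column-wise update: potentiate synapses onto post neuron j (LTP)."""
    W[:, j] += A_PLUS * np.exp(-(t - last_pre) / TAU)
    last_post[j] = t

# Column update elimination (sketch): record post spikes in a bounded
# per-neuron history instead of writing the column, and fold pending LTP
# into the next row-wise access. The thesis sizes this buffer from the
# measured distribution of per-neuron spike rates; HISTORY is hypothetical.
HISTORY = 8
post_hist = [[] for _ in range(n_post)]

def on_post_spike_deferred(j, t):
    post_hist[j] = (post_hist[j] + [t])[-HISTORY:]   # no column access here

def on_pre_spike_deferred(i, t):
    for j in range(n_post):                          # a single row-wise pass
        for s in post_hist[j]:
            if s > last_pre[i]:                      # LTP not yet applied here
                W[i, j] += A_PLUS * np.exp(-(s - last_pre[i]) / TAU)
        latest = max(post_hist[j], default=-np.inf)
        W[i, j] -= A_MINUS * np.exp(-(t - latest) / TAU)
    last_pre[i] = t
```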
12

Learning mechanisms to explain the speed, selectivity and invariance of responses in the visual cortex

Masquelier, Timothée 15 February 2008 (has links) (PDF)
In this thesis I propose several synaptic plasticity mechanisms that could explain the speed, selectivity and invariance of neuronal responses in the visual cortex, and discuss their biological plausibility. I also report the results of a relevant psychophysics experiment, which show that familiarity can speed up visual processing. Beyond these results specific to the visual system, the work presented here supports the hypothesis that spike times are used to encode, decode and process information in the brain – the so-called 'temporal coding' theory. Within such a framework, Spike Timing Dependent Plasticity could play a key role by detecting repeating spike patterns and enabling ever faster responses to them.
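As a toy illustration of that last point, STDP latching onto a repeating spike pattern, the sketch below embeds a frozen pattern in Poisson background noise and trains a leaky integrate-and-fire unit with a simplified STDP rule. All parameters are illustrative assumptions, not the thesis model.

```python
import numpy as np

# Afferents 0-49 replay a frozen 50 ms pattern in every even block, embedded
# in Poisson background; an integrate-and-fire unit with simplified STDP
# grows selective to the pattern afferents while noise inputs decay.
rng = np.random.default_rng(5)
N, BLOCKS, STEPS = 200, 400, 50              # afferents, blocks, 1 ms steps
pattern = rng.random((50, STEPS)) < 0.04     # frozen spikes, afferents 0-49
w = np.full(N, 0.3)
V_TH, DECAY = 15.0, 0.9
A_PLUS, A_MINUS, WINDOW = 0.010, 0.0035, 10  # LTP/LTD amplitudes, ms window

for b in range(BLOCKS):
    spikes = rng.random((N, STEPS)) < 0.02   # background noise at 20 Hz
    if b % 2 == 0:
        spikes[:50] = pattern                # insert the repeating pattern
    v, last_pre = 0.0, np.full(N, -1e9)
    for t in range(STEPS):
        last_pre[spikes[:, t]] = t
        v = DECAY * v + w @ spikes[:, t]     # leaky integration
        if v >= V_TH:                        # post spike: apply STDP
            w = np.clip(w + np.where(last_pre > t - WINDOW,
                                     A_PLUS, -A_MINUS), 0.0, 1.0)
            v = 0.0

print(w[:50].mean(), w[50:].mean())          # pattern afferents end up stronger
```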
13

Contribution to the design of self-adaptive computing architectures integrating neuromorphic nanodevices, and potential applications

Bichler, Olivier 14 November 2012 (has links) (PDF)
In this thesis, we study the potential applications of emerging memory nanodevices in computing architectures. We show that neuro-inspired architectures could provide the efficiency and adaptability required by complex processing and classification applications in visual and auditory perception, at a lower cost in terms of energy consumption and silicon area than Von Neumann-type architectures, thanks to the use of these nanodevices as synapses. This work focuses on so-called "memristive" devices, recently (re)introduced with the discovery of the memristor in 2008, and on their use as synapses in spiking neural networks. This covers most emerging memory technologies: Phase-Change Memory (PCM), Conductive-Bridging RAM (CBRAM), Resistive RAM (RRAM)... These devices are well suited to implementing unsupervised learning algorithms from neuroscience, such as Spike-Timing-Dependent Plasticity (STDP), which require little control circuitry. Integrating memristive devices in crossbar arrays could moreover achieve the enormous integration density required for this type of implementation (several thousand synapses per neuron), which remains out of reach of a purely Complementary Metal Oxide Semiconductor (CMOS) technology. This is one of the main reasons why CMOS-based neural networks did not meet with the expected success in the 1990s; to this must be added the relative complexity and inefficiency of the gradient back-propagation learning algorithm, despite all the promising aspects of neuro-inspired architectures, such as adaptability and fault tolerance. In this work, we propose synaptic models of memristive devices and simulation methodologies for architectures exploiting them. New-generation neuro-inspired architectures are introduced and simulated for the processing of natural data, taking advantage of the synaptic characteristics of memristive nanodevices combined with the latest advances in neuroscience. Finally, we propose hardware implementations suited to several types of devices and evaluate their potential in terms of integration density and energy efficiency, as well as their tolerance to the variability and defects inherent at the nanometre scale of these devices. This last point is of capital importance, since it still constitutes the main obstacle to integrating these emerging technologies into digital memories.
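As a schematic picture of how such a crossbar computes with memristive synapses, the sketch below models inference as an analog multiply-accumulate over a conductance matrix, and STDP-style programming as bounded conductance increments. The linear, sneak-path-free behaviour and all device values are simplifying assumptions, not measurements from the thesis.

```python
import numpy as np

# G[i, j] is the conductance of the device at row i (input line) and column j
# (neuron line); a read is an analog multiply-accumulate along each column.
rng = np.random.default_rng(1)
n_in, n_out = 8, 4
G = rng.uniform(1e-6, 1e-4, (n_in, n_out))   # siemens; illustrative range

def crossbar_read(v_in):
    """Column currents I_j = sum_i G[i, j] * v_in[i] (Kirchhoff's law)."""
    return v_in @ G

# Simplified STDP-style programming: overlapping pre/post pulses push one
# device's conductance up or down by a small, bounded increment.
G_MIN, G_MAX, DG = 1e-6, 1e-4, 5e-6

def program(i, j, potentiate):
    step = DG if potentiate else -DG
    G[i, j] = np.clip(G[i, j] + step, G_MIN, G_MAX)

currents = crossbar_read(rng.uniform(0.0, 0.3, n_in))  # low-voltage read
program(2, 1, potentiate=True)                          # write one synapse
print(currents)
```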
14

STDP Implementation Using CBRAM Devices in CMOS

January 2015 (has links)
abstract: Alternative computation based on neural systems at the nanoscale is of increasing interest because of the massive parallelism and scalability it provides. Neural computation systems also offer defect-finding and self-healing capabilities. Traditional Von Neumann architectures, which separate the memory and computation units, inherently suffer from the Von Neumann bottleneck, whereby the processor is limited by the rate at which it fetches instructions. The clock-driven Von Neumann computer survived because of technology scaling. However, as transistor scaling slowly comes to an end, with channel lengths of only a few nanometers, processor speeds are beginning to saturate. This led to the development of multi-core systems, which process data in parallel, with each core still based on the Von Neumann architecture. The human brain has always been a mystery to scientists. Modern-day supercomputers are outperformed by the human brain in certain computations, such as pattern recognition, for which the brain occupies far less space and consumes a fraction of the power. Neuromorphic computing aims to mimic biological neural systems in silicon to exploit the massive parallelism that neural systems offer, using event-driven rather than clock-driven designs. One of the issues faced by neuromorphic computing has been the area occupied by these circuits. With recent developments in nanotechnology, nanoscale memristive devices have been developed and offer a promising solution: memristor-based synapses can be up to three times smaller than Complementary Metal Oxide Semiconductor (CMOS) based synapses. In this thesis, the Programmable Metallization Cell (a memristive device) is used to demonstrate a learning algorithm known as Spike Timing Dependent Plasticity (STDP), an extension of Hebb's learning rule in which a synapse's weight is altered by the relative timing of the spikes across it. The memristor's conductance serves as the synaptic weight, and CMOS oscillator-based circuits produce spikes that modulate the memristor conductance by firing with different phase differences. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2015
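The oscillator scheme can be pictured with a small sketch: treat the phase lag between two same-frequency oscillators as the pre/post spike-time difference and feed it through an exponential STDP window. The mapping and all constants below are assumptions for illustration, not the circuit described in the thesis.

```python
import numpy as np

F = 100.0                       # oscillator frequency in Hz (illustrative)
TAU = 5e-3                      # STDP time constant in seconds (illustrative)
A_PLUS, A_MINUS = 1e-6, 1.2e-6  # conductance increments in siemens

def delta_g(phase_pre, phase_post):
    """Conductance change from the phase lag between the two oscillators."""
    dt = ((phase_post - phase_pre) % (2 * np.pi)) / (2 * np.pi * F)
    if dt < 1.0 / (2 * F):      # post lags pre by under half a period: LTP
        return A_PLUS * np.exp(-dt / TAU)
    dt = 1.0 / F - dt           # otherwise treat as pre lagging post: LTD
    return -A_MINUS * np.exp(-dt / TAU)

print(delta_g(0.0, np.pi / 4))  # small positive lag -> potentiation
print(delta_g(np.pi / 4, 0.0))  # negative lag -> depression
```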
15

Frequency preference and reliability of signal integration

Schreiber, Susanne 21 July 2004 (has links)
The properties of individual neurons are of fundamental importance for the processing of information in the nervous system. The generation of voltage responses to input signals, in particular, depends on the properties of ion channels in the cell membrane. Within this thesis, I employ conductance-based model neurons to investigate the effect of ionic conductances and their dynamics on two aspects of signal processing: frequency selectivity, and the temporal precision and reliability of spikes. First, the cell-intrinsic mechanisms that determine frequency selectivity and spike timing reliability are identified. Second, it is analyzed how ionic conductances can serve to modulate these mechanisms in order to optimize signal integration. In the first part, the frequency selectivity of subthreshold response amplitudes previously observed for periodic stimuli is shown to extend to nonperiodic stimuli and to translate into firing rates. In the second part, it is demonstrated that spike timing reliability is frequency-selective and that two different stimulus regimes have to be distinguished, depending on whether the stimulus mean is below or above threshold. In both cases, resonance effects determine the most reliable stimulus frequency, and this frequency preference can be modulated by the peak conductance and dynamics of specific ion channels. In the third part, evidence is provided that ionic conductances determine spike timing reliability beyond changes in the preferred frequency: they also exert a direct influence on the sensitivity of spike timing to neuronal noise. The findings suggest an important role for dynamic neuromodulation of ion channels with regard to frequency selectivity and spike timing reliability.
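A minimal simulation of the kind of protocol described here, repeated frozen stimuli with independent noise and a coincidence-based reliability measure, might look as follows. The neuron model, constants, and coincidence window are illustrative assumptions, not the conductance-based models used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
DT, T = 1e-4, 2.0                        # time step and trial length (s)
TAU_M, V_TH, V_RESET = 20e-3, 1.0, 0.0   # membrane constants (dimensionless V)

def spike_times(freq, dc=1.1, amp=0.5, noise=1.0):
    """One trial of a leaky integrate-and-fire neuron driven by a sinusoid."""
    t = np.arange(0.0, T, DT)
    stim = dc + amp * np.sin(2 * np.pi * freq * t)   # suprathreshold mean
    v, spikes = 0.0, []
    for k in range(t.size):
        v += (stim[k] - v) * DT / TAU_M + noise * np.sqrt(DT) * rng.standard_normal()
        if v >= V_TH:
            spikes.append(t[k])
            v = V_RESET
    return np.asarray(spikes)

def reliability(freq, trials=10, window=2e-3):
    """Fraction of spikes with a partner within +/-window in another trial."""
    trains = [spike_times(freq) for _ in range(trials)]
    hits = total = 0
    for a in range(trials):
        for b in range(trials):
            if a == b:
                continue
            for s in trains[a]:
                total += 1
                hits += np.any(np.abs(trains[b] - s) < window)
    return hits / max(total, 1)

for f in (2.0, 10.0, 50.0):
    print(f"{f:5.1f} Hz: reliability {reliability(f):.2f}")
```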
16

Mathematical Description of Differential Hebbian Plasticity and its Relation to Reinforcement Learning

Kolodziejski, Christoph Markus 13 February 2009 (has links)
No description available.
17

Learning in silicon: a floating-gate based, biophysically inspired, neuromorphic hardware system with synaptic plasticity

Brink, Stephen Isaac 24 August 2012 (has links)
The goal of neuromorphic engineering is to create electronic systems that model the behavior of biological neural systems. Neuromorphic systems can leverage a combination of analog and digital circuit design techniques to enable computational modeling, with orders of magnitude of reduction in size, weight, and power consumption compared to the traditional modeling approach based upon numerical integration. These benefits of neuromorphic modeling have the potential to facilitate neural modeling in resource-constrained research environments. Moreover, they will make it practical to use neural computation in the design of intelligent machines, including portable, battery-powered, and energy harvesting applications. Floating-gate transistor technology is a powerful tool for neuromorphic engineering because it allows dense implementation of synapses with nonvolatile storage of synaptic weights, cancellation of process mismatch, and reconfigurable system design. A novel neuromorphic hardware system, featuring compact and efficient channel-based model neurons and floating-gate transistor synapses, was developed. This system was used to model a variety of network topologies with up to 100 neurons. The networks were shown to possess computational capabilities such as spatio-temporal pattern generation and recognition, winner-take-all competition, bistable activity implementing a "volatile memory", and wavefront-based robotic path planning. Some canonical features of synaptic plasticity, such as potentiation of high frequency inputs and potentiation of correlated inputs in the presence of uncorrelated noise, were demonstrated. Preliminary results regarding formation of receptive fields were obtained. Several advances in enabling technologies, including methods for floating-gate transistor array programming, and the creation of a reconfigurable system for studying adaptation in floating-gate transistor circuits, were made.
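Among the computations listed, winner-take-all competition is easy to caricature in a few lines. The sketch below is a rate-based abstraction of lateral inhibition with assumed parameters throughout, not a model of the floating-gate hardware itself.

```python
import numpy as np

def winner_take_all(x, steps=300, dt=0.05, self_exc=0.5, inhib=1.0):
    """Each unit excites itself and inhibits the rest; strongest input wins."""
    r = np.zeros_like(x)
    for _ in range(steps):
        drive = x + self_exc * r - inhib * (r.sum() - r)  # lateral inhibition
        r += dt * (-r + np.maximum(drive, 0.0))           # rectified dynamics
    return r

print(winner_take_all(np.array([0.9, 1.0, 0.8])))  # only the middle unit stays active
```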
18

Deep learning in event-based neuromorphic systems

Thiele, Johannes C. 22 November 2019 (has links)
Inference and training in deep neural networks require large amounts of computation, which in many cases prevents the integration of deep networks in resource-constrained environments. Event-based spiking neural networks represent an alternative to standard artificial neural networks that holds the promise of more energy-efficient processing. However, training spiking neural networks to achieve high inference performance is still challenging, in particular when learning is also required to be compatible with neuromorphic constraints. This thesis studies training algorithms and information encoding in such deep networks of spiking neurons. Starting from a biologically inspired learning rule, we analyze which properties of learning rules are necessary in deep spiking neural networks to enable embedded learning in a continuous learning scenario. We show that a time-scale-invariant learning rule based on spike-timing dependent plasticity is able to perform hierarchical feature extraction and classification of simple objects from the MNIST and N-MNIST datasets. To overcome certain limitations of this approach, we design a novel framework for spike-based learning, SpikeGrad, which represents a fully event-based implementation of the gradient backpropagation algorithm. We show how this algorithm can be used to train a spiking network that performs inference of relations between numbers and MNIST images. Additionally, we demonstrate that the framework is able to train large-scale convolutional spiking networks to competitive recognition rates on the MNIST and CIFAR10 datasets. In addition to being an effective and precise learning mechanism, SpikeGrad allows the response of the spiking neural network to be described in terms of a standard artificial neural network, which enables faster simulation of spiking neural network training. Our work therefore introduces several powerful training concepts for on-chip learning in neuromorphic devices, which could help to scale spiking neural networks to real-world problems.
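SpikeGrad itself propagates gradients as spike-like events; as a generic stand-in, the sketch below shows the surrogate-gradient trick that most gradient-based training of spiking networks relies on: a hard threshold in the forward pass, a smooth (here boxcar) pseudo-derivative in the backward pass. The network shape, constants, and loss are illustrative assumptions, not the SpikeGrad algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.uniform(0.0, 0.3, (10, 2))   # one spiking layer: 10 inputs -> 2 units
V_TH, LR = 1.0, 0.1

def forward(x):
    v = x @ W                         # membrane potential (single time step)
    s = (v >= V_TH).astype(float)     # spike = Heaviside(v - threshold)
    return v, s

def backward(x, v, grad_s):
    surrogate = (np.abs(v - V_TH) < 1.0).astype(float)  # boxcar pseudo-derivative
    return np.outer(x, grad_s * surrogate)              # d loss / d W

x = rng.uniform(0.0, 1.0, 10)
target = np.array([1.0, 0.0])         # unit 0 should spike, unit 1 stay silent
for _ in range(50):
    v, s = forward(x)
    W -= LR * backward(x, v, s - target)   # squared-error gradient step
print(forward(x)[1])                  # spike pattern should match the target
```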
19

Pattern formation in neural circuits by the interaction of travelling waves with spike-timing dependent plasticity

Bennett, James Edward Matthew January 2014 (has links)
Spontaneous travelling waves of neuronal activity are a prominent feature throughout the developing brain and have been shown to be essential for achieving normal function, but the mechanism of their action on post-synaptic connections remains unknown. A well-known and widespread mechanism for altering synaptic strengths is spike-timing dependent plasticity (STDP), whereby the temporal relationship between the pre- and post-synaptic spikes determines whether a synapse is strengthened or weakened. Here, I answer the theoretical question of how these two phenomena interact: what types of connectivity patterns can emerge when travelling waves drive a downstream area that implements STDP, and what are the critical features of the waves and the plasticity rules that shape these patterns? I then demonstrate how the theory can be applied to the development of the visual system, where retinal waves are hypothesised to play a role in the refinement of downstream connections. My major findings are as follows. (1) Mathematically, STDP translates the correlated activity of travelling waves into coherent patterns of synaptic connectivity; it maps the spatiotemporal structure in waves into a spatial pattern of synaptic strengths, building periodic structures into feedforward circuits. This is analogous to pattern formation in reaction diffusion systems. The theory reveals a role for the wave speed and time scale of the STDP rule in determining the spatial frequency of the connectivity pattern. (2) Simulations verify the theory and extend it from one-dimensional to two-dimensional cases, and from simplified linear wavefronts to more complex realistic and noisy wave patterns. (3) With appropriate constraints, these pattern formation abilities can be harnessed to explain a wide range of developmental phenomena, including how receptive fields (RFs) in the visual system are refined in size and topography and how simple-cell and direction-selective RFs can develop. The theory is applied to the visual system here but generalises across different brain areas and STDP rules. The theory makes several predictions that are testable using existing experimental paradigms.
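A one-dimensional toy of the wave/STDP interaction makes the role of wave speed and STDP time scale tangible; all parameters below are illustrative assumptions, not the models of the thesis.

```python
import numpy as np

# A wavefront sweeps across N presynaptic cells at speed VEL, each cell firing
# once per sweep; the postsynaptic cell fires as the front passes cell POST.
# Pair-based STDP potentiates inputs firing just before that moment and
# depresses those firing just after, so a weight profile emerges whose
# width is set by VEL * TAU, echoing the theory's prediction.
N, VEL = 100, 10.0                         # cells, wave speed (cells per ms)
TAU, A_PLUS, A_MINUS = 1.0, 0.010, 0.011   # STDP window (ms) and amplitudes
POST = 50                                  # post cell fires at this position
w = np.full(N, 0.5)

for sweep in range(50):
    pre_t = np.arange(N) / VEL             # spike time of each pre cell (ms)
    dt = POST / VEL - pre_t                # > 0: pre fired before post
    dw = np.where(dt > 0, A_PLUS * np.exp(-dt / TAU),
                  -A_MINUS * np.exp(dt / TAU))
    w = np.clip(w + dw, 0.0, 1.0)

print(w.argmax(), np.round(w[44:56], 2))   # peak just before cell 50, dip after
```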
20

Redundant Input Cancellation by a Bursting Neural Network

Bol, Kieran G. 20 June 2011 (has links)
One of the most powerful and important feats the brain accomplishes is solving the sensory "cocktail party problem": adaptively suppressing extraneous signals in an environment. Theoretical studies suggest that the solution to this problem involves an adaptive filter, which learns to remove the redundant noise. However, the study of neural learning is still in its infancy, and many questions remain about the stability and application of synaptic learning rules for neural computation. In this thesis, the implementation of an adaptive filter in the brain of a weakly electric fish, A. leptorhynchus, was studied. It was found to require a cerebellar architecture that could supply independent frequency channels of delayed feedback, and multiple burst learning rules that could shape this feedback. This unifies two previously separate ideas about the function of the cerebellum: the cerebellum as an adaptive filter and as a generator of precise temporal inputs.
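The adaptive-filter idea can be illustrated with a classic least-mean-squares canceller: a standard abstraction of the cerebellar "negative image", not the burst-learning model developed in the thesis. The signals and constants are invented for the demo.

```python
import numpy as np

# Feedback taps learn to predict and subtract the redundant (self-generated)
# component of the sensed signal, leaving the novel external events.
rng = np.random.default_rng(4)
T, TAPS, LR = 5000, 32, 0.002
redundant = np.sin(2 * np.pi * 0.01 * np.arange(T))   # predictable signal
novel = (rng.random(T) < 0.002) * 2.0                 # sparse external events
sensed = redundant + novel
w = np.zeros(TAPS)

out = np.zeros(T)
for t in range(TAPS, T):
    ref = redundant[t - TAPS:t]     # delayed copies of the known input
    pred = w @ ref                  # adaptive filter's negative image
    out[t] = sensed[t] - pred       # cancellation at the output
    w += LR * out[t] * ref          # LMS update drives the residual down

print(np.abs(out[:T // 2]).mean(), np.abs(out[T // 2:]).mean())  # residual shrinks
```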
