1

Information representation on a universal neural chip

Galluppi, Francesco January 2013 (has links)
How can science possibly understand the organ through which the Universe knows itself? The scientific method can be used to study how electro-chemical signals represent information in the brain. However, modelling it by simulating its structures and functions is a computation- and communication-intensive task. Whilst supercomputers offer great computational power, brain-scale models are challenging in terms of communication overheads and power consumption. Dedicated neural hardware can be used to enhance simulation performance, but it is often optimised for specific models. While performance and flexibility are desirable simulation features, there is no perfect modelling platform, and the choice is subordinate to the specific research question being investigated. In this context SpiNNaker constitutes a novel parallel architecture, with communication and memory accesses optimised for spike-based computation, permitting simulation of large spiking neural networks in real time. To exploit SpiNNaker's performance and reconfigurability fully, a neural network model must be translated from its conceptual form into data structures for a parallel system. This thesis presents a flexible approach to distributing and mapping neural models onto SpiNNaker, within the constraints introduced by its specialised architecture. The conceptual map underlying this approach characterizes the interaction between the model and the system: during the build phase the model is placed on SpiNNaker; at runtime, placement information mediates communication with devices and instrumentation for data analysis. Integration within the computational neuroscience community is achieved by interfaces to two domain-specific languages: PyNN and Nengo. 
The real-time, event-driven nature of the SpiNNaker platform is explored using address-event representation sensors and robots, performing visual processing using a silicon retina, and navigation on a robotic platform based on a cortical, basal ganglia and hippocampal place cells model. The approach has been successfully exploited to run models on all iterations of SpiNNaker chips and development boards to date, and demonstrated live in workshops and conferences.
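The spike-based computation that SpiNNaker is optimised for can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, the kind of point-neuron model that PyNN describes and the chip simulates in real time. This is a plain-Python sketch with illustrative parameter values, not the thesis's SpiNNaker implementation:

```python
# Minimal leaky integrate-and-fire neuron: a sketch of the neuron model
# PyNN exposes. All parameter values are illustrative, not from the thesis.

def simulate_lif(input_current, dt=1.0, tau_m=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Integrate dV/dt = (v_rest - V + R*I) / tau_m; return spike times (ms)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        v += dt * (v_rest - v + r_m * i_in) / tau_m
        if v >= v_thresh:          # threshold crossing -> emit a spike event
            spikes.append(step * dt)
            v = v_reset            # reset the membrane after the spike
    return spikes

spike_times = simulate_lif([2.0] * 200)  # constant drive for 200 ms
print(len(spike_times), "spikes")
```

On SpiNNaker, events like these spikes are what the packet-routing fabric carries between cores; the membrane update above is what each core computes locally.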
2

Brain-inspired Stochastic Models and Implementations

Al-Shedivat, Maruan 12 May 2015 (has links)
One of the approaches to building artificial intelligence (AI) is to decipher the principles of brain function and to employ similar mechanisms for solving cognitive tasks, such as visual perception or natural language understanding, using machines. The recent breakthrough, named deep learning, demonstrated that large multi-layer networks of artificial neural-like computing units attain remarkable performance on some of these tasks. Nevertheless, such artificial networks remain only loosely inspired by the brain, whose rich structures and mechanisms may further suggest new algorithms or even new paradigms of computation. In this thesis, we explore brain-inspired probabilistic mechanisms, such as neural and synaptic stochasticity, in the context of generative models. The two questions we ask here are: (i) what kind of models can describe a neural learning system built of stochastic components? and (ii) how can we implement such systems efficiently? To give specific answers, we consider two well-known models and the corresponding neural architectures: the Naive Bayes model implemented with a winner-take-all spiking neural network and the Boltzmann machine implemented in a spiking or non-spiking fashion. We propose and analyze an efficient neuromorphic implementation of the stochastic neural firing mechanism and study the effects of synaptic unreliability on learning generative energy-based models implemented with neural networks.
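The Naive Bayes / winner-take-all pairing mentioned above can be sketched in a few lines: each output unit accumulates log class-conditional evidence from binary inputs, and a stochastic WTA samples a winner with probability proportional to exp(evidence), i.e. a softmax over classes. The toy likelihood values below are illustrative, not taken from the thesis:

```python
import math, random

def wta_sample(evidence, rng):
    """Sample a winner index with P(k) proportional to exp(evidence[k])."""
    m = max(evidence)
    weights = [math.exp(e - m) for e in evidence]  # numerically stable softmax
    r = rng.random() * sum(weights)
    for k, w in enumerate(weights):
        r -= w
        if r <= 0:
            return k
    return len(weights) - 1

def naive_bayes_wta(x, log_prior, log_lik, rng):
    """x: binary input vector; log_lik[k][i] = log P(x_i = 1 | class k)."""
    evidence = []
    for k, prior in enumerate(log_prior):
        e = prior
        for i, xi in enumerate(x):
            p1 = log_lik[k][i]
            # add log P(x_i | class k) for an observed 1 or 0
            e += p1 if xi else math.log1p(-math.exp(p1))
        evidence.append(e)
    return wta_sample(evidence, rng)

rng = random.Random(0)
log_prior = [math.log(0.5), math.log(0.5)]
# class 0 favours feature 0, class 1 favours feature 1 (toy values)
log_lik = [[math.log(0.9), math.log(0.1)],
           [math.log(0.1), math.log(0.9)]]
wins = sum(naive_bayes_wta([1, 0], log_prior, log_lik, rng) == 0
           for _ in range(1000))
print(wins)  # class 0 should win the large majority of trials
```

In a spiking implementation the stochastic sampling step is realised by the neurons' own noisy firing, which is what the thesis maps onto unreliable nanoscale devices.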
3

Pattern recognition with spiking neural networks and the ROLLS low-power online learning neuromorphic processor

Ternstedt, Andreas January 2017 (has links)
Online monitoring applications requiring advanced pattern recognition capabilities implemented in resource-constrained wireless sensor systems are challenging to construct using standard digital computers. An interesting alternative solution is to use a low-power neuromorphic processor like the ROLLS, with subthreshold mixed analog/digital circuits and online learning capabilities that approximate the behavior of real neurons and synapses. This requires that the monitoring algorithm is implemented with spiking neural networks, which in principle are efficient computational models for tasks such as pattern recognition. In this work, I investigate how spiking neural networks can be used as a pre-processing and feature learning system in a condition monitoring application where the vibration of a machine with healthy and faulty rolling-element bearings is considered. Pattern recognition with spiking neural networks is investigated using simulations with Brian -- a Python-based open source toolbox -- and an implementation is developed for the ROLLS neuromorphic processor. I analyze the learned feature-response properties of individual neurons. When pre-processing the input signals with a neuromorphic cochlea known as the AER-EAR system, the ROLLS chip learns to classify the resulting spike patterns with a training error of less than 1%, at a combined power consumption of approximately 30 mW. Thus, the neuromorphic hardware system can potentially be realized in a resource-constrained wireless sensor for online monitoring applications. However, further work is needed for testing and cross-validation of the feature learning and pattern recognition networks.
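The event-based front end described here, where a neuromorphic cochlea turns a vibration signal into spikes, can be sketched with a simple delta modulator: an ON event is emitted whenever the signal rises by a fixed threshold and an OFF event when it falls by the same amount. The signals and threshold below are illustrative stand-ins for real bearing vibration data:

```python
import math

def delta_modulate(signal, threshold=0.2):
    """Return (sample_index, polarity) events; polarity +1 = ON, -1 = OFF."""
    events = []
    ref = signal[0]
    for i, s in enumerate(signal[1:], start=1):
        while s - ref >= threshold:   # signal rose past the next level
            ref += threshold
            events.append((i, +1))
        while ref - s >= threshold:   # signal fell past the next level
            ref -= threshold
            events.append((i, -1))
    return events

# Toy signals: a faulty bearing adds a higher-frequency component,
# so it produces more events per second than a healthy one.
t = [i / 1000.0 for i in range(1000)]                       # 1 s at 1 kHz
healthy = [math.sin(2 * math.pi * 10 * ti) for ti in t]
faulty = [h + 0.5 * math.sin(2 * math.pi * 120 * ti)
          for h, ti in zip(healthy, t)]
print(len(delta_modulate(healthy)), len(delta_modulate(faulty)))
```

Downstream spiking networks then operate only on these sparse events, which is what makes the approach attractive for a power budget of tens of milliwatts.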
4

Critical Branching Regulation of the E-I Net Spiking Neural Network Model

Öberg, Oskar January 2019 (has links)
Spiking neural networks (SNN) are dynamic models of biological neurons that communicate with event-based signals called spikes. SNN that reproduce observed properties of biological senses like vision are developed to better understand how such systems function, and to learn how more efficient sensor systems can be engineered. A branching parameter describes the average probability for spikes to propagate between two different neuron populations. The adaptation of branching parameters towards critical values is known to be important for maximizing the sensitivity and dynamic range of SNN. In this thesis, a recently proposed SNN model for visual feature learning and pattern recognition known as the E-I Net model is studied and extended with a critical branching mechanism. The resulting modified E-I Net model is studied with numerical experiments and two different types of sensory cues. The experiments show that the modified E-I Net model demonstrates critical branching and power-law scaling behavior, as expected from SNN near criticality, but the power-laws are broken and the stimuli reconstruction error is higher compared to the error of the original E-I Net model. Thus, on the basis of these experiments, it is not clear how to properly extend the E-I Net model with a critical branching mechanism. The E-I Net model has a particular structure where the inhibitory neurons (I) are tuned to decorrelate the excitatory neurons (E), so that the visual features learned match the angular and frequency distributions of feature detectors in visual cortex V1 and different stimuli are represented by sparse subsets of the neurons. The broken power-laws correspond to different scaling behavior at low and high spike rates, which may be related to the efficacy of inhibition in the model.
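The branching parameter sigma, the average number of descendant spikes triggered per ancestor spike, is the quantity being tuned towards criticality above. A minimal Galton-Watson-style cascade shows the qualitative behaviour: sigma < 1 makes avalanches die out quickly (subcritical), and avalanche sizes grow as sigma approaches the critical value 1. This is an illustrative toy, not the E-I Net mechanism itself:

```python
import random

def avalanche_size(sigma, rng, max_size=100000):
    """Total spikes in a cascade started by a single spike."""
    active, total = 1, 1
    while active and total < max_size:
        # each active spike triggers at most one descendant with
        # probability sigma, so the mean offspring per spike is sigma
        offspring = sum(1 for _ in range(active) if rng.random() < sigma)
        active = offspring
        total += offspring
    return total

rng = random.Random(1)
subcritical = [avalanche_size(0.5, rng) for _ in range(2000)]
near_critical = [avalanche_size(0.95, rng) for _ in range(2000)]
# expected mean avalanche size is 1 / (1 - sigma): 2 vs 20
print(sum(subcritical) / len(subcritical),
      sum(near_critical) / len(near_critical))
```

Near sigma = 1 the avalanche-size distribution approaches a power law, which is the scaling behaviour the modified E-I Net model is tested against.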
5

Micromagnetic Study of Current Induced Domain Wall Motion for Spintronic Synapses

Petropoulos, Dimitrios-Petros January 2021 (has links)
Neuromorphic computing applications could be made faster and more power efficient by emulating the function of a biological synapse. Non-conventional spintronic devices have been proposed that demonstrate synaptic behavior through domain wall (DW) driving. In this work, current-induced domain wall motion has been studied through micromagnetic simulations. We investigate the synaptic behavior of a head-to-head domain wall driven by a spin-polarized current in permalloy (Py) nanostrips with shape anisotropy, where triangular notches have been modeled to account for edge roughness and provide pinning sites for the domain wall. We seek optimal material parameters to keep the critical current density for driving the domain wall on the order of 10^11 A/m^2.
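The notion of a critical current for depinning can be illustrated with a drastically simplified 1-D model: the domain wall as an overdamped particle in a periodic pinning potential (the notches), pushed by a drive u proportional to the current density. The wall escapes only when the drive exceeds the maximum pinning force. All parameters here are dimensionless and illustrative, not taken from the thesis's micromagnetic simulations:

```python
import math

def wall_escapes(u, f_pin=1.0, period=1.0, dt=1e-3, t_max=50.0):
    """Integrate dq/dt = u - f_pin*sin(2*pi*q/period); True if the wall depins."""
    q = 0.0
    for _ in range(int(t_max / dt)):
        q += dt * (u - f_pin * math.sin(2 * math.pi * q / period))
        if q > 2 * period:        # passed two pinning sites -> depinned
            return True
    return False

# Bisect between a pinned and a depinned drive to find the threshold.
lo, hi = 0.0, 2.0
for _ in range(20):
    mid = 0.5 * (lo + hi)
    if wall_escapes(mid):
        hi = mid
    else:
        lo = mid
# analytic depinning threshold is u = f_pin = 1.0 (the finite
# integration time gives a slightly larger numerical estimate)
print(round(hi, 3))
```

In the real system the mapping from u back to a current density J is set by the material parameters (polarization, saturation magnetization), which is precisely what the thesis optimizes to keep J near 10^11 A/m^2.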
6

Neuro-inspired computing enhanced by scalable algorithms and physics of emerging nanoscale resistive devices

Parami Wijesinghe 16 August 2019
Deep 'Analog Artificial Neural Networks' (AANNs) perform complex classification problems with high accuracy. However, they rely on enormous amounts of power to perform the calculations, veiling the accuracy benefits. The biological brain, on the other hand, is significantly more powerful than such networks and consumes orders of magnitude less power, indicating some conceptual mismatch. Given that biological neurons are locally connected, communicate using energy-efficient trains of spikes, and behave non-deterministically, incorporating these effects in Artificial Neural Networks (ANNs) may drive us a few steps towards more realistic neural networks.

Emerging devices can offer a plethora of benefits including power efficiency, faster operation, and low area in a vast array of applications. For example, memristors and Magnetic Tunnel Junctions (MTJs) are suitable for high-density, non-volatile Random Access Memories when compared with CMOS implementations. In this work, we analyze the possibility of harnessing the characteristics of such emerging devices to achieve neuro-inspired solutions to intricate problems.

We propose how the inherent stochasticity of nano-scale resistive devices can be utilized to realize the functionality of spiking neurons and synapses that can be incorporated in deep stochastic Spiking Neural Networks (SNN) for image classification problems. While ANNs mainly dwell in the aforementioned classification problem-solving domain, they can be adapted for a variety of other applications. One such neuro-inspired solution is the Cellular Neural Network (CNN) based Boolean satisfiability solver. Boolean satisfiability (k-SAT) is an NP-complete (k≥3) problem that constitutes one of the hardest classes of constraint satisfaction problems. We provide a proof-of-concept hardware-based analog k-SAT solver built using MTJs. The inherent physics of MTJs, enhanced by device-level modifications, is harnessed here to emulate the intricate dynamics of an analog, CNN-based, satisfiability (SAT) solver.

Furthermore, in the effort of reaching human-level performance in terms of accuracy, increasing the complexity and size of ANNs is crucial. Efficient algorithms for evaluating neural network performance are of significant importance for improving the scalability of networks, in addition to designing hardware accelerators. We propose a scalable approach for evaluating Liquid State Machines: a bio-inspired computing model where the inputs are sparsely connected to a randomly interlinked reservoir (or liquid). It has been shown that biological neurons are more likely to be connected to other neurons in close proximity, and tend to be disconnected as the neurons are spatially far apart. Inspired by this, we propose a group of locally connected neuron reservoirs, or an ensemble-of-liquids approach, for LSMs. We analyze how the segmentation of a single large liquid to create an ensemble of multiple smaller liquids affects the latency and accuracy of an LSM. In our analysis, we quantify the ability of the proposed ensemble approach to provide an improved representation of the input using the Separation Property (SP) and Approximation Property (AP). Our results illustrate that the ensemble approach enhances class discrimination (quantified as the ratio between the SP and AP), leading to improved accuracy in speech and image recognition tasks when compared to a single large liquid. Furthermore, we obtain performance benefits in terms of improved inference time and reduced memory requirements, due to the lower number of connections and the freedom to parallelize the liquid evaluation process.
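The Separation Property used above to quantify class discrimination can be sketched directly: feed two input streams into a fixed random reservoir and measure the distance between the liquid states they produce. A small rate-based leaky-tanh reservoir stands in for the spiking liquid; sizes, sparsity, and the two toy input streams are all illustrative assumptions:

```python
import math, random

def make_reservoir(n, sparsity, scale, rng):
    """Random sparse recurrent weight matrix, the fixed 'liquid'."""
    return [[scale * rng.gauss(0, 1) if rng.random() < sparsity else 0.0
             for _ in range(n)] for _ in range(n)]

def run_liquid(w, w_in, inputs):
    """Drive a leaky-tanh reservoir with a scalar input stream; return final state."""
    n = len(w)
    x = [0.0] * n
    for u in inputs:
        x = [math.tanh(0.5 * x[i]                      # leak term
                       + sum(w[i][j] * x[j] for j in range(n))
                       + w_in[i] * u)                  # input projection
             for i in range(n)]
    return x

def distance(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

rng = random.Random(0)
n = 50
w = make_reservoir(n, 0.1, 0.3, rng)
w_in = [rng.gauss(0, 1) for _ in range(n)]
class_a = [math.sin(0.3 * t) for t in range(40)]       # two toy input classes
class_b = [math.sin(0.7 * t) for t in range(40)]
sp = distance(run_liquid(w, w_in, class_a), run_liquid(w, w_in, class_b))
same = distance(run_liquid(w, w_in, class_a), run_liquid(w, w_in, class_a))
print(sp, same)  # identical inputs give identical states, distance 0
```

The ensemble-of-liquids idea replaces the single matrix `w` with several smaller independent reservoirs whose states are concatenated, trading a little representational mixing for parallel evaluation and fewer connections.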
