11

ACCELERATION OF SPIKING NEURAL NETWORK ON GENERAL PURPOSE GRAPHICS PROCESSORS

Han, Bing 05 May 2010 (has links)
No description available.
12

Scalability and robustness of artificial neural networks

Stromatias, Evangelos January 2016 (has links)
Artificial Neural Networks (ANNs) are steadily gaining popularity, as they are used across diverse research fields and in many different contexts, ranging from biological simulations and experiments on artificial neuronal models to machine learning models intended for industrial and engineering applications. One example is the recent success of Deep Learning architectures (e.g., Deep Belief Networks [DBNs]), which are in the spotlight of machine learning research because they deliver state-of-the-art results in many domains. While the performance of such ANN architectures is greatly affected by their scale, their scalability, both during training and during execution, is limited by increased power consumption and communication overheads, which in turn limits their real-time performance. The on-going work on the design and construction of spike-based neuromorphic platforms offers an alternative for running large-scale neural networks, such as DBNs, with significantly lower power consumption and lower latencies, but it has to overcome the hardware limitations and model specialisations imposed by these types of circuits. SpiNNaker is a novel massively parallel, fully programmable and scalable architecture designed to enable real-time spiking neural network (SNN) simulations. These properties make SpiNNaker an attractive neuromorphic exploration platform for running large-scale ANNs; however, it is necessary to investigate thoroughly both its power requirements and its communication latencies. This research focuses on two main aspects. First, it characterises the power requirements and communication latencies of the SpiNNaker platform while running large-scale SNN simulations. The results of this investigation lead to the derivation of a power estimation model for the SpiNNaker system, a reduction of the overall power requirements, and a characterisation of the intra- and inter-chip spike latencies. Second, it provides a full characterisation of spiking DBNs, through a set of case studies that determine the impact of (a) hardware bit precision, (b) input noise, (c) weight variation, and (d) combinations of these on the classification performance of spiking DBNs for the problem of handwritten digit recognition. The results demonstrate that spiking DBNs can be realised on limited-precision hardware platforms without drastic performance loss, and thus offer an excellent compromise between accuracy and low-power, low-latency execution. These studies provide important guidelines for current and future efforts to develop custom large-scale digital and mixed-signal spiking neural network platforms.
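The bit-precision case studies above suggest a simple experiment: quantise a trained network's weights to the fixed-point grid a hardware platform would impose and measure the resulting accuracy drop. Below is a minimal Python sketch of that idea; the Q3.4 format, layer size, and function name are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def quantize_fixed_point(weights, int_bits=3, frac_bits=4):
    """Round weights to a signed Qm.n fixed-point grid, as a stand-in
    for the limited bit precision of neuromorphic hardware."""
    scale = 2 ** frac_bits
    max_val = 2 ** int_bits - 1.0 / scale
    return np.clip(np.round(weights * scale) / scale, -max_val, max_val)

# Hypothetical experiment: compare classification accuracy of a trained
# network before and after quantizing its weight matrices.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, size=(784, 500))   # e.g. one DBN layer for MNIST
w_q = quantize_fixed_point(w, int_bits=3, frac_bits=4)
print("max quantization error:", np.max(np.abs(w - w_q)))
```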
13

Use and Application of 2D Layered Materials-Based Memristors for Neuromorphic Computing

Alharbi, Osamah 01 February 2023 (has links)
This work presents a step forward in the use of 2D layered materials (2DLM), specifically hexagonal boron nitride (h-BN), for the fabrication of memristors. In this study, we fabricate, characterize, and use h-BN based memristors with an Ag/few-layer h-BN/Ag structure to implement a fully functioning artificial leaky integrate-and-fire neuron in hardware. The devices show volatile resistive switching behavior with no electro-forming process required, a relatively low VSET, and endurance beyond 1.5 million cycles. In addition, we present some of the failure mechanisms in these devices, with statistical analyses to understand their causes, as well as a statistical study of both cycle-to-cycle and device-to-device variability across 20 devices. Moreover, we study the use of these devices to implement a functioning artificial leaky integrate-and-fire neuron similar to a biological neuron in the brain. We provide a SPICE simulation as well as a hardware implementation of the artificial neuron, which are in full agreement, showing that our device could be used for such an application. Additionally, we study the use of these devices as an activation function for spiking neural networks (SNNs) by providing a SPICE simulation of a fully trained network, where the artificial spiking neuron is connected to the output terminal of a crossbar array. The SPICE simulations provide a proof of concept for using h-BN based memristors as activation functions for SNNs.
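For readers unfamiliar with the neuron model being implemented, the following is a minimal Python sketch of a software leaky integrate-and-fire neuron (tau * dV/dt = -(V - V_rest) + R_m * I, with a reset at threshold). All parameter values are illustrative assumptions, not the device's measured characteristics.

```python
import numpy as np

def lif_simulate(input_current, dt=1e-4, tau=0.02, r_m=1e6,
                 v_rest=0.0, v_thresh=0.05, v_reset=0.0):
    """Forward-Euler simulation of a leaky integrate-and-fire neuron:
    tau * dV/dt = -(V - v_rest) + R_m * I(t); spike and reset at threshold."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        v += (-(v - v_rest) + r_m * i_in) * (dt / tau)
        if v >= v_thresh:          # leaky integration crosses threshold
            spikes.append(step * dt)
            v = v_reset            # membrane resets after the spike
    return spikes

# Constant 60 nA drive for 100 ms produces a regular spike train.
times = lif_simulate(np.full(1000, 60e-9))
print(f"{len(times)} spikes, first at {times[0]*1e3:.1f} ms")
```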
14

Self organisation and hierarchical concept representation in networks of spiking neurons

Rumbell, Timothy January 2013 (has links)
The aim of this work is to introduce modular processing mechanisms for cortical functions implemented in networks of spiking neurons. Neural maps are a feature of cortical processing found to be generic throughout sensory cortical areas, and self-organisation to the fundamental properties of input spike trains has been shown to be an important property of cortical organisation. Additionally, oscillatory behaviour, temporal coding of information, and learning through spike-timing-dependent plasticity (STDP) are all frequently observed in the cortex. The traditional self-organising map (SOM) algorithm attempts to capture the computational properties of this cortical self-organisation in a neural network. As such, a cognitive module for a spiking SOM using oscillations, phasic coding and STDP has been implemented. This model is capable of mapping to distributions of input data in a manner consistent with the traditional SOM algorithm, and of categorising generic input data sets. Higher-level cortical processing areas appear to feature a hierarchical category structure that is founded on a feature-based object representation. The spiking SOM model is therefore extended to accept input patterns in the form of sets of binary feature-object relations, such as those seen in the field of formal concept analysis. It is demonstrated that this extended model is capable of learning to represent the hierarchical conceptual structure of an input data set using the existing learning scheme. Furthermore, manipulations of network parameters allow the level of hierarchy used for either learning or recall to be adjusted, and the network is capable of learning comparable representations when trained with incomplete input patterns. Together these two modules provide related approaches to the generation of both topographic mapping and hierarchical representation of input spaces, which can potentially be combined and used as the basis for advanced spiking neuron models of the learning of complex representations.
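As background to the STDP learning scheme named above, here is a minimal Python sketch of the pair-based STDP weight update: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise, each with an exponential time window. The amplitudes and time constants are generic textbook assumptions, not the parameters used in the thesis.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=0.02, tau_minus=0.02, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic spike, depress otherwise (exponential windows)."""
    dt = t_post - t_pre
    if dt > 0:                                  # pre before post -> LTP
        w += a_plus * np.exp(-dt / tau_plus)
    else:                                       # post before pre -> LTD
        w -= a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, 0.0, w_max))

print(stdp_update(0.5, t_pre=0.010, t_post=0.015))  # potentiation
print(stdp_update(0.5, t_pre=0.015, t_post=0.010))  # depression
```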
15

Spiking Neural Networks: Neuron Models, Plasticity, and Graph Applications

Donachy, Shaun 01 January 2015 (has links)
Networks of spiking neurons can be used not only for brain modeling but also to solve graph problems. With the use of a computationally efficient Izhikevich neuron model combined with plasticity rules, the networks possess self-organizing characteristics. Two different time-based synaptic plasticity rules are used to adjust weights among nodes in a graph, resulting in solutions to graph problems such as finding the shortest path and clustering.
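Since the abstract names the Izhikevich model, a minimal Python sketch of it may be useful: v' = 0.04v^2 + 5v + 140 - u + I and u' = a(bv - u), with the reset v <- c, u <- u + d on a spike. The regular-spiking parameters and drive level below are the standard published defaults, chosen here purely for illustration.

```python
import numpy as np

def izhikevich(i_input, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich (2003) neuron model; default parameters give
    regular-spiking behaviour. dt is in milliseconds."""
    v, u = c, b * c
    spike_times = []
    for step, i in enumerate(i_input):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + i)
        u += dt * a * (b * v - u)
        if v >= 30.0:                       # spike detected
            spike_times.append(step * dt)
            v, u = c, u + d                 # membrane reset
    return spike_times

spikes = izhikevich(np.full(2000, 10.0))    # 1 s of constant drive
print(len(spikes), "spikes in 1 s")
```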
16

Real time Spaun on SpiNNaker: functional brain simulation on a massively-parallel computer architecture

Mundy, Andrew January 2017 (has links)
Model building is a fundamental scientific tool. Increasingly there is interest in building neurally-implemented models of cognitive processes with the intention of modelling brains. However, simulation of such models can be prohibitively expensive in both the time and energy required. For example, Spaun - "the world's first functional brain model", comprising 2.5 million neurons - required 2.5 hours of computation for every second of simulation on a large compute cluster. SpiNNaker is a massively parallel, low-power architecture specifically designed for the simulation of large neural models in biological real time. Ideally, SpiNNaker could be used to facilitate rapid simulation of models such as Spaun. However, the Neural Engineering Framework (NEF), with which Spaun is built, maps poorly to the architecture - to the extent that models such as Spaun would consume vast portions of SpiNNaker machines and still not run as fast as biology. This thesis investigates whether real-time simulation of Spaun on SpiNNaker is at all possible. Three techniques which facilitate such a simulation are presented. The first reduces the memory, compute and network loads imposed by the NEF. Consequently, it is demonstrated that a core component of the Spaun network can be simulated with only a twentieth of the cores that would otherwise have been needed. The second technique uses a small number of additional cores to significantly reduce the network traffic required to simulate this core component. As a result, simulation in real time is shown to be feasible. The final technique is a novel logic minimisation algorithm which reduces the size of the routing tables that direct information around the SpiNNaker machine. This last technique is necessary to allow the routing of models of the scale and complexity of Spaun. Together these provide the ability to simulate the Spaun model in biological real time - representing a speed-up of 9000 times over previously reported results - with room for much larger models on full-scale SpiNNaker machines.
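The memory and traffic reduction mentioned above rests on a well-known property of the NEF: the full connection weight matrix between two ensembles factors as W = E * D (encoders times decoders), so a low-dimensional decoded value can be transmitted instead of per-neuron activities. A minimal Python sketch of that identity, with illustrative ensemble sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post, d = 1000, 1000, 16   # neurons per ensemble, value dimension

decoders = rng.normal(size=(d, n_pre))    # D: activities -> represented value
encoders = rng.normal(size=(n_post, d))   # E: value -> input currents

activities = rng.poisson(5.0, size=n_pre).astype(float)

# Dense view: a full n_post x n_pre weight matrix W = E @ D.
w_full = encoders @ decoders
dense = w_full @ activities

# Factored view: decode a d-vector, transmit it, then re-encode.
factored = encoders @ (decoders @ activities)

print(np.allclose(dense, factored))          # True: same result
print(w_full.size, "weights vs", decoders.size + encoders.size)
```

With these sizes, the factored form stores 32,000 values instead of 1,000,000, which is the flavour of saving that makes real-time simulation plausible on SpiNNaker's limited per-core memory.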
17

Modulation of fast-spiking interneurons using two-pore channel blockers

Whittaker, Maximilian Anthony Erik January 2018 (has links)
The balance between excitatory and inhibitory synaptic transmission within and across neurons in active networks is crucial for cortical function and may allow for rapid transitions between stable network states. GABAergic interneurons mediate the majority of inhibitory transmission in the cortex, and therefore contribute to the global balance of activity in neuronal networks. Disruption of the network balance due to impaired inhibition has been implicated in several neuropsychiatric diseases (Marin 2012). Schizophrenia and autism are two highly heritable cognitive disorders with complex genetic aetiologies but overlapping behavioural phenotypes that share common imbalances in neuronal network activity (Gao & Penzes 2015). An increasing body of evidence suggests that functional abnormalities in a particular group of cortical GABAergic interneurons expressing the calcium-binding protein parvalbumin (PV) are involved in the pathology of these disorders (Marin 2012). As deficits in this neuronal population have been linked to these disorders, it could be useful to target them and increase their activity. A conserved feature of PV cells is their unusually low input resistance compared to other neuronal populations. This feature is regulated by the expression of leak K+ channels, believed to be mediated in part by TASK and TREK subfamily two-pore K+ channels (Goldberg et al. 2011). The selective blockade of specific leak K+ channels could therefore be applied to increase the activity of PV cells. In this thesis, specific TASK-1/3 and TREK-1 channel blockers were applied in cortical mouse slices in an attempt to increase the output of PV cells. The blockade of either channel did not successfully increase the amplitude of PV cell-evoked inhibitory postsynaptic currents (IPSCs) onto principal cells. However, while the blockade of TASK-1/3 channels failed to depolarise the membrane or alter the input resistance, the blockade of TREK-1 channels resulted in a small but significant depolarisation of the membrane potential in PV cells. Interestingly, TREK-1 channel blockade also increased action potential firing of PV cells in response to given current stimuli, suggesting that TREK-1 could be a useful target for PV cell modulation. These results demonstrate for the first time the functional effects of using specific two-pore K+ channel blockers in PV cells. Furthermore, these data provide electrophysiological evidence against the functional expression of TASK-1/3 in PV cells. It could therefore be interesting to further characterise the precise subtypes of leak K+ channels responsible for their low input resistance. This would help to classify the key contributors to the background K+ conductances present in PV cells, in addition to finding suitable targets to increase their activity.
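The link between leak K+ conductance and PV-cell excitability described above can be illustrated with a passive membrane model: the resting potential is the conductance-weighted average of the reversal potentials, and the input resistance is the inverse of the total conductance, so blocking part of the K+ leak both depolarises the cell and raises its input resistance. The Python sketch below uses illustrative values, not measured ones.

```python
def steady_state(g_leak_k, g_other=5.0, e_k=-90.0, e_other=-50.0):
    """Resting potential and input resistance of a passive membrane with
    a K+ leak conductance in parallel with the remaining conductances.
    Units (nS, mV) and values are illustrative, not measured data."""
    g_total = g_leak_k + g_other
    v_rest = (g_leak_k * e_k + g_other * e_other) / g_total
    r_input = 1.0 / g_total
    return v_rest, r_input

v0, r0 = steady_state(g_leak_k=10.0)   # leak channels intact
v1, r1 = steady_state(g_leak_k=5.0)    # half the K+ leak blocked
print(f"control: {v0:.1f} mV, R={r0:.3f}")   # -76.7 mV, lower R
print(f"blocked: {v1:.1f} mV, R={r1:.3f}")   # -70.0 mV, higher R
```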
18

Learning in large-scale spiking neural networks

Bekolay, Trevor January 2011 (has links)
Learning is central to the exploration of intelligence. Psychology and machine learning provide high-level explanations of how rational agents learn. Neuroscience provides low-level descriptions of how the brain changes as a result of learning. This thesis attempts to bridge the gap between these two levels of description by solving problems using machine learning ideas, implemented in biologically plausible spiking neural networks with experimentally supported learning rules. We present three novel neural models that contribute to the understanding of how the brain might solve the three main problems posed by machine learning: supervised learning, in which the rational agent has a fine-grained feedback signal, reinforcement learning, in which the agent gets sparse feedback, and unsupervised learning, in which the agent has no explicit environmental feedback. In supervised learning, we argue that previous models of supervised learning in spiking neural networks solve a problem that is less general than the supervised learning problem posed by machine learning. We use an existing learning rule to solve the general supervised learning problem with a spiking neural network. We show that the learning rule can be mapped onto the well-known backpropagation rule used in artificial neural networks. In reinforcement learning, we augment an existing model of the basal ganglia to implement a simple actor-critic model that has a direct mapping to brain areas. The model is used to recreate behavioural and neural results from an experimental study of rats performing a simple reinforcement learning task. In unsupervised learning, we show that the BCM rule, a common learning rule used in unsupervised learning with rate-based neurons, can be adapted to a spiking neural network. We recreate the effects of STDP, a learning rule with strict time dependencies, using BCM, which does not explicitly remember the times of previous spikes. The simulations suggest that BCM is a more general rule than STDP. Finally, we propose a novel learning rule that can be used in all three of these simulations. The existence of such a rule suggests that the three types of learning examined separately in machine learning may not be implemented with separate processes in the brain.
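Of the rules discussed above, BCM is compact enough to sketch: the weight change is eta * y * (y - theta) * x, with a sliding modification threshold theta that tracks the average squared postsynaptic activity. The Python sketch below uses rate-based units and generic illustrative parameters, not the thesis's spiking adaptation.

```python
import numpy as np

def bcm_step(w, x, y, theta, eta=1e-3, tau_theta=100.0, dt=1.0):
    """One step of the BCM rule: dw = eta * y * (y - theta) * x, with a
    sliding threshold theta tracking <y^2>. Unlike STDP, it depends on
    activity levels rather than exact spike times."""
    w = w + eta * y * (y - theta) * x
    theta = theta + (dt / tau_theta) * (y * y - theta)
    return np.clip(w, 0.0, None), theta

rng = np.random.default_rng(2)
w, theta = np.full(10, 0.5), 1.0
for _ in range(1000):
    x = rng.random(10)            # presynaptic rates
    y = float(w @ x)              # linear postsynaptic response
    w, theta = bcm_step(w, x, y, theta)
print(w.round(3), round(theta, 3))
```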
19

Spiking Phenomenon in High Intensity Beam Welding

Chen, Kuo-Hsin 04 July 2000 (has links)
Spiking, a periodic melting and solidification at the bottom of the fusion zone during high-intensity beam welding, is investigated experimentally and theoretically in this work. A spike is a sudden increase in penetration beyond what might be called the average penetration line. Many spikes have voids in their lower portions because molten metal does not fuse to the sides of the hole, producing a condition similar to a cold shut in a casting. These defects seriously reduce the strength of the joint. Owing to the significant role of specular reflection in absorption, an investigation of the effects of the beam characteristics, especially the focal location, on spiking is important. Furthermore, as the cavity base oscillates upward and downward relative to the focal location, the central region subject to direct irradiation changes instantaneously from maximum to zero and vice versa. This leads to a several-hundred-fold difference in energy absorption and strongly periodic melting at the cavity base. The physical phenomenon of spiking is characterised by comparing the measured data with predictions based on a scale analysis of the transport processes near the cavity base and of the energy absorption as a function of focal location.
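The sensitivity of absorbed intensity to focal location can be illustrated with textbook Gaussian-beam optics: the on-axis intensity falls as 1/w(z)^2, with w(z) = w0 * sqrt(1 + (z/zR)^2). The Python sketch below uses illustrative CO2-laser numbers, not the thesis's beam parameters, and reproduces the order-of-magnitude drop described above.

```python
import numpy as np

def on_axis_intensity(z, w0=50e-6, wavelength=10.6e-6, power=1.0):
    """Peak (on-axis) intensity of a focused Gaussian beam a distance z
    from the focal plane: I = 2P / (pi * w(z)^2), where
    w(z) = w0 * sqrt(1 + (z/zR)^2). Illustrative numbers only."""
    z_r = np.pi * w0**2 / wavelength          # Rayleigh range
    w_z = w0 * np.sqrt(1.0 + (z / z_r) ** 2)  # beam radius at z
    return 2.0 * power / (np.pi * w_z**2)

for z_mm in (0.0, 1.0, 5.0, 10.0):
    print(f"{z_mm:4.1f} mm defocus -> {on_axis_intensity(z_mm*1e-3):.3e} W/m^2")
```

With these assumed numbers, the on-axis intensity at 10 mm of defocus is roughly two orders of magnitude below its value at focus, consistent with the several-hundred-fold variation in absorption described in the abstract.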
