About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

A Memristor-Based Liquid State Machine for Auditory Signal Recognition

Henderson, Stephen Alexander, Jr. 09 August 2021 (has links)
No description available.
2

Aspects of learning within networks of spiking neurons

Carnell, Andrew Robert January 2008 (has links)
Spiking neural networks have, in recent years, become a popular tool for investigating the properties and computational performance of large, massively connected networks of neurons. Equally interesting is the investigation of the potential computational power of individual spiking neurons. An overview is provided of current and relevant research into the Liquid State Machine, biologically inspired artificial STDP learning mechanisms, and aspects of the computational power of artificial recurrent networks of spiking neurons. First, it is shown that, using simple structures of spiking Leaky Integrate and Fire (LIF) neurons, a network n(P) can be built to perform any program P that can be performed by a general parallel programming language. Next, a form of STDP learning with normalisation is developed, referred to as STDP + N learning. The effects of applying this STDP + N learning within recurrently connected networks of neurons are then investigated. It is shown experimentally that, in very specific circumstances, Anti-Hebbian and Hebbian STDP learning may be considered approximately equivalent processes. A metric is then developed that can be used to measure the distance between any two spike trains. The metric is then used, along with STDP + N learning, in an experiment to examine the capacity of a single spiking neuron that receives multiple input spike trains to simultaneously learn many temporally precise input/output spike train associations. The STDP + N learning is further modified for use in recurrent networks of spiking neurons, giving the STDP + N Type 2 learning methodology. An experiment is devised which demonstrates that the Type 2 method of applying learning to the synapses of a recurrent network — effectively a randomly shifting locality of learning — can enable the network to learn firing patterns that the typical application of learning cannot.
The resulting networks could, in theory, be used to create the simple structures discussed in the first chapter of original work.
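The "STDP with normalisation" idea the abstract describes can be sketched as a pair-based STDP update followed by rescaling a neuron's incoming weights to a fixed sum. This is a minimal illustration, not the thesis's actual STDP + N formulation; the constants A_PLUS, A_MINUS, TAU, and W_SUM are placeholder values.

```python
import math

# Illustrative pair-based STDP rule with post-hoc normalisation.
# All parameters are placeholders, not the thesis's actual constants.
A_PLUS, A_MINUS, TAU, W_SUM = 0.05, 0.055, 20.0, 1.0

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: potentiate (Hebbian)
        return A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:  # post before pre: depress
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

def apply_stdp_n(weights, pre_spikes, post_spikes):
    """Apply STDP to each synapse, then rescale so the incoming
    weights of the postsynaptic neuron sum to W_SUM."""
    new_w = []
    for w, pre in zip(weights, pre_spikes):
        dw = sum(stdp_dw(tp, tq) for tp in pre for tq in post_spikes)
        new_w.append(max(0.0, w + dw))
    total = sum(new_w)
    return [w * W_SUM / total for w in new_w] if total > 0 else new_w
```

The normalisation step keeps total synaptic drive constant, so potentiating one synapse implicitly depresses its competitors — one common motivation for adding normalisation to plain STDP.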
3

Methodology and Techniques for Building Modular Brain-Computer Interfaces

Cummer, Jason 05 January 2015 (has links)
Commodity brain-computer interfaces (BCI) are beginning to accompany everything from toys and games to sophisticated health care devices. These contemporary interfaces allow for varying levels of interaction with a computer. Not surprisingly, the more intimately BCIs are integrated into the nervous system, the better the control a user can exert on a system. At one end of the spectrum, implanted systems can enable an individual with full body paralysis to utilize a robot arm and hold hands with their loved ones [28, 62]. At the other end of the spectrum, the untapped potential of commodity devices supporting electroencephalography (EEG) and electromyography (EMG) technologies requires innovative approaches and further research. This thesis proposes a modularized software architecture designed to build flexible systems based on input from commodity BCI devices. An exploratory study using a commodity EEG provides a concrete assessment of the potential for the modularity of the system to foster innovation and exploration, allowing for a combination of a variety of algorithms for manipulating data and classifying results. Specifically, this study analyzes a pipelined architecture for researchers, starting with the collection of spatio-temporal brain data (STBD) from a commodity EEG device and correlating it with intentional behaviour involving keyboard and mouse input. Though classification proves troublesome in the preliminary dataset considered, the architecture demonstrates a unique and flexible combination of a liquid state machine (LSM) and a deep belief network (DBN). Research in methodologies and techniques such as these is required for innovation in BCIs, as commodity devices, processing power, and algorithms continue to improve. Limitations in terms of types of classifiers, their range of expected inputs, discrete versus continuous data, spatial and temporal considerations, and alignment with neural networks are also identified.
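The modular, pipelined architecture described above can be sketched as a chain of interchangeable stages with a uniform interface, so filters and classifiers can be swapped freely. The stage names and sample format here are illustrative stand-ins, not the thesis's API; the classifier stub merely takes the place of the LSM/DBN combination.

```python
# Minimal sketch of a modular BCI pipeline: each stage is an
# independent module, callable(sample) -> sample, so stages can be
# recombined without touching each other's internals.

class Pipeline:
    def __init__(self, *stages):
        self.stages = stages  # ordered sequence of processing modules

    def process(self, sample):
        for stage in self.stages:
            sample = stage(sample)
        return sample

def bandpass_stub(sample):
    # Placeholder for a real EEG band-pass filter module;
    # here it just removes the DC offset across channels.
    mean = sum(sample) / len(sample)
    return [x - mean for x in sample]

def threshold_classifier(sample):
    # Placeholder for an LSM/DBN classifier module: 1 if any
    # channel deviates strongly from baseline, else 0.
    return 1 if max(abs(x) for x in sample) > 0.5 else 0

bci = Pipeline(bandpass_stub, threshold_classifier)
```

Because every stage shares the same call signature, replacing `threshold_classifier` with a trained model changes one constructor argument and nothing else — which is the flexibility argument the abstract makes.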
4

On the Effect of Heterogeneity on the Dynamics and Performance of Dynamical Networks

Goudarzi, Alireza 01 January 2012 (has links)
The high cost of processor fabrication plants and approaching physical limits have started a new wave of research in alternative computing paradigms. As an alternative to top-down manufactured silicon-based computers, research in computing using natural and physical systems directly has recently gained a great deal of interest. A branch of this research promotes the idea that any physical system with sufficiently complex dynamics is able to perform computation. The power of networks in representing complex interactions between many parts makes them a suitable choice for modeling physical systems. Many studies used networks with a homogeneous structure to describe computational circuits. However, physical systems are inherently heterogeneous. We aim to study the effect of heterogeneity in the dynamics of physical systems as it pertains to information processing. Two particularly well-studied network models that represent information processing in a wide range of physical systems are Random Boolean Networks (RBN), which are used to model gene interactions, and Liquid State Machines (LSM), which are used to model brain-like networks. In this thesis, we study the effects of function heterogeneity, in-degree heterogeneity, and interconnect irregularity on the dynamics and the performance of RBN and LSM. First, we introduce the model parameters to characterize the heterogeneity of components in RBN and LSM networks. We then quantify the effects of heterogeneity on the network dynamics. For the three heterogeneity aspects that we studied, we found that the effects of heterogeneity on RBN and LSM are very different. We find that in LSM the in-degree heterogeneity decreases the chaoticity in the network, whereas it increases chaoticity in RBN. For interconnect irregularity, heterogeneity decreases the chaoticity in LSM, while its effect on the RBN dynamics depends on the connectivity.
For K < 2, heterogeneity in the interconnect will increase the chaoticity in the dynamics, and for K > 2 it decreases the chaoticity. We find that function heterogeneity has virtually no effect on the LSM dynamics. In RBN, however, function heterogeneity actually makes the dynamics predictable as a function of connectivity and heterogeneity in the network structure. We hypothesize that node heterogeneity in RBN may help signal processing because of the variety of signal decomposition by different nodes.
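The notion of "chaoticity" as a function of connectivity K can be illustrated with a small Random Boolean Network and a Derrida-style perturbation probe: run two copies of the network from states differing in a single node and measure how far they diverge after one step. This is a generic sketch of the standard RBN formalism, not the thesis's specific heterogeneity parameterization; N, K, and the trial count are arbitrary choices.

```python
import random

def make_rbn(n, k, rng):
    """Each node gets k distinct random inputs and a random Boolean
    function (a truth table over 2**k input combinations)."""
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: each node reads its inputs as a binary
    index into its truth table."""
    next_state = []
    for node_in, table in zip(inputs, tables):
        idx = 0
        for src in node_in:
            idx = (idx << 1) | state[src]
        next_state.append(table[idx])
    return next_state

def one_step_divergence(n, k, trials=200, seed=1):
    """Average Hamming distance after one step between trajectories
    starting from states that differ in exactly one node."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        inputs, tables = make_rbn(n, k, rng)
        s = [rng.randint(0, 1) for _ in range(n)]
        t = list(s)
        t[rng.randrange(n)] ^= 1
        total += sum(a != b for a, b in zip(step(s, inputs, tables),
                                            step(t, inputs, tables)))
    return total / trials
```

For homogeneous random functions the expected one-step divergence is roughly K/2, so K = 2 sits at the critical boundary the abstract refers to: divergence below one copy of the perturbation is ordered, above it is chaotic.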
5

Improving Liquid State Machines Through Iterative Refinement of the Reservoir

Norton, R David 18 March 2008 (has links) (PDF)
Liquid State Machines (LSMs) exploit the power of recurrent spiking neural networks (SNNs) without training the SNN. Instead, a reservoir, or liquid, is randomly created which acts as a filter for a readout function. We develop three methods for iteratively refining a randomly generated liquid to create a more effective one. First, we apply Hebbian learning to LSMs by building the liquid with spike-timing-dependent plasticity (STDP) synapses. Second, we create an eligibility-based reinforcement learning algorithm for synaptic development. Third, we apply principles of Hebbian learning and reinforcement learning to create a new algorithm called separation driven synaptic modification (SDSM). These three methods are compared across four artificial pattern recognition problems, generating only fifty liquids for each problem. Each of these algorithms shows overall improvements to LSMs, with SDSM demonstrating the greatest improvement. SDSM is also shown to generalize well and outperforms traditional LSMs when presented with speech data obtained from the TIMIT dataset.
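The "separation" statistic that gives SDSM its name can be sketched as follows: liquid state vectors (for example, per-neuron spike counts over a window) are grouped by input class, and separation is the mean pairwise distance between class centroids. The thesis's exact definition may differ in detail; this is a plausible minimal version.

```python
import math

def centroid(states):
    """Component-wise mean of a list of equal-length state vectors."""
    n = len(states)
    return [sum(s[i] for s in states) / n for i in range(len(states[0]))]

def separation(states_by_class):
    """Mean Euclidean distance between the centroids of every pair of
    classes; higher means the liquid tells the classes apart better."""
    cents = [centroid(s) for s in states_by_class]
    dists = []
    for i in range(len(cents)):
        for j in range(i + 1, len(cents)):
            dists.append(math.dist(cents[i], cents[j]))
    return sum(dists) / len(dists) if dists else 0.0
```

A liquid whose responses to two input classes barely differ scores near zero; an algorithm in the spirit of SDSM would then adjust reservoir synapses in the direction that increases this score, rather than training the readout alone.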
6

Sensory input encoding and readout methods for in vitro living neuronal networks

Ortman, Robert L. 06 July 2012 (has links)
Establishing and maintaining successful communication stands as a critical prerequisite for achieving the goals of inducing and studying advanced computation in small-scale living neuronal networks. The following work establishes a novel and effective method for communicating arbitrary "sensory" input information to cultures of living neurons, living neuronal networks (LNNs), consisting of approximately 20 000 rat cortical neurons plated on microelectrode arrays (MEAs) containing 60 electrodes. The sensory coding algorithm determines a set of effective codes (symbols), composed of different spatio-temporal patterns of electrical stimulation, such that the LNN consistently produces a unique response to each individual symbol. The algorithm evaluates random sequences of candidate electrical stimulation patterns for evoked-response separability and reliability via a support vector machine (SVM)-based method, and employing the separability results as a fitness metric, a genetic algorithm subsequently constructs subsets of highly separable symbols (input patterns). Sustainable input/output (I/O) bit rates of 16-20 bits per second with a 10% symbol error rate resulted for time periods of approximately ten minutes to over ten hours. To further evaluate the resulting code sets' performance, I used the system to encode approximately ten hours of sinusoidal input into stimulation patterns that the algorithm selected and was able to recover the original signal with a normalized root-mean-square error of 20-30% using only the recorded LNN responses and trained SVM classifiers. Response variations over the course of several hours observed in the results of the sine wave I/O experiment suggest that the LNNs may retain some short-term memory of the previous input sample and undergo neuroplastic changes in the context of repeated stimulation with sensory coding patterns identified by the algorithm.
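The symbol-selection idea above — score candidate stimulation patterns by how reliably their evoked responses can be told apart, then build a subset of the most separable symbols — can be sketched with a leave-one-out nearest-centroid classifier standing in for the thesis's SVM, and a greedy search standing in for its genetic algorithm. Everything here is illustrative: the response vectors are synthetic and the function names are invented for this sketch.

```python
import math

def classify_accuracy(responses):
    """Leave-one-out nearest-centroid accuracy over a mapping
    {symbol: [response vectors]} — a stand-in for SVM separability."""
    correct = total = 0
    for sym, vecs in responses.items():
        for i, v in enumerate(vecs):
            best, best_d = None, float("inf")
            for sym2, vecs2 in responses.items():
                rest = [u for j, u in enumerate(vecs2)
                        if not (sym2 == sym and j == i)]
                if not rest:
                    continue
                c = [sum(u[k] for u in rest) / len(rest)
                     for k in range(len(v))]
                d = math.dist(v, c)
                if d < best_d:
                    best, best_d = sym2, d
            correct += best == sym
            total += 1
    return correct / total

def greedy_symbol_subset(responses, size):
    """Greedily grow the symbol subset that stays most separable
    (a crude stand-in for the thesis's genetic-algorithm search)."""
    chosen = {}
    for _ in range(size):
        scored = [(classify_accuracy({**chosen, s: responses[s]}), s)
                  for s in responses if s not in chosen]
        _, s = max(scored)
        chosen[s] = responses[s]
    return set(chosen)
```

Symbols whose evoked responses overlap drag the accuracy score down and are skipped, which mirrors the fitness pressure the abstract describes: only stimulation patterns with reliably distinguishable responses survive into the final code set.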
