  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

The role of chaotic transients in neural information processing

Goh, Wee Jin January 2008 (has links)
This thesis develops the concept of the Chaotic Transient Computation Machine (CTCM), in which the mixing of trajectories creates "hot spots" that are characteristic of a particular input class. These hot spots emerge as input patterns are fed into the chaotic attractor. This scheme allows an observer neuron trained on these hot spots to classify patterns that would otherwise be unclassifiable by such a simple neural setup (i.e. a non-linearly separable problem space). This thesis also demonstrates that the CTCM approach is applicable to a variety of chaotic attractors and thus the concept is generalisable to any chaotic attractor.
32

Supervised learning in multilayer spiking neural networks

Sporea, Ioana January 2012 (has links)
In this thesis, a new supervised learning algorithm for multilayer spiking neural networks is presented. Gradient descent learning algorithms have made traditional neural networks with multiple layers one of the most powerful and flexible computational models derived from artificial neural networks. However, more recent experimental evidence suggests that biological neural systems use the exact timing of single action potentials to encode information. These findings have led to a new way of simulating neural networks based on temporal encoding with single spikes. Analytical demonstrations show that these types of neural networks are computationally more powerful than networks of rate neurons.

However, the existing learning algorithms no longer apply to spiking neural networks. Supervised learning algorithms based on gradient descent, such as SpikeProp and its extensions, have been developed for spiking neural networks with multiple layers, but these are limited to a specific model of neurons, with only the first spike being considered. Another learning algorithm, ReSuMe, for single-layer networks is based on spike-timing dependent plasticity (STDP) and uses the computational power of multiple spikes; moreover, this algorithm is not limited to a specific neuron model.

The algorithm presented here is based on the gradient descent method while making use of STDP, and can be applied to networks with multiple layers. Furthermore, the algorithm is not limited to neurons firing single spikes or to specific neuron models. Results on classic benchmarks, such as the XOR problem and the Iris data set, show that the algorithm is capable of non-linear transformations. Complex classification tasks have also been tackled with fast convergence times. The results of the simulations show that the new learning rule is as efficient as SpikeProp while having all the advantages of STDP.
The supervised learning algorithm for spiking neurons is compared with the backpropagation algorithm for rate neurons by modelling an audio-visual perceptual illusion, the McGurk effect.
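As background for the STDP component mentioned above, a minimal sketch of the classic exponential STDP window is given below; the amplitudes and time constants are illustrative assumptions, not values from the thesis:

```python
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.1, a_minus=0.12,
                 tau_plus=20.0, tau_minus=20.0):
    """Classic exponential STDP window (illustrative parameters).

    Returns the weight change for one pre/post spike pair:
    potentiation when the presynaptic spike precedes the
    postsynaptic one, depression otherwise.
    """
    dt = t_post - t_pre  # spike-time difference in ms
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

# Pre before post -> potentiation; post before pre -> depression.
print(stdp_delta_w(10.0, 15.0) > 0)   # True
print(stdp_delta_w(15.0, 10.0) < 0)   # True
```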
33

Pair-associate learning in spiking neural networks

Yusoff, Nooraini January 2012 (has links)
We propose associative learning models that integrate spike-timing dependent plasticity (STDP) and firing rate in two semi-supervised paradigms, Pavlovian and reinforcement learning. Through the Pavlovian approach, the learning rule associates paired stimuli (stimulus-stimulus), known as the predictor-choice pair. Synaptic plasticity is dependent on the timing and the rate of pre- and post-synaptic spikes within a time window. The contribution of our learning model can be attributed to the implementation of the proposed learning rules using an integration of STDP and firing rate in spatio-temporal neural networks with Izhikevich's spiking neurons; no such model has previously been reported in the literature. The model has been tested on the recognition of real visual images. As a result of learning, synchronisation of activity among inter- and intra-subpopulation neurons demonstrates association between two stimulus groups. As an improvement to the stimulus-stimulus (S-S) association model, we extend the algorithm to stimulus-stimulus-response (S-S-R) association using a reinforcement approach with reward-modulated STDP. In the latter model, the firing rate in response groups determines a reward signal that modulates synaptic changes derived from STDP processes. The S-S-R model has been successfully tested on a visual recognition task with real images and a simulation of the colour-word Stroop effect. The learning algorithm is able to perform pair-associate learning as well as to recognise the sequence of the presented stimuli. Unlike other existing gradient-based learning models, the S-S-R model implements temporal sequence learning in a more natural way, through reward-based learning whose protocol follows a behavioural experiment from a psychology study. The key novelty of our S-S-R model can be ascribed to its lateral inhibition mechanism, realised through a minimal anatomical constraint, which enables learning in highly competitive environments (e.g. the temporal logic AND and XOR problems).
The S-S model captures, for example, retrospective and prospective activity in the brain, whilst the S-S-R model exhibits the reward acquisition behaviour seen in human learning. Furthermore, we have shown that goal-directed learning can be implemented via a generic neural network with rich, realistic dynamics based on neurophysiological data. Hence the loose dependency between the model's anatomical properties and its functionality could offer a wide range of applications, especially in complex learning environments.
Keywords: spiking neural network, spike-timing dependent plasticity, associative learning, reinforcement learning, cognitive modelling
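The reward-modulated STDP idea in the S-S-R model can be caricatured with a single synapse: raw STDP contributions accumulate in a decaying eligibility trace, and only the reward signal converts the trace into an actual weight change. This sketch is our own illustration; the decay and learning-rate values are assumptions, not parameters from the thesis:

```python
def run_trial(stdp_events, rewards, w0=0.5, lr=0.05, trace_decay=0.9):
    """Single-synapse caricature of reward-modulated STDP: each step's
    STDP contribution feeds a decaying eligibility trace, and the
    reward signal gates whether the trace changes the weight."""
    w, trace = w0, 0.0
    for dw, reward in zip(stdp_events, rewards):
        trace = trace_decay * trace + dw  # accumulate and decay STDP
        w += lr * reward * trace          # consolidate only on reward
    return w

# Without reward the weight never moves, however strong the STDP input.
print(run_trial([0.2, 0.2, 0.2], [0, 0, 0]))  # 0.5
# With reward the same STDP input produces potentiation.
print(run_trial([0.2, 0.2, 0.2], [1, 1, 1]) > 0.5)  # True
```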
34

Biologically inspired neural network implementations on reconfigurable hardware

Glackin, Brendan January 2008 (has links)
For a considerable period of time, the goal of the computational intelligence research community has been the creation of an artificial system with the ability to learn for itself in a manner that replicates, to some degree, the natural intelligence of the human brain. The development of the integrated circuit (IC), and the accompanying genesis of the modern-day computer in the 1960s, was perhaps seen as a significant advancement towards this objective. However, whilst technology has progressed at an extraordinary rate, the fundamental issue is that it is very difficult to develop truly intelligent systems, irrespective of the vast number of computations that can be performed per second.
35

Advanced methods for neural modelling and applications

Zhang, Long January 2013 (has links)
Due to their simple structure and global approximation ability, single-hidden-layer neural networks have been widely used in many areas. These neural models have a standard structure consisting of one hidden layer and one output layer with linear output weights. Subset selection and gradient methods are widely used modelling methods; however, the former is not optimal and the latter may converge slowly. This thesis mainly focuses on addressing these problems. Least squares methods play a fundamental role in subset selection and gradient methods for parameter estimation and matrix inversion. In this thesis, it is found that five least squares methods are closely related, as a small modification of each least squares method can lead to the formula for another. To improve model compactness, a two-stage algorithm using orthogonal least squares methods is proposed, where the first stage is equivalent to forward subset selection and the second stage employs a refinement procedure to replace insignificant terms, leading to a more compact model. Further, the idea of the two-stage method is extended to leave-one-out cross-validation and a regularised approach to prevent over-fitting when the training data is noisy. To speed up convergence, the proposed discrete-continuous Levenberg-Marquardt algorithm considers the correlation between the hidden nodes and output weights, which is achieved by treating the output weights as dependent parameters, and optimises all the parameters simultaneously. A computational complexity analysis is given to confirm that the new method is more computationally efficient than the continuous fast algorithm. The continuous-discrete scheme is also extended to the alternative conjugate gradient and Newton methods. The advantages of all the algorithms proposed in the thesis are demonstrated by comparative results on a number of benchmark examples and a practical application.
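Stage one of the two-stage scheme, forward subset selection, can be sketched as a greedy search over candidate regressors. For clarity this toy version refits with plain least squares at each step rather than using the orthogonal decomposition the thesis employs:

```python
import numpy as np

def forward_selection(P, y, n_terms):
    """Greedy forward subset selection: at each step add the candidate
    column that most reduces the residual of a least-squares fit.
    Plain lstsq is used here for clarity; an orthogonal decomposition
    would avoid refitting from scratch."""
    selected, remaining = [], list(range(P.shape[1]))
    for _ in range(n_terms):
        best, best_err = None, np.inf
        for j in remaining:
            cols = P[:, selected + [j]]
            theta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            err = np.linalg.norm(y - cols @ theta)
            if err < best_err:
                best, best_err = j, err
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy problem: the target depends almost entirely on column 3.
rng = np.random.default_rng(0)
P = rng.standard_normal((50, 6))
y = 2.0 * P[:, 3] + 0.01 * rng.standard_normal(50)
print(forward_selection(P, y, 1))  # [3]
```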
36

A practical investigation of parallel genetic algorithms and their application to the structuring of artificial neural networks

Macfarlane, Donald January 1992 (has links)
Efficient and scalable implementations of parallel genetic algorithms (PGAs) have been devised for an affordable MIMD parallel processing system. PGAs with structured populations have been compared with traditional genetic algorithms and found to give superior search performance. The parallel processing system constructed has enabled empirical research into the evolutionary approach to constructing problem-specific artificial neural networks (ANNs). This work, involving a real world speech recognition problem, has shown that a high level parametric description of ANNs is an effective method of encoding their properties in a form suitable for manipulation by genetic algorithms.
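A coarse-grained "island" PGA with a structured population can be sketched on the OneMax toy problem; the population sizes, mutation rate and ring migration topology here are illustrative assumptions, not the configuration used in the thesis:

```python
import random

def onemax(ind):
    return sum(ind)  # fitness: number of 1-bits

def step(pop, rng, p_mut=0.02):
    """One generation on one island: binary tournament selection with
    per-bit mutation, keeping the best individual unchanged (elitism)."""
    new = [max(pop, key=onemax)[:]]
    while len(new) < len(pop):
        a, b = rng.sample(pop, 2)
        parent = max(a, b, key=onemax)
        new.append([bit ^ (rng.random() < p_mut) for bit in parent])
    return new

def island_ga(n_islands=4, pop_size=20, n_bits=32,
              generations=30, migrate_every=5, seed=0):
    """Islands evolve independently; every few generations each island
    sends its best individual to the next island around a ring,
    replacing a random (non-elite) resident there."""
    rng = random.Random(seed)
    islands = [[[rng.randint(0, 1) for _ in range(n_bits)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for g in range(1, generations + 1):
        islands = [step(pop, rng) for pop in islands]
        if g % migrate_every == 0:
            bests = [max(pop, key=onemax) for pop in islands]
            for i, pop in enumerate(islands):
                pop[rng.randrange(1, pop_size)] = bests[i - 1][:]
    return max(onemax(ind) for pop in islands for ind in pop)

best = island_ga()
print(best)  # best OneMax fitness found across all islands
```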
37

Dynamical encoding in systems of globally coupled oscillators

Borresen, Jon Carl January 2005 (has links)
No description available.
38

Low power digital self organising map

Cambio, Roberta January 2003 (has links)
As the predicted application of the Self Organising Map (SOM) in portable devices makes power dissipation a critical design issue, recent hardware implementations of the SOM have focused on power. In particular, the main principle shared among designers is to address power from the early stages of the design process. During the various phases of the process, the power performance of differing design options should be estimated in order to choose the most power-efficient one. Clearly this level-by-level approach shortens design times in comparison with a more old-fashioned approach, which collects feedback on the effect of a solution only at the end of the process. The digital implementation of the SOM presented in this thesis achieves low power performance by reducing the number of clock cycles required to calculate the distance between one element of an input vector and the corresponding reference element in a neuron. This has resulted in the development of three designs of a neuron requiring two clock cycles, one clock cycle and half a clock cycle per element of the input vector. Detailed power figures for each model are given, and the increase in silicon area, which allows for the reduction in clock cycles, is also discussed. Finally, the investigation moves to a higher level and the whole array of neurons is considered when the distance from an input vector is computed by all the neurons. Two methods which operate on the value of this distance are illustrated as ways of reducing energy consumption. Both of them activate an automatic sleep mode within each neuron when the accumulated distance exceeds a given threshold. This is achieved with a small amount of additional hardware. The energy performance achieved for three standard SOM benchmarks is discussed. In particular, the second method proves to guarantee an energy reduction of over 40%.
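The per-neuron sleep mode described above can be sketched in software as a "partial distance" computation: each neuron accumulates its squared distance element by element and gives up as soon as the running total exceeds a threshold. The structure below is our own illustration of the idea, not the hardware design itself:

```python
def partial_distance(x, w, threshold):
    """Accumulate the squared distance element by element; 'sleep'
    (abort) once the running total exceeds the threshold."""
    acc = 0.0
    for xi, wi in zip(x, w):
        acc += (xi - wi) ** 2
        if acc > threshold:
            return acc, False  # neuron went to sleep early
    return acc, True

def find_winner(x, weights):
    """Winner search: the best distance so far becomes the threshold,
    so clearly losing neurons abort after only a few elements."""
    best_i, best_d = -1, float("inf")
    for i, w in enumerate(weights):
        d, finished = partial_distance(x, w, best_d)
        if finished and d < best_d:
            best_i, best_d = i, d
    return best_i

weights = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
print(find_winner([0.9, 1.1], weights))  # 1
```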
39

Modular maps : an implementation strategy for the Self-Organising Map

Lightowler, Neil January 1997 (has links)
This thesis presents a modular strategy towards the implementation and application of an artificial neural network (ANN) paradigm called the Self-Organising Map (SOM). This modular strategy was derived as an approach towards developing an implementable and scaleable ANN system. The design of these modules, called Modular Maps, is presented along with details of suitable implementation technologies. Different connection schemes for combining modules are investigated and their effects discussed. Comparisons of the Modular Map and a unitary SOM system are made. These comparisons include consideration of implementation, training times and performance. The differences between the two systems are then further explored with the aid of two application domains, human face recognition and ground anchorage integrity testing.
40

Real-time applications of artificial neural networks

D'Souza, Winston Anthony January 2007 (has links)
This research takes an innovative look at two distinct applications of Artificial Neural Networks (ANNs) concerning the manipulation of data within real-time systems. The first contribution of this research involves the filtering of errors associated with data emerging from Inertial Navigation Systems (INS) by adopting an ANN filter. This novel approach, when compared to present-day optimal estimation filter techniques for random data such as the Kalman Filter (KF) and its variants, offers a better estimated response without the need to mathematically model error. In addition to this advantage, due to its inherent properties of effortlessly handling nonlinear data, the ANN filter eliminates the need to convert such data into linear forms, thereby maintaining the integrity of the original data. Furthermore, since the ANN filter is considerably more economical compared to a KF, it is a likely candidate for low-cost applications. Results from this research have indicated that the performance of an ANN filter when used for real-time applications within INS offers a similar degree of accuracy of estimation as well as shorter correction times for such signals compared to the KF. ANN filters, though, are not "plug-n-play" devices but require adequate training before they can function reliably and independently of any aiding or correction source (e.g. Kalman Filters). However, with continual growth in their knowledge and increased training, they improve their correction ability considerably.

The second contribution of this research is in the area of on-line data compression. This innovative approach builds on the strengths of present-day compression schemes. However, unlike current schemes that continually compress data (such as web pages) transmitted through a network every time it is requested, the ANN scheme points to data that may already be pre-compressed and stored within the client's memory at an earlier stage.
If clients request data that has already been pre-compressed using this scheme, it is decompressed locally from resident memory. If a client does not hold a pre-compressed page, because it is unknown or is an updated version of the web page in its memory, it downloads the web page using a contemporary on-line real-time compression scheme (e.g. mod_gzip). With this approach, therefore, a client's browser does not have to download every web page requested from the Internet but just the previously unseen ones, thereby reducing user-perceived latency. Results from this research have indicated that the ANN scheme for on-line data compression is fairly reliable in correctly recognising web pages previously used for training, though there have been some difficulties with web pages not adopting the standard 128-character ASCII code, such as Unicode. As this scheme presently operates entirely in software, it presents some difficulties with regard to the recognition time involved in identifying web pages previously browsed or unseen by the user. It is hoped that this problem will be mitigated if the scheme is migrated into parallel hardware.
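The client-side decision described above can be sketched as a simple cache check; the function and field names here are hypothetical illustrations, not drawn from the thesis:

```python
def fetch_page(url, server_version, cache, download):
    """Serve a locally stored pre-compressed page when the cached
    version matches the server's; otherwise fall back to a normal
    on-line compressed download and remember the result."""
    entry = cache.get(url)
    if entry is not None and entry["version"] == server_version:
        return entry["body"], "cache"        # no network round trip
    body = download(url)                     # e.g. a mod_gzip-style path
    cache[url] = {"version": server_version, "body": body}
    return body, "network"

cache = {"index.html": {"version": 1, "body": b"cached page"}}
print(fetch_page("index.html", 1, cache, lambda u: b"fresh")[1])  # cache
print(fetch_page("index.html", 2, cache, lambda u: b"fresh")[1])  # network
print(fetch_page("index.html", 2, cache, lambda u: b"fresh")[1])  # cache
```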
