51

Training biologically plausible neurons for use in engineering tasks

Rowcliffe, Phillip January 2007 (has links)
No description available.
52

Learning systems for process identification and control

Govindhasamy, J. J. January 2005 (has links)
No description available.
53

Robust coding and its implications on neural information processing

Li, Sheng January 2006 (has links)
No description available.
54

Exploring developmental dynamics in evolved neural network controllers

Balaam, Andy January 2006 (has links)
No description available.
55

Unsupervised multimodal neural networks

Nyamapfene, Abel January 2006 (has links)
We extend the in-situ Hebbian-linked SOMs network by Miikkulainen to derive two unsupervised neural networks that learn the mapping between the individual modes of a multimodal dataset. The first network, the single-pass Hebbian-linked SOMs network, extends the in-situ Hebbian-linked SOMs network by enabling the Hebbian link weights to be computed through one-shot learning. The second network, a modified counterpropagation network, extends the unsupervised learning of crossmodal mappings by making it possible for only one self-organising map to implement the crossmodal mapping. The two proposed networks each have a smaller computation time and achieve lower crossmodal mean squared errors than the in-situ Hebbian-linked SOMs network when assessed on two bimodal datasets, an audio-acoustic speech utterance dataset and a phonological-semantics child utterance dataset. Of the three network architectures, the modified counterpropagation network achieves the highest percentage of correct classifications, comparable to that of the LVQ-2 algorithm by Kohonen and the neural network for category learning by de Sa and Ballard, in classification tasks using the audio-acoustic speech utterance dataset. To facilitate multimodal processing of temporal data, we propose a Temporal Hypermap neural network architecture that learns and recalls multiple temporal patterns in an unsupervised manner. The Temporal Hypermap introduces flexibility in the recall of temporal patterns: a stored temporal pattern can be retrieved by prompting the network with the temporal pattern's identity vector, whilst the incorporation of short-term memory allows the recall of a temporal pattern starting from the pattern item specified by contextual information up to the last item in the pattern sequence. Finally, we extend the connectionist modelling of child language acquisition in two important respects. First, we introduce the concept of multimodal representation of speech utterances at the one-word and two-word stages. This allows us to model child language at the one-word utterance stage with a single modified counterpropagation network, which is an improvement on previous models in which multiple networks are required to simulate the different aspects of speech at the one-word utterance stage. Second, we present, for the first time, a connectionist model of the transition of child language from the one-word utterance stage to the two-word utterance stage. We achieve this using a gated multi-net comprising a modified counterpropagation network and a Temporal Hypermap.
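
As a rough illustration of the single-pass Hebbian link between two self-organising maps, the sketch below (illustrative Python, not the thesis implementation; the SOM update schedule, map sizes and synthetic data are assumptions) trains one map per modality, accumulates the Hebbian link weights in a single pass over the paired data, and recalls across modalities by following the strongest link from the winning unit in one map to a prototype in the other.

    import numpy as np

    class SOM:
        """A minimal online self-organising map."""
        def __init__(self, rows, cols, dim, lr=0.5, sigma=1.5, seed=0):
            rng = np.random.default_rng(seed)
            self.w = rng.normal(size=(rows * cols, dim))
            self.grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
            self.lr, self.sigma = lr, sigma

        def winner(self, x):
            return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

        def train(self, data, epochs=20):
            for t in range(epochs):
                lr = self.lr * (1 - t / epochs)
                sigma = self.sigma * (1 - t / epochs) + 0.5
                for x in data:
                    d2 = np.sum((self.grid - self.grid[self.winner(x)]) ** 2, axis=1)
                    h = np.exp(-d2 / (2 * sigma ** 2))[:, None]
                    self.w += lr * h * (x - self.w)

    def one_shot_hebbian_link(som_a, som_b, data_a, data_b):
        """Accumulate link weights in a single pass: one co-activation update per training pair."""
        link = np.zeros((len(som_a.w), len(som_b.w)))
        for xa, xb in zip(data_a, data_b):
            link[som_a.winner(xa), som_b.winner(xb)] += 1.0
        return link

    def crossmodal_recall(x_a, som_a, som_b, link):
        """Map a query in modality A to the prototype of its most strongly linked unit in modality B."""
        return som_b.w[int(np.argmax(link[som_a.winner(x_a)]))]

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        mode_a = rng.normal(size=(200, 8))                      # toy stand-in for acoustic features
        mode_b = np.tanh(mode_a @ rng.normal(size=(8, 5)))      # correlated second modality
        som_a, som_b = SOM(6, 6, 8), SOM(6, 6, 5)
        som_a.train(mode_a)
        som_b.train(mode_b)
        link = one_shot_hebbian_link(som_a, som_b, mode_a, mode_b)
        print(crossmodal_recall(mode_a[0], som_a, som_b, link))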
56

Configuration of neural networks to model seasonal and cyclic time series

Taskaya-Temizel, Tugba January 2006 (has links)
Time series often exhibit periodical patterns that can be analysed by conventional statistical techniques. These techniques rely upon an appropriate choice of model parameters that are often difficult to determine. Whilst neural networks also require an appropriate parameter configuration, they offer a way in which non-linear patterns may be modelled. However, evidence from a limited number of experiments has been used to argue that periodical patterns cannot be modelled using such networks. Researchers have argued that combining models for forecasting gives better estimates than single time series models, particularly for seasonal and cyclic series. For example, a hybrid architecture comprising an autoregressive integrated moving average model (ARIMA) and a neural network is a well-known technique that has recently been shown to give better forecasts by taking advantage of each model's capabilities. However, this assumption carries the danger of underestimating the relationship between the model's linear and non-linear components, particularly by assuming that individual forecasting techniques are appropriate, say, for modelling the residuals. In this thesis, we show that such combinations do not necessarily outperform individual forecasts. On the contrary, we show that the combined forecast can underperform significantly compared with its constituent models. We also present a method to overcome the perceived limitations of neural networks by determining the configuration parameters of a time-delay neural network from the seasonal data it is being used to model. The motivation of our method is that Occam's razor should guide us to prefer a simpler solution over a more complex one. Our method uses a fast Fourier transform to calculate the number of input tapped delays, with results demonstrating improved performance compared with that of other linear and hybrid seasonal modelling techniques on twelve benchmark time series. Keywords: neural networks, time series, cycles, ARIMA-NN hybrids, Fourier, TDNN.
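
The configuration step described above can be sketched as follows (illustrative Python, not the thesis code; using the dominant FFT period directly as the number of input tapped delays is an assumption about the mapping): the dominant seasonal period is read off the magnitude spectrum and then fixes the width of the tapped-delay input window of a time-delay network.

    import numpy as np

    def dominant_period(series):
        """Estimate the dominant seasonal period from the FFT magnitude spectrum."""
        x = np.asarray(series, dtype=float)
        x = x - x.mean()                           # drop the zero-frequency component
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x))            # cycles per sample
        k = 1 + int(np.argmax(spectrum[1:]))       # skip the DC bin
        return int(round(1.0 / freqs[k]))

    def tapped_delay_matrix(series, n_delays):
        """Stack lagged windows of the series as network inputs; targets are the next value."""
        x = np.asarray(series, dtype=float)
        inputs = np.array([x[i:i + n_delays] for i in range(len(x) - n_delays)])
        return inputs, x[n_delays:]

    if __name__ == "__main__":
        t = np.arange(240)
        rng = np.random.default_rng(0)
        series = 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 240)   # monthly-style toy series
        p = dominant_period(series)                # expect about 12 here
        X, y = tapped_delay_matrix(series, p)      # X feeds a TDNN with p input taps
        print(p, X.shape, y.shape)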
57

Automatic relevance determination with ensembles of Bayesian MLPs

Fu, Yu January 2008 (has links)
Controlling model complexity and data complexity are fundamental issues in neural network learning. Some researchers have used Bayesian learning on neural networks to control model complexity. The Bayesian technique Automatic Relevance Determination (ARD) can effectively control the complexity of data and automatically determine the relevance of input features by controlling the distribution of the corresponding groups of weights in a network. However, we found that the relevance determination made by a single ARD model is neither stable nor accurate. Neural network ensemble techniques were used in our research to improve the accuracy of feature relevance determination. The accuracy of the ensemble feature relevance determination was evaluated using two synthetic datasets in which the relevance of each individual input feature was pre-determined. The results showed that ensemble feature relevance determination can effectively separate relevant features, redundancies and irrelevant features from each other, and provide useful suggestions of the boundaries between these relevance levels. Thus, the features selected on the basis of the ensemble feature relevance determination benefit not only non-linear models such as neural networks, but also linear models such as the linear regression model, by enabling them to classify the samples in several real-world datasets more accurately than by using all the available input features, extracted principal components, or independent components from the datasets. We also found that an ensemble of ARD models is good at selecting features by group relevance, but not at ranking the relevance of each individual input feature, because the relevance rank of an input feature can be affected by any redundancies in the data that are highly correlated with it.
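
A hedged sketch of the ensemble idea follows (Python with scikit-learn; this is not the Bayesian MLP and ARD machinery of the thesis, but a crude stand-in that scores each input by the magnitude of its first-layer weight group and averages the scores over bootstrapped networks).

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def ensemble_feature_relevance(X, y, n_members=10, hidden=16):
        """Average a per-feature relevance score over an ensemble of bootstrapped MLPs.

        ARD proper ties a Bayesian prior precision to each input's weight group; here the
        L2 norm of each input's first-layer weights is used as a rough proxy for relevance.
        """
        n, d = X.shape
        scores = np.zeros((n_members, d))
        for m in range(n_members):
            idx = np.random.default_rng(m).integers(0, n, n)      # bootstrap resample
            net = MLPRegressor(hidden_layer_sizes=(hidden,), alpha=1e-3,
                               max_iter=3000, random_state=m)
            net.fit(X[idx], y[idx])
            scores[m] = np.linalg.norm(net.coefs_[0], axis=1)     # coefs_[0]: (n_features, hidden)
        return scores.mean(axis=0), scores.std(axis=0)            # spread hints at ranking stability

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(400, 6))
        X[:, 3] = X[:, 0] + 0.05 * rng.normal(size=400)           # feature 3 is redundant with 0
        y = 2 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=400)    # features 2, 4, 5 are irrelevant
        mean, std = ensemble_feature_relevance(X, y)
        for i, (m, s) in enumerate(zip(mean, std)):
            print(f"feature {i}: relevance {m:.2f} +/- {s:.2f}")

Averaging over members, rather than trusting any single network's ranking, is the point of the ensemble.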
58

Combining unsupervised classifiers : a multimodal case study

Vrusias, Bogdan L. January 2004 (has links)
Neural computing systems are trained on the principle that if a network can compute then it will learn to compute. Multi-net neural computing systems are trained on the principle that if two or more networks compute, then the multi-net will learn to compute. In this thesis I present a novel multi-net system that can classify a set of objects using different sets of attributes of the objects: each attribute set is used to train a separate unsupervised self-organising network, and a Hebbian network simultaneously learns the association between the most highly active neurons in the constituent unsupervised networks of the multi-net system. The performance of the multi-net system is compared with a single-net system trained on a single complex vector. The comparison suggests that for certain tasks, particularly classification, the multi-net system has better retrieval effectiveness than the monolithic single-net system. Furthermore, we demonstrate the effectiveness of conventional statistical clustering techniques, especially k-means and hierarchical clustering, in clustering the output map of an unsupervised network. Such sequential clustering, that is, clustering the data with an unsupervised network and then clustering the resulting output map, helps to automatically identify and visualise the clusters that are otherwise implicit in the output map. One important use of our multi-net architecture is with a partial query that activates some of the neurons in only one constituent unsupervised network; the Hebbian link then activates neurons in the other unsupervised network. I have trained a multi-net system and a single-net system on a collection of images and associated collateral texts describing the contents of the images. Different training regimens were used by changing the topology of the unsupervised network, specifically Kohonen's self-organising feature map, and the number of training cycles. Three testing regimens were developed for evaluating the performance of the multi-net system. The multi-net system can be used in the automatic annotation of images and the automatic illustration of keywords. The multi-net approach was also tried on a set of "real-world" images used in the training of scene-of-crime officers: the initial results, especially the comparison between single-net neural networks and the multi-net system, suggest that the multi-net is a better classifier. Our approach is relevant to the needs of workers in the multi-net community on the one hand and of workers in multi-modal data fusion on the other.
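
The sequential clustering step mentioned above can be sketched briefly (Python with scikit-learn; the toy codebook below stands in for the prototype vectors of a trained self-organising map).

    import numpy as np
    from sklearn.cluster import KMeans, AgglomerativeClustering

    # Toy stand-in for a trained SOM codebook: one 10-D prototype per unit of a 6x6 output map.
    rng = np.random.default_rng(0)
    centres = rng.normal(scale=4.0, size=(3, 10))                            # three implicit clusters
    codebook = np.vstack([c + rng.normal(size=(12, 10)) for c in centres])   # 36 unit prototypes

    # Second-stage clustering of the output map makes the implicit clusters explicit.
    kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(codebook)
    hier_labels = AgglomerativeClustering(n_clusters=3).fit_predict(codebook)

    # Each map unit now carries an explicit cluster label that can be used to colour the map.
    print(kmeans_labels.reshape(6, 6))
    print(hier_labels.reshape(6, 6))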
59

Homeostatic adaptive networks

Williams, Hywel Thomas Parker January 2006 (has links)
Homeostasis is constancy in the face of perturbation. The concept was originally developed to describe the fixed internal environment of an organism, and this descriptive view of homeostasis has been prevalent in the literature. However, homeostasis can also be seen as the dynamic process of self-regulation, and as such it is an organising principle by which systems adapt their behaviour over time. In this thesis we adopt this causal view of homeostasis and develop a theory of homeostatic adaptive systems. We study homeostatic adaptive networks by looking at specific examples of homeostatic systems: the Homeostat, homeostatic plasticity in neural networks, and homeostatic regulation of the environment by the biota. Investigation of these case studies forms the basis for the development of a generalised theory of homeostatic adaptive systems. The Homeostat was an electromechanical device designed by W. Ross Ashby to demonstrate the principle of ultrastability, where the stability of a system requires homeostasis of essential variables. Ashby put forward a theory of mammalian learning as a process of homeostatic adaptation based on the idea of the ultrastable system. Here we develop a simulated Homeostat and explore its properties as a homeostatic adaptive system, looking at its ultrastable nature and its ability to adapt to external perturbations. The second case study, neural homeostasis, has recently been a topic of much interest in the neurosciences, with new data being presented concerning the existence and functioning of a variety of mechanisms by which neural activity is regulated. Homeostatic plasticity mechanisms prevent long-term quiescence or hyper-excitation in biological neurons, and this suggests that such mechanisms may be used to solve the problem of node saturation in artificial neural networks. Here we develop homeostatic plasticity mechanisms for use in continuous-time recurrent neural networks, a kind of network often used in evolutionary robotics, and study the effect of these mechanisms on network behaviour. Node saturation effects can make these networks difficult to evolve as robot controllers, and we also look at the effect of homeostatic plasticity on evolvability. The third case study is the evolution of homeostatic regulation of the physical environment by the biota. The Gaia theory states that life regulates the entire biosphere to conditions suitable for life, but the general concept of biological regulation of the environment is applicable on a variety of scales. However, there are major theoretical issues concerning the compatibility of environmental regulation with evolutionary theory. Here we develop a modified version of the Daisyworld model and use it to examine the compatibility of global regulation with individual selection. We show that regulation in Daisyworld depends on several key assumptions and fails if these assumptions are removed. We develop the Flask model, in which environmental regulation by microbial communities evolves as a result of multi-level selection, in order to show how regulation can occur when the core assumptions of Daisyworld are relaxed. At the end of the thesis we try to draw some general conclusions concerning homeostatic adaptive systems. We consider the adaptive and homeostatic properties of each of the case study systems, and then generalise from these to give some principles of homeostatic adaptation.

Our analysis shows that the perturbations to a system can be classified in terms of their effects on homeostasis, and that the ability of a system to adapt to a perturbation and maintain homeostasis depends on the variety of responses it can produce. We argue that the parameter change caused by a loss of homeostasis causes 'organisation death' in a homeostatic adaptive system, where the system does not survive in its current form. This suggests a view of learning and the evolution of organisms as second-order homeostatic adaptive processes.
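
The environmental-regulation case can be made concrete with the classic Watson and Lovelock Daisyworld (a Python sketch of the standard textbook formulation with its usual constants, not the thesis's modified Daisyworld or the Flask model): daisy cover alters planetary albedo, and the resulting feedback holds temperature near the growth optimum over a range of solar luminosities.

    import numpy as np

    SIGMA, FLUX, Q = 5.67e-8, 917.0, 2.06e9          # Stefan-Boltzmann, solar flux, heat transfer
    A_GROUND, A_WHITE, A_BLACK = 0.5, 0.75, 0.25     # albedos
    T_OPT, K_GROWTH, DEATH = 295.5, 3.265e-3, 0.3    # growth optimum (K), growth width, death rate

    def growth(t_local):
        return max(0.0, 1.0 - K_GROWTH * (T_OPT - t_local) ** 2)

    def run(luminosity, x_w, x_b, steps=4000, dt=0.05):
        """Integrate daisy cover to (near) equilibrium at a fixed luminosity."""
        t_e4 = 0.0
        for _ in range(steps):
            x_g = max(0.0, 1.0 - x_w - x_b)                                  # bare ground
            albedo = x_g * A_GROUND + x_w * A_WHITE + x_b * A_BLACK
            t_e4 = FLUX * luminosity * (1.0 - albedo) / SIGMA                # planetary T^4
            t_w = (Q * (albedo - A_WHITE) + t_e4) ** 0.25                    # local temperatures
            t_b = (Q * (albedo - A_BLACK) + t_e4) ** 0.25
            x_w += dt * x_w * (x_g * growth(t_w) - DEATH)
            x_b += dt * x_b * (x_g * growth(t_b) - DEATH)
            x_w, x_b = max(x_w, 0.01), max(x_b, 0.01)                        # keep a small seed stock
        return t_e4 ** 0.25, x_w, x_b

    if __name__ == "__main__":
        x_w = x_b = 0.01
        for lum in np.arange(0.6, 1.61, 0.05):                               # slowly brightening sun
            temp, x_w, x_b = run(lum, x_w, x_b)
            print(f"L={lum:.2f}  T={temp - 273.15:5.1f} C  white={x_w:.2f}  black={x_b:.2f}")

Over the middle of the sweep the printed temperature should stay close to the 22.5 C optimum while the daisy mix shifts from black to white; setting both daisy albedos equal to the ground's removes the feedback, and with it the regulation, which is the kind of assumption-dependence the thesis examines.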
60

Evolutionary search on fitness landscapes with neutral networks

Barnett, Lionel January 2003 (has links)
No description available.
