  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Multiscopic neuro-vision for two and three dimensional object recognition and classification

Shaw, Terence January 1998 (has links)
No description available.
42

Towards affective pervasive computing : emotion detection in intelligent inhabited environments

Villeda, Enrique Edgar León January 2007 (has links)
No description available.
43

An adaptive SOM model for document clustering using hybrid neural techniques

Hung, Chih-Li January 2004 (has links)
No description available.
44

Recurrent neural networks for text classification of news articles from the Reuters Corpus

Arevian, Garen Zohrab January 2007 (has links)
No description available.
45

Neural network heuristics for real-world classification : with an application to predict cancer recurrence

Smithies, R. G. January 2004 (has links)
No description available.
46

An infrastructure for neural network construction

Stewart, Richard January 2005 (has links)
After many years of research, the field of Artificial Intelligence is still searching for ways to construct a truly intelligent system. One criticism is that current models are not 'rich' or complex enough to operate in many and varied real-world situations. One way to tackle this criticism is to look at intelligent systems that already exist in nature and examine these to determine what complexities exist in those systems but not in current AI models. The research begins by presenting an overview of the current knowledge of Biological Neural Networks, as examples of intelligent systems existing in nature, and how they function. Artificial Neural Networks are then discussed, and the thesis examines their similarities and dissimilarities with their biological counterparts. The research suggests ways that Artificial Neural Networks may be improved by borrowing ideas from Biological Neural Networks. Introducing new concepts drawn from the biological realm makes the construction of Artificial Neural Networks more difficult. To address this difficulty, the thesis introduces the area of Evolutionary Algorithms as a way of constructing Artificial Neural Networks. An intellectual infrastructure is developed that incorporates concepts from Biological Neural Networks into current models of Artificial Neural Networks, and two models are developed to explore the concept that increased complexity can indeed add value to current models of Artificial Neural Networks. The outcome of the thesis shows that increased complexity can have benefits in terms of the learning speed of an Artificial Neural Network and its robustness to damage.
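The general idea of using an Evolutionary Algorithm to construct a neural network can be illustrated with a minimal sketch (not the thesis's actual models): a population of weight vectors for a tiny fixed-topology 2-2-1 network is evolved by truncation selection and Gaussian mutation on the XOR task. All names, parameters, and the network shape here are illustrative assumptions.

```python
import math
import random

# Toy task: XOR, a classic test that a single linear unit cannot solve.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # Fixed 2-2-1 topology: 9 weights = two hidden tanh units
    # (2 inputs + bias each) plus one tanh output unit (2 inputs + bias).
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    # Negative sum of squared errors: higher is better, 0 is perfect.
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=30, generations=500, sigma=0.4, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the best half unchanged (elitism) ...
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        # ... and refill the population with Gaussian-mutated children.
        children = [[g + rng.gauss(0, sigma) for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the parents survive unmutated each generation, the best fitness never decreases; the mutation scale `sigma` plays the role of an exploration parameter.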
47

Rule extraction from recurrent neural networks

Jacobsson, Henrik January 2006 (has links)
No description available.
48

Multistage neural network ensemble : adaptive combination of ensemble results

Yang, Shuang January 2003 (has links)
No description available.
49

Integrated learning in multi-net systems

Casey, M. C. January 2004 (has links)
Specific types of multi-net neural computing systems can give improved generalisation performance over single network solutions. In single-net systems learning is one way in which good generalisation can be achieved, where a number of neurons are combined through a process of collaboration. In this thesis we examine collaboration in multi-net systems through in-situ learning. Here we explore how generalisation can be improved through learning in the components and their combination at the same time. To achieve this we present a formal way in which multi-net systems can be described in an attempt to provide a method with which the general properties of multi-net systems can be explored. We then explore two novel learning algorithms for multi-net systems that exploit in-situ learning, evaluating them in comparison with multi-net and single-net solutions. Last, we simulate two cognitive processes with in-situ learning to examine the interaction between different numerical abilities in multi-net systems. Using single-net simulations of subitization and counting we build a multi-net simulation of quantification. Similarly, we combine single-net simulations of the fact retrieval and ‘count all’ addition strategies into a multi-net simulation of addition. Our results are encouraging, with improved generalisation performance obtained on benchmark problems, and the interaction of strategies with in-situ learning used to describe well known numerical ability phenomena. This learning through interaction in connectionist simulations we call integrated learning.
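The in-situ idea described above — components and their combination learning at the same time — can be sketched with a deliberately simple toy (this is an illustration of the concept, not the thesis's algorithms): two linear component nets and a linear combiner are all updated in the same pass of delta-rule learning on a small regression task. The task, learning rate, and initialisation are assumptions.

```python
import random

def train(data, lr=0.05, epochs=500, seed=0):
    rng = random.Random(seed)
    # Two linear component networks over the same two inputs.
    w1 = [rng.uniform(-0.5, 0.5) for _ in range(2)]
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(2)]
    # Linear combiner over the two component outputs.
    c = [0.5, 0.5]
    for _ in range(epochs):
        for x, y in data:
            o1 = w1[0] * x[0] + w1[1] * x[1]
            o2 = w2[0] * x[0] + w2[1] * x[1]
            out = c[0] * o1 + c[1] * o2
            err = y - out
            # In-situ learning: components and combiner are updated
            # together from the same error signal, not trained separately.
            for i in range(2):
                w1[i] += lr * err * c[0] * x[i]
                w2[i] += lr * err * c[1] * x[i]
            c[0] += lr * err * o1
            c[1] += lr * err * o2
    return w1, w2, c

# Toy target function y = 2*x0 - x1 on a small grid of points.
data = [((x0, x1), 2 * x0 - x1) for x0 in (0.0, 0.5, 1.0)
        for x1 in (0.0, 0.5, 1.0)]
w1, w2, c = train(data)
```

The contrast with pre-trained combination is that here neither component is a finished solution on its own; the division of labour between them emerges during joint training.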
50

Combined optimization algorithms applied to pattern classification

Lappas, Georgios January 2006 (has links)
Accurate classification by minimizing the error on test samples is the main goal in pattern classification. Combinatorial optimization is a well-known method for solving minimization problems; however, only a few examples of classifiers are described in the literature where combinatorial optimization is used in pattern classification. Recently, there has been a growing interest in combining classifiers and improving the consensus of results for greater accuracy. In the light of the "No Free Lunch Theorems", we analyse the combination of simulated annealing, a powerful combinatorial optimization method that produces high-quality results, with the classical perceptron algorithm. This combination is called the LSA machine. Our analysis aims at finding paradigms for problem-dependent parameter settings that ensure high classification results. Our computational experiments on a large number of benchmark problems lead to results that either outperform or are at least competitive with results published in the literature. Apart from parameter settings, our analysis focuses on a difficult problem in computation theory, namely the network complexity problem. The depth vs. size problem of neural networks is one of the hardest problems in theoretical computing, with very little progress over the past decades. In order to investigate this problem, we introduce a new recursive learning method for training hidden layers in constant-depth circuits. Our findings make contributions to a) the field of Machine Learning, as the proposed method is applicable in training feedforward neural networks, and to b) the field of circuit complexity, by proposing an upper bound for the number of hidden units sufficient to achieve a high classification rate.
One of the major findings of our research is that the size of the network can be bounded by the input size of the problem, with an approximate upper bound of 8 + √2n/n threshold gates being sufficient for a small error rate, where n := log|SL| and SL is the training set.
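The core combination the abstract describes — simulated annealing searching the weight space of a perceptron-style linear threshold unit — can be sketched as follows. This is a hedged toy illustration of the idea, not the LSA machine itself: the dataset (logical AND), cooling schedule, and step size are all assumed for the example.

```python
import math
import random

# Toy linearly separable dataset: logical AND of two inputs.
DATA = [((0.0, 0.0), 0), ((0.0, 1.0), 0), ((1.0, 0.0), 0), ((1.0, 1.0), 1)]

def errors(w, data):
    # Misclassification count of the threshold gate [w0*x0 + w1*x1 + b > 0].
    return sum(((w[0] * x[0] + w[1] * x[1] + w[2] > 0) != bool(y))
               for x, y in data)

def anneal(data, steps=2000, t0=1.0, alpha=0.995, seed=1):
    rng = random.Random(seed)
    w = [rng.uniform(-1.0, 1.0) for _ in range(3)]
    e = errors(w, data)
    best_w, best_e = w[:], e
    t = t0
    for _ in range(steps):
        # Propose a Gaussian perturbation of the current weights.
        cand = [wi + rng.gauss(0.0, 0.3) for wi in w]
        ec = errors(cand, data)
        # Always accept improvements; accept worsening moves with
        # Boltzmann probability, which shrinks as the temperature cools.
        if ec <= e or rng.random() < math.exp((e - ec) / t):
            w, e = cand, ec
            if e < best_e:
                best_w, best_e = w[:], e
        t *= alpha
    return best_w, best_e

w, e = anneal(DATA)
```

The annealing search optimizes the discrete misclassification count directly, which a gradient-based perceptron update cannot do; that is the motivation for the combination.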
