1

Automatic Construction Algorithms for Supervised Neural Networks and Applications

Tsai, Hsien-Leing 28 July 2004
Research on neural networks has been carried out for six decades. In this period, many neural models and learning rules have been proposed, and they have been widely and successfully applied, solving problems that traditional algorithms could not handle efficiently. However, when applying multilayer neural networks, users are confronted with the problem of determining the number of hidden layers and the number of hidden neurons in each layer. Choosing a proper architecture is difficult, yet it is critical, because the architecture strongly influences the network's performance; problems can be solved efficiently only with a proper architecture. To overcome this difficulty, several approaches have recently been proposed to generate neural network architectures automatically, but they still have drawbacks. The goal of our research is to find better approaches for automatically determining proper neural network architectures, and we propose a series of them in this thesis. First, we propose an approach based on decision trees. It successfully determines network architectures and greatly decreases learning time, but it handles only two-class problems and generates larger architectures. Next, we propose an information-entropy-based approach that overcomes these drawbacks and easily generates multi-class networks for standard-domain problems. Finally, we extend this method to sequential-domain and structured-domain problems, so our approaches can be applied to many applications. We are currently working on quantum neural networks and are also interested in ART neural networks, which are likewise incremental neural models; we apply them to digital signal processing, proposing a character recognition application, a spoken word recognition application, and an image compression application, all with good performance.
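A minimal sketch of the general idea of constructive architecture search, for illustration only: it grows the single hidden layer of an MLP one neuron at a time and stops when held-out accuracy no longer improves. This is not the thesis's decision-tree or entropy-based algorithm; the dataset, the scikit-learn library, the candidate size range, and the patience-based stopping rule are all assumptions made for demonstration.

```python
# Sketch of a constructive architecture search: try increasing hidden-layer
# sizes and keep the smallest one whose validation accuracy stops improving.
# Assumptions: scikit-learn is available, Iris is a stand-in dataset, and the
# stopping rule (patience of 3) is arbitrary.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

best_acc, best_model, patience = 0.0, None, 0
for n_hidden in range(1, 21):                      # candidate hidden-layer sizes
    model = MLPClassifier(hidden_layer_sizes=(n_hidden,),
                          max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)
    acc = model.score(X_val, y_val)
    if acc > best_acc:
        best_acc, best_model, patience = acc, model, 0
    else:
        patience += 1
        if patience >= 3:                          # stop growing the layer
            break

print(f"selected hidden neurons: {best_model.hidden_layer_sizes[0]}, "
      f"validation accuracy: {best_acc:.3f}")
```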
2

Evolution and learning in games

Josephson, Jens January 2001
This thesis contains four essays that analyze the behaviors that evolve when populations of boundedly rational individuals interact strategically over a long period of time. Individuals are boundedly rational in the sense that their strategy choices are determined by simple rules of adaptation -- learning rules. Convergence results for general finite games are first obtained in a homogeneous setting, where all populations consist either of stochastic imitators, who almost always imitate the most successful strategy in a sample from their own population's past strategy choices, or stochastic better repliers, who almost always play a strategy that gives at least as high an expected payoff as a sample distribution of all populations' past play. Similar results are then obtained in a heterogeneous setting, where both of these learning rules are represented in each population. It is found that only strategies in certain sets are played in the limit, as time goes to infinity and the mutation rate tends to zero. Sufficient conditions for the selection of a Pareto-efficient such set are also provided. Finally, the analysis is extended to natural selection among learning rules. The question is whether there exists a learning rule that is evolutionarily stable, in the sense that a population employing this learning rule cannot be invaded by individuals using a different rule. Monte Carlo simulations for a large class of learning rules and four different games indicate that only a learning rule that takes full account of hypothetical payoffs to strategies that are not played is evolutionarily stable in almost all cases. / Diss. Stockholm : Handelshögsk., 2001
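A minimal sketch of the kind of stochastic imitation dynamic with rare mutations described above, for illustration only: one reviser per period copies the strategy of the most successful individual in a small sample, and with small probability mutates to a random strategy. The payoff matrix, population size, sample size, mutation rate, and matching scheme are assumptions for demonstration, not the thesis's exact model.

```python
import random

# Symmetric 2x2 coordination game: PAYOFF[a][b] is the payoff to a player
# choosing strategy a when matched against strategy b (assumed payoffs).
PAYOFF = [[2, 0],
          [0, 1]]

N = 50        # population size (assumption)
SAMPLE = 5    # how many individuals a reviser observes (assumption)
MU = 0.01     # mutation rate (assumption)
T = 20000     # number of periods (assumption)

pop = [random.randint(0, 1) for _ in range(N)]

for _ in range(T):
    # Random pairwise matching: everyone plays once and earns a realized payoff.
    order = random.sample(range(N), N)
    payoff = [0] * N
    for a, b in zip(order[0::2], order[1::2]):
        payoff[a] = PAYOFF[pop[a]][pop[b]]
        payoff[b] = PAYOFF[pop[b]][pop[a]]

    # One randomly chosen individual revises its strategy.
    i = random.randrange(N)
    if random.random() < MU:
        pop[i] = random.randint(0, 1)            # rare mutation: random strategy
    else:
        observed = random.sample(range(N), SAMPLE)
        best = max(observed, key=lambda j: payoff[j])
        pop[i] = pop[best]                       # imitate most successful observed

print("share playing strategy 0:", sum(1 for s in pop if s == 0) / N)
```

Running the loop many times for decreasing mutation rates gives a rough sense of which convention the population settles on in the small-mutation limit, which is the object the convergence results above characterize.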
