Evolutionary neural networks

To create a neural network that works, one must specify both a structure and the interconnection weights between each pair of connected computing elements. The structure of a network can be selected by the designer to suit the application, but the selection of interconnection weights is a much harder problem. Algorithms have been developed that alter the weights slightly in order to produce the desired results; learning algorithms such as Hebb's rule, the Delta rule, and error propagation have been used with success to learn appropriate weights. The major objection to this class of algorithms is that one can specify only what is desired of the network, not what is undesired. An alternative to learning the correct interconnection weights is to evolve a network in an environment that rewards "good" behavior and punishes "bad" behavior. This technique allows interesting networks to appear that might not be discovered by other methods of learning. To teach a network the correct weights, this approach needs only a direction in which an acceptable solution can be found, rather than a complete answer to the problem.
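The abstract describes the approach only at the level of the idea; the thesis's actual algorithm is not reproduced here. The following is a minimal sketch of that general reward-driven scheme, assuming a fixed 2-2-1 feedforward network on the XOR task and a simple mutate-and-select loop. Every name and parameter (unpack, fitness, the population size, the mutation scale) is illustrative, not taken from the thesis.

import numpy as np

# Sketch of evolving fixed-structure network weights by rewarding "good"
# behavior (low output error). Illustrative only; not the thesis's algorithm.

rng = np.random.default_rng(0)

# XOR task: inputs and target outputs (an assumed stand-in environment).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

def unpack(genome):
    # The genome holds all 9 weights and biases of a 2-2-1 network.
    W1 = genome[0:4].reshape(2, 2)
    b1 = genome[4:6]
    W2 = genome[6:8]
    b2 = genome[8]
    return W1, b1, W2, b2

def forward(genome, x):
    W1, b1, W2, b2 = unpack(genome)
    h = np.tanh(x @ W1 + b1)                   # hidden layer
    return 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output unit

def fitness(genome):
    # Scalar reward: higher fitness for smaller mean squared output error.
    preds = np.array([forward(genome, x) for x in X])
    return -np.mean((preds - y) ** 2)

# Mutate-and-select loop: keep the fittest genomes, fill the rest of the
# population with mutated copies of the survivors.
POP, ELITE, GENS, SIGMA = 50, 10, 500, 0.3
pop = rng.normal(0.0, 1.0, size=(POP, 9))

for _ in range(GENS):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-ELITE:]]                      # survivors
    parents = elite[rng.integers(0, ELITE, size=POP - ELITE)]
    children = parents + rng.normal(0.0, SIGMA, size=parents.shape)  # mutation
    pop = np.vstack([elite, children])

best = max(pop, key=fitness)
print([round(float(forward(best, x)), 2) for x in X])  # ideally near [0, 1, 1, 0]

The point the sketch makes concrete is the one the abstract argues: the loop needs only a scalar reward indicating the direction of an acceptable solution, not per-weight error signals or a complete answer to the problem.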

Identifier: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/51904
Date: January 1988
Creators: Landry, Kenneth D.
Contributors: Computer Science and Applications
Degree: Master of Science
Publisher: Virginia Polytechnic Institute and State University
Source Sets: Virginia Tech Theses and Dissertation
Language: en_US
Detected Language: English
Type: Thesis, Text
Format: vi, 65 leaves, application/pdf
Rights: In Copyright, http://rightsstatements.org/vocab/InC/1.0/
Relation: OCLC# 18110130