
Genetically Engineered Adaptive Resonance Theory (ART) Neural Network Architectures

Fuzzy ARTMAP (FAM) is currently considered one of the premier neural network architectures for solving classification problems. One of the limitations of Fuzzy ARTMAP that has been extensively reported in the literature is the category proliferation problem: Fuzzy ARTMAP tends to increase its network size as it is confronted with more and more data, especially when the data are noisy and/or overlapping. To remedy this problem, a number of researchers have designed modifications to the training phase of Fuzzy ARTMAP that reduce this phenomenon.

In this thesis we propose a new approach to handling the category proliferation problem in Fuzzy ARTMAP: evolving trained FAM architectures. We refer to the resulting FAM architectures as GFAM. We demonstrate through extensive experimentation that an evolved FAM (GFAM) exhibits good (sometimes optimal) generalization and small (sometimes optimal) size, and requires reasonable computational effort to produce an optimal or sub-optimal network. Furthermore, comparisons of GFAM with other approaches proposed in the literature to address the FAM category proliferation problem illustrate that GFAM has a number of advantages: it produces architectures of smaller or equal size, with better or equally good generalization, at reduced computational complexity.

Furthermore, in this dissertation we have extended the approach used with Fuzzy ARTMAP to other ART architectures that also suffer from the ART category proliferation problem, namely Ellipsoidal ARTMAP (EAM) and Gaussian ARTMAP (GAM). Thus, we have designed and experimented with genetically engineered EAM and GAM architectures, named GEAM and GGAM. Comparisons of GEAM and GGAM with other ART architectures introduced in the ART literature to address the category proliferation problem illustrate advantages similar to those observed for GFAM: GEAM and GGAM produce smaller ART architectures, of better or comparable generalization, at reduced computational complexity.

Moreover, to optimally cover the input space of a problem, we proposed a genetically engineered ART architecture that combines the category structures of two different ART networks, FAM and EAM. We named this architecture UART (Universal ART). We analyzed the order of search in UART, that is, the order in which a FAM category or an EAM category is accessed in UART. This analysis allowed us to better understand UART's functionality. Experiments were also conducted to compare UART with other ART architectures, in the same fashion as GFAM and GEAM were compared, and similar conclusions were drawn.

Finally, we analyzed the computational complexity of the genetically engineered ART architectures and compared it with the computational complexity of other ART architectures introduced in the literature. This analytical comparison verified our claim that the genetically engineered ART architectures produce ART structures of better generalization and smaller size, at reduced computational complexity, compared to other ART approaches.

In review, a methodology was introduced for combining the answers (categories) of ART architectures using genetic algorithms. This methodology was successfully applied to FAM, to EAM, and to the combination of FAM and EAM, resulting in ART neural networks that outperformed other ART architectures previously introduced in the literature and quite often attained optimal classification results at reduced computational complexity.
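
The category proliferation argument above rests on standard Fuzzy ARTMAP dynamics. As a minimal sketch (not code from the dissertation), the fragment below implements the conventional FAM choice function, vigilance test, and fast-learning update; the parameter values are illustrative assumptions, and ARTMAP's label-driven match tracking is omitted for brevity. It shows the mechanism behind proliferation: whenever no existing category passes the vigilance test, FAM commits a new one, so noisy data keeps growing the network.

```python
import numpy as np

ALPHA = 0.001   # choice parameter (small value, a conventional assumption)
RHO   = 0.75    # baseline vigilance (illustrative value)
BETA  = 1.0     # fast learning

def complement_code(a):
    # FAM input coding: I = [a, 1 - a]
    return np.concatenate([a, 1.0 - a])

def choice(I, w):
    # Category choice function: T_j = |I ^ w_j| / (alpha + |w_j|)
    return np.minimum(I, w).sum() / (ALPHA + w.sum())

def vigilance_ok(I, w):
    # Match criterion: |I ^ w_j| / |I| >= rho
    return np.minimum(I, w).sum() / I.sum() >= RHO

def present(I, weights):
    # One presentation: try categories in order of decreasing choice value;
    # update the first one that passes vigilance, else commit a new category.
    # (ARTMAP's label check / match tracking is omitted for brevity.)
    for j in sorted(range(len(weights)), key=lambda k: -choice(I, weights[k])):
        if vigilance_ok(I, weights[j]):
            weights[j] = BETA * np.minimum(I, weights[j]) + (1 - BETA) * weights[j]
            return j
    weights.append(I.copy())   # no match: the network grows by one category
    return len(weights) - 1

# Tiny demo: noisy samples drawn around a single point still commit
# several categories, illustrating proliferation on noisy data.
rng = np.random.default_rng(0)
weights = []
for _ in range(200):
    a = np.clip(rng.normal(0.5, 0.15, size=2), 0.0, 1.0)
    present(complement_code(a), weights)
print("categories committed:", len(weights))
```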
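The genetic engineering step itself is defined in the dissertation; the following is only a hypothetical sketch of the general idea, assuming a chromosome is a trained network's category list (weight, label pairs) and a fitness that trades validation accuracy against category count. The operators (one-point exchange of whole categories, category-deletion mutation), the size_penalty weight, and all function names here are illustrative assumptions, not the dissertation's exact design.

```python
import random
import numpy as np

# A trained FAM is reduced here to its category list: (weight, label) pairs.

def predict(categories, I):
    # Winner-take-all: label of the category with the highest choice value.
    scores = [np.minimum(I, w).sum() / (0.001 + w.sum()) for w, _ in categories]
    return categories[int(np.argmax(scores))][1]

def fitness(categories, X_val, y_val, size_penalty=0.01):
    # Reward validation accuracy, penalize category count (assumed trade-off).
    acc = np.mean([predict(categories, I) == y for I, y in zip(X_val, y_val)])
    return acc - size_penalty * len(categories)

def crossover(a, b):
    # One-point exchange of whole categories between two parents (assumed operator).
    return a[: random.randint(1, len(a))] + b[random.randint(0, len(b)):]

def mutate(cats, p_delete=0.1):
    # Category deletion as mutation: a direct lever on network size.
    kept = [c for c in cats if random.random() > p_delete]
    return kept or cats[:1]

def evolve(population, X_val, y_val, generations=100):
    # population: category lists taken from independently trained FAMs.
    for _ in range(generations):
        population.sort(key=lambda c: -fitness(c, X_val, y_val))
        elite = population[: max(2, len(population) // 2)]
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(len(population) - len(elite))]
        population = elite + children
    return max(population, key=lambda c: fitness(c, X_val, y_val))
```

Under this framing, pruning categories directly shrinks the network while the accuracy term guards generalization, which mirrors the size/generalization trade-off the abstract reports for GFAM, GEAM, and GGAM.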

Identifier: oai:union.ndltd.org:ucf.edu/oai:stars.library.ucf.edu:etd-1785
Date: 01 January 2006
Creators: Al-Daraiseh, Ahmad
Publisher: STARS
Source Sets: University of Central Florida
Language: English
Detected Language: English
Type: text
Format: application/pdf
Source: Electronic Theses and Dissertations
