
Hierarchical Mixtures of Experts and the EM Algorithm

We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain.
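
To make the architecture concrete, here is a minimal sketch of the one-level case (the hierarchy nests this construction recursively): linear-Gaussian experts and a softmax gate, both GLIMs, fit by EM. It substitutes a single gradient step for the IRLS inner loop the memo uses in the gate's M-step, and all names (em_mixture_of_experts, etc.) are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def gaussian_pdf(y, mu, sigma2):
    return np.exp(-(y - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

def em_mixture_of_experts(X, y, n_experts=2, n_iters=50, lr=0.5):
    """One-level mixture of experts trained with EM (illustrative sketch).

    Gate: softmax over linear scores (a multinomial GLIM).
    Experts: linear-Gaussian regressors (linear GLIMs).
    """
    n, d = X.shape
    V = rng.normal(scale=0.1, size=(d, n_experts))   # gating weights
    W = rng.normal(scale=0.1, size=(d, n_experts))   # expert weights
    sigma2 = np.ones(n_experts)                      # expert noise variances

    for _ in range(n_iters):
        # E-step: posterior responsibility of each expert for each point.
        g = softmax(X @ V)                           # (n, k) gate probabilities
        mu = X @ W                                   # (n, k) expert means
        h = g * gaussian_pdf(y[:, None], mu, sigma2) + 1e-300
        h /= h.sum(axis=1, keepdims=True)

        # M-step for the experts: responsibility-weighted least squares.
        for i in range(n_experts):
            Xw = X * h[:, i:i + 1]
            W[:, i] = np.linalg.solve(Xw.T @ X + 1e-8 * np.eye(d), Xw.T @ y)
            resid = y - X @ W[:, i]
            sigma2[i] = max((h[:, i] * resid**2).sum()
                            / (h[:, i].sum() + 1e-12), 1e-6)

        # M-step for the gate: one gradient step on the cross-entropy between
        # responsibilities h and gate outputs g (the memo solves this by IRLS).
        g = softmax(X @ V)
        V += lr * X.T @ (h - g) / n

    return V, W, sigma2

# Toy usage: piecewise-linear data with two regimes switched on the sign of x.
X = np.column_stack([np.ones(400), rng.uniform(-2, 2, 400)])
y = np.where(X[:, 1] < 0, -1 + 2 * X[:, 1], 3 - X[:, 1]) + 0.1 * rng.normal(size=400)
V, W, sigma2 = em_mixture_of_experts(X, y)
```

With the responsibilities held fixed, each M-step decouples into independent weighted GLIM fits, which is what makes the EM decomposition attractive for this architecture.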

Identifier: oai:union.ndltd.org:MIT/oai:dspace.mit.edu:1721.1/7206
Date: 01 August 1993
Creators: Jordan, Michael I.; Jacobs, Robert A.
Source Sets: M.I.T. Theses and Dissertations
Language: en_US
Detected Language: English
Format: 29 p., 190144 bytes, 678911 bytes, application/octet-stream, application/pdf
Relation: AIM-1440, CBCL-083