We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation--Maximization (EM) procedure for maximum likelihood estimation. As in the standard Baum--Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. We consider two tractable approximations to the E-step: a simple mean field approximation and Gibbs sampling. Empirical results on a set of problems suggest that both are viable alternatives to the computationally expensive exact algorithm.
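The combinatorial blow-up mentioned in the abstract can be illustrated with a small sketch (hypothetical toy code, not the authors' implementation, assuming a factorial-style model of M independent K-state chains): the joint model is equivalent to a single HMM with K**M states, so the exact forward pass of the E-step scales exponentially in the number of chains.

```python
import numpy as np

# Toy illustration: M independent K-state chains observed jointly are
# equivalent to one flat HMM over the Cartesian product of states, i.e.
# K**M joint states. This exponential state space is why the exact
# E-step is intractable and a mean field approximation is attractive.
rng = np.random.default_rng(0)
M, K, T = 3, 2, 5  # chains, states per chain, time steps

# Per-chain transition matrices and initial distributions (rows sum to 1).
A = [rng.dirichlet(np.ones(K), size=K) for _ in range(M)]
pi = [rng.dirichlet(np.ones(K)) for _ in range(M)]

# Flatten into the equivalent single HMM via Kronecker products.
A_flat, pi_flat = A[0], pi[0]
for m in range(1, M):
    A_flat = np.kron(A_flat, A[m])    # joint transition: (K**M, K**M)
    pi_flat = np.kron(pi_flat, pi[m])  # joint initial distribution

# Arbitrary per-timestep observation likelihoods over the joint states.
B = rng.random((T, K ** M))

# Scaled forward pass: each step costs O((K**M)**2) -- exponential in M.
alpha = pi_flat * B[0]
alpha /= alpha.sum()
for t in range(1, T):
    alpha = (alpha @ A_flat) * B[t]
    alpha /= alpha.sum()

print(A_flat.shape)  # (8, 8): already 2**3 joint states for M=3 chains
```

A mean field approximation instead maintains independent variational posteriors per chain, reducing the per-step cost from O((K**M)**2) to roughly O(M * K**2) at the price of an approximate E-step.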
Identifier | oai:union.ndltd.org:MIT/oai:dspace.mit.edu:1721.1/7188 |
Date | 09 February 1996 |
Creators | Ghahramani, Zoubin, Jordan, Michael I. |
Source Sets | M.I.T. Theses and Dissertations |
Language | en_US |
Detected Language | English |
Format | 7 p., 198365 bytes, 244196 bytes, application/postscript, application/pdf |
Relation | AIM-1561, CBCL-130 |