
Training of Hidden Markov models as an instance of the expectation maximization algorithm

In Natural Language Processing (NLP), speech and text are parsed and generated using language and parser models, and translated using translation models. Each model contains a set of numerical parameters, whose values are determined by applying a suitable training algorithm to a set of training data.

Many such training algorithms are instances of the Expectation-Maximization (EM) algorithm. In [BSV15], a generic EM algorithm for NLP is described. This work presents a particular speech model, the Hidden Markov model, and its standard training algorithm, the Baum-Welch algorithm. It is then shown that the Baum-Welch algorithm is an instance of the generic EM algorithm introduced by [BSV15], from which it follows that all statements about the generic EM algorithm also apply to the Baum-Welch algorithm, in particular its correctness and convergence properties.
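To make the setting concrete, the following is a minimal sketch of one Baum-Welch (EM) re-estimation step for a discrete Hidden Markov model; it is an illustration of the standard algorithm, not code from the thesis. The matrices A (transitions), B (emissions), the initial distribution pi, and the per-step scaling constants are common textbook conventions assumed here.

```python
import numpy as np

def baum_welch_step(A, B, pi, obs):
    """One Baum-Welch (EM) re-estimation step for a discrete HMM.

    A:   (N, N) transition matrix, A[i, j] = P(next state j | state i)
    B:   (N, M) emission matrix,   B[i, k] = P(symbol k | state i)
    pi:  (N,)   initial state distribution
    obs: sequence of observed symbol indices (length T)
    Returns the updated (A, B, pi).
    """
    N, T = A.shape[0], len(obs)
    obs = np.asarray(obs)

    # E-step: forward (alpha) and backward (beta) probabilities,
    # scaled at each time step to avoid numerical underflow.
    alpha = np.zeros((T, N)); beta = np.zeros((T, N)); c = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]

    # Posterior state occupancies (gamma) and expected transition counts (xi).
    gamma = alpha * beta                      # gamma[t] sums to 1 for each t
    xi = np.zeros((N, N))
    for t in range(T - 1):
        xi += np.outer(alpha[t], B[:, obs[t + 1]] * beta[t + 1]) * A / c[t + 1]

    # M-step: re-estimate the parameters from the expected counts.
    new_pi = gamma[0]
    new_A = xi / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[obs == k].sum(axis=0)
    new_B /= gamma.sum(axis=0)[:, None]
    return new_A, new_B, new_pi
```

Each call performs one full EM iteration; repeating it yields a sequence of parameter estimates whose likelihood is non-decreasing, which is exactly the convergence property the thesis derives from the generic EM framework.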

Identifier: oai:union.ndltd.org:DRESDEN/oai:qucosa.de:bsz:14-qucosa-226903
Date: 27 July 2017
Creators: Majewsky, Stefan
Contributors: Technische Universität Dresden, Fakultät Informatik; Dipl.-Inf. Kilian Gebhardt; Prof. Dr.-Ing. habil. Dr. h.c./Univ. Szeged Heiko Vogler; Dr. rer. nat. Daniel Borchmann
Publisher: Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden
Source Sets: Hochschulschriftenserver (HSSS) der SLUB Dresden
Language: English
Detected Language: English
Type: doc-type:bachelorThesis
Format: application/pdf
