In Natural Language Processing (NLP), speech and text are parsed and generated with language models and parser models, and translated with translation models. Each model contains a set of numerical parameters whose values are found by applying a suitable training algorithm to a set of training data.
Many such training algorithms are instances of the Expectation-Maximization (EM) algorithm. In [BSV15], a generic EM algorithm for NLP is described. This work presents a particular speech model, the Hidden Markov model, and its standard training algorithm, the Baum-Welch algorithm. It is then shown that the Baum-Welch algorithm is an instance of the generic EM algorithm introduced by [BSV15], from which it follows that all statements about the generic EM algorithm, in particular its correctness and convergence properties, also apply to the Baum-Welch algorithm.
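The thesis carries this out formally; purely as an illustration of the shape of one Baum-Welch iteration (not of the generic construction in [BSV15]), the following is a minimal Python/NumPy sketch for a discrete-observation HMM. The function name, parameter layout, and toy example are assumptions made here for exposition, and the sketch omits the scaling or log-space tricks needed for long observation sequences.

    import numpy as np

    def baum_welch_step(pi, A, B, obs):
        # One Baum-Welch (EM) iteration for a discrete-observation HMM.
        # Hypothetical sketch: pi is the (N,) initial state distribution,
        # A the (N, N) transition matrix, B the (N, M) emission matrix,
        # obs a length-T sequence of symbol indices. No scaling, so only
        # suitable for short sequences.
        obs = np.asarray(obs)
        T, N = len(obs), len(pi)

        # E step: forward probabilities alpha[t, i] = P(o_1..o_t, q_t = i)
        alpha = np.zeros((T, N))
        alpha[0] = pi * B[:, obs[0]]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

        # ... and backward probabilities beta[t, i] = P(o_{t+1}..o_T | q_t = i)
        beta = np.zeros((T, N))
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

        likelihood = alpha[-1].sum()
        gamma = alpha * beta / likelihood      # P(q_t = i | obs)
        # xi[t, i, j] = P(q_t = i, q_{t+1} = j | obs)
        xi = (alpha[:-1, :, None] * A[None, :, :] *
              (B[:, obs[1:]].T * beta[1:])[:, None, :]) / likelihood

        # M step: re-estimate parameters from the expected counts
        pi_new = gamma[0]
        A_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        B_new = np.stack([gamma[obs == k].sum(axis=0)
                          for k in range(B.shape[1])], axis=1)
        B_new /= gamma.sum(axis=0)[:, None]
        return pi_new, A_new, B_new

    # Toy usage: two hidden states emitting one of two symbols
    pi = np.array([0.5, 0.5])
    A = np.array([[0.8, 0.2], [0.3, 0.7]])
    B = np.array([[0.6, 0.4], [0.1, 0.9]])
    for _ in range(20):
        pi, A, B = baum_welch_step(pi, A, B, [0, 1, 1, 0, 1, 1, 1, 0])

Each such iteration provably does not decrease the likelihood of the training data; this monotonicity is exactly the kind of statement that, per the thesis, transfers from the generic EM algorithm of [BSV15] to the Baum-Welch algorithm.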
Identifier | oai:union.ndltd.org:DRESDEN/oai:qucosa.de:bsz:14-qucosa-226903 |
Date | 27 July 2017 |
Creators | Majewsky, Stefan |
Contributors | Technische Universität Dresden, Fakultät Informatik, Dipl.-Inf. Kilian Gebhardt, Prof. Dr.-Ing. habil. Dr. h.c./Univ. Szeged Heiko Vogler, Dr. rer. nat. Daniel Borchmann |
Publisher | Sächsische Landesbibliothek - Staats- und Universitätsbibliothek Dresden |
Source Sets | Hochschulschriftenserver (HSSS) der SLUB Dresden |
Language | English |
Detected Language | English |
Type | doc-type:bachelorThesis |
Format | application/pdf |