Subject: "MARKOV PROCESSES"
51 |
Joint-space adaptation technique for robust continuous speech recognition / Wang, Chien-Jen. January 1997 (has links)
Thesis (Ph. D.)--University of Washington, 1997. / Vita. Includes bibliographical references (leaves [81]-89).
|
52 |
A Bayesian approach to motif-based protein modeling / Grundy, William Noble. January 1998 (has links)
Thesis (Ph. D.)--University of California, San Diego, 1998. / Vita. Includes bibliographical references (leaves 167-177).
|
53 |
Automating inhabitant interactions in home and workplace environments through data-driven generation of hierarchical partially-observable Markov decision processes / Youngblood, Gregory Michael. Unknown Date (has links)
Thesis (Ph.D.)--The University of Texas at Arlington, 2005. / Source: Dissertation Abstracts International, Volume: 66-12, Section: B, page: 6746. Advisers: Lawrence B. Holder; Diane J. Cook.
|
54 |
A study of some variations on the hidden Markov modelling approach to speaker independent isolated word speech recognition / Leung, Shun Tak Albert. January 1990 (has links)
Thesis (M. Phil.)--University of Hong Kong, 1990.
|
55 |
Optimal strategies for electric energy contract decision making / Song, Haili. January 2000 (has links)
Thesis (Ph. D.)--University of Washington, 2000. / Vita. Includes bibliographical references (leaves 87-95).
|
56 |
Convergence analysis of MCMC method in the study of genetic linkage with missing data / Fisher, Diana. January 2005 (has links)
Thesis (M.A.)--Marshall University, 2005. / Title from document title page. Includes abstract. Document formatted into pages: contains x, 75 p. Bibliography: p. 56-59.
|
57 |
Piecewise linear Markov decision processes with an application to partially observable Markov models / Sawaki, Katsushige. January 1977 (has links)
This dissertation applies policy improvement and successive approximation (value iteration) to a general class of Markov decision processes with discounted costs. In particular, it studies a class of Markov decision processes called piecewise-linear. Piecewise-linear processes are characterized by the property that the value function of a process observed for one period and then terminated is piecewise-linear whenever the terminal reward function is piecewise-linear; partially observable Markov decision processes have this property.
It is shown that there are ε-optimal piecewise-linear value functions and piecewise-constant policies which are simple, meaning that they have only finitely many pieces, each defined on a convex polyhedral set. Algorithms based on policy improvement and successive approximation are developed to compute simple approximations to an optimal policy and the optimal value function. / Business, Sauder School of / Graduate
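The successive-approximation scheme the abstract refers to can be sketched for a finite-state, finite-action MDP. This is a minimal illustration under assumed data (the transition matrices, costs, and discount factor below are hypothetical, not from the thesis), not the thesis's piecewise-linear algorithm itself:

```python
import numpy as np

def value_iteration(P, c, beta, tol=1e-8, max_iter=10_000):
    """Successive approximation for a discounted-cost MDP.

    P: dict mapping each action to an (S, S) transition matrix.
    c: dict mapping each action to an (S,) immediate-cost vector.
    beta: discount factor in (0, 1).
    Returns (value function, greedy policy).
    """
    S = next(iter(c.values())).shape[0]
    v = np.zeros(S)
    for _ in range(max_iter):
        # One-step Bellman backup: per-action expected discounted cost.
        q = np.stack([c[a] + beta * P[a] @ v for a in P])
        v_new = q.min(axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    policy = np.array(list(P))[q.argmin(axis=0)]
    return v, policy

# Hypothetical two-state, two-action example.
P = {"stay": np.array([[0.9, 0.1], [0.2, 0.8]]),
     "move": np.array([[0.5, 0.5], [0.6, 0.4]])}
c = {"stay": np.array([1.0, 2.0]), "move": np.array([1.5, 0.5])}
v, policy = value_iteration(P, c, beta=0.9)
```

Because the Bellman operator is a β-contraction, the iterates converge geometrically to the optimal value function; the thesis's contribution is showing that, for piecewise-linear terminal rewards, each iterate stays piecewise-linear with finitely many pieces.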
|
58 |
On knowledge representation and decision making under uncertainty / Tabaeh Izadi, Masoumeh. January 2007 (has links)
No description available.
|
59 |
Occupation Times of Continuous Markov Processes / Korpas, Agata K. 28 June 2006 (has links)
No description available.
|
60 |
On estimation for a combined Markov and semi-Markov model with censoring / Yeo, Sungchil. January 1987 (has links)
No description available.
|