31 |
Quantum chains / Bose, A. (Amitava). January 1968 (has links)
No description available.
|
32 |
Estimation techniques for Markov chains, including chains with a constant causative matrix / Dansereau, Maryse. January 1974 (has links)
No description available.
|
33 |
Maximum likelihood estimation for Markov renewal processes / Solvason, Diane Lynn. January 1977 (has links)
No description available.
|
34 |
Higher-order Markov chain models for categorical data sequences / Fung, Siu-leung., 馮紹樑. January 2003 (has links)
published_or_final_version / abstract / toc / Mathematics / Master / Master of Philosophy
|
35 |
High-dimensional Markov chain models for categorical data sequences with applications / Fung, Siu-leung., 馮紹樑. January 2006 (has links)
published_or_final_version / abstract / Mathematics / Doctoral / Doctor of Philosophy
|
36 |
Synthesis of dynamic systems with Markovian characteristics / Xiong, Junlin., 熊軍林. January 2007 (has links)
published_or_final_version / abstract / Mechanical Engineering / Doctoral / Doctor of Philosophy
|
37 |
A game-theoretic model for repeated helicopter allocation between two squads / McGowan, Jason M. 06 1900 (has links)
A platoon commander has a helicopter to support two squads, which encounter two types of missions -- critical or routine -- on a daily basis. During a mission, a squad always benefits from having the helicopter, but the benefit is greater during a critical mission than during a routine mission. Because the commander cannot verify the mission type beforehand, a selfish squad would always claim a critical mission to compete for the helicopter, which leaves the commander no choice but to assign the helicopter at random. In order to encourage truthful reports from the squads, we design a token system that works as follows. Each squad keeps a token bank, with tokens deposited at a certain frequency. A squad must spend either 1 or 2 tokens to request the helicopter, while the commander assigns the helicopter to the squad who spends more tokens, or breaks a tie at random. The two selfish squads become players in a two-person non-zero-sum game. We find the Nash equilibrium of this game, and use numerical examples to illustrate the benefit of the token system. / US Navy (USN) author.
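The incentive problem described above can be sketched as a one-shot bimatrix game between the two squads, each choosing to bid "truthful" (spend 2 tokens only on a critical mission) or "greedy" (always spend 2). The payoff numbers below are illustrative placeholders, not figures from the thesis; the code only shows how pure Nash equilibria of such a game would be found by checking unilateral deviations.

```python
from itertools import product

# Hypothetical expected payoffs (squad 1, squad 2) for each strategy pair.
# Values are invented to mimic a dilemma structure; they are NOT from the thesis.
PAYOFFS = {
    ("truthful", "truthful"): (3.0, 3.0),
    ("truthful", "greedy"):   (2.0, 3.5),
    ("greedy",   "truthful"): (3.5, 2.0),
    ("greedy",   "greedy"):   (2.5, 2.5),
}
STRATEGIES = ("truthful", "greedy")

def pure_nash_equilibria(payoffs):
    """Return strategy pairs from which neither player gains by deviating alone."""
    equilibria = []
    for a, b in product(STRATEGIES, repeat=2):
        u1, u2 = payoffs[(a, b)]
        best1 = all(payoffs[(alt, b)][0] <= u1 for alt in STRATEGIES)
        best2 = all(payoffs[(a, alt)][1] <= u2 for alt in STRATEGIES)
        if best1 and best2:
            equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria(PAYOFFS))  # with these payoffs: [('greedy', 'greedy')]
```

With these placeholder payoffs the unique pure equilibrium is mutual greed, which is exactly the situation a well-designed token economy is meant to break by making over-bidding costly across repeated rounds.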
|
38 |
Modelling and control of birth and death processes / Getz, Wayne Marcus. 29 January 2015 (has links)
A thesis submitted to the Faculty of Science,
University of the Witwatersrand, Johannesburg,
in fulfilment of the requirements for the degree
of Doctor of Philosophy
February 1976 / This thesis treats systems of ordinary differential equations that
are extracted from the Kolmogorov forward equations of a class of Markov
processes, known generally as birth and death processes. In particular
we extract and analyze systems of equations which describe the dynamic
behaviour of the second-order moments of the probability distribution
of populations governed by birth and death processes. We show that
these systems form an important class of stochastic population models
and conclude that they are superior to those stochastic models derived
by adding a noise term to a deterministic population model. We also
show that these systems are readily used in population control studies,
in which the cost of uncertainty in the population mean size is taken
into account.
The first chapter formulates the univariate linear birth and
death process in its most general form. The probability distribution
for the constant parameter case is obtained exactly, which allows one
to state, as special cases, results on the simple birth and death,
Poisson, Pascal, Polya, Palm and Arley processes. Control of a
population, modelled by the linear birth and death process, is considered
next. Particular attention is paid to system performance indices
which take into account the cost associated with non-zero variance
and the cost of improving initial estimates of the size of the
population under control.
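A minimal illustration of the process class the thesis studies: a Gillespie simulation of the linear birth and death process (birth rate lam*n, death rate mu*n), whose empirical mean can be checked against the exact first-moment solution m(t) = n0*exp((lam - mu)*t). All parameter values below are arbitrary choices for the sketch, not taken from the thesis.

```python
import math
import random

def gillespie_birth_death(n0, lam, mu, t_end, rng):
    """Simulate one path of a linear birth-death process (rates lam*n, mu*n)
    by the Gillespie algorithm; return the population size at t_end."""
    n, t = n0, 0.0
    while n > 0:
        rate = (lam + mu) * n       # total event rate at population n
        t += rng.expovariate(rate)  # time to the next birth or death
        if t > t_end:
            break
        # Birth with probability lam/(lam+mu), death otherwise.
        n += 1 if rng.random() < lam / (lam + mu) else -1
    return n

rng = random.Random(42)
n0, lam, mu, t_end, runs = 20, 1.0, 0.5, 1.0, 2000
samples = [gillespie_birth_death(n0, lam, mu, t_end, rng) for _ in range(runs)]

empirical_mean = sum(samples) / runs
exact_mean = n0 * math.exp((lam - mu) * t_end)  # solution of m' = (lam - mu) m
print(empirical_mean, exact_mean)
```

The sample mean should land close to the exact mean, while the sample variance grows with time, the spread that the thesis's second-order moment equations track explicitly rather than bolting a noise term onto a deterministic model.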
|
39 |
Generation of the steady state for Markov chains using regenerative simulation. January 1993 (has links)
by Yuk-ka Chung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves 73-74). /
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Regenerative Simulation --- p.5
§ 2.1 --- Discrete time discrete state space Markov chain --- p.5
§ 2.2 --- Discrete time continuous state space Markov chain --- p.8
Chapter 3 --- Estimation --- p.14
§ 3.1 --- Ratio estimators --- p.14
§ 3.2 --- General method for generation of steady states from the estimated stationary distribution --- p.17
§ 3.3 --- Bootstrap method --- p.22
§ 3.4 --- A new approach: the scoring method --- p.26
§ 3.4.1 --- G(0) method --- p.29
§ 3.4.2 --- G(1) method --- p.31
Chapter 4 --- Bias of the Scoring Sampling Algorithm --- p.34
§ 4.1 --- General form --- p.34
§ 4.2 --- Bias of G(0) estimator --- p.36
§ 4.3 --- Bias of G(1) estimator --- p.43
§ 4.4 --- Estimation of bounds for bias: stopping criterion for simulation --- p.51
Chapter 5 --- Simulation Study --- p.54
Chapter 6 --- Discussion --- p.70
References --- p.73
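As a rough sketch of the ratio-estimator idea behind regenerative simulation: cycles begin at successive returns to a fixed regeneration state, and the stationary probability of each state is estimated as (mean visits per cycle) / (mean cycle length). The three-state chain, regeneration state, and cycle count below are invented for illustration and are not taken from the thesis.

```python
import random

# A small 3-state chain whose stationary distribution is known analytically,
# pi = (9/28, 12/28, 7/28), used to sanity-check the ratio estimator.
P = [
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
]

def step(state, rng):
    """Sample the next state from row P[state] by inverse transform."""
    u, cum = rng.random(), 0.0
    for j, p in enumerate(P[state]):
        cum += p
        if u < cum:
            return j
    return len(P) - 1

def regenerative_estimate(regen_state, cycles, rng):
    """Ratio estimator: pi(j) = (visits to j per cycle) / (cycle length),
    where each cycle starts at regen_state and ends on the next return."""
    visits = [0] * len(P)
    total_len = 0
    for _ in range(cycles):
        state = regen_state
        while True:
            visits[state] += 1
            total_len += 1
            state = step(state, rng)
            if state == regen_state:
                break
    return [v / total_len for v in visits]

rng = random.Random(0)
pi_hat = regenerative_estimate(0, 5000, rng)
print(pi_hat)  # close to [0.321, 0.429, 0.250]
```

Because cycles are i.i.d., standard errors and bias bounds (the subject of Chapter 4 above) can be computed from per-cycle quantities rather than from one long correlated run.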
|
40 |
Methods for modelling precipitation persistence / Weak, Brenda Ann. January 2010 (has links)
Digitized by Kansas Correctional Industries
|