141. Optimal asset allocation problems under the discrete-time regime-switching model. Cheung, Ka-chun, 張家俊. January 2005.
Published or final version / abstract / toc / Statistics and Actuarial Science / Doctoral / Doctor of Philosophy
142. Stochastic models for optimal control problems with applications. Leung, Ho-yin, 梁浩賢. January 2009.
Published or final version / Mathematics / Master / Master of Philosophy
143. Asset-liability management under regime-switching models. Chen, Ping, 陈平. January 2009.
Published or final version / Statistics and Actuarial Science / Doctoral / Doctor of Philosophy
144. Some applications of Dirichlet forms in probability theory. McGillivray, Ivor Edward. January 1992.
No description available.
145. A wavelet-based prediction technique for concealment of loss-packet effects in wireless channels. Garantziotis, Anastasios. 06 1900.
In this thesis, a wavelet-based prediction method is developed for concealing packet-loss effects in wireless channels. The proposed method uses a wavelet decomposition algorithm to process the data and then applies the well-known linear prediction technique to estimate one or more approximation coefficients, as necessary, at the lowest resolution level. The predicted sample stream is produced by using the predicted approximation coefficients and by exploiting certain sample-value patterns in the detail coefficients. To test the effectiveness of the proposed scheme, a wireless channel based on a three-state Markov model is developed and simulated. Simulation results for the transmission of image and speech packet streams over a wireless channel are reported for both wavelet-based prediction and direct linear prediction. In all the simulations run in this work, the wavelet-based method outperformed the direct linear prediction method. / Hellenic Navy author.
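As a rough illustration of the prediction idea described in this abstract, the sketch below decomposes the received samples with a single-level Haar transform, extrapolates the approximation coefficients with a least-squares linear predictor, and reconstructs a replacement for the lost samples. The Haar wavelet, the single decomposition level, the predictor order and the zeroed detail coefficients are simplifying assumptions made here, not choices taken from the thesis.

```python
import numpy as np

def haar_decompose(x):
    # Single-level Haar transform: pairwise averages (approximation) and differences (detail).
    x = np.asarray(x, dtype=float)
    pairs = x[: len(x) // 2 * 2].reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return approx, detail

def haar_reconstruct(approx, detail):
    # Inverse of the single-level Haar transform.
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    return np.column_stack([even, odd]).ravel()

def predict_lost_samples(history, n_missing, ar_order=4):
    """Predict n_missing samples following `history` by extrapolating the
    Haar approximation coefficients with a least-squares linear predictor."""
    approx, _ = haar_decompose(history)
    # Fit approx[t] ~ w . approx[t - ar_order : t] by least squares.
    X = np.array([approx[i : i + ar_order] for i in range(len(approx) - ar_order)])
    y = approx[ar_order:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    ext = list(approx)
    for _ in range((n_missing + 1) // 2):
        ext.append(float(np.dot(w, ext[-ar_order:])))
    new_approx = np.array(ext[len(approx):])
    # Detail coefficients for the missing span are unknown; zeros are used here.
    return haar_reconstruct(new_approx, np.zeros_like(new_approx))[:n_missing]

# Example: conceal a lost 8-sample packet of a noisy sinusoid.
t = np.arange(256)
signal = np.sin(2 * np.pi * t / 32) + 0.05 * np.random.default_rng(1).standard_normal(256)
print(predict_lost_samples(signal[:128], n_missing=8))
```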
146. Information-driven pricing kernel models. Parbhoo, Priyanka Anjali. 30 July 2013.
A thesis submitted for the degree of Doctor of Philosophy, 2013.
This thesis presents a range of related pricing kernel models that are driven by
incomplete information about a series of future unknowns. These unknowns may,
for instance, represent fundamental macroeconomic, political or social random
variables that are revealed at future times. They may also represent latent or
hidden factors that are revealed asymptotically. We adopt the information-based
approach of Brody, Hughston and Macrina (BHM) to model the information processes
associated with the random variables. The market filtration is generated
collectively by these information processes. By directly modelling the pricing
kernel, we generate information-sensitive arbitrage-free models for the term structure
of interest rates, the excess rate of return required by investors, and security
prices. The pricing kernel is modelled by a supermartingale to ensure that nominal
interest rates remain non-negative. To begin with, we primarily investigate
finite-time pricing kernel models that are sensitive to Brownian bridge information.
The BHM framework for the pricing of credit-risky instruments is extended
to a stochastic interest rate setting. In addition, we construct recovery models,
which take into consideration information about, for example, the state of the
economy at the time of default. We examine various explicit examples of analytically
tractable information-driven pricing kernel models. We develop a model
that shares many of the features of the rational lognormal model, and investigate
examples of heat kernel models. It is shown that these models may result
in discount bonds and interest rates being bounded by deterministic functions.
In certain situations, incoming information about random variables may exhibit jumps. To this end, we construct a more general class of finite-time pricing kernel models that are driven by Lévy random bridges. Finally, we model the aggregate impact of uncertainties on a financial market by randomised mixtures of Lévy and Markov processes respectively. It is assumed that market participants have incomplete information about the underlying random mixture. We apply results from non-linear filtering theory and construct Flesaker-Hughston models and infinite-time heat kernel models based on these randomised mixtures.
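As a minimal sketch of the information-based setup this thesis builds on, the following simulates a single Brownian-bridge information process for a discrete market factor X revealed at time T, and recovers the conditional distribution of X using the standard Brody-Hughston-Macrina filtering formula. The parameter values and the choice of a single factor are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
T, sigma = 1.0, 1.5
x_vals = np.array([0.0, 0.5, 1.0])      # possible outcomes of X (assumed)
p_prior = np.array([0.2, 0.5, 0.3])     # prior probabilities (assumed)

n_steps = 200
times = np.linspace(0.0, T, n_steps + 1)

# Simulate X and a Brownian bridge on [0, T] pinned at 0 at both ends.
X = rng.choice(x_vals, p=p_prior)
dW = rng.standard_normal(n_steps) * np.sqrt(T / n_steps)
W = np.concatenate([[0.0], np.cumsum(dW)])
bridge = W - (times / T) * W[-1]

# The information process: xi_t = sigma * t * X + Brownian bridge noise.
xi = sigma * times * X + bridge

def conditional_probs(t, xi_t):
    """P(X = x_i | xi_t) via the Brody-Hughston-Macrina formula."""
    if t >= T:
        return (x_vals == X).astype(float)
    expo = (T / (T - t)) * (sigma * x_vals * xi_t - 0.5 * sigma**2 * x_vals**2 * t)
    w = p_prior * np.exp(expo - expo.max())   # subtract max for numerical stability
    return w / w.sum()

# The conditional expectation of X sharpens towards the true value as t -> T.
for t_idx in (0, n_steps // 2, n_steps - 1):
    t = times[t_idx]
    probs = conditional_probs(t, xi[t_idx])
    print(f"t = {t:.2f}  E[X | xi_t] = {probs @ x_vals:.3f}  (true X = {X})")
```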
147. Markov processes and Martingale generalisations on Riesz spaces. Vardy, Jessica Joy. 25 July 2013.
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfillment of the requirements for the degree of Doctor of Philosophy, April 2013.
In a series of papers by Wen-Chi Kuo, Coenraad Labuschagne and Bruce Watson, results of martingale theory were generalised to the abstract setting of Riesz spaces. This thesis presents a survey of those results and aims to expand upon the work of these authors. In particular, independence results will be considered and used to generalise well-known results in the theory of Markov processes to Riesz spaces.
Mixingales and quasi-martingales will be translated to the Riesz space
setting.
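For orientation (a standard textbook statement, not text from the dissertation), the classical one-step Markov property that such generalisations start from can be written purely in terms of conditional expectation operators, which is the form that admits an analogue when conditional expectations are replaced by suitable positive order-continuous projections on a Riesz space:
\[
  \mathbb{E}\bigl[f(X_{t_{n+1}}) \mid X_{t_1},\dots,X_{t_n}\bigr]
  = \mathbb{E}\bigl[f(X_{t_{n+1}}) \mid X_{t_n}\bigr],
  \qquad t_1 < t_2 < \cdots < t_{n+1},
\]
for all bounded measurable f.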
148. Accelerating decision making under partial observability using learned action priors. Mabena, Ntokozo. January 2017.
Thesis (M.Sc.)--University of the Witwatersrand, Faculty of Science, School of Computer Science and Applied Mathematics, 2017.
Partially Observable Markov Decision Processes (POMDPs) provide a principled mathematical
framework allowing a robot to reason about the consequences of actions and
observations with respect to the agent's limited perception of its environment. They
allow an agent to plan and act optimally in uncertain environments. Although they
have been successfully applied to various robotic tasks, they are infamous for their high
computational cost. This thesis demonstrates the use of knowledge transfer, learned
from previous experiences, to accelerate the learning of POMDP tasks. We propose
that for an agent to learn to solve these tasks more quickly, it must be able to generalise from past behaviours and transfer knowledge, learned from solving multiple tasks, between different circumstances. We present a method for accelerating this learning
process by learning the statistics of action choices over the lifetime of an agent, known
as action priors. Action priors specify the usefulness of actions in situations and allow
us to bias exploration, which in turn improves the performance of the learning process.
Using navigation domains, we study the degree to which transferring knowledge
between tasks in this way results in a considerable speed-up in solution times. This thesis therefore makes the following contributions. We provide an algorithm for learning action priors from a set of approximately optimal value functions, and two approaches with which prior knowledge over actions can be used in a POMDP context. As such, we show that considerable gains in speed can be achieved in learning subsequent tasks using prior knowledge rather than learning from scratch. Learning with action priors can be particularly useful in reducing the cost of exploration in the early stages of the learning process, as the priors act as a mechanism that allows the agent to select more useful actions in particular circumstances. Thus, we demonstrate how the initial losses associated with unguided exploration can be alleviated through the use of action priors, which allow for safer exploration. Additionally, we illustrate that action priors can shorten the time needed to learn feasible policies. / MT2018
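A minimal sketch of the action-prior idea described in this abstract is given below: action choices made by (near-)optimal policies on earlier tasks are counted per situation, smoothed with a pseudo-count, and used to bias exploratory action selection on a new task. The class and method names, the situation labels and the pseudo-count smoothing are illustrative assumptions; this is not the algorithm as implemented in the thesis.

```python
import random
from collections import defaultdict

class ActionPrior:
    """Counts of action choices accumulated across previously solved tasks,
    smoothed with a Dirichlet-style pseudo-count and used to bias exploration."""

    def __init__(self, actions, pseudo_count=1.0):
        self.actions = list(actions)
        self.alpha = pseudo_count
        # situation -> action -> count
        self.counts = defaultdict(lambda: defaultdict(float))

    def record(self, situation, action):
        # Called whenever a (near-)optimal policy from an earlier task chose `action`.
        self.counts[situation][action] += 1.0

    def probabilities(self, situation):
        # Pseudo-count-smoothed distribution over actions for this situation.
        c = self.counts[situation]
        totals = {a: c[a] + self.alpha for a in self.actions}
        z = sum(totals.values())
        return {a: t / z for a, t in totals.items()}

    def sample_exploratory_action(self, situation):
        # Biased exploration: actions chosen often in the past are proposed more often.
        probs = self.probabilities(situation)
        r, acc = random.random(), 0.0
        for a, p in probs.items():
            acc += p
            if r <= acc:
                return a
        return self.actions[-1]

# Usage with a hypothetical navigation agent: during exploration steps,
# draw from the prior instead of a uniform distribution over actions.
prior = ActionPrior(actions=["forward", "turn_left", "turn_right", "sense"])
prior.record("corridor", "forward")
prior.record("corridor", "forward")
prior.record("junction", "sense")
print(prior.probabilities("corridor"))
print(prior.sample_exploratory_action("corridor"))
```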
149. Stochastic modelling in management sciences. January 1986.
by Shing-chiang Wong. / Bibliography: leaf 73 / Thesis (M.Ph.)--Chinese University of Hong Kong, 1986
150. Consecutive-k-out-of-n: F repairable system with Markov dependence of order (k-1). January 1999.
by Ng Hon Keung Tony.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. Includes bibliographical references (leaves 85-87). Abstracts in English and Chinese.
Contents:
Chapter 1. Introduction (p.1)
Chapter 2. Probability Analysis of Consecutive-k-out-of-n: F system with (k-1)-step Markov Dependence (p.11)
  2.1 Model and Assumptions (p.11)
  2.2 Find Out the Failure Risk of the System (p.17)
  2.3 The Linear Consecutive-3-out-of-4: F Repairable system with 2-step Markov Dependence (p.27)
    2.3.1 Failure risk of the system (p.27)
    2.3.2 Priority repair rule (p.32)
  2.4 The Circular Consecutive-3-out-of-4: F Repairable system with 2-step Markov Dependence (p.41)
    2.4.1 Failure risk of the system (p.41)
    2.4.2 Priority repair rule (p.47)
Chapter 3. System Analysis for Linear Consecutive-3-out-of-4: F system with 2-step Markov Dependence by Laplace Transform Method (p.57)
Chapter 4. Reliability Indices for Consecutive-3-out-of-4: F system with 2-step Markov Dependence by Numerical Method and Simulation Study (p.69)
  4.1 Numerical Method (p.69)
    4.1.1 Linear consecutive-3-out-of-4: F repairable system with 2-step Markov dependence (p.70)
    4.1.2 Comparison between Laplace transform method and numerical method (p.72)
    4.1.3 Circular consecutive-3-out-of-4: F repairable system with 2-step Markov dependence (p.75)
  4.2 Numerical Results (p.75)
  4.3 Simulation Study (p.78)
  4.4 Comparison between Simulation Method and Numerical Method (p.79)
Figures 1 to 7 (p.82-84)
References (p.85-87)
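As a rough, static illustration of the kind of system studied in this thesis, the Monte Carlo sketch below estimates the failure risk of a linear consecutive-3-out-of-4:F system in which each component's failure probability depends on the states of its two predecessors (2-step Markov dependence). The conditional failure probabilities are made-up placeholders, and the repair dynamics, priority repair rules and circular variant treated in the thesis are ignored.

```python
import random

def linear_consecutive_k_out_of_n_F_risk(n=4, k=3, trials=100_000, seed=0):
    """Monte Carlo estimate of the probability that at least k consecutive
    components are failed, with (k-1)-step Markov dependence between
    component states.  All numerical values are illustrative assumptions."""
    random.seed(seed)

    def fail_prob(prev_states):
        # Assumed: each failed predecessor raises the failure probability.
        failed = sum(prev_states)
        return min(0.9, 0.05 * (3.0 ** failed))

    failures = 0
    for _ in range(trials):
        states = []
        for i in range(n):
            prev = states[max(0, i - (k - 1)):i]
            # Components before the start of the line are treated as working.
            prev = [0] * ((k - 1) - len(prev)) + prev
            states.append(1 if random.random() < fail_prob(prev) else 0)
        # The system fails if some k consecutive components have all failed.
        if any(all(states[j] for j in range(i, i + k)) for i in range(n - k + 1)):
            failures += 1
    return failures / trials

print(linear_consecutive_k_out_of_n_F_risk())
```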