331 |
Modeling exotic options with maturity extensions by stochastic dynamic programming / Tapeinos, Socratis. January 2009.
The exotic options that are examined in this thesis have a combination of non-standard characteristics which can be found in shout, multi-callable, path-dependent and Bermudan options. These options are called reset options. A reset option is an option which allows the holder to reset, one or more times, certain terms of the contract based on pre-specified rules during the life of the option. Overall in this thesis, an attempt has been made to tackle the modeling challenges that arise from the exotic properties of the reset option embedded in segregated funds.

Initially, the relevant literature was reviewed and the lack of published work advanced enough to deal with the complexities of the reset option was identified. Hence, there appears to be a clear and urgent need for more sophisticated approaches to model the reset option. The reset option on the maturity guarantee of segregated funds is formulated as a non-stationary finite horizon Markov Decision Process. The returns from the underlying asset are modeled using a discrete time approximation of the lognormal model. An Optimal Exercise Boundary of the reset option is derived, depicting a threshold value such that if the value of the underlying asset price exceeds it then it is optimal for the policyholder to reset his maturity guarantee; otherwise, it is optimal for the policyholder to roll over his maturity guarantee. It is noteworthy that the model is able to depict the Optimal Exercise Boundary not just of the first but of all the segregated fund contracts which can be issued throughout the planning horizon of the policyholder. The main finding of the model is that as the segregated fund contract approaches its maturity, the threshold value in the Optimal Exercise Boundary increases. However, in the last period before the maturity of the segregated fund, the threshold value decreases, because if the reset option is not exercised it will expire worthless.

The model is then extended to reflect the characteristics of the range of products which are traded in the market. Firstly, the issuer of the segregated fund contract is allowed to charge a management fee to the policyholder. The effect of incorporating this fee is that the policyholder requires a higher return in order to optimally reset his maturity guarantee, while the total value of the segregated fund is diminished. Secondly, the maturity guarantee becomes a function of the number of times that the reset option has been exercised. The effect is that the policyholder requires a higher return in order to choose to reset his maturity guarantee, while the total value of the segregated fund is diminished. Thirdly, the policyholder is allowed to reset the maturity guarantee at any point in time within each year from the start of the planning horizon, but only once. The effect is that the total value of the segregated fund is increased, since the policyholder may lock in higher market gains as he has more reset decision points.

In response to the well documented deficiencies of the lognormal model in capturing the jumps experienced by stock markets, extensions were built which incorporate such jumps in the original model. The effect of incorporating such jumps is that the policyholder requires a higher return in order to choose to reset his maturity guarantee, while the total value of the segregated fund is diminished due to the adverse effect of the negative jumps on the value of the underlying asset.
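As a rough illustration of the kind of backward-induction computation described above (and not the author's actual model), the sketch below values a simplified reset guarantee on a discretised lognormal grid and reads off a crude exercise boundary. All parameters, the payoff max(S, G) at maturity, and the omission of fees, guarantee step-ups and maturity extensions are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical parameters (purely illustrative, not the thesis calibration).
mu, sigma, r = 0.06, 0.20, 0.03      # drift, volatility, discount rate
T, steps = 10, 10                    # ten annual reset decision dates
dt = T / steps
disc = np.exp(-r * dt)

# Discrete approximation of the lognormal one-period return (Gauss-Hermite nodes).
z, w = np.polynomial.hermite_e.hermegauss(7)
w = w / w.sum()
growth = np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)

s_grid = np.linspace(0.2, 5.0, 120)   # asset value grid
g_grid = s_grid.copy()                # guarantee levels live on the same grid

# Terminal value of the contract: fund value topped up to the guarantee.
V = np.maximum.outer(s_grid, g_grid)  # V[i, j] = max(s_i, g_j)

boundary = []
for t in reversed(range(steps)):
    cont = np.empty_like(V)
    for j in range(len(g_grid)):
        # Expected discounted value of rolling over (keeping guarantee g_j).
        future = np.array([np.interp(s_grid * k, s_grid, V[:, j]) for k in growth])
        cont[:, j] = disc * (w @ future)
    # Resetting sets the guarantee equal to the current asset value.
    reset_val = np.array([np.interp(s, g_grid, cont[i, :])
                          for i, s in enumerate(s_grid)])
    V = np.maximum(cont, reset_val[:, None])
    # Crude exercise boundary, read off at a reference guarantee level g = 1.
    jref = np.searchsorted(g_grid, 1.0)
    better = s_grid[reset_val > cont[:, jref] + 1e-9]
    boundary.append((t, better.min() if better.size else None))

for t, b in reversed(boundary):
    level = f"about {b:.2f}" if b is not None else "no level on this grid"
    print(f"decision date {t}: reset preferred once S exceeds {level} (guarantee 1.0)")
```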
|
332 |
A wavelet-based prediction technique for concealment of loss-packet effects in wireless channels / Garantziotis, Anastasios. 06 1900.
In this thesis, a wavelet-based prediction method is developed for concealing packet-loss effects in wireless channels. The proposed method utilizes a wavelet decomposition algorithm in order to process the data and then applies the well-known linear prediction technique to estimate one or more approximation coefficients as necessary at the lowest resolution level. The predicted sample stream is produced by using the predicted approximation coefficients and by exploiting certain sample value patterns in the detail coefficients. In order to test the effectiveness of the proposed scheme, a wireless channel based on a three-state Markov model is developed and simulated. Simulation results for transmission of image and speech packet streams over a wireless channel are reported for both the wavelet-based prediction and direct linear prediction. In all the simulations run in this work, the wavelet-based method outperformed the direct linear prediction method. / Hellenic Navy author.
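A minimal sketch of the general idea described above (wavelet decomposition followed by linear prediction of the coarsest approximation coefficients). It uses PyWavelets and NumPy; the wavelet, decomposition level, prediction order and test signal are arbitrary choices for illustration, not those of the thesis, and the final reconstruction from detail-coefficient patterns is omitted.

```python
import numpy as np
import pywt

def lp_predict(history, order=4, n_ahead=1):
    """Predict future samples with a least-squares autoregressive (linear prediction) fit."""
    X = np.array([history[i:i + order] for i in range(len(history) - order)])
    y = history[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    h = list(history)
    for _ in range(n_ahead):
        h.append(float(np.dot(coeffs, h[-order:])))
    return h[len(history):]

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.05 * rng.standard_normal(256)

# Decompose the correctly received samples preceding the lost packet.
coeffs = pywt.wavedec(signal, 'db4', level=3)   # [cA3, cD3, cD2, cD1]

# Predict the next approximation coefficients at the lowest resolution level;
# concealment would then rebuild the missing samples via pywt.waverec.
predicted_cA3 = lp_predict(coeffs[0], order=4, n_ahead=2)
print("predicted coarse approximation coefficients:", predicted_cA3)
```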
|
333 |
A Markov model for measuring artillery fire support effectiveness / Guzik, Dennis M. 09 1900.
Approved for public release; distribution is unlimited / This thesis presents a Markov model, which, given an indirect fire weapon system's parameters, yields measures of the weapon's effectiveness in providing fire support to a maneuver element. These parameters may be determined for a variety of different scenarios. Any indirect fire weapon system may be a candidate for evaluation. This model may be used in comparing alternative weapon
systems for the role of direct support of a Marine Corps infantry battalion. The issue of light gun vs. heavy gun was the impetus for the study. The thesis also provides insight into the tactic of frequently moving an indirect fire weapon to avoid enemy detection, and possible subsequent attack. / http://archive.org/details/markovmodelforme00guzi / Captain, United States Marine Corps
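Purely as an illustration of how a Markov chain can turn weapon-system parameters into effectiveness measures (the states, transition probabilities, and per-step output below are hypothetical and are not taken from the thesis), one can compute quantities such as the expected rounds delivered before the unit is put out of action using the fundamental matrix of an absorbing chain:

```python
import numpy as np

# Hypothetical states: 0 = firing, 1 = displacing (moving to avoid detection),
# 2 = out of action (absorbing).
P = np.array([
    [0.80, 0.15, 0.05],   # firing -> keep firing / displace / be neutralised
    [0.60, 0.35, 0.05],   # displacing -> resume firing / keep moving / be neutralised
    [0.00, 0.00, 1.00],   # absorbing state
])

Q = P[:2, :2]                        # transitions among transient states
N = np.linalg.inv(np.eye(2) - Q)     # fundamental matrix: expected visits to each state
rounds_per_step = np.array([6.0, 0.0])   # rounds delivered per step while firing / moving

expected_steps = N.sum(axis=1)            # expected steps until out of action
expected_rounds = N @ rounds_per_step     # expected rounds delivered before neutralisation
print("expected steps until out of action (starting while firing):", expected_steps[0])
print("expected rounds delivered (starting while firing):", expected_rounds[0])
```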
|
334 |
Extensions of a limit theorem for an agent-based model / Muñoz Hernández, Felipe Andrés. January 2016.
Master's degree in Engineering Sciences, specialization in Applied Mathematics.
Mathematical Civil Engineer / This work seeks to extend a law-of-large-numbers type result for the rescaled empirical measure associated with a stochastic agent-based model, previously introduced in the literature, to a more general class of models. Specifically, the extension considered takes into account two new evolution mechanisms in addition to those already considered. In this way the agents, who are characterized by their type, may randomly interact, change their type, die, and produce new agents.
We begin by constructing the empirical measure process from its infinitesimal generator, which yields a measure-valued Markov jump process. We then derive some of its properties; in particular, a pathwise representation of the process in terms of Poisson point measures is obtained. This pathwise representation yields an associated martingale property, which gives an idea of what the system of equations that the limiting measure should satisfy looks like. Once this is done, we follow a classical scheme for proving results of this type. We first prove that the proposed system has a unique solution, then show that the sequence of laws associated with the sequence of rescaled empirical measure processes is a tight family of measures, and finally prove that every limit point of these laws satisfies the system. As a consequence, thanks to the uniqueness of the latter, we conclude convergence in distribution, as the rescaling limit is taken, of the rescaled empirical measure process to a deterministic process that solves the system.
Finally, applications of the obtained result are shown for three proposed models, and we conclude by discussing the possibility of a central limit theorem for this type of model.
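A toy direct simulation of an individual-based dynamic of this general kind (birth, death, spontaneous type change, and pairwise interaction), written in a Gillespie style. All rates and the two-point type space are invented for illustration; the rescaled empirical-measure limit studied in the thesis is not computed here.

```python
import random

# Hypothetical per-capita rates: birth, death, spontaneous type change, pairwise interaction.
BIRTH, DEATH, MUTATE, INTERACT = 0.8, 0.5, 0.3, 0.002
TYPES = [0, 1]

def step(population, t):
    n = len(population)
    rates = [BIRTH * n, DEATH * n, MUTATE * n, INTERACT * n * (n - 1)]
    total = sum(rates)
    if total == 0:
        return population, float('inf')
    t += random.expovariate(total)                      # exponential waiting time
    event = random.choices(range(4), weights=rates)[0]  # which mechanism fires
    i = random.randrange(n)
    if event == 0:                       # birth: offspring inherits parent's type
        population.append(population[i])
    elif event == 1:                     # death
        population.pop(i)
    elif event == 2:                     # spontaneous type change
        population[i] = random.choice(TYPES)
    else:                                # interaction: agent adopts a random partner's type
        population[i] = population[random.randrange(n)]
    return population, t

pop, t = [random.choice(TYPES) for _ in range(200)], 0.0
while t < 5.0 and pop:
    pop, t = step(pop, t)

# Empirical measure at the final time: fraction of agents of each type.
print({k: pop.count(k) / len(pop) for k in TYPES} if pop else "extinct")
```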
|
335 |
Optimal threshold policy for opportunistic network coding under phase type arrivals / Gunasekara, Charith. 01 September 2016.
Network coding allows each node in a network to perform some coding operations on the data packets and improve the overall throughput of communication. However, network coding cannot be done unless there are enough packets to be coded, so at times it may be advantageous to wait for packets to arrive.
We consider a scenario in which two wireless nodes, each with its own buffer, communicate via a single access point using network coding. The access point first pairs each data packet being sent from each node and then performs the network coding operation. Packets arriving at the access point that cannot be paired are instead loaded into one of the two buffers at the access point. In the case where one of the buffers is empty and the other is not, network coding is not possible. When this happens the access point must either wait for a network coding opportunity, or transmit the unpaired packet without coding. Delaying packet transmission is associated with an increased waiting cost but also allows for an increase in the overall efficiency of wireless spectrum usage, and thus a decrease in packet transmission cost. Conversely, sending packets uncoded is associated with a decrease in waiting cost but also a decrease in the overall efficiency of wireless spectrum usage. Hence, there is a trade-off between decreasing packet delay time and increasing the efficiency of wireless spectrum usage.
We show that the optimal waiting policy for this system with respect to total cost, under phase-type packet arrivals, is to have a separate threshold for the buffer size that depends on the current phase of each arrival. We then show that the solution to this optimization problem can be obtained by treating it as a double-ended push-out queueing problem. We develop a new technique to keep track of the packet waiting time and the number of packets waiting in the double-ended push-out queue. We use the resulting queueing model to resolve the optimal threshold policy and then analyze the performance of the system using a numerical approach. / October 2016
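A highly simplified sketch of the kind of phase-dependent threshold rule described above. The thresholds, phase labels and decision logic below are placeholders; the thesis derives the actual optimal thresholds from the push-out queueing analysis rather than assuming them.

```python
# Hypothetical per-phase thresholds: keep waiting for a coding partner only while
# the queue of unpaired packets is below the threshold for the current arrival phase.
THRESHOLDS = {0: 3, 1: 1}   # e.g. phase 0 = "more arrivals expected soon", phase 1 = "idle"

def access_point_action(own_buffer_len, other_buffer_len, current_phase):
    """Decide whether to code, transmit an unpaired packet uncoded, or wait."""
    if other_buffer_len > 0:
        return "code"                  # a partner packet exists: code and send the pair
    if own_buffer_len >= THRESHOLDS[current_phase]:
        return "send_uncoded"          # waiting cost outweighs the expected coding gain
    return "wait"

print(access_point_action(own_buffer_len=2, other_buffer_len=0, current_phase=0))  # wait
print(access_point_action(own_buffer_len=2, other_buffer_len=0, current_phase=1))  # send_uncoded
```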
|
336 |
Epidemics on complex networks / Sanatkar, Mohammad Reza. January 1900.
Master of Science / Department of Electrical and Computer Engineering / Karen Garrett / Bala Natarajan / Caterina Scoglio / In this thesis, we propose a statistical model to predict disease dispersal in dynamic networks. We model the process of disease spreading using a discrete-time Markov chain. In this case, the vector of probabilities of infection is the state vector, and every element of the state vector is a continuous variable between zero and one. In discrete-time Markov chains, the state probability vector in each time step depends on the state probability vector in the previous time step and the one-step transition probability matrix. The transition probability matrix can be time variant or time invariant. If the elements of this matrix are functions of the elements of the state probability vector in the previous step, the corresponding Markov chain is a nonlinear dynamical system. However, if those elements are independent of the state probability vector, the corresponding Markov chain is a linear dynamical system.
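The sketch below illustrates the kind of nonlinear discrete-time update described in the previous paragraph, using a standard mean-field SIS-style recursion on a contact network. The random network, infection probability and recovery probability are illustrative assumptions and may differ from the soybean-rust model actually fitted in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = (rng.random((n, n)) < 0.08).astype(float)   # random contact/adjacency matrix
A = np.triu(A, 1)
A = A + A.T                                     # symmetric, no self-loops

beta, delta = 0.06, 0.30       # per-contact infection and recovery probabilities
p = np.zeros(n)
p[:3] = 1.0                    # initial infection probabilities (three seed nodes)

for t in range(60):
    # Probability that each node escapes infection from all of its neighbours this step.
    escape = np.prod(1.0 - beta * A * p[None, :], axis=1)
    # Nonlinear update: the effective "transition matrix" depends on the current state vector.
    p = (1.0 - p) * (1.0 - escape) + p * (1.0 - delta)

print("expected number of infected nodes after 60 steps:", p.sum())
```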
We especially focus on the dispersal of soybean rust. In our problem, we have a network of US counties and we aim at predicting which counties are more likely to become infected by soybean rust during a year, based on observations of soybean rust up to that time as well as corresponding observations from previous years. Other data, such as soybean and kudzu densities in each county, daily wind data, and distances between counties, help us build the model.
The rapid growth in the number of Internet users in recent years has led malware generators to exploit this potential to attack computer users around the world. Internet users are frequent targets of malicious software every day. The ability of malware to exploit the infrastructure of networks for propagation determines how detrimental it can be to the network's security. Malicious software can cause large outbreaks if it is able to exploit the structure of the Internet and the interactions between users to propagate.
Epidemics typically start with some initial infected nodes. Infected nodes can cause their
healthy neighbors to become infected with some probability. With time, and in some cases with external intervention, infected nodes can be cured and return to a healthy state. The study of epidemic dispersals on networks aims at explaining how epidemics evolve and spread in networks. One of the most interesting questions regarding an epidemic spread in a network is whether the epidemic dies out or results in a massive outbreak. The epidemic threshold is a parameter that addresses this question by considering both the network topology and the epidemic strength.
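For reference, a commonly quoted mean-field form of this threshold for SIS-type dynamics on a graph with adjacency matrix $A$ (which may differ from the precise criterion used in this thesis) compares the effective infection rate with the spectral radius of $A$:

\[
\tau_c \;=\; \frac{1}{\lambda_{\max}(A)}, \qquad
\frac{\beta}{\delta} < \tau_c \;\Rightarrow\; \text{the epidemic dies out}, \qquad
\frac{\beta}{\delta} > \tau_c \;\Rightarrow\; \text{a large outbreak can persist},
\]

where $\beta$ is the infection rate, $\delta$ the curing rate, and $\lambda_{\max}(A)$ the largest eigenvalue of the adjacency matrix.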
|
337 |
Markov Operators on Banach Lattices / Hawke, Peter. 26 February 2007.
Student Number : 0108851W -
MSc Dissertation -
School of Mathematics -
Faculty of Science / A brief search on www.ams.org with the keyword “Markov operator” produces some
684 papers, the earliest of which dates back to 1959. This suggests that the term
“Markov operator” emerged around the 1950’s, clearly in the wake of Andrey Markov’s
seminal work in the area of stochastic processes and Markov chains. Indeed, [17] and
[6], the two earliest papers produced by the ams.org search, study Markov processes
in a statistical setting and “Markov operators” are only referred to obliquely, with no
explicit definition being provided. By 1965, in [7], the situation has progressed to the
point where Markov operators are given a concrete definition and studied more directly.
However, the way in which Markov operators originally entered mathematical
discourse, emerging from Statistics as various attempts to generalize Markov processes
and Markov chains, seems to have left its mark on the theory, with a notable
lack of cohesion amongst its propagators.
The study of Markov operators in the Lp setting has assumed a place of importance in
a variety of fields. Markov operators figure prominently in the study of densities, and
thus in the study of dynamical and deterministic systems, noise and other probabilistic
notions of uncertainty. They are thus of keen interest to physicists, biologists and
economists alike. They are also a worthy topic to a statistician, not least of all since
Markov chains are nothing more than discrete examples of Markov operators (indeed, Markov operators earned their name by virtue of this connection) and, more recently,
in consideration of the connection between copulas and Markov operators. In the
realm of pure mathematics, in particular functional analysis, Markov operators have
proven a critical tool in ergodic theory and a useful generalization of the notion of a
conditional expectation.
Considering the origin of Markov operators, and the diverse contexts in which they
are introduced, it is perhaps unsurprising that, to the uninitiated observer at least,
the theory of Markov operators appears to lack an overall unity. In the literature there
are many different definitions of Markov operators defined on L1(μ) and/or Lp(μ)
spaces. See, for example, [13, 14, 26, 2], all of which manage to provide different
definitions. Even at a casual glance, although they do retain the same overall flavour,
it is apparent that there are substantial differences in these definitions. The situation
is not much better when it comes to the various discussions surrounding ergodic
Markov operators: we again see a variety of definitions for an ergodic operator (for
example, see [14, 26, 32]), and again the connections between these definitions are
not immediately apparent.
In truth, the situation is not as haphazard as it may at first appear. All the definitions
provided for Markov operators may be seen as describing one or other subclass of
a larger class of operators known as the positive contractions. Indeed, the theory
of Markov operators is concerned with either establishing results for the positive
contractions in general, or specifically for one of the aforementioned subclasses. The
confusion concerning the definition of an ergodic operator can also be rectified in
a fairly natural way, by simply viewing the various definitions as different possible
generalizations of the central notion of an ergodic point-set transformation (such a
transformation representing one of the most fundamental concepts in ergodic theory).
The first, and indeed chief, aim of this dissertation is to provide a coherent and
reasonably comprehensive literature study of the theory of Markov operators. This
theory appears to be uniquely in need of such an effort. To this end, we shall present a wealth of material, ranging from the classical theory of positive contractions; to a
variety of interesting results arising from the study of Markov operators in relation
to densities and point-set transformations; to more recent material concerning the
connection between copulas, a breed of bivariate function from statistics, and Markov
operators. Our goals here are two-fold: to weave various sources into an integrated
whole and, where necessary, render opaque material readable to the non-specialist.
Indeed, all that is required to access this dissertation is a rudimentary knowledge of
the fundamentals of measure theory, functional analysis and Riesz space theory. A
command of measure and integration theory will be assumed. For those unfamiliar
with the basic tenets of Riesz space theory and functional analysis, we have included
an introductory overview in the appendix.
The second of our overall aims is to give a suitable definition of a Markov operator on
Banach lattices and provide a survey of some results achieved in the Banach lattice
setting, in particular those due to [5, 44]. The advantage of this approach is that
the theory is order theoretic rather than measure theoretic. As we proceed through
the dissertation, definitions will be provided for a Markov operator, a conservative
operator and an ergodic operator on a Banach lattice. Our guide in this matter will
chiefly be [44], where a number of interesting results concerning the spectral theory of
conservative, ergodic, so-called “stochastic” operators are studied in the Banach lattice
setting. We will also, and to a lesser extent, tentatively suggest a possible definition
for a Markov operator on a Riesz space. In fact, we shall suggest, as a topic for
further research, two possible approaches to the study of such objects in the Riesz
space setting.
We now offer a more detailed breakdown of each chapter.
In Chapter 2 we will settle on a definition for a Markov operator on an L1 space,
prove some elementary properties and introduce several other important concepts.
We will also put forward a definition for a Markov operator on a Banach lattice.
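For orientation, one definition commonly encountered in the L1 literature (for instance in the Lasota-Mackey tradition, and not necessarily the exact one settled on in Chapter 2) takes a Markov operator to be a positive linear map that preserves the integral of non-negative functions:

\[
P : L^1(\mu) \to L^1(\mu), \qquad P f \ge 0 \quad\text{and}\quad \int_X P f \, d\mu = \int_X f \, d\mu \qquad \text{whenever } f \ge 0 .
\]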
In Chapter 3 we will examine the notion of a conservative positive contraction. Conservative operators will be shown to demonstrate a number of interesting properties,
not least of all the fact that a conservative positive contraction is automatically a
Markov operator. The notion of a conservative operator will follow from the Hopf decomposition,
a fundamental result in the classical theory of positive contractions and
one we will prove via [13]. We will conclude the chapter with a Banach lattice/Riesz
space definition for a conservative operator, and a generalization of an important
property of such operators in the L1 case.
In Chapter 4 we will discuss another well-known result from the classical theory of
positive contractions: the Chacon-Ornstein Theorem. Not only is this a powerful
convergence result, but it also provides a connection between Markov operators and
conditional expectations (the latter, in fact, being a subclass of the Markov operators).
To be precise, we will prove the result for conservative operators, following [32].
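Stated loosely (precise hypotheses are given in the sources cited above), the convergence in question takes a ratio form: for a conservative positive contraction $T$ on $L^1$, $f \in L^1$ and $0 \le g \in L^1$,

\[
\lim_{n \to \infty} \frac{\sum_{k=0}^{n} T^k f}{\sum_{k=0}^{n} T^k g}
\]

exists almost everywhere on the set where $\sum_{k=0}^{\infty} T^k g > 0$.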
In Chapter 5 we will tie the study of Markov operators into classical ergodic theory,
with the introduction of the Frobenius-Perron operator, a specific type of Markov
operator which is generated from a given nonsingular point-set transformation. The
Frobenius-Perron operator will provide a bridge to the general notion of an ergodic
operator, as the definition of an ergodic Frobenius-Perron operator follows naturally
from that of an ergodic transformation.
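A standard way to introduce this operator (used, for example, in the densities literature; details may differ slightly from the treatment in Chapter 5): given a nonsingular transformation $S$ of a measure space $(X, \Sigma, \mu)$, the Frobenius-Perron operator $P_S : L^1(\mu) \to L^1(\mu)$ is defined implicitly by

\[
\int_A P_S f \, d\mu \;=\; \int_{S^{-1}(A)} f \, d\mu \qquad \text{for all } A \in \Sigma,\ f \in L^1(\mu),
\]

and $S$ is ergodic precisely when every invariant set ($S^{-1}(A) = A$ up to null sets) is trivial, i.e. null or co-null.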
In Chapter 6 we will discuss two approaches to defining an ergodic operator, and establish
some connections between the various definitions of ergodicity. The second definition,
a generalization of the ergodic Frobenius-Perron operator, will prove particularly
useful, and we will be able to tie it, following [26], to several interesting results
concerning the asymptotic properties of Markov operators, including the asymptotic
periodicity result of [26, 27]. We will then suggest a definition of ergodicity in the
Banach lattice setting and conclude the chapter with a version, due to [5], of the
aforementioned asymptotic periodicity result, in this case for positive contractions on
a Banach lattice.
In Chapter 7 we will move into more modern territory with the introduction of the copulas of [39, 40, 41, 42, 16]. After surveying the basic theory of copulas, including
introducing a multiplication on the set of copulas, we will establish a one-to-one
correspondence between the set of copulas and a subclass of Markov operators.
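The multiplication referred to here is, in the standard copula literature (the Darsow-Nguyen-Olsen product; the dissertation's notation may differ), given by

\[
(A * B)(x, y) \;=\; \int_0^1 D_2 A(x, t)\, D_1 B(t, y)\, dt,
\]

where $D_1$ and $D_2$ denote partial derivatives with respect to the first and second arguments. In the same literature the correspondence with a subclass of Markov operators typically associates to a copula $C$ the operator

\[
(T_C f)(x) \;=\; \frac{d}{dx} \int_0^1 D_2 C(x, t)\, f(t)\, dt
\]

on $L^1[0,1]$, although conventions vary between sources.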
In Chapter 8 we will carry our study of copulas further by identifying them as a
Markov algebra under their aforementioned multiplication. We will establish several
interesting properties of this Markov algebra, in parallel to a second Markov algebra,
the set of doubly stochastic matrices. This chapter is chiefly for the sake of interest
and, as such, diverges slightly from our main investigation of Markov operators.
In Chapter 9, we will present the results of [44], in slightly more detail than the original
source. As has been mentioned previously, these concern the spectral properties of
ergodic, conservative, stochastic operators on a Banach lattice, a subclass of the
Markov operators on a Banach lattice.
Finally, as a conclusion to the dissertation, we present in Chapter 10 two possible
routes to the study of Markov operators in a Riesz space setting. The first definition
will be directly analogous to the Banach lattice case; the second will act as an analogue
to the submarkovian operators to be introduced in Chapter 2. We will not attempt
to develop any results from these definitions: we consider them a possible starting
point for further research on this topic.
In the interests of both completeness, and in order to aid those in need of more
background theory, the reader may find at the back of this dissertation an appendix
which catalogues all relevant results from Riesz space theory and operator theory.
|
338 |
Information-driven pricing kernel models / Parbhoo, Priyanka Anjali. 30 July 2013.
A thesis submitted for the degree of
Doctor of Philosophy
2013 / This thesis presents a range of related pricing kernel models that are driven by
incomplete information about a series of future unknowns. These unknowns may,
for instance, represent fundamental macroeconomic, political or social random
variables that are revealed at future times. They may also represent latent or
hidden factors that are revealed asymptotically. We adopt the information-based
approach of Brody, Hughston and Macrina (BHM) to model the information processes
associated with the random variables. The market filtration is generated
collectively by these information processes. By directly modelling the pricing
kernel, we generate information-sensitive arbitrage-free models for the term structure
of interest rates, the excess rate of return required by investors, and security
prices. The pricing kernel is modelled by a supermartingale to ensure that nominal
interest rates remain non-negative. To begin with, we primarily investigate
finite-time pricing kernel models that are sensitive to Brownian bridge information.
The BHM framework for the pricing of credit-risky instruments is extended
to a stochastic interest rate setting. In addition, we construct recovery models,
which take into consideration information about, for example, the state of the
economy at the time of default. We examine various explicit examples of analytically
tractable information-driven pricing kernel models. We develop a model
that shares many of the features of the rational lognormal model, and investigate
examples of heat kernel models. It is shown that these models may result
in discount bonds and interest rates being bounded by deterministic functions.
In certain situations, incoming information about random variables may exhibit
jumps. To this end, we construct a more general class of finite-time pricing kernel
models that are driven by Lévy random bridges. Finally, we model the aggregate
impact of uncertainties on a financial market by randomised mixtures of
Lévy and Markov processes respectively. It is assumed that market participants
have incomplete information about the underlying random mixture. We apply
results from non-linear filtering theory and construct Flesaker-Hughston models
and infinite-time heat kernel models based on these randomised mixtures.
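For context, in the BHM information-based framework the market filtration is typically generated by processes of the following form (the exact specification used in this thesis may differ in detail): for a market factor $X$ revealed at time $T$,

\[
\xi_t \;=\; \sigma\, t\, X \;+\; \beta_{tT}, \qquad 0 \le t \le T,
\]

where $\beta_{tT}$ is a standard Brownian bridge over $[0, T]$, independent of $X$, and $\sigma$ governs how quickly genuine information about $X$ emerges relative to noise. The pricing kernel is then modelled as a positive supermartingale adapted to the filtration generated by such processes, which keeps nominal interest rates non-negative.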
|
339 |
Markov processes and Martingale generalisations on Riesz spaces / Vardy, Jessica Joy. 25 July 2013.
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfillment of the requirements for the degree of Doctor of Philosophy, April 2013. / In a series of papers by Wen-Chi Kuo, Coenraad Labuschagne and Bruce
Watson results of martingale theory were generalised to the abstract setting
of Riesz spaces. This thesis presents a survey of those results proved and aims
to expand upon the work of these authors. In particular, independence results
will be considered and these will be used to generalise well known results in
the theory of Markov processes to Riesz spaces.
Mixingales and quasi-martingales will be translated to the Riesz space
setting.
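For orientation, in the Riesz-space formulation of Kuo, Labuschagne and Watson (stated here schematically; precise hypotheses are given in the papers surveyed in the thesis), a filtration is an increasing family $(T_i)$ of conditional expectation operators on a Riesz space $E$ with weak order unit, satisfying $T_i T_j = T_j T_i = T_i$ for $i \le j$, and a martingale is a family $(f_i, T_i)$ with $f_i$ in the range of $T_i$ and

\[
T_i f_j = f_i \qquad \text{whenever } i \le j .
\]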
|
340 |
Accelerating decision making under partial observability using learned action priors / Mabena, Ntokozo. January 2017.
Thesis (M.Sc.)--University of the Witwatersrand, Faculty of Science, School of Computer Science and Applied Mathematics, 2017. / Partially Observable Markov Decision Processes (POMDPs) provide a principled mathematical
framework allowing a robot to reason about the consequences of actions and
observations with respect to the agent's limited perception of its environment. They
allow an agent to plan and act optimally in uncertain environments. Although they
have been successfully applied to various robotic tasks, they are infamous for their high
computational cost. This thesis demonstrates the use of knowledge transfer, learned
from previous experiences, to accelerate the learning of POMDP tasks. We propose
that in order for an agent to learn to solve these tasks quicker, it must be able to generalise
from past behaviours and transfer knowledge, learned from solving multiple tasks,
between different circumstances. We present a method for accelerating this learning
process by learning the statistics of action choices over the lifetime of an agent, known
as action priors. Action priors specify the usefulness of actions in situations and allow
us to bias exploration, which in turn improves the performance of the learning process.
Using navigation domains, we study the degree to which transferring knowledge
between tasks in this way results in a considerable speed up in solution times.
This thesis therefore makes the following contributions. We provide an algorithm
for learning action priors from a set of approximately optimal value functions and two
approaches with which a prior knowledge over actions can be used in a POMDP context.
As such, we show that considerable gains in speed can be achieved in learning subsequent
tasks using prior knowledge rather than learning from scratch. Learning with
action priors can particularly be useful in reducing the cost of exploration in the early
stages of the learning process as the priors can act as a mechanism that allows the agent
to select more useful actions given particular circumstances. Thus, we demonstrate how
the initial losses associated with unguided exploration can be alleviated through the
use of action priors which allow for safer exploration. Additionally, we illustrate that
action priors can also improve the computation speeds of learning feasible policies in a
shorter period of time. / MT2018
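A rough sketch of the action-prior idea as described above: counting how often each action is chosen by previously learned (approximately optimal) policies and using the counts to bias exploration. The Dirichlet-style pseudo-counts and the proportional sampling below are illustrative choices, not necessarily those of the thesis.

```python
import random
from collections import defaultdict

class ActionPrior:
    """Counts of how often each action was preferred across previously solved tasks."""
    def __init__(self, actions, pseudo_count=1.0):
        self.actions = list(actions)
        self.counts = defaultdict(lambda: pseudo_count)   # Dirichlet-style smoothing

    def update_from_policy(self, policy):
        # policy: mapping from a situation/observation to the action chosen by an
        # (approximately) optimal solution of a previously solved task.
        for obs, action in policy.items():
            self.counts[(obs, action)] += 1.0

    def sample(self, obs):
        # Sample an exploratory action in proportion to how useful it has been
        # in this situation across earlier tasks (biasing, not forcing, exploration).
        weights = [self.counts[(obs, a)] for a in self.actions]
        return random.choices(self.actions, weights=weights)[0]

# Toy usage: two previously solved navigation tasks prefer "forward" in open corridors.
prior = ActionPrior(actions=["forward", "left", "right"])
prior.update_from_policy({"corridor": "forward", "junction": "left"})
prior.update_from_policy({"corridor": "forward", "junction": "right"})
print(prior.sample("corridor"))   # usually "forward"
```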
|