About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

A new approach to pricing real options on swaps : a new solution technique and extension to the non-a.s. finite stopping realm

Chu, Uran 07 June 2012
This thesis consists of extensions of results on a perpetual American swaption problem. Companies routinely plan to swap uncertain benefits for uncertain costs at some future time. Our work explores the choice of timing policy associated with such a swap, formulated as an optimal stopping problem. In this thesis, we have shown that the condition given by Hu and Oksendal (1998) to guarantee that the optimal stopping time is a.s. finite is in fact both necessary and sufficient. We have extended the solution of the problem from a region of the parameter space where optimal stopping times are a.s. finite to a region where they are not, and have calculated the probability of never stopping in this latter region. We have identified the joint distribution of stopping times and stopping locations in both the a.s. finite and non-a.s. finite stopping cases. We have also derived an integral formula for the inner product of a generalized hyperbolic distribution with the Cauchy distribution. Finally, we have applied our results to a back-end forestry harvesting model in which stochastic costs are assumed to grow exponentially without bound over time. / Graduation date: 2013
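
A schematic statement of the kind of perpetual swap (exchange-of-assets) stopping problem described above, in generic notation; the drifts, volatilities, and discount rate below are placeholders, and the thesis's exact formulation and the Hu-Oksendal finiteness condition are not reproduced here.

```latex
V(x,y) \;=\; \sup_{\tau}\ \mathbb{E}^{x,y}\!\left[ e^{-r\tau}\,(X_\tau - Y_\tau)\,\mathbf{1}_{\{\tau<\infty\}} \right],
\qquad
dX_t = \mu_1 X_t\,dt + \sigma_1 X_t\,dW^{1}_t,
\quad
dY_t = \mu_2 Y_t\,dt + \sigma_2 Y_t\,dW^{2}_t .
```

Here $X$ plays the role of the uncertain benefit and $Y$ the uncertain cost of the swap; whether the supremum is attained by an a.s. finite stopping time depends on the relation between the drifts, volatilities, and discount rate.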
32

A Random Walk Version of Robbins' Problem

Allen, Andrew 12 1900
Robbins' problem is an optimal stopping problem in which one seeks to minimize the expected rank of the observation at which one stops, relative to all observations. We examine random walk analogs of Robbins' problem in both discrete and continuous time. In discrete time, we consider full-information and relative-ranks versions of this problem. For three-step walks, we give the optimal stopping rule and the expected rank for both versions. We also give asymptotic upper bounds for the expected rank in discrete time. Finally, we give upper and lower bounds for the expected rank in continuous time, and we show that the expected rank in the continuous-time problem is at least as large as the normalized asymptotic expected rank in the full-information discrete-time version.
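
To make the discrete-time setup concrete, here is a minimal Monte Carlo sketch for a three-step symmetric random walk; the threshold stopping rule is a hypothetical placeholder used only to illustrate how an expected rank is estimated, not the optimal rule derived in the thesis.

```python
import random

def simulate_expected_rank(n_trials=100_000, threshold=0.0):
    """Monte Carlo estimate of the expected rank achieved by a naive
    threshold rule on a three-step symmetric random walk.  The rule and
    the threshold are illustrative placeholders, not the optimal rule."""
    total_rank = 0
    for _ in range(n_trials):
        walk, s = [], 0.0
        for _ in range(3):
            s += random.choice([-1.0, 1.0])
            walk.append(s)
        # Stop at the first position at or below the threshold,
        # otherwise accept the final position.
        stop_idx = next((i for i, x in enumerate(walk) if x <= threshold),
                        len(walk) - 1)
        chosen = walk[stop_idx]
        # Rank of the chosen value among all observed positions
        # (1 = smallest; ties counted conservatively).
        rank = 1 + sum(1 for x in walk if x < chosen)
        total_rank += rank
    return total_rank / n_trials

if __name__ == "__main__":
    print(simulate_expected_rank())
```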
33

Optimal Sequential Decisions in Hidden-State Models

Vaicenavicius, Juozas January 2017
This doctoral thesis consists of five research articles on the general topic of optimal decision making under uncertainty in a Bayesian framework. The papers are preceded by three introductory chapters. Papers I and II are dedicated to the problem of finding an optimal stopping strategy to liquidate an asset with unknown drift. In Paper I, the price is modelled by the classical Black-Scholes model with unknown drift. The first passage time of the posterior mean below a monotone boundary is shown to be optimal. The boundary is characterised as the unique solution to a nonlinear integral equation. Paper II solves the same optimal liquidation problem, but in a more general model with stochastic regime-switching volatility. An optimal liquidation strategy and various structural properties of the problem are determined. In Paper III, the problem of sequentially testing the sign of the drift of an arithmetic Brownian motion with the 0-1 loss function and a constant cost of observation per unit of time is studied from a Bayesian perspective. Optimal decision strategies for arbitrary prior distributions are determined and investigated. The strategies consist of two monotone stopping boundaries, which we characterise in terms of integral equations. In Paper IV, the problem of stopping a Brownian bridge with an unknown pinning point to maximise the expected value at the stopping time is studied. Besides a few general properties established, structural properties of an optimal strategy are shown to be sensitive to the prior. A general condition for a one-sided optimal stopping region is provided. Paper V deals with the problem of detecting a drift change of a Brownian motion under various extensions of the classical Wiener disorder problem. Monotonicity properties of the solution with respect to various model parameters are studied. Also, effects of a possible misspecification of the underlying model are explored.
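
For reference, the classical filtering step that underlies problems of this type: if the log-price satisfies $dX_t = \mu\,dt + \sigma\,dW_t$ with an unknown drift carrying a Gaussian prior $\mu \sim \mathcal{N}(m_0, v_0)$ independent of $W$ (a special case chosen for concreteness; as the abstract notes, the papers treat more general priors as well), then the posterior distribution of the drift is Gaussian with mean

```latex
\hat{\mu}_t \;=\; \mathbb{E}\big[\mu \mid \mathcal{F}^{X}_t\big]
\;=\; \frac{m_0/v_0 + (X_t - X_0)/\sigma^{2}}{1/v_0 + t/\sigma^{2}} .
```

Liquidation and sequential testing strategies of the kind described above are then first-passage rules for this posterior mean (or the corresponding posterior probability) across suitable boundaries.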
34

Stochastic optimal impulse control of jump diffusions with application to exchange rate

Unknown Date
We generalize the theory of stochastic impulse control of jump diffusions introduced by Oksendal and Sulem (2004) under milder assumptions. In particular, we assume that the original process is affected by the interventions. We also generalize the optimal central bank intervention problem with market reaction introduced by Moreno (2007), allowing the exchange rate dynamics to follow a jump diffusion process. We furthermore generalize the approximation theory of stochastic impulse control problems by a sequence of iterated optimal stopping problems, also introduced in Oksendal and Sulem (2004). We develop new results which allow us to reduce a given impulse control problem to a sequence of iterated optimal stopping problems even though the original process is affected by interventions. / by Sandun C. Perera. / Thesis (Ph.D.)--Florida Atlantic University, 2009.
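
A schematic version of the iterated-optimal-stopping approximation referred to above, written in generic notation (intervention operator $\mathcal{M}$, post-intervention state map $\Gamma$, intervention cost $c$, running profit $f$, discount rate $\rho$); this is only a sketch of the standard scheme, not the generalized form developed in the thesis.

```latex
\mathcal{M}\phi(x) \;=\; \sup_{\zeta}\Big\{ \phi\big(\Gamma(x,\zeta)\big) - c(x,\zeta) \Big\},
\qquad
\Phi_{n+1}(x) \;=\; \sup_{\tau}\ \mathbb{E}^{x}\!\left[ \int_{0}^{\tau} e^{-\rho s} f(X_s)\,ds \;+\; e^{-\rho\tau}\,\mathcal{M}\Phi_{n}(X_{\tau}) \right].
```

Starting from the no-intervention value $\Phi_0$, each iterate $\Phi_n$ is the value of allowing at most $n$ interventions, and under suitable conditions the sequence converges to the impulse-control value function.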
35

Problem of hedging of a portfolio with a unique rebalancing moment

Mironenko, Georgy January 2012
The paper deals with the problem of finding an optimal one-time rebalancing strategy in the Bachelier model, and makes some remarks on the analogous problem in the Black-Scholes model. The problem is studied on a finite time interval under a mean-square criterion of optimality. The methods of the paper are based on results for optimal stopping problems and on the standard mean-square criterion. The solution of the problem considered in the paper lets us interpret how and, more importantly for us, when an investor should rebalance the portfolio in order to hedge it in the best way.
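
One way to write the underlying model and criterion, in generic notation not taken from the paper: under the Bachelier model, a claim $H$ is to be hedged on $[0,T]$, with a single rebalancing allowed at a stopping time $\tau$ and the new hedge position $\beta$ chosen at that moment.

```latex
S_t \;=\; S_0 + \mu t + \sigma W_t,
\qquad
\inf_{\tau \le T,\ \beta}\ \mathbb{E}\Big[\big(H - V_T(\tau,\beta)\big)^{2}\Big],
```

where $V_T(\tau,\beta)$ denotes the terminal value of the portfolio that holds the initial hedge up to $\tau$ and the position $\beta$ thereafter; optimizing over $\tau$ is what turns the hedging question into an optimal stopping problem.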
36

Méthodes numériques pour les processus markoviens déterministes par morceaux / Numerical methods for piecewise-deterministic Markov processes

Brandejsky, Adrien 02 July 2012
Piecewise-deterministic Markov processes (PDMPs) have been introduced by M.H.A. Davis as a general class of non-diffusive stochastic models. PDMPs are hybrid Markov processes involving deterministic motion punctuated by random jumps. In this thesis, we develop numerical methods that are designed to fit the structure of PDMPs and that are based on the quantization of a Markov chain underlying the PDMP. We deal with three issues: the approximation of expectations of functionals of a PDMP, the approximation of the moments and of the distribution of an exit time, and the partially observed optimal stopping problem. In the latter, we also tackle the filtering of a PDMP and we establish the dynamic programming equation of the optimal stopping problem. We prove the convergence of all our methods (most of the time, we also obtain a bound for the speed of convergence) and illustrate them with numerical examples.
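
As an illustration of the quantization step that such methods rest on, here is a minimal Lloyd-type quantizer for a cloud of simulated chain states; the dimensions, sample counts, and the toy input distribution are placeholders, and the thesis's actual schemes and error bounds are not reproduced.

```python
import numpy as np

def lloyd_quantize(samples, n_centers=50, n_iter=30, seed=0):
    """Quantize a cloud of simulated Markov-chain states on a finite grid
    of centers using Lloyd's algorithm (a simple stand-in for the
    quantization of the chain underlying the PDMP)."""
    rng = np.random.default_rng(seed)
    centers = samples[rng.choice(len(samples), n_centers, replace=False)].copy()
    for _ in range(n_iter):
        # Assign each sample to its nearest center.
        dists = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its cell; keep empty cells fixed.
        for k in range(n_centers):
            cell = samples[labels == k]
            if len(cell) > 0:
                centers[k] = cell.mean(axis=0)
    return centers, labels

if __name__ == "__main__":
    # Toy example: quantize 2-D post-jump locations drawn from a Gaussian.
    points = np.random.default_rng(1).normal(size=(5000, 2))
    grid, cells = lloyd_quantize(points)
```

Expectations of functionals are then approximated by sums over the resulting grid, weighted by the empirical cell probabilities and the quantized transition kernels.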
37

Some optimal visiting problems: from a single player to a mean-field type model

Marzufero, Luciano 19 July 2022
In an optimal visiting problem, we want to control a trajectory that has to pass as close as possible to a collection of target points or regions. We introduce a hybrid control-based approach for the classic problem, in which the trajectory can switch between a group of discrete states related to the targets of the problem. The model is subsequently adapted to a mean-field game framework, that is, to the case where a large population of agents plays the optimal visiting problem with controlled dynamics and with costs that also depend on the distribution of the population. In particular, we investigate a single continuity equation with possible sinks and sources and with a field that may depend on the mass of the agents. The same problem is also studied in a network framework. More precisely, we study a mean-field game model by proving the existence of a suitably defined approximate mean-field equilibrium, and we then address the passage to the limit.
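
A schematic form of a continuity equation with sinks and sources of the kind mentioned above, in generic notation; this is only an illustrative shape of the equation, not the precise formulation of the thesis.

```latex
\partial_t m_t \;+\; \operatorname{div}\!\big( m_t\, v[m_t] \big) \;=\; \sigma^{+}_t \;-\; \sigma^{-}_t ,
```

where $m_t$ is the distribution of agents, $v[m_t]$ is a velocity field that may depend on the mass, and $\sigma^{\pm}_t$ are source and sink terms accounting for agents entering the problem or leaving it once their visiting task is completed.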
38

Learning and Earning : Optimal Stopping and Partial Information in Real Options Valuation

Sätherblom, Eric Marco Raymond January 2024
In this thesis, we consider an optimal stopping problem interpreted as the task of valuing two so-called real options written on an underlying asset that follows the dynamics of an observable geometric Brownian motion with non-observable drift; we have incomplete information. After exercising the first real option, however, the value of the underlying asset becomes observable with reduced noise; we obtain partial information. We then state some theoretical properties of the value function, such as convexity and monotonicity. Furthermore, numerical solutions for the value functions are obtained by stating and solving a linear complementarity problem. This is done in a Python implementation using the second-order backward differentiation formula and summation-by-parts operators for finite differences, combined with an operator splitting method.
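
To show where the linear complementarity structure enters, here is a minimal projected-SOR solve of the generic LCP min(Au - b, u - psi) = 0 that arises at each implicit time step when an early-exercise constraint is present; the matrix, right-hand side, obstacle, and the plain SOR iteration are all illustrative placeholders, whereas the thesis combines BDF2, summation-by-parts operators, and operator splitting.

```python
import numpy as np

def psor_lcp(A, b, psi, u0, omega=1.2, tol=1e-8, max_iter=10_000):
    """Solve the LCP  min(A u - b, u - psi) = 0  by projected SOR.
    Intended only as a sketch of how an exercise constraint (obstacle psi)
    is enforced inside an implicit time step of an option-pricing scheme."""
    u = np.maximum(np.asarray(u0, dtype=float).copy(), psi)
    n = len(u)
    for _ in range(max_iter):
        u_prev = u.copy()
        for i in range(n):
            # Gauss-Seidel update for component i, then project onto u >= psi.
            resid = b[i] - A[i, :] @ u + A[i, i] * u[i]
            u[i] = max(psi[i], (1 - omega) * u[i] + omega * resid / A[i, i])
        if np.max(np.abs(u - u_prev)) < tol:
            break
    return u
```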
39

Optimal iterative solvers for linear systems with stochastic PDE origins : balanced black-box stopping tests

Pranjal, Pranjal January 2017
The central theme of this thesis is the design of optimal balanced black-box stopping criteria in iterative solvers of symmetric positive-definite, symmetric indefinite, and nonsymmetric linear systems arising from finite element approximation of stochastic (parametric) partial differential equations. For a given stochastic and spatial approximation, it is known that iteratively solving the corresponding linear(ized) system(s) of equations to too tight an algebraic error tolerance wastes computational resources without decreasing the usually unknown approximation error. In order to stop optimally, that is, to avoid both unnecessary computations and premature stopping, the algebraic error and an a posteriori approximation error estimate must be balanced at the optimal stopping iteration. Efficient and reliable a posteriori error estimators do exist for close estimation of the approximation error in a finite element setting. But the algebraic error is generally unknown, since the exact algebraic solution is not usually available. Obtaining tractable upper and lower bounds on the algebraic error in terms of a readily computable and monotonically decreasing quantity (if any) of the chosen iterative solver is the distinctive feature of the designed optimal balanced stopping strategy. Moreover, this work states the exact constants; that is, there are no user-defined parameters in the optimal balanced stopping tests. Hence, an iterative solver incorporating the optimal balanced stopping methodology presented here is a black-box iterative solver. Typically, employing such a stopping methodology leads to huge computational savings and in any case rules out premature stopping. The constants in the devised optimal balanced black-box stopping tests in the MINRES solver for symmetric positive-definite and symmetric indefinite linear systems can be estimated cheaply on the fly. The contribution of this thesis goes one step further for the nonsymmetric case, in the sense that it not only provides an optimal balanced black-box stopping test for a memory-expensive Krylov solver like GMRES but also presents optimal balanced black-box stopping tests for memory-inexpensive Krylov solvers such as BICGSTAB(L) and TFQMR. Currently, little convergence theory exists for the memory-inexpensive Krylov solvers, and hence devising stopping criteria for them is an active field of research. Also, an optimal balanced black-box stopping criterion is proposed for the nonlinear (Picard or Newton) iterative method used for solving the finite-dimensional Navier-Stokes equations. The optimal balanced black-box stopping methodology presented in this thesis can be generalized to any iterative solver of a linear(ized) system arising from numerical approximation of a partial differential equation. The only prerequisites for this purpose are the existence of a cheap and tight a posteriori error estimator for the approximation error, along with cheap and tractable bounds on the algebraic error.
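
A minimal sketch of where a balanced stopping test sits inside a Krylov iteration; for brevity it uses plain conjugate gradients and the residual norm as a crude proxy for the algebraic error, with a placeholder balancing constant C_bal and a given a posteriori estimate eta_disc, whereas the thesis derives solver-specific, parameter-free bounds for MINRES, GMRES, BICGSTAB(L), and TFQMR.

```python
import numpy as np

def cg_balanced_stop(A, b, eta_disc, C_bal=1.0, x0=None, max_iter=1000):
    """Conjugate gradients with a 'balanced' stopping test: iterate until
    the algebraic-error proxy (here the residual norm) falls below a
    constant times the a posteriori discretization-error estimate eta_disc.
    Both C_bal and the residual-based proxy are illustrative placeholders."""
    x = np.zeros_like(b, dtype=float) if x0 is None else np.asarray(x0, dtype=float).copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(max_iter):
        if np.sqrt(rs) <= C_bal * eta_disc:   # balanced stopping test
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, k
```

In a genuinely balanced test, the residual norm would be replaced by computable two-sided bounds on the norm of the algebraic error itself, which is precisely what the thesis supplies.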
40

Optimal Control and Estimation of Stochastic Systems with Costly Partial Information

Kim, Michael J. 31 August 2012
Stochastic control problems that arise in sequential decision making applications typically assume that information used for decision-making is obtained according to a predetermined sampling schedule. In many real applications however, there is a high sampling cost associated with collecting such data. It is therefore of equal importance to determine when information should be collected as it is to decide how this information should be utilized for optimal decision-making. This type of joint optimization has been a long-standing problem in the operations research literature, and very few results regarding the structure of the optimal sampling and control policy have been published. In this thesis, the joint optimization of sampling and control is studied in the context of maintenance optimization. New theoretical results characterizing the structure of the optimal policy are established, which have practical interpretation and give new insight into the value of condition-based maintenance programs in life-cycle asset management. Applications in other areas such as healthcare decision-making and statistical process control are discussed. Statistical parameter estimation results are also developed with illustrative real-world numerical examples.
