71. Stochastic maximum principle and dynamic convex duality in continuous-time constrained portfolio optimization. Li, Yusong, January 2016.
This thesis seeks to gain further insight into the connection between stochastic optimal control and forward and backward stochastic differential equations, and its applications in solving continuous-time constrained portfolio optimization problems. Three topics are studied in this thesis. In the first part of the thesis, we focus on the stochastic maximum principle, which seeks to establish the connection between stochastic optimal control and backward stochastic differential equations coupled with a static optimality condition on the Hamiltonian. We prove a weak necessary and sufficient maximum principle for Markovian regime-switching stochastic optimal control problems. Instead of insisting on the maximum condition of the Hamiltonian, we show that 0 belongs to the sum of Clarke's generalized gradient of the Hamiltonian and Clarke's normal cone of the control constraint set at the optimal control. Under a joint concavity condition on the Hamiltonian and a convexity condition on the terminal objective function, the necessary condition becomes sufficient. We give four examples to demonstrate the weak stochastic maximum principle. In the second part of the thesis, we study a continuous-time stochastic linear quadratic control problem arising from mathematical finance. We model the asset dynamics with random market coefficients and portfolio strategies with convex constraints. Following the convex duality approach, we show that the necessary and sufficient optimality conditions for both the primal and dual problems can be written in terms of processes satisfying a system of FBSDEs together with other conditions. We characterise explicitly the optimal wealth and portfolio processes as functions of adjoint processes from the dual FBSDEs in a dynamic fashion, and vice versa.
We apply the results to solve quadratic risk minimization problems with cone constraints and derive explicit representations of solutions to the extended stochastic Riccati equations for such problems. In the final part of the thesis, we extend the previous results to utility maximization problems. After formulating the primal and dual problems, we construct the necessary and sufficient conditions for both the primal and dual problems in terms of FBSDEs plus additional conditions. This formulation then allows us to characterize explicitly the primal optimal control as a function of the adjoint processes coming from the dual FBSDEs in a dynamic fashion, and vice versa. Moreover, we find that the optimal primal wealth process coincides with the optimal adjoint process of the dual problem, and vice versa. Finally, we solve three constrained utility maximization problems and contrast the simplicity of the duality approach we propose with the technical complexity of solving the primal problem directly.
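The weak optimality condition described in the first part can be stated compactly. The notation below is an assumption for illustration, not taken from the thesis: H is the Hamiltonian, K the control constraint set, (p, q) the adjoint processes, and û the optimal control.

```latex
% 0 lies in the sum of Clarke's generalized gradient of the Hamiltonian
% (in the control variable) and Clarke's normal cone to the constraint set:
0 \in \partial_u H\left(t, \hat{X}_t, \hat{u}_t, p_t, q_t\right)
      + N_K(\hat{u}_t), \qquad \text{a.e. } t,\ \text{a.s.}
```

When the Hamiltonian is differentiable and K is the whole space, the normal cone vanishes and this reduces to the familiar first-order condition on H.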

72. Efficient nonparametric inference for discretely observed compound Poisson processes. Coca Cabrero, Alberto Jesús, January 2017.
Compound Poisson processes are the textbook example of pure-jump stochastic processes and the building blocks of Lévy processes. They have three defining parameters: the distribution of the jumps, the intensity driving the frequency at which these occur, and the drift. They are used in numerous applications and, hence, statistical inference on them is of great interest. In particular, nonparametric estimation is increasingly popular for its generality and its reduction of misspecification issues. In many applications, the underlying process is not observed directly but only at discrete times. Important information is therefore missed between observations, and we face a (nonlinear) inverse problem. Using the intimate relationship between Lévy processes and infinitely divisible distributions, we construct new estimators of the jump distribution and of the so-called Lévy distribution. Under mild assumptions, we prove Donsker theorems for both (i.e. functional central limit theorems with the uniform norm) and identify the limiting Gaussian processes. This allows us to conclude that our estimators are efficient, or optimal from an information-theoretic point of view, and to give new insight into the topic of efficiency in this and related problems. We allow the jump distribution to have a discrete component and include a novel way of estimating the mass function using a kernel estimator. We also construct new estimators of the intensity and of the drift, and show joint asymptotic normality of all the estimators. Many relevant inference procedures are derived, including confidence regions, goodness-of-fit tests, two-sample tests and tests for the presence of discrete and absolutely continuous jump components. In the related literature, two apparently different approaches have been taken: a natural direct approach, and the spectral approach we use. We show that these are formally equivalent and that the existing estimators are very close relatives of each other.
However, estimators from the first approach can only be used on small compact intervals of the positive real line, whilst ours work on the whole real line and, furthermore, are the first to be efficient. We describe how the former can attain efficiency and propose several open problems not yet identified in the field. We also include an exhaustive simulation study of our estimators and others, in which we illustrate their behaviour in a number of realistic situations and their suitability for each of them. No such study exists in the literature; it provides several new insights and a solid understanding of the practical side of the problem on which real-data studies can be based. The implementation of all the estimators is discussed in detail and practical recommendations are given.
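As a minimal illustration of the discrete-observation setting described above, the sketch below simulates increments of a compound Poisson process on a grid and recovers the intensity from the proportion of exactly-zero increments (with zero drift and continuous jumps, an increment is zero iff no jump occurred). The estimator and all parameter values are illustrative assumptions, not the efficient spectral estimators of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_increments(lam, delta, n, jump_sampler):
    """Increments X(k*delta) - X((k-1)*delta) of a compound Poisson
    process with intensity lam, i.i.d. jumps and zero drift."""
    counts = rng.poisson(lam * delta, size=n)
    return np.array([jump_sampler(c).sum() for c in counts])

def estimate_intensity(increments, delta):
    """Naive intensity estimate: P(increment == 0) = exp(-lam * delta)."""
    p_zero = np.mean(increments == 0.0)
    return -np.log(p_zero) / delta

lam, delta, n = 1.5, 0.5, 50_000
incs = simulate_increments(lam, delta, n, lambda c: rng.normal(size=c))
lam_hat = estimate_intensity(incs, delta)
```

This naive estimator already shows the inverse-problem flavour: the jump count between observations is latent, and only a nonlinear functional of the parameters is directly estimable.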

73. Characteristic functions of path signatures and applications. Chevyrev, Ilya, January 2015.
The main object of study in this work is the extension of the classical characteristic function to the setting of path signatures. Our first fundamental result exhibits the following geometric interpretation: the path signature is completely determined by the development of the path into compact Lie groups. This faithful representation of the signature is the primary tool we use to define and study the characteristic function. Our investigation of the characteristic function can be divided into two parts. First, we employ the characteristic function to study the expected signature of a path as the natural generalisation of the moments of a real random variable. In this direction, we provide a solution to the moment problem and study analyticity properties of the characteristic function. In particular, we solve the moment problem for signatures arising from families of Gaussian and Markovian rough paths. Second, we study the characteristic function in relation to the solution map of a rough differential equation. The connection stems from the fact that the signature of a geometric rough path completely determines the path's role as a driving signal. As an application, we demonstrate that the characteristic function can be used to determine weak convergence of flows arising from rough differential equations. Along the way, we develop tools to study càdlàg processes as rough paths and to determine tightness in p-variation topologies of random walks. As a consequence, we provide a classification of Lévy processes possessing sample paths of finite p-variation and determine a Lévy-Khintchine formula for the characteristic function of the signature of a Lévy process.
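A small sketch of the central object may help: for a piecewise-linear path, the first two signature levels (the increment and the iterated integrals, whose antisymmetric part is the Lévy area) can be computed exactly via Chen's identity. This is an illustrative implementation, not code from the thesis.

```python
import numpy as np

def signature_level2(path):
    """First two signature levels of a piecewise-linear path.
    path: (n, d) array of points. Returns (S1, S2) where
    S1[i] is the total increment in coordinate i and
    S2[i, j] is the iterated integral over s < t of dx^i_s dx^j_t,
    computed exactly by Chen's identity on each linear piece."""
    incs = np.diff(path, axis=0)
    d = path.shape[1]
    S1 = np.zeros(d)
    S2 = np.zeros((d, d))
    for dx in incs:
        # Chen: appending a linear piece adds S1 (x) dx + (1/2) dx (x) dx
        S2 += np.outer(S1, dx) + 0.5 * np.outer(dx, dx)
        S1 += dx
    return S1, S2

# unit step right, then unit step up; the antisymmetric part of S2
# is the signed (Lévy) area between the path and its chord
path = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
S1, S2 = signature_level2(path)
levy_area = 0.5 * (S2[0, 1] - S2[1, 0])
```

For this L-shaped path the chord is the diagonal of the unit square, and the signed area enclosed is 0.5, which the antisymmetric part of S2 recovers.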

74. Using Kolmogorov equations to investigate hedging errors. Whitley, Alan, January 2015.
This thesis presents a study of hedging errors caused by model misspecification, making use of a Kolmogorov forward equation with δ-function initial conditions. The method is used to calculate the probability distribution of hedging errors at maturity for a number of realistic examples. An analysis of numerical schemes used to solve the Kolmogorov forward equation is presented, making particular use of Fourier transform techniques. Much of the analysis applies to a 'toy' model which captures the main features of the hedging problem. Detailed analysis of a semi-discrete Fourier numerical method for solving the toy model problem leads to a complete description of the accuracy and stability behaviour of the scheme. The thesis also includes a novel time-change method which solves the heat equation with δ-function initial conditions using the Crank-Nicolson method, and which demonstrates improved convergence of the scheme. The numerical analysis included in the thesis explores further aspects of Fourier transform methods applied to partial differential equations with Dirac data and variable coefficients.
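A minimal sketch of the kind of problem discussed here, assuming a standard Crank-Nicolson discretisation of the heat equation with a discrete δ-function initial condition; the grid sizes and the crude boundary handling are illustrative choices, not the thesis's time-change method.

```python
import numpy as np

def crank_nicolson_heat(L=8.0, nx=401, T=1.0, nt=200):
    """Crank-Nicolson for u_t = u_xx on [-L, L], with u(., 0) ~ delta at 0
    (discretised as 1/h at the central node) and zero Dirichlet boundary."""
    x = np.linspace(-L, L, nx)
    h = x[1] - x[0]
    dt = T / nt
    u = np.zeros(nx)
    u[nx // 2] = 1.0 / h                      # discrete delta function
    r = dt / h**2
    # standard second-difference matrix (dense for simplicity)
    A = (np.diag(-2.0 * np.ones(nx)) +
         np.diag(np.ones(nx - 1), 1) + np.diag(np.ones(nx - 1), -1))
    I = np.eye(nx)
    lhs = I - 0.5 * r * A
    rhs = I + 0.5 * r * A
    for _ in range(nt):
        u = np.linalg.solve(lhs, rhs @ u)
        u[0] = u[-1] = 0.0                    # crude Dirichlet enforcement
    return x, u

x, u = crank_nicolson_heat()
exact = np.exp(-x**2 / 4.0) / np.sqrt(4.0 * np.pi)   # heat kernel at t = 1
```

By t = 1 the Crank-Nicolson solution is close to the exact heat kernel; the scheme's well-known difficulty with δ data is the slowly damped oscillation at small times, which motivates time-change and Fourier analyses like those in the thesis.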

75. Capture-recapture modelling for zero-truncated count data allowing for heterogeneity. Anan, Orasa, January 2016.
Capture-recapture modelling is a powerful tool for estimating the size of an elusive target population. This thesis proposes four new population size estimators allowing for population heterogeneity. The first estimator is developed under the zero-truncated generalised Poisson distribution (ZTGP), and is called the MLEGP. The two parameters of the ZTGP are estimated by maximum likelihood using the Expectation-Maximisation (EM) algorithm. The second estimator is the population size estimator under the zero-truncated Conway-Maxwell-Poisson distribution (ZTCMP). The benefits of using the Conway-Maxwell-Poisson (CMP) distribution are that it includes the Bernoulli, Poisson and geometric distributions as special cases, and that it is flexible with respect to over- and under-dispersion relative to the original Poisson model. Moreover, the parameter estimates can be obtained by a simple linear regression approach. The uncertainty in estimating variances of the unknown population size under the new estimator is studied with analytic and resampling approaches. Since the geometric distribution is one of the models nested within the Conway-Maxwell-Poisson distribution, the Turing and Zelterman estimators are extended to the geometric distribution and its related model, respectively. Variance estimates and confidence intervals are constructed by the normal approximation method. The uncertainty of variance estimation of population size estimators for single-marking capture-recapture data is studied in the final part of the research, where the normal approximation and three resampling approaches to variance estimation are compared for the Chapman and Chao estimators. All of the approaches are assessed through simulations, and real data sets are provided as guidance for understanding the methodologies.
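For context, the classical Chao lower-bound estimator referenced above can be sketched in a few lines: it estimates the number of never-captured individuals from the counts of individuals seen exactly once and exactly twice. The simulation parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def chao_estimator(counts):
    """Chao's lower-bound population size estimator from zero-truncated
    capture counts: N_hat = n + f1^2 / (2 * f2), where f_k is the number
    of individuals captured exactly k times and n the number observed."""
    n = len(counts)
    f1 = np.sum(counts == 1)
    f2 = np.sum(counts == 2)
    return n + f1**2 / (2.0 * f2)

# hypothetical homogeneous experiment: N individuals with Poisson(lam)
# capture counts; the zero-count individuals are unobserved
N, lam = 5000, 1.0
all_counts = rng.poisson(lam, size=N)
observed = all_counts[all_counts > 0]
N_hat = chao_estimator(observed)
```

Under Poisson homogeneity the estimator is asymptotically unbiased (f1²/(2f2) converges to the expected number of unseen individuals); under heterogeneity it is only a lower bound, which is precisely what motivates the richer ZTGP and ZTCMP models of the thesis.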

76. Modelling and predicting decompression sickness: an investigation. Gaudoin, Jotham, January 2016.
In this thesis, we shall consider the mathematical modelling of Decompression Sickness (DCS), more commonly known as 'the bends', and, in particular, the probability of its occurrence on escaping from a damaged submarine. We shall begin by outlining the history of DCS modelling, before choosing one particular model type, that originally considered by Thalmann et al. (1997), upon which to focus our attention. This model combines tissues in the body sharing similar characteristics, in particular the rate at which nitrogen is absorbed into, or eliminated from, the tissues in question, terming such combinations 'compartments'. We shall derive some previously unknown analytical results for the single-compartment model, which we shall then use to assist us in applying Markov Chain Monte Carlo (MCMC) methods to estimate the model's parameters using data provided by QinetiQ. These data concern various tests on a range of subjects, who were exposed to various decompression conditions from a range of depths and at a range of breathing pressures. Next, we shall consider the multiple-compartment model, making use of Reversible Jump MCMC to determine the 'best' number of compartments to use. We shall then move on to a slightly different problem, concerning a second QinetiQ dataset consisting of subjective measurements, on an ordinal scale, of the number of bubbles passing the subjects' hearts (known as the Kisman-Masurel bubble score) for a different set of subjects. This dataset contains quite a number of gaps, which we shall seek to impute before using the imputed datasets to identify logistic regression models that provide an alternative DCS probability. Finally, we shall combine these two approaches using a model averaging technique to improve upon previously generated predictions, thereby offering additional practical advice to submariners and those rescuing them following an incident.
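Single-compartment models of this kind are typically exponential gas-kinetics models; a minimal sketch under that assumption follows. All parameter values are hypothetical and illustrative, not QinetiQ data or the thesis's fitted model.

```python
import numpy as np

def tissue_tension(t, p0, p_amb, half_time):
    """Nitrogen tension in a single well-mixed compartment under the
    classical exponential (Haldane-type) kinetics
        dP/dt = k * (P_amb - P),   k = ln(2) / half_time,
    with constant ambient pressure P_amb; closed-form solution."""
    k = np.log(2.0) / half_time
    return p_amb + (p0 - p_amb) * np.exp(-k * t)

# hypothetical exposure: a compartment with a 10 min half-time,
# equilibrated at surface N2 tension (0.79 bar), exposed to 3 bar
p10 = tissue_tension(10.0, p0=0.79, p_amb=3.0, half_time=10.0)
```

After exactly one half-time the tension has closed half the gap to ambient, i.e. p10 = 3.0 + (0.79 - 3.0)/2 = 1.895 bar; the analytical tractability of this closed form is what makes MCMC over the compartment parameters feasible.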

77. Gaussian processes for text regression. Beck, Daniel Emilio, January 2017.
Text Regression is the task of modelling and predicting numerical indicators or response variables from textual data. It arises in a range of different problems, from sentiment and emotion analysis to text-based forecasting. Most models in the literature apply simple text representations such as bag-of-words and predict response variables in the form of point estimates. These simplifying assumptions ignore important information in the data, such as the underlying uncertainty in the outputs and the linguistic structure of the textual inputs. The former is particularly important when the response variables come from human annotations, while the latter can capture linguistic phenomena that go beyond simple lexical properties of a text. In this thesis our aim is to advance the state of the art in Text Regression by improving these two aspects: better uncertainty modelling in the response variables and improved text representations. Our main workhorse for achieving these goals is Gaussian Processes (GPs), a Bayesian kernelised probabilistic framework. GP-based regression models the response variables as well-calibrated probability distributions, providing additional information in predictions which in turn can improve subsequent decision making. GPs also model the data using kernels, enabling richer representations based on similarity measures between texts. To reach our main goals we propose new kernels for text which aim at capturing richer linguistic information. These kernels are then parameterised and learned from the data using efficient model selection procedures enabled by the GP framework. Finally, we capitalise on recent advances in the GP literature to better capture uncertainty in the response variables, such as multi-task learning and models that can incorporate non-Gaussian variables through the use of warping functions.
Our proposed architectures are benchmarked in two Text Regression applications: Emotion Analysis and Machine Translation Quality Estimation. Overall we obtain better results than the baselines while also providing uncertainty estimates for predictions in the form of posterior distributions. Furthermore, we show how these models can be probed to obtain insights about the relation between the data and the response variables, and how predictive distributions can be applied in subsequent decision-making procedures.
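A minimal numpy sketch of exact GP regression illustrates the calibrated-uncertainty point: the posterior variance is small near training inputs and reverts to the prior far from them. This is a generic RBF-kernel example on toy 1-D inputs, not the thesis's text kernels.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel k(x, x') = s^2 exp(-(x-x')^2 / (2 l^2))."""
    sq = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    """Exact GP regression: predictive mean and pointwise variance."""
    K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.sin(x)
# one test point inside the data, one far outside it
mean, var = gp_posterior(x, y, np.array([0.5, 5.0]))
```

For text inputs the only change is the kernel: replace `rbf_kernel` with a similarity measure between texts, which is exactly the design freedom the thesis exploits.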

78. Convex hulls of planar random walks. Xu, Chang, January 2017.
For the perimeter length Ln and the area An of the convex hull of the first n steps of a planar random walk, this thesis studies the n → ∞ mean and variance asymptotics and establishes distributional limits. The results apply to random walks both with drift (nonzero mean of the random walk increments) and without drift, under mild moment assumptions on the increments. Assuming the increments of the random walk have finite second moment and nonzero mean, Snyder and Steele showed that n⁻¹Ln converges almost surely to a deterministic limit, and proved the upper bound Var[Ln] = O(n). We show that n⁻¹Var[Ln] converges and give a simple expression for the limit, which is nonzero for walks outside a certain degenerate class. This answers a question of Snyder and Steele. Furthermore, we prove a central limit theorem for Ln in the non-degenerate case. We then focus on the perimeter length in the zero-drift case and on the area in both the drift and zero-drift cases. These results complement and contrast with previous work and establish non-Gaussian distributional limits. We deduce these results from weak convergence statements for the convex hulls of random walks to scaling limits defined in terms of convex hulls of certain Brownian motions. We give bounds that confirm that the limiting variances in our results are nonzero.
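The Snyder-Steele law of large numbers mentioned above is easy to observe numerically: for a walk with drift μ, Ln/n approaches 2|μ|. The sketch below builds the hull with Andrew's monotone chain; all parameters are illustrative.

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in order."""
    pts = sorted(map(tuple, points))
    def half(seq):
        h = []
        for p in seq:
            # pop while the turn through the last two kept points is not
            # strictly counter-clockwise (cross product <= 0)
            while len(h) >= 2 and (
                (h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def perimeter(hull):
    m = len(hull)
    return sum(np.hypot(hull[(i + 1) % m][0] - hull[i][0],
                        hull[(i + 1) % m][1] - hull[i][1]) for i in range(m))

rng = np.random.default_rng(1)
n = 20000
steps = rng.normal(size=(n, 2)) + np.array([1.0, 0.0])   # drift mu = (1, 0)
walk = np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])
ratio = perimeter(convex_hull(walk)) / n   # should be near 2|mu| = 2
```

Intuitively, the hull of a walk with drift is a long thin set of length about n|μ|, so its perimeter is about twice that; the fluctuations around 2|μ| are exactly what the variance asymptotics and central limit theorem of the thesis quantify.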

79. On Markovian approximation schemes of jump processes. Mina, Francesco, January 2014.
The topic of this thesis is the study of approximation schemes for jump processes whose driving noise is a Lévy process. In the first part of our work we study properties of the driving noise. We present a novel approximation method for the density of a Lévy process. The scheme makes use of a continuous-time Markov chain defined through a careful analysis of the generator. We identify the rate of convergence and carry out a detailed analysis of the error. We also analyse the case of multidimensional Lévy processes in the form of subordinate Brownian motion. We provide a weak scheme to approximate the density that does not rely on discretising the Lévy measure and results in better convergence rates. The second part of the thesis concerns the analysis of schemes for BSDEs driven by Brownian motion and a Poisson random measure. Such equations appear naturally in hedging problems and stochastic control, and they provide a natural probabilistic approach to the solution of certain semi-linear PIDEs. While the numerical approximation of the continuous case has been studied in the literature, there has been relatively little progress in the study of such equations with a discontinuous driver. We present a weak Monte Carlo scheme in this setting based on Picard iterations. We discuss its convergence and provide a numerical illustration.
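The continuous-time Markov chain idea can be illustrated on the simplest possible case: a pure birth chain whose Kolmogorov forward equation reproduces the Poisson law exactly, so the approximate density can be checked against a known answer. The explicit Euler stepping below is a crude stand-in for the thesis's generator-based scheme, not a reproduction of it.

```python
import numpy as np

def ctmc_density(Q, p0, t, n_steps=20000):
    """Solve the Kolmogorov forward equation p'(t) = p(t) Q by explicit
    Euler stepping (a minimal stand-in for matrix exponentiation)."""
    p = p0.copy()
    dt = t / n_steps
    for _ in range(n_steps):
        p = p + dt * (p @ Q)
    return p

# generator of a pure birth chain with rate lam: this CTMC is exactly a
# Poisson counting process, so the forward equation should recover the
# Poisson(lam * t) distribution on the truncated state space 0..M
lam, M = 2.0, 40
Q = np.zeros((M + 1, M + 1))
for i in range(M):
    Q[i, i] = -lam
    Q[i, i + 1] = lam
p0 = np.zeros(M + 1)
p0[0] = 1.0
p = ctmc_density(Q, p0, t=1.0)
```

For a genuine Lévy process the generator couples every pair of grid states through the Lévy measure, and the analysis of the resulting truncation and discretisation errors is the substance of the thesis's first part.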

80. Quantification of uncertainty in probabilistic safety analysis. El-Shanawany, Ashraf Ben Mamdouh, January 2016.
This thesis develops methods for the quantification and interpretation of uncertainty in probabilistic safety analysis (PSA), focussing on fault trees. The output of a fault tree analysis is usually the probability of occurrence of an undesirable event (the top event), calculated from the failure probabilities of identified basic events. The standard method for evaluating the uncertainty distribution is Monte Carlo simulation, but this is a computationally intensive approach and does not readily reveal the dominant sources of the uncertainty. A closed-form approximation for the fault tree top event uncertainty distribution, for models using only lognormal distributions for the model inputs, is developed in this thesis. Its output is compared with the output of two sampling-based approximation methods: standard Monte Carlo analysis, and Wilks' method, which is based on order statistics using small sample sizes. Wilks' method can be used to provide an upper bound for the percentiles of the top event distribution, and is computationally cheap. The combination of the lognormal approximation and Wilks' method can be used to give, respectively, the overall shape of the distribution and high-confidence bounds on particular percentiles of interest. This is an attractive, practical option for evaluating uncertainty in fault trees and, more generally, in certain multilinear models. A new practical method of ranking uncertainty contributors in lognormal models is developed which can be evaluated in closed form, based on cutset uncertainty. The method is demonstrated via examples, including a simple fault tree model and a model the size of a commercial PSA model for a nuclear power plant. Finally, the quantification of 'hidden uncertainties' is considered; hidden uncertainties are those which are not typically considered in PSA models, but which may contribute considerable uncertainty to the overall results if included.
A specific example of the inclusion of a missing uncertainty is explained in detail, and its effects on PSA quantification are considered. It is demonstrated that the effect on the PSA results can be significant, potentially permuting the order of the most important cutsets, which is of practical concern for the interpretation of PSA models. The thesis closes with suggestions for the identification and inclusion of further hidden uncertainties.
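The Monte Carlo versus Wilks comparison can be sketched on a hypothetical two-cutset fault tree; the tree, the lognormal medians and the error factor below are all invented for illustration. The key arithmetic behind Wilks' method is that the maximum of 59 samples exceeds the 95th percentile with probability 1 - 0.95^59 > 0.95.

```python
import numpy as np

rng = np.random.default_rng(7)

def top_event_prob(p):
    """Top event = OR of two AND-gate cutsets {b1, b2} and {b3, b4}
    (a hypothetical two-cutset tree, independent basic events)."""
    return 1.0 - (1.0 - p[:, 0] * p[:, 1]) * (1.0 - p[:, 2] * p[:, 3])

def sample_basic_events(n):
    """Lognormal uncertainty on the four basic-event probabilities,
    hypothetical median 1e-3 and error factor ~ 3 (95th/50th ratio)."""
    median, sigma = 1e-3, np.log(3.0) / 1.645
    return np.minimum(rng.lognormal(np.log(median), sigma, size=(n, 4)), 1.0)

# brute-force Monte Carlo estimate of the 95th percentile ...
mc = top_event_prob(sample_basic_events(100_000))
p95_mc = np.quantile(mc, 0.95)

# ... versus Wilks' 95/95 one-sided bound from only 59 samples
wilks_bound = top_event_prob(sample_basic_events(59)).max()
```

The thesis's closed-form lognormal approximation would replace the large Monte Carlo run entirely, with Wilks' bound retained as a cheap high-confidence check on the percentile of interest.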
