
Efficient Numerical Methods for High-Dimensional Approximation Problems

In the field of uncertainty quantification, the effects of parameter uncertainties on scientific simulations may be studied by integrating or approximating a quantity of interest as a function over the parameter space. If this is done numerically, using regular grids with a fixed resolution, the required computational work increases exponentially with the number of uncertain parameters, a phenomenon known as the curse of dimensionality. We study two methods that can help break this curse: discrete least squares polynomial approximation and kernel-based approximation. For the former, we adaptively determine sparse polynomial bases and use evaluations at randomly drawn, quasi-optimally distributed nodes; for the latter, we use evaluations on sparse grids, as introduced by Smolyak. To mitigate the additional cost of solving differential equations at each evaluation node, we extend multilevel methods to the approximation of response surfaces. For this purpose, we provide a general analysis that exhibits multilevel algorithms as special cases of an abstract version of Smolyak's algorithm.
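
As a minimal illustration of the first approach, the Python sketch below fits a sparse (total-degree) polynomial basis to evaluations at random nodes by discrete least squares. The monomial basis, uniform sampling, and oversampling factor are simplifying assumptions for illustration; they do not reproduce the adaptive, quasi-optimal sampling developed in the thesis.

    # Sketch of discrete least squares polynomial approximation on a sparse
    # total-degree basis with random evaluation nodes (illustrative only).
    from itertools import product

    import numpy as np

    def total_degree_set(d, p):
        """Multi-indices alpha in N^d with |alpha| <= p (a sparse basis)."""
        return [a for a in product(range(p + 1), repeat=d) if sum(a) <= p]

    def design_matrix(X, indices):
        """Evaluate the monomial prod_k x_k^alpha_k at each node in X."""
        return np.column_stack(
            [np.prod(X ** np.asarray(a), axis=1) for a in indices]
        )

    def least_squares_fit(f, d, p, rng, oversampling=3):
        indices = total_degree_set(d, p)
        n = oversampling * len(indices)  # more nodes than basis functions
        X = rng.uniform(-1.0, 1.0, size=(n, d))
        coeffs, *_ = np.linalg.lstsq(design_matrix(X, indices), f(X), rcond=None)
        return indices, coeffs

    # Usage: approximate a smooth quantity of interest in d = 4 parameters.
    rng = np.random.default_rng(0)
    f = lambda X: np.exp(-np.sum(X**2, axis=1))
    indices, coeffs = least_squares_fit(f, d=4, p=5, rng=rng)
    X_test = rng.uniform(-1.0, 1.0, size=(1000, 4))
    error = np.max(np.abs(design_matrix(X_test, indices) @ coeffs - f(X_test)))
    print(f"{len(indices)} basis functions, max test error {error:.2e}")

The point of the sparse index set is that its size grows far more slowly with the dimension d than the full tensor grid of the same degree, which is what makes the least squares problem tractable in moderately high dimensions.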
In financial mathematics, high-dimensional approximation problems occur in the pricing of derivatives with multiple underlying assets. The value function of American options can theoretically be determined backwards in time using the dynamic programming principle. Numerical implementations, however, face the curse of dimensionality because each asset corresponds to a dimension in the domain of the value function. Lack of regularity of the value function at the optimal exercise boundary further increases the computational complexity. As an alternative, we propose a novel method that determines an optimal exercise strategy as the solution of a stochastic optimization problem and subsequently computes the option value by simple Monte Carlo simulation. For this purpose, we represent the American option price as the supremum of the expected payoff over a set of randomized exercise strategies. Unlike the corresponding classical representation over subsets of Euclidean space, this relaxation gives rise to a well-behaved objective function that can be globally optimized using standard optimization routines.
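
To make the randomized-strategy idea concrete, the following sketch prices a Bermudan put on a single asset under Black-Scholes dynamics: exercising at each date with a sigmoid probability around a per-date boundary level gives a smooth objective that a standard routine can optimize, after which the value is estimated by plain Monte Carlo on fresh paths. The sigmoid parametrization and all model parameters are illustrative assumptions, not the specific construction of the thesis.

    # Sketch: pricing via randomized exercise strategies (illustrative).
    import numpy as np
    from scipy.optimize import minimize

    S0, K, r, sigma, T, n_dates = 100.0, 100.0, 0.05, 0.2, 1.0, 10

    def simulate_paths(rng, n_paths):
        """Geometric Brownian motion sampled at the exercise dates."""
        dt = T / n_dates
        Z = rng.standard_normal((n_paths, n_dates))
        steps = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z
        return S0 * np.exp(np.cumsum(steps, axis=1))

    def value(theta, S, tau=2.0):
        """Expected discounted payoff of a randomized rule that exercises
        at date t with probability sigmoid((theta_t - S_t) / tau)."""
        n_paths, _ = S.shape
        dt = T / n_dates
        disc = np.exp(-r * dt * np.arange(1, n_dates + 1))
        payoff = np.maximum(K - S, 0.0)
        p = 1.0 / (1.0 + np.exp(-(theta - S) / tau))
        p[:, -1] = 1.0                      # always stop at maturity
        hold = np.cumprod(1.0 - p, axis=1)  # prob. of not yet having exercised
        first = np.hstack([np.ones((n_paths, 1)), hold[:, :-1]]) * p
        return np.mean(np.sum(first * payoff * disc, axis=1))

    rng = np.random.default_rng(0)
    S_train = simulate_paths(rng, 20_000)
    # Smooth objective: maximize over the per-date boundary levels theta.
    res = minimize(lambda th: -value(th, S_train), x0=np.full(n_dates, K),
                   method="Nelder-Mead", options={"maxiter": 2000})
    # Simple Monte Carlo on fresh paths gives a lower-bound price estimate.
    S_test = simulate_paths(rng, 100_000)
    print(f"lower-bound price estimate: {value(res.x, S_test):.3f}")

Because any exercise rule, randomized or not, yields a lower bound on the American price, optimizing the smooth relaxation and re-evaluating on independent paths is a valid (biased-low) pricing procedure; the thesis's representation result is what justifies taking the supremum over the randomized class.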

Identifier: oai:union.ndltd.org:kaust.edu.sa/oai:repository.kaust.edu.sa:10754/630974
Date: 06 February 2019
Creators: Wolfers, Sören
Contributors: Tempone, Raul; Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division; Keyes, David E.; Mai, Paul Martin; Gobet, Emmanuel
Source Sets: King Abdullah University of Science and Technology
Language: English
Detected Language: English
Type: Dissertation
