
Stochastic control and approximation for Boltzmann equation

In this thesis we study two problems in probability theory. The first is a stochastic control problem, which essentially amounts to finding an optimal probability measure that optimizes a given reward functional. The second is to approximate the solution of the Boltzmann equation; thanks to conservation of mass, the solution can be regarded as a family of probability measures indexed by time.

In the first part, we prove a dynamic programming principle for stochastic optimal control problems with expectation constraints via a measurable selection approach. Since state constraints, drawdown constraints, target constraints, quantile hedging, and floor constraints can all be reformulated as expectation constraints, we apply our result to prove the corresponding dynamic programming principles for these five classes of stochastic control problems in a continuous but non-Markovian setting.

In the second part, in order to solve the Boltzmann equation numerically, we propose a new model equation that approximates the Boltzmann equation without angular cutoff. The approximate equation combines the Boltzmann collision operator with angular cutoff and the Landau collision operator. We first establish well-posedness of the approximate equation, and then derive an error estimate between the solutions of the approximate equation and the original equation. Compared with the standard angular cutoff approximation, our method achieves a higher order of accuracy.
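As a rough illustration of the expectation-constraint framework (the notation below is ours, not the thesis's, and the precise weak formulation used in the thesis may differ), the constrained problem can be written schematically as

\[
V(t,\mu) \;=\; \sup_{\nu \in \mathcal{U}} \; \mathbb{E}\big[\Phi\big(X^{\nu}_{\cdot}\big)\big]
\qquad \text{subject to} \qquad
\mathbb{E}\big[\Psi\big(X^{\nu}_{\cdot}\big)\big] \;\ge\; m .
\]

For instance, an almost-sure state constraint X_s \in \mathcal{O} for all s, with \mathcal{O} closed, fits this template by choosing

\[
\Psi\big(X_{\cdot}\big) \;=\; -\sup_{s \in [t,T]} \operatorname{dist}\big(X_s,\mathcal{O}\big),
\qquad m = 0,
\]

since \mathbb{E}[\Psi] \ge 0 forces the nonnegative random variable \sup_{s} \operatorname{dist}(X_s,\mathcal{O}) to vanish almost surely; quantile hedging fits similarly by taking \Psi to be the indicator of the hedging event and m the prescribed success probability.

The grazing-collision approximation in the second part can be sketched in the same hedged spirit (the notation and the scaling factor c(\varepsilon) below are illustrative assumptions; the thesis specifies the exact kernel and constants). Splitting the non-cutoff collision operator at a deviation angle \varepsilon and replacing the non-integrable grazing part by a Landau operator gives a model of the form

\[
Q^{\varepsilon}(f,f) \;=\; Q_{\theta \ge \varepsilon}(f,f) \;+\; c(\varepsilon)\, Q_{L}(f,f),
\]
\[
Q_{L}(f,f)(v) \;=\; \nabla_v \cdot \int_{\mathbb{R}^3} a(v-v_*)\,\big[f(v_*)\,\nabla_v f(v) \;-\; f(v)\,\nabla_{v_*} f(v_*)\big]\, dv_* ,
\]

with a(z) proportional to |z|^{\gamma+2}\big(\mathrm{Id} - \tfrac{z \otimes z}{|z|^{2}}\big). The standard angular cutoff approximation corresponds to dropping the grazing part altogether, i.e. c(\varepsilon) = 0; retaining the Landau correction keeps the leading-order contribution of grazing collisions, which is what allows a higher order of accuracy in \varepsilon.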

Identifier: oai:union.ndltd.org:hkbu.edu.hk/oai:repository.hkbu.edu.hk:etd_oa-1392
Date: 19 July 2017
Creators: Zhou, Yulong
Publisher: HKBU Institutional Repository
Source Sets: Hong Kong Baptist University
Language: English
Detected Language: English
Type: text
Format: application/pdf
Source: Open Access Theses and Dissertations
