1 |
Numerical Methods for Optimal Trade Execution. Tse, Shu Tong. January 2012.
Optimal trade execution aims to balance price impact and timing risk. In formulating the optimization problem mathematically, we focus primarily on Mean Variance (MV) optimization, in which the two conflicting objectives are maximizing expected revenue (the flip side of price impact) and minimizing the variance of revenue (a measure of timing risk). We also consider the expected quadratic variation of the portfolio value process as an alternative measure of timing risk, which leads to Mean Quadratic Variation (MQV) optimization.
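As an illustration only (the notation here is ours and may differ from the thesis's exact formulation), write R_T for the terminal revenue of the liquidation, alpha_t for the shares still held at time t, S_t for the price, sigma for its volatility, and lambda > 0 for the risk-aversion parameter. The two criteria can then be sketched, under geometric Brownian motion price dynamics for instance, as

    \sup_v \; \mathbb{E}[R_T] - \lambda \, \mathrm{Var}[R_T]    (MV)
    \sup_v \; \mathbb{E}[R_T] - \lambda \, \mathbb{E}\!\Big[ \int_0^T \sigma^2 S_t^2 \alpha_t^2 \, dt \Big]    (MQV)

where the integral term is the expected quadratic variation accumulated by the unsold position.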
We demonstrate that MV-optimal strategies differ from MQV-optimal strategies in many respects. These differences stand in stark contrast to the common belief that MQV-optimal strategies are similar to, or even the same as, MV-optimal strategies. They should be of interest to practitioners, since we prove that the classic Almgren-Chriss strategies (the industry standard) are MQV-optimal, contrary to the common belief that they are MV-optimal.
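For reference, in the standard Almgren-Chriss setting (arithmetic price dynamics, linear temporary impact with coefficient eta, risk aversion lambda; the notation follows the Almgren-Chriss literature rather than the thesis itself), the well-known deterministic liquidation trajectory for an initial position of X shares over a horizon T is

    x(t) = X \, \frac{\sinh\big(\kappa (T - t)\big)}{\sinh(\kappa T)}, \qquad \kappa \approx \sqrt{\lambda \sigma^2 / \eta},

where x(t) is the number of shares still held at time t.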
From a computational point of view, we extend theoretical results in the literature to prove that the mean-variance efficient frontier computed using our method is indeed the complete Pareto-efficient frontier. First, we generalize the result in Li (2000) on the embedding technique and develop a post-processing algorithm that guarantees Pareto-optimality of the numerically computed efficient frontier. Second, we extend the convergence result in Barles (1990) to viscosity solutions of a system of nonlinear Hamilton-Jacobi-Bellman partial differential equations (HJB PDEs).
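Roughly speaking (with notation chosen here purely for illustration), the embedding technique replaces the MV objective, whose variance term is not separable and hence not directly amenable to dynamic programming, by a family of embedded problems that are:

    \max_v \; \mathbb{E}[R_T] - \lambda \, \mathrm{Var}[R_T] \quad \longrightarrow \quad \max_v \; \mathbb{E}\big[\gamma R_T - R_T^2\big].

Every MV-optimal strategy solves an embedded problem for some gamma, but not every embedded solution is Pareto-optimal, which is what motivates the post-processing step mentioned above.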
On the numerical side, we combine similarity reduction, non-standard interpolation, and careful grid construction to significantly improve the efficiency of our numerical methods for solving the nonlinear HJB PDEs.
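As one example of what careful grid construction can mean in this context, the following is a minimal sketch, purely illustrative and not the thesis's implementation, of a sinh-stretched grid that clusters nodes near a point x_c where the PDE solution varies rapidly (all names and defaults are assumptions):

    import numpy as np

    def stretched_grid(x_min, x_max, x_c, n, beta=0.1):
        """Non-uniform grid on [x_min, x_max] with nodes clustered near x_c.

        beta controls how tightly nodes concentrate around x_c.
        """
        xi = np.linspace(0.0, 1.0, n + 1)              # uniform computational grid
        a = np.arcsinh((x_min - x_c) / beta)
        b = np.arcsinh((x_max - x_c) / beta)
        return x_c + beta * np.sinh(a + (b - a) * xi)  # mapped physical grid

Mappings of this kind keep resolution high near the region of interest without requiring a uniformly fine grid.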
|
2 |
Optimized Trade Execution with Reinforcement Learning / Optimal orderexekvering med reinforcement learning. Dahlén, Olle; Rantil, Axel. January 2018.
In this thesis, we study the problem of buying or selling a given volume of a financial asset within a given time horizon at the best possible price, a problem formally known as optimized trade execution. Our approach is empirical: we use historical data to simulate the process of placing artificial orders in a market. This simulation enables us to model the problem as a Markov decision process (MDP). Given this MDP, we train and evaluate a set of reinforcement learning (RL) algorithms, all with the objective of minimizing the transaction cost on unseen test data. We train and evaluate these for various instruments and problem settings, such as different trading horizons. Our first model was developed to validate the results achieved by Nevmyvaka, Feng and Kearns [9], and is thus called NFK. We extended this model into what we call Dual NFK, in an attempt to regularize the model against external price movement. Furthermore, we implemented and evaluated a classical RL algorithm, namely Sarsa(λ) with a modified reward function. Lastly, we evaluated proximal policy optimization (PPO), an actor-critic RL algorithm that uses neural networks to find the optimal policy. Alongside these models, we implemented five simple baseline strategies with various characteristics; these were taken partly from the literature and partly developed by us, and are used to evaluate the performance of our models. We achieve results on par with those of Nevmyvaka, Feng and Kearns [9], but only in a few cases. Dual NFK performed very similarly to NFK, indicating that a single model can be trained for both the buy and sell cases instead of two for the optimized trade execution problem. We also found that Sarsa(λ) with a modified reward function performed better than both of these models, but it was still outperformed by baseline strategies in many problem settings. Finally, we evaluated PPO for one problem setting and found that it outperformed even the best of the baseline strategies and models, showing promise for deep reinforcement learning methods on the optimized trade execution problem.
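As a rough, hypothetical sketch of the kind of tabular value update underlying an NFK-style formulation (state = time remaining and inventory remaining, action = how aggressively to price the order, reward = the negative cost of the fills obtained in that step); all names, sizes, and parameters below are illustrative and not taken from the thesis:

    import numpy as np

    n_time, n_inv, n_actions = 8, 8, 10        # coarse discretization, illustrative
    Q = np.zeros((n_time, n_inv, n_actions))   # tabular action-value estimates
    alpha, gamma, eps = 0.1, 1.0, 0.1          # learning rate, discount, exploration
    rng = np.random.default_rng(0)

    def choose_action(state):
        """Epsilon-greedy selection over the discretized actions."""
        t, i = state
        if rng.random() < eps:
            return int(rng.integers(n_actions))
        return int(np.argmax(Q[t, i]))

    def q_update(state, action, reward, next_state, done):
        """One-step Q-learning backup; reward is, e.g., minus the step's execution shortfall."""
        t, i = state
        target = reward if done else reward + gamma * np.max(Q[next_state])
        Q[t, i, action] += alpha * (target - Q[t, i, action])

A Sarsa(λ)-style variant would instead bootstrap from the action actually taken in the next state and spread the update over eligibility traces.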
|