An Application of Sliding Mode Control to Model-Based Reinforcement Learning

State-of-the-art model-free reinforcement learning algorithms can generate admissible controls for complicated systems with no prior knowledge of the system dynamics, provided sufficiently many samples (often millions) are available from the environment. Model-based reinforcement learning approaches, on the other hand, seek to bring established optimal and robust control techniques to reinforcement learning tasks by modelling the system dynamics and applying well-established control algorithms to the learned model. Sliding-mode controllers are robust to system disturbances and modelling errors, and have been widely used for the control of high-order nonlinear systems. This thesis studies the application of sliding-mode control to model-based reinforcement learning. Computer simulation results demonstrate that sliding-mode control is viable in the reinforcement learning setting. Although performance may suffer from deviations in state estimation, limitations in the capacity of the system model to express the true dynamics, and the need for many samples to converge, the approach still performs comparably to conventional model-free reinforcement learning methods.
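For readers unfamiliar with the control law mentioned above, the following is a minimal sketch of a sliding-mode controller stabilizing a simple pendulum. The pendulum parameters, the gains lam and k, and the simulation loop are illustrative assumptions, not details taken from the thesis; in the model-based setting described in the abstract, the dynamics function would be replaced by a learned model of the system.

import numpy as np

# Illustrative pendulum parameters (assumed for this sketch).
g, l, m, dt = 9.81, 1.0, 1.0, 0.01

def dynamics(theta, omega, u):
    """Pendulum angular acceleration; stands in for a learned dynamics model."""
    return (u - m * g * l * np.sin(theta)) / (m * l**2)

def smc_control(theta, omega, lam=5.0, k=20.0):
    """Sliding-mode control law.

    s = omega + lam * theta defines the sliding surface; the switching term
    -k * sign(s) drives s to zero and gives robustness to model error and
    disturbances, provided k exceeds the disturbance bound.
    """
    s = omega + lam * theta
    return -k * np.sign(s)

# Simulate regulation to the downward equilibrium theta = 0.
theta, omega = np.pi / 4, 0.0
for _ in range(1000):
    u = smc_control(theta, omega)
    omega += dynamics(theta, omega, u) * dt
    theta += omega * dt
print(f"final angle: {theta:.4f} rad")  # should be near 0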

Identifier: oai:union.ndltd.org:CALPOLY/oai:digitalcommons.calpoly.edu:theses-3488
Date: 01 September 2019
Creators: Parisi, Aaron Thomas
Publisher: DigitalCommons@CalPoly
Source Sets: California Polytechnic State University
Detected Language: English
Type: text
Format: application/pdf
Source: Master's Theses