Abstract | The goal of reinforcement learning is to enable systems to autonomously solve tasks in the real world, even in the absence of prior data. To succeed in such situations, reinforcement learning algorithms collect new experience through interactions with the environment to further the learning process. The behaviour is optimized by maximizing a reward function, which assigns high numerical values to desired behaviours. Especially in robotics, such interactions with the environment are expensive in terms of the required execution time, human involvement, and mechanical degradation of the system itself. Therefore, this thesis aims to introduce sample-efficient reinforcement learning methods that are applicable to real-world settings and control tasks such as bimanual manipulation and locomotion. Sample efficiency is achieved through directed exploration, using either dimensionality reduction or trajectory optimization methods. Finally, it is demonstrated how data-efficient reinforcement learning methods can be used to simultaneously optimize the behaviour and morphology of robots. / Doctoral Dissertation, Computer Science, 2019
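As background for the reward-maximization objective described in the abstract, the standard reinforcement learning formulation (general notation, not taken from the thesis itself) seeks a policy \pi that maximizes the expected discounted return:

J(\pi) = \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right]

where r is the reward function assigning high numerical values to desired behaviours, \gamma \in [0, 1) is the discount factor, and (s_t, a_t) are the states and actions visited under the policy \pi.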
Identifier | oai:union.ndltd.org:asu.edu/item:55501
Date | January 2019 |
Contributors | Luck, Kevin Sebastian (Author), Ben Amor, Hani (Advisor), Aukes, Daniel (Committee member), Fainekos, Georgios (Committee member), Scholz, Jonathan (Committee member), Yang, Yezhou (Committee member), Arizona State University (Publisher) |
Source Sets | Arizona State University |
Language | English |
Detected Language | English |
Type | Doctoral Dissertation |
Format | 150 pages |
Rights | http://rightsstatements.org/vocab/InC/1.0/ |