Reinforcement Learning Methods for OpenAI Environments

Winberg, Andreas; Öhrstam Lindström, Oliver. January 2020.
Using the powerful methods developed in the field of reinforcement learning requires an understanding of the advantages and drawbacks of different methods as well as the effects of the different adjustable parameters. This paper highlights the differences in performance and applicability between three different Q-learning methods: Q-table, deep Q-network and double deep Q-network, where Q refers to the value assigned to a given state-action pair. The performance of these algorithms is evaluated on the two OpenAI gym environments MountainCar-v0 and CartPole-v0. The implementations are done in Python using the Tensorflow toolkit with Keras. The results show that the Q-table was the best to use in the MountainCar environment because it was the easiest to implement and was much faster to compute, but it was also shown that the network methods required far less training data. No significant difference in performance was found between the deep Q-network and the double deep Q-network. In the end, there is a trade-off between the number of episodes required and the computation time for each episode. The network parameters were also harder to tune since much more time was needed to compute and visualize the result. / [Swedish abstract, translated] Using the powerful methods developed in the field of reinforcement learning requires an understanding of the advantages and disadvantages of different methods as well as the effects of the different adjustable parameters. This paper highlights the differences in performance and functionality between three different methods: Q-table, deep Q-network and double deep Q-network. The performance of these algorithms is evaluated in the two OpenAI gym environments MountainCar-v0 and CartPole-v0. The implementations are done in Python using the Tensorflow software library together with Keras. The results show that the Q-table was the easiest to implement and trained the fastest in both environments. The network methods, however, required less training data, even though training on the available data took a long time. No large differences in performance were found between the deep Q-network and the double deep Q-network. In the end, there will always be a trade-off between the amount of training data required and the time it takes to train on the available data.
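The abstract compares tabular Q-learning against deep and double deep Q-networks on these gym environments. The sketch below illustrates the tabular variant on MountainCar-v0; it is a minimal illustration rather than the authors' implementation: the 20x20 state discretization, learning rate, discount factor, epsilon and episode count are assumed values, and the code targets the classic gym API (reset returning only the observation, step returning four values) that was current in 2020.

```python
# Minimal tabular Q-learning sketch for MountainCar-v0 (illustrative only).
# Discretization granularity and hyperparameters below are assumptions,
# not the parameters used in the thesis.
import numpy as np
import gym

env = gym.make("MountainCar-v0")
n_bins = 20
low, high = env.observation_space.low, env.observation_space.high
bin_width = (high - low) / n_bins

def discretize(obs):
    # Map a continuous (position, velocity) observation to a grid cell.
    idx = ((obs - low) / bin_width).astype(int)
    return tuple(np.clip(idx, 0, n_bins - 1))

q_table = np.zeros((n_bins, n_bins, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.1

for episode in range(500):
    state = discretize(env.reset())
    done = False
    while not done:
        # Epsilon-greedy action selection.
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(q_table[state]))
        obs, reward, done, _ = env.step(action)
        next_state = discretize(obs)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        target = reward + gamma * np.max(q_table[next_state]) * (not done)
        q_table[state + (action,)] += alpha * (target - q_table[state + (action,)])
        state = next_state
```

For the network methods compared in the paper, the main algorithmic difference lies in the temporal-difference target: a deep Q-network uses y = r + gamma * max_a' Q_target(s', a'), while a double deep Q-network selects the next action with the online network and evaluates it with the target network, y = r + gamma * Q_target(s', argmax_a' Q_online(s', a')), which reduces the overestimation bias of the max operator.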
