
A scalable species-based genetic algorithm for reinforcement learning / En skalbar artbaserad genetisk algoritm för förstärkningsinlärning

Seth, Anirudh, January 2021
Existing methods in Reinforcement Learning (RL) that rely on gradient estimates suffer from slow convergence, poor sample efficiency, and computationally expensive training, especially when dealing with complex real-world problems with high-dimensional state and action spaces. In this work, we attempt to leverage the benefits of evolutionary computation as a competitive, scalable, and gradient-free alternative for training deep neural networks on RL-specific problems. In this context, we present a novel distributed algorithm based on an efficient model encoding that allows the intuitive application of genetic operators. Our results demonstrate improved exploration and a considerable reduction of trainable parameters while maintaining performance comparable to algorithms such as Deep Q-Network (DQN), Asynchronous Advantage Actor Critic (A3C), and Evolution Strategy (ES) when evaluated on Atari 2600 games. A scalability assessment of the algorithm revealed a significant parallel speedup and an over 10,000-fold improvement in memory requirements. Sample efficiency improved in some experiments, but not significantly. Finally, the algorithm was applied to a Remote Electrical Tilt (RET) optimization task, where the improvements in Key Performance Indicators (KPIs) show that the algorithm is also effective in other domains.
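The abstract does not spell out the algorithmic details, but the general recipe it points to, evolving an encoded neural-network policy with genetic operators instead of gradient updates, can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the thesis implementation: the toy fitness function stands in for an Atari or RET episode rollout, the flat-vector encoding and all hyper-parameters are invented, and species are formed by a fixed partition of the population rather than by the similarity-based grouping a real species-based GA would use.

```python
# Minimal sketch of a species-style genetic algorithm evolving neural-network
# policy weights for an RL task. All names, the toy fitness function, and the
# hyper-parameters are illustrative assumptions, not the thesis implementation.
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, ACT_DIM, HIDDEN = 4, 2, 16           # toy problem sizes (assumed)
POP_SIZE, N_SPECIES, GENERATIONS = 40, 4, 30  # assumed hyper-parameters
MUT_STD = 0.05                                # mutation noise scale (assumed)


def init_genome():
    """Flat parameter vector encoding a small 2-layer policy network."""
    n_params = OBS_DIM * HIDDEN + HIDDEN + HIDDEN * ACT_DIM + ACT_DIM
    return rng.normal(0.0, 0.5, size=n_params)


def policy_act(genome, obs):
    """Decode the flat genome into weights and pick a greedy action."""
    i = 0
    w1 = genome[i:i + OBS_DIM * HIDDEN].reshape(OBS_DIM, HIDDEN); i += OBS_DIM * HIDDEN
    b1 = genome[i:i + HIDDEN]; i += HIDDEN
    w2 = genome[i:i + HIDDEN * ACT_DIM].reshape(HIDDEN, ACT_DIM); i += HIDDEN * ACT_DIM
    b2 = genome[i:i + ACT_DIM]
    h = np.tanh(obs @ w1 + b1)
    return int(np.argmax(h @ w2 + b2))


def evaluate(genome, episode_len=50):
    """Toy stand-in for an episode rollout: reward favours matching the sign
    of the first observation component. A real run would use an RL env."""
    total = 0.0
    obs = rng.normal(size=OBS_DIM)
    for _ in range(episode_len):
        action = policy_act(genome, obs)
        total += 1.0 if (obs[0] > 0) == (action == 1) else 0.0
        obs = rng.normal(size=OBS_DIM)
    return total


population = [init_genome() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    fitness = np.array([evaluate(g) for g in population])
    # Split the population into fixed-size species; each species keeps its
    # best member as an elite and refills the rest with mutated copies,
    # so the niches evolve semi-independently.
    per_species = POP_SIZE // N_SPECIES
    next_pop = []
    for s in range(N_SPECIES):
        idx = np.arange(s * per_species, (s + 1) * per_species)
        elite = population[idx[np.argmax(fitness[idx])]]
        next_pop.append(elite)                      # elitism
        for _ in range(per_species - 1):            # mutated offspring
            next_pop.append(elite + rng.normal(0.0, MUT_STD, size=elite.shape))
    population = next_pop
    print(f"gen {gen:02d}  best fitness {fitness.max():.1f}")
```

A distributed variant of such a loop would farm the `evaluate` calls out to parallel workers and exchange only the compact genome encodings, which is consistent with the parallel speedup and reduced memory footprint the abstract reports.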
