Networked control systems, commonly employed in domains such as space exploration and robotics, rely on network communication for efficient and coordinated control among distributed components. In these settings, managing communication effectively to prevent network overload is a critical challenge. Previous research has explored combining reinforcement learning with event-triggered control so that agents autonomously learn efficient policies for both control and communication. Nevertheless, these approaches have encountered limitations in performance and scalability when applied to multi-agent scenarios. This thesis examines the underlying causes of these challenges and proposes potential solutions. The findings suggest that training agents in a decentralized manner, coupled with modeling of the missing communication, can improve agent performance. This allows the agents to achieve performance levels comparable to those of agents trained with full communication, while reducing unnecessary communication.
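The thesis itself learns the control and communication policies with reinforcement learning; the sketch below is not that method, only a minimal hand-coded illustration of the underlying trigger-and-predict idea: an agent transmits its state only when it drifts from a shared model-based estimate, and both sides propagate that model between transmissions so the controller can act on the "missing" communication. All names and numerical values (the dynamics A and B, the gain K, the `simulate` helper, the thresholds) are assumptions for the example, not values from the thesis.

```python
import numpy as np

# Hypothetical 1-D double-integrator agent: state = [position, velocity].
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])      # assumed discrete-time dynamics over one step
B = np.array([[0.0], [0.1]])    # assumed control input matrix
K = np.array([[0.5, 1.0]])      # assumed stabilizing feedback gain

def simulate(threshold, steps=200, noise=0.01, seed=0):
    """Event-triggered loop: the sensing side sends the true state only when it
    deviates from the shared model-based estimate by more than `threshold`;
    otherwise the controller acts on the propagated estimate."""
    rng = np.random.default_rng(seed)
    x = np.array([[1.0], [0.0]])     # true state
    x_hat = x.copy()                 # estimate shared by sender and controller
    messages = 0
    for _ in range(steps):
        # Event trigger: communicate only when the estimate has drifted too far.
        if np.linalg.norm(x - x_hat) > threshold:
            x_hat = x.copy()
            messages += 1
        u = -K @ x_hat               # control computed from the (possibly stale) estimate
        w = noise * rng.standard_normal((2, 1))
        x = A @ x + B @ u + w        # true plant update with process noise
        x_hat = A @ x_hat + B @ u    # both sides propagate the same model in between messages
    return messages, float(np.linalg.norm(x))

if __name__ == "__main__":
    for thr in (0.0, 0.05, 0.2):
        sent, final_norm = simulate(thr)
        print(f"threshold={thr:.2f}: messages={sent}, final state norm={final_norm:.3f}")
```

Raising the threshold trades communication for control accuracy, which is the trade-off the learned policies in the thesis are meant to navigate automatically.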
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-504563 |
Date | January 2023 |
Creators | Pagliaro, Filip |
Publisher | Uppsala universitet, Avdelningen för systemteknik |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |
Relation | UPTEC IT, 1401-5749 ; 23017 |