
Sustainable IoT Data Caching Policy using Deep Reinforcement Learning

Over the years, the Internet of Things (IoT) has grown significantly and been integrated into many fields such as medicine, agriculture, and smart homes. This growth has led to a substantial increase in the amount of data generated. IoT devices are constrained by several factors, including transient data properties, limited memory, energy consumption, and computational power. Edge caching has been used to alleviate the load caused by this increase while also improving service quality. However, because IoT data have a limited lifetime, conventional state-of-the-art caching policies have proven inefficient. Several caching methods have been proposed over the years to handle ephemeral IoT data, since data freshness plays a significant role in IoT caching policies. It is not enough to develop innovative technologies and solutions; their long-term impact on the environment must also be considered. This paper proposes a collaborative edge caching method that optimises transmission latency, traffic cost, and carbon footprint, thereby addressing sustainability concerns. A deep reinforcement learning (DRL) approach is used in which each edge node learns its own best caching policy. The method is compared with the LRU cache replacement policy, a DRL model proposed in prior work, and a model that does not use caching. Simulation results show that the proposed DRL-based IoT data caching policy outperforms the baseline policies.
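The abstract notes that conventional policies such as LRU struggle with ephemeral IoT data because items expire before they are reused. The snippet below is a minimal illustrative sketch (not the thesis's implementation) of an LRU cache extended with per-item lifetimes, showing how freshness can be checked on lookup; class and parameter names are hypothetical.

```python
import time
from collections import OrderedDict

class FreshnessAwareLRUCache:
    """Hypothetical LRU cache that also discards IoT data past its lifetime."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None  # cache miss
        value, expires_at = entry
        if time.time() >= expires_at:
            # Stale item: ephemeral IoT data must not be served past its lifetime.
            del self.store[key]
            return None
        self.store.move_to_end(key)  # mark as most recently used
        return value

    def put(self, key, value, lifetime_s):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = (value, time.time() + lifetime_s)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used item


# Example usage with made-up sensor readings and lifetimes.
cache = FreshnessAwareLRUCache(capacity=2)
cache.put("temp_sensor_1", 21.5, lifetime_s=10)
cache.put("humidity_sensor_1", 0.43, lifetime_s=5)
print(cache.get("temp_sensor_1"))  # 21.5 while fresh, None once expired
```

In the DRL setting described above, the caching decision itself (what to store and evict at each edge) would instead be learned from a reward reflecting latency, traffic cost, and carbon footprint, rather than fixed by a recency rule as in this baseline sketch.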

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:ltu-92686
Date January 2022
Creators Woldeselassie Ogbazghi, Hanna
Publisher Luleå tekniska universitet, Institutionen för system- och rymdteknik
Source Sets DiVA Archive at Upsalla University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess
