The Internet of Things (IoT) is widely adopted across many fields because of its flexibility and low cost. Energy-harvesting Wireless Sensor Networks (WSNs) are becoming a building block of many IoT applications, providing a perpetual source of energy to power energy-constrained IoT devices. However, the dynamic and stochastic nature of the harvested energy calls for adaptive energy management solutions. Duty cycling is among the most prominent adaptive approaches; it consolidates the effort of energy management solutions at the routing and application layers to ensure energy sustainability and, hence, continuous network operation. The IEEE 802.15.4 standard defines the physical layer and the Medium Access Control (MAC) sub-layer for low-data-rate wireless devices with limited energy consumption requirements. The MAC sub-layer's functionalities include scheduling the duty cycle of individual devices, but how that schedule is computed is left open to implementers. Various computational mechanisms are used to derive the duty cycle of IoT nodes so as to ensure optimal performance in both energy sustainability and Quality of Service (QoS), and Reinforcement Learning (RL) is the mechanism most commonly employed in this context. The literature presents various RL-based solutions that adjust the duty cycle of IoT devices to adapt to changes in the IoT environment. However, these solutions are usually tailored to specific scenarios or focus on only one aspect of the problem, namely QoS performance or energy limitation.

This work proposes a generic adaptive duty-cycling solution and evaluates its performance under different energy generation and traffic conditions. It emphasizes energy sustainability while taking QoS performance into account. Although different approaches exist to achieve energy sustainability, Energy Neutral Operation (ENO)-based solutions are the most prominent way to ensure energy-sustainable performance; they do not, however, necessarily guarantee optimal QoS. This work adopts a Markov Decision Process (MDP) model from the literature that minimizes the distance from energy neutrality given the energy harvesting and ENO conditions, and introduces QoS penalties into the reward formulation to improve QoS performance.

We first examine QoS performance against the benchmark solution. We then analyze performance under different energy harvesting and consumption profiles to further assess QoS and to determine whether energy sustainability is still maintained under varying conditions. The results show more efficient utilization of harvested energy when it is available in abundance. One limitation of our solution arises when energy demand is high or harvested energy is scarce: in such cases, QoS degrades because IoT nodes adopt a low duty cycle to avoid energy depletion. We further study the effect of this limitation on the solution's scalability and attempt to address it by evaluating the performance with a routing solution that balances load distribution and, hence, energy demand across the network.
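To make the reward idea concrete, the following is a minimal sketch, not the thesis implementation: it illustrates an RL reward that combines the distance from energy neutrality with a QoS penalty, and a tabular Q-learning update over discrete duty-cycle actions. All names, weights, action levels, and the choice of packet drops as the QoS penalty are assumptions made for illustration only.

```python
import numpy as np

DUTY_CYCLES = [0.1, 0.25, 0.5, 0.75, 1.0]   # candidate duty-cycle actions (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1        # learning rate, discount, exploration rate
W_ENO, W_QOS = 1.0, 0.5                      # assumed weights for the two penalty terms


def reward(e_harvested, e_consumed, packets_dropped):
    """Penalize deviation from energy neutrality and poor QoS (illustrative)."""
    eno_distance = abs(e_harvested - e_consumed)   # distance from energy neutrality
    qos_penalty = packets_dropped                  # e.g. packets lost in the slot
    return -(W_ENO * eno_distance + W_QOS * qos_penalty)


def choose_action(q_table, state, rng):
    """Epsilon-greedy selection of a duty-cycle index for the given state."""
    if rng.random() < EPSILON:
        return int(rng.integers(len(DUTY_CYCLES)))
    return int(np.argmax(q_table[state]))


def q_update(q_table, state, action, r, next_state):
    """Standard one-step Q-learning update of the state-action value."""
    best_next = np.max(q_table[next_state])
    q_table[state, action] += ALPHA * (r + GAMMA * best_next - q_table[state, action])
```

In such a sketch, each node would observe its energy and traffic state at the end of a slot, compute the reward from the harvested and consumed energy plus the QoS penalty, and update the value of the duty-cycle action it took; the weights trade off energy neutrality against QoS, which is the tension the abstract describes.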
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:ltu-101533 |
Date | January 2023 |
Creators | Charef, Nadia |
Publisher | Luleå tekniska universitet, Institutionen för system- och rymdteknik |
Source Sets | DiVA Archive at Uppsala University
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |