Edge cloud is a distributed computing architecture that is growing in popularity. It aims to bring the cloud closer to the edge of the network, reducing latency and improving performance through geographically distributed servers (edge nodes). However, during sudden surges in user requests, a node may run short of resources and need a strategy that degrades its service quality to a level that requires fewer resources, so that the service can still be delivered. One such strategy is brownout, a control theory-based algorithm that dynamically adjusts the node's service quality in order to meet, for example, a latency goal. This thesis explores the use of brownout, previously combined with load balancing in the regular cloud, together with load balancing in an edge-cloud environment.

In this thesis, four load-balancing strategies are evaluated in a Kubernetes-based edge-cloud environment, along with an application that implements the brownout feature. Two of the strategies were originally designed for use with brownout but intended for the regular cloud; one is a recently introduced strategy that performs well in the edge cloud but is brownout-unaware; and the last is a random load balancer used as a baseline (also brownout-unaware). The goal of the evaluation is to determine the efficiency of these strategies in different edge-cloud scenarios, with regard to service quality-weighted throughput, average latency, adherence to a set latency goal, and outsourcing (requests load balanced to another edge node). The results show that the first two strategies perform worse than the random load balancer in many respects. Their performance is also less predictable and tends to degrade as network delays increase. The edge-cloud strategy, however, improves in performance when brownout is introduced in the majority of the test scenarios.

Furthermore, the thesis introduces three modifications intended to make one of the cloud-based strategies perform better in the edge cloud. These modifications are tested in the same environment as the other load-balancing strategies and compared against each other. The first modification makes the load-balancing logic treat its own node differently from other edge nodes; the second outsources only when a certain resource threshold is exceeded; and the third prioritizes its own node when it is below a certain resource threshold. The third version improves on the other two and outperforms the base version in all measured metrics. Compared to the edge-cloud strategy with brownout, it performs better with regard to service quality-weighted throughput but is outperformed in all other metrics.
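The brownout mechanism described above can be made concrete with a small control loop. The following is a minimal, hypothetical Python sketch, not the thesis's Kubernetes-based implementation: a PI-style controller maintains a "dimmer" in [0, 1], interpreted as the probability that a request is served with the full, resource-heavy quality, and nudges it each control period based on the gap between measured latency and the latency goal. The class name, gains, and velocity-form update are all assumptions.

```python
# Minimal, illustrative brownout-style controller (a sketch under assumptions,
# not the thesis implementation). The dimmer in [0, 1] is the probability that
# a request is served with full, resource-heavy quality.

class BrownoutController:
    def __init__(self, latency_goal_s: float, kp: float = 0.5, ki: float = 0.1):
        self.latency_goal = latency_goal_s  # latency setpoint the node should meet
        self.kp, self.ki = kp, ki           # controller gains (assumed values)
        self.prev_error = 0.0
        self.dimmer = 1.0                   # start at full service quality

    def update(self, measured_latency_s: float) -> float:
        """Run one control step and return the new dimmer value."""
        error = self.latency_goal - measured_latency_s  # negative when over the goal
        # Velocity-form PI update: change the dimmer by a delta each period.
        delta = self.kp * (error - self.prev_error) + self.ki * error
        self.prev_error = error
        self.dimmer = min(1.0, max(0.0, self.dimmer + delta))
        return self.dimmer
```

In such a setup, the application would serve the optional, resource-heavy part of a response with probability equal to the dimmer, and the dimmer value is also a natural signal for a brownout-aware load balancer to weight nodes by.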
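The third modification (prioritize the local node while it is below a resource threshold, otherwise outsource) can likewise be sketched as a node-selection function. Everything here — the 0.8 threshold, the use of reported dimmer values as weights, and the function itself — is a hypothetical illustration rather than the evaluated implementation.

```python
# Hypothetical sketch of threshold-gated node selection: keep requests on the
# local edge node while its utilisation is below a threshold, otherwise pick a
# node weighted by its reported service quality (dimmer).
import random

def pick_node(local_node: str, dimmers: dict[str, float],
              local_usage: float, threshold: float = 0.8) -> str:
    if local_usage < threshold:
        return local_node                 # enough headroom: do not outsource
    total = sum(dimmers.values())
    if total == 0:
        return local_node                 # no useful quality signal: stay local
    r, acc = random.uniform(0, total), 0.0
    for node, weight in dimmers.items():  # weighted random choice over edge nodes
        acc += weight
        if r <= acc:
            return node
    return local_node
```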
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:liu-192728 |
Date | January 2023 |
Creators | Homssi, Rachel, Möller, Jacob |
Publisher | Linköpings universitet, Institutionen för datavetenskap, Linköpings universitet, Tekniska fakulteten |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |