
Towards Communication-Efficient Federated Learning Through Particle Swarm Optimization and Knowledge Distillation

The widespread popularity of Federated Learning (FL) has led researchers to delve into its various facets, primarily focusing on personalization, fair resource allocation, privacy, and global optimization, with less attention paid to the crucial aspect of ensuring efficient and cost-optimized communication between the FL server and its agents. A major challenge in achieving successful model training and inference on distributed edge devices lies in optimizing communication costs amid resource constraints, such as limited bandwidth, and in selecting efficient agents. In resource-limited FL scenarios, where agents often rely on unstable networks, the transmission of large model weights can substantially degrade model accuracy and increase communication latency between the FL server and agents. To address this challenge, we propose a novel strategy that integrates a knowledge distillation technique with a Particle Swarm Optimization (PSO)-based FL method. This approach transmits model scores instead of weights, significantly reducing communication overhead and enhancing model accuracy in unstable environments. Our method, with potential applications in smart city services and industrial IoT, marks a significant step forward in reducing network communication costs and mitigating accuracy loss, thereby optimizing the communication efficiency between the FL server and its agents.
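The abstract's core communication-saving idea — agents upload a scalar model score each round, and the server requests full weights only from the best-scoring agent — can be illustrated with a minimal sketch. All class and function names below (`Agent`, `federated_round`, the toy quadratic "loss") are hypothetical placeholders, not the thesis's actual implementation:

```python
import random

class Agent:
    """Hypothetical FL agent holding a small local weight vector."""

    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.weights = [random.uniform(-1, 1) for _ in range(4)]

    def local_train(self):
        # Placeholder for local training; returns a scalar score
        # (here a toy quadratic loss -- lower is better).
        self.weights = [w + random.uniform(-0.1, 0.1) for w in self.weights]
        return sum(w * w for w in self.weights)

def federated_round(agents):
    # Phase 1: each agent sends only its scalar score (a few bytes),
    # not its full weight tensor, to the server.
    scores = {a.agent_id: a.local_train() for a in agents}
    # Phase 2 (PSO-style global-best selection): the server requests
    # full weights from the single best-scoring agent only.
    best_id = min(scores, key=scores.get)
    global_weights = next(a for a in agents if a.agent_id == best_id).weights
    return best_id, global_weights

random.seed(0)
agents = [Agent(i) for i in range(5)]
best_id, global_weights = federated_round(agents)
```

Under this sketch, per-round upload cost scales with the number of agents times one scalar, plus a single weight transfer, rather than one weight transfer per agent — the source of the communication savings the abstract describes.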

Identifier: oai:union.ndltd.org:siu.edu/oai:opensiuc.lib.siu.edu:theses-4266
Date: 01 May 2024
Creators: Zaman, Saika
Publisher: OpenSIUC
Source Sets: Southern Illinois University Carbondale
Detected Language: English
Type: text
Format: application/pdf
Source: Theses