51
Heterogeneous IoT Network Architecture Design for Age of Information Minimization. Xia, Xiaohao, 01 February 2023.
Timely data collection and execution in heterogeneous Internet of Things (IoT) networks, in which different protocols and spectrum bands such as WiFi, RFID, Zigbee, and LoRa coexist, require further investigation. This thesis studies the problem of age-of-information (AoI) minimization in heterogeneous IoT networks consisting of heterogeneous IoT devices, an intermediate layer of multi-protocol mobile gateways (M-MGs) that collect and relay data from IoT objects and perform computing tasks, and heterogeneous access points (APs). A federated matching framework is presented to model the collaboration between different service providers (SPs) to deploy and share M-MGs and to minimize the average weighted sum of the AoI and energy consumption. Further, we develop a two-level multi-protocol multi-agent actor-critic (MP-MAAC) algorithm to solve the optimization problem, in which M-MGs and SPs learn collaborative strategies through their own observations. The M-MGs' strategies include selecting IoT objects for data collection, execution, relaying, and/or offloading to SPs' access points, while SPs decide on spectrum allocation. Finally, to improve the convergence of the learning process, we incorporate federated learning into the multi-agent collaborative framework. The numerical results show that our Fed-Match algorithm reduces the AoI by a factor of four, collects twice as many packets as existing approaches, and reduces the penalty by a factor of five when relaying is enabled, and the analysis establishes design principles for the stability of the training process.
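The following is a minimal illustrative sketch (not the thesis's MP-MAAC implementation) of the federated-learning step the abstract refers to: agents such as the M-MGs periodically average the parameters of their actor networks to stabilize multi-agent training. The network sizes, the number of agents, and the PyTorch model are assumptions made here purely for illustration.

```python
# Minimal sketch: periodic federated averaging of per-agent actor networks,
# the mechanism the abstract describes for stabilising multi-agent training.
import copy
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, obs_dim=16, n_actions=8):   # illustrative sizes
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
    def forward(self, obs):
        return torch.softmax(self.net(obs), dim=-1)

def federated_average(models):
    """Average the parameters of several agents' actors (a FedAvg step)."""
    avg_state = copy.deepcopy(models[0].state_dict())
    for key in avg_state:
        avg_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in models]).mean(dim=0)
    for m in models:
        m.load_state_dict(avg_state)

agents = [Actor() for _ in range(4)]   # e.g. four M-MGs
# ... each agent performs local actor-critic updates on its own observations ...
federated_average(agents)              # periodic aggregation round
```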
52
Unlearn with Your Contribution: A Machine Unlearning Framework in Federated Learning / Avlär dig med ditt bidrag: Ett ramverk för maskinavlärning inom federerad inlärning. Wang, Yixiong, January 2023.
Recent years have witnessed remarkable advancements in machine learning, but with these advances come concerns about data privacy. Machine learning inherently involves learning functions from data, and this process can potentially leak information through various attacks on the learned model. Additionally, the presence of malicious actors who may poison input data to manipulate the model has become a growing concern. Consequently, the ability to unlearn specific data samples on demand has become critically important. Federated Learning (FL) has emerged as a powerful approach to address these challenges. In FL, multiple participants or clients collaborate to train a single global machine learning model without sharing their training data. However, the issue of machine unlearning is particularly pertinent in FL, especially in scenarios where clients are not fully trustworthy. This thesis investigates the efficacy of solving machine unlearning problems within the FL framework. The central research question is: how can we effectively unlearn the entire dataset of one or multiple clients once FL training is completed, while maintaining privacy and without access to the data? To address this challenge, we introduce the concept of "contribution," which quantifies how much each client contributes to the training of the global FL model. In our implementation, we employ an encoder-decoder model on the server's side to disentangle these contributions as the FL process progresses. Notably, no existing work utilizes a similar concept or similar models. Our findings, supported by extensive experiments on the MNIST and FashionMNIST datasets, demonstrate that the proposed approach successfully solves the unlearning task in FL. Remarkably, it achieves results comparable to retraining from scratch without requiring the participation of the specific client whose data needs to be unlearned. Moreover, additional ablation studies indicate the sensitivity of the proposed model to specific structural hyperparameters.
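As a rough, hedged illustration of contribution-based unlearning, the sketch below uses a much simpler mechanism than the thesis's encoder-decoder model: the server records each client's accumulated contribution during aggregation and subtracts it on an unlearning request. All names and the model dimension are hypothetical, and a calibration or fine-tuning phase would normally follow.

```python
# Simplified sketch of contribution-based unlearning (NOT the thesis's
# encoder-decoder approach): track each client's accumulated contribution to
# the global model and subtract it when that client must be unlearned.
import numpy as np

d = 1000                                   # model dimension (illustrative)
global_model = np.zeros(d)
contrib = {}                               # per-client accumulated contribution

def aggregate(updates, weights):
    """One FedAvg round; record each client's weighted contribution."""
    global global_model
    for cid, upd in updates.items():
        delta = weights[cid] * upd
        contrib[cid] = contrib.get(cid, np.zeros(d)) + delta
        global_model += delta

def unlearn(cid):
    """Remove client `cid`'s accumulated contribution from the global model."""
    global global_model
    global_model -= contrib.pop(cid, np.zeros(d))
```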
53
Learning in Stochastic Stackelberg Games. Pranoy Das (18369306), 19 April 2024.
<p dir="ltr">The original definition of Nash Equilibrium applied to normal form games, but the notion has now been extended to various other forms of games including leader-follower games (Stackelberg games), extensive form games, stochastic games, games of incomplete information, cooperative games, and so on. We focus on general-sum stochastic Stackelberg games in this work. An example where such games would be natural to consider is in security games where a defender wishes to protect some targets through deployment of limited resources and an attacker wishes to strategically attack the targets to benefit themselves. The hierarchical order of play arises naturally since the defender typically acts first and deploys a strategy, while the attacker observes the strategy ofthe defender before attacking. Another example where this framework fits is in testing during epidemics, where the leader (the government) sets testing policies and the follower (the citizens) decide at every time step whether to get tested. The government wishes to minimize the number of infected people in the population while the follower wishes to minimize the cost of getting sick and testing. This thesis presents a learning algorithm for players to converge to their stationary policies in a general sum stochastic sequential Stackelberg game. The algorithm is a two time scale implicit policy gradient algorithm that provably converges to stationary points of the optimization problems of the two players. Our analysis allows us to move beyond the assumptions of zero-sum or static Stackelberg games made in the existing literature for learning algorithms to converge.</p><p dir="ltr"><br></p>
54
Analyzing Image Classification in Decentralized Environments via Advanced Federated Learning. Nordin, Julian, January 2024.
This study aims to explore the effectiveness of federated learning (FL) for image classification in decentralized computing environments. With the increasing amount of data generated by mobile and edge computing, particularly image data, there is a need for image classification methods that not only address the challenges posed by traditional centralized deep learning models but also respect privacy, reduce communication costs, and overcome scalability barriers. Federated learning is a promising solution that offers a framework for model training across decentralized nodes with a focus on data privacy. This study analyzes FL's capability to enhance image classification using its distinct methodologies, compares its performance with conventional models, and examines its wider implications and limitations in practical, real-world settings. The results of the study indicate that, with appropriate noise management, FL models can achieve accuracy comparable to traditional approaches while significantly enhancing data privacy, which demonstrates a potential balance between performance and privacy protection in decentralized environments.
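The abstract does not specify how noise is managed; one common assumption in privacy-preserving FL is differential-privacy-style clipping and Gaussian noise applied to client updates before aggregation, sketched below. The clip norm and noise scale are arbitrary illustrative values, not the study's configuration.

```python
# Hedged sketch: clip each client's update and add Gaussian noise, then
# aggregate with plain (unweighted) FedAvg.
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.1,
                     rng=np.random.default_rng()):
    """Clip a client's model update and add Gaussian noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std * clip_norm, size=update.shape)

def aggregate(updates):
    """Unweighted average of privatized client updates."""
    return np.mean([privatize_update(u) for u in updates], axis=0)
```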
55
Personalized Federated Learning for mmWave Beam Prediction Using Non-IID Sub-6 GHz Channels / Personaliserad Federerad Inlärning för mmWave Beam Prediction Användning Icke-IID Sub-6 GHz-kanaler. Cheng, Yuan, January 2022.
While it is difficult for base stations to quickly estimate millimeter wave (mmWave) channels and find the optimal mmWave beam for user equipments (UEs), the sub-6 GHz channels, which are usually easier to obtain and more robust to blockages, can be used to reduce the time before initial access and enhance the reliability of mmWave communication. Considering that the channel information is collected by a massive number of radio base stations and is sensitive with respect to privacy and security, Federated Learning (FL) is a good match for this use case. In practice, the channel vectors are usually not independent and identically distributed (non-IID), because the wireless communication environments vary greatly between different radio base stations and their UEs. To achieve satisfactory performance for all radio base stations instead of only the majority of them, a useful solution is to design personalized methods for each radio base station. In this thesis, we implement two personalized FL methods, namely 1) fine-tuning the FL model on each client's private dataset and 2) adaptive expert models for FL, to predict the optimal mmWave beamforming vector directly from the non-IID sub-6 GHz channel vectors generated with DeepMIMO. According to our experimental results, fine-tuning the FL model on each client's private dataset achieves higher average mmWave downlink spectral efficiency than the global FL model. Moreover, in terms of average Top-1 and Top-3 classification accuracy, its improvement over the global FL model even exceeds the improvement of the global FL model over purely local models.
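A minimal sketch of the first personalization method named above, fine-tuning the FL model on each client's private dataset: after federated training, each client continues training a copy of the global model on its own non-IID data. The PyTorch training loop, optimizer, and hyperparameters are illustrative assumptions, not the thesis's configuration.

```python
# Hedged sketch: local fine-tuning of the global FL model on a client's
# private sub-6 GHz data, treating beam prediction as classification.
import copy
import torch
import torch.nn as nn

def personalize(global_model: nn.Module, local_loader, epochs=5, lr=1e-4):
    """Return a client-specific model fine-tuned from the global FL model."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()          # optimal beam index as the label
    model.train()
    for _ in range(epochs):
        for sub6_channels, beam_idx in local_loader:
            opt.zero_grad()
            loss = loss_fn(model(sub6_channels), beam_idx)
            loss.backward()
            opt.step()
    return model
```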
56
Federated Learning with FEDn for Financial Market Surveillance. Voltaire Edoh, Isak, January 2022.
Machine Learning (ML) is the current trend that most industries adopt to improve their business and operations. ML has also been adopted in the financial markets, where well-funded financial institutions employ the latest ML algorithms to gain an advantage on the market. The darker side of ML is the potential emergence of complex algorithmic trading schemes that are abusive and manipulative. Because of this, it is inevitable that ML will be applied to financial market surveillance in order to detect these abusive and manipulative trading strategies. Ideally, an accurate ML detection model would be developed with data from many financial institutions or trading venues. However, such ML models require vast quantities of data, which poses a problem in market surveillance, where data is sensitive or limited. Data sharing between companies or countries is typically accompanied by legal and privacy concerns. By training ML models on distributed datasets, Federated Learning (FL) overcomes these issues by eliminating the need to centralise sensitive data. This thesis aimed to address these ML-related issues in market surveillance by implementing and evaluating an FL model. FL enables a group of independent data-holding clients with the same intention to build a shared ML model collaboratively without compromising private data. In this work, an ML model is initially deployed in a centralised data setting and trained to detect the manipulative trading scheme known as spoofing. The LSTM-Autoencoder was the model chosen for this task. The same model is also implemented in a federated setting with decentralised data, using the FL framework FEDn. Another FL framework, Flower, is also employed to evaluate the performance of FEDn. Experiments were conducted comparing the FL models to the conventional centralised learning model, as well as comparing the two frameworks to each other. The results showed that under certain circumstances, the FL models performed better than the centralised model in detecting spoofing. FEDn was equivalent to Flower in terms of detection performance. In addition, the results indicated that Flower was marginally faster than FEDn. It is assumed that variations in the experimental setup and stochasticity account for the performance disparity.
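A hedged sketch of an LSTM-Autoencoder of the kind described above for spoofing detection: the model reconstructs windows of trading features, and windows with high reconstruction error are flagged as anomalous. The feature dimension, layer sizes, and scoring are assumptions, not the thesis's exact setup.

```python
# Hedged sketch: sequence reconstruction with an LSTM autoencoder; high
# reconstruction error on a window of order/trade features marks an anomaly.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features=8, hidden=32):   # illustrative sizes
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                           # x: (batch, seq_len, n_features)
        _, (h, _) = self.encoder(x)
        # repeat the final hidden state as the decoder input at every step
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(z)
        return self.out(dec)

model = LSTMAutoencoder()
x = torch.randn(16, 50, 8)                          # a batch of feature windows
recon_error = ((model(x) - x) ** 2).mean(dim=(1, 2))  # per-window anomaly score
```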
57
Models and Representation Learning Mechanisms for Graph Data. Susheel Suresh (14228138), 15 December 2022.
Graph representation learning (GRL) has been increasingly used to model and understand data from a wide variety of complex systems spanning social, technological, bio-chemical and physical domains. GRL consists of two main components: (1) a parametrized encoder that provides representations of graph data and (2) a learning process to train the encoder parameters. Designing flexible encoders that capture the underlying invariances and characteristics of graph data is crucial to the success of GRL. On the other hand, the learning process drives the quality of the encoder representations, and developing principled learning mechanisms is vital for a number of growing applications in self-supervised, transfer and federated learning settings. To this end, we propose a suite of models and learning algorithms for GRL which form the two main thrusts of this dissertation.

In Thrust I, we propose two novel encoders which build upon a widely popular GRL encoder class called graph neural networks (GNNs). First, we empirically study the prediction performance of current GNN-based encoders when applied to graphs with heterogeneous node mixing patterns, using our proposed notion of local assortativity. We find that GNN performance in node prediction tasks strongly correlates with our local assortativity metric, thereby exposing a limitation. We propose to transform the input graph into a computation graph with proximity and structural information as distinct types of edges. We then propose a novel GNN-based encoder that operates on this computation graph and adaptively chooses between structure and proximity information. Empirically, adopting our transformation and encoder framework leads to improved node classification performance compared to baselines in real-world graphs that exhibit diverse mixing.

Secondly, we study the trade-off between expressivity and efficiency of GNNs when applied to temporal graphs for the task of link ranking. We develop an encoder that incorporates a labeling approach designed to allow for efficient inference over the candidate set jointly, while provably boosting expressivity. We also propose to optimize a list-wise loss for improved ranking. With extensive evaluation on real-world temporal graphs, we demonstrate its improved performance and efficiency compared to baselines.

In Thrust II, we propose two principled encoder learning mechanisms for challenging and realistic graph data settings. First, we consider a scenario where only limited or even no labelled data is available for GRL. Recent research has converged on graph contrastive learning (GCL), where GNNs are trained to maximize the correspondence between representations of the same graph in its different augmented forms. However, we find that GNNs trained by traditional GCL often risk capturing redundant graph features and thus may be brittle and provide sub-par performance in downstream tasks. We then propose a novel principle, termed adversarial GCL (AD-GCL), which enables GNNs to avoid capturing redundant information during training by optimizing the adversarial graph augmentation strategies used in GCL. We pair AD-GCL with theoretical explanations and design a practical instantiation based on trainable edge-dropping graph augmentation. We experimentally validate AD-GCL by comparing with state-of-the-art GCL methods and achieve performance gains in semi-supervised, unsupervised and transfer learning settings using benchmark chemical and biological molecule datasets.

Secondly, we consider a scenario where graph data is siloed across clients for GRL. We focus on two unique challenges encountered when applying distributed training to GRL: (i) client task heterogeneity and (ii) label scarcity. We propose a novel learning framework called federated self-supervised graph learning (FedSGL), which first utilizes a self-supervised objective to train GNNs in a federated fashion across clients; then, each client fine-tunes the obtained GNNs based on its local task and available labels. Our framework enables the federated GNN model to extract patterns from the common feature (attribute and graph topology) space without the need for labels or being biased by heterogeneous local tasks. An extensive empirical study of FedSGL on both node and graph classification tasks yields fruitful insights into how the level of feature/task heterogeneity, the adopted federated algorithm and the level of label scarcity affect the clients' performance in their tasks.
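A hedged sketch of the contrastive objective underlying GCL and AD-GCL: embeddings of two augmented views of the same graphs are pulled together with an InfoNCE/NT-Xent loss. AD-GCL's adversarially trained edge-dropping augmenter is omitted here; the encoder and augmentation functions in the usage comment are placeholders.

```python
# Hedged sketch of a graph contrastive (NT-Xent / InfoNCE) loss over graph-level
# embeddings of two augmented views; positives lie on the diagonal.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (num_graphs, dim) embeddings of two views of the same graphs."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature       # pairwise view similarities
    labels = torch.arange(z1.size(0))        # matching views are positives
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

# usage sketch: z1 = gnn(augment(batch)); z2 = gnn(augment(batch))
#               loss = nt_xent(z1, z2)
```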
58
NETWORK-AWARE FEDERATED LEARNING ACROSS HIGHLY HETEROGENEOUS EDGE/FOG NETWORKS. Su Wang (17592381), 09 December 2023.
<p dir="ltr">The parallel growth of contemporary machine learning (ML) technologies alongside edge/-fog networking has necessitated the development of novel paradigms to effectively manage their intersection. Specifically, the proliferation of edge devices equipped with data generation and ML model training capabilities has given rise to an alternative paradigm called federated learning (FL), moving away from traditional centralized ML common in cloud-based networks. FL involves training ML models directly on edge devices where data are generated.</p><p dir="ltr">A fundamental challenge of FL lies in the extensive heterogeneity inherent to edge/fog networks, which manifests in various forms such as (i) statistical heterogeneity: edge devices have distinct underlying data distributions, (ii) structural heterogeneity: edge devices have diverse physical hardware, (iii) data quality heterogeneity: edge devices have varying ratios of labeled and unlabeled data, and (iv) adversarial compromise: some edge devices may be compromised by adversarial attacks. This dissertation endeavors to capture and model these intricate relationships at the intersection of FL and highly heterogeneous edge/fog networks. To do so, this dissertation will initially develop closed-form expressions for the trade-offs between ML performance and resource cost considerations within edge/fog networks. Subsequently, it optimizes the fundamental processes of FL, encompassing aspects such as batch size control for stochastic gradient descent (SGD) and sampling for global aggregations. This optimization is jointly formulated with networking considerations, which include communication resource consumption and device-to-device (D2D) cooperation.</p><p dir="ltr">In the former half of the dissertation, the emphasis is first on optimizing device sampling for global aggregations in FL, and then on developing a self-sufficient hierarchical meta-learning approach for FL. These methodologies maximize expected ML model performance while addressing common challenges associated with statistical and system heterogeneity. Novel techniques, such as management of D2D data offloading, adaptive CPU clock cycle control, integration of meta-learning, and much more, enable these methodologies. In particular, the proposed hierarchical meta-learning approach enables rapid integration of new devices in large-scale edge/fog networks.</p><p dir="ltr">The latter half of the dissertation directs its ocus towards emerging forms of heterogeneity in FL scenarios, namely (i) heterogeneity in quantity and quality of local labeled and unlabeled data at edge devices and (ii) heterogeneity in terms of adversarially comprised edge devices. To deal with heterogeneous labeled/unlabeled data across edge networks, this dissertation proposes a novel methodology that enables multi-source to multi-target federated domain adaptation. This proposed methodology views edge devices as sources – devices with mostly labeled data that perform ML model training, or targets - devices with mostly unlabeled data that rely on sources’ ML models, and subsequently optimizes the network relationships. In the final chapter, a novel methodology to improve FL robustness is developed in part by viewing adversarial attacks on FL as a form of heterogeneity.</p>
59
Confidential Federated Learning with Homomorphic Encryption / Konfidentiellt federat lärande med homomorf kryptering. Wang, Zekun, January 2023.
Federated Learning (FL), a variant of Machine Learning (ML) technology, has emerged as a prevalent method for multiple parties to collaboratively train ML models in a distributed manner with the help of a central server, normally supplied by a Cloud Service Provider (CSP). Nevertheless, many existing vulnerabilities threaten the advantages of FL and pose potential risks to data security and privacy, such as data leakage, misuse of the central server, or eavesdroppers illicitly seeking sensitive information. Promising advanced cryptographic technologies such as Homomorphic Encryption (HE) and Confidential Computing (CC) can be utilized to enhance the security and privacy of FL. However, the development of a framework that seamlessly combines these technologies to provide confidential FL while retaining efficiency remains an ongoing challenge. In this degree project, we develop a lightweight and user-friendly FL framework called Heflp, which integrates HE and CC to ensure data confidentiality and integrity throughout the entire FL lifecycle. Heflp supports four HE schemes to fit diverse user requirements, comprising three pre-existing schemes and one optimized scheme that we design, named Flashev2, which achieves the highest time and space efficiency across most scenarios. The time and memory overheads of all four HE schemes are also evaluated, and a comparison of their pros and cons is summarized. To validate its effectiveness, Heflp is tested on the MNIST dataset and the Threat Intelligence dataset provided by CanaryBit, and the results demonstrate that it successfully preserves data privacy without compromising model accuracy.
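As a hedged illustration of how additively homomorphic encryption enables confidential aggregation in FL, the sketch below uses the python-paillier (phe) package rather than Heflp's own schemes (Flashev2 is not shown and this is not the thesis's implementation): clients encrypt their updates, the server adds ciphertexts without seeing any plaintext, and only the key holder decrypts the aggregate.

```python
# Hedged sketch: Paillier-based additive aggregation of client updates.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# toy "model updates" from three clients (one weight each, for brevity)
client_updates = [0.12, -0.05, 0.31]
encrypted = [public_key.encrypt(u) for u in client_updates]

# server-side aggregation: addition works directly on ciphertexts
encrypted_sum = encrypted[0]
for c in encrypted[1:]:
    encrypted_sum = encrypted_sum + c

average = private_key.decrypt(encrypted_sum) / len(client_updates)
print(average)   # approximately (0.12 - 0.05 + 0.31) / 3
```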
60
Implementation of Federated Learning on Raspberry Pi Boards: Implementation of Compressed FedAvg to Reduce Communication Cost on Raspberry Pi Boards. Purba, Rini Apriyanti, January 2021.
With the development of intelligent services and applications enabled by Artificial Intelligence (AI), the Internet of Things (IoT) is infiltrating many aspects of our everyday lives. Phones and tablets are increasingly used as primary computing devices, since their powerful sensors give them access to an unprecedented amount of data. However, collecting this data in a centralized location carries risks and responsibilities due to privacy issues. To overcome this challenge, Federated Learning (FL) allows users to collectively reap the benefits of shared models trained on this big data without the need to store it centrally. In this research, we present and evaluate an implementation of federated learning on Raspberry Pi boards using the FedAvg method. In addition, compression methods such as quantization and sparsification were applied to the baseline implementation to improve communication efficiency. We evaluated the implementation by comparing the baseline and the compressed Federated Averaging (FedAvg) on Raspberry Pi boards, aiming for the smallest communication cost and highest accuracy to fit the IoT environment.
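A hedged sketch of the kind of compression the abstract mentions: top-k sparsification followed by simple 8-bit uniform quantization of a client's update before transmission. The choice of k and the bit width are illustrative, not the thesis's configuration.

```python
# Hedged sketch: sparsify (top-k) and quantize a client's weight update to
# shrink the payload sent to the FL server; the server reconstructs it.
import numpy as np

def compress(update, k=100, bits=8):
    """Keep the k largest-magnitude entries and quantize them to `bits` bits."""
    idx = np.argsort(np.abs(update))[-k:]            # indices of top-k entries
    vals = update[idx]
    scale = np.max(np.abs(vals)) / (2 ** (bits - 1) - 1) or 1.0
    q = np.round(vals / scale).astype(np.int8)
    return idx, q, scale                             # this is what gets transmitted

def decompress(idx, q, scale, size):
    """Server-side reconstruction of the sparse, dequantized update."""
    update = np.zeros(size)
    update[idx] = q.astype(np.float32) * scale
    return update

update = np.random.randn(10_000)
payload = compress(update)
restored = decompress(*payload, size=update.size)
```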