161

Federated Learning for Time Series Forecasting Using LSTM Networks: Exploiting Similarities Through Clustering / Federerad inlärning för tidserieprognos genom LSTM-nätverk: utnyttjande av likheter genom klustring

Díaz González, Fernando January 2019
Federated learning poses a statistical challenge when training on highly heterogeneous sequence data. For example, time-series telecom data collected over long intervals regularly shows mixed fluctuations and patterns. These distinct distributions are an inconvenience when a node not only plans to contribute to the creation of the global model but also plans to apply it to its local dataset. In this scenario, adopting a one-size-fits-all approach might be inadequate, even when using state-of-the-art machine learning techniques for time series forecasting, such as Long Short-Term Memory (LSTM) networks, which have proven able to capture many idiosyncrasies and to generalise to new patterns. In this work, we show that clustering the clients by these patterns and selectively aggregating their updates into separate global models can improve local performance with minimal overhead, as we demonstrate through experiments using real-world time series datasets and a basic LSTM model.
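The core idea, grouping clients with similar series and training one global model per group, can be sketched briefly. The following Python sketch is illustrative only and does not reproduce the thesis's exact method: the clients' `train` method, their `num_samples` attribute, and the choice of summary features are all assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_clients(feature_vectors, num_clusters):
    """Group clients by summary features of their local time series
    (e.g., trend/seasonality statistics), without sharing raw data."""
    Z = linkage(np.asarray(feature_vectors), method="ward")
    return fcluster(Z, t=num_clusters, criterion="maxclust")

def federated_round(clients, global_weights):
    """One FedAvg round within a single cluster: average locally trained
    weights, weighted by each client's dataset size."""
    updates, sizes = [], []
    for c in clients:
        updates.append(c.train(global_weights))  # local SGD from the cluster model
        sizes.append(c.num_samples)
    sizes = np.asarray(sizes, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes / sizes.sum())
```

Running `federated_round` separately for each cluster yields one specialized model per group of similar clients rather than a single one-size-fits-all model.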
162

Federated Learning for Time Series Forecasting Using Hybrid Model

Li, Yuntao January 2019
Time series data has become ubiquitous thanks to affordable edge devices and sensors, and much of it is valuable for decision making. For forecasting tasks, the conventional centralized approach has shown deficiencies regarding large-scale data communication and data privacy. Furthermore, neural network models often cannot make use of the extra information in the time series, so they usually fail to provide time-series-specific results. Both issues pose a challenge to large-scale time series forecasting with neural network models. These limitations lead to our research question: can we realize decentralized time series forecasting with a federated learning mechanism that is comparable to the conventional centralized setup in forecasting performance? In this work, we propose a Federated Series Forecasting framework that resolves the challenge by allowing users to keep their data locally while learning a shared model by aggregating locally computed updates. In addition, we design a hybrid model that enables neural network models to utilize the extra information in the time series and achieve time-series-specific learning. In particular, the proposed hybrid model outperforms state-of-the-art centralized baseline models on the NN5 and Ericsson KPI data, while the federated setting of the proposed model yields results comparable to the centralized setting on both datasets. Together, these results answer the research question of this thesis.
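As a rough illustration of the hybrid idea, pairing a simple time-series component with a learned correction, consider the one-step-ahead sketch below. This is one plausible reading of "letting the neural model use the extra information from the series", not the thesis's actual architecture; `residual_model` stands in for any regressor with a scikit-learn-style fit/predict interface.

```python
import numpy as np

def hybrid_one_step(series, period, residual_model):
    """Forecast the next point as a seasonal-naive base plus a learned
    residual: the statistical part captures the seasonal structure, and
    the model only has to learn what that structure misses."""
    series = np.asarray(series, dtype=float)
    X = np.asarray([series[t - period:t] for t in range(period, len(series))])
    y = np.asarray([series[t] - series[t - period] for t in range(period, len(series))])
    residual_model.fit(X, y)                       # learn the seasonal residual
    base = series[-period]                         # value one season ago
    correction = residual_model.predict(series[-period:][None, :])[0]
    return base + correction
```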
163

Models and Representation Learning Mechanisms for Graph Data

Susheel Suresh (14228138) 15 December 2022
Graph representation learning (GRL) has been increasingly used to model and understand data from a wide variety of complex systems spanning social, technological, bio-chemical and physical domains. GRL consists of two main components: (1) a parametrized encoder that provides representations of graph data, and (2) a learning process to train the encoder parameters. Designing flexible encoders that capture the underlying invariances and characteristics of graph data is crucial to the success of GRL. At the same time, the learning process drives the quality of the encoder representations, and developing principled learning mechanisms is vital for a number of growing applications in self-supervised, transfer and federated learning settings. To this end, we propose a suite of models and learning algorithms for GRL, which form the two main thrusts of this dissertation.

In Thrust I, we propose two novel encoders which build upon a widely popular GRL encoder class called graph neural networks (GNNs). First, we empirically study the prediction performance of current GNN-based encoders when applied to graphs with heterogeneous node mixing patterns, using our proposed notion of local assortativity. We find that GNN performance in node prediction tasks strongly correlates with our local assortativity metric, revealing a limitation of current encoders. We propose to transform the input graph into a computation graph with proximity and structural information as distinct types of edges, and then propose a novel GNN-based encoder that operates on this computation graph and adaptively chooses between structure and proximity information. Empirically, adopting our transformation and encoder framework leads to improved node classification performance compared to baselines on real-world graphs that exhibit diverse mixing.

Second, we study the trade-off between expressivity and efficiency of GNNs when applied to temporal graphs for the task of link ranking. We develop an encoder that incorporates a labeling approach designed to allow efficient joint inference over the candidate set while provably boosting expressivity, and we optimize a list-wise loss for improved ranking. Through extensive evaluation on real-world temporal graphs, we demonstrate improved performance and efficiency compared to baselines.

In Thrust II, we propose two principled encoder learning mechanisms for challenging and realistic graph data settings. First, we consider a scenario where only limited or even no labelled data is available for GRL. Recent research has converged on graph contrastive learning (GCL), where GNNs are trained to maximize the correspondence between representations of the same graph in its different augmented forms. However, we find that GNNs trained by traditional GCL often risk capturing redundant graph features, and thus may be brittle and provide sub-par performance in downstream tasks. We propose a novel principle, termed adversarial-GCL (AD-GCL), which enables GNNs to avoid capturing redundant information during training by optimizing the adversarial graph augmentation strategies used in GCL. We pair AD-GCL with theoretical explanations and design a practical instantiation based on trainable edge-dropping graph augmentation. We experimentally validate AD-GCL against state-of-the-art GCL methods and achieve performance gains in semi-supervised, unsupervised and transfer learning settings on benchmark chemical and biological molecule datasets.
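A minimal sketch of the adversarial training loop behind AD-GCL follows. It assumes differentiable (e.g., Gumbel-relaxed) edge-drop augmentations so that gradients reach the augmenter, and hypothetical `encoder`, `augmenter` and `info_nce` callables; it is a schematic of the min-max principle, not the dissertation's implementation.

```python
import torch

def ad_gcl_step(encoder, augmenter, graphs, enc_opt, aug_opt, info_nce):
    """One AD-GCL update: the encoder minimizes a contrastive (InfoNCE) loss
    between each graph and its augmented view, while the trainable
    edge-dropping augmenter maximizes it, squeezing out redundant features."""
    # Encoder step: pull representations of the two views together
    z1 = torch.stack([encoder(g) for g in graphs])
    z2 = torch.stack([encoder(augmenter(g)) for g in graphs])
    loss_enc = info_nce(z1, z2)
    enc_opt.zero_grad()
    loss_enc.backward()
    enc_opt.step()

    # Adversary step: the augmenter maximizes the same loss; aug_opt holds
    # only augmentation parameters, so the encoder stays untouched here
    z1 = torch.stack([encoder(g).detach() for g in graphs])
    z2 = torch.stack([encoder(augmenter(g)) for g in graphs])
    aug_opt.zero_grad()
    (-info_nce(z1, z2)).backward()
    aug_opt.step()
    return float(loss_enc)
```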
Second, we consider a scenario where graph data is siloed across clients. We focus on two unique challenges encountered when applying distributed training to GRL: (i) client task heterogeneity and (ii) label scarcity. We propose a novel learning framework called federated self-supervised graph learning (FedSGL), which first uses a self-supervised objective to train GNNs in a federated fashion across clients; each client then fine-tunes the obtained GNN based on its local task and available labels. Our framework enables the federated GNN model to extract patterns from the common feature (attribute and graph topology) space without needing labels or being biased by heterogeneous local tasks. An extensive empirical study of FedSGL on both node and graph classification tasks yields fruitful insights into how the level of feature/task heterogeneity, the adopted federated algorithm and the level of label scarcity affect the clients' performance on their tasks.
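The two-stage recipe, federated self-supervised pretraining followed by per-client fine-tuning, might look roughly as follows in PyTorch-style code. The client interface (`pretrain_step`, `fine_tune`, `num_samples`) is assumed for illustration and is not taken from the dissertation.

```python
import copy

def fed_sgl(clients, global_gnn, rounds):
    """FedSGL-style sketch: label-free federated pretraining of a shared GNN
    encoder, then per-client task-specific fine-tuning."""
    for _ in range(rounds):
        states, sizes = [], []
        for c in clients:
            local = copy.deepcopy(global_gnn)
            states.append(c.pretrain_step(local))   # self-supervised local update
            sizes.append(c.num_samples)
        total = float(sum(sizes))
        # FedAvg over the encoder parameters, weighted by local dataset size
        avg = {k: sum(s[k] * (n / total) for s, n in zip(states, sizes))
               for k in states[0]}
        global_gnn.load_state_dict(avg)
    # Personalization: every client adapts the shared encoder to its own task
    return [c.fine_tune(copy.deepcopy(global_gnn)) for c in clients]
```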
164

Privacy leaks from deep linear networks : Information leak via shared gradients in federated learning systems / Sekretessläckor från djupa linjära nätverk : Informationsläckor via delning av gradienter i distribuerade lärande system

Shi, Guangze January 2022
The field of Artificial Intelligence (AI) has always faced two major challenges. The first is that data remains scattered and cannot be collected for more efficient use. The second is that data privacy and security need to be continuously strengthened. On these two points, federated learning has been proposed as an emerging machine learning scheme. The idea of federated learning is to train neural networks collaboratively via a server: each user receives the current weights of the network and then sends back parameter updates (gradients) computed on their own data. Because the input data remains on-device and only the parameter gradients are shared, this scheme is considered effective at preserving data privacy. Some previous attacks have also given a false sense of security, since they succeed only in contrived settings, even for a single image. Our research focuses mainly on attacks on shared gradients, showing experimentally that private training data can be obtained from publicly shared gradients. We run experiments on both linear and convolutional deep networks, and the results show that our attack can create a threat to data privacy that is independent of the specific structure of the neural network. The method presented in this thesis only illustrates that it is feasible to recover user data from shared gradients; it cannot be used as an attack to harvest private data at scale. The goal is to spark further research on federated learning, especially gradient security. We also briefly discuss possible countermeasures against our attack methods. Different methods have their own advantages and disadvantages in terms of privacy protection, so data pre-processing and network structure adjustment may need further research so that model training can achieve better privacy protection while maintaining high precision.
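The attack family studied here can be summarized by the sketch below, in the spirit of "deep leakage from gradients": optimize a dummy input/label pair until its gradients match the ones a client shared. It is a schematic illustration rather than the thesis's exact procedure; `model` and `loss_fn` are assumed, and `loss_fn` must accept soft labels.

```python
import torch

def invert_gradients(model, loss_fn, true_grads, x_shape, y_shape, steps=300):
    """Reconstruct a private training sample from shared gradients by
    matching the gradients of a learnable dummy sample to the shared ones."""
    x = torch.randn(x_shape, requires_grad=True)   # dummy input
    y = torch.randn(y_shape, requires_grad=True)   # dummy (soft) label logits
    opt = torch.optim.LBFGS([x, y])

    def closure():
        opt.zero_grad()
        dummy_grads = torch.autograd.grad(
            loss_fn(model(x), y.softmax(-1)), model.parameters(),
            create_graph=True)                     # keep graph for 2nd-order step
        dist = sum(((dg - tg) ** 2).sum()
                   for dg, tg in zip(dummy_grads, true_grads))
        dist.backward()
        return dist

    for _ in range(steps):
        opt.step(closure)
    return x.detach(), y.detach()
```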
165

NETWORK-AWARE FEDERATED LEARNING ACROSS HIGHLY HETEROGENEOUS EDGE/FOG NETWORKS

Su Wang (17592381) 09 December 2023
<p dir="ltr">The parallel growth of contemporary machine learning (ML) technologies alongside edge/-fog networking has necessitated the development of novel paradigms to effectively manage their intersection. Specifically, the proliferation of edge devices equipped with data generation and ML model training capabilities has given rise to an alternative paradigm called federated learning (FL), moving away from traditional centralized ML common in cloud-based networks. FL involves training ML models directly on edge devices where data are generated.</p><p dir="ltr">A fundamental challenge of FL lies in the extensive heterogeneity inherent to edge/fog networks, which manifests in various forms such as (i) statistical heterogeneity: edge devices have distinct underlying data distributions, (ii) structural heterogeneity: edge devices have diverse physical hardware, (iii) data quality heterogeneity: edge devices have varying ratios of labeled and unlabeled data, and (iv) adversarial compromise: some edge devices may be compromised by adversarial attacks. This dissertation endeavors to capture and model these intricate relationships at the intersection of FL and highly heterogeneous edge/fog networks. To do so, this dissertation will initially develop closed-form expressions for the trade-offs between ML performance and resource cost considerations within edge/fog networks. Subsequently, it optimizes the fundamental processes of FL, encompassing aspects such as batch size control for stochastic gradient descent (SGD) and sampling for global aggregations. This optimization is jointly formulated with networking considerations, which include communication resource consumption and device-to-device (D2D) cooperation.</p><p dir="ltr">In the former half of the dissertation, the emphasis is first on optimizing device sampling for global aggregations in FL, and then on developing a self-sufficient hierarchical meta-learning approach for FL. These methodologies maximize expected ML model performance while addressing common challenges associated with statistical and system heterogeneity. Novel techniques, such as management of D2D data offloading, adaptive CPU clock cycle control, integration of meta-learning, and much more, enable these methodologies. In particular, the proposed hierarchical meta-learning approach enables rapid integration of new devices in large-scale edge/fog networks.</p><p dir="ltr">The latter half of the dissertation directs its ocus towards emerging forms of heterogeneity in FL scenarios, namely (i) heterogeneity in quantity and quality of local labeled and unlabeled data at edge devices and (ii) heterogeneity in terms of adversarially comprised edge devices. To deal with heterogeneous labeled/unlabeled data across edge networks, this dissertation proposes a novel methodology that enables multi-source to multi-target federated domain adaptation. This proposed methodology views edge devices as sources – devices with mostly labeled data that perform ML model training, or targets - devices with mostly unlabeled data that rely on sources’ ML models, and subsequently optimizes the network relationships. In the final chapter, a novel methodology to improve FL robustness is developed in part by viewing adversarial attacks on FL as a form of heterogeneity.</p>
166

Using Vocabulary Mappings for Federated RDF Query Processing / Att använda vokabulär mappning för federerad RDF frågebehandling

Winneroth, Juliette January 2023
Federated RDF querying systems provide an interface to multiple autonomous RDF data sources, allowing a user to execute a SPARQL query over several data sources at once and receive one unified result. When these autonomous data sources use different vocabularies, the SPARQL query must be rewritten into the vocabulary of each data source in order to get the desired results. This thesis describes how vocabulary mappings can be used to rewrite SPARQL queries for federated RDF query processing. Different types of vocabulary mappings are explored to find a suitable mapping representation, which is then used to formulate a query-rewriting approach. The approach describes how SPARQL subqueries and solution mappings can be rewritten to handle heterogeneous vocabularies. The thesis then presents how the query federation engine HeFQUIN is extended to rewrite federated queries and their results. A final evaluation shows how the query-rewriting approach can improve the federated query engine's execution times.
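The essence of the rewriting step can be shown with a toy equivalence-style mapping: outgoing triple patterns are translated into the source's vocabulary, and the returned solution bindings are translated back into the global one. Real mapping types (and HeFQUIN's implementation) are richer; the local URIs below are made up for illustration.

```python
# Toy vocabulary mapping between a global (FOAF) and a local vocabulary.
GLOBAL_TO_LOCAL = {
    "http://xmlns.com/foaf/0.1/name": "http://example.org/local#label",
    "http://xmlns.com/foaf/0.1/Person": "http://example.org/local#Agent",
}
LOCAL_TO_GLOBAL = {v: k for k, v in GLOBAL_TO_LOCAL.items()}

def rewrite_pattern(triple, mapping):
    """Rewrite each position of a triple pattern; variables (strings starting
    with '?') and unmapped terms pass through unchanged."""
    return tuple(mapping.get(t, t) for t in triple)

# Outgoing: translate the federated query's vocabulary to the source's
subquery = rewrite_pattern(("?p", "http://xmlns.com/foaf/0.1/name", "?n"),
                           GLOBAL_TO_LOCAL)

# Incoming: translate solution bindings back to the global vocabulary
solution = {"?p": "http://example.org/alice",
            "?type": "http://example.org/local#Agent"}
restored = {var: LOCAL_TO_GLOBAL.get(val, val) for var, val in solution.items()}
```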
167

Confidential Federated Learning with Homomorphic Encryption / Konfidentiellt federat lärande med homomorf kryptering

Wang, Zekun January 2023
Federated Learning (FL), a variant of Machine Learning (ML), has emerged as a prevalent method for multiple parties to collaboratively train ML models in a distributed manner with the help of a central server, normally supplied by a Cloud Service Provider (CSP). Nevertheless, many existing vulnerabilities threaten the advantages of FL and pose potential risks to data security and privacy, such as data leakage, misuse of the central server, or eavesdroppers illicitly seeking sensitive information. Promising advanced cryptographic technologies such as Homomorphic Encryption (HE) and Confidential Computing (CC) can be utilized to enhance the security and privacy of FL. However, developing a framework that seamlessly combines these technologies to provide confidential FL while retaining efficiency remains an ongoing challenge. In this degree project, we develop a lightweight and user-friendly FL framework called Heflp, which integrates HE and CC to ensure data confidentiality and integrity throughout the entire FL lifecycle. Heflp supports four HE schemes to fit diverse user requirements: three pre-existing schemes and one optimized scheme of our own design, named Flashev2, which achieves the highest time and spatial efficiency across most scenarios. The time and memory overheads of all four HE schemes are also evaluated, and their pros and cons are compared. To validate its effectiveness, Heflp is tested on the MNIST dataset and the Threat Intelligence dataset provided by CanaryBit, and the results demonstrate that it successfully preserves data privacy without compromising model accuracy.
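To illustrate why additively homomorphic schemes fit FL aggregation, here is a sketch using the open-source `phe` (Paillier) library. This is not one of Heflp's four schemes, just the underlying idea; in a real deployment the private key would be held by the clients (or split among them), never by the aggregating server.

```python
from phe import paillier

# Key generation (in practice done client-side, not by the server)
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each client encrypts its (flattened) model update before upload;
# the values here are made-up toy updates
client_updates = [[0.12, -0.40], [0.08, -0.35], [0.10, -0.42]]
encrypted = [[public_key.encrypt(w) for w in upd] for upd in client_updates]

# The server adds ciphertexts element-wise: Enc(a) + Enc(b) = Enc(a + b),
# so it aggregates without ever seeing a plaintext update
enc_sum = [sum(col[1:], col[0]) for col in zip(*encrypted)]

# Only the key holder can decrypt the averaged update
avg_update = [private_key.decrypt(c) / len(client_updates) for c in enc_sum]
```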
168

[pt] BUSCA POR PALAVRAS-CHAVE SOBRE GRAFOS RDF FEDERADOS EXPLORANDO SEUS ESQUEMAS / [en] KEYWORD SEARCH OVER FEDERATED RDF GRAPHS BY EXPLORING THEIR SCHEMAS

YENIER TORRES IZQUIERDO 28 July 2017
The Resource Description Framework (RDF) was adopted as a W3C recommendation in 1999 and is today a standard for exchanging data on the Web. Indeed, a large amount of data has been converted to RDF, often as multiple datasets physically distributed over different locations. The SPARQL Protocol and RDF Query Language (SPARQL) was officially introduced in 2008 to retrieve RDF data and provide endpoints for querying distributed sources. An alternative way to access RDF datasets is to use keyword-based queries, an area that has been extensively researched, with a recent focus on Web content. This dissertation describes a strategy to compile keyword-based queries into federated SPARQL queries over distributed RDF datasets, under the assumption that each RDF dataset has a schema and that the federation has a mediated schema. The compilation process of the federated SPARQL query is explained in detail, including how to compute the set of external joins between the generated local subqueries, how to combine, with the help of UNION clauses, the results of local queries that have no joins between them, and how to construct the TARGET clause according to the structure of the WHERE clause. Finally, the dissertation covers experiments with real-world data to validate the implementation.
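A toy rendering of the combination rule (join-connected subqueries conjoined, disconnected groups merged with UNION) follows. The grouping logic and the TARGET-clause construction are omitted, and the endpoint URLs and patterns are invented.

```python
def federated_query(join_groups):
    """join_groups: list of groups of (endpoint, graph_pattern) pairs.
    Patterns inside a group share join variables and are conjoined; groups
    with no external join between them are combined with UNION."""
    blocks = []
    for group in join_groups:
        body = " ".join("SERVICE <%s> { %s }" % (ep, gp) for ep, gp in group)
        blocks.append("{ %s }" % body)
    return "SELECT * WHERE { %s }" % " UNION ".join(blocks)

print(federated_query([
    # these two subqueries share ?p, so they are joined in one block
    [("http://example.org/people/sparql",
      "?p <http://xmlns.com/foaf/0.1/name> ?name ."),
     ("http://example.org/pubs/sparql",
      "?doc <http://purl.org/dc/terms/creator> ?p .")],
    # this one shares no variable with the others, so it joins via UNION
    [("http://example.org/places/sparql",
      "?city <http://www.w3.org/2000/01/rdf-schema#label> ?l .")],
]))
```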
169

Model-Driven Development of Complex and Data-Intensive Integration Processes

Boehm, Matthias, Habich, Dirk, Lehner, Wolfgang, Wloka, Uwe 12 January 2023
Due to the changing scope of data management from centrally stored data towards the management of distributed and heterogeneous systems, integration takes place on different levels. The lack of standards for information integration as well as application integration has resulted in a large number of different integration models and proprietary solutions. Aiming at a high degree of portability and reduced development effort, model-driven development, following the Model-Driven Architecture (MDA), is advantageous in this context as well. Hence, in the GCIP project (Generation of Complex Integration Processes), we focus on the model-driven generation and optimization of integration tasks using a process-based approach. In this paper, we contribute detailed generation aspects and finally discuss open issues and further challenges.
170

Dynamic GAN-based Clustering in Federated Learning

Kim, Yeongwoo January 2020
As the era of Industry 4.0 arrives, the number of devices connected to a network has increased. The devices continuously generate data containing varied information, from power consumption to device configuration. Since these data carry raw information about each local node in the network, exploiting them can benefit the network in different ways. However, due to the large amount of non-IID data generated at each node, manual operations to process the data and tune the methods become challenging. To overcome this challenge, there have been attempts to apply automated methods: building accurate machine learning models from a subset of the collected data, or clustering network nodes with clustering algorithms and using machine learning models within each cluster. However, conventional clustering algorithms are imperfect in a distributed and dynamic network because of risks to data privacy, non-dynamic clusters, and a fixed number of clusters. These limitations degrade the performance of the machine learning models because the clusters may become obsolete over time. This thesis therefore proposes a three-phase clustering algorithm for dynamic environments that leverages 1) GAN-based clustering, 2) cluster calibration, and 3) divisive clustering in federated learning. GAN-based clustering preserves data privacy because it eliminates the need to share raw data across the network to create clusters. Cluster calibration adds dynamics to fixed clusters by continuously updating them, benefiting methods that manage the network. Moreover, divisive clustering explores different numbers of clusters by iteratively selecting a cluster and dividing it into multiple clusters. As a result, we create clusters suited to dynamic environments and improve the performance of the machine learning models within each cluster.
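The calibration and divisive phases can be sketched as follows. `similarity(client, generator)` stands for a privacy-preserving score (e.g., how well a cluster's GAN models the client's local data) and is an assumed interface, as is the list of per-cluster generators; the thesis's exact scoring and splitting rules may differ.

```python
import numpy as np

def calibrate_and_split(clients, generators, similarity):
    """One maintenance round: (re)assign every client to its best-matching
    cluster, then flag the most heterogeneous cluster as a split candidate.
    Only similarity scores cross the network; raw data stays on-device."""
    # Phase 2: cluster calibration
    assign = [int(np.argmax([similarity(c, g) for g in generators]))
              for c in clients]

    # Phase 3: divisive step; the cluster whose members' scores vary most
    # is the best candidate for being divided into sub-clusters
    spreads = []
    for k in range(len(generators)):
        member_scores = [similarity(c, generators[k])
                         for c, a in zip(clients, assign) if a == k]
        spreads.append(np.var(member_scores) if member_scores else 0.0)
    split_candidate = int(np.argmax(spreads))
    return assign, split_candidate
```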
