1 |
Towards Peer-to-Peer Federated Learning: Algorithms and Comparisons to Centralized Federated Learning. Mäenpää, Dylan. January 2021.
Due to privacy and regulatory reasons, sharing data between institutions can be difficult. Because of this, real-world data are not fully exploited by machine learning (ML). An emerging method is to train ML models with federated learning (FL), which enables clients to collaboratively train ML models without sharing raw training data. We explored peer-to-peer FL by extending a prominent centralized FL algorithm called Fedavg to function in a peer-to-peer setting. We named this extended algorithm FedavgP2P. Deep neural networks at 100 simulated clients were trained to recognize digits using FedavgP2P and the MNIST data set. Scenarios with IID and non-IID client data were studied. We compared FedavgP2P to Fedavg with respect to the models' convergence behavior and communication costs. Additionally, we analyzed how the amount of local client computation and the number of neighbors each client communicates with affect performance. We also attempted to improve the FedavgP2P algorithm with heuristics based on client identities and per-class F1-scores. The findings showed that with FedavgP2P, the mean model convergence behavior was comparable to that of a model trained with Fedavg. However, this came with varying degrees of variation across the 100 models' convergence behaviors and much greater communication costs (at least 14.9x more communication with FedavgP2P). Increasing the amount of local computation up to a certain level reduced communication costs, and increasing the number of neighbors a client communicated with reduced the variation in the models' convergence behaviors. The FedavgP2P heuristics did not show improved performance. In conclusion, the overall findings indicate that peer-to-peer FL is a promising approach.
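To make the peer-to-peer averaging concrete, the following is a minimal sketch of one plausible FedavgP2P-style round: every client trains locally and then averages its model with models pulled from a few random neighbors, with no central server. The toy quadratic objective, the client and neighbor counts, and all names are illustrative assumptions, not details taken from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    NUM_CLIENTS = 10      # the thesis simulates 100 clients; fewer here for brevity
    NUM_NEIGHBORS = 3     # number of peers each client exchanges models with
    DIM = 5               # toy model: one parameter vector per client

    # Toy local objective per client: ||w - target_i||^2, so local "training"
    # is a few gradient steps toward a client-specific target (a stand-in for SGD
    # on that client's data shard).
    targets = rng.normal(size=(NUM_CLIENTS, DIM))
    models = np.zeros((NUM_CLIENTS, DIM))

    def local_update(w, target, steps=5, lr=0.1):
        """Simulated local training on one client's private data."""
        for _ in range(steps):
            grad = 2.0 * (w - target)
            w = w - lr * grad
        return w

    for rnd in range(20):
        # 1) Every client trains locally on its own data.
        updated = np.stack([local_update(models[i], targets[i]) for i in range(NUM_CLIENTS)])

        # 2) Every client averages its model with models pulled from random
        #    neighbors; no central server aggregates anything.
        new_models = np.empty_like(updated)
        for i in range(NUM_CLIENTS):
            neighbors = rng.choice([j for j in range(NUM_CLIENTS) if j != i],
                                   size=NUM_NEIGHBORS, replace=False)
            group = np.vstack([updated[i], updated[neighbors]])
            new_models[i] = group.mean(axis=0)
        models = new_models

    # Neighbor averaging pulls the client models toward each other, while local
    # training pulls each model toward its own (non-IID-like) target.
    print("spread of client models:", np.std(models, axis=0).mean())

Every client exchanging full models with several peers each round, rather than sending a single update to a server, is also what drives the much higher communication cost reported above.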
|
2 |
Efficient Decentralized Learning Methods for Deep Neural Networks. Sai Aparna Aketi. 26 March 2024.
Decentralized learning is the key to training deep neural networks (DNNs) over large distributed datasets generated at different devices and locations, without the need for a central server. It enables next-generation applications that require DNNs to interact with and learn from their environment continuously. The practical implementation of decentralized algorithms brings its own set of challenges. In particular, these algorithms should be (a) compatible with time-varying graph structures, (b) compute and communication efficient, and (c) resilient to heterogeneous data distributions. The objective of this thesis is to enable efficient decentralized learning of deep neural networks while addressing these challenges. Towards this, first, a communication-efficient decentralized algorithm (Sparse-Push) that supports directed and time-varying graphs with error-compensated communication compression is proposed. Second, a low-precision decentralized training method that aims to reduce memory requirements and computational complexity is proposed; here, we design "Range-EvoNorm" as a normalization-activation layer better suited for low-precision decentralized training. Finally, addressing the problem of data heterogeneity, three advancements are proposed: Neighborhood Gradient Mean (NGM), Global Update Tracking (GUT), and Cross-feature Contrastive Loss (CCL). NGM utilizes extra communication rounds to obtain cross-agent gradient information, whereas GUT tracks global update information with no communication overhead, improving performance on heterogeneous data. CCL explores an orthogonal direction, using a data-free knowledge distillation approach to handle heterogeneous data in decentralized setups. All the algorithms are evaluated on computer vision tasks using standard image-classification datasets. We conclude this dissertation by presenting a summary of the proposed decentralized methods and their trade-offs for heterogeneous data distributions. Overall, the methods proposed in this thesis address critical limitations of training deep neural networks in a decentralized setup and advance the state of the art in this domain.
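The abstract mentions error-compensated communication compression; the sketch below illustrates only that generic idea (error feedback combined with top-k sparsification), not the actual Sparse-Push, NGM, GUT, or CCL algorithms. All names, sizes, and the compression ratio are illustrative assumptions.

    import numpy as np

    def topk_sparsify(x, k):
        """Keep only the k largest-magnitude entries of x, zeroing out the rest."""
        out = np.zeros_like(x)
        idx = np.argsort(np.abs(x))[-k:]
        out[idx] = x[idx]
        return out

    class ErrorCompensatedSender:
        """Error feedback: whatever the compressor drops is stored locally and
        added back into the next message, so the compression error stays bounded
        instead of growing with the number of rounds."""
        def __init__(self, dim, k):
            self.residual = np.zeros(dim)
            self.k = k

        def compress(self, update):
            corrected = update + self.residual          # re-inject previously dropped mass
            message = topk_sparsify(corrected, self.k)  # the sparse message actually sent
            self.residual = corrected - message         # remember what was dropped this time
            return message

    rng = np.random.default_rng(1)
    dim, k, rounds = 1000, 50, 500                      # send only 5% of entries per round
    sender = ErrorCompensatedSender(dim, k)

    total_true = np.zeros(dim)       # what a full-precision link would have delivered
    total_ef = np.zeros(dim)         # compressed with error feedback
    total_naive = np.zeros(dim)      # compressed without any compensation
    for _ in range(rounds):
        update = 0.01 * rng.normal(size=dim)
        total_true += update
        total_ef += sender.compress(update)
        total_naive += topk_sparsify(update, k)

    err = lambda x: np.linalg.norm(x - total_true) / np.linalg.norm(total_true)
    print(f"relative error with error feedback: {err(total_ef):.3f}")
    print(f"relative error without compensation: {err(total_naive):.3f}")

Because the dropped coordinates are re-sent in later rounds rather than lost, the compressed stream tracks the uncompressed one far more closely than naive sparsification does.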
|
3 |
Decentralized machine learning on massive heterogeneous datasets: A thesis about vertical federated learning. Lundberg, Oskar. January 2021.
The need for methods to build collaborative machine learning models that can utilize data from different clients, each with privacy constraints, has recently emerged. This is due to privacy restrictions, such as the General Data Protection Regulation, together with the fact that machine learning models in general need large amounts of data to perform well. Google introduced federated learning in 2016 with the aim of addressing this problem. Federated learning can further be divided into horizontal and vertical federated learning, depending on how the data is structured at the different clients. Vertical federated learning is applicable when different features are held on distributed computation nodes and cannot be shared between them. The aim of this thesis is to identify the current state-of-the-art methods in vertical federated learning, implement the most interesting ones, and compare the results in order to draw conclusions about the benefits and drawbacks of the different methods. From the results of the experiments, a method called FedBCD shows very promising results: it achieves large reductions in the number of communication rounds needed for convergence, at the cost of more computation at the clients. A comparison between synchronous and asynchronous approaches shows slightly better results for the synchronous approach in scenarios with no delay. Delay refers to slower performance in one of the workers, either due to lower computational resources or due to communication issues. In scenarios where an artificial delay is introduced, the asynchronous approach shows superior results due to its ability to continue training when one or several of the clients are delayed.
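As a rough illustration of the trade-off described above (fewer communication rounds at the cost of more local computation), the sketch below splits a logistic-regression model across two parties that hold different features of the same samples and lets each party run several local updates against the other party's stale partial scores between exchanges. This is only a schematic reading of the FedBCD idea: in a real vertical FL deployment only the label holder has the labels and sends per-sample gradient information, often under encryption, and all names and sizes here are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy vertically partitioned data: both parties see the SAME 200 samples,
    # but party A holds features 0-2 and party B holds features 3-5.
    n, dA, dB = 200, 3, 3
    XA = rng.normal(size=(n, dA))
    XB = rng.normal(size=(n, dB))
    true_w = rng.normal(size=dA + dB)
    y = (np.concatenate([XA, XB], axis=1) @ true_w > 0).astype(float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    wA, wB = np.zeros(dA), np.zeros(dB)
    lr, local_steps = 0.5, 5          # local_steps > 1 is the FedBCD-style trade-off

    for comm_round in range(30):
        # One communication round: the parties exchange their partial scores once...
        zA, zB = XA @ wA, XB @ wB
        # ...then each performs several local updates while holding the other
        # party's partial score fixed (stale), which saves communication rounds.
        for _ in range(local_steps):
            p = sigmoid(XA @ wA + zB)             # party A uses the stale zB
            wA -= lr * XA.T @ (p - y) / n
        for _ in range(local_steps):
            p = sigmoid(zA + XB @ wB)             # party B uses the stale zA
            wB -= lr * XB.T @ (p - y) / n

    pred = sigmoid(XA @ wA + XB @ wB) > 0.5
    print("training accuracy:", (pred == y).mean())

Setting local_steps to 1 recovers a plain one-update-per-exchange baseline, which makes the communication-versus-computation trade-off easy to measure.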
|
4 |
Decentralized Large-Scale Natural Language Processing Using Gossip Learning / Decentraliserad Storskalig Naturlig Språkbehandling med Hjälp av Skvallerinlärning. Alkathiri, Abdul Aziz. January 2020.
The field of Natural Language Processing (NLP) in machine learning has seen rising popularity and use in recent years. The nature of NLP, which deals with natural human language and computers, has led to the research and development of many algorithms that produce word embeddings. One of the most widely used of these algorithms is Word2Vec. With the abundance of data generated by users and organizations and the complexity of machine learning and deep learning models, performing training using a single machine becomes unfeasible. Advances in distributed machine learning offer a solution to this problem. Unfortunately, for reasons of data privacy and regulation, in some real-life scenarios the data must not leave its local machine. This limitation has led to the development of techniques and protocols that are massively parallel and data-private. The most popular of these protocols is federated learning. However, due to its centralized nature, it still poses some security and robustness risks. Consequently, this led to the development of massively parallel, data-private, decentralized approaches, such as gossip learning. In the gossip learning protocol, each node in the network periodically chooses a random peer for information exchange, which eliminates the need for a central node. This research intends to test the viability of gossip learning for large-scale, real-world applications. In particular, it focuses on the implementation and evaluation of a Natural Language Processing application using gossip learning. The results show that applying Word2Vec in a gossip learning framework is viable and yields results comparable to its non-distributed, centralized counterpart across various scenarios, with an average loss in quality of 6.904%.
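The gossip protocol described above can be sketched in a few lines: each node, on its own schedule, sends its model to one uniformly chosen peer, and the receiver averages the incoming model with its own before continuing local training. The sketch below uses a small parameter vector with a toy quadratic objective as a stand-in for the Word2Vec embedding matrix; the node count, the merge rule, and all names are illustrative assumptions rather than the exact setup evaluated in the thesis.

    import random
    import numpy as np

    rng = np.random.default_rng(3)
    random.seed(3)

    NUM_NODES = 8
    DIM = 4        # stand-in for the model parameters (e.g. a Word2Vec embedding matrix)

    # Toy local objective per node: pull the model toward the mean of its local data.
    local_means = rng.normal(size=(NUM_NODES, DIM))
    models = [rng.normal(size=DIM) for _ in range(NUM_NODES)]

    def local_step(w, mean, lr=0.1):
        """One local training step on the node's private data (stand-in for SGD)."""
        return w - lr * 2.0 * (w - mean)

    def on_receive(own, received):
        """Merge rule applied when a gossip message arrives: plain averaging."""
        return 0.5 * (own + received)

    for cycle in range(100):
        # In every gossip cycle each node independently sends its current model to
        # one uniformly chosen peer; there is no central server or coordinator.
        for sender in range(NUM_NODES):
            receiver = random.choice([i for i in range(NUM_NODES) if i != sender])
            models[receiver] = on_receive(models[receiver], models[sender])
        # Every node also keeps training on its own data between exchanges.
        models = [local_step(models[i], local_means[i]) for i in range(NUM_NODES)]

    consensus = np.mean(models, axis=0)
    print("average distance to the consensus model:",
          np.mean([np.linalg.norm(m - consensus) for m in models]))

Gossip-learning variants typically differ mainly in the on_receive merge rule and in how often nodes send messages.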
|
5 |
Comparing decentralized learning to Federated Learning when training Deep Neural Networks under churn. Vikström, Johan. January 2021.
Decentralized machine learning could address some problematic facets of Federated Learning: there is no central server acting as an arbiter of who or what may benefit from the machine learning models created from the vast amount of data that has become available in recent years. It could also increase the reliability and scalability of machine learning systems, thereby making it possible to benefit from more of the available data. Gossip Learning is such a protocol, but it has primarily been designed with linear models in mind. How does Gossip Learning perform when training Deep Neural Networks? Could it be a viable alternative to Federated Learning? In this thesis, we implement Gossip Learning using two different model merging strategies. We also design and implement two extensions to this protocol with the goal of achieving higher performance when training under churn. The training methods are compared on two tasks: image classification on the Federated Extended MNIST dataset and time-series forecasting on the NN5 dataset. Additionally, we run an experiment where learners churn, alternating between being available and unavailable. We find that Gossip Learning performs slightly better in settings where learners do not churn but is vastly outperformed in the setting where they do.
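The abstract does not spell out the two merging strategies or the churn extensions, so the sketch below only illustrates the general mechanics being compared: nodes that are intermittently offline, messages that are lost, and an age-weighted merge (one merge rule commonly used in gossip learning). Local training is omitted to keep the focus on the churn and merge logic, and all names and probabilities are illustrative assumptions.

    import random

    random.seed(4)

    NUM_NODES = 10
    P_ONLINE = 0.7     # probability that a node is reachable in a given gossip cycle
    CYCLES = 50

    # Each node carries a scalar "model" plus an age counter (how many updates it
    # has absorbed); age-weighted averaging is one common gossip-learning merge rule.
    nodes = [{"w": random.random(), "age": 1} for _ in range(NUM_NODES)]

    def merge(own, received):
        """Weight each model by how many updates it has already seen."""
        total = own["age"] + received["age"]
        own["w"] = (own["age"] * own["w"] + received["age"] * received["w"]) / total
        own["age"] = total
        return own

    for cycle in range(CYCLES):
        online = [i for i in range(NUM_NODES) if random.random() < P_ONLINE]
        for sender in online:
            peers = [i for i in online if i != sender]
            if not peers:
                continue                   # everyone else has churned out; message is lost
            receiver = random.choice(peers)
            nodes[receiver] = merge(nodes[receiver], nodes[sender])

    spread = max(n["w"] for n in nodes) - min(n["w"] for n in nodes)
    print(f"model spread after {CYCLES} cycles with churn: {spread:.4f}")

Lowering P_ONLINE makes messages drop more often, which is the regime where the abstract reports Gossip Learning falling behind Federated Learning.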
|