1. A Comparative Study on Aggregation Schemes in Heterogeneous Federated Learning Scenarios
Bakambekova, Adilya, 03 1900
The rapid development of Machine Learning algorithms, the growing range of their applications, and the increasing number of Edge Computing devices have created a need for a new paradigm that benefits from both fields. Federated Learning emerged as an answer to this need: it also addresses the privacy issues that arise when large amounts of information are collected on many individual devices and used to train a Machine Learning model, because each device sends only its local updates and keeps the raw data on the device.
At the same time, Federated Learning relies heavily on the computational and communication capabilities of the devices that compute the updates and send them to the central server, where they are integrated into a global model by an Aggregation Scheme, one of the most important components of Federated Learning. Carefully choosing how the local updates are aggregated can mitigate the adverse effects of the wide variety of participating devices.
Therefore, this thesis presents a thorough investigation of Aggregation Schemes and analyzes their behavior in heterogeneous Federated Learning scenarios. It gives an extensive description of the main features of the schemes studied, defines the evaluation criteria, quantifies the computational and communication costs incurred on the devices, and provides a fair comparative assessment.
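The abstract does not name specific schemes, but the server-side aggregation it refers to is usually some variant of a data-size-weighted average of the client updates (the FedAvg baseline). A minimal sketch of that baseline step, with all variable names and shapes chosen purely for illustration, might look like this in Python:

import numpy as np

def fedavg(client_weights, client_sizes):
    # Weight each client's parameters by its share of the total training data,
    # then sum, producing the new global model (the FedAvg baseline).
    total = float(sum(client_sizes))
    aggregated = [np.zeros_like(layer) for layer in client_weights[0]]
    for weights, n_k in zip(client_weights, client_sizes):
        for i, layer in enumerate(weights):
            aggregated[i] += (n_k / total) * layer
    return aggregated

# Illustrative use: three heterogeneous clients sharing a two-layer model.
clients = [[np.random.randn(4, 2), np.random.randn(2)] for _ in range(3)]
sizes = [100, 250, 50]  # unequal local data set sizes
global_model = fedavg(clients, sizes)

More elaborate schemes typically reweight or regularize this average to cope with the device heterogeneity discussed above.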
2. Federated Neural Collaborative Filtering for privacy-preserving recommender systems
Langelaar, Johannes; Strömme Mattsson, Adam, January 2021
In this thesis a number of models for recommender systems are explored, all using collaborative filtering to produce their recommendations. Extra focus is put on two models: Matrix Factorization, a linear model, and the Multi-Layer Perceptron, a non-linear model. With the additional goal of training the models without collecting any sensitive data from the users, both models were implemented with federated learning, a training technique that does not require the server to know the users' data. The federated version of Matrix Factorization is already well researched and has been shown not to protect the users' data at all: the data is derivable from the information that the users must communicate to the server for the model to learn. However, no prior research could be found on a federated Multi-Layer Perceptron model, so such a model is designed and presented in this thesis. Arguments are put forth in support of the model's privacy-preserving properties, along with a proof that the user data is not analytically derivable by the central server. In addition, new ways to further put the protection of the users' data to the test are discussed. All models are evaluated on two data sets. The first, MovieLens 1M, contains movie ratings; the second consists of anonymized fund transactions provided by the Swedish bank SEB for this thesis. Test results suggest that the federated versions of the models can achieve recommendation performance similar to that of their non-federated counterparts.
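The leakage claim for federated Matrix Factorization can be made concrete with a simplified, hypothetical example (not the derivation given in the thesis): if a client uploads the gradient of a squared-error loss with respect to the item embeddings, the non-zero rows reveal which items were rated, and the rating itself can be recovered once the user's factor vector is known, which the server is simply assumed to have here.

import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 5, 3

# Server-held item embeddings Q; the user's private factor p_u and a single private rating.
Q = rng.normal(size=(n_items, dim))
p_u = rng.normal(size=dim)
rated_item, r_ui = 2, 4.0

# Gradient of (r_ui - p_u . q_i)^2 with respect to Q that the client would upload.
grad_Q = np.zeros_like(Q)
grad_Q[rated_item] = -2.0 * (r_ui - p_u @ Q[rated_item]) * p_u

# What the server can read off the upload: which item was rated, and the rating itself.
leaked_item = int(np.flatnonzero(np.abs(grad_Q).sum(axis=1))[0])
leaked_rating = p_u @ Q[leaked_item] - (grad_Q[leaked_item] @ p_u) / (2.0 * (p_u @ p_u))
print(leaked_item, round(leaked_rating, 2))  # -> 2 4.0

It is exactly this kind of direct invertibility that the thesis argues the non-linear federated Multi-Layer Perceptron avoids.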