1 | A Comprehensive Study on Federated Learning Frameworks: Assessing Performance, Scalability, and Benchmarking with Deep Learning Model - Hamsath Mohammed Khan, Riyas, January 2023 (has links)
Federated Learning has emerged as a promising paradigm for machine learning model training that can be carried out collaboratively on decentralized data sources. As the adoption of Federated Learning grows, selecting the right framework for a given use case has become more important. This study provides a comprehensive overview of three prominent Federated Learning frameworks: Flower, FEDn, and FedML. The performance, scalability, and usability of these frameworks are assessed on the basis of an NLP use case. The study begins with an overview of Federated Learning and its significance in distributed learning scenarios. It then examines the Flower framework in depth, covering its structure, communication methods, and interaction with deep learning libraries. The performance of Flower is evaluated through experiments on a standard benchmark dataset, with metrics measuring accuracy, speed, and scalability. Tests are also conducted to assess Flower's ability to handle large-scale Federated Learning setups. The same evaluation is carried out for the other two frameworks, FEDn and FedML. To gain better insight into the strengths, limitations, and suitability of Flower, FEDn, and FedML for different Federated Learning scenarios, the study applies this comparative analysis to a real-world use case. The possibilities for integrating these frameworks with existing machine learning workflows are also discussed. The final results and conclusions may help researchers and practitioners make informed decisions regarding framework selection for their Federated Learning applications. / There are other digital materials (e.g. film, image, or audio files) or models/artifacts belonging to the thesis that need to be archived.
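To illustrate the kind of framework integration with deep learning libraries that the study examines, the sketch below shows a minimal Flower client wrapping a small PyTorch model. This is not the thesis's benchmark code: the model, data loaders, server address, and hyperparameters are placeholder assumptions, and the API usage reflects the Flower 1.x NumPyClient interface.

```python
# Hypothetical sketch: a minimal Flower NumPyClient around a placeholder PyTorch
# model. Model, data loaders, and hyperparameters are illustrative, not the
# thesis's benchmark setup.
import flwr as fl
import torch
import torch.nn as nn


class BagOfWordsClassifier(nn.Module):
    """Stand-in for the NLP model benchmarked in the study."""
    def __init__(self, vocab_size=1000, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vocab_size, 64), nn.ReLU(), nn.Linear(64, num_classes)
        )

    def forward(self, x):
        return self.net(x)


class NLPClient(fl.client.NumPyClient):
    def __init__(self, model, train_loader, val_loader):
        self.model, self.train_loader, self.val_loader = model, train_loader, val_loader

    def get_parameters(self, config):
        # Flower exchanges model weights as lists of NumPy arrays.
        return [p.detach().cpu().numpy() for p in self.model.state_dict().values()]

    def set_parameters(self, parameters):
        keys = self.model.state_dict().keys()
        state = {k: torch.tensor(v) for k, v in zip(keys, parameters)}
        self.model.load_state_dict(state, strict=True)

    def fit(self, parameters, config):
        # Load global weights, run one local epoch, return updated weights.
        self.set_parameters(parameters)
        opt = torch.optim.SGD(self.model.parameters(), lr=0.01)
        loss_fn = nn.CrossEntropyLoss()
        self.model.train()
        for x, y in self.train_loader:
            opt.zero_grad()
            loss = loss_fn(self.model(x), y)
            loss.backward()
            opt.step()
        return self.get_parameters(config), len(self.train_loader.dataset), {}

    def evaluate(self, parameters, config):
        # Report loss and accuracy on the local validation split.
        self.set_parameters(parameters)
        self.model.eval()
        loss_fn = nn.CrossEntropyLoss()
        loss, correct, n = 0.0, 0, 0
        with torch.no_grad():
            for x, y in self.val_loader:
                out = self.model(x)
                loss += loss_fn(out, y).item() * len(y)
                correct += (out.argmax(dim=1) == y).sum().item()
                n += len(y)
        return float(loss / n), n, {"accuracy": correct / n}


# Server side (FedAvg aggregation over a fixed number of rounds):
# fl.server.start_server(
#     server_address="0.0.0.0:8080",
#     config=fl.server.ServerConfig(num_rounds=3),
#     strategy=fl.server.strategy.FedAvg(),
# )
# Client side (one process per data holder):
# fl.client.start_numpy_client(
#     server_address="127.0.0.1:8080",
#     client=NLPClient(BagOfWordsClassifier(), train_loader, val_loader),
# )
```

In this pattern the accuracy, speed, and scalability comparisons reported above amount to measuring the same client logic under each framework's server and aggregation machinery.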
2 | Federated Learning with FEDn for Financial Market Surveillance - Voltaire Edoh, Isak, January 2022 (has links)
Machine Learning (ML) is the current trend that most industries are adopting to improve their business and operations. ML has also been adopted in the financial markets, where well-funded financial institutions employ the latest ML algorithms to gain an advantage on the market. The darker side of ML is the potential emergence of complex algorithmic trading schemes that are abusive and manipulative. Because of this, it is inevitable that ML will be applied to financial market surveillance in order to detect these abusive and manipulative trading strategies. Ideally, an accurate ML detection model would be developed with data from many financial institutions or trading venues. However, such ML models require vast quantities of data, which poses a problem in market surveillance, where data is sensitive or limited. Data sharing between companies or countries is typically accompanied by legal and privacy concerns. By training ML models on distributed datasets, Federated Learning (FL) overcomes these issues by eliminating the need to centralise sensitive data. This thesis aimed to address these ML-related issues in market surveillance by implementing and evaluating an FL model. FL enables a group of independent data-holding clients with a shared goal to build a common ML model collaboratively without compromising private data. In this work, an ML model is initially deployed in a centralised data setting and trained to detect the manipulative trading scheme known as spoofing. An LSTM-Autoencoder was the model chosen for this task. The same model is also implemented in a federated setting with decentralised data, using the FL framework FEDn. Another FL framework, Flower, is also employed to evaluate the performance of FEDn. Experiments were conducted comparing the FL models to the conventional centralised learning model, as well as comparing the two frameworks to each other. The results showed that, under certain circumstances, the FL models performed better than the centralised model in detecting spoofing. FEDn was equivalent to Flower in terms of detection performance. In addition, the results indicated that Flower was marginally faster than FEDn. Variations in the experimental setup and stochasticity are assumed to account for the performance disparity.
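As a concrete illustration of the detection approach, the sketch below outlines an LSTM-Autoencoder that scores trading sequences by reconstruction error, so that high-error sequences can be flagged as potential spoofing. The layer sizes, feature count, and thresholding rule are illustrative assumptions rather than the thesis configuration; in the federated setting, this same model would be trained by FEDn or Flower clients on their own local trading data.

```python
# Hypothetical sketch of an LSTM-Autoencoder anomaly detector for trading
# sequences; sizes and the threshold rule are illustrative, not the thesis setup.
import torch
import torch.nn as nn


class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features, hidden_size=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.output = nn.Linear(hidden_size, n_features)

    def forward(self, x):
        # x: (batch, seq_len, n_features)
        _, (h, _) = self.encoder(x)                         # compress sequence into final hidden state
        latent = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)  # repeat latent vector per time step
        decoded, _ = self.decoder(latent)
        return self.output(decoded)                         # reconstruct the input sequence


def anomaly_scores(model, batch):
    """Per-sequence reconstruction error; high error suggests spoofing-like behaviour."""
    model.eval()
    with torch.no_grad():
        recon = model(batch)
        return ((recon - batch) ** 2).mean(dim=(1, 2))


# Train on (mostly) normal trading sequences with a reconstruction loss, then
# flag sequences whose score exceeds a threshold chosen on a validation set,
# e.g. a high percentile of normal-data scores.
model = LSTMAutoencoder(n_features=8)
x = torch.randn(4, 50, 8)                 # 4 toy sequences, 50 time steps, 8 order-book features
loss = nn.MSELoss()(model(x), x)          # reconstruction loss used during training
print(anomaly_scores(model, x))           # one anomaly score per sequence
```

The centralised baseline trains this model on pooled data, while the FEDn and Flower experiments keep each client's sequences local and aggregate only the model updates, which is what makes the centralised-versus-federated comparison above possible.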