About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
21

A Study on Federated Learning Systems in Healthcare

Smith, Arthur, M.D. 18 August 2021
No description available.
22

Federated Emotion Recognition with Physiological Signals - GSR

Hassani, Tara January 2021
Background: Human-computer interaction (HCI) triggers emotional events daily in today's world, and researchers in this area have been exploring different techniques to enhance the emotional ability of computers. Due to privacy concerns and laboratories' limited capacity for gathering data from large numbers of users, the machine learning techniques commonly used in emotion recognition tasks lack adequate data. To address these issues, we propose a decentralized framework based on the Federated Learning architecture, in which raw data is collected and analyzed locally. The resulting local updates are transferred to a server and aggregated into a global model for the emotion recognition task, using only Galvanic Skin Response (GSR) signals and their extracted features.
Objectives: This thesis aims to explore how a CNN-based federated learning approach can be used in emotion recognition while protecting data privacy, and to investigate whether it reaches the same performance as a basic centralized CNN.
Methods: To investigate the effect of the proposed method on emotion recognition, two architectures, centralized and federated, are designed with the CNN model, and their results are compared. The dataset used in our work is the CASE dataset. In the federated architecture, we transmit neurons and weights to train the models instead of the raw data used in the centralized architecture.
Results: The performance results indicate that the proposed model not only works well but also outperforms some related methods in valence accuracy. It can also collect more data from various sources and better protect sensitive user data by supporting tighter privacy regulations. Physiological data is inherently anonymous, but when it is combined with other modalities such as video or voice, maintaining the same anonymity is challenging.
Conclusions: This thesis concludes that the federated CNN-based model can be used in emotion recognition systems and obtains the same accuracy as the centralized architecture. In classifying valence, it outperforms some other state-of-the-art methods. Meanwhile, its federated nature provides better privacy protection and data diversity for the emotion recognition system.
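As a rough illustration of the kind of model this abstract describes, here is a minimal 1D-CNN sketch in PyTorch for classifying valence from GSR windows. The layer sizes and the 1000-sample window length are assumptions for illustration; the thesis's actual architecture may differ.

```python
# A minimal sketch (PyTorch) of a 1D CNN for valence classification from
# GSR windows. All layer sizes and the window length are assumptions.
import torch
import torch.nn as nn

class GSRNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),  # single raw GSR channel in
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                     # length-independent pooling
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                                # x: (batch, 1, T)
        return self.classifier(self.features(x).squeeze(-1))

model = GSRNet()
logits = model(torch.randn(8, 1, 1000))  # 8 windows of 1000 GSR samples each
```

In a federated setup, each client would train a copy of this model on its own GSR windows and only the resulting weights would travel to the server.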
23

Implementation of Federated Learning on Raspberry Pi Boards with Paillier Encryption

Wang, Wenhao January 2021
The development of innovative artificial intelligence (AI) applications is inseparable from the sharing of public data. However, as people become more aware of personal data privacy, it is increasingly difficult to collect data from multiple sources, and unified data management carries a risk of leakage. Yet neural networks need large amounts of data for model learning and analysis. Federated learning (FL) addresses these difficulties: it allows a server to learn from the local data of multiple clients without collecting it. This thesis deploys FL on Raspberry Pi (RPi) boards and implements federated averaging (FedAvg) as the aggregation method. First, in simulation, we compare FL with centralized learning (CL). We then build a reliable socket-based communication system on the testbed and implement FL on those devices. In addition, the Paillier encryption algorithm is applied to the communication in FL so that model parameters are not exposed directly to the public network. In other words, the project builds a complete and secure hardware-based FL system.
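The Paillier scheme mentioned above is additively homomorphic, which is exactly what FedAvg-style aggregation needs: the server can sum and scale encrypted client updates without ever decrypting them. Below is a minimal sketch using the python-paillier (`phe`) package; the shared-keypair arrangement, key length, and toy updates are assumptions for illustration, not the thesis's exact protocol.

```python
# A minimal sketch of additively homomorphic aggregation with `phe`
# (python-paillier). Key handling is simplified: all clients share one
# keypair and the server never holds the private key (an assumption).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each client encrypts its local model parameters before upload.
client_updates = [[0.12, -0.40, 0.73], [0.08, -0.35, 0.70]]
encrypted = [[public_key.encrypt(w) for w in update] for update in client_updates]

# The server averages ciphertexts: Paillier supports adding ciphertexts and
# multiplying by a plaintext scalar, which is all FedAvg needs.
n = len(encrypted)
enc_avg = [sum(col) * (1.0 / n) for col in zip(*encrypted)]

# Clients (the key holders) decrypt the aggregated global model.
global_weights = [private_key.decrypt(c) for c in enc_avg]
print(global_weights)  # approx. [0.10, -0.375, 0.715]
```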
24

UNIFYING DISTILLATION WITH PERSONALIZATION IN FEDERATED LEARNING

Siddharth Divi 29 April 2021
Federated learning (FL) is a decentralized privacy-preserving learning technique in which clients learn a joint collaborative model through a central aggregator without sharing their data. In this setting, all clients learn a single common predictor (FedAvg), which does not generalize well on each client's local data due to the statistical data heterogeneity among clients. In this work, we address this problem with PersFL, a discrete two-stage personalized learning algorithm. In the first stage, PersFL finds the optimal teacher model for each client during the FL training phase. In the second stage, PersFL distills the useful knowledge from the optimal teachers into each user's local model. The teacher model provides each client with a rich, high-level representation that the client can easily adapt to its local model, overcoming the statistical heterogeneity present at different clients. We evaluate PersFL on the CIFAR-10 and MNIST datasets using three data-splitting strategies to control the diversity between clients' data distributions.

We empirically show that PersFL outperforms FedAvg and three state-of-the-art personalization methods, pFedMe, Per-FedAvg, and FedPer, on a majority of data splits with minimal communication cost. Further, we study the performance of PersFL under different distillation objectives, how this performance is affected by an equitable notion of fairness among clients, and the number of required communication rounds. We also build an evaluation framework with the following modules: Data Generator, Federated Model Generation, and Evaluation Metrics. We introduce new metrics for the domain of personalized FL and split them into two perspectives: performance and fairness. We analyze all the personalized algorithms with these metrics to answer the following questions: which personalization algorithm performs best in terms of accuracy across all users, and which is the fairest among them? Finally, we make the code for this work available at https://tinyurl.com/1hp9ywfa for public use and validation.
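The second PersFL stage distills a teacher into each client's local model. The sketch below uses the standard temperature-scaled distillation loss (cross-entropy on labels plus KL divergence to the teacher's softened outputs) as an assumed stand-in for PersFL's exact distillation objectives, which the abstract does not spell out.

```python
# A minimal sketch of teacher-to-student distillation at one client.
# The CE + temperature-scaled KL objective is the standard distillation
# loss, assumed here for illustration of the two-stage idea.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 2.0, alpha: float = 0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                              # rescale gradients after softening
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# e.g. one client step: teacher frozen, student updated on local data
student_logits = torch.randn(32, 10, requires_grad=True)
teacher_logits = torch.randn(32, 10)
labels = torch.randint(0, 10, (32,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```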
25

Towards Peer-to-Peer Federated Learning: Algorithms and Comparisons to Centralized Federated Learning

Mäenpää, Dylan January 2021
For privacy and regulatory reasons, sharing data between institutions can be difficult. Because of this, real-world data are not fully exploited by machine learning (ML). An emerging method is to train ML models with federated learning (FL), which enables clients to collaboratively train models without sharing raw training data. We explored peer-to-peer FL by extending a prominent centralized FL algorithm, Fedavg, to function in a peer-to-peer setting; we named this extended algorithm FedavgP2P. Deep neural networks at 100 simulated clients were trained to recognize digits using FedavgP2P and the MNIST dataset, in scenarios with both IID and non-IID client data. We compared FedavgP2P to Fedavg with respect to the models' convergence behavior and communication costs. Additionally, we analyzed how the amount of local client computation and the number of neighbors each client communicates with affect performance, and we attempted to improve FedavgP2P with heuristics based on client identities and per-class F1-scores. The findings show that with FedavgP2P, the mean convergence behavior is comparable to a model trained with Fedavg, but at the cost of varying degrees of variation across the 100 models' convergence behaviors and much greater communication costs (at least 14.9x more communication with FedavgP2P). Increasing the amount of local computation up to a certain level saves communication costs, and increasing the number of neighbors a client communicates with lowers the variation in convergence behavior. The FedavgP2P heuristics did not show improved performance. In conclusion, the overall findings indicate that peer-to-peer FL is a promising approach.
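The core FedavgP2P step described above, each client averaging parameters with its neighbors instead of with a central server, can be sketched as follows. Uniform weighting over the self-plus-neighbors group is an assumption for illustration.

```python
# A minimal sketch of one peer-to-peer round in the spirit of FedavgP2P:
# after local training, each client averages its parameters with those of
# its neighbors. Uniform weighting is an assumption.
import numpy as np

def p2p_round(params: dict, neighbors: dict) -> dict:
    """params: client_id -> weight vector; neighbors: client_id -> list of ids."""
    new_params = {}
    for cid, w in params.items():
        group = [w] + [params[n] for n in neighbors[cid]]
        new_params[cid] = np.mean(group, axis=0)  # average self + neighbors
    return new_params

params = {0: np.array([1.0, 2.0]), 1: np.array([3.0, 4.0]), 2: np.array([5.0, 6.0])}
neighbors = {0: [1], 1: [0, 2], 2: [1]}
print(p2p_round(params, neighbors))  # client 1 -> [3., 4.], the mean of all three
```

Each client's communication cost per round grows with its neighbor count, which is consistent with the trade-off the abstract reports between variation and communication.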
26

Applied Machine Learning for Online Education

Serena Alexis Nicoll 28 April 2022
We consider the problem of developing innovative machine learning tools for online education and evaluate their ability to provide instructional resources. Predicting student behavior is a complex problem spanning a wide range of topics: we complement current research in student grade prediction and clickstream analysis by considering data from three areas of online learning: Social Learning Networks (SLN), instructor feedback, and Learning Management Systems (LMS). In each of these categories, we propose a novel method for modelling the data and an associated tool that may be used to assist students and instructors. First, we develop a methodology for analyzing instructor-provided feedback and determining how it correlates with changes in student grades, using NLP- and NER-based feature extraction. We demonstrate that student grade improvement can be well approximated by a multivariate linear model, with average fits across course sections approaching 83%, and determine several contributors to student success. Additionally, we develop a series of link prediction methodologies that utilize spatial and time-evolving network architectures to pass network state between space and time periods. Through evaluation on six real-world datasets, we find that our method obtains substantial improvements over Bayesian models, linear classifiers, and an unsupervised baseline, with AUCs typically above 0.75 and reaching 0.99. Motivated by federated learning, we extend our model of student discussion forums to model an entire classroom as an SLN. We develop a methodology to represent student actions across different course materials in a shared, low-dimensional space that allows characteristics of actions of different types to be passed jointly to a downstream task. Performance comparisons against several baselines in centralized, federated, and personalized learning demonstrate that our model offers more distinctive representations of students in a low-dimensional space, which in turn results in improved accuracy on a common downstream prediction task. Results from these three research thrusts indicate the ability of machine learning methods to accurately model student behavior across multiple data types, and suggest their potential to benefit students and instructors alike through future development of assistive tools.
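The multivariate linear model mentioned above can be illustrated with a short sketch; the features and data here are entirely hypothetical stand-ins for the thesis's NLP/NER-derived feedback features.

```python
# A minimal sketch of regressing grade improvement on feedback-derived
# features. Feature names and data are hypothetical; the thesis reports
# average fits approaching 83% on real course sections.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Hypothetical per-student features: feedback length, sentiment score,
# count of named entities referenced in the feedback.
X = rng.normal(size=(200, 3))
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] - 0.1 * X[:, 2] + rng.normal(0.0, 0.3, 200)

model = LinearRegression().fit(X, y)
print("R^2:", model.score(X, y))        # goodness of fit
print("coefficients:", model.coef_)     # per-feature contribution to improvement
```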
27

Privacy-Preserved Federated Learning : A survey of applicable machine learning algorithms in a federated environment

Carlsson, Robert January 2020
There is potential for collaborative machine learning in the fields of medicine and finance. These areas gather data that can be used to develop machine learning models predicting everything from illness in patients to economic crimes such as fraud. The problem is that the collected data is mostly confidential and should be handled with precaution. This makes the standard way of doing machine learning, gathering data at one centralized server, undesirable: the safety of the data has to be taken into account. In this project we explore the federated learning approach of "bringing the code to the data, instead of the data to the code". It is a decentralized way of doing machine learning in which models are trained on connected devices and data is never shared, keeping the data privacy-preserved.
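In its simplest FedAvg form, the "code to the data" idea quoted above reduces to a data-size-weighted average of locally trained parameters: the server distributes the model, each client trains on data that never leaves the device, and only parameters return. A minimal sketch with toy numbers, assuming a plain weighted average:

```python
# A minimal sketch of the "code to the data" idea: clients train locally
# and the server aggregates with a dataset-size-weighted average (FedAvg).
import numpy as np

def fedavg(client_weights: list, client_sizes: list) -> np.ndarray:
    """Weighted average of client parameter vectors by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# e.g. three hospitals with differently sized confidential datasets
weights = [np.array([0.2, 0.5]), np.array([0.4, 0.1]), np.array([0.3, 0.3])]
sizes = [1000, 3000, 500]
print(fedavg(weights, sizes))  # aggregated model; raw data is never pooled
```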
28

Building a Personally Identifiable Information Recognizer in a Privacy Preserved Manner Using Automated Annotation and Federated Learning

Hathurusinghe, Rajitha 16 September 2020
This thesis explores the training of a deep neural network based named entity recognizer in an end-to-end privacy-preserved setting, where dataset creation and model training happen with minimal manual intervention. As the accuracy of deep learning models on practical tasks improves, a rising concern is satisfying their demand for training data amid concerns about data privacy. Several data protection schemes, and legal guidelines to enforce them, have been proposed recently in response to public concern. A promising development is decentralized model training on isolated datasets, which eliminates the privacy compromise of providing data to a centralized entity. However, in this federated setting, curating the data source is still a privacy risk, especially for unstructured data sources such as text. We explore the feasibility of automatic dataset annotation for a Named Entity Recognition (NER) task and of training a deep learning model with it in two federated learning settings. We examine whether a dataset created in this manner can be used to fine-tune a state-of-the-art deep learning language model for the downstream task of named entity recognition, and we study how this novel combination of deep learning NLP and federated learning deviates from the classical centralized setting. We created an automatically annotated dataset containing around 80,000 sentences, a manually annotated test set, and tools to extend the dataset with more manual annotations. We observed that the noise from automated annotation can be overcome to a degree by increasing the dataset size. We also contributed state-of-the-art NLP model developments to the federated learning framework. Overall, our NER model achieved an F1-score of around 0.80 for recognizing entities in sentences.
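The automated annotation step described above can be approximated by letting a pretrained NER pipeline label raw text, a common bootstrapping pattern. Below is a minimal sketch using spaCy; the model name and example sentence are assumptions, and the thesis's own annotation tooling may differ.

```python
# A minimal sketch of automated NER annotation: a pretrained pipeline
# labels raw sentences to bootstrap a training set without manual work.
# The spaCy model name is an assumption for illustration.
import spacy

nlp = spacy.load("en_core_web_sm")  # pretrained pipeline as a weak annotator

def auto_annotate(sentences):
    """Yield (text, entity spans) pairs with character offsets and labels."""
    for doc in nlp.pipe(sentences):
        spans = [(ent.start_char, ent.end_char, ent.label_) for ent in doc.ents]
        yield doc.text, spans

sentences = ["Alice Smith moved to Ottawa in March 2020."]
for text, spans in auto_annotate(sentences):
    print(text, spans)  # e.g. PERSON / GPE / DATE spans
```

Annotations produced this way are noisy, which matches the abstract's observation that more data is needed to overcome the annotation noise.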
29

Domain-based Collaborative Learning for Enhanced Health Management of Distributed Industrial Assets

Pandhare, Vibhor January 2021
No description available.
30

Decentralized Federated Autonomous Organizations for Prognostics and Health Management

Bagheri, Behrad 15 June 2020
No description available.
