31

Classifying femur fractures using federated learning

Zhang, Hong January 2024 (has links)
The rarity and subtle radiographic features of atypical femoral fractures (AFF) make them difficult to distinguish radiologically from normal femoral fractures (NFF). Compared with NFF, AFF has subtle radiological features and is associated with the long-term use of bisphosphonates in the treatment of osteoporosis. Automatically classifying AFF and NFF not only helps improve the diagnosis rate of AFF but also helps patients receive timely treatment. In recent years, automatic classification techniques for AFF and NFF have continued to emerge, including but not limited to convolutional neural networks (CNNs), vision transformers (ViTs), and multimodal deep learning prediction models. These methods are all based on deep learning and require centralized radiograph datasets. However, centralizing medical radiograph data raises issues of patient privacy and data heterogeneity. First, radiograph data is difficult to share among hospitals, and relevant laws and guidelines prohibit its dissemination; second, there are systematic radiological differences among hospital datasets, and deep learning does not fully address the fusion of such multi-source heterogeneous data. Based on federated learning, we implemented a distributed deep learning strategy that avoids centralized datasets, thereby protecting both the local radiograph datasets of medical institutions and patient privacy. To this end, we studied approximately 4000 images from 72 hospitals in Sweden, covering 206 AFF patients and 744 NFF patients. By dispersing the radiograph datasets of the different hospitals across 3-5 nodes, we simulate real-world data distribution scenarios, train each node's local model separately, and aggregate a global model, combined with percentile privacy protection to further secure the local datasets; in addition, we compare the performance of federated learning models using different aggregation algorithms (FedAvg, FedProx, and FedOpt). The resulting federated global model outperforms the locally trained models, and its performance approaches that of the centralized learning model, even exceeding it on some metrics. We conducted 3-node and 5-node federated learning training; limited by the dataset size at each node, 5-node federated learning did not perform significantly better than 3-node federated learning. Federated learning not only facilitates collaborative training of high-quality prediction models among medical institutions but also fully protects sensitive medical data. We believe it will become a paradigm for collaboratively trained models in the foreseeable future.
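For illustration, here is a minimal sketch of the weighted-averaging step behind FedAvg, the simplest of the three aggregation algorithms this abstract compares; the three-node setup, tensor shapes, and sample counts are hypothetical, not taken from the thesis:

```python
import numpy as np

def fedavg(local_weights, node_sizes):
    """Aggregate node models by sample-weighted averaging (FedAvg,
    McMahan et al., 2017): each layer of the global model is the
    average of the nodes' layers, weighted by local dataset size."""
    total = sum(node_sizes)
    n_layers = len(local_weights[0])
    return [
        sum((size / total) * weights[layer]
            for weights, size in zip(local_weights, node_sizes))
        for layer in range(n_layers)
    ]

# Hypothetical 3-node round with a toy two-layer model.
rng = np.random.default_rng(0)
node_models = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(3)]
global_model = fedavg(node_models, node_sizes=[300, 350, 300])
```

FedProx and FedOpt modify the local objective and the server-side update, respectively, but share this same aggregation skeleton.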
32

Cross-Device Federated Intrusion Detector For Early Stage Botnet Propagation

Famera, Angela Grace 03 January 2023 (has links)
No description available.
33

Decentralized Machine Learning On Blockchain: Developing A Federated Learning Based System

Sridhar, Nikhil 01 December 2023 (has links) (PDF)
Traditional Machine Learning (ML) methods usually rely on a central server to perform ML tasks. However, these methods have problems like security risks, data storage issues, and high computational demands. Federated Learning (FL), on the other hand, spreads out the ML process. It trains models on local devices and then combines them centrally. While FL improves computing and customization, it still faces the same challenges as centralized ML in security and data storage. This thesis introduces a new approach combining Federated Learning and Decentralized Machine Learning (DML), which operates on an Ethereum Virtual Machine (EVM) compatible blockchain. The blockchain's security and decentralized nature help improve transparency, trust, scalability, and efficiency. The main contributions of this thesis include:

1. Redesigning a semi-centralized system with enhanced privacy and the Multi-KRUM algorithm, following the work of Shayan et al.
2. Developing a new decentralized framework that supports both standard and deep-learning FL, using the InterPlanetary File System (IPFS) and EVM-compatible Smart Contracts.
3. Assessing how well the system defends against common data poisoning attacks, using a version of Multi-KRUM that is better at detecting outliers.
4. Applying privacy methods to securely combine data from different sources.
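As a sketch of the outlier-filtering aggregation the contributions mention, here is a minimal Multi-KRUM implementation following Blanchard et al. (2017); the thesis' modified variant and its blockchain integration are not reproduced, and the toy update shapes are assumptions:

```python
import numpy as np

def multi_krum(updates, f, m):
    """Score each update by the sum of squared distances to its
    n - f - 2 nearest neighbours (Blanchard et al., 2017) and
    average the m lowest-scoring (most central) updates."""
    n = len(updates)
    flat = np.stack([u.ravel() for u in updates])
    sq_dists = np.linalg.norm(flat[:, None] - flat[None, :], axis=2) ** 2
    k = n - f - 2                       # neighbours counted per update
    scores = [np.sort(np.delete(sq_dists[i], i))[:k].sum() for i in range(n)]
    chosen = np.argsort(scores)[:m]     # poisoned updates score high
    return flat[chosen].mean(axis=0).reshape(updates[0].shape)

# Toy round: six client updates, one of them poisoned.
rng = np.random.default_rng(1)
updates = [rng.normal(size=(4, 2)) for _ in range(6)]
updates[0] += 10.0                      # outlier that Multi-KRUM filters out
aggregate = multi_krum(updates, f=1, m=3)
```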
34

Federated Machine Learning for Resource Allocation in Multi-domain Fog Ecosystems

Zhang, Weilin January 2023 (has links)
The proliferation of the Internet of Things (IoT) has increasingly demanded intimacy between cloud services and end-users. This has incentivised extending cloud resources to the edge in what is deemed fog computing. The latter is manifesting as an ecosystem of connected clouds, geo-dispersed and of diverse capacities. In such conditions, workload allocation to fog services becomes a non-trivial challenge due to the complexity of trade-offs. Users' demand at the edge is highly diverse, which does not lend itself to straightforward resource planning. Conversely, running services at the edge may leverage proximity, but it comes at higher operational cost and rapidly increases the risk of straining sparse resources. Consequently, there is a need for intelligent yet scalable allocation solutions that counter the adversity of demand at the edge while efficiently distributing load between the edge and farther clouds. Machine learning is increasingly adopted in resource planning. However, besides privacy concerns, central learning is highly demanding, both computationally and in data supply. Instead, this thesis proposes a federated deep reinforcement learning system, based on a deep Q-learning network (DQN), for workload distribution in a fog ecosystem. The proposed solution adapts a DQN to optimize local workload allocations made by single gateways. Federated learning is incorporated to allow multiple gateways in a network to collaboratively build knowledge of users' demand. This is leveraged to establish consensus on the fraction of workload allocated to different fog nodes, using lower data supply and computation resources. The system performance is evaluated using a realistic demand set from the Google Cluster Workload Traces 2019. Evaluation results show over 50% reduction in failed allocations when distributing users over a larger number of gateways, given a fixed number of fog nodes. The results further illustrate the trade-offs between performance and cost under different conditions.
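A toy sketch of the core loop this abstract describes, i.e. per-gateway Q-learning rounds followed by federated averaging; the linear Q-function, synthetic demand transitions, and reward are stand-ins for the thesis' DQN and workload model, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
N_GATEWAYS, N_ACTIONS, STATE_DIM = 3, 4, 8  # hypothetical sizes

# One linear Q-function per gateway: Q(s) = W @ s (stand-in for a DQN).
gateways = [rng.normal(scale=0.1, size=(N_ACTIONS, STATE_DIM))
            for _ in range(N_GATEWAYS)]

def local_q_update(W, lr=0.01, gamma=0.9, steps=100):
    """One round of local Q-learning on synthetic demand transitions."""
    for _ in range(steps):
        s = rng.normal(size=STATE_DIM)            # observed demand state
        a = int(np.argmax(W @ s))                 # greedy fog-node choice
        r = -np.abs(s).mean()                     # toy allocation cost
        s_next = rng.normal(size=STATE_DIM)
        td_target = r + gamma * np.max(W @ s_next)
        td_error = td_target - (W @ s)[a]
        W[a] += lr * td_error * s                 # semi-gradient TD update
    return W

for round_ in range(5):                           # federated rounds
    gateways = [local_q_update(W) for W in gateways]
    global_W = np.mean(gateways, axis=0)          # FedAvg over gateways
    gateways = [global_W.copy() for _ in range(N_GATEWAYS)]
```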
35

Enabling IoV Communication through Secure Decentralized Clustering using Federated Deep Reinforcement Learning

Scott, Chandler 01 August 2024 (has links) (PDF)
The Internet of Vehicles (IoV) holds immense potential for revolutionizing transportation systems by facilitating seamless vehicle-to-vehicle and vehicle-to-infrastructure communication. However, challenges such as congestion, pollution, and security persist, particularly in rural areas with limited infrastructure. Existing centralized solutions are impractical in such environments due to latency and privacy concerns. To address these challenges, we propose a decentralized clustering algorithm enhanced with Federated Deep Reinforcement Learning (FDRL). Our approach enables low-latency communication, competitive packet delivery ratios, and cluster stability while preserving data privacy. Additionally, we introduce a trust-based security framework for IoV environments, integrating a central authority and trust engine to establish secure communication and interaction among vehicles and infrastructure components. Through these innovations, we contribute to safer, more efficient, and trustworthy IoV deployments, paving the way for widespread adoption and realizing the transformative potential of IoV technologies.
36

GraphDHT: Scaling Graph Neural Networks' Distributed Training on Edge Devices on a Peer-to-Peer Distributed Hash Table Network

Gupta, Chirag 03 January 2024 (has links)
This thesis presents an innovative strategy for distributed Graph Neural Network (GNN) training, leveraging a peer-to-peer network of heterogeneous edge devices interconnected through a Distributed Hash Table (DHT). As GNNs become increasingly vital in analyzing graph-structured data across various domains, they pose unique challenges in computational demands and privacy preservation, particularly when deployed for training on edge devices like smartphones. To address these challenges, our study introduces the Adaptive Load-Balanced Partitioning (ALBP) technique in the GraphDHT system. This approach optimizes the division of graph datasets among edge devices, tailoring partitions to the computational capabilities of each device. By doing so, ALBP ensures efficient resource utilization across the network, significantly improving upon traditional participant selection strategies that often overlook the potential of lower-performance devices. At the core of our methodology are weighted graph partitioning and partition-ratio-based model aggregation in GNNs, which improve training efficiency and resource use. ALBP promotes inclusive device participation in training, overcoming computational limits and privacy concerns in large-scale graph data processing. Utilizing a DHT-based system enhances privacy in the peer-to-peer setup. The GraphDHT system, tested across various datasets and GNN architectures, shows ALBP's effectiveness in distributed GNN training and its broad applicability in different domains and structures. This contributes to applied machine learning, especially in optimizing distributed learning on edge devices. / Master of Science / Graph Neural Networks (GNNs) are a type of machine learning model that focuses on analyzing data structured like a network, such as social media connections or biological systems. These models can help identify patterns and make predictions in various tasks, but training them on large-scale datasets can require significant computing power and careful handling of sensitive data. This research proposes a new method for training GNNs on small devices, like smartphones, by dividing the data into smaller pieces and using a peer-to-peer (p2p) network for communication between devices. This approach allows the devices to work together and learn from the data while keeping sensitive information private. The main contributions of this research are threefold: (1) examining existing ways to divide network data and how they can be used for training GNNs on small devices, (2) improving the training process by creating a localized, decentralized network of devices that can communicate and learn together, and (3) testing the method on different types of datasets and GNN models, showing that it works well across a variety of situations. To sum up, this research offers a novel way to train GNNs on small devices, allowing for more efficient learning and better protection of sensitive information.
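A minimal sketch of the two ideas named in this abstract, capacity-proportional partitioning and partition-ratio-weighted aggregation; the capacity values and model layout are hypothetical, and the real ALBP/GraphDHT logic (DHT routing, GNN training) is not reproduced:

```python
import numpy as np

def capability_partition(n_graph_nodes, device_capacities):
    """Split a graph's nodes across devices in proportion to their
    compute capacity, approximating load-balanced partitioning."""
    caps = np.asarray(device_capacities, dtype=float)
    ratios = caps / caps.sum()
    counts = np.floor(ratios * n_graph_nodes).astype(int)
    counts[0] += n_graph_nodes - counts.sum()   # absorb rounding remainder
    return counts, ratios

def aggregate_by_ratio(device_models, ratios):
    """Average device models weighted by their partition ratios."""
    return [sum(r * model[i] for model, r in zip(device_models, ratios))
            for i in range(len(device_models[0]))]

# Hypothetical setup: 10,000 graph nodes over four unequal devices.
counts, ratios = capability_partition(10_000, device_capacities=[4, 2, 1, 1])
rng = np.random.default_rng(2)
models = [[rng.normal(size=(8, 4))] for _ in ratios]
global_model = aggregate_by_ratio(models, ratios)
```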
37

Differentially Private Federated Learning Algorithms for Sparse Basis Recovery

Ajinkya K Mulay (18823252) 14 June 2024 (has links)
Sparse basis recovery is an important learning problem when the number of model dimensions (p) is much larger than the number of samples (n). However, there has been little work that studies sparse basis recovery in the Federated Learning (FL) setting, where the Differential Privacy (DP) of the client data must also be simultaneously protected. Notably, the performance guarantees of existing DP-FL algorithms (such as DP-SGD) degrade significantly when the system is ill-determined (i.e., p >> n), and thus they fail to accurately learn the true underlying sparse model. The goal of my thesis is therefore to develop DP-FL sparse basis recovery algorithms that can provably recover the true underlying sparse basis accurately even when p >> n, while still guaranteeing the differential privacy of the client data.

During my PhD studies, we developed three DP-FL sparse basis recovery algorithms for this purpose. Our first algorithm, SPriFed-OMP, based on the Orthogonal Matching Pursuit (OMP) algorithm, can achieve high accuracy even when n = O(sqrt(p)) under the stronger Restricted Isometry Property (RIP) assumption for least-squares problems. Our second algorithm, Humming-Bird, based on a carefully modified variant of the Forward-Backward Algorithm (FoBA), can achieve differentially private sparse recovery for the same setup while requiring the much weaker Restricted Strong Convexity (RSC) condition. We further extend Humming-Bird to support loss functions beyond least squares that satisfy the RSC condition. To the best of our knowledge, these are the first DP-FL results guaranteeing sparse basis recovery in the p >> n setting.
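To make the flavour of the first algorithm concrete, here is a simplified noisy-OMP sketch: each iteration selects the feature most correlated with the residual after adding Gaussian noise, a stand-in for the private selection step in SPriFed-OMP. The noise calibration, federated secure aggregation, and actual DP accounting are omitted, so this is illustrative only:

```python
import numpy as np

def dp_omp_sketch(X, y, sparsity, noise_scale, seed=0):
    """Noisy OMP: greedily grow a support set by picking the feature
    most correlated with the residual, with Gaussian noise added to
    the correlations (a simplified, non-calibrated DP-style step)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    support, residual = [], y.copy()
    for _ in range(sparsity):
        corr = X.T @ residual / n
        corr += rng.normal(scale=noise_scale, size=p)  # noisy selection
        corr[support] = 0.0                            # skip chosen features
        support.append(int(np.argmax(np.abs(corr))))
        beta, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ beta            # refit on support
    return support, beta

# Toy ill-determined problem: p >> n with a 3-sparse ground truth.
rng = np.random.default_rng(4)
X = rng.normal(size=(50, 500))
beta_true = np.zeros(500); beta_true[[3, 70, 410]] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.01 * rng.normal(size=50)
support, beta = dp_omp_sketch(X, y, sparsity=3, noise_scale=0.05)
```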
38

Fair and Efficient Federated Learning for Network Optimization with Heteroscedastic Data

Welander, Andreas January 2024 (has links)
The distributed and privacy-sensitive nature of cellular networks makes them strong candidates for optimization using Federated Learning, but this exposes them to a problem inherent to the learning paradigm: performance inequality due to heterogeneous client data distributions. The prevailing approach of enforcing uniform client performance ignores client-specific performance limits arising from different levels of irreducible uncertainty in their data, resulting in deteriorated network performance. To address this issue, this thesis introduces two novel federated algorithms designed to enhance learning efficiency and ensure fairness in the presence of heteroscedastic noise, reflecting the distributive-justice principles of utilitarianism and equality. Under these circumstances, the proposed algorithms are shown to significantly improve both overall performance and performance fairness. The deployment of these algorithms promises a dual benefit: improved network performance and a fairer distribution of service quality for end users.
39

A Study on Private and Secure Federated Learning

Kato, Fumiyuki 25 March 2024 (has links)
Kyoto University / New doctoral program / Doctor of Informatics / Kō No. 25427 / Jōhaku No. 865 / Department of Social Informatics, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Takayuki Ito, Professor Tomohiro Kuroda, Professor Yasuo Okabe, Masatoshi Yoshikawa (Professor Emeritus, Kyoto University) / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
40

Comparing decentralized learning to Federated Learning when training Deep Neural Networks under churn

Vikström, Johan January 2021 (has links)
Decentralized Machine Learning could address some problematic facets of Federated Learning. There is no central server acting as an arbiter of who or what may benefit from the Machine Learning models created from the vast amount of data becoming available in recent years. It could also increase the reliability and scalability of Machine Learning systems, thereby increasing the benefit of having more data accessible. Gossip Learning is such a protocol, but it has primarily been designed with linear models in mind. How does Gossip Learning perform when training Deep Neural Networks? Could it be a viable alternative to Federated Learning? In this thesis, we implement Gossip Learning using two different model merging strategies. We also design and implement two extensions to this protocol with the goal of achieving higher performance when training under churn. The training methods are compared on two tasks: image classification on the Federated Extended MNIST dataset and time-series forecasting on the NN5 dataset. Additionally, we run an experiment in which learners churn, alternating between being available and unavailable. We find that Gossip Learning performs slightly better in settings where learners do not churn but is vastly outperformed in the setting where they do.
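For context, here is a minimal sketch of a gossip merge step of the kind this thesis builds on, using age-weighted averaging (one common merging strategy in the Gossip Learning literature, cf. Ormándi et al.); the exact merging strategies and churn extensions evaluated in the thesis are not reproduced:

```python
import numpy as np

def gossip_merge(local_model, local_age, recv_model, recv_age):
    """Merge a model received from a random peer into the local one,
    weighting by model age so that longer-trained models dominate."""
    w = recv_age / (local_age + recv_age)
    merged = [(1 - w) * a + w * b for a, b in zip(local_model, recv_model)]
    return merged, max(local_age, recv_age)

# Toy merge between two peers' two-layer models.
rng = np.random.default_rng(3)
peer_a = [rng.normal(size=(4, 2)), rng.normal(size=2)]
peer_b = [rng.normal(size=(4, 2)), rng.normal(size=2)]
merged, age = gossip_merge(peer_a, local_age=3, recv_model=peer_b, recv_age=5)
```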
