1

Towards a Progressive E-health Application Framework

Lu, Zhirui 29 March 2022 (has links)
Recent technological advances have opened many new possibilities for health applications. Next-generation networks allow real-time monitoring, collaboration, and diagnosis. Machine Learning and Deep Learning enable modeling and understanding of complex and enormous datasets. Yet these innovations also pose new challenges to application designers and maintainers. To deliver high-standard e-health services while following regulations, Quality of Service requirements need to be fulfilled and high accuracy needs to be achieved, let alone all the security defenses needed to protect sensitive data from leaking. In this thesis, we present a collection of works towards a progressive framework for building secure, responsive, and intelligent e-health applications, focusing on three major components: Analyze, Acquire, and Authenticate. The framework is progressive, as it can be applied to various architectures, growing with the project and adapting to its needs. For newer decentralized applications that perform data analysis locally on users' devices, powerful models outperforming existing solutions can be built using Deep Learning, while Federated Learning provides a further privacy guarantee against data leakage, as shown in the case of a sleep stage prediction task using smartwatch data. For traditional centralized applications performing complex computations on the cloud or on-premise clusters, a delay estimation model based on queueing theory is proposed and verified using simulation to provide Quality of Service guarantees for the data acquisition process in a sensor network. We also explore the novel idea of using molecular communication for authentication, named Molecular Key, enabling the incorporation of environmental information into security policy. We envision this framework can provide stepping stones for future e-health applications.
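The abstract does not spell out the queueing model, but a minimal sketch, assuming a simple M/M/1 queue with illustrative arrival and service rates, shows how a queueing-theory delay estimate can back a Quality of Service check for sensor data acquisition:

```python
# Minimal sketch of queueing-based delay estimation, assuming an M/M/1 model
# (the thesis's actual model is not given here; the arrival rate lam and
# service rate mu below are illustrative parameters, not its values).

def mm1_mean_delay(lam: float, mu: float) -> float:
    """Expected sojourn time (waiting + service) in an M/M/1 queue: 1/(mu - lam)."""
    if lam >= mu:
        raise ValueError("Unstable queue: arrival rate must be below service rate.")
    return 1.0 / (mu - lam)

def meets_qos(lam: float, mu: float, deadline: float) -> bool:
    """Check whether the mean acquisition delay satisfies a QoS deadline."""
    return mm1_mean_delay(lam, mu) <= deadline

if __name__ == "__main__":
    lam, mu = 80.0, 100.0  # packets/s arriving vs. served (hypothetical)
    print(f"mean delay: {mm1_mean_delay(lam, mu) * 1e3:.1f} ms")
    print("QoS met (20 ms deadline):", meets_qos(lam, mu, 0.020))
```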
2

On Seven Fundamental Optimization Challenges in Machine Learning

Mishchenko, Konstantin 14 October 2021 (has links)
Many recent successes of machine learning went hand in hand with advances in optimization. The exchange of ideas between these fields has worked both ways, with learning building on standard optimization procedures such as gradient descent, as well as with new directions in optimization theory stemming from machine learning applications. In this thesis, we discuss new developments in optimization inspired by the needs and practice of machine learning, federated learning, and data science. In particular, we consider seven key challenges of mathematical optimization that are relevant to modern machine learning applications, and develop a solution to each. Our first contribution is the resolution of a key open problem in Federated Learning: we establish the first theoretical guarantees for the famous Local SGD algorithm in the crucially important heterogeneous data regime. As the second challenge, we close the gap between the upper and lower bounds for the theory of two incremental algorithms known as Random Reshuffling (RR) and Shuffle-Once that are widely used in practice, and in fact set as the default data selection strategies for SGD in modern machine learning software. Our third contribution can be seen as a combination of our new theory for proximal RR and Local SGD yielding a new algorithm, which we call FedRR. Unlike Local SGD, FedRR is the first local first-order method that can provably beat gradient descent in communication complexity in the heterogeneous data regime. The fourth challenge is related to the class of adaptive methods. In particular, we present the first parameter-free stepsize rule for gradient descent that provably works for any locally smooth convex objective. The fifth challenge we resolve in the affirmative is the development of an algorithm for distributed optimization with quantized updates that preserves the global linear convergence of gradient descent. Finally, in our sixth and seventh challenges, we develop new VR mechanisms applicable to the non-smooth setting based on proximal operators and matrix splitting. In all cases, our theory is simpler, tighter, and uses fewer assumptions than the prior literature. We accompany each chapter with numerical experiments to show the tightness of the proposed theoretical results.
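As a rough illustration of the Local SGD algorithm analyzed in the first contribution, here is a minimal sketch in which heterogeneous clients take several local gradient steps between averaging rounds; the quadratic objectives, step size, and synchronization interval are illustrative assumptions, not the thesis's setup:

```python
import numpy as np

# Hedged sketch of Local SGD under heterogeneous data: each client runs H
# local gradient steps on its own least-squares objective, then the server
# averages the iterates once per communication round.

rng = np.random.default_rng(0)
d, n_clients, H, rounds, lr = 5, 4, 10, 50, 0.05

# Heterogeneous data: each client has its own (A_i, b_i) with a different optimum.
A = [rng.normal(size=(20, d)) for _ in range(n_clients)]
b = [A_i @ rng.normal(size=d) for A_i in A]

def grad(i, x):
    """Gradient of client i's local least-squares loss."""
    return A[i].T @ (A[i] @ x - b[i]) / len(b[i])

x_global = np.zeros(d)
for _ in range(rounds):
    local_iterates = []
    for i in range(n_clients):
        x = x_global.copy()
        for _ in range(H):               # H local steps, no communication
            x -= lr * grad(i, x)
        local_iterates.append(x)
    x_global = np.mean(local_iterates, axis=0)  # communicate once per round

print("final global iterate:", np.round(x_global, 3))
```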
3

Privacy-aware Federated Learning with Global Differential Privacy

Airody Suresh, Spoorthi 31 January 2023 (has links)
There is an increasing need for low-power neural systems as neural networks become more widely used in embedded devices with limited resources. Spiking neural networks (SNNs) are proving to be a more energy-efficient alternative to conventional artificial neural networks (ANNs), which are recognized for being computationally heavy. Despite their significance, not enough attention has been paid to training SNNs with large-scale distributed machine learning techniques such as Federated Learning (FL). As federated learning involves many energy-constrained devices, there is a significant opportunity to take advantage of the energy efficiency offered by SNNs. However, the real-world communication constraints in an FL system must also be addressed, and this is done with the help of three communication reduction techniques, namely model compression, partial device participation, and periodic aggregation. Furthermore, the convergence of federated learning systems is also affected by data heterogeneity. Federated learning systems are capable of protecting the private data of clients from adversaries. However, by analyzing the uploaded client parameters, confidential information can still be revealed. To combat privacy attacks on FL systems, various attempts have been made to incorporate differential privacy within the framework. In this thesis, we investigate the trade-offs between communication costs and training variance in a Federated Learning system with Differential Privacy applied at the parameter server (curator model). / Master of Science / Federated Learning is a decentralized method of training neural network models; it employs several participating devices to independently learn a model on their local data partitions. These local models are then aggregated at a central server to achieve the same performance as if the model had been trained centrally. Federated Learning systems, however, accumulate a communication overhead, and various communication reduction techniques can be used to lower these costs. Spiking Neural Networks, being the energy-efficient alternative to Artificial Neural Networks, can be utilized in Federated Learning systems, since FL systems consist of a network of energy-constrained devices. Federated learning systems help preserve the privacy of data in the system. However, an attacker can still obtain meaningful information from the parameters that are transmitted during a session. To this end, differential privacy techniques are utilized to combat privacy concerns in Federated Learning systems. In this thesis, we compare and contrast different communication costs and parameters of a federated learning system with differential privacy applied to it.
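A minimal sketch of the curator-model setup studied here, assuming the common clip-and-add-Gaussian-noise recipe at the parameter server; the clipping norm and noise multiplier are illustrative, not the thesis's values:

```python
import numpy as np

# Hedged sketch of global (curator-model) differential privacy in FL: the
# server clips each client update to L2 norm C, averages, and adds Gaussian
# noise scaled to the clipped sensitivity.

rng = np.random.default_rng(1)

def dp_aggregate(updates, C=1.0, noise_multiplier=1.1):
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        clipped.append(u if norm == 0 else u * min(1.0, C / norm))  # clip
    # Noise std sigma*C/n for a mean of n clipped updates (illustrative).
    noise = rng.normal(0.0, noise_multiplier * C / len(updates),
                       size=updates[0].shape)
    return np.mean(clipped, axis=0) + noise

updates = [rng.normal(size=10) for _ in range(8)]  # stand-in client updates
print(dp_aggregate(updates))
```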
4

Towards a Resource Efficient Framework for Distributed Deep Learning Applications

Han, Jingoo 24 August 2022 (has links)
Distributed deep learning has achieved tremendous success in solving scientific problems in research and discovery over the past years. Deep learning training is quite challenging because it requires training on large-scale, massive datasets, especially with graphics processing units (GPUs) in the latest high-performance computing (HPC) supercomputing systems. HPC architectures bring different performance trends in training throughput compared to those reported in existing studies. Multiple GPUs and high-speed interconnects are used for distributed deep learning on HPC systems. Extant distributed deep learning systems are designed for non-HPC systems without considering efficiency, leading to under-utilization of expensive HPC hardware. In addition, increasing resource heterogeneity has a negative effect on resource efficiency in distributed deep learning methods, including federated learning. Thus, it is important to address the increasing demand for both high performance and high resource efficiency in distributed deep learning systems, including the latest HPC systems and federated learning systems. In this dissertation, we explore and design novel methods and frameworks to improve the resource efficiency of distributed deep learning training. We address the following five important topics: performance analysis of deep learning for supercomputers, GPU-aware deep learning job scheduling, topology-aware virtual GPU training, heterogeneity-aware adaptive scheduling, and a token-based incentive algorithm. In the first part (Chapter 3), we analyze the performance trends of distributed deep learning on the latest HPC systems, such as the Summitdev supercomputer at Oak Ridge National Laboratory. We provide insights through a comprehensive performance study of how deep learning workloads affect the performance of HPC systems with large-scale parallel processing capabilities. In the second part (Chapter 4), we design and develop MARBLE, a novel deep learning job scheduler that accounts for the non-linear scalability of GPUs in a single node and improves GPU utilization by sharing GPUs among multiple deep learning training workloads. The third part of this dissertation (Chapter 5) proposes TOPAZ, a topology-aware virtual GPU training system specifically designed for distributed deep learning on recent HPC systems. In the fourth part (Chapter 6), we explore a holistic federated learning scheduler that employs a heterogeneity-aware adaptive selection method, coupled with resource usage profiling and accuracy monitoring, to improve both resource efficiency and accuracy. In the fifth part (Chapter 7), we focus on how to provide incentives to participants according to their contribution to the performance of the final federated model, with tokens used as a means of paying for the participants' services and the training infrastructure. / Doctor of Philosophy / Distributed deep learning is widely used for solving critical scientific problems with massive datasets. However, to accelerate scientific discovery, resource efficiency is also important for deployment on real-world systems, such as high-performance computing (HPC) systems. Deploying existing deep learning applications on these distributed systems may lead to underutilization of HPC hardware resources. In addition, extreme resource heterogeneity has negative effects on distributed deep learning training, yet much of the prior work has not focused on the specific challenges of distributed deep learning on HPC systems and heterogeneous federated systems in terms of optimizing resource utilization. This dissertation addresses the challenges of improving the resource efficiency of distributed deep learning applications through performance analysis of deep learning for supercomputers, GPU-aware deep learning job scheduling, topology-aware virtual GPU training, and heterogeneity-aware adaptive federated learning scheduling and incentive algorithms.
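As a rough sketch of the heterogeneity-aware adaptive selection idea in Chapter 6, the following assumes profiled per-client throughput and a simple exploit/explore split; the selection rule and all numbers are illustrative stand-ins, not the dissertation's algorithm:

```python
import random

# Hedged sketch of heterogeneity-aware client selection: mostly pick clients
# whose profiled throughput keeps the round fast, but reserve a fraction of
# slots for slower clients so their data still contributes to accuracy.

random.seed(0)
profiles = {f"client{i}": random.uniform(10, 100) for i in range(20)}  # samples/s

def select_clients(profiles, k=5, explore_frac=0.2):
    ranked = sorted(profiles, key=profiles.get, reverse=True)
    n_explore = max(1, int(k * explore_frac))
    fast = ranked[: k - n_explore]                           # exploit fast clients
    slow = random.sample(ranked[k - n_explore:], n_explore)  # explore the rest
    return fast + slow

print(select_clients(profiles))
```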
5

Towards Communication-Efficient Federated Learning Through Particle Swarm Optimization and Knowledge Distillation

Zaman, Saika 01 May 2024 (has links) (PDF)
The widespread popularity of Federated Learning (FL) has led researchers to delve into its various facets, primarily focusing on personalization, fair resource allocation, privacy, and global optimization, with less attention paid to the crucial aspect of ensuring efficient and cost-optimized communication between the FL server and its agents. A major challenge in achieving successful model training and inference on distributed edge devices lies in optimizing communication costs amid resource constraints, such as limited bandwidth, and in selecting efficient agents. In resource-limited FL scenarios, where agents often rely on unstable networks, the transmission of large model weights can substantially degrade model accuracy and increase communication latency between the FL server and agents. Addressing this challenge, we propose a novel strategy that integrates a knowledge distillation technique with a Particle Swarm Optimization (PSO)-based FL method. This approach transmits model scores instead of weights, significantly reducing communication overhead and enhancing model accuracy in unstable environments. Our method, with potential applications in smart city services and industrial IoT, marks a significant step forward in reducing network communication costs and mitigating accuracy loss, thereby optimizing the communication efficiency between the FL server and its agents.
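A minimal sketch of the "transmit scores, not weights" idea: each round, agents upload only a scalar validation score, and the server pulls full weights from the single best-scoring agent, PSO-style. The training and scoring stand-ins below are illustrative assumptions, not the proposed method's details:

```python
import numpy as np

# Hedged sketch of score-based, PSO-style federated communication: per round,
# each agent sends one scalar instead of a full weight vector, and only the
# global-best agent uploads its weights.

rng = np.random.default_rng(2)
n_agents, d = 5, 1000

def local_train(weights):
    return weights + rng.normal(scale=0.01, size=d)   # stand-in for local training

def local_score(weights):
    return -np.linalg.norm(weights - 1.0)             # stand-in validation score

global_w = np.zeros(d)
for rnd in range(20):
    local_ws = [local_train(global_w) for _ in range(n_agents)]
    scores = [local_score(w) for w in local_ws]       # agents upload scalars only
    best = int(np.argmax(scores))
    global_w = local_ws[best]                         # pull weights from best agent

print("best score after 20 rounds:", max(scores))
```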
6

Detecting Distracted Drivers using a Federated Computer Vision Model : With the Help of Federated Learning

Viggesjöö, Joel January 2023 (has links)
One of the most common driving distractions is performing activities that divert your attention away from the road, such as using a phone for texting. To address this issue, techniques such as machine learning and computer vision can be used to identify and notify distracted drivers. A solution for this was presented in an earlier article, using a traditional centralized machine learning approach that achieved good prediction accuracy. As a next step, that article suggested that the created computer vision algorithms could be extended to a federated learning setting to further increase the robustness of the model. Thus, this project extended the centralized machine learning model to a federated learning setting with the aim of preserving the accuracy. Additionally, the project explored quantization techniques to achieve a smaller model while keeping the prediction accuracy, and examined whether data reconstruction methods could be used to further increase privacy for user data while preserving prediction accuracy. The project successfully extended the implementation to a federated learning setting and implemented the quantization techniques for size reduction, but the data reconstruction solution was never implemented due to time constraints. The project used a mixture of Python frameworks to extend the solution to a federated learning setting and to reduce the size of the model, resulting in one decentralized model and three models with sizes reduced by 48%, 70%, and 71% compared to the decentralized model. These models achieved prediction accuracy similar to the centralized model, indicating that the project was a success.
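The project's exact Python frameworks and quantization schemes are not named here; as an illustration of how quantization shrinks a model, the following sketch assumes simple post-training float32-to-int8 weight quantization with a per-tensor scale:

```python
import numpy as np

# Hedged sketch of post-training weight quantization (one common way to
# shrink a model; not necessarily the scheme this project used). Each float32
# tensor is mapped to int8 values plus a single scale factor.

def quantize(w: np.ndarray):
    scale = np.abs(w).max() / 127.0 if w.size else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(3).normal(size=1000).astype(np.float32)
q, s = quantize(w)
print("size reduction:", 1 - q.nbytes / w.nbytes)   # 0.75 for fp32 -> int8
print("max abs error:", np.abs(w - dequantize(q, s)).max())
```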
7

Reinforcement Learning assisted Adaptive difficulty of Proof of Work (PoW) in Blockchain-enabled Federated Learning

Sethi, Prateek 10 August 2023 (has links)
This work addresses the challenge of heterogeneity in blockchain mining, particularly in the context of consortium and private blockchains. The motivation stems from ensuring fairness and efficiency in blockchain technology's Proof of Work (PoW) consensus mechanism. Existing consensus algorithms, such as PoW, PoS, and PoB, have succeeded in public blockchains but face challenges due to heterogeneous miners. This thesis highlights the significance of considering miners' computing power and resources in PoW consensus mechanisms to enhance efficiency and fairness. It explores the implications of heterogeneity in blockchain mining in various applications, such as Federated Learning (FL), which aims to train machine learning models collaboratively across distributed devices. The research objectives of this work involve developing novel RL-based techniques to address the heterogeneity problem in consortium blockchains. Two proposed RL-based approaches, RL-based Miner Selection (RL-MS) and RL-based Miner and Difficulty Selection (RL-MDS), focus on selecting miners and dynamically adapting the difficulty of PoW based on the computing power of the chosen miners. The contributions of this work include the proposed RL-based techniques, modifications to the Ethereum code for dynamic adaptation of Proof of Work Difficulty (PoW-D), integration of the Commonwealth Cyber Initiative (CCI) xG testbed with an AI/ML framework, implementation of a simulator for experimentation, and an evaluation of different RL algorithms. The research also includes additional contributions in Open Radio Access Network (O-RAN) and smart cities. The proposed research has significant implications for achieving fairness and efficiency in blockchain mining in consortium and private blockchains. By leveraging reinforcement learning techniques and considering the heterogeneity of miners, this work contributes to improving the consensus mechanisms and performance of blockchain-based systems. / Master of Science / Technological advancement has led to devices with powerful yet heterogeneous computational resources. Due to the heterogeneity in the compute of miner nodes in a blockchain, there is unfairness in the PoW consensus mechanism: more powerful devices have a higher chance of mining and gaining from the mining process. Additionally, PoW consensus introduces a delay due to the time to mine and the block propagation time. This work uses Reinforcement Learning to solve the challenge of heterogeneity in a private Ethereum blockchain. It also introduces a time constraint to ensure efficient blockchain performance for time-critical applications.
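As a rough sketch of the RL-assisted difficulty adaptation idea (in the spirit of RL-MDS), the following assumes a tabular, bandit-style Q-learning agent that picks a PoW difficulty so a simulated miner's block time stays near a target; the miner model, reward, and all constants are illustrative, not the thesis's Ethereum integration:

```python
import random

# Hedged sketch: a Q-learning-style agent learns which difficulty level keeps
# simulated block times near a target, despite heterogeneous miner hash power.

random.seed(0)
difficulties = [1, 2, 3, 4, 5]
TARGET = 10.0                                  # target block time in seconds
Q = {d: 0.0 for d in difficulties}
alpha, eps = 0.1, 0.2                          # learning rate, exploration rate

def block_time(difficulty, hash_power):
    # Toy miner model: time grows with difficulty, shrinks with hash power.
    return difficulty * 8.0 / hash_power + random.uniform(-0.5, 0.5)

for episode in range(2000):
    hash_power = random.uniform(0.5, 2.0)      # heterogeneous selected miner
    d = random.choice(difficulties) if random.random() < eps else max(Q, key=Q.get)
    reward = -abs(block_time(d, hash_power) - TARGET)
    Q[d] += alpha * (reward - Q[d])            # bandit-style value update

print("learned difficulty preference:", max(Q, key=Q.get))
```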
8

DIFFERENTIAL PRIVACY IN DISTRIBUTED SETTINGS

Zitao Li (14135316) 18 November 2022 (has links)
Data is considered the "new oil" of the information society and digital economy. While many commercial activities and government decisions are based on data, the public raises growing concerns about privacy leakage when their private data are collected and used. In this dissertation, we investigate the privacy risks in settings where the data are distributed across multiple data holders and there is only an untrusted central server. We provide solutions for several problems under this setting with a security notion called differential privacy (DP). Our solutions guarantee that there is only limited and controllable privacy leakage from the data holders, while the utility of the final results, such as model prediction accuracy, can still be comparable to that of the non-private algorithms.

First, we investigate the problem of estimating the distribution over a numerical domain while satisfying local differential privacy (LDP). Our protocol prevents privacy leakage in the data collection phase, in which an untrusted data aggregator (or a server) wants to learn the distribution of private numerical data among all users. The protocol consists of 1) a new reporting mechanism called the square wave (SW) mechanism, which randomizes the user inputs before sharing them with the aggregator; and 2) an Expectation Maximization with Smoothing (EMS) algorithm, which is applied to the aggregated histograms from the SW mechanism to estimate the original distributions.

Second, we study the matrix factorization problem in three federated learning settings with an untrusted server: the vertical, horizontal, and local federated learning settings. We propose a generic algorithmic framework for solving the problem in all three settings and introduce how to adapt the algorithm into differentially private versions to prevent privacy leakage in the training and publishing stages.

Finally, we propose an algorithm for solving the k-means clustering problem in vertical federated learning (VFL). A big challenge in VFL is the lack of a global view of each data point. To overcome this challenge, we propose a lightweight and differentially private set intersection cardinality estimation algorithm based on the Flajolet-Martin (FM) sketch to convey the weight information of the synopsis points. We provide a theoretical utility analysis for the cardinality estimation algorithm and further refine it for better empirical performance.
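A minimal sketch of the Flajolet-Martin primitive the third contribution builds on; the differential privacy noise and the dissertation's refinements are omitted, and the sketch count and correction constant below are the textbook choices:

```python
import hashlib

# Hedged sketch of Flajolet-Martin cardinality estimation: each sketch tracks
# the maximum number of trailing zero bits seen in hashed items; 2^R / phi
# estimates the number of distinct items. (No DP noise is added here.)

def trailing_zeros(x: int) -> int:
    return (x & -x).bit_length() - 1 if x else 32

def fm_estimate(items, num_sketches=64):
    max_r = [0] * num_sketches
    for item in items:
        for k in range(num_sketches):
            digest = hashlib.sha256(f"{k}:{item}".encode()).digest()[:4]
            h = int.from_bytes(digest, "big")
            max_r[k] = max(max_r[k], trailing_zeros(h))
    mean_r = sum(max_r) / num_sketches       # average exponents across sketches
    return 2 ** mean_r / 0.77351             # classic FM correction constant

print(round(fm_estimate(range(10_000))))    # rough estimate of 10,000
```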
9

Learning with constraints on processing and supervision

Acar, Durmuş Alp Emre 30 August 2023 (has links)
Collecting a sufficient amount of data and centralizing it are both costly and privacy-sensitive operations. These practical concerns arise from the communication costs between data-collecting devices and from the personal nature of the data, such as an end user's text messages. The goal is to train generalizable machine learning models under constraints on the data, without sharing or transferring it. In this thesis, we present solutions to several aspects of learning with data constraints, such as processing and supervision. We focus on federated learning, online learning, and learning generalizable representations, and provide setting-specific training recipes. In the first scenario, we tackle a federated learning problem where data is decentralized across different users and should not be centralized. Traditional approaches either ignore the heterogeneity problem or increase communication costs to handle it. Our solution carefully addresses the heterogeneity of user data by imposing a dynamic regularizer that adapts to each user's heterogeneity without extra transmission costs. Theoretically, we establish convergence guarantees. We extend our ideas to personalized federated learning, where the model is customized to each end user, and to heterogeneous federated learning, where users support different model architectures. In the next scenario, we consider online meta-learning, where there is only one user and the user's data distribution changes over time. The goal is to adapt to new data distributions with very few labeled samples from each distribution. A naive way is to store data from all distributions and train a model from scratch once there is sufficient data. Our solution efficiently summarizes the information from each task's data so that the memory footprint does not scale with the number of tasks. Lastly, we aim to train generalizable representations given a dataset. We consider a setting where we have access to a powerful (more complex) teacher model. Traditional methods do not distinguish between data points and force the model to learn all the information from the powerful model. Our proposed method focuses on the learnable input space and carefully distills attainable information from the teacher model by discarding the over-capacity information. We compare our methods with state-of-the-art methods in each setup and show significant performance improvements. Finally, we discuss potential directions for future work.
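A minimal sketch of a dynamic-regularizer client update in the spirit described above, assuming quadratic local objectives: each client keeps a running correction term plus a proximal pull toward the server model, updated with no extra transmission; the regularization weight alpha, step size, and data are all illustrative:

```python
import numpy as np

# Hedged sketch of federated training with a per-client dynamic regularizer:
# each client minimizes f_i(x) - <g_i, x> + (alpha/2)||x - theta||^2, then
# updates its local state g_i <- g_i - alpha*(x_i - theta). Only models are
# communicated, so the regularizer adds no transmission cost.

rng = np.random.default_rng(4)
d, n_clients, alpha, lr, rounds, local_steps = 5, 4, 0.1, 0.05, 100, 20

A = [rng.normal(size=(20, d)) for _ in range(n_clients)]
b = [A_i @ rng.normal(size=d) for A_i in A]
state = [np.zeros(d) for _ in range(n_clients)]   # per-client correction g_i

def grad(i, x):
    return A[i].T @ (A[i] @ x - b[i]) / len(b[i])

theta = np.zeros(d)
for _ in range(rounds):
    new_models = []
    for i in range(n_clients):
        x = theta.copy()
        for _ in range(local_steps):
            # local gradient - correction + proximal pull toward server model
            x -= lr * (grad(i, x) - state[i] + alpha * (x - theta))
        state[i] -= alpha * (x - theta)           # update dynamic regularizer
        new_models.append(x)
    theta = np.mean(new_models, axis=0)           # simplified server average

print("global model:", np.round(theta, 3))
```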
10

Enhancing Privacy in Federated Learning: Mitigating Model Inversion Attacks through Selective Model Transmission and Algorithmic Improvements

Jonsson, Isak January 2024 (has links)
This project aims to identify a sustainable way to construct and train machine learning models. A crucial factor in creating effective machine learning models lies in having access to vast amounts of data. However, this can pose a challenge due to the confidentiality and dispersion of data across various entities. Collecting all the data can thus become a security concern, as transmitting it to a centralized computing location may expose it to security risks. One solution to this issue is federated learning, which utilizes locally trained AI models: instead of transmitting data to a centralized computing location, this approach sends locally trained AI models and combines them into a global model. In recent years, a class of methods called Model Inversion Attacks (MIAs) has emerged, revealing a potential risk of extracting training data from trained AI models. This heightens the vulnerability of sending models instead of data, posing a security risk. In this project, various Model Inversion Attack methodologies are examined to further understand the risk of sending models instead of data. The papers examined showed some success in extracting data from trained AI models, although not enough to raise significant concerns. Nonetheless, future research on MIAs may create security concerns when sending models between parties. Sending only parts of the locally trained models to the global model effectively neutralizes all of the examined Model Inversion Attack methods. However, the results presented in this project make it evident that challenges persist when only parts of a trained model are sent: the difficulty lies in constructing a usable federated learning model while transmitting only parts of each locally trained model. To achieve a good federated learning model, several adjustments had to be made to the algorithm, which showed promising results for the future of federated learning.
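As an illustration of selective model transmission, the following sketch assumes a client that rotates which half of its layers it uploads each round, so no single upload exposes the full model; the layer names and rotation rule are hypothetical, not the project's implementation:

```python
import numpy as np

# Hedged sketch of partial model transmission: each round the client shares
# only a rotating subset of layers, and the server merges just what it
# receives, so no single message contains a complete model for an inversion
# attack to target.

rng = np.random.default_rng(5)
local_model = {f"layer{i}": rng.normal(size=(8, 8)) for i in range(6)}

def partial_update(model, round_idx, share_frac=0.5):
    names = sorted(model)
    k = int(len(names) * share_frac)
    start = (round_idx * k) % len(names)          # rotate which layers are sent
    chosen = [names[(start + j) % len(names)] for j in range(k)]
    return {name: model[name] for name in chosen}

def server_merge(global_model, update):
    for name, weights in update.items():          # overwrite only received layers
        global_model[name] = weights.copy()

global_model = {k: np.zeros_like(v) for k, v in local_model.items()}
for rnd in range(4):
    server_merge(global_model, partial_update(local_model, rnd))

print("layers received:", [k for k, v in global_model.items() if v.any()])
```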
