11 |
Privacy-aware Federated Learning with Global Differential Privacy. Airody Suresh, Spoorthi. 31 January 2023 (has links)
There is an increasing need for low-power neural systems as neural networks are deployed more widely on embedded devices with limited resources. Spiking neural networks (SNNs) are proving to be a more energy-efficient alternative to conventional artificial neural networks (ANNs), which are recognized as computationally heavy. Despite this significance, little attention has been paid to training SNNs with large-scale distributed machine learning techniques such as Federated Learning (FL). Because federated learning involves many energy-constrained devices, there is a significant opportunity to exploit the energy efficiency offered by SNNs. However, the real-world communication constraints of an FL system must also be addressed; this is done with the help of three communication reduction techniques, namely model compression, partial device participation, and periodic aggregation. Furthermore, the convergence of federated learning systems is also affected by data heterogeneity.
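The three reduction techniques can be illustrated in a small simulation (a hypothetical sketch, not the thesis's implementation; all function names, the linear model, and the default values are assumptions): partial device participation samples a fraction of clients each round, periodic aggregation lets each client take several local steps per upload, and model compression sparsifies the uploaded update.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_sparsify(update, k):
    """Model compression: transmit only the k largest-magnitude entries."""
    out = np.zeros_like(update)
    idx = np.argsort(np.abs(update))[-k:]
    out[idx] = update[idx]
    return out

def federated_round(global_w, client_data, frac=0.5, local_steps=5, k=2, lr=0.1):
    """One FL round combining the three communication reduction techniques."""
    n = len(client_data)
    # Partial device participation: only a random fraction of clients upload.
    chosen = rng.choice(n, size=max(1, int(frac * n)), replace=False)
    updates = []
    for c in chosen:
        w = global_w.copy()
        X, y = client_data[c]
        # Periodic aggregation: several local gradient steps per upload.
        for _ in range(local_steps):
            w -= lr * 2 * X.T @ (X @ w - y) / len(y)
        # Model compression: sparsify the update before transmission.
        updates.append(top_k_sparsify(w - global_w, k))
    return global_w + np.mean(updates, axis=0)
```

On a toy least-squares problem, repeated rounds of this loop still drive the global model toward the clients' shared optimum despite the reduced communication.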
Federated learning systems are capable of protecting the private data of clients from adversaries.
However, confidential information can still be revealed by analyzing the uploaded client parameters. To combat privacy attacks on FL systems, various attempts have been made to incorporate differential privacy within the framework. In this thesis, we investigate the trade-offs between communication costs and training variance in a Federated Learning system with Differential Privacy applied at the parameter server (the curator model). / Master of Science / Federated Learning is a decentralized method of training neural network models; it employs several participating devices to independently learn a model on their local data partitions.
These local models are then aggregated at a central server to achieve performance comparable to training the model centrally. Federated Learning systems, however, accumulate a communication overhead, which various communication reduction techniques can lower. Spiking Neural Networks, an energy-efficient alternative to Artificial Neural Networks, are well suited to Federated Learning systems, since FL systems consist of a network of energy-constrained devices.
Federated learning systems are helpful in preserving the privacy of data in the system.
However, an attacker can still obtain meaningful information from the parameters transmitted during a training session. To this end, differential privacy techniques are utilized to combat privacy concerns in Federated Learning systems. In this thesis, we compare and contrast the communication costs and parameters of a federated learning system with differential privacy applied to it.
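In the curator model, the privacy mechanism runs at the parameter server: it clips each client's contribution to bound sensitivity and perturbs the aggregate before broadcasting it back. The following is a minimal Gaussian-mechanism sketch (illustrative only; the `clip_norm` and `noise_multiplier` names and defaults are assumptions, not the thesis's settings):

```python
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Curator-model DP aggregation: clip each client's update to bound its
    sensitivity, average, then add Gaussian noise calibrated to the clip."""
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = [u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
               for u in client_updates]
    avg = np.mean(clipped, axis=0)
    # Sensitivity of the mean to any one client is clip_norm / n.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

Raising `noise_multiplier` strengthens privacy but increases training variance, which is exactly the trade-off against communication cost that the thesis studies.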
|
12 |
Towards a Resource Efficient Framework for Distributed Deep Learning Applications. Han, Jingoo. 24 August 2022 (has links)
Distributed deep learning has achieved tremendous success in solving scientific problems in research and discovery over the past years. Deep learning training is challenging because it requires training on large-scale, massive datasets, especially with graphics processing units (GPUs) in the latest high-performance computing (HPC) supercomputing systems. HPC architectures exhibit different performance trends in training throughput than those reported in existing studies. Multiple GPUs and high-speed interconnects are used for distributed deep learning on HPC systems, yet extant distributed deep learning systems are designed for non-HPC systems without considering efficiency, leading to under-utilization of expensive HPC hardware. In addition, increasing resource heterogeneity has a negative effect on resource efficiency in distributed deep learning methods, including federated learning. Thus, it is important to address the increasing demand for both high performance and high resource efficiency in distributed deep learning systems, including the latest HPC systems and federated learning systems.
In this dissertation, we explore and design novel methods and frameworks to improve resource efficiency of distributed deep learning training. We address the following five important topics: performance analysis on deep learning for supercomputers, GPU-aware deep learning job scheduling, topology-aware virtual GPU training, heterogeneity-aware adaptive scheduling, and token-based incentive algorithm.
In the first chapter (Chapter 3), we analyze the performance trends of distributed deep learning on the latest HPC systems, such as the Summitdev supercomputer at Oak Ridge National Laboratory. We provide insights through a comprehensive performance study of how deep learning workloads affect the performance of HPC systems with large-scale parallel processing capabilities. In the second part (Chapter 4), we design and develop MARBLE, a novel deep learning job scheduler that accounts for the non-linear scalability of GPUs in a single node and improves GPU utilization by sharing GPUs among multiple deep learning training workloads. The third part of this dissertation (Chapter 5) proposes TOPAZ, a topology-aware virtual GPU training system designed specifically for distributed deep learning on recent HPC systems. In the fourth chapter (Chapter 6), we explore a holistic federated learning scheduler that employs a heterogeneity-aware adaptive selection method to improve resource efficiency and accuracy, coupled with resource usage profiling and accuracy monitoring to achieve multiple goals. In the fifth part of this dissertation (Chapter 7), we focus on how to provide incentives to participants according to their contribution to the final federated model, with tokens used as a means of paying for the services of participants and the training infrastructure. / Doctor of Philosophy / Distributed deep learning is widely used to solve critical scientific problems with massive datasets. However, to accelerate scientific discovery, resource efficiency is also important for deployment on real-world systems, such as high-performance computing (HPC) systems. Deploying existing deep learning applications on these distributed systems may lead to under-utilization of HPC hardware resources.
In addition, extreme resource heterogeneity has negative effects on distributed deep learning training. Much of the prior work has not focused on the specific challenges of optimizing resource utilization in distributed deep learning, including HPC systems and heterogeneous federated systems. This dissertation addresses the challenges in improving the resource efficiency of distributed deep learning applications through performance analysis of deep learning for supercomputers, GPU-aware deep learning job scheduling, topology-aware virtual GPU training, and heterogeneity-aware adaptive federated learning scheduling and incentive algorithms.
|
13 |
Towards Communication-Efficient Federated Learning Through Particle Swarm Optimization and Knowledge Distillation. Zaman, Saika. 01 May 2024 (has links) (PDF)
The widespread popularity of Federated Learning (FL) has led researchers to delve into its various facets, primarily focusing on personalization, fair resource allocation, privacy, and global optimization, with less attention paid to the crucial aspect of ensuring efficient and cost-optimized communication between the FL server and its agents. A major challenge in achieving successful model training and inference on distributed edge devices lies in optimizing communication costs amid resource constraints, such as limited bandwidth, and in selecting efficient agents. In resource-limited FL scenarios, where agents often rely on unstable networks, the transmission of large model weights can substantially degrade model accuracy and increase communication latency between the FL server and agents. To address this challenge, we propose a novel strategy that integrates a knowledge distillation technique with a Particle Swarm Optimization (PSO)-based FL method. This approach transmits model scores instead of weights, significantly reducing communication overhead and enhancing model accuracy in unstable environments. Our method, with potential applications in smart city services and industrial IoT, marks a significant step towards reducing network communication costs and mitigating accuracy loss, thereby optimizing the communication efficiency between the FL server and its agents.
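A minimal sketch of the score-based idea (our reading of the abstract, not the authors' code; the function names, the linear model, and the hyperparameters are assumptions): every agent trains locally and uploads only a scalar score, and the server requests full weights only from the best-scoring agent, the swarm's "global best".

```python
import numpy as np

def local_score(w, X, y):
    """Scalar fitness an agent uploads instead of its full weight vector."""
    return float(np.mean((X @ w - y) ** 2))

def score_based_round(global_w, clients, local_steps=10, lr=0.05):
    """One round: all agents send scores; only the global best sends weights."""
    trained, scores = [], []
    for X, y in clients:
        w = global_w.copy()
        for _ in range(local_steps):
            w -= lr * 2 * X.T @ (X @ w - y) / len(y)   # local training
        trained.append(w)
        scores.append(local_score(w, X, y))            # cheap scalar upload
    best = int(np.argmin(scores))                      # PSO-style global best
    return trained[best]                               # one full-weight transfer
```

Per round, n scalars plus one weight vector cross the network instead of n weight vectors, which is the communication saving the abstract describes.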
|
14 |
Federated Machine Learning Architectures for Image Classification. Albahaca, Juan. January 2024 (has links)
In this thesis, we explore a new method for binary image classification of semiconductor components using federated learning at Mycronic AB, enabling model training on Pick and Place (PnP) machines without centralizing sensitive data. Initially, we set a baseline by choosing a suitable Convolutional Neural Network (CNN) architecture, implementing data preprocessing methods, and optimizing various hyperparameters. We then assess various federated learning algorithms to manage the inherent statistical heterogeneity in distributed datasets. Our approach is validated using a real-world dataset annotated by Mycronic, confirming that our findings are applicable to real industrial scenarios.
|
15 |
Efficient and Effective Deep Learning Methods for Computer Vision in Centralized and Distributed Applications. Mendieta, Matias. 01 January 2024 (has links) (PDF)
In the rapidly advancing field of computer vision, deep learning has driven significant technological transformations. However, the widespread deployment of these technologies often encounters efficiency challenges, such as high memory usage, demanding computational resources, and extensive communication overhead. Efficiency has become crucial for both centralized and distributed applications of deep learning, ensuring scalability, real-world applicability, and broad accessibility. In distributed settings, federated learning (FL) enables collaborative model training across multiple clients while maintaining data privacy. Despite its promise, FL faces challenges due to clients' constraints in memory, computational power, and bandwidth. Centralized training systems also require high efficiency, where optimizing compute resources during training and inference, as well as label efficiency, can significantly impact the performance and practicality of such models. Addressing these efficiency challenges in both federated learning and centralized training systems promises to provide significant advancements, enabling more extensive and effective deployment of machine learning models across various domains.
To this end, this dissertation addresses many key challenges. First, in federated learning, a novel method is introduced to optimize local model performance while reducing memory and computational demands. Additionally, a novel approach is presented to reduce communication costs by minimizing model update frequency across clients through the use of generative models. In the centralized domain, this dissertation further develops a novel training paradigm for geospatial foundation models using a multi-objective continual pretraining strategy. This improves label efficiency and significantly reduces computational requirements for training large-scale models. Overall, this dissertation advances deep learning efficiency by improving memory utilization, computational demands, and communication efficiency, essential for scalable and effective application of deep learning in both distributed and centralized environments.
|
16 |
Nest-site selection in cooperatively breeding Pohnpei Micronesian Kingfishers (Halcyon cinnamomina reichenbachii): does nest-site abundance limit reproductive opportunities? Kesler, Dylan C. 12 March 2002 (has links)
Despite their inherent importance and utility as ecological examples, island species are among the most endangered and least studied groups. Guam Micronesian Kingfishers (Halcyon cinnamomina cinnamomina) exemplify the plight of insular biota as a critically endangered and understudied island bird that went extinct in the wild before they could be studied in their native habitat. Guam kingfishers currently exist only as a captive population in U.S. zoos. Using radio telemetry and visual observations of a wild subspecies of Micronesian Kingfisher (H. c. reichenbachii) from the island of Pohnpei, this study examined factors critical for the persistence of both the Guam and Pohnpei kingfishers. Behavioral observations indicated that the birds employ a cooperative social system, which included non-parent individuals that assisted in reproductive attempts of others. Because resource limitations have been cited as a potentially important factor in the evolution of cooperative behaviors and in conservation, this investigation assessed the characteristics and availability of a potentially limited nesting resource, arboreal termite nests. First, the characteristics of termite nests, or termitaria, selected by Micronesian Kingfishers for use as nest sites were modeled. Results suggested that Micronesian Kingfishers selected termitaria that were higher from the ground and larger in volume than unused termitaria. Further, there was little evidence that birds selected from among termitaria based on proximity to forest edges and foraging areas, placement on a tree, vegetation characteristics, or microclimate. Second, the number of termitaria with characteristics indicative of nest sites was assessed to determine if reproductive opportunities might be limited by the abundance of suitable termitaria. Results from this analysis suggested that although fewer termitaria existed with characteristics similar to those used for nesting, reproductive opportunities did not appear to be limited by their abundance. Therefore, while conservation strategies should be directed towards providing ample and appropriate nesting substrates, I found no evidence suggesting that termitaria abundance played a role in the evolution of cooperative breeding in Pohnpei Micronesian Kingfishers. Findings presented here will hopefully enhance our understanding of cooperative behaviors, as well as improve conservation efforts for Micronesian Kingfishers and other insular avifauna. / Graduation date: 2002
|
17 |
Evaluation of a traditional food for health intervention in Pohnpei, Federated States of Micronesia. Kaufer, Laura Allison Iler, 1980-. January 2008 (has links)
As a nation, Federated States of Micronesia (FSM) faces increasing rates of noncommunicable diseases related to the replacement of the traditional diet with processed imported food and adoption of sedentary lifestyles. To reverse this trend, a food-based intervention in Pohnpei, FSM, used various approaches to promote local food (LF) production and consumption. Evaluation of the intervention in one community assessed changes in diet and health status in a random sample of households (n=47). Process indicators were also examined. Results from dietary assessments indicated increased (110%) provitamin A carotenoid intake; increased frequency of consumption of local banana (53%), giant swamp taro (475%), and local vegetables (130%); and increased diversity from LF. There was no change in health measures. However, exposure to intervention activities was high, and behaviour towards LF appeared to have changed positively. It is recommended that the intervention continue and expand to further affect dietary change and improve health.
|
18 |
Geology and hydrogeology of the island of Pohnpei, Federated States of Micronesia. Spengler, Steven R. January 1990 (has links)
Thesis (Ph. D.)--University of Hawaii at Manoa, 1990. / Includes bibliographical references. / Microfiche. / xiii, 265 leaves, bound ill, maps 29 cm
|
19 |
The Caroline Islands Script. De Voogt, Alexander J. January 1993 (has links)
Thesis (M.A.)--University of Hawaii at Manoa, 1993 / Pacific Islands Studies
|
20 |
Federated Machine Learning for Resource Allocation in Multi-domain Fog Ecosystems. Zhang, Weilin. January 2023 (has links)
The proliferation of the Internet of Things (IoT) has increasingly demanded close proximity between cloud services and end-users. This has incentivised extending cloud resources to the edge, in what is deemed fog computing. The latter is manifesting as an ecosystem of connected clouds, geo-dispersed and of diverse capacities. In such conditions, workload allocation to fog services becomes a non-trivial challenge due to the complexity of the trade-offs. Users' demand at the edge is highly diverse, which does not lend itself to straightforward resource planning. Conversely, running services at the edge leverages proximity, but it comes at a higher operational cost and rapidly increases the risk of straining sparse resources. Consequently, there is a need for intelligent yet scalable allocation solutions that counter the adversity of demand at the edge while efficiently distributing load between the edge and farther clouds. Machine learning is increasingly adopted in resource planning. However, besides privacy concerns, central learning is highly demanding, both computationally and in data supply. Instead, this work proposes a federated deep reinforcement learning system, based on a deep Q-learning network (DQN), for workload distribution in a fog ecosystem. The proposed solution adapts a DQN to optimize the local workload allocations made by single gateways. Federated learning is incorporated to allow multiple gateways in a network to collaboratively build knowledge of users' demand. This knowledge is leveraged to establish consensus on the fraction of workload allocated to different fog nodes, using less data supply and fewer computation resources. The system's performance is evaluated using realistic demand derived from the Google Cluster Workload Traces 2019. Evaluation results show over a 50% reduction in failed allocations when distributing users over a larger number of gateways, given a fixed number of fog nodes.
The results further illustrate the trade-offs between performance and cost under different conditions.
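As a rough illustration of the collaborative-knowledge idea (a single-state, bandit-style stand-in for the work's DQN; the capacities, names, and hyperparameters are made up): each gateway learns node values from its own allocation outcomes, and the gateways then average their estimates, the federated step, to share demand knowledge without exchanging raw data.

```python
import numpy as np

N_NODES = 3  # fog nodes a gateway can route workload to

def train_gateway(capacity, episodes=500, eps=0.2, lr=0.1, seed=0):
    """One gateway learns node values by trial and error:
    reward +1 for a successful allocation, -1 for a failed one."""
    rng = np.random.default_rng(seed)
    q = np.zeros(N_NODES)
    for _ in range(episodes):
        if rng.random() < eps:            # explore a random node
            a = int(rng.integers(N_NODES))
        else:                             # exploit the current estimate
            a = int(np.argmax(q))
        reward = 1.0 if rng.random() < capacity[a] else -1.0
        q[a] += lr * (reward - q[a])      # running value estimate
    return q

def federated_average(q_tables):
    """Federated step: gateways pool their value estimates."""
    return np.mean(q_tables, axis=0)
```

Averaging the per-gateway estimates steers every gateway toward the nodes with the highest success rates, mirroring the consensus on workload fractions described above.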
|