1

EdgeFn: A Lightweight Customizable Data Store for Serverless Edge Computing

Paidiparthy, Manoj Prabhakar, 01 June 2023
Serverless Edge Computing is an extension of the serverless computing paradigm that enables the deployment and execution of modular software functions on resource-constrained edge devices. However, it poses several challenges due to the edge network's dynamic nature and serverless applications' latency constraints. In this work, we introduce EdgeFn, a lightweight distributed data store for serverless edge computing systems. While serverless computing platforms simplify the development and automated management of software functions, running serverless applications reliably on resource-constrained edge devices poses multiple challenges. These challenges include a lack of flexibility, minimal control over management policies, high data-shipping costs, and cold-start latencies. EdgeFn addresses these challenges by providing distributed data storage for serverless applications and allows users to define custom policies that affect the life cycle of serverless functions and their objects. First, we study the challenges existing serverless systems face in adapting to the edge environment. Second, we propose a distributed data store on top of a Distributed Hash Table (DHT) based Peer-to-Peer (P2P) overlay, which achieves data locality by co-locating each function with its data. Third, we implement programmable callbacks for storage operations, which users can leverage to define custom policies for their applications. We also describe several use cases that can be built using these callbacks. Finally, we evaluate EdgeFn's scalability and performance using industry-generated trace workloads and real-world edge applications. / Master of Science
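To make the co-location and callback ideas concrete, here is a minimal Python sketch of a DHT-style store that hashes a function's name to pick an owner node and fires a user-defined callback on writes. This illustrates the general technique, not EdgeFn's actual implementation; all names (EdgeNode, put, on_put) are hypothetical.

```python
import hashlib
from typing import Any, Callable, Dict, List, Optional

class EdgeNode:
    """One peer on the DHT identifier ring (hypothetical simplified model)."""
    def __init__(self, node_id: int):
        self.node_id = node_id
        self.store: Dict[str, Any] = {}            # objects owned by this peer
        self.functions: Dict[str, Callable] = {}   # functions co-located here

def dht_key(name: str, ring_size: int = 2 ** 16) -> int:
    """Map a name onto the identifier ring via consistent hashing."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % ring_size

def responsible_node(nodes: List[EdgeNode], key: int) -> EdgeNode:
    """Successor rule: the first node whose id is >= key owns the key."""
    for node in sorted(nodes, key=lambda n: n.node_id):
        if node.node_id >= key:
            return node
    return min(nodes, key=lambda n: n.node_id)     # wrap around the ring

def put(nodes: List[EdgeNode], fn_name: str, obj_key: str, value: Any,
        on_put: Optional[Callable[[EdgeNode, str, Any], None]] = None) -> None:
    """Store an object on the node that owns fn_name, so the function and
    its data are co-located; fire a user-defined callback if one is given."""
    owner = responsible_node(nodes, dht_key(fn_name))
    owner.store[obj_key] = value
    if on_put is not None:                         # programmable storage callback
        on_put(owner, obj_key, value)

nodes = [EdgeNode(i) for i in (100, 20000, 45000, 60001)]

# Example policy: pre-warm the function on the owning node on every write,
# a cold-start mitigation of the kind such callbacks could express.
put(nodes, "resize_image", "img:42", b"raw-bytes",
    on_put=lambda n, k, v: n.functions.setdefault("resize_image", lambda x: x))
```

Because the function and its objects hash to the same key, a read issued by the function resolves to its local store, which is the data-locality property the abstract describes.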
2

GraphDHT: Scaling Graph Neural Networks' Distributed Training on Edge Devices on a Peer-to-Peer Distributed Hash Table Network

Gupta, Chirag, 03 January 2024
This thesis presents an innovative strategy for distributed Graph Neural Network (GNN) training, leveraging a peer-to-peer network of heterogeneous edge devices interconnected through a Distributed Hash Table (DHT). As GNNs become increasingly vital in analyzing graph-structured data across various domains, they pose unique challenges in computational demands and privacy preservation, particularly when deployed for training on edge devices like smartphones. To address these challenges, our study introduces the Adaptive Load-Balanced Partitioning (ALBP) technique in the GraphDHT system. This approach optimizes the division of graph datasets among edge devices, tailoring partitions to the computational capabilities of each device. By doing so, ALBP ensures efficient resource utilization across the network, significantly improving upon traditional participant-selection strategies that often overlook the potential of lower-performance devices. At the core of our methodology are weighted graph partitioning and partition-ratio-based model aggregation, which improve training efficiency and resource use. ALBP promotes inclusive device participation in training, overcoming computational limits and privacy concerns in large-scale graph data processing. Using a DHT-based system further enhances privacy in the peer-to-peer setup. The GraphDHT system, tested across various datasets and GNN architectures, demonstrates ALBP's effectiveness in distributed GNN training and its broad applicability across domains and graph structures. This contributes to applied machine learning, especially in optimizing distributed learning on edge devices. / Master of Science / Graph Neural Networks (GNNs) are a type of machine learning model that focuses on analyzing data structured like a network, such as social media connections or biological systems. These models can help identify patterns and make predictions in various tasks, but training them on large-scale datasets can require significant computing power and careful handling of sensitive data. This research proposes a new method for training GNNs on small devices, like smartphones, by dividing the data into smaller pieces and using a peer-to-peer (P2P) network for communication between devices. This approach allows the devices to work together and learn from the data while keeping sensitive information private. The main contributions of this research are threefold: (1) examining existing ways to divide network data and how they can be used for training GNNs on small devices, (2) improving the training process by creating a localized, decentralized network of devices that can communicate and learn together, and (3) testing the method on different types of datasets and GNN models, showing that it works well across a variety of situations. In sum, this research offers a novel way to train GNNs on small devices, enabling more efficient learning and better protection of sensitive information.
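As a rough illustration of the two mechanisms the abstract names, capacity-proportional partitioning and partition-ratio-weighted aggregation, consider the following Python sketch. It is not the thesis's implementation; partition_ratios, aggregate, and the toy capacities are assumptions made for the example.

```python
import numpy as np

def partition_ratios(capacities):
    """ALBP-style idea: size each device's data partition in proportion
    to its compute capacity (illustrative, not the thesis's code)."""
    total = sum(capacities)
    return [c / total for c in capacities]

def aggregate(models, ratios):
    """Weighted model aggregation: average parameter vectors, weighting
    each device's update by its partition ratio."""
    return sum(r * m for m, r in zip(models, ratios))

# Example: three edge devices with heterogeneous capacities.
capacities = [1.0, 2.0, 5.0]           # e.g., relative throughput
ratios = partition_ratios(capacities)  # [0.125, 0.25, 0.625]

# Toy "models": flat parameter vectors after one local training round.
models = [np.full(4, 1.0), np.full(4, 2.0), np.full(4, 3.0)]
global_model = aggregate(models, ratios)
print(ratios, global_model)            # slower devices still contribute
```

The design point is that a weak device gets a small partition it can actually finish, instead of being excluded outright, and its smaller contribution is reflected proportionally in the aggregated model.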
3

Scaled: Scalable Federated Learning via Distributed Hash Table Based Overlays

Kim, Taehwan, 14 April 2022
In recent years, Internet-of-Things (IoT) devices have generated large amounts of personal data. However, due to privacy concerns, collecting this private data in cloud centers for training Machine Learning (ML) models has become unrealistic. To address this problem, Federated Learning (FL) was proposed. Yet, the central bottleneck has become a severe concern, since the central node in traditional FL is responsible for the communication and aggregation of millions of edge devices. In this paper, we propose Scalable Federated Learning via Distributed Hash Table Based Overlays (Scaled) to conduct multiple concurrently running FL-based applications over edge networks. Specifically, Scaled adopts a fully decentralized multiple-master, multiple-slave architecture by exploiting Distributed Hash Table (DHT) based overlay networks. Moreover, Scaled improves scalability and adaptability by involving all edge nodes in training, aggregating, and forwarding. Overall, we make the following contributions in this paper. First, we investigate existing FL frameworks and discuss their drawbacks. Second, we improve on the centralized master-slave architecture of existing FL frameworks by using DHT-based Peer-to-Peer (P2P) overlay networks. Third, we implement a subscription-based, application-level hierarchical forest for FL training. Finally, we demonstrate Scaled's scalability and adaptability through large-scale experiments. / Master of Science / In recent years, Internet-of-Things (IoT) devices have generated large amounts of personal data. However, due to privacy concerns, collecting this private data on central servers for training Machine Learning (ML) models has become unrealistic. To address this problem, Federated Learning (FL) was proposed. In traditional ML, data from edge devices (e.g., phones) must be collected by a central server before model training can start. In FL, training results, instead of the raw data, are collected to perform training. The benefit of FL is that private data can never be leaked during training. However, traditional FL has a major problem: a single point of failure. When the central server loses power or is disconnected from the system, all the data is lost. To address this problem, Scaled (Scalable Federated Learning via Distributed Hash Table Based Overlays) is proposed. Instead of relying on one powerful main server, Scaled launches many different servers to distribute the workload. Moreover, since Scaled can build and manage multiple trees at the same time, it supports multi-model training.
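The following Python sketch illustrates one plausible reading of a subscription-based hierarchical forest over a DHT: each application's name hashes to a root node (its master), and subscribed nodes are arranged beneath it level by level, so different applications get different masters and no single node aggregates everything. This is a simplified illustration under assumed names (dht_id, build_forest), not Scaled's actual code.

```python
import hashlib

def dht_id(name: str, bits: int = 16) -> int:
    """Consistent hash of a name onto a 2^bits identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << bits)

def successor(node_ids, key):
    """First node id clockwise from the key (standard DHT successor rule)."""
    candidates = [n for n in sorted(node_ids) if n >= key]
    return candidates[0] if candidates else min(node_ids)

def build_forest(node_ids, apps, fanout=2):
    """One aggregation tree per FL application: the app name hashes to a
    root (that app's master); remaining subscribed nodes are attached
    breadth-first with the given fanout, forming a forest overall."""
    forest = {}
    for app in apps:
        root = successor(node_ids, dht_id(app))
        rest = [n for n in sorted(node_ids) if n != root]
        tree, frontier = {root: []}, [root]
        while rest:
            parent = frontier.pop(0)
            children, rest = rest[:fanout], rest[fanout:]
            tree[parent] = children          # parent aggregates its children
            frontier.extend(children)
            for c in children:
                tree.setdefault(c, [])       # leaves have no children
        forest[app] = (root, tree)
    return forest

nodes = [dht_id(f"node-{i}") for i in range(8)]
forest = build_forest(nodes, ["keyboard-prediction", "traffic-model"])
for app, (root, tree) in forest.items():
    print(app, "root:", root, "tree:", tree)
```

Because each application's root is determined by hashing, losing one node reassigns only the trees rooted there, rather than taking down every application at once, which is the single-point-of-failure argument the abstract makes.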
