11

EdgeFn: A Lightweight Customizable Data Store for Serverless Edge Computing

Paidiparthy, Manoj Prabhakar 01 June 2023 (has links)
Serverless Edge Computing is an extension of the serverless computing paradigm that enables the deployment and execution of modular software functions on resource-constrained edge devices. However, it poses several challenges due to the edge network's dynamic nature and serverless applications' latency constraints. In this work, we introduce EdgeFn, a lightweight distributed data store for serverless edge computing. While serverless computing platforms simplify the development and automated management of software functions, running serverless applications reliably on resource-constrained edge devices poses multiple challenges, including a lack of flexibility, minimal control over management policies, high data-shipping costs, and cold-start latencies. EdgeFn addresses these challenges by providing distributed data storage for serverless applications and allows users to define custom policies that affect the life cycle of serverless functions and their objects. First, we study the challenges existing serverless systems face in adapting to the edge environment. Second, we propose a distributed data store on top of a Distributed Hash Table (DHT) based Peer-to-Peer (P2P) overlay, which achieves data locality by co-locating a function and its data. Third, we implement programmable callbacks for storage operations, which users can leverage to define custom policies for their applications. We also describe several use cases that can be built using the callbacks. Finally, we evaluate EdgeFn's scalability and performance using industry-generated trace workloads and real-world edge applications. / Master of Science
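The two mechanisms this abstract names, placing an object on the node that owns its key so the function can run there, and firing user-defined callbacks on storage operations, can be sketched as a toy consistent-hashing ring. All names here (`Ring`, `on`, `put`) are illustrative assumptions, not EdgeFn's actual API:

```python
import hashlib
from bisect import bisect_right

def _hash(key: str) -> int:
    # SHA-1 maps node names and object keys into one identifier space.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        # Place each node on the ring by hashing its name.
        self._ring = sorted((_hash(n), n) for n in nodes)
        self._callbacks = {"put": [], "get": []}
        self._store = {}

    def lookup(self, key: str) -> str:
        # Successor rule: the first node clockwise from the key's hash.
        points = [h for h, _ in self._ring]
        i = bisect_right(points, _hash(key)) % len(self._ring)
        return self._ring[i][1]

    def on(self, op: str, fn):
        # Register a user-defined policy callback for a storage operation.
        self._callbacks[op].append(fn)

    def put(self, key, value) -> str:
        # Store the object on its owner node and fire the "put" callbacks.
        node = self.lookup(key)
        self._store[key] = (node, value)
        for fn in self._callbacks["put"]:
            fn(key, value, node)
        return node

ring = Ring(["edge-a", "edge-b", "edge-c"])
events = []
ring.on("put", lambda k, v, n: events.append((k, n)))
owner = ring.put("sensor/42", b"reading")
# A scheduler would now run the function that consumes "sensor/42" on
# `owner`, co-locating compute with data.
```

The callback list is where custom life-cycle policies (replication, eviction, prefetching) would hook in.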
12

GraphDHT: Scaling Graph Neural Networks' Distributed Training on Edge Devices on a Peer-to-Peer Distributed Hash Table Network

Gupta, Chirag 03 January 2024 (has links)
This thesis presents an innovative strategy for distributed Graph Neural Network (GNN) training, leveraging a peer-to-peer network of heterogeneous edge devices interconnected through a Distributed Hash Table (DHT). As GNNs become increasingly vital in analyzing graph-structured data across various domains, they pose unique challenges in computational demands and privacy preservation, particularly when deployed for training on edge devices like smartphones. To address these challenges, our study introduces the Adaptive Load-Balanced Partitioning (ALBP) technique in the GraphDHT system. This approach optimizes the division of graph datasets among edge devices, tailoring partitions to the computational capabilities of each device. By doing so, ALBP ensures efficient resource utilization across the network, significantly improving upon traditional participant selection strategies that often overlook the potential of lower-performance devices. At the core of our methodology are weighted graph partitioning and partition-ratio-based model aggregation, which improve training efficiency and resource use. ALBP promotes inclusive device participation in training, overcoming computational limits and privacy concerns in large-scale graph data processing. Using a DHT-based system enhances privacy in the peer-to-peer setup. The GraphDHT system, tested across various datasets and GNN architectures, demonstrates ALBP's effectiveness in distributed GNN training and its broad applicability across different domains and graph structures. This contributes to applied machine learning, especially in optimizing distributed learning on edge devices. / Master of Science / Graph Neural Networks (GNNs) are a type of machine learning model that focuses on analyzing data structured like a network, such as social media connections or biological systems.
These models can help identify patterns and make predictions in various tasks, but training them on large-scale datasets can require significant computing power and careful handling of sensitive data. This research proposes a new method for training GNNs on small devices, like smartphones, by dividing the data into smaller pieces and using a peer-to-peer (p2p) network for communication between devices. This approach allows the devices to work together and learn from the data while keeping sensitive information private. The main contributions of this research are threefold: (1) examining existing ways to divide network data and how they can be used for training GNNs on small devices, (2) improving the training process by creating a localized, decentralized network of devices that can communicate and learn together, and (3) testing the method on different types of datasets and GNN models, showing that it works well across a variety of situations. To sum up, this research offers a novel way to train GNNs on small devices, allowing for more efficient learning and better protection of sensitive information.
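The partitioning-plus-aggregation loop described above can be illustrated in a few lines: give each device a share of the graph proportional to its measured capability, then average model parameters with the same ratios. A hedged sketch, not the thesis implementation:

```python
def partition_sizes(num_nodes: int, capacities: list) -> list:
    """Assign each device a share of graph nodes proportional to capacity."""
    total = sum(capacities)
    sizes = [num_nodes * c // total for c in capacities]
    sizes[-1] += num_nodes - sum(sizes)  # rounding remainder goes to the last device
    return sizes

def aggregate(params_per_device: list, capacities: list) -> list:
    """Average per-device parameter vectors, weighted by partition ratio."""
    total = sum(capacities)
    weights = [c / total for c in capacities]
    dim = len(params_per_device[0])
    return [sum(w * p[i] for w, p in zip(weights, params_per_device))
            for i in range(dim)]
```

A device three times as capable receives (and later contributes) three times the weight, which is the intuition behind including low-performance devices instead of excluding them.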
13

Scaled: Scalable Federated Learning via Distributed Hash Table Based Overlays

Kim, Taehwan 14 April 2022 (has links)
In recent years, Internet-of-Things (IoT) devices have generated large amounts of personal data. However, due to privacy concerns, collecting this private data in cloud centers for training Machine Learning (ML) models has become unrealistic. To address this problem, Federated Learning (FL) has been proposed. Yet the central bottleneck has become a severe concern, since the central node in traditional FL is responsible for the communication and aggregation of millions of edge devices. In this paper, we propose Scalable Federated Learning via Distributed Hash Table Based Overlays (Scaled) to conduct multiple concurrently running FL-based applications over edge networks. Specifically, Scaled adopts a fully decentralized multiple-master and multiple-slave architecture by exploiting Distributed Hash Table (DHT) based overlay networks. Moreover, Scaled improves scalability and adaptability by involving all edge nodes in training, aggregating, and forwarding. Overall, we make the following contributions in the paper. First, we investigate the existing FL frameworks and discuss their drawbacks. Second, we improve on the centralized master-slave architecture of existing FL frameworks by using DHT-based Peer-to-Peer (P2P) overlay networks. Third, we implement a subscription-based, application-level hierarchical forest for FL training. Finally, we demonstrate Scaled's scalability and adaptability through large-scale experiments. / Master of Science / In recent years, Internet-of-Things (IoT) devices have generated large amounts of personal data. However, due to privacy concerns, collecting the private data in central servers for training Machine Learning (ML) models has become unrealistic. To address this problem, Federated Learning (FL) has been proposed. In traditional ML, data from edge devices (i.e., phones) must be collected by a central server before model training can start. In FL, training results, rather than the data itself, are collected to perform training.
The benefit of FL is that private data can never be leaked during the training. However, there is a major problem in traditional FL: a single point of failure. When power to a central server goes down or the central server is disconnected from the system, it will lose all the data. To address this problem, Scaled: Scalable Federated Learning via Distributed Hash Table Based Overlays is proposed. Instead of having one powerful main server, Scaled launches many different servers to distribute the workload. Moreover, since Scaled is able to build and manage multiple trees at the same time, it allows multi-model training.
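The hierarchical-forest idea above, where every node aggregates its children's results instead of one master collecting from millions of devices, reduces to recursive weighted averaging. A minimal sketch under assumed structures, not Scaled's actual code:

```python
def fedavg(updates: list) -> tuple:
    """Weighted average of (params, num_samples) pairs."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    avg = [sum(p[i] * n for p, n in updates) / total for i in range(dim)]
    return avg, total

def aggregate_tree(node: dict) -> tuple:
    """Recursively aggregate a training tree: leaves hold local updates,
    internal nodes average their children before forwarding upward."""
    if "update" in node:
        return node["update"]
    return fedavg([aggregate_tree(c) for c in node["children"]])

# Two leaf devices with 1 and 3 local samples respectively.
tree = {"children": [{"update": ([1.0], 1)},
                     {"update": ([3.0], 3)}]}
```

Because each internal node only talks to its own children, no single server ever handles every device, which is how the single point of failure is removed.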
14

Adaptive dissemination of network state knowledge in structured peer-to-peer networks

Hajiarabderkani, Masih January 2015 (has links)
One of the fundamental challenges in building Peer-to-Peer (P2P) applications is to locate resources across a dynamic set of nodes without centralised servers. Structured overlay networks solve this challenge by providing a key-based routing (KBR) layer that maps keys to nodes. The performance of KBR is strongly influenced by the dynamic and unpredictable conditions of P2P environments. To cope with such conditions, a node must maintain its routing state. Routing state maintenance directly influences both lookup latency and bandwidth consumption: the more vigorously that state information is disseminated between nodes, the greater the accuracy and completeness of the routing state and the lower the lookup latency, but the more bandwidth is consumed. Existing structured P2P overlays provide a set of configuration parameters that can be used to tune the trade-off between lookup latency and bandwidth consumption. However, the scale and complexity of the configuration space make the overlays difficult to optimise. Further, it is increasingly difficult to design adaptive overlays that can cope with the ever-increasing complexity of P2P environments. This thesis is motivated by the vision that the adaptive P2P systems of tomorrow would not only optimise their own parameters, but also generate and adapt their own design. This thesis studies the effects of using an adaptive technique to automatically adapt state dissemination cost and lookup latency in structured overlays under churn. In contrast to previous adaptive approaches, this work investigates the algorithmic adaptation of the fundamental data dissemination protocol rather than tuning the parameter values of a protocol with a fixed design. This work illustrates that such a technique can be used to design parameter-free structured overlays that outperform structured overlays with a fixed design, such as Chord, in terms of lookup latency, bandwidth consumption, and lookup correctness.
More experimentation was performed than space allows us to report; this thesis presents a set of key findings. The full set of experiments and data is available online at: http://trombone.cs.st-andrews.ac.uk/thesis/analysis.
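For context, the Chord baseline mentioned above resolves a key with a successor rule on an identifier ring; keeping that routing state accurate under churn is exactly what the dissemination protocol trades bandwidth for. A toy illustration of the ownership rule:

```python
def successor(node_ids: list, key_id: int) -> int:
    """Chord-style ownership: the first node id clockwise from key_id
    on the identifier ring owns the key."""
    candidates = sorted(node_ids)
    for n in candidates:
        if n >= key_id:
            return n
    return candidates[0]  # wrap around the ring

# With nodes at 10, 50 and 200 on the ring, key 60 belongs to node 200,
# and key 240 wraps around to node 10.
```

Under churn, a node's view of `node_ids` goes stale, so lookups either take extra hops or fail; how aggressively to refresh that view is the latency/bandwidth trade-off the thesis adapts automatically.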
15

Long-Term Location-Independent Research Data Dissemination Using Persistent Identifiers

Wannenwetsch, Oliver 11 January 2017 (has links)
No description available.
16

Enabling Internet-Scale Publish/Subscribe In Overlay Networks

Rahimian, Fatemeh January 2011 (has links)
As the amount of data in today's Internet grows larger, users are exposed to too much information, which becomes increasingly difficult to comprehend. Publish/subscribe systems alleviate this problem by providing loosely-coupled communication between producers and consumers of data in a network. Data consumers, i.e., subscribers, are provided with a subscription mechanism to express their interest in a subset of data, in order to be notified only when some data that matches their subscription is generated by the producers, i.e., publishers. Most publish/subscribe systems today are based on the client/server architectural model. However, to provide the publish/subscribe service at large scale, companies either have to invest huge amounts of money in over-provisioning resources, or are prone to frequent service failures. Peer-to-peer overlay networks are attractive alternative solutions for building Internet-scale publish/subscribe systems. However, scalability comes with a cost: a published message often needs to traverse a large number of uninterested (unsubscribed) nodes before reaching all its subscribers. We refer to this undesirable traffic as relay overhead. Without careful consideration, the relay overhead might sharply increase resource consumption for the relay nodes (in terms of bandwidth transmission cost, CPU, etc.) and could ultimately lead to rapid deterioration of the system's performance once the relay nodes start dropping messages or choose to permanently abandon the system. To mitigate this problem, some solutions use an unbounded number of connections per node, while others limit the expressiveness of the subscription scheme. In this thesis work, we introduce two systems called Vitis and Vinifera, for topic-based and content-based publish/subscribe models, respectively. Both systems are gossip-based and significantly decrease the relay overhead. We utilize novel techniques to cluster together nodes that exhibit similar subscriptions.
In the topic-based model, distinct clusters are constructed for each topic, while clusters in the content-based model are fuzzy and do not have explicit boundaries. We augment these clustered overlays with links that facilitate routing in the network. We construct a hybrid system by injecting structure into an otherwise unstructured network. The resulting structures resemble navigable small-world networks, which span clusters of nodes that have similar subscriptions. The properties of such overlays make them an ideal platform for efficient data dissemination in large-scale systems. The systems require only a bounded node degree and, as we show through simulations, they scale well with the number of nodes and subscriptions and remain efficient under highly complex subscription patterns, high publication rates, and even in the presence of failures in the network. We also compare both systems against some state-of-the-art publish/subscribe systems. Our measurements show that both Vitis and Vinifera significantly outperform their counterparts on various subscription and churn scenarios, under both synthetic workloads and real-world traces. / QC 20111114
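The clustering step described above, grouping nodes whose subscriptions overlap so that messages mostly travel through interested nodes, can be caricatured with Jaccard similarity and a greedy pass. The actual Vitis/Vinifera mechanisms are gossip-based and considerably more refined; this is only a hedged illustration of the intuition:

```python
def jaccard(a, b) -> float:
    """Overlap of two subscription sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def cluster(subs: dict, threshold: float = 0.5) -> list:
    """Greedy clustering: a node joins the first cluster whose seed
    subscriptions are similar enough, else starts a new cluster."""
    clusters = []  # each cluster: (seed_subscriptions, [node names])
    for node, s in subs.items():
        for seed, members in clusters:
            if jaccard(seed, s) >= threshold:
                members.append(node)
                break
        else:
            clusters.append((s, [node]))
    return [members for _, members in clusters]

subs = {"n1": {"sports", "news"},
        "n2": {"sports", "news", "tech"},
        "n3": {"weather"}}
```

Routing a publication inside a cluster of like-minded nodes is what keeps the relay overhead (traffic through uninterested nodes) low.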
17

PBQoS - uma arquitetura de gerenciamento baseado em políticas para distribuição otimizada de conteúdo multimídia com controle de QoS em redes Overlay. / PBQoS - a Policy-based management architecture for optimized multimedia content distribution to control the QoS in an Overlay network.

Almeida, Fernando Luiz de 16 December 2010 (has links)
Advances in communication technologies and signal processing have not only changed the way business is conducted around the world, but have also driven the development of services and multimedia applications on the Internet. As a result, it is possible to design, develop, deploy, and operate services for digital video distribution on the Internet, both on-demand and live. With the increase in multimedia applications on the network, it has become increasingly complex and necessary to define an efficient model that can achieve the effective and integrated management of all the elements and services that compose a computer system. With this in mind, this study proposes a policy-based management architecture applied to multimedia content distribution with QoS (Quality of Service) control in overlay networks. The architecture is based on the policy management standards defined by the IETF (Internet Engineering Task Force) and administers the services available in the system using contextual information about the network and its clients. It takes the QoS levels provided by the distribution network and compares them against the minimum requirements demanded by each application profile, previously mapped into policy rules. This makes it possible to control and manage the system's elements and services, and to better distribute resources to the system's users.
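The comparison at the heart of the architecture, measured QoS versus per-profile minimum requirements stored as policy rules, might look like this in miniature. Profile names and figures are invented for illustration, not taken from PBQoS:

```python
POLICY_RULES = {
    # application profile -> minimum QoS requirements (hypothetical figures)
    "live-video":      {"bandwidth_kbps": 2500, "max_delay_ms": 150},
    "on-demand-video": {"bandwidth_kbps": 1000, "max_delay_ms": 400},
}

def admit(profile: str, measured: dict) -> bool:
    """True if the measured network QoS satisfies the profile's policy rule."""
    rule = POLICY_RULES[profile]
    return (measured["bandwidth_kbps"] >= rule["bandwidth_kbps"]
            and measured["delay_ms"] <= rule["max_delay_ms"])
```

A policy decision point would run this kind of check against live measurements from the overlay before allocating distribution resources to a client.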
19

Routing, Resource Allocation and Network Design for Overlay Networks

Zhu, Yong 13 November 2006 (has links)
Overlay networks have recently been the subject of significant research and practical interest as a means of addressing the inefficiency and ossification of the current Internet. In this thesis, we cover various aspects of overlay network design, including overlay routing algorithms, overlay network assignment, and multihomed overlay networks. We also examine the behavior of overlay networks under a wide range of network settings and identify several key factors that affect their performance. Based on these findings, practical design guidelines are given. Specifically, this thesis addresses the following problems: 1) Dynamic overlay routing: We perform an extensive simulation study to investigate the performance of available-bandwidth-based dynamic overlay routing from three important aspects: efficiency, stability, and safety margin. Based on the findings, we propose a hybrid routing scheme that achieves good performance in all three aspects. We also examine the effects of several factors on overlay routing performance, including network load, traffic variability, link-state staleness, number of overlay hops, measurement errors, and native sharing effects. 2) Virtual network assignment: We investigate the virtual network (VN) assignment problem in the scenario of network virtualization. Specifically, we develop a basic VN assignment scheme without reconfiguration and use it as the building block for all other advanced algorithms. Subdividing heuristics and adaptive optimization strategies are presented to further improve performance. We also develop a selective VN reconfiguration scheme that prioritizes reconfiguration for the most critical VNs. 3) Overlay network configuration tool for PlanetLab: We develop NetFinder, an automatic overlay network configuration tool that efficiently allocates PlanetLab resources to individual overlays.
NetFinder continuously monitors the resource utilization of PlanetLab and accepts a user-defined overlay topology as input and selects the set of PlanetLab nodes and their interconnection for the user overlay. 4) Multihomed overlay network: We examine the effectiveness of combining multihoming and overlay routing from the perspective of an overlay service provider (OSP). We focus on the corresponding design problem and examine, with realistic network performance and pricing data, whether the OSP can provide a network service that is profitable, better (in terms of round-trip time), and less expensive than the competing native ISPs.
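Available-bandwidth-based overlay routing, as studied in part 1, prefers the path whose bottleneck (minimum) available bandwidth is largest. A toy "widest path" variant of Dijkstra sketches the selection rule; the thesis's hybrid scheme additionally weighs stability and a safety margin, which this omits:

```python
import heapq

def widest_path(graph: dict, src: str, dst: str) -> float:
    """graph: {u: {v: available_bandwidth}}. Returns the largest
    achievable bottleneck bandwidth from src to dst (0 if unreachable)."""
    best = {src: float("inf")}
    heap = [(-best[src], src)]  # max-heap on bottleneck bandwidth
    while heap:
        width, u = heapq.heappop(heap)
        width = -width
        if u == dst:
            return width
        for v, bw in graph.get(u, {}).items():
            w = min(width, bw)  # bottleneck along the extended path
            if w > best.get(v, 0):
                best[v] = w
                heapq.heappush(heap, (-w, v))
    return 0

# Overlay links labelled with measured available bandwidth (Mbps, made up):
net = {"a": {"b": 10, "c": 4}, "b": {"d": 6}, "c": {"d": 100}}
```

Path a-b-d has bottleneck 6 while a-c-d has bottleneck 4, so the wider a-b-d route wins even though c-d itself is the fastest single link.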
20

Smart devices collaboration for energy saving in home networks / Collaboration des équipements du réseau domestique pour une meilleure efficacité énergétique globale

Yan, Han 19 December 2014 (has links)
In recent years, the digital revolution has continued apace: Information and Communications Technology (ICT) has completely changed people's daily lives at home (the "digital home" concept). Meanwhile, not only is the CO2 footprint of ICT growing without cease, but the price of electricity is also constantly rising, sharply increasing the share of digital equipment in overall household budgets. For both environmental and economic reasons, reducing the energy consumption of the many devices in the home network has therefore become a major issue. In this context, this thesis concerns the design, evaluation, and implementation of a set of mechanisms that address energy consumption in home networks. We first propose an Overlay Energy Control Network formed by energy control nodes placed on top of the traditional network; each control node is connected to one device and coordinates that device's power states. A testbed for a HOme Power Efficiency system (HOPE) was also implemented, demonstrating the feasibility of the proposed energy control solution in a real home network with several frequently used real-world scenarios.
After analyzing how users actually use their home network equipment, we propose a power management system that controls these devices so as to minimize their consumption. The system is based on an analysis of collaborative services: each service is decomposed into atomic functional blocks distributed across the devices, which makes it possible to manage each device's energy needs precisely and to power on only the right functional blocks in the right device at the right moment. Finally, we also sought to minimize the impact of energy saving on the quality of experience perceived by the user, in particular the service activation delay. The collaborative overlay power management offers several possible trade-offs between power consumption and service activation delay in the home network, and is complemented by an algorithm that learns the behavior of home users.
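The functional-block decomposition described above can be sketched as a mapping from services to the (device, block) pairs they need; only the union required by the currently active services stays powered. Service and block names are invented for illustration:

```python
SERVICE_BLOCKS = {
    # collaborative service -> atomic functional blocks it needs,
    # identified as (device, block) pairs (hypothetical examples)
    "media-streaming": {("nas", "disk"), ("router", "wifi"), ("tv", "decoder")},
    "printing":        {("pc", "spooler"), ("printer", "engine")},
}

def required_blocks(active_services: list) -> set:
    """Union of functional blocks needed by the active services."""
    needed = set()
    for s in active_services:
        needed |= SERVICE_BLOCKS[s]
    return needed

def power_plan(all_blocks: set, active_services: list) -> dict:
    """Map each (device, block) to True (power on) or False (power off)."""
    on = required_blocks(active_services)
    return {blk: blk in on for blk in all_blocks}

all_blocks = SERVICE_BLOCKS["media-streaming"] | SERVICE_BLOCKS["printing"]
plan = power_plan(all_blocks, ["printing"])
```

Everything not in the plan's "on" set can sleep, and the energy/activation-delay trade-off then amounts to deciding how eagerly to wake blocks the user is predicted to need next.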
