101

GraphDHT: Scaling Graph Neural Networks' Distributed Training on Edge Devices on a Peer-to-Peer Distributed Hash Table Network

Gupta, Chirag 03 January 2024 (has links)
This thesis presents an innovative strategy for distributed Graph Neural Network (GNN) training, leveraging a peer-to-peer network of heterogeneous edge devices interconnected through a Distributed Hash Table (DHT). As GNNs become increasingly vital in analyzing graph-structured data across various domains, they pose unique challenges in computational demands and privacy preservation, particularly when deployed for training on edge devices like smartphones. To address these challenges, our study introduces the Adaptive Load-Balanced Partitioning (ALBP) technique in the GraphDHT system. This approach optimizes the division of graph datasets among edge devices, tailoring partitions to the computational capabilities of each device. By doing so, ALBP ensures efficient resource utilization across the network, significantly improving upon traditional participant selection strategies that often overlook the potential of lower-performance devices. At the core of our methodology are weighted graph partitioning and partition-ratio-based model aggregation for GNNs, which improve training efficiency and resource use. ALBP promotes inclusive device participation in training, overcoming computational limits and privacy concerns in large-scale graph data processing. Utilizing a DHT-based system enhances privacy in the peer-to-peer setup. The GraphDHT system, tested across various datasets and GNN architectures, shows ALBP's effectiveness in distributed GNN training and its broad applicability across different domains and graph structures. This contributes to applied machine learning, especially in optimizing distributed learning on edge devices. / Master of Science / Graph Neural Networks (GNNs) are a type of machine learning model that focuses on analyzing data structured like a network, such as social media connections or biological systems. These models can help identify patterns and make predictions in various tasks, but training them on large-scale datasets can require significant computing power and careful handling of sensitive data. This research proposes a new method for training GNNs on small devices, like smartphones, by dividing the data into smaller pieces and using a peer-to-peer (p2p) network for communication between devices. This approach allows the devices to work together and learn from the data while keeping sensitive information private. The main contributions of this research are threefold: (1) examining existing ways to divide network data and how they can be used for training GNNs on small devices, (2) improving the training process by creating a localized, decentralized network of devices that can communicate and learn together, and (3) testing the method on different types of datasets and GNN models, showing that it works well across a variety of situations. To sum up, this research offers a novel way to train GNNs on small devices, allowing for more efficient learning and better protection of sensitive information.
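The capability-weighted partitioning and aggregation idea described above can be made concrete with a small sketch. This is not the thesis's ALBP implementation: the device names, capability scores, proportional node-splitting, and ratio-weighted averaging below are illustrative assumptions only.

```python
# Illustrative sketch (not the thesis's ALBP): size graph partitions proportionally
# to each device's capability score, then aggregate per-device model weights using
# the same partition ratios.
import numpy as np

def partition_ratios(device_capabilities):
    """Normalize per-device capability scores into partition ratios."""
    total = sum(device_capabilities.values())
    return {dev: cap / total for dev, cap in device_capabilities.items()}

def split_nodes(num_nodes, ratios):
    """Split graph node IDs into contiguous chunks sized by the ratios."""
    order = list(ratios)
    counts = [int(round(ratios[d] * num_nodes)) for d in order]
    counts[-1] = num_nodes - sum(counts[:-1])          # absorb rounding error
    parts, start = {}, 0
    for dev, c in zip(order, counts):
        parts[dev] = list(range(start, start + c))
        start += c
    return parts

def aggregate(models, ratios):
    """Weighted average of per-device parameter vectors by partition ratio."""
    return sum(ratios[dev] * np.asarray(w) for dev, w in models.items())

caps = {"phone_a": 4.0, "phone_b": 2.0, "tablet_c": 1.0}   # hypothetical devices
r = partition_ratios(caps)
print(split_nodes(10, r))
print(aggregate({d: np.ones(3) * i for i, d in enumerate(caps)}, r))
```

In this toy setup a device with twice the capability score receives roughly twice as many graph nodes and contributes proportionally more to the aggregated model.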
102

Scaled: Scalable Federated Learning via Distributed Hash Table Based Overlays

Kim, Taehwan 14 April 2022 (has links)
In recent years, Internet-of-Things (IoT) devices generate a large amount of personal data. However, due to privacy concerns, collecting the private data in cloud centers for training Machine Learning (ML) models becomes unrealistic. To address this problem, Federated Learning (FL) is proposed. Yet, the central bottleneck has become a severe concern, since the central node in traditional FL is responsible for the communication and aggregation of millions of edge devices. In this paper, we propose Scalable Federated Learning via Distributed Hash Table Based Overlays for network (Scaled) to conduct multiple concurrently running FL-based applications over edge networks. Specifically, Scaled adopts a fully decentralized multiple-master and multiple-slave architecture by exploiting Distributed Hash Table (DHT) based overlay networks. Moreover, Scaled improves scalability and adaptability by involving all edge nodes in training, aggregating, and forwarding. Overall, we make the following contributions in the paper. First, we investigate the existing FL frameworks and discuss their drawbacks. Second, we improve on the existing centralized master-slave FL architectures by using DHT-based Peer-to-Peer (P2P) overlay networks. Third, we implement a subscription-based, application-level hierarchical forest for FL training. Finally, we demonstrate Scaled's scalability and adaptability over large-scale experiments. / Master of Science / In recent years, Internet-of-Things (IoT) devices generate a large amount of personal data. However, due to privacy concerns, collecting the private data in central servers for training Machine Learning (ML) models becomes unrealistic. To address this problem, Federated Learning (FL) is proposed. In traditional ML, data from edge devices (i.e., phones) should be collected to the central server to start model training. In FL, training results, instead of the data, are collected to perform training. The benefit of FL is that private data can never be leaked during the training. However, there is a major problem in traditional FL: a single point of failure. When power to a central server goes down or the central server is disconnected from the system, it will lose all the data. To address this problem, Scaled: Scalable Federated Learning via Distributed Hash Table Based Overlays is proposed. Instead of having one powerful main server, Scaled launches many different servers to distribute the workload. Moreover, since Scaled is able to build and manage multiple trees at the same time, it allows multi-model training.
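A toy sketch of the underlying DHT idea, not Scaled's actual overlay or hierarchical forest: peers are hashed onto a ring, and each FL application's aggregation key is routed to its successor peer, so concurrent applications end up being aggregated by different masters. All node and application names are made up.

```python
# Illustrative sketch (not the Scaled implementation): a toy consistent-hashing ring
# that maps each FL application's aggregation key to a responsible "master" node.
import hashlib
from bisect import bisect_right

def h(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** 32)

class ToyDHT:
    def __init__(self, node_ids):
        # Each peer is placed on the ring at the hash of its ID.
        self.ring = sorted((h(n), n) for n in node_ids)

    def lookup(self, key: str) -> str:
        """Return the peer whose ring position is the successor of hash(key)."""
        points = [p for p, _ in self.ring]
        idx = bisect_right(points, h(key)) % len(self.ring)
        return self.ring[idx][1]

dht = ToyDHT([f"edge-{i}" for i in range(8)])         # hypothetical edge peers
for app in ["keyboard-model", "traffic-model"]:       # two concurrent FL apps
    print(app, "-> aggregated at", dht.lookup(app))
```

Because responsibility follows the hash ring, adding or removing a peer only reassigns the keys adjacent to it, which is part of what makes DHT-based overlays attractive for churn-prone edge networks.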
103

Fundamentals of Quantum Communication Networks: Scalability, Efficiency, and Distributed Quantum Machine Learning

Chehimi, Mahdi 09 August 2024 (has links)
The future quantum Internet (QI) will transform today's communication networks and user experiences by providing unparalleled security levels and superior quantum computational powers, along with enhanced sensing accuracy and data processing capabilities. These features will be enabled through applications like quantum key distribution (QKD) and quantum machine learning (QML). Towards enabling these applications, the QI requires the development of global quantum communication networks (QCNs) that enable the distribution of entangled resources between distant nodes. This dissertation addresses two major challenges facing QCNs, which are the scalability and coverage of their architectures, and the efficiency of their operations. Additionally, the dissertation studies the near-term deployment of QML applications over today's noisy quantum devices, essential for realizing the future QI. In doing so, the scalability and efficiency challenges facing the different QCN elements are explored, and practical noise-aware and physics-informed approaches are developed to optimize the QCN performance given heterogeneous quantum application-specific quality of service (QoS) user requirements on entanglement rate and fidelity. Towards achieving this goal, this dissertation makes a number of key contributions. First, the scaling limits of quantum repeaters are investigated, and a holistic optimization framework is proposed to optimize the geographical coverage of quantum repeater networks (QRNs), including the number of quantum repeaters, their placement and separating distances, quantum memory management, and quantum operations scheduling. Then, a novel framework is proposed to address the scalability challenge of free-space optical (FSO) quantum channels in the presence of blockages and environmental effects. Particularly, the utilization of a reconfigurable intelligent surface (RIS) in QCNs is proposed to maintain a line-of-sight (LoS) connection between quantum nodes separated by blockages, and a novel analytical model of quantum noise and end-to-end (e2e) fidelity in such QCNs is developed. The results show enhanced entangled state fidelity and entanglement distribution rates, improving user fairness by around 40% compared to benchmark approaches. The dissertation then investigates the efficiency challenges in a practical use-case of QCNs with a single quantum switch (QS). Particularly, the average effects of quantum memory noise are analyzed analytically, and their impact on allocating entanglement generation sources and minimizing the entanglement distribution delay, while optimizing QS entanglement distillation operations, is investigated. The results show an enhanced e2e fidelity and a minimized e2e entanglement distribution delay compared to existing approaches, and a unique capability of satisfying all users' QoS requirements. This QCN architecture is then scaled up to multiple QSs serving heterogeneous user requests, as required for scalable quantum applications over the QI. Here, a novel efficient matching theory-based framework for optimizing the request-QS association in such QCNs while managing quantum memories and optimizing QS operations is proposed. Finally, after scaling QCNs and ensuring their efficient operations, the dissertation proposes novel distributed QML frameworks that can leverage both classical networks and QCNs to enable collaborative learning between today's noisy quantum devices.
In particular, the first quantum federated learning (QFL) frameworks incorporating different quantum neural networks and leveraging quantum and classical data are developed, and the first publicly available federated quantum dataset is introduced. The results show enhanced performance and reductions in the communication overhead and number of training epochs needed until convergence, compared to classical counterpart frameworks. Overall, this dissertation develops robust frameworks and algorithms that advance the theoretical understanding of QCNs and offers practical insights for the future development of the QI and its applications. The dissertation concludes by analyzing some open challenges facing QCNs and proposing a vision for physics-informed QCNs, along with important future directions. / Doctor of Philosophy / In today's digital age, we are generating vast amounts of data through videos, live streams, and various online activities. This explosion of data brings not only incredible opportunities for innovation but also heightened security concerns. The current Internet infrastructure struggles to keep up with the demand for speed and security. In this regard, the quantum Internet (QI) emerges as a revolutionary technology poised to make communication and data sharing faster and more secure than ever before. The QI requires the development of quantum communication networks (QCNs) that will be seamlessly integrated with the existing communication systems that form today's Internet. This way, the QI enables ultra-secure communication and advanced computing applications that can transform various sectors, from finance to healthcare. However, building such global QCNs requires overcoming significant challenges, including the sensitive nature and limitations of quantum devices. In this regard, the goal of this dissertation is to develop scalable and efficient QCNs that overcome the challenges facing the various QCN elements and enable wide coverage and robust performance towards realizing the QI at a global scale. Simultaneously, machine learning (ML) is driving significant advancements and transforming industries in today's world. Here, quantum technologies are anticipated to make a breakthrough in ML through quantum machine learning (QML) models that can handle today's large and complex data. However, quantum computers are still limited in scale and efficiency, often being noisy and unreliable. Throughout this dissertation, these limitations of QML are addressed by developing frameworks that allow multiple quantum computers to work together collaboratively in a distributed manner over classical networks and QCNs. By leveraging distributed QML, it is possible to achieve remarkable advancements in privacy and data utilization. For instance, distributed QML can enhance navigation systems by providing more accurate and secure route planning or revolutionize healthcare by enabling secure and efficient analysis of medical data. In summary, this dissertation addresses the critical challenges of building scalable and efficient QCNs to support the QI and develops distributed QML frameworks to enable near-term utilization of QML in transformative applications. By doing so, it paves the way for a future where quantum technology is integral to our daily lives, enhancing security, efficiency, and innovation across various domains.
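As a simplified illustration of why repeater-chain length degrades end-to-end entanglement fidelity, and not the dissertation's noise-aware framework, the sketch below composes Werner-state links under ideal entanglement swapping; the link fidelities and chain length are assumed values.

```python
# Illustrative sketch (not the dissertation's noise model): end-to-end fidelity of a
# repeater chain of Werner-state links under ideal entanglement swapping.

def e2e_fidelity(link_fidelities):
    """Werner parameter w = (4F - 1)/3 multiplies across swaps; invert at the end."""
    w = 1.0
    for f in link_fidelities:
        w *= (4.0 * f - 1.0) / 3.0
    return (3.0 * w + 1.0) / 4.0

# A 4-link chain (3 repeaters) with slightly noisy links (assumed values):
links = [0.97, 0.95, 0.96, 0.98]
print(f"end-to-end fidelity ~ {e2e_fidelity(links):.3f}")
```

Even with individually high link fidelities, the composed fidelity drops quickly with chain length, which is one reason repeater placement and memory management matter for geographical coverage.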
104

Differentially Private Federated Learning Algorithms for Sparse Basis Recovery

Ajinkya K Mulay (18823252) 14 June 2024 (has links)
<p dir="ltr">Sparse basis recovery is an important learning problem when the number of model dimensions (<i>p</i>) is much larger than the number of samples (<i>n</i>). However, there has been little work that studies sparse basis recovery in the Federated Learning (FL) setting, where the Differential Privacy (DP) of the client data must also be simultaneously protected. Notably, the performance guarantees of existing DP-FL algorithms (such as DP-SGD) will degrade significantly when the system is ill-determined (i.e., <i>p >> n</i>), and thus they will fail to accurately learn the true underlying sparse model. The goal of my thesis is therefore to develop DP-FL sparse basis recovery algorithms that can recover the true underlying sparse basis provably accurately even when <i>p >> n</i>, yet still guaranteeing the differential privacy of the client data.</p><p dir="ltr">During my PhD studies, we developed three DP-FL sparse basis recovery algorithms for this purpose. Our first algorithm, SPriFed-OMP, based on the Orthogonal Matching Pursuit (OMP) algorithm, can achieve high accuracy even when <i>n = O(\sqrt{p})</i> under the stronger Restricted Isometry Property (RIP) assumption for least-square problems. Our second algorithm, Humming-Bird, based on a carefully modified variant of the Forward-Backward Algorithm (FoBA), can achieve differentially private sparse recovery for the same setup while requiring the much weaker Restricted Strong Convexity (RSC) condition. We further extend Humming-Bird to support loss functions beyond least-square satisfying the RSC condition. To the best of our knowledge, these are the first DP-FL results guaranteeing sparse basis recovery in the <i>p >> n</i> setting.</p>
105

Fair and Efficient Federated Learning for Network Optimization with Heteroscedastic Data

Welander, Andreas January 2024 (has links)
The distributed and privacy-sensitive nature of cellular networks makes them strong candidates for optimization using Federated Learning, but this exposes them to a problem inherent to the learning paradigm: performance inequality due to heterogeneous client data distributions. The prevailing approach of enforcing uniform client performance ignores client-specific performance limitations due to different levels of irreducible uncertainty present in their data, resulting in deteriorated network performance. To address this issue, this thesis introduces two novel federated algorithms designed to enhance learning efficiency and ensure fairness in the presence of heteroscedastic noise, reflecting the distributive justice principles of utilitarianism and equality. Under these circumstances, the proposed algorithms are shown to significantly improve overall performance and performance fairness. The deployment of these algorithms promises a dual benefit: enhancement in network performance and a fairer distribution of service quality for end users.
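One way to picture fairness under heteroscedastic noise, not the thesis's actual algorithms: weight each client's contribution by its reducible loss, i.e. observed loss minus an estimate of its irreducible noise floor, so clients are not pushed to match a performance level their data cannot support. The losses, noise-floor estimates, and exponential weighting below are illustrative assumptions.

```python
# Illustrative sketch (not the thesis's algorithms): aggregation weights based on
# the portion of each client's loss that is actually reducible.
import numpy as np

def reducible_loss_weights(losses, noise_floors, temperature=1.0):
    excess = np.maximum(np.asarray(losses) - np.asarray(noise_floors), 0.0)
    w = np.exp(excess / temperature)
    return w / w.sum()

def aggregate(updates, weights):
    return sum(w * u for w, u in zip(weights, updates))

losses      = [0.90, 0.40, 0.85]        # observed validation losses (assumed)
noise_floor = [0.80, 0.10, 0.20]        # estimated irreducible uncertainty (assumed)
weights = reducible_loss_weights(losses, noise_floor)
print(weights)                          # clients with more room to improve weigh more
updates = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
print(aggregate(updates, weights))
```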
106

Comparing decentralized learning to Federated Learning when training Deep Neural Networks under churn

Vikström, Johan January 2021 (has links)
Decentralized Machine Learning could address some problematic facets of Federated Learning. There is no central server acting as an arbiter of who or what may benefit from Machine Learning models created from the vast amount of data becoming available in recent years. It could also increase the reliability and scalability of Machine Learning systems, thereby drawing benefit from having more data accessible. Gossip Learning is such a protocol, but it has primarily been designed with linear models in mind. How does Gossip Learning perform when training Deep Neural Networks? Could it be a viable alternative to Federated Learning? In this thesis, we implement Gossip Learning using two different model merging strategies. We also design and implement two extensions to this protocol with the goal of achieving higher performance when training under churn. The training methods are compared on two tasks: image classification on the Federated Extended MNIST dataset and time-series forecasting on the NN5 dataset. Additionally, we also run an experiment where learners churn, alternating between being available and unavailable. We find that Gossip Learning performs slightly better in settings where learners do not churn but is vastly outperformed in the setting where they do. / Decentraliserad Maskininlärning kan lösa några problematiska aspekter med Federated Learning. Det finns ingen central server som agerar som domare för vilka som får gagna av Maskininlärningsmodellerna skapade av den stora mängd data som blivit tillgänglig på senare år. Det skulle också kunna öka pålitligheten och skalbarheten av Maskininlärningssystem och därav dra nytta av att mer data är tillgänglig. Gossip Learning är ett sånt protokoll, men det är primärt designat med linjära modeller i åtanke. Hur presterar Gossip Learning när man tränar Djupa Neurala Nätverk? Kan det vara ett möjligt alternativ till Federated Learning? I det här exjobbet implementerar vi Gossip Learning med två olika modellsammanslagningstekniker. Vi designar och implementerar även två tillägg till protokollet med målet att uppnå bättre prestanda när man tränar i system där noder går ner och kommer upp. Träningsmetoderna jämförs på två uppgifter: bildklassificering på Federated Extended MNIST-datauppsättningen och tidsserieprognostisering på NN5-datauppsättningen. Dessutom har vi även experiment då noder alternerar mellan att vara tillgängliga och otillgängliga. Vi finner att Gossip Learning presterar marginellt bättre i miljöer då noder alltid är tillgängliga men är kraftigt överträffade i miljöer då noder alternerar mellan att vara tillgängliga och otillgängliga.
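A minimal sketch of one gossip round with parameter-averaging merges, under the assumption of a toy quadratic objective and a fixed churn pattern; it is not the thesis's implementation or either of its merging strategies.

```python
# Illustrative sketch (not the thesis's protocol): each available peer takes a local
# gradient step, then pushes its model to a random available neighbour, which merges
# by simple parameter averaging. Churned peers skip the round entirely.
import random
import numpy as np

def local_step(weights, grad_fn, lr=0.1):
    return weights - lr * grad_fn(weights)

def gossip_round(models, available, grad_fn, rng=random.Random(0)):
    peers = [i for i, up in enumerate(available) if up]
    for i in peers:
        models[i] = local_step(models[i], grad_fn)
        j = rng.choice([p for p in peers if p != i])
        models[j] = 0.5 * (models[j] + models[i])     # merge by averaging
    return models

grad = lambda w: 2 * (w - 1.0)                        # toy quadratic objective
models = [np.zeros(3) for _ in range(5)]
available = [True, True, False, True, True]           # peer 2 has churned (assumed)
for _ in range(20):
    models = gossip_round(models, available, grad)
print([np.round(m, 2) for m in models])               # churned peer never improves
```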
107

Federated Learning in Large Scale Networks : Exploring Hierarchical Federated Learning / Federerad Inlärning i Storskaliga Nätverk : Utforskande av Hierarkisk Federerad Inlärning

Eriksson, Henrik January 2020 (has links)
Federated learning faces a challenge when dealing with highly heterogeneous data, and it can sometimes be inadequate to adopt an approach where a single model is trained for usage at all nodes in the network. Different approaches have been investigated to overcome this issue, such as adapting the trained model to each node, or clustering the nodes in the network and training a different model for each cluster, where the data is less heterogeneous. In this work we study the possibilities to improve the local model performance utilizing the hierarchical setup that comes with clustering the participating clients in the network. Experiments are carried out featuring a Long Short-Term Memory network to perform time series forecasting to evaluate different approaches utilizing the hierarchical setup and comparing them to standard federated learning approaches. The experiments are done using a dataset collected by Ericsson AB consisting of handovers recorded at base stations in a European city. The hierarchical approaches did not show any benefit over common two-level approaches. / Federated Learning står inför en utmaning när det gäller att hantera data med en hög grad av heterogenitet och det kan i vissa fall vara olämpligt att använda sig av en approach där en och samma modell är tränad för att användas av alla noder i nätverket. Olika approacher för att hantera detta problem har undersökts, som att anpassa den tränade modellen till varje nod och att klustra noderna i nätverket och träna en egen modell för varje kluster inom vilket datan är mindre heterogen. I detta arbete studeras möjligheterna att förbättra prestandan hos de lokala modellerna genom att dra nytta av den hierarkiska anordning som uppstår när de deltagande noderna i nätverket grupperas i kluster. Experiment är utförda med ett Long Short-Term Memory-nätverk för att utföra tidsserieprognoser för att utvärdera olika approacher som drar nytta av den hierarkiska anordningen och jämför dem med vanliga federated learning-approacher. Experimenten är utförda med ett dataset insamlat av Ericsson AB. Det består av "handovers" från basstationer i en europeisk stad. De hierarkiska approacherna visade inga fördelar jämfört med de vanliga två-nivåapproacherna.
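The two-level (hierarchical) aggregation described above can be sketched as FedAvg applied twice, first within each cluster and then across clusters; the clusters, updates, and sample counts below are assumed values, not the Ericsson handover data.

```python
# Illustrative sketch (not the thesis's setup): hierarchical aggregation where client
# updates are first averaged within their cluster, and the cluster models are then
# averaged into a global model, weighted by cluster sample counts.
import numpy as np

def fedavg(updates, sample_counts):
    total = sum(sample_counts)
    return sum((n / total) * u for u, n in zip(updates, sample_counts))

clusters = {                                   # cluster id -> list of (update, n_samples)
    "city_center": [(np.array([1.0, 2.0]), 100), (np.array([1.2, 1.8]), 50)],
    "suburbs":     [(np.array([3.0, 0.5]), 200)],
}
cluster_models, cluster_sizes = [], []
for members in clusters.values():
    ups, ns = zip(*members)
    cluster_models.append(fedavg(ups, ns))     # level 1: within-cluster FedAvg
    cluster_sizes.append(sum(ns))
global_model = fedavg(cluster_models, cluster_sizes)   # level 2: across clusters
print(global_model)
```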
108

Análise de redes sociais de colaboração científica no ambiente de uma federação de bibliotecas digitais / Social network analysis of scientific collaboration in the environment of a digital libraries federation.

Martins, Dalton Lopes 29 October 2012 (has links)
A produção científica de uma área do conhecimento aparece em diferentes formatos e é disponibilizada de forma essencialmente distribuída por entre revistas, anais, teses, dissertações e outros formatos característicos utilizados pela comunidade científica para a sistematização de seu discurso. Uma federação de bibliotecas digitais oferece uma arquitetura da informação que tem por finalidade facilitar a agregação de diferentes tipos de documentos disponibilizados, facilitando termos acesso a esses documentos, bem como a seus metadados descritores, formando, desse modo, verdadeiras estruturas de apoio ao desenvolvimento de pesquisas e análises científicas dos documentos que por ali circulam. Já a análise de redes sociais vem se mostrando um importante objeto de pesquisa da área da Ciência da Informação nas últimas décadas, tendo sido apropriada ainda de forma preliminar pela comunidade científica brasileira. Como forma de ampliar o conhecimento e experimentações com o uso da análise de redes sociais e identificar seu potencial analítico em relação ao que poderíamos coletar de informações de uma federação de bibliotecas digitais, tivemos por objetivo neste trabalho utilizar a análise de rede para mapear os padrões, tendências e estratégias de conectividade de dois planos de relacionamento entre pesquisadores: a coautoria em documentos oriundos de revistas científicas e a participação em bancas de defesas de teses e dissertações. Além disso, buscamos mapear as causas sociais e políticas dos padrões de rede identificados, colocando em evidência um uso crítico e contextualizado dos indicadores estruturais e dinâmicos de redes utilizados neste trabalho. Utilizamos como caso a biblioteca digital federada Univerciencia.org, uma biblioteca especializada na área de Ciências da Comunicação, tendo fornecido como fonte de dados 49 revistas científicas da área com 9864 documentos e 12 bibliotecas digitais de teses e dissertações com 1961 documentos. Os resultados apontam que os movimentos geradores e constituintes das redes sociais em nossos dois planos de análise são fortemente determinados por uma racionalidade característica da política científica do campo da Comunicação e da ciência de modo geral. / The scientific production of an area of knowledge appears in different formats and is made available in an essentially distributed way, through journals, proceedings, theses, dissertations and other typical formats used by the scientific community to systematize its discourse. A federation of digital libraries offers an information architecture that aims to facilitate the aggregation of different types of documents made available, easing access to those documents and to their descriptive metadata, thus forming real structures to support the development of research and scientific analysis of the documents that circulate through it. Social network analysis, in turn, has proven to be an important research subject in Information Science in recent decades, although it has so far been taken up only in a preliminary way by the Brazilian scientific community. As a way to broaden knowledge and experimentation with the use of social network analysis, and to identify its analytical potential with respect to the information we could collect from a federation of digital libraries, the objective of this work was to use network analysis to map the patterns, trends and connectivity strategies of two planes of relationships between researchers: co-authorship of documents from scientific journals and participation in defenses of theses and dissertations. Furthermore, we sought to map the social and political causes of the network patterns identified, highlighting a critical and contextualized use of the structural and dynamic network indicators used in this work. We use the Univerciencia.org federated digital library as a case, a library specialized in the field of Communication Sciences, which provided as data sources 49 scientific journals in the area, with 9864 documents, and 12 digital libraries of theses and dissertations, with 1961 documents. The results show that the generative and constitutive movements of the social networks in our two planes of analysis are strongly determined by a rationality characteristic of science policy in the field of Communication and of science in general.
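A small sketch of how a co-authorship network and a few structural indicators might be computed from harvested metadata; the records and author names are made up, and this is not the thesis's actual pipeline or indicator set.

```python
# Illustrative sketch (not the thesis's pipeline): build a co-authorship graph from
# document metadata and compute simple structural indicators with networkx.
import itertools
import networkx as nx

records = [                                   # hypothetical harvested metadata
    {"title": "Paper A", "authors": ["Silva", "Souza", "Lima"]},
    {"title": "Paper B", "authors": ["Silva", "Costa"]},
    {"title": "Thesis C", "authors": ["Souza", "Lima"]},
]

G = nx.Graph()
for rec in records:
    for a, b in itertools.combinations(rec["authors"], 2):
        w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)            # edge weight = number of co-authored works

print("density:", round(nx.density(G), 3))
print("degree centrality:", nx.degree_centrality(G))
print("betweenness:", nx.betweenness_centrality(G))
```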
109

Beyond relational: a database architecture and federated query optimization in a multi-modal healthcare environment

Hylock, Ray Hales 01 May 2013 (has links)
Over the past thirty years, clinical research has benefited substantially from the adoption of electronic medical record systems. As deployment has increased, so too has the number of researchers seeking to improve the overall analytical environment by way of tools and models. Although much work has been done, there are still many uninvestigated areas, two of which are explored in this dissertation. The first pertains to the physical storage of the data itself. There are two generally accepted storage models: relational and entity-attribute-value (EAV). For clinical data, EAV systems are preferred due to their natural way of managing many-to-many relationships, sparse attributes, and dynamic processes along with minimal conversion effort and reduction in federation complexities. However, the relational database management systems on which they are implemented are not intended to organize and retrieve data in this format, eroding their performance gains. To combat this effect, we present the foundation for an EAV Database Management System (EDBMS). We discuss data conversion methodologies, formulate the requisite metadata and partitioned type-sensing index structures, and provide detailed runtime and experimental analysis with five extant methods. Our results show that the prototype, EAVDB, reduces space and conversion requirements while enhancing overall query performance. The second topic concerns query performance in a federated environment. One method used to decrease query execution time is to pre-compute and store "beneficial" queries (views). The View Selection Problem (VSP) identifies these views subject to resource constraints. A federated model, however, has yet to be developed. In this dissertation, we submit three advances in view materialization. First, a more robust optimization function, the Minimum-Maintenance View Selection Problem (MMVSP), is derived by combining existing approaches. Second, the Federated View Selection Problem (FVSP), built upon the MMVSP, and the federated data cube lattice are formalized. The FVSP allows for multiple querying nodes, partial and full materialization, and data propagation constriction. The latter two are shown to greatly reduce the overall number of valid solutions within the solution space, and thus a novel, multi-tiered approach is given. Lastly, EAV materialization, which is introduced in this dissertation, is incorporated into an expanded, multi-modal variant of the FVSP. As models and heuristics for both the federated and EAV VSP, to the best of our knowledge, do not exist, this research defines two new branches of data warehouse optimization. Coupled with our EDBMS design, this dissertation confronts two main challenges associated with clinical data warehousing and federation.
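A minimal sketch of the EAV storage model discussed above, using SQLite: rows hold (entity, attribute, value) triples, and a pivot query with conditional aggregation reassembles one row per patient. The schema, attributes, and data are assumed for illustration and are not the EAVDB prototype.

```python
# Illustrative sketch (not the EAVDB prototype): an entity-attribute-value table and a
# pivot query that reconstructs a relational-style row per patient, showing why naive
# EAV retrieval grows in cost with the number of attributes requested.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE eav (entity TEXT, attribute TEXT, value TEXT)")
con.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    ("patient_1", "age", "54"),
    ("patient_1", "diagnosis", "hypertension"),
    ("patient_2", "age", "37"),
    ("patient_2", "lab_hba1c", "6.1"),          # sparse attribute: only patient_2 has it
])

rows = con.execute("""
    SELECT entity,
           MAX(CASE WHEN attribute = 'age'       THEN value END) AS age,
           MAX(CASE WHEN attribute = 'diagnosis' THEN value END) AS diagnosis,
           MAX(CASE WHEN attribute = 'lab_hba1c' THEN value END) AS lab_hba1c
    FROM eav GROUP BY entity
""").fetchall()
print(rows)
```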
110

A federated approach to enterprise integration

Fernandez, George, gfernandez@rmit.edu.au January 2006 (has links)
In order to remain competitive, the integration of their information systems is an imperative for many large organisations. Applications that originally have been developed independently are now required to interoperate to support new or different functions of the enterprise. Although the technology provides the mechanisms for application interoperation, due to the sheer number and complexity of the running systems, integration solutions (centralised or distributed) appropriate at the local level do not translate successfully to the whole enterprise. Centralised integration approaches often satisfy only some of the integration requirements; they are very expensive, and are fraught with danger since they imply an "all or nothing" approach. Distributed approaches, on the other hand, suffer from complexity and scalability problems as the number of system interfaces to be implemented and the number of execution-time invocations grows with the number of component applications. This dissertation makes a contribution to the field of Enterprise Application Integration (EAI) within the framework of distributed systems technology. Based on experience from real-life case studies, we present here a federated approach that controls the size and complexity of the integration effort by reusing existing systems as much as possible and reducing the number of interacting applications. Only selected local elements are exposed to the organisational milieu, and a consistent supporting infrastructure is provided to make systems interactions possible. Our approach provides a flexible and scalable strategy to enterprise integration, avoiding the shortcomings of traditional approaches. We respect existing organisational structures, and demonstrate how appropriate federation infrastructure and protocols enable the interoperation of existing systems. The three main facets of enterprise knowledge are systematically incorporated into the integration effort: a) by the use of domain ontologies to support data integration; b) by the development of a methodology to include business rules; and c) by the development of FEW, a federated workflow model to implement the business processes of the organisation.
