
SIGNAL PROCESSING TECHNIQUES FOR ENERGY EFFICIENT DISTRIBUTED LEARNING

ALIREZA DANAEE 11 January 2023 (has links)
Internet of Things (IoT) networks include smart devices that contain many sensors, allowing them to interact with the physical world by collecting and processing streaming data in real time. The total energy consumption and cost of these sensors affect the energy consumption and cost of IoT devices. The type of sensor determines the accuracy of the analog interface and the resolution of the analog-to-digital converters (ADCs). The choice of ADC resolution involves a trade-off between sensing performance and energy consumption, since the energy consumption of ADCs strongly depends on the number of bits used to represent digital samples. In this thesis, we present an energy-efficient distributed learning framework that uses coarsely quantized signals for IoT networks. In particular, we develop the distributed quantization-aware least-mean-squares (DQA-LMS) and distributed quantization-aware recursive least-squares (DQA-RLS) algorithms, which can learn parameters in an energy-efficient fashion from signals quantized with few bits while requiring low computational cost. Moreover, we develop a bias compensation strategy to further improve the performance of the proposed algorithms. We then carry out a statistical analysis of the proposed algorithms along with a computational complexity evaluation of the proposed and existing techniques. Numerical results assess the distributed quantization-aware algorithms against existing techniques for distributed parameter estimation where IoT devices operate in a peer-to-peer mode. We also introduce an energy-efficient federated learning framework using coarsely quantized signals for IoT networks, where IoT devices exchange their estimates with a server. We then develop the quantization-aware federated averaging LMS (QA-FedAvg-LMS) algorithm to perform parameter estimation at the clients and server. Furthermore, we devise a bias compensation strategy for QA-FedAvg-LMS, carry out its statistical analysis, and assess its performance against existing techniques with numerical results.
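To make the quantization-aware idea concrete, the sketch below runs a single-node LMS filter driven by coarsely quantized regressors, as in a few-bit ADC front end. This is an illustrative reduction, not the thesis's DQA-LMS: the diffusion/combination step across neighboring devices and the bias-compensation term described above are omitted, and the function names and quantizer parameters (`bits`, `x_max`) are assumptions for the example.

```python
import numpy as np

def uniform_quantize(x, bits=4, x_max=1.0):
    # Mid-rise uniform quantizer: 2**bits levels spanning [-x_max, x_max].
    delta = 2.0 * x_max / (2 ** bits)
    q = delta * (np.floor(x / delta) + 0.5)
    return np.clip(q, -x_max + delta / 2, x_max - delta / 2)

def qa_lms(x, d, bits=4, mu=0.05, order=8):
    # LMS adaptation driven by coarsely quantized regressors: the filter
    # only ever sees few-bit versions of the input samples.
    w = np.zeros(order)
    errors = []
    for n in range(order - 1, len(x)):
        u_q = uniform_quantize(x[n - order + 1:n + 1][::-1], bits)  # newest sample first
        e = d[n] - w @ u_q          # a priori estimation error
        w = w + mu * e * u_q        # stochastic-gradient (LMS) update
        errors.append(e)
    return w, np.asarray(errors)

# Toy system-identification run: recover an unknown 8-tap filter from
# 4-bit quantized observations of its input.
rng = np.random.default_rng(0)
x = 0.3 * rng.standard_normal(5000)          # keeps samples inside the quantizer range
w_true = 0.2 * rng.standard_normal(8)
d = np.convolve(x, w_true)[:len(x)]          # desired (reference) signal
w_hat, errors = qa_lms(x, d, bits=4)
```

Because the quantizer biases the regressor statistics, the converged `w_hat` is slightly biased at low resolutions; that residual bias is what the compensation strategy described in the abstract targets.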

Lite-Agro: Integrating Federated Learning and TinyML on IoAT-Edge for Plant Disease Classification

Dockendorf, Catherine April 05 1900 (has links)
Lite-Agro studies applications of TinyML to pear (Pyrus communis) tree disease identification and explores hardware implementations on an ESP32 microcontroller. The study uses the DiaMOS Pear dataset to determine through image analysis whether a leaf is healthy, classifying each leaf into curl, healthy, spot, or slug categories. The system is designed as a low-cost, light-duty edge computing detection solution and compares models such as InceptionV3, Xception, EfficientNetB0, and MobileNetV2. This work also investigates integration with federated learning frameworks and provides an introduction to federated averaging algorithms.
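As a concrete reference point for the federated averaging the study introduces, here is a minimal sketch of the FedAvg aggregation rule, in which a server averages client model parameters weighted by local dataset size. The function name and layer-list representation are illustrative assumptions, not drawn from Lite-Agro's code.

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    # FedAvg aggregation: per-layer average of client parameters,
    # weighting each client by its share of the total training data.
    total = float(sum(client_sizes))
    return [
        sum(params[k] * (n / total)
            for params, n in zip(client_params, client_sizes))
        for k in range(len(client_params[0]))
    ]

# Example: three clients, each holding two parameter tensors
# (e.g. a dense layer's kernel and bias).
rng = np.random.default_rng(0)
clients = [[rng.standard_normal((4, 2)), rng.standard_normal(2)] for _ in range(3)]
global_params = fed_avg(clients, client_sizes=[120, 300, 80])
```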

Towards causal federated learning: a federated approach to learning representations using causal invariance

Francis, Sreya 10 1900 (has links)
Federated Learning is an emerging privacy-preserving distributed machine learning approach to building a shared model by performing distributed training locally on participating devices (clients) and aggregating the local models into a global one. As this approach prevents data collection and aggregation, it helps in reducing associated privacy risks to a great extent. However, the data samples across all participating clients are usually not independent and identically distributed (non-i.i.d.), and Out of Distribution (OOD) generalization for the learned models can be poor. Besides this challenge, federated learning also remains vulnerable to various attacks on security wherein a few malicious participating entities work towards inserting backdoors, degrading the generated aggregated model as well as inferring the data owned by participating entities. In this work, we propose an approach for learning invariant (causal) features common to all participating clients in a federated learning setup and analyse empirically how it enhances the Out of Distribution (OOD) accuracy as well as the privacy of the final learned model. Although Federated Learning allows for participants to contribute their local data without revealing it, it faces issues in data security and in accurately paying participants for quality data contributions. In this report, we also propose an EOS Blockchain design and workflow to establish data security, a novel validation error based metric upon which we qualify gradient uploads for payment, and implement a small example of our Blockchain Causal Federated Learning model to analyze its performance with respect to robustness, privacy and fairness in incentivization. / L’apprentissage fédéré est une approche émergente d’apprentissage automatique distribué préservant la confidentialité pour créer un modèle partagé en effectuant une formation distribuée localement sur les appareils participants (clients) et en agrégeant les modèles locaux en un modèle global. Comme cette approche empêche la collecte et l’agrégation de données, elle contribue à réduire dans une large mesure les risques associés à la vie privée. Cependant, les échantillons de données de tous les clients participants sont généralement pas indépendante et distribuée de manière identique (non-i.i.d.), et la généralisation hors distribution (OOD) pour les modèles appris peut être médiocre. Outre ce défi, l’apprentissage fédéré reste également vulnérable à diverses attaques contre la sécurité dans lesquelles quelques entités participantes malveillantes s’efforcent d’insérer des portes dérobées, dégradant le modèle agrégé généré ainsi que d’inférer les données détenues par les entités participantes. Dans cet article, nous proposons une approche pour l’apprentissage des caractéristiques invariantes (causales) communes à tous les clients participants dans une configuration d’apprentissage fédérée et analysons empiriquement comment elle améliore la précision hors distribution (OOD) ainsi que la confidentialité du modèle appris final. Bien que l’apprentissage fédéré permette aux participants de contribuer leurs données locales sans les révéler, il se heurte à des problèmes de sécurité des données et de paiement précis des participants pour des contributions de données de qualité. 
Dans ce rapport, nous proposons également une conception et un flux de travail EOS Blockchain pour établir la sécurité des données, une nouvelle métrique basée sur les erreurs de validation sur laquelle nous qualifions les téléchargements de gradient pour le paiement, et implémentons un petit exemple de notre modèle d’apprentissage fédéré blockchain pour analyser ses performances.
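The invariance objective described here resembles Invariant Risk Minimization, where each client adds a penalty measuring how far its local risk is from being optimal under a shared scalar classifier. The sketch below shows an IRMv1-style penalty for a squared-error risk; treating each client as an IRM "environment" is our illustrative reading of the abstract, and the names `irm_penalty` and `client_objective` are hypothetical.

```python
import numpy as np

def irm_penalty(preds, y):
    # IRMv1-style penalty for a squared-error risk: the squared gradient of
    # risk(w) = mean((w * preds - y)**2) w.r.t. a scalar multiplier, at w = 1.
    grad = np.mean(2.0 * (preds - y) * preds)
    return grad ** 2

def client_objective(preds, y, lam=1.0):
    # Local training objective: empirical risk plus the invariance penalty.
    # Minimizing this on every client pushes the shared representation toward
    # features whose predictive relationship is stable across clients.
    risk = np.mean((preds - y) ** 2)
    return risk + lam * irm_penalty(preds, y)
```

A penalty of zero means the client's local risk is already stationary under the shared classifier, i.e. the features behave identically on that client's distribution.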

DISTRIBUTED MACHINE LEARNING OVER LARGE-SCALE NETWORKS

Frank Lin (16553082) 18 July 2023 (has links)
The swift emergence and wide-ranging adoption of machine learning (ML) across industries such as healthcare, transportation, and robotics have underscored the escalating need for efficient, scalable, and privacy-preserving solutions. Recognizing this, we present an integrated examination of three novel frameworks, each addressing a different aspect of distributed learning and privacy: Two-Timescale Hybrid Federated Learning (TT-HF), Delay-Aware Federated Learning (DFL), and Differential Privacy Hierarchical Federated Learning (DP-HFL). TT-HF introduces a semi-decentralized architecture that combines device-to-server and device-to-device (D2D) communications. Devices execute multiple stochastic gradient descent iterations on their local datasets and sporadically synchronize model parameters via D2D communications. A unique adaptive control algorithm optimizes the step size, the number of D2D communication rounds, and the global aggregation period to minimize network resource utilization and achieve a sublinear convergence rate. TT-HF outperforms conventional FL approaches in terms of model accuracy, energy consumption, and resilience against outages. DFL focuses on enhancing distributed ML training efficiency by accounting for communication delays between edge and cloud. It likewise uses multiple stochastic gradient descent iterations and periodically consolidates model parameters via edge servers. The adaptive control algorithm for DFL mitigates energy consumption and edge-to-cloud latency, resulting in faster global model convergence, reduced resource consumption, and robustness against delays. Lastly, DP-HFL is introduced to combat privacy vulnerabilities in FL. Merging the benefits of FL and Hierarchical Differential Privacy (HDP), DP-HFL significantly reduces the differential privacy noise required while maintaining model performance, exhibiting an optimal privacy-performance trade-off. Theoretical analysis under both convex and nonconvex loss functions confirms DP-HFL's effectiveness with respect to convergence speed, the privacy-performance trade-off, and potential performance gains under appropriate network configurations. In sum, this study thoroughly explores TT-HF, DFL, and DP-HFL and their solutions to distributed learning challenges such as efficiency, latency, and privacy. These advanced FL frameworks have considerable potential to further enable effective, efficient, and secure distributed learning.
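As one concrete piece of the differential-privacy machinery that DP-HFL builds on, the sketch below applies the standard Gaussian mechanism to a client's model update before it is sent up the hierarchy: clip the update to bound its sensitivity, then add calibrated noise. The clipping norm and noise multiplier are illustrative placeholders; how DP-HFL allocates noise across hierarchy tiers is specified in the thesis, not here.

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_mult=0.8, rng=None):
    # Gaussian mechanism on a model update: clip to an L2 ball of radius
    # clip_norm (bounding per-client sensitivity), then add zero-mean
    # Gaussian noise with std noise_mult * clip_norm to every coordinate.
    rng = np.random.default_rng() if rng is None else rng
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    clipped = update * scale
    noise = rng.normal(0.0, noise_mult * clip_norm, size=update.shape)
    return clipped + noise

# Example: sanitize a flattened gradient before uploading to an edge server.
g = np.random.default_rng(1).standard_normal(64)
g_private = dp_sanitize(g, clip_norm=1.0, noise_mult=0.8)
```

The appeal of the hierarchical setting is that aggregation at edge servers already averages away part of the noise, which is why less of it is needed per client for the same privacy budget.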
