  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Over-the-Air Computation for Machine Learning: Model Aggregation via Retransmissions

Hellström, Henrik January 2022 (has links)
With the emerging Internet of Things (IoT) paradigm, more than a billion sensing devices will be collecting an unprecedented amount of data. Simultaneously, the field of data analytics is being revolutionized by modern machine learning (ML) techniques that enable sophisticated processing of massive datasets. Many researchers envision a combination of these two technologies to support exciting applications such as environmental monitoring, Industry 4.0, and vehicular communications. However, traditional wireless communication protocols are inefficient at supporting distributed ML services, where data and computations are distributed over wireless networks. This motivates the need for new wireless communication methods. One such method, over-the-air computation (AirComp), promises massive gains in energy, latency, and spectrum efficiency compared to traditional methods. The expected efficiency of AirComp stems from complete spectrum sharing among all participating devices. Unlike traditional physical-layer communications, where interference is avoided by allocating orthogonal communication channels, AirComp exploits interference to compute a function of the individually transmitted messages. However, AirComp cannot reconstruct functions perfectly; it introduces errors in the process, which harm the convergence rate and region of optimality of ML algorithms. The main objective of this thesis is to develop methods that reduce these errors and analyze their effects on ML performance. In the first part of this thesis, we consider the general problem of designing wireless methods for ML applications. In particular, we present an extensive survey that divides the field into two broad categories: digital communications and analog over-the-air computation.
Digital communications refers to orthogonal communication schemes that are optimized for ML metrics, such as classification accuracy, privacy, and data importance, rather than traditional communication metrics such as fairness, data rate, and reliability. Analog over-the-air computation refers to the AirComp method and its application to distributed ML, where communication efficiency, function estimation, and privacy are key concerns. In the second part of this thesis, we focus on the analog over-the-air computation problem. We consider a network setup with multiple devices and a server that can be reached via a single hop, where the wireless channel is modeled as a multiple-access channel with fading and additive noise. Over such a channel, the AirComp function estimate is associated with two types of error: 1) misalignment errors caused by channel fading and 2) noise-induced errors caused by the additive noise. To mitigate these errors, we propose AirComp with retransmissions and develop the optimal power control scheme for such a system. Furthermore, we use optimization theory to derive bounds on the convergence of an AirComp-supported ML system that reveal a relationship between the number of retransmissions and the loss of the ML model. Finally, numerical results show that retransmissions can significantly improve ML performance, especially in low-SNR scenarios.
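The retransmission idea above can be illustrated with a toy simulation. Under idealized assumptions (perfect power alignment, so only the additive-noise error remains), averaging M retransmissions of the superposed signal shrinks the noise variance of the sum estimate by a factor of M. The function names and parameters below are illustrative, not taken from the thesis.

```python
import random

def aircomp_round(values, noise_std, rng):
    # All devices transmit simultaneously; the channel superposes their
    # signals (ideal alignment assumed) and adds Gaussian receiver noise.
    return sum(values) + rng.gauss(0.0, noise_std)

def aircomp_estimate(values, noise_std, retransmissions, seed=0):
    # Average M retransmissions of the same superposed signal; the noise
    # variance of the sum estimate shrinks as 1/M.
    rng = random.Random(seed)
    rounds = [aircomp_round(values, noise_std, rng)
              for _ in range(retransmissions)]
    return sum(rounds) / retransmissions
```

With `noise_std=0` the estimate is the exact sum; with noise, increasing `retransmissions` drives the estimate toward it, mirroring the low-SNR benefit reported above.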
42

Resource Allocation for Federated Learning over Wireless Networks

Jansson, Fredrik January 2022 (has links)
This thesis examines resource allocation for federated learning in wireless networks. In federated learning, a server and a number of users exchange neural network parameters during training. This thesis aims to create a realistic simulation of a federated learning process by creating a channel model and using compression when channel capacity is insufficient. We show that federated learning can tolerate high ratios of sparsification compression. We also investigate how the choice of users and scheduling schemes affects the convergence speed and accuracy of the training process, and conclude that the best choice of scheduling scheme depends on how the data are distributed across users.
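Sparsification compression of the kind evaluated in this thesis can be sketched generically: each user sends only the k largest-magnitude entries of its update as (index, value) pairs, and the server re-expands them. This is a standard top-k sketch under assumed conventions, not the thesis's exact scheme.

```python
def sparsify_top_k(gradient, k):
    # Keep only the k largest-magnitude entries; the payload shrinks
    # from len(gradient) floats to k (index, value) pairs.
    top = sorted(range(len(gradient)),
                 key=lambda i: abs(gradient[i]), reverse=True)[:k]
    return {i: gradient[i] for i in sorted(top)}

def densify(sparse, length):
    # Server side: reconstruct the dense vector, zeros elsewhere.
    return [sparse.get(i, 0.0) for i in range(length)]
```

A ratio of k to the full dimension is the "ratio of sparsification compression" referred to above.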
43

Autonomic Management and Orchestration Strategies in MEC-Enabled 5G Networks

Subramanya, Tejas 26 October 2021 (has links)
5G and beyond mobile network technology promises to deliver unprecedented ultra-low latency and high data rates, paving the way for many novel applications and services. Network Function Virtualization (NFV) and Multi-access Edge Computing (MEC) are two technologies expected to play a vital role in achieving the ambitious Quality of Service requirements of such applications. While NFV provides flexibility by enabling network functions to be dynamically deployed and inter-connected to realize Service Function Chains (SFC), MEC brings the computing capability to the mobile network's edges, thus reducing latency and alleviating the transport network load. However, adequate mechanisms are needed to meet dynamically changing network service demands (i.e., in single and multiple domains) and optimally utilize the network resources while ensuring that the end-to-end latency requirement of services is always satisfied. In this dissertation work, we break the problem into three separate stages and present solutions for each of them. Firstly, we apply Artificial Intelligence (AI) techniques to drive NFV resource orchestration in MEC-enabled 5G architectures for single- and multi-domain scenarios. We propose three deep learning approaches to perform horizontal and vertical Virtual Network Function (VNF) auto-scaling: (i) Multilayer Perceptron (MLP) classification and regression (single-domain), (ii) centralized Artificial Neural Network (ANN), centralized Long Short-Term Memory (LSTM) and centralized Convolutional Neural Network-LSTM (CNN-LSTM) (single-domain), and (iii) federated ANN, federated LSTM and federated CNN-LSTM (multi-domain). We evaluate the performance of each of these deep learning models trained over a commercial network operator dataset and investigate the pros and cons of different approaches for VNF auto-scaling.
For the first approach, our results show that both the MLP classifier and MLP regressor models have strong predictive capability for auto-scaling, with the MLP regressor outperforming the MLP classifier in terms of accuracy. For the second approach (one-step prediction), CNN-LSTM performs best for the QoS-prioritized objective and LSTM performs best for the cost-prioritized objective. For the second approach (multi-step prediction), the encoder-decoder CNN-LSTM model outperforms the encoder-decoder LSTM model for both QoS- and cost-prioritized objectives. For the third approach, the federated LSTM and federated CNN-LSTM models perform comparably, and both outperform the federated ANN model. We also note that, in general, federated learning approaches perform worse than centralized learning approaches. Secondly, we employ Integer Linear Programming (ILP) techniques to formulate and solve a joint user association and SFC placement problem, where each SFC represents a service requested by a user with end-to-end latency and data rate requirements. We also develop a comprehensive end-to-end latency model considering radio delay, backhaul network delay and SFC processing delay for 5G mobile networks. We evaluated the proposed model using simulations based on a real-operator network topology and real-world latency values. Our results show that the average end-to-end latency reduces significantly when SFCs are placed at the ME hosts according to their latency and data rate demands. Furthermore, we propose a heuristic algorithm to address the issue of scalability in ILP, which can solve the above association/mapping problem in seconds rather than hours. Finally, we introduce lightMEC, a lightweight MEC platform for deploying mobile edge computing functionalities, which allows hosting of low-latency and bandwidth-intensive applications at the network edge.
Measurements conducted over a real-life testbed demonstrated that lightMEC can support practical MEC applications without requiring any change to existing mobile network nodes' functionality in the access and core network segments. The significant benefits of adopting the proposed architecture are analyzed based on a proof-of-concept demonstration of the content caching use case. Furthermore, we introduce an AI-driven Kubernetes orchestration prototype implemented on the lightMEC platform and assess the performance of the proposed deep learning models (from stage 1) in an experimental setup. The prototype evaluations confirm the simulation results achieved in stage 1 of the thesis.
44

Modern Bandit Optimization with Statistical Guarantees

Wenjie Li (17506956) 01 December 2023 (has links)
Bandit and optimization represent prominent areas of machine learning research. Despite extensive prior research on these topics in various contexts, modern challenges, such as dealing with highly unsmooth nonlinear reward objectives and incorporating federated learning, have sparked new discussions. The X-armed bandit problem is a specialized case where bandit algorithms and blackbox optimization techniques join forces to address noisy reward functions within continuous domains to minimize the regret. This thesis concentrates on the X-armed bandit problem in a modern setting. In the first chapter, we introduce an optimal statistical collaboration framework for the single-client X-armed bandit problem, expanding the range of objectives by considering more general smoothness assumptions and emphasizing tighter statistical error measures to expedite learning. The second chapter addresses the federated X-armed bandit problem, providing a solution for collaboratively optimizing the average global objective while ensuring client privacy. In the third chapter, we confront the more intricate personalized federated X-armed bandit problem, proposing an enhanced algorithm that facilitates the simultaneous optimization of all local objectives.
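As a rough illustration of the X-armed setting, the sketch below discretizes the domain [0, 1] and runs UCB1 over the cell centres of a fixed grid; real X-armed algorithms, including those studied in this thesis, refine the partition adaptively and work under weaker smoothness assumptions. All names and constants here are illustrative.

```python
import math
import random

def ucb_on_grid(reward_fn, cells=10, budget=2000, seed=0):
    # Discretize [0, 1] into equal cells and run UCB1 over the cell
    # centres against a noisy blackbox reward -- a crude stand-in for
    # the adaptive hierarchical partitioning of X-armed bandits.
    rng = random.Random(seed)
    centres = [(i + 0.5) / cells for i in range(cells)]
    counts = [0] * cells
    sums = [0.0] * cells
    for t in range(1, budget + 1):
        if t <= cells:
            arm = t - 1                       # play each cell once first
        else:                                 # UCB1 index
            arm = max(range(cells),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        sums[arm] += reward_fn(centres[arm]) + rng.gauss(0.0, 0.05)
        counts[arm] += 1
    best = max(range(cells), key=lambda i: sums[i] / counts[i])
    return centres[best]
```

The regret of such a fixed grid degrades with dimension and smoothness, which is precisely what adaptive partitioning and tighter statistical error measures address.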
45

Secure and efficient federated learning

Li, Xingyu 12 May 2023 (has links)
In the past 10 years, the growth of machine learning technology has been significant, largely due to the availability of large datasets for training. However, gathering a sufficient amount of data on a central server can be challenging. Additionally, with the rise of mobile networking and the large amounts of data generated by IoT devices, privacy and security issues have become a concern, resulting in government regulations such as GDPR, HIPAA, CCPA, and ADPPA. Under these circumstances, traditional centralized machine learning methods face a problem in that sensitive data must be kept local for privacy reasons, making it difficult to achieve the desired learning outcomes. Federated learning (FL) offers a solution by allowing a global shared model to be trained through the exchange of locally computed optima instead of the actual data. Despite its success as a natural solution for IoT machine learning, FL still faces challenges with regard to security and performance. These include high communication costs between IoT devices and the central server; the potential for sensitive information leakage and reduced model precision due to the aggregation process in the distributed IoT network; and performance concerns caused by the heterogeneity of data and devices in the network. In this dissertation, I present practical and effective techniques with strong theoretical support to address these challenges. To optimize communication resources, I introduce a new multi-server FL framework called MS-FedAvg. To enhance security, I propose a robust defense algorithm called LoMar. To address data heterogeneity, I present FedLGA, and for device heterogeneity, I propose FedSAM.
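The FedAvg-style aggregation that frameworks such as MS-FedAvg build on can be sketched in a few lines: the server averages client parameter vectors weighted by local dataset sizes. This is the generic single-server baseline, not the multi-server variant proposed in the dissertation.

```python
def fedavg(client_weights, client_sizes):
    # Weighted average of client model parameters (the FedAvg rule):
    # each flat parameter vector is weighted by its local dataset size.
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
            for j in range(dim)]
```

For example, two clients holding 1 and 3 samples contribute to the global model in a 1:3 ratio, so the aggregate tracks the larger dataset more closely.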
46

Classifying femur fractures using federated learning

Zhang, Hong January 2024 (has links)
The rarity and subtle radiographic features of atypical femoral fractures (AFF) make them difficult to distinguish radiologically from normal femoral fractures (NFF). AFF is associated with the long-term use of bisphosphonates for the treatment of osteoporosis. Automatically classifying AFF and NFF not only helps improve the diagnosis rate of AFF but also helps patients receive timely treatment. In recent years, automatic classification techniques for AFF and NFF have continued to emerge, including but not limited to convolutional neural networks (CNNs), vision transformers (ViTs), and multimodal deep learning prediction models. These methods are all based on deep learning and require centralized radiograph datasets. However, centralizing medical radiograph data raises issues such as patient privacy and data heterogeneity. First, radiograph data is difficult to share among hospitals, and relevant laws or guidelines prohibit the dissemination of these data; second, there are overall radiological differences among the datasets of different hospitals, and deep learning does not fully address the fusion of such multi-source heterogeneous datasets. Based on federated learning, we implemented a distributed deep learning strategy that avoids centralized datasets, thereby protecting the local radiograph datasets of medical institutions and patient privacy. To achieve this goal, we studied approximately 4000 images from 72 hospitals in Sweden, containing 206 AFF patients and 744 NFF patients.
By dispersing the radiograph datasets of different hospitals across 3-5 nodes, we simulate real-world data distribution scenarios, train the local models of the nodes separately, and aggregate a global model, combined with percentile privacy protection to further secure the local datasets. In addition, we compare the performance of federated learning models using different aggregation algorithms (FedAvg, FedProx, and FedOpt). The resulting federated global model outperforms the locally trained models, and its performance is close to that of the centralized learning model, even surpassing it on some metrics. We conducted 3-node and 5-node federated learning training, respectively. Limited by the dataset size at each node, 5-node federated learning shows no significant performance advantage over 3-node federated learning. Federated learning facilitates collaborative training of high-quality prediction models among medical institutions while fully protecting sensitive medical data. We believe it will become a paradigm for collaboratively training models in the foreseeable future.
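Of the aggregation algorithms compared above, FedProx differs from FedAvg mainly in the local update: each client adds a proximal term pulling its model toward the current global model, which limits drift under heterogeneous data such as radiographs from different hospitals. Below is a hedged one-step sketch with illustrative names and hyperparameters, not the thesis's implementation.

```python
def fedprox_local_step(w_local, w_global, grad, lr=0.1, mu=0.01):
    # One local SGD step with the FedProx proximal term mu*(w - w_global).
    # With mu = 0 this reduces to the plain FedAvg local update.
    return [w - lr * (g + mu * (w - wg))
            for w, wg, g in zip(w_local, w_global, grad)]
```

Larger `mu` keeps local models closer to the global one between aggregation rounds, trading local fit for stability across heterogeneous nodes.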
47

Cross-Device Federated Intrusion Detector For Early Stage Botnet Propagation

Famera, Angela Grace 03 January 2023 (has links)
No description available.
48

Decentralized Machine Learning On Blockchain: Developing A Federated Learning Based System

Sridhar, Nikhil 01 December 2023 (has links)
Traditional Machine Learning (ML) methods usually rely on a central server to perform ML tasks. However, these methods have problems like security risks, data storage issues, and high computational demands. Federated Learning (FL), on the other hand, spreads out the ML process: it trains models on local devices and then combines them centrally. While FL improves computing and customization, it still faces the same challenges as centralized ML in security and data storage. This thesis introduces a new approach combining Federated Learning and Decentralized Machine Learning (DML), which operates on an Ethereum Virtual Machine (EVM) compatible blockchain. The blockchain's security and decentralized nature help improve transparency, trust, scalability, and efficiency. The main contributions of this thesis include:
1. Redesigning a semi-centralized system with enhanced privacy and the Multi-KRUM algorithm, following the work of Shayan et al.
2. Developing a new decentralized framework that supports both standard and deep-learning FL, using the InterPlanetary File System (IPFS) and Ethereum Virtual Machine (EVM)-compatible Smart Contracts.
3. Assessing how well the system defends against common data poisoning attacks, using a version of Multi-KRUM that is better at detecting outliers.
4. Applying privacy methods to securely combine data from different sources.
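The Multi-KRUM defense referenced in contributions 1 and 3 can be sketched as follows: with n updates and at most f Byzantine ones, each update is scored by the sum of squared distances to its n - f - 2 nearest neighbours, and the m lowest-scoring (most central) updates are averaged. This is a plain-Python sketch of the published algorithm, not the thesis's exact implementation.

```python
def multi_krum(updates, f, m):
    # Score each update by the sum of squared distances to its
    # n - f - 2 nearest neighbours; outliers accumulate large scores.
    n = len(updates)

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    scores = []
    for i, u in enumerate(updates):
        dists = sorted(sq_dist(u, v) for j, v in enumerate(updates) if j != i)
        scores.append(sum(dists[:n - f - 2]))
    # Average the m most central updates; poisoned outliers are dropped.
    chosen = sorted(range(n), key=lambda i: scores[i])[:m]
    dim = len(updates[0])
    return [sum(updates[i][j] for i in chosen) / m for j in range(dim)]
```

A single poisoned update far from the honest cluster receives the highest score and is excluded from the aggregate, which is the data-poisoning defense being assessed.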
49

Federated Machine Learning for Resource Allocation in Multi-domain Fog Ecosystems

Zhang, Weilin January 2023 (has links)
The proliferation of the Internet of Things (IoT) has increasingly demanded intimacy between cloud services and end-users. This has incentivised extending cloud resources to the edge in what is deemed fog computing. The latter is manifesting as an ecosystem of connected clouds, geo-dispersed and of diverse capacities. In such conditions, workload allocation to fog services becomes a non-trivial challenge due to the complexity of the trade-offs. Users' demand at the edge is highly diverse, which does not lend itself to straightforward resource planning. Conversely, running services at the edge may leverage proximity, but it comes at higher operational cost and rapidly increases the risk of straining sparse resources. Consequently, there is a need for intelligent yet scalable allocation solutions that counter the adversity of demand at the edge, while efficiently distributing load between the edge and farther clouds. Machine learning is increasingly adopted in resource planning. However, besides privacy concerns, central learning is highly demanding, both computationally and in data supply. Instead, this thesis proposes a federated deep reinforcement learning system, based on a deep Q-learning network (DQN), for workload distribution in a fog ecosystem. The proposed solution adapts a DQN to optimize local workload allocations made by single gateways. Federated learning is incorporated to allow multiple gateways in a network to collaboratively build knowledge of users' demand. This is leveraged to establish consensus on the fraction of workload allocated to different fog nodes, using less data supply and fewer computation resources. The system performance is evaluated using a realistic demand set from the Google Cluster Workload Traces 2019. Evaluation results show over 50% reduction in failed allocations when distributing users over a larger number of gateways, given a fixed number of fog nodes.
The results further illustrate the trade-offs between performance and cost under different conditions.
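At the heart of the DQN each gateway trains is the temporal-difference update of action values; a DQN approximates with a neural network the table updated below. This is a minimal tabular sketch with illustrative states ('low'/'high' demand) and actions ('edge'/'cloud'), not the thesis's system.

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    # One temporal-difference (Q-learning) update of the action-value
    # table: move Q(s, a) toward reward + gamma * max_a' Q(s', a').
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
    return q
```

A gateway would call this after each allocation decision (e.g., reward high for a successful low-latency edge allocation, low for a failed one), and federated averaging of the learned parameters lets gateways pool their experience without sharing raw demand data.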
50

Simulating Broadband Analog Aggregation for Federated Learning

Pekkanen, Linus, Johansson, Patrik January 2020 (has links)
With increasing amounts of data coming from connecting progressively more devices, new machine learning models have risen. For wireless networks, the idea of using a distributed approach to machine learning has gained increasing popularity, where all nodes in the network participate in creating a global machine learning model by training with the local data stored at each node; one example of this approach is federated learning. However, traditional communication protocols have proven inefficient. This opens up opportunities to design new machine-learning-specific communication schemes. The concept of over-the-air computation is built on the fact that a wireless communication channel can naturally compute some linear functions, for instance the sum. If all nodes in a network transmit simultaneously to a server, the signals are aggregated before reaching the server. / Bachelor's thesis in electrical engineering 2020, KTH, Stockholm
