11. FEDERATED LEARNING AMIDST DYNAMIC ENVIRONMENTS. Bhargav Ganguly (19119859). 08 November 2024.
Federated Learning (FL) is a prime example of a large-scale distributed machine learning framework that has emerged as a result of the exponential growth in data generation and processing capabilities on smart devices. This framework enables the efficient processing and analysis of vast amounts of data, leveraging the collective power of numerous devices to achieve unprecedented scalability and performance. In the FL framework, each end-user device trains a local model using its own data. Through the periodic synchronization of local models, FL achieves a global model that incorporates the insights from all participating devices. This global model can then be used for various applications, such as predictive analytics, recommendation systems, and more.

Despite its potential, traditional FL frameworks face significant hurdles in real-world applications. These challenges stem from two primary issues: the dynamic nature of data distributions and the efficient utilization of network resources in diverse settings. Traditional FL frameworks often rely on the assumption that data distributions remain stationary over time. However, real-world environments are inherently dynamic, with data distributions constantly evolving, which in turn becomes a potential source of temporal heterogeneity in FL. Another significant challenge in traditional FL frameworks is the efficient use of network resources in heterogeneous settings. Real-world networks consist of devices with varying computational capabilities, communication protocols, and network conditions. Traditional FL frameworks often struggle to adapt to these diverse, spatially heterogeneous settings, leading to inefficient use of network resources and increased latency.

The primary focus of this thesis is to investigate algorithmic frameworks that can mitigate the challenges posed by temporal and spatial system heterogeneities in FL. One of the significant sources of temporal heterogeneity in FL is the dynamic drifting of client datasets over time, whereas spatial heterogeneities broadly subsume the diverse computational capabilities and network conditions of devices in a network. We introduce two novel FL frameworks: MASTER-FL, which addresses model staleness in the presence of temporally drifting datasets, and Cooperative Edge-Assisted Dynamic Federated Learning (CE-FL), which manages both spatial and temporal heterogeneities in extensive hierarchical FL networks. MASTER-FL is specifically designed to ensure that the global model remains accurate and up-to-date even in environments characterized by rapidly changing datasets across time. CE-FL, on the other hand, leverages server-side computing capabilities, intelligent data offloading, floating aggregation, and cooperative learning strategies to manage the diverse computational capabilities and network conditions often associated with modern FL systems.
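The local-training-and-periodic-aggregation loop described above can be illustrated with a minimal, hypothetical sketch; the client objective, the weighting scheme, and all names below are illustrative assumptions and are not taken from MASTER-FL or CE-FL.

```python
import numpy as np

def local_train(w, X, y, lr=0.1, epochs=5):
    """A client's local update: a few gradient steps of least-squares regression on its own data."""
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_round(w_global, client_data):
    """One FL round: broadcast the global model, train locally, then average by data size."""
    local_models = [local_train(w_global.copy(), X, y) for X, y in client_data]
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    weights = sizes / sizes.sum()
    return sum(wk * m for wk, m in zip(weights, local_models))

# Toy usage with three clients; in a dynamic environment each client's (X, y)
# could drift between rounds, which is the temporal heterogeneity discussed above.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
w = np.zeros(3)
for _ in range(10):
    w = federated_round(w, clients)
```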
12. Federated Learning for Natural Language Processing using Transformers / Evaluering av Federerad Inlärning tillämpad på Transformers för klassificering av analytikerrapporter. Kjellberg, Gustav. January 2022.
The use of Machine Learning (ML) in business has increased significantly over the past years. Creating high-quality and robust models requires a lot of data, which is at times infeasible to obtain. As more people become concerned about their data being misused, data privacy regulations are increasingly strengthened. In 2018, the General Data Protection Regulation (GDPR) came into force within the EU. Models that are trained on sensitive or personal data need to obtain that data in accordance with regulatory rules such as the GDPR. Another data-related issue is that enterprises that wish to collaborate on model building face problems when this requires them to share their private corporate data [36, 38]. In this thesis we investigate how one might overcome the issue of directly accessing private data when training ML models by employing Federated Learning (FL) [38]. The concept of FL is to allow several silos, i.e. separate parties, to train models with the same objective on their local data, and then to create a central model from the learned model parameters. The objective of the central model is to capture the information learned by the separate models without ever accessing the raw data itself. This is achieved by averaging the separate models' weights into the central model. FL thus facilitates training a model on large amounts of data from several sources, without the need of having access to the data itself. If a model created with this methodology is not significantly worse than a model trained on the raw data, then positive effects such as strengthened data privacy, cross-enterprise collaboration, and more become attainable. In this work we use a financial data set consisting of 25,242 equity research reports, provided by Skandinaviska Enskilda Banken (SEB). Each report has a recommendation label, either Buy, Sell or Hold, making this a multi-class classification problem. To evaluate the feasibility of FL we fine-tune the pre-trained Transformer model AlbertForSequenceClassification [37] on the classification task. We create one baseline model using the entire data set and an FL model with different experimental settings, in which the data is distributed both uniformly and non-uniformly; the baseline model is used to benchmark the FL model. Our results indicate that the best FL setting suffers only a small reduction in performance: the baseline model achieves an accuracy of 83.5% compared to 82.8% for the best FL model setting. Further, we find that performance worsens as the number of clients increases, while the FL model is not sensitive to non-uniform data distributions. All in all, we show that FL results in slightly worse generalisation compared to the baseline model, while strongly improving on data privacy, as the central model never accesses the clients' data.
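The weight-averaging step described in the abstract, in which each silo trains on its own reports and only the learned parameters are merged into the central model, can be sketched roughly as follows. This is a generic FedAvg-style average of PyTorch state dicts under the assumption that every silo fine-tunes the same architecture (for example AlbertForSequenceClassification); the function names and dataset sizes are illustrative.

```python
import torch

def average_state_dicts(state_dicts, sizes):
    """Merge silo models by weighting each parameter tensor by its silo's dataset size."""
    total = float(sum(sizes))
    merged = {}
    for key in state_dicts[0]:
        # Non-float buffers (e.g. integer position ids) may need special handling in practice.
        merged[key] = sum(sd[key].float() * (n / total) for sd, n in zip(state_dicts, sizes))
    return merged

# Hypothetical usage: silo_models are locally fine-tuned copies of the same model.
# central_state = average_state_dicts([m.state_dict() for m in silo_models],
#                                     sizes=[8000, 9000, 8242])
# central_model.load_state_dict(central_state)
```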
13. Parallel and Decentralized Algorithms for Big-data Optimization over Networks. Amir Daneshmand (11153640). 22 July 2021.
Recent decades have witnessed the rise of a data deluge generated by heterogeneous sources, e.g., social networks, streaming, and marketing services, which has naturally created a surge of interest in the theory and applications of large-scale convex and non-convex optimization. For example, real-world instances of statistical learning problems such as deep learning and recommendation systems can generate sheer volumes of spatially/temporally diverse data (up to petabytes in commercial applications) with millions of decision variables to be optimized. Such problems are often referred to as big-data problems. Solving these problems by standard optimization methods demands an intractable amount of centralized storage and computational resources, which is infeasible; overcoming this limitation is the foremost purpose of the parallel and decentralized algorithms developed in this thesis.

This thesis consists of two parts: (I) Distributed Nonconvex Optimization and (II) Distributed Convex Optimization.

In Part (I), we start by studying a winning paradigm in big-data optimization, the Block Coordinate Descent (BCD) algorithm, which ceases to be effective when problem dimensions grow overwhelmingly. In particular, we consider a general family of constrained non-convex composite large-scale problems defined on multicore computing machines equipped with shared memory. We design a hybrid deterministic/random parallel algorithm that efficiently solves such problems by synergistically combining Successive Convex Approximation (SCA) with greedy/random dimensionality-reduction techniques. We provide theoretical and empirical results showing the efficacy of the proposed scheme in the face of huge-scale problems. The next step is to broaden the network setting to general mesh networks modeled as directed graphs, for which we propose a class of gradient-tracking based algorithms with global convergence guarantees to critical points of the problem. We further explore the geometry of the landscape of the non-convex problems to establish second-order guarantees, strengthening our convergence results from local to global optimal solutions for a wide range of machine learning problems.

In Part (II), we focus on a family of distributed convex optimization problems defined over meshed networks. Relevant state-of-the-art algorithms often consider limited problem settings and exhibit pessimistic communication complexities with respect to their centralized variants, which raises an important question: can one achieve the rate of centralized first-order methods over networks, and moreover, can one improve upon their communication costs by using higher-order local solvers? To answer these questions, we propose an algorithm that utilizes surrogate objective functions in local solvers (hence going beyond first-order realms such as proximal gradient) coupled with a perturbed (push-sum) consensus mechanism that aims to track the gradient of the central objective function locally. The algorithm is proved to match the convergence rate of its centralized counterparts, up to multiplicative network factors. In particular, when considering Empirical Risk Minimization (ERM) problems with statistically homogeneous data across the agents, our algorithm employing high-order surrogates provably achieves faster rates than what is achievable by first-order methods. Such improvements are made without exchanging any Hessian matrices over the network.
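The gradient-tracking idea that underpins both the Part (I) algorithms and the push-sum tracker of Part (II) can be sketched as follows. This is a minimal illustration assuming an undirected network with a doubly stochastic mixing matrix W (the thesis also handles directed graphs via push-sum); each agent descends along a local variable y_i that tracks the network-wide average gradient.

```python
import numpy as np

def gradient_tracking(grad_fns, x0, W, alpha=0.01, iters=500):
    """Decentralized optimization with gradient tracking over a mixing matrix W."""
    n = len(grad_fns)
    x = np.tile(x0, (n, 1)).astype(float)            # each row is one agent's iterate
    g = np.array([grad_fns[i](x[i]) for i in range(n)])
    y = g.copy()                                      # trackers start at the local gradients
    for _ in range(iters):
        x_new = W @ x - alpha * y                     # consensus step plus descent along the tracker
        g_new = np.array([grad_fns[i](x_new[i]) for i in range(n)])
        y = W @ y + g_new - g                         # keep y tracking the average gradient
        x, g = x_new, g_new
    return x.mean(axis=0)
```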
Finally, we focus on the ill-conditioning issue that impacts the efficiency of decentralized first-order methods over networks, rendering them impractical both in terms of computation and communication cost. A natural solution is to develop distributed second-order methods, but their requirement for Hessian information incurs substantial communication overheads on the network. To work around such exorbitant communication costs, we propose a "statistically informed" preconditioned cubic regularized Newton method that provably improves upon the rates of first-order methods. The proposed scheme does not require communication of Hessian information over the network and yet achieves the iteration complexity of centralized second-order methods up to the statistical precision. In addition, the (second-order) approximate nature of the utilized surrogate functions improves upon the per-iteration computational cost of our earlier scheme proposed in this setting.
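A rough sketch of the "statistically informed" preconditioning idea follows: when data are statistically homogeneous across agents, an agent's local Hessian approximates the global one, so it can precondition a consensus estimate of the global gradient without any Hessian being exchanged. The plain damped Newton-type step below is a deliberate simplification of the cubic-regularized scheme described in the thesis, and all names are illustrative.

```python
import numpy as np

def locally_preconditioned_step(x, global_grad_estimate, local_hessian, ridge=1e-3):
    """One Newton-type step that uses only locally computed curvature.

    global_grad_estimate would come from a consensus/tracking mechanism; the
    Hessian itself never leaves the agent, avoiding quadratic communication cost.
    """
    H = local_hessian + ridge * np.eye(len(x))
    return x - np.linalg.solve(H, global_grad_estimate)
```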
14. DISTRIBUTED MACHINE LEARNING OVER LARGE-SCALE NETWORKS. Frank Lin (16553082). 18 July 2023.
The swift emergence and wide-ranging utilization of machine learning (ML) across various industries, including healthcare, transportation, and robotics, have underscored the escalating need for efficient, scalable, and privacy-preserving solutions. Recognizing this, we present an integrated examination of three novel frameworks, each addressing different aspects of distributed learning and privacy issues: Two Timescale Hybrid Federated Learning (TT-HF), Delay-Aware Federated Learning (DFL), and Differential Privacy Hierarchical Federated Learning (DP-HFL). TT-HF introduces a semi-decentralized architecture that combines device-to-server and device-to-device (D2D) communications. Devices execute multiple stochastic gradient descent iterations on their datasets and sporadically synchronize model parameters via D2D communications. A unique adaptive control algorithm optimizes step size, D2D communication rounds, and global aggregation period to minimize network resource utilization and achieve a sublinear convergence rate. TT-HF outperforms conventional FL approaches in terms of model accuracy, energy consumption, and resilience against outages. DFL focuses on enhancing distributed ML training efficiency by accounting for communication delays between edge and cloud. It also uses multiple stochastic gradient descent iterations and periodically consolidates model parameters via edge servers. The adaptive control algorithm for DFL mitigates energy consumption and edge-to-cloud latency, resulting in faster global model convergence, reduced resource consumption, and robustness against delays. Lastly, DP-HFL is introduced to combat privacy vulnerabilities in FL. Merging the benefits of FL and Hierarchical Differential Privacy (HDP), DP-HFL significantly reduces the need for differential privacy noise while maintaining model performance, exhibiting an optimal privacy-performance trade-off. Theoretical analysis under both convex and nonconvex loss functions confirms DP-HFL's effectiveness regarding convergence speed, privacy-performance trade-off, and potential performance enhancement with appropriate network configuration. In sum, the study thoroughly explores TT-HF, DFL, and DP-HFL, and their unique solutions to distributed learning challenges such as efficiency, latency, and privacy concerns. These advanced FL frameworks have considerable potential to further enable effective, efficient, and secure distributed learning.
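The differential-privacy mechanism in DP-HFL can be illustrated, in simplified form, by the standard Gaussian mechanism applied to clipped client updates before edge-level aggregation; the calibration of the noise to a formal (epsilon, delta) budget and its allocation across the hierarchy are omitted from this sketch, and all names are illustrative.

```python
import numpy as np

def dp_client_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a client's model update and add Gaussian noise before sharing it."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

def edge_aggregate(privatized_updates):
    """An edge server averages its clients' privatized updates before the cloud round."""
    return np.mean(privatized_updates, axis=0)
```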
15. Investigation of Backdoor Attacks and Design of Effective Countermeasures in Federated Learning. Agnideven Palanisamy Sundar (11190282). 03 September 2024.
Federated Learning (FL), a novel subclass of Artificial Intelligence, decentralizes the learning process by enabling participants to benefit from a comprehensive model trained on a broader dataset without direct sharing of private data. This approach integrates multiple local models into a global model, mitigating the need for large individual datasets. However, the decentralized nature of FL increases its vulnerability to adversarial attacks. These include backdoor attacks, which subtly alter classification in some categories, and Byzantine attacks, aimed at degrading the overall model accuracy. Detecting and defending against such attacks is challenging, as adversaries can participate in the system, masquerading as benign contributors. This thesis provides an extensive analysis of the various security attacks, highlighting the distinct elements of each and the inherent vulnerabilities of FL that facilitate these attacks. The focus is primarily on backdoor attacks, which are stealthier and more difficult to detect than Byzantine attacks. We explore defense strategies effective in identifying malicious participants or mitigating attack impacts on the global model. The primary aim of this research is to evaluate the effectiveness and limitations of existing server-level defenses and to develop innovative defense mechanisms under diverse threat models. This includes scenarios where the server collaborates with clients to thwart attacks, cases where the server remains passive but benign, and situations where no server is present, requiring clients to independently minimize and isolate attacks while enhancing main task performance. Throughout, we ensure that the interventions do not compromise the performance of both global and local models. The research predominantly utilizes 2D and 3D datasets to underscore the practical implications and effectiveness of proposed methodologies.
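As a concrete illustration of the kind of server-level defense this line of work evaluates, the sketch below shows two generic robust aggregators, coordinate-wise median and norm-clipped averaging, which bound how much a minority of poisoned updates can shift the global model. These are standard baselines, not the specific countermeasures proposed in the thesis.

```python
import numpy as np

def median_aggregate(client_updates):
    """Coordinate-wise median: far less sensitive to a few backdoored updates than the mean."""
    return np.median(np.stack(client_updates, axis=0), axis=0)

def clipped_mean_aggregate(client_updates, clip=1.0):
    """Clip each update's norm before averaging to bound any single client's influence."""
    clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12)) for u in client_updates]
    return np.mean(clipped, axis=0)
```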