11 |
Dissertation_Miller_Alexander_DTECH_5APR2023.pdf
Alexander Thomas Miller (15204598), 12 April 2023
<p>The introduction and spread of counterfeit goods in medical supply chains are opportunistic activities by actors who exploit information asymmetry between themselves and the rest of the supply chain. Counterfeiters, fraudsters, and companies intent on cutting corners have asymmetric knowledge of the (substandard) quality of the products they possess; they are not obligated to share that knowledge and in fact have a disincentive to do so. This has led to a medical supply chain riddled with asymmetric information, from consumers all the way upstream to manufacturers. As described by principal-agent theory, this asymmetric information allows agents to exploit the demand and chaos in the system and act contrary to the best interest of the principal, in this case the supply chain. The problem of information asymmetry in principal-agent relationships, including those encapsulated in supply chains, is well documented in prior literature. The missing piece of research is the quantification of information asymmetry metrics and their use in assessing supply chains of goods.</p>
<p>This research explored current and proposed activities for mitigating information asymmetry, including potential applications of technology-based methods, such as distributed ledger technology, for reducing information asymmetry within medical supply chains. Five data aggregation services were searched for relevant literature, generating a final sample of 90 documents for analysis (n = 90). A qualitative meta-analysis was conducted in NVivo as exploratory research to analyze the content of the document corpus, extract key themes relevant to each research question, and then synthesize the frequencies of those themes so that information asymmetry in medical supply chains can be decomposed into agents, conditions, and contributing factors.</p>
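As a rough, purely hypothetical illustration of the theme-frequency synthesis step described in this abstract, coded excerpts could be tallied per category as sketched below; the document IDs, category names, and theme labels are invented for illustration and are not taken from the study.

```python
from collections import Counter

# Hypothetical coded excerpts: (document_id, category, theme) tuples produced by
# qualitative coding in NVivo or a similar tool. The categories mirror the
# decomposition named in the abstract: agents, conditions, and contributing factors.
coded_themes = [
    ("doc_01", "agents", "counterfeiter"),
    ("doc_01", "conditions", "demand surge"),
    ("doc_02", "agents", "wholesaler"),
    ("doc_02", "contributing factors", "weak serialization"),
    ("doc_03", "contributing factors", "weak serialization"),
]

# Synthesize the frequency of each theme within its category.
frequencies = {}
for _doc, category, theme in coded_themes:
    frequencies.setdefault(category, Counter())[theme] += 1

for category, counts in frequencies.items():
    print(category, dict(counts))
```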
12 |
Analyses and Scalable Algorithms for Byzantine-Resilient Distributed Optimization
Kananart Kuwaranancharoen (16480956), 03 July 2023
<p>The advent of advanced communication technologies has given rise to large-scale networks composed of numerous interconnected agents that need to cooperate to accomplish various tasks, such as distributed message routing, formation control, robust statistical inference, and spectrum access coordination. These tasks can be formulated as distributed optimization problems, which require agents to agree on a parameter minimizing the average of their local cost functions by communicating only with their neighbors. However, distributed optimization algorithms are typically susceptible to malicious (or "Byzantine") agents that do not follow the algorithm. This thesis offers analyses and algorithms for such scenarios. Since a malicious agent's function can be modeled as an unknown function with certain fundamental properties, we begin in the first two parts by analyzing the region containing the potential minimizers of a sum of functions. Specifically, we explicitly characterize the boundary of this region for the sum of two unknown functions with certain properties. In the third part, we develop resilient algorithms that allow correctly functioning agents to converge to a region containing the true minimizer, under the assumption that each regular agent's local cost function is convex. Finally, we present a general algorithmic framework that includes most state-of-the-art resilient algorithms. Under the additional assumption of strong convexity, we derive, for some algorithms within the framework, a geometric rate of convergence of all regular agents to a ball around the optimal solution (whose size we characterize).</p>
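The abstract does not reproduce the algorithms themselves, but a common building block in the Byzantine-resilient distributed optimization literature is coordinate-wise trimmed aggregation of neighbor states followed by a local gradient step. The sketch below is a generic illustration of that pattern with assumed names and parameters, not the algorithms developed in this thesis.

```python
import numpy as np

def resilient_consensus_step(x_i, neighbor_states, grad_i, step_size, f):
    """One generic Byzantine-resilient update for agent i (illustrative only).

    Each coordinate of the stacked states is trimmed: the f largest and f
    smallest values are discarded before averaging, bounding the influence
    of up to f Byzantine neighbors per coordinate.
    """
    states = np.vstack([x_i] + list(neighbor_states))  # include own state
    sorted_states = np.sort(states, axis=0)            # sort per coordinate
    trimmed = sorted_states[f:len(states) - f]         # drop f extremes on each side
    consensus = trimmed.mean(axis=0)                   # trimmed-mean aggregation
    return consensus - step_size * grad_i              # local gradient step

# Toy usage: 2-D decision variable, four neighbors, tolerate f = 1 outlier.
x_i = np.array([1.0, -2.0])
neighbors = [np.array([0.9, -1.8]), np.array([1.1, -2.1]),
             np.array([50.0, 50.0]),                   # adversarial state
             np.array([1.0, -2.0])]
grad_i = 2 * x_i                                       # gradient of ||x||^2 at x_i
print(resilient_consensus_step(x_i, neighbors, grad_i, 0.1, f=1))
```

Discarding the f largest and f smallest values per coordinate bounds the influence of up to f Byzantine neighbors, which is consistent with the convergence-to-a-region guarantees described in the abstract.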
13 |
Scalable Parallel Machine Learning on High Performance Computing Systems–Clustering and Reinforcement Learning
Weijian Zheng (14226626), 08 December 2022
<p>High-performance computing (HPC) and machine learning (ML) have been widely adopted by both academia and industry to address enormous data problems at extreme scales. While research has reported on the interactions of HPC and ML, achieving high performance and scalability for parallel and distributed ML algorithms remains a challenging task. This dissertation first summarizes the major challenges of applying HPC to ML applications: 1) poor performance and scalability, 2) loss of convergence rate, 3) lower quality of the trained model, and 4) a lack of performance optimization techniques designed for specific applications. Researchers can address these four challenges in the context of new ML applications; this dissertation shows how to solve them for two specific applications: 1) a clustering algorithm and 2) graph optimization algorithms that use reinforcement learning (RL).</p>
<p>For the clustering algorithm, we first propose the simulated-annealing enhanced clustering algorithm. By combining a blocked data layout with asynchronous local optimization within each thread, this algorithm achieves a convergence rate comparable to that of the K-means algorithm but with much higher performance. Experiments with synthetic and real-world datasets show that it is significantly faster than the MPI K-means library on up to 1,024 cores. However, its optimization costs (Sum of Square Error, SSE) become higher than the original costs. To tackle this problem, we devise a new algorithm called the full-step feel-the-way clustering algorithm, which takes L local steps within each block of data points and uses the first local step's results to compute accurate global optimization costs. Our results show that the full-step algorithm significantly reduces the number of global iterations needed to converge while obtaining low SSE costs, but the time spent on the local steps outweighs the benefit of the saved iterations. To improve performance, we next optimize the local-step time by incorporating a sampling-based method called reassignment-history-aware sampling. Extensive experiments with various synthetic and real-world datasets (e.g., MNIST, CIFAR-10, ENRON, and PLACES-2) show that our parallel algorithms can outperform the fastest open-source MPI K-means implementation by up to 110% on 4,096 CPU cores with comparable SSE costs.</p>
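For orientation, the sketch below shows only the baseline computation this abstract builds on: a blocked pass of K-means-style assignment and centroid update with SSE tracking. The function name, block size, and data shapes are assumptions, and the simulated-annealing and feel-the-way local steps are deliberately not reproduced here.

```python
import numpy as np

def blocked_kmeans_pass(points, centroids, block_size=1024):
    """One serial assignment/update pass over the data in contiguous blocks.

    A sketch of the baseline the abstract builds on (blocked data layout plus
    SSE tracking); it does not implement the simulated-annealing or
    feel-the-way local optimization steps.
    """
    k, dim = centroids.shape
    sums = np.zeros((k, dim))
    counts = np.zeros(k)
    sse = 0.0
    for start in range(0, len(points), block_size):
        block = points[start:start + block_size]
        # Squared distance from every point in the block to every centroid.
        d2 = ((block[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        sse += d2[np.arange(len(block)), labels].sum()   # Sum of Square Error
        for c in range(k):
            members = block[labels == c]
            sums[c] += members.sum(axis=0)
            counts[c] += len(members)
    safe_counts = np.maximum(counts, 1)[:, None]
    new_centroids = np.where(counts[:, None] > 0, sums / safe_counts, centroids)
    return new_centroids, sse

rng = np.random.default_rng(0)
pts = rng.normal(size=(5000, 8))
cents = pts[rng.choice(len(pts), 4, replace=False)]
cents, sse = blocked_kmeans_pass(pts, cents)
print(round(sse, 2))
```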
<p>Our evaluations of the sampling-based feel-the-way algorithm establish the effectiveness of the local optimization strategy, the blocked data layout, and the sampling methods for addressing the challenges of applying HPC to ML applications. To explore more parallel strategies and optimization techniques, we turn to a more complex application: graph optimization problems solved with reinforcement learning (RL). RL has proved successful at automatically learning good heuristics for graph optimization problems. However, existing RL systems either do not support graph RL environments or do not support multiple or many GPUs in a distributed setting, which limits RL's ability to solve large-scale graph optimization problems due to the lack of parallelization and scalability. To address these challenges, we develop OpenGraphGym-MG, a high-performance distributed-GPU RL framework for solving graph optimization problems. OpenGraphGym-MG focuses on a class of computationally demanding RL problems in which both the RL environment and the policy model are highly computationally intensive. In this work, we distribute large-scale graphs across distributed GPUs and use spatial parallelism and data parallelism to achieve scalable performance, and we compare and analyze the performance of the two forms of parallelism and highlight their differences. To support graph neural network (GNN) layers that take data samples partitioned across distributed GPUs as input, we design new parallel mathematical kernels that operate on distributed 3D sparse and 3D dense tensors. To handle costly RL environments, we design new parallel graph environments that scale up all RL-environment-related operations. By combining the scalable GNN layers with the scalable RL environment, we develop high-performance parallel training and inference algorithms for OpenGraphGym-MG.</p>
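As a toy illustration of the spatial-parallelism idea mentioned above (each device owning a slice of the graph for a GNN-style aggregation), the sketch below partitions a small dense adjacency matrix across two simulated devices. The partitioning scheme and names are assumptions; the framework's actual distributed sparse-tensor kernels and GPU communication are not represented.

```python
import numpy as np

def partition_adjacency(adj, num_parts):
    """Split an adjacency matrix into row blocks, one per (simulated) device."""
    row_blocks = np.array_split(np.arange(adj.shape[0]), num_parts)
    return [(rows, adj[rows, :]) for rows in row_blocks]

def distributed_aggregate(partitions, features):
    # Each "device" aggregates neighbor features for the rows it owns; the row
    # blocks are then concatenated (an all-gather in a real multi-GPU setting).
    outputs = [block @ features for _rows, block in partitions]
    return np.vstack(outputs)

rng = np.random.default_rng(1)
A = (rng.random((8, 8)) < 0.3).astype(float)  # toy dense stand-in for a sparse graph
H = rng.normal(size=(8, 4))                   # node feature matrix
parts = partition_adjacency(A, num_parts=2)
# The partitioned computation reproduces the single-device aggregation A @ H.
print(np.allclose(distributed_aggregate(parts, H), A @ H))
```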
<p>To summarize, after identifying the major challenges of applying HPC to ML applications, this thesis explores several parallel strategies and performance optimization techniques through two ML applications. Specifically, we propose a local optimization strategy, a blocked data layout, and sampling methods to accelerate the clustering algorithm, and we create a spatial parallelism strategy; a parallel graph environment, agent, and policy model; an optimized replay buffer; and a multi-node selection strategy to solve large optimization problems over graphs. Our evaluations prove the effectiveness of these strategies and demonstrate that our accelerated implementations can significantly outperform state-of-the-art ML libraries and frameworks without loss of quality in the trained models.</p>
14 |
Investigation of Backdoor Attacks and Design of Effective Countermeasures in Federated Learning
Agnideven Palanisamy Sundar (11190282), 03 September 2024
<p dir="ltr">Federated Learning (FL), a novel subclass of Artificial Intelligence, decentralizes the learning process by enabling participants to benefit from a comprehensive model trained on a broader dataset without direct sharing of private data. This approach integrates multiple local models into a global model, mitigating the need for large individual datasets. However, the decentralized nature of FL increases its vulnerability to adversarial attacks. These include backdoor attacks, which subtly alter classification in some categories, and byzantine attacks, aimed at degrading the overall model accuracy. Detecting and defending against such attacks is challenging, as adversaries can participate in the system, masquerading as benign contributors. This thesis provides an extensive analysis of the various security attacks, highlighting the distinct elements of each and the inherent vulnerabilities of FL that facilitate these attacks. The focus is primarily on backdoor attacks, which are stealthier and more difficult to detect compared to byzantine attacks. We explore defense strategies effective in identifying malicious participants or mitigating attack impacts on the global model. The primary aim of this research is to evaluate the effectiveness and limitations of existing server-level defenses and to develop innovative defense mechanisms under diverse threat models. This includes scenarios where the server collaborates with clients to thwart attacks, cases where the server remains passive but benign, and situations where no server is present, requiring clients to independently minimize and isolate attacks while enhancing main task performance. Throughout, we ensure that the interventions do not compromise the performance of both global and local models. The research predominantly utilizes 2D and 3D datasets to underscore the practical implications and effectiveness of proposed methodologies.</p>
15 |
Distributed Algorithms for Multi-robot Autonomy
Zehui Lu (18953791), 02 July 2024
<p dir="ltr">Autonomous robots can perform dangerous and tedious tasks, eliminating the need for human involvement. To deploy an autonomous robot in the field, a typical planning and control hierarchy is used, consisting of a high-level planner, a mid-level motion planner, and a low-level tracking controller. In applications such as simultaneous localization and mapping, package delivery, logistics, and surveillance, a group of autonomous robots can be more efficient and resilient than a single robot. However, deploying a multi-robot team by directly aggregating each robot's planning hierarchy into a larger, centralized hierarchy faces challenges related to scalability, resilience, and real-time computation. Distributed algorithms offer a promising solution for introducing effective coordination within a network of robots, addressing these issues. This thesis explores the application of distributed algorithms in multi-robot systems, focusing on several essential components required to enable distributed multi-robot coordination, both in general terms and for specific applications.</p>