1

DISTRIBUTED OPTIMIZATION FOR MACHINE LEARNING: GUARANTEES AND TRADEOFFS

Ye Tian (11166960) 01 September 2021 (has links)
In the era of big data, the sheer volume and widespread spatial distribution of information have been driving extensive research on distributed optimization over networks. Each computing unit has access only to a relatively small portion of the entire data set and can communicate only with a relatively small number of neighbors. The goal is for all computing units to reach consensus on the parametric model that is optimal with respect to the entire data set. Existing work has provided various decentralized optimization algorithms for this purpose. However, some important questions remain open: (I) what is the intrinsic connection among the different existing algorithms? (II) what is the min-max lower complexity bound for decentralized algorithms, and can one design an optimal decentralized algorithm, in the sense that it achieves this lower bound? and (III) in the presence of asynchrony and imperfect communications, can one design linearly convergent decentralized algorithms?

This thesis addresses the above questions. (I) Abstracting from ad-hoc, specific solution methods, we propose a unified distributed algorithmic framework and analysis for a general class of optimization problems over networks. Our method encapsulates several existing first-order distributed algorithms. Distinguishing features of our scheme are: (a) when each agent's function is strongly convex, the algorithm converges at a linear rate whose dependence on the agents' functions and the network topology is decoupled; (b) when the objective function is convex but not strongly convex, a similar decoupling is established for the coefficient of the proved sublinear rate, which also reveals the role of function heterogeneity in the convergence rate; (c) the algorithm can adjust the ratio between the number of communications and computations to achieve a rate (in terms of computations) independent of the network connectivity; and (d) a by-product of our analysis is a tuning recommendation for several existing (non-accelerated) distributed algorithms, yielding a provably faster (worst-case) convergence rate for the class of problems under consideration. (II) Regarding lower complexity bounds, the proposed family of algorithms, when equipped with acceleration, is proved to be optimal, that is, it achieves the convergence-rate lower bound. (III) Finally, to make the proposed algorithms practical, we break the synchronism in the agents' updates: agents wake up and update without any coordination, using information only from immediate neighbors, subject to unknown, arbitrary but bounded delays. Quite remarkably, even in the presence of asynchrony, the proposed algorithmic framework is proved to converge at a linear rate (resp. sublinear rate) when applied to strongly convex (resp. non-strongly convex) optimization problems.
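As a concrete illustration of the kind of first-order decentralized scheme such a unified framework covers, the sketch below runs a standard gradient-tracking update (not the thesis's specific algorithm) on a toy problem: five agents on a ring each hold a quadratic f_i(x) = 0.5*(x - b_i)^2, and every agent converges linearly to the network-wide minimizer, the mean of the b_i. The mixing weights, step size, and data are assumptions chosen only for this demo.

```python
import numpy as np

# Minimal sketch of decentralized gradient tracking (DIGing-style updates),
# one of the first-order schemes a unified framework of this kind covers.
# Not the thesis's algorithm; all values below are illustrative assumptions.

rng = np.random.default_rng(0)
n = 5                                   # number of agents on a ring
b = rng.normal(size=n)                  # local data: f_i(x) = 0.5 * (x - b_i)^2

# Doubly stochastic Metropolis weights for the ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    W[i, i] = 1 / 3

grad = lambda z: z - b                  # stacked local gradients
x = np.zeros(n)                         # local estimates of the common minimizer
y = grad(x)                             # gradient trackers, y_i^0 = grad f_i(x_i^0)
alpha = 0.1                             # step size (assumed, not tuned)

for _ in range(300):
    x_new = W @ x - alpha * y           # consensus step + descent along tracked gradient
    y = W @ y + grad(x_new) - grad(x)   # track the network-average gradient
    x = x_new

print(np.max(np.abs(x - b.mean())))     # ~0: all agents agree on the minimizer of sum_i f_i
```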
2

Distributed Network Processing and Optimization under Communication Constraint

Chang Shen Lee (11184969) 26 July 2021 (has links)
In recent years, the amount of data in information processing systems has increased dramatically, a trend commonly referred to as big data. Designing systems that handle big data calls for a scalable approach, which brings distributed systems into the picture. In contrast to centralized systems, data are spread across a network of agents, and the agents cooperatively complete tasks through local communications and local computations. However, the design and analysis of distributed systems, in which no central coordinator with complete information is present, are challenging tasks. Because multi-agent coordination relies on communication among agents, practical communication constraints should be taken into account in the design and analysis of such systems. The focus of this dissertation is the design and analysis of distributed network processing under finite-rate communications among agents. In particular, we address the following open questions: 1) can one design algorithms that balance a graph weight matrix using finite-rate, simplex communications among agents? 2) can one design algorithms that compute the average of the agents' states using finite-rate, simplex communications? and 3) going beyond ad-hoc algorithmic designs, can one design a black-box mechanism that transforms a general class of algorithms using unquantized communications into their finite-bit quantized counterparts?

This dissertation addresses the above questions. First, we propose novel distributed algorithms that solve the weight-balancing and average consensus problems using only finite-rate, simplex communications among agents, compliant with the directed nature of the network topology. A novel convergence analysis is put forth, based on a new metric inspired by positional system representations. In the second half of the dissertation, distributed optimization subject to quantized communications is studied. Specifically, we consider a general class of linearly convergent distributed algorithms cast as fixed-point iterates and propose a novel black-box quantization mechanism. The mechanism relies on a novel quantizer that preserves linear convergence and is proved to be more communication-efficient than state-of-the-art quantization mechanisms. Extensive numerical results validate our theoretical findings.
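To make the finite-rate setting concrete, here is a minimal sketch, not the dissertation's scheme and ignoring the directed/simplex aspect it handles: agents run average consensus while exchanging only 8-bit integers that encode the innovation of each state relative to a commonly reconstructed copy, with a geometrically shrinking quantization step, so that linear convergence to the exact average is preserved. All parameter values are assumptions chosen for the demo.

```python
import numpy as np

# Minimal sketch of average consensus over finite-rate links: each agent sends
# one 8-bit integer per iteration, encoding the innovation of its state against
# a commonly known reconstruction, with a geometrically shrinking quantizer step.
# Illustrative only; not the dissertation's weight-balancing or quantization scheme.

rng = np.random.default_rng(1)
n = 5
x = rng.uniform(-1, 1, size=n)          # local states; agents must agree on x.mean()
target = x.mean()

# Symmetric (hence doubly stochastic) Metropolis weights on a ring.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    W[i, i] = 1 / 3

xhat = np.zeros(n)                      # reconstructed states, identical at every agent
delta, rho, levels = 0.5, 0.9, 127      # quantizer step, its decay rate, 8-bit range

for _ in range(200):
    q = np.clip(np.round((x - xhat) / delta), -levels, levels)  # 8-bit messages
    xhat = xhat + delta * q             # every receiver rebuilds the senders' states
    x = x + (W - np.eye(n)) @ xhat      # consensus update on the reconstructions
    delta *= rho                        # zoom in as the agents approach agreement

print(np.max(np.abs(x - target)))       # ~1e-9: exact average reached despite 8-bit messages
```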
3

Relax the Reliance on Honesty in Distributed Cryptographic Protocols

Tiantian Gong (19838595) 14 October 2024 (has links)
Distributed cryptographic protocols typically assume a bounded number of malicious parties (who behave arbitrarily) in the system, and in turn a lower bound on the number of honest parties (who faithfully follow the protocol and perform no unspecified computations), for their security guarantees to hold. However, when these protocols are deployed in practice, the nature of the computing parties does not necessarily align with the protocols' assumptions. Specifically, there may be only a few honest/compliant parties, or none at all. Instead, non-malicious parties may be semi-honest (they follow the protocol specification but try to learn as much information as possible from the transcripts they observe) or rational (they take the actions that maximize their utilities rather than the actions that benefit the protocol the most, e.g., performing extra computations or deviating from the protocol). In such cases, the security guarantees delivered in practice may deviate greatly from what is theoretically promised, leaving a large gap between theory and practice.

In this thesis, I bridge this gap by enhancing the fault tolerance of various distributed cryptographic primitives, relaxing the assumption that honest parties exist.

First, in the context of secure multi-party computation without honest parties, my goal is to induce honest (i.e., correctness-preserving) and non-curious (i.e., privacy-preserving) behavior from rational participants via game-theoretic and cryptographic techniques. In particular, I first demonstrate how to ensure protocol correctness and deter collusion among parties to recover secrets (which would also break privacy) in multiserver private information retrieval with a singleton access structure. Then, for primitives with more general (non-singleton) access structures, I introduce a distinct treatment through the lens of verifiable secret sharing. The two solutions are built from a public bulletin board, commitment schemes, digital signature schemes, zkSNARKs (zero-knowledge succinct non-interactive arguments of knowledge), and distinct incentive structures tailored to the access structures underlying the schemes.

Second, in permissionless blockchain systems, for protocols without privacy guarantees such as computation outsourcing and consensus, my goal is to incentivize rational parties to behave correctly, that is, to act according to the protocol specification or as implied by the security requirements of the primitive, e.g., to distribute rewards to participants fairly, based on their contributions, in proof-of-work (PoW) blockchains. Specifically, I present a game-theoretic defense against undercutting attacks in PoW blockchains and propose a decentralized computation-outsourcing protocol, based on multi-unit auctions, built on permissionless blockchain systems.
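As background for the multiserver-PIR setting, the sketch below shows the classic two-server, information-theoretic construction of Chor et al.: each server alone sees a uniformly random subset of indices and learns nothing about the query, but colluding servers recover it immediately, which is exactly the behavior the thesis's incentive mechanisms aim to deter. The function names and toy database are illustrative, not taken from the thesis.

```python
import secrets

# Illustrative two-server information-theoretic PIR (Chor et al.), shown only
# to make the setting concrete; the thesis adds incentive and cryptographic
# machinery on top of such schemes to deter server collusion.

def pir_query(db_size, index):
    """Client: build one query per server; each query alone is a uniform random set."""
    s1 = {j for j in range(db_size) if secrets.randbits(1)}
    s2 = s1 ^ {index}                   # symmetric difference: flip membership of `index`
    return s1, s2

def pir_answer(db, query):
    """Server: reply with the XOR of the requested bit positions."""
    bit = 0
    for j in query:
        bit ^= db[j]
    return bit

def pir_reconstruct(a1, a2):
    """Client: the two answers differ exactly by the desired bit."""
    return a1 ^ a2

db = [1, 0, 1, 1, 0, 0, 1, 0]           # toy one-bit-per-record database
q1, q2 = pir_query(len(db), index=3)
print(pir_reconstruct(pir_answer(db, q1), pir_answer(db, q2)) == db[3])  # True
```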
4

Data-Driven Computing and Networking Solution for Securing Cyber-Physical Systems

Yifu Wu (18498519) 03 May 2024 (has links)
In recent years, a surge in data-driven computation has significantly impacted security analysis in cyber-physical systems (CPSs), especially in decentralized environments. This transformation can be attributed to the remarkable computational power offered by high-performance computers (HPCs), coupled with advances in distributed computing techniques and sophisticated learning algorithms such as deep learning and reinforcement learning. In this context, wireless communication systems and decentralized computing systems are highly suitable environments for leveraging data-driven computation in security analysis. Our research has focused on exploring the potential of deep learning algorithms in CPS domains: we have not only examined existing algorithms in depth but also designed novel approaches tailored to the specific requirements of CPSs. A pivotal aspect of our work was the development of a comprehensive decentralized computing platform prototype, which served as the foundation for simulating the complex networking scenarios typical of CPS environments. Within this framework, we harnessed deep learning techniques such as restricted Boltzmann machines (RBMs) and deep convolutional neural networks (DCNNs) to address critical security concerns such as the detection of Quality of Service (QoS) degradation and Denial of Service (DoS) attacks in smart grids. Our experimental results showed that the deep-learning-based approaches outperform traditional pattern-based methods. Additionally, we devised a decentralized computing system comprising a novel decentralized learning algorithm, blockchain-based learning automation, distributed storage for data and models, and cryptographic mechanisms that bolster the security and privacy of both data and models. Notably, our prototype demonstrated excellent efficacy, striking a fine balance between model inference performance and confidentiality. Furthermore, we examined the integration of domain knowledge from CPSs into our deep learning models; this integration shed light on the vulnerability of these models to dedicated adversarial attacks. Through these efforts, we aim to fortify the security posture of CPSs while unlocking the full potential of data-driven computation in safeguarding critical infrastructure.
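As a rough sketch of the kind of DCNN-based detector mentioned above (not the dissertation's architecture or data), the block below defines a small 1-D convolutional classifier that maps a window of per-flow traffic features to benign/DoS logits; the input shape, feature count, and layer sizes are assumptions chosen only for illustration.

```python
import torch
import torch.nn as nn

# Minimal sketch of a 1-D CNN for DoS detection on windowed traffic features.
# Architecture, feature count, and window length are illustrative assumptions,
# not the dissertation's model.

class DosDetector(nn.Module):
    def __init__(self, n_features: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),    # collapse the time axis
            nn.Flatten(),
            nn.Linear(64, 2),           # logits: [benign, DoS]
        )

    def forward(self, x):               # x: (batch, n_features, window_length)
        return self.net(x)

model = DosDetector()
fake_batch = torch.randn(8, 16, 64)     # 8 synthetic measurement windows
print(model(fake_batch).shape)          # torch.Size([8, 2])
```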
