
New Approaches to Distributed State Estimation, Inference and Learning with Extensions to Byzantine-Resilience

In this thesis, we focus on the problem of estimating an unknown quantity of interest when the information required to do so is dispersed over a network of agents. In particular, each agent in the network receives sequential observations generated by the unknown quantity, and the collective goal of the network is to eventually learn this quantity by means of appropriately crafted information diffusion rules. The abstraction described above can be used to model a variety of problems, ranging from environmental monitoring of a dynamical process using autonomous robot teams, to statistical inference using a network of processors, to social learning in groups of individuals. The limited information content of each agent, coupled with dynamically changing networks, the possibility of adversarial attacks, and constraints imposed by the communication channels, introduces various unique challenges in addressing such problems. We contribute towards systematically resolving some of these challenges.

In the first part of this thesis, we focus on tracking the state of a dynamical process and develop a distributed observer for the most general class of LTI systems, linear measurement models, and time-invariant graphs. To do so, we introduce the notion of a multi-sensor observable decomposition, a generalization of the Kalman observable canonical decomposition for a single sensor. We then consider a scenario where certain agents in the network are compromised based on the classical Byzantine adversary model. For this worst-case adversarial setting, we identify certain fundamental necessary conditions that are a blend of system- and network-theoretic requirements. We then develop an attack-resilient, provably correct, fully distributed state estimation algorithm. Finally, by drawing connections to the concept of age-of-information for characterizing information freshness, we show how our framework can be extended to handle a broad class of time-varying graphs. Notably, in each of the cases above, our proposed algorithms guarantee exponential convergence at any desired convergence rate.

In the second part of the thesis, we turn our attention to the problem of distributed hypothesis testing/inference, where each agent receives a stream of stochastic signals generated by an unknown static state that belongs to a finite set of hypotheses. To enable each agent to uniquely identify the true state, we develop a novel distributed learning rule that employs a min-protocol for data aggregation, as opposed to the large body of existing techniques that rely on "belief averaging". We establish consistency of our rule under minimal requirements on the observation model and the network structure, and prove that it guarantees exponentially fast convergence to the truth with probability 1. Most importantly, we establish that the learning rate of our algorithm is network-independent and a strict improvement over all existing approaches. We also develop a simple variant of our learning algorithm that can account for misbehaving agents. As the final contribution of this work, we develop communication-efficient rules for distributed hypothesis testing. Specifically, we draw on ideas from event-triggered control to reduce the number of communication rounds, and we employ an adaptive quantization scheme that guarantees exponentially fast learning almost surely, even when just 1 bit is used to encode each hypothesis.
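The situation motivating the multi-sensor observable decomposition can be illustrated with a few lines of linear algebra: no single agent's measurements may suffice to reconstruct the state, even though the measurements of all agents taken together do. The sketch below checks per-agent versus network-wide observability for a hypothetical 4-state system; the matrices and agent labels are illustrative only and are not taken from the thesis.

import numpy as np

# Hypothetical 4-state LTI system x_{k+1} = A x_k (two decoupled double
# integrators), observed by two agents via y_i[k] = C_i x[k]. All matrices
# and agent labels are made up for illustration.
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0, 1.0]])
C = {
    "agent 1": np.array([[1.0, 0.0, 0.0, 0.0]]),  # measures only state 1
    "agent 2": np.array([[0.0, 0.0, 1.0, 0.0]]),  # measures only state 3
}

def observable_dim(A, Ci):
    """Rank of the observability matrix [Ci; Ci A; ...; Ci A^(n-1)]."""
    n = A.shape[0]
    O = np.vstack([Ci @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(O)

n = A.shape[0]
for name, Ci in C.items():
    print(f"{name}: observes a {observable_dim(A, Ci)}-dimensional subspace of {n}")

# Stacking every agent's output matrix recovers full collective observability.
C_all = np.vstack(list(C.values()))
print(f"network: observes a {observable_dim(A, C_all)}-dimensional subspace of {n}")

Here each agent can only observe a 2-dimensional portion of the 4-dimensional state, while the network as a whole is observable; a multi-sensor observable decomposition makes precise which portion of the state an agent can estimate from its own measurements and which portion it must obtain by exchanging information over the network.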
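To give a flavor of min-based aggregation for the hypothesis testing problem in the second part, the sketch below simulates a small distributed learning problem in Python. It is a deliberately simplified caricature of a min-protocol (neighbors' beliefs are combined with an elementwise minimum before a local Bayesian-style update), not the exact learning rule or analysis from the thesis, and all parameters (three agents, two hypotheses, Bernoulli observation models) are hypothetical.

import numpy as np

# Simplified min-based aggregation for distributed hypothesis testing.
# The network and observation models below are made up for illustration.
rng = np.random.default_rng(0)

TRUE_STATE = 0                        # index of the true hypothesis
# Bernoulli observation models: p[i, theta] = P(signal = 1 | theta) at agent i.
# Agent 1 cannot tell the hypotheses apart on its own (its likelihoods are the
# same under both), so it can only learn the truth through the network.
p = np.array([[0.8, 0.2],
              [0.5, 0.5],
              [0.3, 0.7]])
n_agents, n_hyp = p.shape
# Three agents on a triangle; each agent's neighborhood includes itself.
neighbors = {0: [0, 1, 2], 1: [0, 1, 2], 2: [0, 1, 2]}

beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)   # uniform priors

for t in range(200):
    signals = rng.random(n_agents) < p[:, TRUE_STATE]   # private observations
    new_beliefs = np.empty_like(beliefs)
    for i in range(n_agents):
        # Aggregate neighbors' beliefs with an elementwise min (not an average).
        aggregated = beliefs[neighbors[i]].min(axis=0)
        # Bayesian-style update with agent i's own likelihood, then normalize.
        lik = np.where(signals[i], p[i], 1.0 - p[i])
        unnormalized = lik * aggregated
        new_beliefs[i] = unnormalized / unnormalized.sum()
    beliefs = new_beliefs

print(np.round(beliefs, 4))   # every row concentrates on the true hypothesis

In this toy run, even the locally uninformative agent ends up ruling out the false hypothesis: a single neighbor that has nearly rejected a hypothesis pulls everyone's belief on it down through the min, which gives some intuition for why such rules can achieve a learning rate that does not degrade with the network structure.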

DOI: 10.25394/pgs.12690770.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/12690770
Date: 29 July 2020
Creators: Aritra Mitra (9154928)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY 4.0
Relation: https://figshare.com/articles/thesis/New_Approaches_to_Distributed_State_Estimation_Inference_and_Learning_with_Extensions_to_Byzantine-Resilience/12690770
