
Aggregation, dissemination and filtering: controlling complex information flows in networks

Modern-day networks, both physical and virtual, are designed to support increasingly sophisticated applications based on complex manipulation of information flows. At the same time, the ever-growing scale of the underlying networks necessitates the use of low-complexity algorithms. Resolving this tension requires an understanding of the relationship between these flows and the network structure. In this thesis, we undertake a study of three such processes: aggregation, dissemination and filtering. In each case, we characterize how the network topology imposes limits on these processes, and how one can use knowledge of the topology to design simple yet efficient control algorithms.

Aggregation: We study data aggregation in sensor networks via in-network computation, i.e., by combining packets at intermediate nodes. In particular, we are interested in maximizing the refresh rate of repeated/streaming aggregation. For a particular class of functions, we characterize the maximum achievable refresh rate in terms of the underlying graph structure; furthermore, we develop optimal algorithms for general networks, as well as a simple distributed algorithm for acyclic wired networks.

Dissemination: We consider dissemination processes on networks via intrinsic peer-to-peer transmissions aided by external agents: sources with bounded spreading power that are unconstrained by the network. Such a model captures many static (e.g., long-range links) and dynamic/controlled (e.g., mobile nodes, broadcasting) models of long-range dissemination. We explore the effect of external sources for two dissemination models: spreading processes, in which nodes once infected remain so forever, and epidemic processes, in which nodes can recover from infection. Our results demonstrate (i) the role of graph structure, and (ii) the power of random strategies. For spreading processes, we show that external agents dramatically reduce the spreading time in spatially constrained networks; furthermore, random policies are order-wise optimal. For epidemic processes, we show that to cause a long-lasting epidemic, the number of external sources must scale with the number of nodes; however, the strategies can again be random.

Filtering: A common phenomenon in modern recommendation systems is the use of user feedback to infer the 'value' of an item to other users, resulting in an exploration vs. exploitation trade-off. We study this in a simple, natural model in which an 'access graph' constrains which users are allowed to see which items, and the number of items and the number of item views are of the same order. We seek algorithms that recommend relevant content in an online manner (i.e., instantaneously upon user arrival). To this end, we consider both finite-population (i.e., with a fixed set of users and items) and infinite-horizon (i.e., with user/item arrivals and departures) settings; in each case, we design algorithms with guarantees on the competitive ratio that hold for arbitrary users. Conversely, we also present upper bounds on the competitive ratio, which show that in many settings our algorithms are order-wise optimal.
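To make the aggregation model concrete, here is a minimal Python sketch of in-network computation: each intermediate node combines its children's packets into a single value before forwarding, rather than relaying every leaf reading. The tree, the sensor values, and the choice of max as the aggregate are illustrative assumptions, not the thesis's actual algorithms or function class.

```python
import random

def aggregate_max(tree, node, values):
    """Combine child packets at intermediate nodes, so each node
    forwards one value instead of all readings in its subtree."""
    combined = values[node]
    for child in tree.get(node, []):
        combined = max(combined, aggregate_max(tree, child, values))
    return combined

# Toy sensor tree: node 0 is the sink; children listed per node.
tree = {0: [1, 2], 1: [3, 4], 2: [5]}
values = {n: random.randint(0, 100) for n in range(6)}
print("in-network max:", aggregate_max(tree, 0, values))
print("ground truth:  ", max(values.values()))
```

For a streaming aggregate, the sink repeats this per round; the per-link load stays one packet per round regardless of network size, which is what makes a high refresh rate possible.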
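The effect of external agents on a spatially constrained graph can be seen in a toy simulation. The sketch below runs an SI spreading process on a ring, with each external agent modeled as one uniformly random infection per round; the ring topology, synchronous dynamics, and parameters are assumptions made purely for illustration.

```python
import random

def spread_time(n, num_external, seed=0):
    """SI spreading on a ring of n nodes: each round, every infected
    node infects its two ring neighbours, and each external agent
    infects one uniformly random node. Returns rounds to full infection."""
    rng = random.Random(seed)
    infected = {0}
    rounds = 0
    while len(infected) < n:
        new = set()
        for u in infected:
            new.update({(u - 1) % n, (u + 1) % n})
        for _ in range(num_external):
            new.add(rng.randrange(n))
        infected |= new
        rounds += 1
    return rounds

n = 2000
print("no external agents:", spread_time(n, 0))  # ~n/2 rounds on a ring
print("5 external agents: ", spread_time(n, 5))  # each random seed opens a new front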
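Finally, a toy online recommender illustrates the filtering model: users arrive one at a time, each must instantly be shown an item from its access set, and an item's (hidden) value is revealed only after it is shown, forcing an explore/exploit choice. The greedy rule and the random instance below are illustrative assumptions, not the thesis's algorithms or its competitive-ratio analysis.

```python
import random

def online_recommend(arrivals, access, likes, rng):
    """Serve each arriving user an accessible item: 'exploit' items
    already observed to be liked, else 'explore' an untried one.
    likes[i] is hidden ground truth, revealed only after showing i."""
    seen_good, seen_bad, hits = set(), set(), 0
    for user in arrivals:
        pool = access[user]
        good = [i for i in pool if i in seen_good]
        fresh = [i for i in pool if i not in seen_good | seen_bad]
        item = good[0] if good else rng.choice(fresh or pool)
        if likes[item]:
            seen_good.add(item)
            hits += 1
        else:
            seen_bad.add(item)
    return hits

rng = random.Random(1)
items = list(range(20))
likes = {i: rng.random() < 0.3 for i in items}              # hidden item values
access = {u: rng.sample(items, 5) for u in range(50)}       # access graph
arrivals = [rng.randrange(50) for _ in range(100)]
print("liked recommendations:", online_recommend(arrivals, access, likes, rng), "/ 100")
```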

Identifier: oai:union.ndltd.org:UTEXAS/oai:repositories.lib.utexas.edu:2152/21751
Date: 25 October 2013
Creators: Banerjee, Siddhartha
Source Sets: University of Texas
Language: en_US
Detected Language: English
Format: application/pdf
