Distributed learning in large populations

Distributed learning is the iterative process of decision-making in the presence of other decision-makers. In recent years, researchers across fields as disparate as engineering, biology, and economics have identified mathematically congruous problem formulations at the intersection of their disciplines. In particular, stochastic processes, game theory, and control theory have been brought to bear on certain very basic and universal questions. What sorts of environments are conducive to distributed learning? Are there generic algorithms offering non-trivial performance guarantees for a large class of models?

The first half of this thesis makes contributions to two particular problems in distributed learning: self-assembly and language. Self-assembly refers to the emergence of high-level structures via the aggregate behavior of simpler building blocks. A number of algorithms have been suggested that are capable of generic self-assembly of graphs. That is, given a description of the objective, they produce a policy with a corresponding performance guarantee. These guarantees have been in the form of deterministic convergence results. We introduce the notion of stochastic stability to the self-assembly problem. The stochastically stable states are the configurations in which the system spends almost all of its time as a noise parameter is taken to zero. We show that in this framework simple procedures exist that are capable of self-assembly of any tree under stringent locality constraints. Our procedure gives an asymptotically maximum yield of target assemblies while obeying communication and reversibility constraints. We also present a slightly more sophisticated algorithm that guarantees maximum yields for any problem size. The latter algorithm utilizes a somewhat more presumptive notion of agents' internal states. While it is unknown whether an algorithm providing maximum yields subject to our constraints can depend only on the more parsimonious form of internal state, we show that such an algorithm could not possess a unique completing rule, a useful feature for analysis.
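
To fix ideas, stochastic stability admits a compact statement. In generic notation for a regularly perturbed Markov process (background only, not the thesis's own formulation):

\[
x \text{ is stochastically stable} \iff \lim_{\epsilon \to 0} \mu^{\epsilon}(x) > 0,
\]

where \( \mu^{\epsilon} \) is the unique stationary distribution of the process at noise level \( \epsilon \); the stochastically stable configurations are exactly those retaining positive probability in the small-noise limit.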

We then turn our attention to the problem of distributed learning of communication protocols, or language. Recent results for signaling game models establish the non-negligible possibility of convergence, under distributed learning, to states of unbounded efficiency loss. We provide a tight lower bound on efficiency and discuss its implications. Moreover, motivated by the empirical phenomenon of linguistic drift, we study the signaling game under stochastic evolutionary dynamics. We again make use of stochastic stability analysis and show that the long-run distribution of states has support limited to the efficient communication systems. We find that this behavior is insensitive to the particular choice of evolutionary dynamic, a fact that is intuitively captured by the game's potential function corresponding to average fitness. Consequently, the model supports conclusions similar to those found in the literature on language competition. That is, we expect monomorphic language states to eventually predominate. Homophily has been identified as a feature that potentially stabilizes diverse linguistic communities. We find that incorporating homophily in our stochastic model gives mixed results: while the monomorphic prediction holds in the small-noise limit, diversity can persist at higher noise levels or as a metastable phenomenon.
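
As a minimal illustration of this mechanism, the following sketch simulates a generic two-meaning, two-signal Lewis signaling game under log-linear learning. It is not the thesis's model; the population size, noise level eps, and all helper names are illustrative assumptions.

    import itertools, math, random

    MEANINGS = SIGNALS = (0, 1)
    # A language pairs a sender map f (meaning -> signal) with a
    # receiver map r (signal -> meaning); 16 languages in total.
    LANGS = [(f, r) for f in itertools.product(SIGNALS, repeat=2)
                    for r in itertools.product(MEANINGS, repeat=2)]

    def success(speaker, hearer):
        # Fraction of meanings communicated correctly.
        f, _ = speaker
        _, r = hearer
        return sum(r[f[m]] == m for m in MEANINGS) / len(MEANINGS)

    def fitness(lang, others):
        # Average success over random pairings, in both roles.
        return sum(0.5 * (success(lang, o) + success(o, lang))
                   for o in others) / len(others)

    def logit_step(pop, eps, rng):
        # One agent revises, choosing languages with log-linear probabilities.
        i = rng.randrange(len(pop))
        rest = pop[:i] + pop[i + 1:]
        weights = [math.exp(fitness(l, rest) / eps) for l in LANGS]
        pop[i] = rng.choices(LANGS, weights=weights)[0]

    rng = random.Random(0)
    pop = [rng.choice(LANGS) for _ in range(6)]  # illustrative parameters
    for _ in range(20000):
        logit_step(pop, eps=0.05, rng=rng)
    print(pop)  # at small eps: monomorphic, with a bijective (efficient) language

Because each interaction is common-interest, average fitness is an exact potential for this game, so the log-linear chain concentrates on the potential maximizers, the efficient monomorphic states, as eps is taken to zero; at larger eps, mixed-language populations can persist, echoing the homophily discussion above.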

The contributions of the second half of this thesis relate to more basic issues in distributed learning. In particular, we provide new results on the problem of distributed convergence to Nash equilibrium in finite games. A recently proposed class of games known as stable games has the attractive property of admitting global convergence to equilibria under many learning dynamics. We show that stable games can be formulated as passive input-output systems. This observation enables us to identify passivity of a learning dynamic as a sufficient condition for global convergence in stable games. Notably, dynamics satisfying our condition need not exhibit positive correlation between payoffs and their directions of motion. We show that our condition is satisfied by the dynamics known to exhibit global convergence in stable games. We give a decision-theoretic interpretation for passive learning dynamics that mirrors the interpretation of stable games as strategic environments exhibiting self-defeating externalities. Moreover, we exploit the flexibility of the passivity condition to study the impact of applying various forecasting heuristics to the payoffs used in the learning process. Finally, we show how passivity can be used to identify strategic tendencies of the players that allow for convergence in some games despite information lags of arbitrary duration.
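
The interconnection argument can be sketched in two inequalities, again in generic notation rather than the thesis's own (x denotes the strategy state, p the payoff input, and L a storage function):

\[
(x - y)^{\top}\big(F(x) - F(y)\big) \le 0 \quad \text{for all feasible } x, y \qquad \text{(stable game)}
\]
\[
\dot{L} \le \dot{p}^{\top}\dot{x}, \quad L \ge 0 \qquad \text{(passive learning dynamic)}
\]

Closing the loop with \( p = F(x) \) yields \( \dot{p}^{\top}\dot{x} = \dot{x}^{\top} DF(x)\,\dot{x} \le 0 \), since stability makes \( DF(x) \) negative semidefinite on the relevant tangent space; hence \( \dot{L} \le 0 \) and the storage function serves as a Lyapunov function, which is the sense in which passivity of the dynamic suffices for global convergence.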

Identifier: oai:union.ndltd.org:GATECH/oai:smartech.gatech.edu:1853/44783
Date: 14 August 2012
Creators: Fox, Michael Jacob
Publisher: Georgia Institute of Technology
Source Sets: Georgia Tech Electronic Thesis and Dissertation Archive
Detected Language: English
Type: Dissertation
