
Navigating Uncertainty: Distributed and Bandit Solutions for Equilibrium Learning in Multiplayer Games

In multiplayer games, a collection of self-interested players aims to optimize their individual cost functions in a non-cooperative manner. The cost function of each player depends not only on its own actions but also on the actions of the other players. In addition, the players' actions may be jointly subject to global constraints. The study of this problem has grown immensely over the past decades, with applications arising in a wide range of societal systems, including strategic behavior in power markets, traffic assignment of strategic risk-averse users, and the engagement of multiple humanitarian organizations in disaster relief. Furthermore, as machine learning models play an increasingly important role in practical applications, the robustness of these models becomes another prominent concern. Investigating the solution of multiplayer games and Nash equilibrium problems (NEPs) can advance algorithm design for fitting these models in the presence of adversarial noise.

Most existing methods for solving multiplayer games assume the presence of a central coordinator, which, unfortunately, is impractical in many scenarios. Moreover, in addition to the couplings in the objectives and the global constraints, the objective functions often contain uncertainty in the form of stochastic noise and unknown model parameters. The problem is further complicated by several considerations: the individual objectives of players may be unavailable or too complex to model; players may be reluctant to disclose their actions; and players may experience random delays when receiving feedback on their actions. To contend with these issues and uncertainties, in the first half of the thesis, we develop several algorithms based on the theory of operator splitting and stochastic approximation, in which the game participants share their local information and decisions only with trusted neighbors on a communication network. In the second half of the thesis, we explore the bandit online learning framework as a solution to these challenges, in which players update their decisions based solely on the realized values of their objective functions. Our future work will delve into data-driven approaches for learning in multiplayer games and explore functional representations of players' decisions, departing from the usual vector form.
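
To make the setting above concrete, a common formalization of games with coupled costs and shared constraints is the generalized Nash equilibrium problem sketched below; the notation is a generic illustration and may differ from the thesis's exact formulation.

```latex
% Illustrative generalized Nash equilibrium problem (GNEP); the notation is a
% generic sketch, not necessarily the formulation used in the thesis.
% Each player i = 1, ..., N, given the other players' actions x_{-i}, solves
\begin{equation*}
  \min_{x_i \in \mathcal{X}_i} \; J_i(x_i, x_{-i})
  \quad \text{s.t.} \quad g(x_1, \dots, x_N) \le 0,
\end{equation*}
% where J_i is player i's cost, \mathcal{X}_i is its local action set, and g
% collects the global constraints coupling all players' actions. A profile x^*
% is a (generalized) Nash equilibrium if it is feasible and no player can
% reduce its cost by unilaterally changing its own component x_i^*.
```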
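
The bandit setting mentioned above, where each player observes only the realized cost of the action it actually played, is often handled with one-point zeroth-order gradient estimates. The Python sketch below illustrates that generic idea on a hypothetical two-player quadratic game; the game, step sizes, and estimator are illustrative choices and not the algorithms developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-player quadratic game (illustrative costs, not the thesis's models):
# player i chooses x_i in [-1, 1]^2 and minimizes
#   J_i(x_i, x_-i) = 0.5 * ||x_i||^2 + 0.5 * <x_i, x_-i>,
# whose unique Nash equilibrium is x_1 = x_2 = 0.
def cost(i, actions):
    xi, xo = actions[i], actions[1 - i]
    return 0.5 * (xi @ xi) + 0.5 * (xi @ xo)

dim, delta = 2, 0.2                      # action dimension and exploration radius
x = [rng.uniform(-1, 1, dim) for _ in range(2)]
avg = [np.zeros(dim), np.zeros(dim)]     # running averages of the iterates

for t in range(20000):
    step = 0.1 / np.sqrt(t + 1)          # diminishing step size
    new_x = []
    for i in range(2):
        u = rng.normal(size=dim)
        u /= np.linalg.norm(u)           # random direction on the unit sphere
        played = list(x)
        played[i] = x[i] + delta * u     # play a slightly perturbed action
        realized = cost(i, played)       # the only feedback: the realized cost value
        grad_est = (dim / delta) * realized * u        # one-point gradient estimate
        new_x.append(np.clip(x[i] - step * grad_est, -1.0, 1.0))  # projected step
    x = new_x
    for i in range(2):
        avg[i] += (x[i] - avg[i]) / (t + 1)

# The averaged iterates drift toward the equilibrium at the origin,
# up to the exploration bias and the noise of bandit feedback.
print("time-averaged actions:", avg)
```

Each player perturbs its action in a random direction, rescales the single observed cost value into a gradient surrogate, and takes a projected step; averaging the iterates damps the estimator's variance.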

DOI: 10.25394/pgs.25596582.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/25596582
Date: 15 April 2024
Creators: Yuanhanqing Huang (18361527)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY 4.0
Relation: https://figshare.com/articles/thesis/Navigating_Uncertainty_Distributed_and_Bandit_Solutions_for_Equilibrium_Learning_in_Multiplayer_Games/25596582
