<p dir="ltr">The original definition of Nash Equilibrium applied to normal form games, but the notion has now been extended to various other forms of games including leader-follower games (Stackelberg games), extensive form games, stochastic games, games of incomplete information, cooperative games, and so on. We focus on general-sum stochastic Stackelberg games in this work. An example where such games would be natural to consider is in security games where a defender wishes to protect some targets through deployment of limited resources and an attacker wishes to strategically attack the targets to benefit themselves. The hierarchical order of play arises naturally since the defender typically acts first and deploys a strategy, while the attacker observes the strategy ofthe defender before attacking. Another example where this framework fits is in testing during epidemics, where the leader (the government) sets testing policies and the follower (the citizens) decide at every time step whether to get tested. The government wishes to minimize the number of infected people in the population while the follower wishes to minimize the cost of getting sick and testing. This thesis presents a learning algorithm for players to converge to their stationary policies in a general sum stochastic sequential Stackelberg game. The algorithm is a two time scale implicit policy gradient algorithm that provably converges to stationary points of the optimization problems of the two players. Our analysis allows us to move beyond the assumptions of zero-sum or static Stackelberg games made in the existing literature for learning algorithms to converge.</p><p dir="ltr"><br></p>
Identifier | oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/25620726
Date | 19 April 2024
Creators | Pranoy Das (18369306) |
Source Sets | Purdue University |
Detected Language | English |
Type | Text, Thesis |
Rights | CC BY 4.0 |
Relation | https://figshare.com/articles/thesis/Learning_in_Stochastic_Stackelberg_Games/25620726 |