
DECISION MAKING UNDER UNCERTAINTY IN DYNAMIC MULTI-STAGE ATTACKER-DEFENDER GAMES

This dissertation presents efficient, on-line, convergent methods for finding defense strategies against attacks in dynamic multi-stage attacker-defender games with adaptive learning. The effort culminated in four papers, submitted to high-quality journals and a book, which are partially published.

The first paper presents a novel fictitious play approach to describe the interactions between the attackers and the network administrator over the course of a dynamic game. A multi-objective optimization methodology is used to predict the attacker's best actions at each decision node. The administrator keeps track of the attacker's actions, updates his knowledge of the attacker's behavior and objectives after each detected attack, and uses this information to refine the prediction of the attacker's future actions and to find his best-response strategies.

The second paper proposes a Dynamic game tree based Fictitious Play (DFP) approach to describe the repeated interactive decision processes of the players. Each player considers all possible future interactions and their uncertainties, based on what has been learned about the opponent's decision process (including risk attitude and objectives). Instead of searching the entire game tree, appropriate future time horizons are dynamically selected for both players. The administrator keeps track of the opponent's actions, predicts the probabilities of possible future attacks, and then chooses his best moves.

The third paper introduces an optimization model that maximizes the deterministic equivalent of the random payoff function of a computer network administrator defending the system against random attacks. By introducing new variables, the objective function is transformed into a concave function. A special optimization algorithm is developed that requires the computation of the unique solution of a single-variable monotonic equation.

The fourth paper, an invited book chapter, proposes a discrete-time stochastic control model to capture the process of finding the defender's best current move. The defender's payoffs at each stage of the game depend on the attacker's and the defender's cumulative efforts and are modeled as random variables because of their uncertainty. Their certainty equivalents can be approximated from their first and second moments, and these approximations are chosen as the cost functions of the dynamic system. An on-line, convergent, Scenarios-based Proactive Defense (SPD) algorithm, built on Differential Dynamic Programming (DDP), is developed to solve the associated optimal control problem.
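
To make the fictitious-play idea of the first two papers concrete, the following minimal sketch (in Python) shows a defender that tracks the empirical frequencies of observed attacks and best-responds to the resulting mixed strategy. The payoff matrix, action labels, and uniform prior are illustrative assumptions, not the dissertation's actual model.

    import numpy as np

    # Minimal fictitious-play sketch (illustrative assumptions throughout):
    # the defender keeps empirical counts of the attacker's observed actions
    # and best-responds to the induced empirical mixed strategy.

    # defender_payoff[d, a]: defender's payoff when the defender plays d and
    # the attacker plays a (made-up numbers, not the dissertation's model)
    defender_payoff = np.array([
        [ 2.0, -1.0,  0.0],   # d0: patch critical servers
        [-1.0,  3.0,  0.5],   # d1: tighten firewall rules
        [ 0.0,  0.5,  1.5],   # d2: increase monitoring
    ])

    attack_counts = np.ones(defender_payoff.shape[1])  # uniform prior over attack types

    def defender_best_response(counts):
        """Best response to the empirical distribution of attacker actions."""
        attack_freq = counts / counts.sum()
        expected = defender_payoff @ attack_freq   # expected payoff of each defense
        return int(np.argmax(expected))

    def observe_attack(action):
        """Update the empirical attacker model after a detected attack."""
        attack_counts[action] += 1

    # One round of play: an attack of type 1 is detected, then the defender responds.
    observe_attack(1)
    print("defender plays d%d" % defender_best_response(attack_counts))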
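
The third paper's algorithm reduces to solving a single-variable monotonic equation, but the abstract does not give that equation; the function g below is therefore a placeholder, and the sketch only illustrates that the unique root of a continuous monotone function can be computed by simple bisection.

    def solve_monotone(g, lo, hi, tol=1e-10):
        """Find the unique root of a continuous, monotone function g on [lo, hi].

        Assumes g(lo) and g(hi) have opposite signs; g stands in for the
        single-variable monotonic equation arising in the optimization model.
        """
        g_lo = g(lo)
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if (g(mid) > 0) == (g_lo > 0):
                lo, g_lo = mid, g(mid)   # root lies to the right of mid
            else:
                hi = mid                 # root lies to the left of mid
        return 0.5 * (lo + hi)

    # Example with a made-up increasing function x^3 + x - 2, whose root is 1.
    print(solve_monotone(lambda x: x**3 + x - 2.0, 0.0, 2.0))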
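
For the moment-based certainty-equivalent approximation mentioned in the fourth paper, a standard second-order form under exponential utility with risk-aversion parameter λ (an assumption here, since the abstract does not specify the utility function) is

    \mathrm{CE}[X] \;\approx\; \mathbb{E}[X] - \frac{\lambda}{2}\,\mathrm{Var}[X]

so maximizing the certainty equivalent trades expected payoff against its variance, consistent with the abstract's use of the first and second moments as the cost functions of the dynamic system.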

Identifier: oai:union.ndltd.org:arizona.edu/oai:arizona.openrepository.com:10150/204331
Date: January 2011
Creators: Luo, Yi
Contributors: Szidarovszky, Ferenc; Hariri, Salim; Lin, Wei Hua; Bayraksan, Güzin
Publisher: The University of Arizona.
Source Sets: University of Arizona
Language: English
Detected Language: English
Type: text, Electronic Dissertation
Rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
