About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1.

Security Games: Solution Concepts and Algorithms

Korzhyk, Dmytro January 2013 (has links)
Algorithms for finding game-theoretic solutions are now used in several real-world security applications. Many of these applications are based on different but related game-theoretic models collectively known as security games. Much of the research in this area has focused on the two-player setting in which the first player (the leader, or defender) commits to a strategy, after which the second player (the follower, or attacker) observes that strategy and responds to it. This is commonly known as the Stackelberg, or leader-follower, model. If neither player can observe the actions of the other, the setting is called a simultaneous-move game, and a common solution concept there is the Nash equilibrium (NE). This dissertation contributes to this line of research in two ways.

First, we consider new ways of modeling commitment. We propose a new model in which the leader can commit to a correlated strategy. We show that this model is equivalent to the Stackelberg model in two-player games and differs from existing models in games with three or more players. We give a polynomial-time algorithm for computing a solution to this model. We also consider a leader-follower setting in which the players are uncertain about whether the follower can observe the leader's strategy, and we describe an iterative algorithm for solving such games.

Second, we analyze the computational complexity of computing Stackelberg and NE strategies in security games. We describe polynomial-time algorithms for some variants of the security game model and prove NP-hardness for others. We also extend the family of security games by allowing the attacker to have multiple resources. We provide a polynomial-time algorithm for computing an NE of such games, and we show that computing a Stackelberg strategy is NP-hard. / Dissertation
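The commitment idea in the abstract above can be made concrete with a toy example. The following is a minimal illustrative sketch, not code from the dissertation: a two-target security game with one defender resource, where the defender commits to a coverage distribution and the attacker best-responds, with ties broken in the defender's favor (the strong Stackelberg assumption). The payoff numbers are invented for illustration, and grid search stands in for the exact linear-programming methods the dissertation actually studies.

```python
# Illustrative sketch: strong Stackelberg equilibrium of a tiny 2-target
# security game, found by grid search over the defender's coverage of
# target 0 (the single resource covers target 1 with the remainder).

def best_response(att_utils, def_utils, tol=1e-9):
    """Attacker picks a utility-maximizing target; ties are broken in the
    defender's favor, as in a strong Stackelberg equilibrium."""
    m = max(att_utils)
    candidates = [t for t, u in enumerate(att_utils) if u >= m - tol]
    return max(candidates, key=lambda t: def_utils[t])

# payoffs[t] = (def. if covered, def. if uncovered,
#               att. if covered, att. if uncovered) -- invented numbers
payoffs = [(1, -5, -1, 5), (0, -2, -1, 2)]

best = None
for i in range(301):                      # coverage grid on target 0
    c = [i / 300, 1 - i / 300]            # one resource split over 2 targets
    att = [c[t] * p[2] + (1 - c[t]) * p[3] for t, p in enumerate(payoffs)]
    dfd = [c[t] * p[0] + (1 - c[t]) * p[1] for t, p in enumerate(payoffs)]
    t = best_response(att, dfd)           # attacker observes c, responds
    if best is None or dfd[t] > best[0]:
        best = (dfd[t], c[0], t)

print(best)  # for these payoffs: cover target 0 with prob. 2/3, value -1
```

For these payoffs the defender's optimal commitment makes the attacker indifferent between the targets (coverage 2/3 on target 0), which is the typical structure of Stackelberg solutions in security games.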
2.

Bayesian Reinforcement Learning Methods for Network Intrusion Prevention

Nesti Lopes, Antonio Frederico January 2021 (has links)
A growing problem in network security stems from the fact that both attack methods and target systems constantly evolve. This makes it difficult for human operators to keep up and manage the security problem. A promising approach to this challenge is to use reinforcement learning to adapt security policies to a changing environment. A drawback of this approach, however, is that traditional reinforcement learning methods require a large amount of data to learn effective policies, and such data can be both costly and difficult to obtain. To address this problem, this thesis investigates ways to incorporate prior knowledge in learning systems for network security, with the goal of learning security policies from less data than traditional reinforcement learning algorithms require. To investigate this, we take a Bayesian approach and consider Bayesian reinforcement learning methods as a complement to current reinforcement learning algorithms. Specifically, we study the following algorithms: Bayesian Q-learning, Bayesian REINFORCE, and Bayesian Actor-Critic. To evaluate our approach, we have implemented these algorithms and applied them to different simulation scenarios of intrusion prevention. Our results demonstrate that the Bayesian reinforcement learning algorithms learn more efficiently than their non-Bayesian counterparts, but that the Bayesian approach is more computationally demanding. Further, we find that the choice of prior and the kernel function have a large impact on the performance of the algorithms. / A growing problem in cybersecurity is that both attack methods and systems are in constant change and development: on the one hand, attack methods become more and more sophisticated, and on the other hand, systems evolve through innovations and upgrades. This makes it difficult for human operators to manage the security problem.
A promising method for handling this challenge is reinforcement learning. With reinforcement learning, an autonomous agent can automatically learn to adapt security policies to a changing environment. A challenge with this approach, however, is that traditional reinforcement learning methods require a large amount of data to learn effective policies, which can be both costly and difficult to obtain. To solve this problem, this thesis investigates Bayesian methods for incorporating prior knowledge into the learning algorithm, which can enable learning with less data. Specifically, we study the following Bayesian algorithms: Bayesian Q-learning, Bayesian REINFORCE, and Bayesian Actor-Critic. To evaluate our approach, we have implemented the mentioned algorithms, evaluated their performance in different simulation scenarios for intrusion prevention, and analyzed their complexity. Our results show that the Bayesian reinforcement learning algorithms can be used to learn policies with less data than their non-Bayesian counterparts require, but that the Bayesian method is more computationally demanding. Further, we find that the method for incorporating prior knowledge into the learning algorithm, as well as the choice of kernel function, has a large impact on the performance of the algorithms.
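The core idea behind the Bayesian algorithms named in the abstract above, maintaining a posterior over value estimates rather than point estimates, can be sketched on a toy problem. The following is an illustrative sketch, not the thesis implementation: a two-action bandit standing in for a "block vs. allow" intrusion-prevention decision, with a conjugate Gaussian posterior over each action's mean return (known observation noise is assumed) and Thompson sampling for action selection. The reward means and all parameters are invented for illustration.

```python
# Illustrative sketch of the Bayesian flavor of value learning: a Gaussian
# posterior over each action's mean return, updated in closed form, with
# actions chosen by sampling from the posterior (Thompson sampling).

import random

random.seed(0)

class GaussianPosterior:
    """Conjugate Normal posterior over an unknown mean, with a known
    observation-noise variance (a simplifying assumption)."""
    def __init__(self, mu0=0.0, var0=100.0, noise_var=1.0):
        self.mu, self.var, self.noise_var = mu0, var0, noise_var

    def update(self, reward):
        k = self.var / (self.var + self.noise_var)  # Kalman-style gain
        self.mu += k * (reward - self.mu)           # shift toward the data
        self.var *= (1 - k)                         # uncertainty shrinks

    def sample(self):
        return random.gauss(self.mu, self.var ** 0.5)

true_means = [1.0, 2.0]            # hidden mean returns of the two actions
posteriors = [GaussianPosterior() for _ in true_means]

for _ in range(2000):
    # Thompson sampling: draw one value per action, act greedily on draws.
    a = max(range(2), key=lambda i: posteriors[i].sample())
    r = random.gauss(true_means[a], 1.0)
    posteriors[a].update(r)

print([round(p.mu, 2) for p in posteriors])
```

A wide prior (`var0=100.0`) encodes little prior knowledge; the abstract's point is that an informative prior would concentrate the posterior faster, at the cost of the extra bookkeeping the posterior update requires.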
