
Modeling Rational Adversaries: Predicting Behavior and Developing Deterrents

In the field of cybersecurity, it is often impossible to construct systems that resist all attacks. For example, even a well-designed password authentication system is vulnerable to password cracking because users tend to select low-entropy passwords. In cryptography, attackers are often modeled as powerful and malicious, and a system is considered broken if any such attacker can violate the desired security properties. While this approach is useful in some settings, such a high bar is unachievable in many security applications, e.g., password authentication. However, even when a system is imperfectly secure, it may be possible to deter a rational attacker who seeks to maximize their utility: if a rational adversary finds that the cost of running an attack exceeds the expected reward, they will not run that attack. In this dissertation, we argue in support of the following statement: modeling adversaries as rational actors allows us to better model the security of imperfect systems and to develop stronger defenses. We present several results in support of this thesis. First, we develop models of the behavior of rational adversaries in the context of password cracking and quantum key-recovery attacks. These models allow us to quantify the damage caused by password breaches, quantify the damage caused by (widespread) password length leakage, and identify imperfectly secure settings where a rational adversary is unlikely to run any attack at all, i.e., quantum key-recovery attacks. Second, we develop several tools that deter rational attackers by ensuring that the utility-optimizing attack is either less severe or nonexistent. Specifically, these tools increase the cost of offline password cracking by strengthening password hashing algorithms, strategically signaling user password strength, and storing passwords with dedicated Application-Specific Integrated Circuits (ASICs).
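The deterrence argument in the abstract has a simple quantitative form: an attacker who guesses passwords in decreasing-probability order keeps guessing only while the marginal expected reward of the next guess (account value times the password's probability) exceeds the marginal cost of one hash evaluation. The sketch below illustrates that calculation; the Zipf-like password distribution and the parameter values (account value, per-guess hash cost) are hypothetical illustrations, not figures from the dissertation.

```python
# Minimal sketch of the rational-adversary model of offline password
# cracking described above. All parameters (account value, per-guess
# hash cost, and the Zipf-like distribution) are hypothetical.

def optimal_guessing_budget(probs, value, guess_cost):
    """Return the utility-maximizing number of guesses.

    probs      -- password probabilities, sorted in decreasing order
    value      -- attacker's value for a cracked account
    guess_cost -- cost of one evaluation of the password hash

    The attacker checks passwords in decreasing-probability order and
    continues only while the marginal expected reward of the next
    guess (value * p) exceeds its marginal cost.
    """
    budget = 0
    for p in probs:
        if value * p < guess_cost:
            break  # further guesses lose money in expectation
        budget += 1
    return budget

if __name__ == "__main__":
    N = 10**6
    # Hypothetical Zipf-like distribution over N candidate passwords.
    weights = [1 / (rank + 1) for rank in range(N)]
    total = sum(weights)
    probs = [w / total for w in weights]

    value = 50.0       # hypothetical value of a cracked account ($)
    guess_cost = 1e-5  # hypothetical cost per hash evaluation ($)

    b = optimal_guessing_budget(probs, value, guess_cost)
    print(f"Rational attacker makes {b:,} guesses, "
          f"cracking {sum(probs[:b]):.1%} of accounts in expectation.")

    # A more expensive hash (e.g., more iterations or a memory-hard
    # function) raises guess_cost and shrinks the rational budget.
    b_hard = optimal_guessing_budget(probs, value, guess_cost * 100)
    print(f"With a 100x more expensive hash: {b_hard:,} guesses, "
          f"cracking {sum(probs[:b_hard]):.1%} of accounts.")
```

The second print statement shows the deterrence mechanism at work: raising the per-guess cost does not make the system perfectly secure, but it shrinks the utility-optimizing attack, which is exactly the effect the hashing-strengthening tools in the abstract aim for.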

DOI: 10.25394/pgs.15057363.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/15057363
Date: 26 July 2021
Creators: Benjamin D Harsha (11186139)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY 4.0
Relation: https://figshare.com/articles/thesis/Modeling_Rational_Adversaries_Predicting_Behavior_and_Developing_Deterrents/15057363