An empirical approach to modeling uncertainty in intrusion analysis

Master of Science / Department of Computing and Information Sciences / Xinming (Simon) Ou

A well-known problem with current intrusion detection tools is that they generate too many low-level alerts, and system administrators find it hard to cope with the sheer volume. Moreover, when multiple sources of information must be combined to confirm an attack, the complexity of the analysis increases dramatically. Attackers use sophisticated techniques to evade detection, and current system monitoring tools can observe only the symptoms or effects of malicious activities. When these effects are mingled with similar effects of normal or benign behavior, intrusion analysis yields conclusions of varying confidence and high false positive/negative rates.

In this thesis we present an empirical approach to this problem of modeling uncertainty, in which the inferred security implications of low-level observations are captured in a simple logical language augmented with uncertainty tags. We have designed an automated reasoning process that combines multiple sources of system monitoring data and extracts high-confidence attack traces from the numerous possible interpretations of the low-level observations. We developed the model empirically: the starting point was a real intrusion on a campus network, which we studied to capture the essence of the human reasoning process that led to conclusions about the attack. We then encoded the model in a Datalog-like language and used a Prolog system to carry out the reasoning process. Our model and reasoning system reached the same conclusions as the human administrator on the question of which machines were certainly compromised. We then automatically generated the reasoning model needed for handling Snort alerts from the natural-language descriptions in the Snort rule repository, and developed a Snort add-on to analyze Snort alerts. Keeping the reasoning model unchanged, we applied our reasoning system to two third-party data sets and one production network; the results show that the reasoning model is effective on these data sets as well. We believe such an empirical approach has the potential to codify the seemingly ad hoc human reasoning about uncertain events, and can yield useful tools for automated intrusion analysis.
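To give a flavor of the kind of encoding the abstract describes, the following is a minimal Prolog sketch, not the thesis's actual rule base: the predicate names (obs/2, int/2), the confidence tags (l for likely, c for certain), and the observations are hypothetical, chosen only to illustrate how low-confidence observations from independent sources can be combined into a high-confidence conclusion.

    % Hypothetical observation facts derived from monitoring data,
    % each tagged with a confidence level (l = likely, c = certain).
    obs(ids_alert(exploit_attempt, host1), l).
    obs(netflow_blacklist_contact(host1), l).

    % Each observation alone gives only weak (likely) evidence that
    % the host is compromised.
    int(compromised(H), l) :- obs(ids_alert(exploit_attempt, H), l).
    int(compromised(H), l) :- obs(netflow_blacklist_contact(H), l).

    % Strengthening rule: two independent likely indications of the same
    % condition combine into a certain conclusion.
    int(compromised(H), c) :-
        obs(ids_alert(exploit_attempt, H), l),
        obs(netflow_blacklist_contact(H), l).

    % Query: which hosts are certainly compromised?
    % ?- int(compromised(H), c).
    % H = host1.

A query for conclusions tagged c would then return only the hosts supported by corroborating evidence, which mirrors the abstract's goal of extracting high-confidence attack traces from many possible interpretations.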

Identifier: oai:union.ndltd.org:KSU/oai:krex.k-state.edu:2097/2337
Date: January 1900
Creators: Sakthivelmurugan, Sakthiyuvaraja
Publisher: Kansas State University
Source Sets: K-State Research Exchange
Language: en_US
Detected Language: English
Type: Thesis