  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Scalable framework for turn-key honeynet deployment

Brzeczko, Albert Walter 22 May 2014 (has links)
Enterprise networks present very high value targets in the eyes of malicious actors who seek to exfiltrate sensitive proprietary data, disrupt the operations of a particular organization, or leverage considerable computational and network resources to further their own illicit goals. For this reason, enterprise networks typically attract the most determined of attackers. These attackers are prone to using the most novel and difficult-to-detect approaches so that they may have a high probability of success and continue operating undetected. Many existing network security approaches that fall under the category of intrusion detection systems (IDS) and intrusion prevention systems (IPS) are able to detect classes of attacks that are well-known. While these approaches are effective for filtering out routine attacks in automated fashion, they are ill-suited for detecting the types of novel tactics and zero-day exploits that are increasingly used against the enterprise. In this thesis, a solution is presented that augments existing security measures to provide enhanced coverage of novel attacks in conjunction with what is already provided by traditional IDS and IPS. The approach enables honeypots, a class of technique that observes novel attacks by luring an attacker to perform malicious activity on a system having no production value, to be deployed in a turn-key fashion and at large scale on enterprise networks. In spite of the honeypot’s efficacy against targeted attacks, organizations can seldom afford to devote capital and IT manpower to integrating them into their security posture. Furthermore, misconfigured honeypots can actually weaken an organization’s security posture by giving the attacker a staging ground on which to perform further attacks. A turn-key approach is needed for organizations to use honeypots to trap, observe, and mitigate novel targeted attacks.
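As a loose illustration of the honeypot concept described in this abstract (not the thesis's actual framework), a low-interaction decoy service can be sketched in a few lines of Python: it accepts TCP connections on a port with no production value, records the peer address, and replies with a plausible fake banner. The class name, banner, and logging scheme below are all hypothetical.

```python
import socket
import threading

class LowInteractionHoneypot:
    """Minimal low-interaction honeypot sketch: accepts TCP connections
    on a decoy port, records the peer address, and sends a fake banner.
    Hypothetical illustration; real deployments need strict isolation."""

    def __init__(self, host="127.0.0.1", port=0, banner=b"220 FTP ready\r\n"):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.bind((host, port))      # port 0: let the OS pick a free port
        self.sock.listen(5)
        self.port = self.sock.getsockname()[1]
        self.banner = banner
        self.events = []                  # each entry: (peer_ip, peer_port)

    def serve_once(self):
        conn, peer = self.sock.accept()
        self.events.append(peer)          # log the probe for later analysis
        conn.sendall(self.banner)         # present a plausible service banner
        conn.close()

hp = LowInteractionHoneypot()
t = threading.Thread(target=hp.serve_once, daemon=True)
t.start()

# Simulated attacker probe against the decoy service
c = socket.create_connection(("127.0.0.1", hp.port))
data = c.recv(64)
c.close()
t.join(timeout=2)
```

Any connection to such a system is suspicious by construction, which is what makes honeypot observations high-signal compared with production traffic.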
12

Shilling attack detection in recommender systems.

Bhebe, Wilander. January 2015 (has links)
M. Tech. Information Networks / The growth of the internet has made it easy for people to exchange information, resulting in an abundance of information commonly referred to as information overload. It causes retailers to fail to make adequate sales, since customers are swamped with a lot of options and choices. To lessen this problem, retailers have begun to find it useful to make use of algorithmic approaches to determine which content to show consumers. These algorithmic approaches are known as recommender systems. Collaborative Filtering recommender systems suggest items to users based on other users' reported prior experience with those items. These systems are, however, vulnerable to shilling attacks, since they are highly dependent on outside sources of information. Shilling is a process in which syndicating users connive to promote or demote a certain item, with malicious users benefiting from introducing biased ratings. It is, however, critical that shilling detection systems are implemented to detect, warn of, and shut down shilling attacks within minutes. Modern patented shilling detection systems employ: (a) classification methods, (b) statistical methods, and (c) rules and threshold values defined by shilling detection analysts, using their knowledge of valid shilling cases and the false alarm rate as guidance. The goal of this dissertation is to determine a context for, and assess the performance of, Meta-Learning techniques that can be integrated into the shilling detection process.
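The statistical branch of shilling detection mentioned in this abstract can be illustrated with a toy RDMA-style score (Rating Deviation from Mean Agreement): shillers who push or nuke an item tend to deviate more from item-mean ratings than honest users. The ratings, user names, and threshold below are invented for the sketch and are not from the dissertation.

```python
from statistics import mean

# Toy user-item ratings; users s1/s2 are hypothetical shillers pushing item "X"
ratings = {
    "u1": {"A": 4, "B": 3, "X": 2},
    "u2": {"A": 5, "B": 3, "X": 2},
    "u3": {"A": 4, "B": 2, "X": 2},
    "u4": {"A": 4, "B": 3, "X": 3},
    "u5": {"A": 4, "B": 3, "X": 2},
    "s1": {"A": 1, "B": 5, "X": 5},
    "s2": {"A": 1, "B": 5, "X": 5},
}

def item_means(ratings):
    totals = {}
    for user_ratings in ratings.values():
        for item, r in user_ratings.items():
            totals.setdefault(item, []).append(r)
    return {item: mean(rs) for item, rs in totals.items()}

def rdma(user, ratings, means):
    """Rating Deviation from Mean Agreement: average absolute deviation
    of a user's ratings from each item's mean rating."""
    return mean(abs(r - means[item]) for item, r in ratings[user].items())

means = item_means(ratings)
scores = {u: rdma(u, ratings, means) for u in ratings}
# Flag users whose deviation exceeds a hypothetical analyst-set threshold
suspects = {u for u, s in scores.items() if s > 1.5}
```

In practice a fixed threshold is fragile (large attacks shift the item means themselves), which is one motivation for the learned, Meta-Learning-based detectors the dissertation investigates.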
13

Dynamic Game-Theoretic Models to Determine the Value of Intrusion Detection Systems in the Face of Uncertainty

Moured, David Paul 27 January 2015 (has links)
Firms lose millions of dollars every year to cyber-attacks, and the risk to these companies is growing exponentially. The threat to monetary and intellectual property has made Information Technology (IT) security management a critical challenge to firms. Security devices, including Intrusion Detection Systems (IDS), are commonly used to help protect these firms from malicious users by identifying the presence of malicious network traffic. However, the actual value of these devices remains uncertain among the IT security community because of the costs associated with the implementation of different monitoring strategies that determine when to inspect potentially malicious traffic and the costs associated with false positive and negative errors. Game theoretic models have proven effective for determining the value of these devices under several conditions where firms and users are modeled as players. However, these models assume that both the firm and attacker have complete information about their opponent and lack the ability to account for more realistic situations where players have incomplete information regarding their opponent's payoffs. The proposed research develops an enhanced model that can be used for strategic decision making in IT security management where the firm is uncertain about the user's utility of intrusion. By using Harsanyi Transformation Analysis, the model provides the IT security research community with valuable insight into the value of IDS when the firm is uncertain of the incentives and payoffs available to users choosing to hack. Specifically, this dissertation considers two possible types of users with different utility for intrusion to gain further insights about the players' strategies. The firm's optimal strategy is to start the game with the expected value of the user's utility as an estimate. Under this strategy, the firm can determine the user's utility with certainty within one iteration of the game.
After the first iteration, the game may be analyzed as a game of perfect information.
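A minimal sketch of the incomplete-information setup described above, with two hypothetical user types and invented payoff numbers (the dissertation's actual model and values are not reproduced here). The Harsanyi transformation turns the firm's uncertainty about the user's payoffs into a probability distribution over "types", so the firm can optimize its expected loss against that prior.

```python
# Hypothetical payoff numbers, for illustration only.
# Two user types with different utility of intrusion; the firm holds a
# prior over types (Harsanyi: uncertainty over payoffs becomes
# uncertainty over Nature's choice of type).
types = {
    "high_utility": {"prob": 0.4, "hack_gain": 25.0},
    "low_utility":  {"prob": 0.6, "hack_gain": 2.0},
}

detect_penalty = 8.0   # attacker loss if caught (assumed)
monitor_cost = 1.0     # firm cost of running inspection (assumed)
breach_loss = 12.0     # firm loss from an undetected intrusion (assumed)
detect_prob = 0.7      # IDS detection rate when monitoring (assumed)

def user_hacks(hack_gain, p_detect):
    """A rational user intrudes iff the expected gain is positive."""
    return hack_gain * (1 - p_detect) - detect_penalty * p_detect > 0

def firm_expected_loss(monitor):
    """Firm's expected loss, averaging over user types with prior probs."""
    p_detect = detect_prob if monitor else 0.0
    loss = monitor_cost if monitor else 0.0
    for t in types.values():
        if user_hacks(t["hack_gain"], p_detect):
            loss += t["prob"] * breach_loss * (1 - p_detect)
    return loss
```

With these numbers only the high-utility type finds intrusion worthwhile under monitoring, and monitoring strictly lowers the firm's expected loss; observing whether the user hacks then reveals the type, matching the abstract's claim that one iteration suffices.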
14

Evaluation of capture techniques for intrusion detection systems.

Tavares, Dalton Matsuo 04 July 2002 (has links)
The main objective of the current work is to present a proposal that allows the combination of an existing and not very flexible packet capture solution (sniffer) with the concept of mobile agents for application in switched networks. This research focuses on the application of the packet capture technique in network-based IDSs, using for this purpose the model developed at ICMC (Cansian, 1997) and later adjusted to the mobile agents environment (Bernardes, 1999). Therefore, the base layer of the environment developed in (Bernardes, 1999) was specified, focusing on the interactions between its agents and the packet capture agent.
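What a capture agent does with sniffed bytes can be illustrated by parsing the fixed 20-byte IPv4 header with the standard `struct` module. This sketch uses a hand-built sample packet rather than live capture (which would require privileged raw sockets); it is an assumption-level illustration, not the thesis's agent implementation.

```python
import struct
import socket

def parse_ipv4_header(packet: bytes):
    """Parse the fixed 20-byte IPv4 header from a captured frame payload,
    as a capture agent might before handing data to analysis agents."""
    vihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": vihl >> 4,
        "ihl": vihl & 0x0F,        # header length in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,         # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# A hand-built sample header (20 bytes, TCP, 10.0.0.1 -> 10.0.0.2)
sample = struct.pack("!BBHHHBBH4s4s",
                     (4 << 4) | 5, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("10.0.0.1"),
                     socket.inet_aton("10.0.0.2"))
hdr = parse_ipv4_header(sample)
```

In a switched network, the point of making the capture component an agent is that it can migrate to the segment where the traffic of interest actually flows, rather than relying on a fixed sniffer position.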
15

Empirical Measurement of Defense in Depth

Boggs, Nathaniel January 2015 (has links)
Measurement is a vital tool for organizations attempting to increase, evaluate, or simply maintain their overall security posture over time. Organizations rely on defense in depth, which is a layering of multiple defenses, in order to strengthen overall security. Measuring organizations' total security requires evaluating individual security controls such as firewalls, antivirus, or intrusion detection systems alone as well as their joint effectiveness when deployed together in defense in depth. Currently, organizations must rely on best practices rooted in ad hoc expert opinion, reports on individual product performance, and marketing hype to make their choices. When attempting to measure the total security provided by a defense in depth architecture, dependencies between security controls compound the already difficult task of measuring a single security control accurately. We take two complementary approaches to address this challenge of measuring the total security provided by defense in depth deployments. In our first approach, we use direct measurement: for a given set of attacks, we compute a total detection rate for a set of security controls deployed in defense in depth. In order to compare security controls operating on different types of data, we link together all data generated from each particular attack and track the specific attacks detected by each security control. We implement our approach for both the drive-by download and web application attack vectors across four separate layers each. We created an extensible automated framework for web application data generation using public sources of English text. For our second approach, we measure the total adversary cost, that is, the total effort, resources, and time required to evade security controls deployed in defense in depth. Dependencies between security controls prevent us from simply summing the adversary cost to evade individual security controls in order to compute a total adversary cost.
We create a methodology that accounts for these dependencies especially focusing on multiplicative relationships where the adversary cost of evading two security controls together is more than the sum of the adversary costs to evade each individually. Using the insight gained into the multiplicative dependency, we design a method for creating sets of multiplicative security controls. Additionally, we create a prototype to demonstrate our methodology for empirically measuring total adversary cost using attack tree visualizations and a database design capable of representing dependent relationships between security controls.
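The direct-measurement idea, linking each attack across layers and counting it as caught if any control detects it, can be sketched with invented per-attack results. Note that the joint rate falls below the sum of per-layer rates whenever detections overlap, which is the dependency problem the abstract describes.

```python
# Hypothetical per-attack detection results for three layered controls.
# Each attack is linked across controls, as in the direct-measurement
# approach: an attack counts as detected if ANY layer catches it.
detections = {
    "attack-01": {"firewall": True,  "av": False, "ids": False},
    "attack-02": {"firewall": False, "av": True,  "ids": True},
    "attack-03": {"firewall": False, "av": False, "ids": False},
    "attack-04": {"firewall": False, "av": False, "ids": True},
}

def layer_rate(detections, layer):
    """Detection rate of one security control considered alone."""
    hits = sum(1 for d in detections.values() if d[layer])
    return hits / len(detections)

def total_detection_rate(detections):
    """Joint rate for the defense-in-depth stack: detected by any layer."""
    caught = sum(1 for d in detections.values() if any(d.values()))
    return caught / len(detections)
```

Here attack-02 is caught twice, so the stack's total rate (0.75) is less than the 1.0 one would naively get by summing the three per-layer rates; the analogous non-additivity for adversary cost is what the methodology above must account for.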
16

An evaluation of the use of mobile agents in computational security.

Bernardes, Mauro Cesar 22 December 1999 (has links)
The use of protection mechanisms such as firewalls must be expanded due to the increase in attacks from internal sources. Because this kind of attack, carried out by users internal to the system, does not allow immediate localization, the integrated use of several technologies is necessary to enhance a system's defense capabilities. The introduction of mobile agents in support of computational security therefore appears to be a natural solution, as it allows the distribution of system monitoring tasks and automates the decision-making process in the absence of a human administrator. This work presents an evaluation of the use of mobile agents to add mobility to the process of intrusion monitoring in computer systems. A modular approach is proposed, in which small, independent agents monitor the system. This approach has significant advantages in terms of overhead, scalability and flexibility.
17

DATA COLLECTION FRAMEWORK AND MACHINE LEARNING ALGORITHMS FOR THE ANALYSIS OF CYBER SECURITY ATTACKS

Unknown Date (has links)
The integrity of network communications is constantly being challenged by increasingly sophisticated intrusion techniques. Attackers are shifting to stealthier and more complex forms of attacks in an attempt to bypass known mitigation strategies. Also, many detection methods for popular network attacks have been developed using outdated or non-representative attack data. To effectively develop modern detection methodologies, there exists a need to acquire data that can fully encompass the behaviors of persistent and emerging threats. When collecting modern-day network traffic for intrusion detection, substantial amounts of traffic can be collected, much of which consists of relatively few attack instances as compared to normal traffic. This skewed distribution between normal and attack data can lead to high levels of class imbalance. Machine learning techniques can be used to aid in attack detection, but large levels of imbalance between normal (majority) and attack (minority) instances can lead to inaccurate detection results. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2019. / FAU Electronic Theses and Dissertations Collection
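One standard response to the class imbalance described above is random undersampling of the majority (normal) class before training a classifier. The sketch below uses toy records and a hypothetical label scheme; it illustrates the general technique, not the dissertation's specific methods.

```python
import random

def undersample(records, label_key="label", majority="normal", seed=42):
    """Random undersampling sketch: shrink the majority (normal) class to
    the size of the minority (attack) class before training a classifier."""
    rng = random.Random(seed)                 # seeded for reproducibility
    minority = [r for r in records if r[label_key] != majority]
    majority_rs = [r for r in records if r[label_key] == majority]
    kept = rng.sample(majority_rs, k=min(len(minority), len(majority_rs)))
    balanced = minority + kept
    rng.shuffle(balanced)
    return balanced

# 95% normal vs 5% attack traffic, mirroring the skew described above
traffic = [{"label": "normal"}] * 95 + [{"label": "attack"}] * 5
balanced = undersample(traffic)
```

Undersampling discards potentially useful majority-class data, which is why oversampling and cost-sensitive learning are common alternatives when imbalance is severe.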
18

Ant tree miner amyntas for intrusion detection

Botes, Frans Hendrik January 2018 (has links)
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2018. / With the constant evolution of information systems, companies have to acclimatise to the vast increase of data flowing through their networks. Business processes rely heavily on information technology and operate within a framework of little to no space for interruptions. Cyber attacks aimed at interrupting business operations, false intrusion detections and leaked information burden companies with large monetary and reputational costs. Intrusion detection systems analyse network traffic to identify suspicious patterns that indicate attempts to compromise the system. Classifiers (algorithms) are used to classify the data within different categories, e.g. malicious or normal network traffic. Recent surveys within intrusion detection highlight the need for improved detection techniques and warrant further experimentation for improvement. This experimental research project focuses on implementing swarm intelligence techniques within the intrusion detection domain. The Ant Tree Miner algorithm induces decision trees by using ant colony optimisation techniques. The Ant Tree Miner offers high accuracy with efficient results. However, limited research has been performed on this classifier in other domains such as intrusion detection. The research provides the intrusion detection domain with a new algorithm that improves upon results of decision trees and ant colony optimisation techniques when applied to the domain. The research has led to valuable insights into the Ant Tree Miner classifier within a previously unknown domain and created an intrusion detection benchmark for future researchers.
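The core choice rule in Ant-Miner-style algorithms, pheromone-weighted probabilistic selection of a split attribute followed by trail reinforcement, can be sketched as follows. Attribute names, heuristic values, and parameters are illustrative, not the thesis's configuration.

```python
import random

def aco_pick_attribute(pheromone, heuristic, alpha=1.0, beta=2.0, rng=None):
    """Pheromone-weighted roulette selection of a split attribute, the
    core choice rule in Ant-Miner-style algorithms:
    P(a) is proportional to pheromone[a]**alpha * heuristic[a]**beta."""
    rng = rng or random.Random()
    weights = {a: (pheromone[a] ** alpha) * (heuristic[a] ** beta)
               for a in pheromone}
    r = rng.random() * sum(weights.values())
    acc = 0.0
    for a, w in weights.items():
        acc += w
        if r <= acc:
            return a
    return a  # numeric safety fallback

def reinforce(pheromone, chosen, quality, evaporation=0.1):
    """Evaporate all trails, then deposit pheromone on the chosen
    attribute in proportion to the quality of the tree the ant built."""
    for a in pheromone:
        pheromone[a] *= (1 - evaporation)
    pheromone[chosen] += quality
    return pheromone

# Hypothetical traffic attributes; heuristic could be information gain
pheromone = {"dst_port": 1.0, "ttl": 1.0, "flags": 1.0}
heuristic = {"dst_port": 0.9, "ttl": 0.2, "flags": 0.4}
choice = aco_pick_attribute(pheromone, heuristic, rng=random.Random(1))
reinforce(pheromone, choice, quality=0.8)
```

Over many ants, evaporation plus quality-proportional deposits concentrate pheromone on attributes that repeatedly produce good trees, which is how the colony converges on strong split choices.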
19

Algorithms for Large-Scale Internet Measurements

Leonard, Derek Anthony 2010 December 1900 (has links)
As the Internet has grown in size and importance to society, it has become increasingly difficult to generate global metrics of interest that can be used to verify proposed algorithms or monitor performance. This dissertation tackles the problem by proposing several novel algorithms designed to perform Internet-wide measurements using existing or inexpensive resources. We initially address distance estimation in the Internet, which is used by many distributed applications. We propose a new end-to-end measurement framework called Turbo King (T-King) that uses the existing DNS infrastructure and, when compared to its predecessor King, obtains delay samples without bias in the presence of distant authoritative servers and forwarders, consumes half the bandwidth, and reduces the impact on caches at remote servers by several orders of magnitude. Motivated by recent interest in the literature and our need to find remote DNS nameservers, we next address Internet-wide service discovery by developing IRLscanner, whose main design objectives have been to maximize politeness at remote networks, allow scanning rates that achieve coverage of the Internet in minutes/hours (rather than weeks/months), and significantly reduce administrator complaints. Using IRLscanner and 24-hour scan durations, we perform 20 Internet-wide experiments using 6 different protocols (i.e., DNS, HTTP, SMTP, EPMAP, ICMP and UDP ECHO). We analyze the feedback generated and suggest novel approaches for reducing the amount of blowback during similar studies, which should enable researchers to collect valuable experimental data in the future with significantly fewer hurdles. 
We finally turn our attention to Intrusion Detection Systems (IDS), which are often tasked with detecting scans and preventing them; however, it is currently unknown how likely an IDS is to detect a given Internet-wide scan pattern and whether there exist sufficiently fast stealth techniques that can remain virtually undetectable at large-scale. To address these questions, we propose a novel model for the window-expiration rules of popular IDS tools (i.e., Snort and Bro), derive the probability that existing scan patterns (i.e., uniform and sequential) are detected by each of these tools, and prove the existence of stealth-optimal patterns.
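A toy version of the window-expiration behavior modeled above: a Snort-like threshold rule that alerts when one source touches enough distinct ports inside a sliding time window, which a sufficiently slow scan evades because its events expire before the threshold is reached. The thresholds and addresses are illustrative, not the actual Snort/Bro defaults.

```python
from collections import deque, defaultdict

class WindowScanDetector:
    """Sketch of a window-expiration threshold rule: raise an alert when
    a source contacts at least `threshold` distinct ports within a
    sliding window of `window` seconds. Parameters are illustrative."""

    def __init__(self, threshold=5, window=10.0):
        self.threshold = threshold
        self.window = window
        self.events = defaultdict(deque)   # src -> deque of (time, port)

    def observe(self, src, port, t):
        q = self.events[src]
        q.append((t, port))
        # Expire events that have fallen out of the sliding window
        while q and t - q[0][0] > self.window:
            q.popleft()
        distinct_ports = {p for _, p in q}
        return len(distinct_ports) >= self.threshold   # True -> alert

det = WindowScanDetector(threshold=5, window=10.0)
# Fast sequential scan: 5 ports in 4 seconds -> alert on the 5th probe
fast = [det.observe("10.0.0.9", 1000 + i, float(i)) for i in range(5)]

det2 = WindowScanDetector(threshold=5, window=10.0)
# Slow stealth scan: one port every 11 seconds -> events expire, no alert
slow = [det2.observe("10.0.0.9", 2000 + i, 11.0 * i) for i in range(5)]
```

The gap between these two cases is exactly the trade-off the dissertation formalizes: how fast a scan can proceed while staying under every window's threshold.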
20

Network Forensics and Log Files Analysis : A Novel Approach to Building a Digital Evidence Bag and Its Own Processing Tool

Qaisi, Ahmed Abdulrheem Jerribi January 2011 (has links)
Intrusion Detection Systems (IDS) tools are deployed within networks to monitor data that is transmitted to particular destinations such as MySQL or Oracle databases or log files. The data is normally dumped to these destinations without a forensic standard structure. When digital evidence is needed, forensic specialists are required to analyse a very large volume of data. Even though forensic tools can be utilised, most of this process has to be done manually, consuming time and resources. In this research, we aim to address this issue by combining several existing tools to archive the original IDS data into a new container (Digital Evidence Bag) that has a structure based upon standard forensic processes. The aim is to develop a method to improve the current IDS database function in a forensic manner. This database will be optimised for future forensic analysis. Since evidence validity is always an issue, a secondary aim of this research is to develop a new monitoring scheme. This is to provide the necessary evidence to prove that an attacker had surveyed the network prior to the attack. To achieve this, we will set up a network that will be monitored by multiple IDSs. Open source tools will be used to carry out input validation attacks against the network, including SQL injection. We will design a new tool to obtain the original data in order to store it within the proposed DEB. This tool will collect the data from several databases of the different IDSs. We will assume that the IDS will not have been compromised.
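The Digital Evidence Bag idea can be loosely sketched as a payload wrapped with provenance metadata and a cryptographic digest, so that later analysis can verify the archived IDS data has not been altered. The field names below are hypothetical illustrations, not the DEB structure developed in this thesis.

```python
import hashlib
import json
import time

def build_evidence_bag(source_name, records):
    """Sketch of a Digital Evidence Bag: wraps raw IDS records with
    provenance metadata and a SHA-256 digest so later forensic analysis
    can verify the payload is unaltered. Field names are illustrative."""
    payload = json.dumps(records, sort_keys=True).encode()
    return {
        "tag": {   # provenance metadata about the acquisition
            "source": source_name,
            "acquired_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        },
        "index": {"record_count": len(records)},
        "payload": payload.decode(),
        "sha256": hashlib.sha256(payload).hexdigest(),
    }

def verify_bag(bag):
    """Recompute the digest and compare; False means tampering."""
    digest = hashlib.sha256(bag["payload"].encode()).hexdigest()
    return digest == bag["sha256"]

# Hypothetical alerts collected from an IDS database
alerts = [{"sig": "SQL injection", "src": "10.0.0.5"}]
bag = build_evidence_bag("snort-db", alerts)
```

A real DEB would also chain digests across acquisitions and record the custody trail, but the integrity check above is the essential property that distinguishes a forensic container from a plain database dump.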
