121

Scalable framework for turn-key honeynet deployment

Brzeczko, Albert Walter 22 May 2014 (has links)
Enterprise networks present high-value targets in the eyes of malicious actors who seek to exfiltrate sensitive proprietary data, disrupt the operations of a particular organization, or leverage considerable computational and network resources to further their own illicit goals. For this reason, enterprise networks typically attract the most determined attackers, who favor the most novel and difficult-to-detect approaches so that they have a high probability of success and can continue operating undetected. Many existing network security approaches that fall under the category of intrusion detection systems (IDS) and intrusion prevention systems (IPS) are able to detect classes of attacks that are well known. While these approaches are effective for filtering out routine attacks in automated fashion, they are ill-suited to detecting the novel tactics and zero-day exploits that are increasingly used against the enterprise. In this thesis, a solution is presented that augments existing security measures to provide enhanced coverage of novel attacks in conjunction with what is already provided by traditional IDS and IPS. The approach enables honeypots, a class of techniques that observe novel attacks by luring an attacker to perform malicious activity on a system having no production value, to be deployed in a turn-key fashion and at large scale on enterprise networks. In spite of the honeypot's efficacy against targeted attacks, organizations can seldom afford to devote capital and IT manpower to integrating honeypots into their security posture. Furthermore, misconfigured honeypots can actually weaken an organization's security posture by giving the attacker a staging ground on which to perform further attacks. A turn-key approach is needed for organizations to use honeypots to trap, observe, and mitigate novel targeted attacks.
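The abstract does not reproduce the framework itself; as a rough sketch of the kind of low-interaction listener such a turn-key deployment would multiply across a network, consider the following. The decoy port, log path, and behavior are illustrative assumptions, not details from the thesis.

```python
# Minimal low-interaction honeypot sketch (illustrative only). It listens
# on an unused port, accepts connections, and logs every attempt -- any
# traffic it sees is suspect by definition, since the host has no
# production value.
import socket
from datetime import datetime, timezone

HONEYPOT_PORT = 2222       # assumed decoy port, e.g. masquerading as SSH
LOG_PATH = "honeypot.log"  # illustrative log location

def run_listener(port: int = HONEYPOT_PORT) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, (ip, src_port) = srv.accept()
        stamp = datetime.now(timezone.utc).isoformat()
        with open(LOG_PATH, "a") as log:
            log.write(f"{stamp} connection from {ip}:{src_port}\n")
        conn.close()  # low interaction: record the contact, never expose a shell

if __name__ == "__main__":
    run_listener()
```

A misconfigured variant that exposed a real shell here would illustrate the staging-ground risk the abstract warns about, which is why turn-key, centrally managed configuration matters.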
122

Shilling attack detection in recommender systems.

Bhebe, Wilander. January 2015 (has links)
M. Tech. Information Networks / The growth of the internet has made it easy for people to exchange information, resulting in the abundance of information commonly referred to as information overload. It causes retailers to fail to make adequate sales, since customers are swamped with options and choices. To lessen this problem, retailers have begun to find it useful to use algorithmic approaches to determine which content to show consumers. These algorithmic approaches are known as recommender systems. Collaborative Filtering recommender systems suggest items to users based on other users' reported prior experience with those items. These systems are, however, vulnerable to shilling attacks, since they are highly dependent on outside sources of information. Shilling is a process in which colluding users connive to promote or demote a certain item, with malicious users benefiting from the biased ratings they introduce. It is therefore critical that shilling detection systems are implemented to detect, warn, and shut down shilling attacks within minutes. Modern patented shilling detection systems employ: (a) classification methods, (b) statistical methods, and (c) rules and threshold values defined by shilling detection analysts, using their knowledge of valid shilling cases and the false alarm rate as guidance. The goal of this dissertation is to determine a context for, and assess the performance of, Meta-Learning techniques that can be integrated into the shilling detection process.
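One classical statistical indicator from the shilling-detection literature is Rating Deviation from Mean Agreement (RDMA). The sketch below is a minimal illustration of it on toy data; it is not the dissertation's Meta-Learning pipeline, and the ratings and threshold interpretation are invented for the example.

```python
# Sketch of one statistical shilling indicator: Rating Deviation from
# Mean Agreement (RDMA). Users whose ratings deviate strongly from the
# per-item mean -- weighted toward sparsely rated items -- score high
# and become candidate shillers.
from collections import defaultdict

ratings = {  # user -> {item: rating}; toy data for illustration
    "alice": {"i1": 4, "i2": 5, "i3": 3},
    "bob":   {"i1": 5, "i2": 4},
    "shill": {"i1": 1, "i2": 1, "i3": 5},  # pushes i3, nukes the rest
}

def rdma(user: str) -> float:
    # per-item rating sums and counts across all users
    totals, counts = defaultdict(float), defaultdict(int)
    for profile in ratings.values():
        for item, r in profile.items():
            totals[item] += r
            counts[item] += 1
    profile = ratings[user]
    dev = sum(abs(r - totals[i] / counts[i]) / counts[i]
              for i, r in profile.items())
    return dev / len(profile)

for u in ratings:
    print(u, round(rdma(u), 3))  # higher RDMA => more suspicious profile
```

A Meta-Learning detector of the kind the dissertation studies would combine several such base indicators rather than thresholding any single one.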
123

Dynamic Game-Theoretic Models to Determine the Value of Intrusion Detection Systems in the Face of Uncertainty

Moured, David Paul 27 January 2015 (has links)
Firms lose millions of dollars every year to cyber-attacks, and the risk to these companies is growing exponentially. The threat to monetary and intellectual property has made Information Technology (IT) security management a critical challenge for firms. Security devices, including Intrusion Detection Systems (IDS), are commonly used to help protect these firms from malicious users by identifying the presence of malicious network traffic. However, the actual value of these devices remains uncertain among the IT security community because of the costs associated with implementing the different monitoring strategies that determine when to inspect potentially malicious traffic, and the costs associated with false-positive and false-negative errors. Game-theoretic models, in which firms and users are modeled as players, have proven effective for determining the value of these devices under several conditions. However, these models assume that both the firm and the attacker have complete information about their opponent, and they cannot account for more realistic situations in which players have incomplete information regarding their opponent's payoffs. The proposed research develops an enhanced model that can be used for strategic decision making in IT security management when the firm is uncertain about the user's utility of intrusion. By using Harsanyi Transformation Analysis, the model provides the IT security research community with valuable insight into the value of IDS when the firm is uncertain of the incentives and payoffs available to users choosing to hack. Specifically, this dissertation considers two possible types of users with different utilities of intrusion to gain further insight into the players' strategies. The firm's optimal strategy is to start the game with the expected value of the user's utility as an estimate. Under this strategy, the firm can determine the user's utility with certainty within one iteration of the game. After the first iteration, the game may be analyzed as a game of perfect information.
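A minimal sketch of the Harsanyi-style computation follows: nature draws the user's type from a prior, and the firm compares the expected payoffs of inspecting versus not inspecting. All payoff numbers, attack probabilities, and the type prior here are invented for illustration and are not figures from the dissertation.

```python
# Illustrative Harsanyi-style expected-payoff computation for the firm's
# inspect decision when the user's utility of intrusion is private.
P_HIGH = 0.3  # prior probability the user is a high-utility (determined) type

# P(user attacks | type): a high-utility type attacks more often.
attack_prob = {"high": 0.9, "low": 0.2}

# Firm payoffs (assumed): inspection cost, loss from a missed attack,
# false-positive cost when inspecting benign traffic.
COST_INSPECT, LOSS_MISS, COST_FALSE_POS = -10, -100, -5

def expected_payoff(inspect: bool) -> float:
    total = 0.0
    for utype, p_type in (("high", P_HIGH), ("low", 1 - P_HIGH)):
        p_atk = attack_prob[utype]
        if inspect:
            # pay the inspection cost, plus a false-positive cost on benign traffic
            payoff = COST_INSPECT + (1 - p_atk) * COST_FALSE_POS
        else:
            payoff = p_atk * LOSS_MISS  # missed attacks go unmitigated
        total += p_type * payoff
    return total

print("inspect:   ", expected_payoff(True))    # -12.95 with these numbers
print("no inspect:", expected_payoff(False))   # -41.0: inspection pays here
```

Observing the user's behavior after the first round would let the firm replace the prior with the revealed type, which mirrors the abstract's claim that the game then becomes one of perfect information.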
124

Enhanced Deployment Strategy for Role-based Hierarchical Application Agents in Wireless Sensor Networks with Established Clusterheads

Gendreau, Audrey A. 01 January 2014 (has links)
Efficient self-organizing virtual clusterheads that supervise data collection based on their wireless connectivity, risk, and overhead costs are an important element of Wireless Sensor Networks (WSNs). This function is especially critical during deployment, when system resources are allocated to a subsequent application. In the presented research, a model used in the literature to deploy intrusion detection capability on a Local Area Network (LAN) was extended to develop a role-based hierarchical agent deployment algorithm for a WSN. The resulting model took into consideration the monitoring capability, risk, deployment distribution cost, and monitoring cost associated with each node. Changing the original LAN methodology to model a cluster-based sensor network depended on the ability to duplicate a specific parameter that represented the monitoring capability. Furthermore, other parameters derived from a LAN can elevate the costs and risk of deployment, as well as jeopardize the success of an application on a WSN. A key component of the approach presented in this research was to reduce costs when established clusterheads in the network were found to be capable of hosting additional detection agents. In addition, another cost-savings component of the study addressed the reduction of vulnerabilities associated with deploying agents to high-volume nodes. The effectiveness of the presented method was validated by comparing it against a power-based scheme that used each node's remaining energy as the deployment value. While available energy is directly related to the model used in the presented method, the study deliberately sought out nodes identified as having superior monitoring capability, costing less to create and sustain, and being at low risk of attack. This work investigated improving the efficiency of an intrusion detection system (IDS) by using the proposed model to deploy monitoring agents after a temperature sensing application had established the network traffic flow to the sink. The same scenario was repeated using a power-based IDS to compare it against the proposed model. To identify a clusterhead's ability to host monitoring agents after the temperature sensing application terminated, the deployed IDS utilized the communication history and other network factors to rank the nodes. Similarly, using the nodes' communication history, the deployed power-based IDS ranked nodes based on their remaining power. For each individual scenario, after the IDS application was deployed, the temperature sensing application was run a second time. This time, to monitor the temperature sensing agents as the data flowed towards the sink, the network traffic was rerouted through the new intrusion detection clusterheads. If the clusterheads were shared, the re-routing step was not performed. Experimental results in this research demonstrated the effectiveness of applying a robust deployment metric to improve upon the energy efficiency of a deployed application in a multi-application WSN. It was found that the scenarios in which the intrusion detection application utilized the proposed model retained more energy than the scenarios that implemented the power-based IDS. The algorithm had an especially positive impact on small, dense, and more homogeneous networks. This finding was reinforced by the smaller percentage of new clusterheads selected.
Essentially, the energy cost of the route to the sink was reduced because the network traffic was rerouted through fewer new clusterheads. Additionally, it was found that the intrusion detection topology that used the proposed approach formed smaller and more connected sets of clusterheads than the power-based IDS. As a consequence, the proposed approach achieved the research objective of enhancing energy use in a multi-application WSN.
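A sketch of the flavor of ranking described above: each node is scored from its monitoring capability, risk, and cost, and established clusterheads are preferred so that agents ride on infrastructure the sensing application has already paid for. The field names, weights, and clusterhead bonus are assumptions for illustration, not the dissertation's actual metric.

```python
# Illustrative deployment-value ranking for IDS agent placement in a WSN.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    monitoring: float     # monitoring capability, normalized to [0, 1]
    risk: float           # attack risk (e.g. high-volume node), [0, 1]
    cost: float           # creation + sustainment cost, [0, 1]
    is_clusterhead: bool  # already established by the sensing application

def deployment_value(n: Node, w_mon=0.5, w_risk=0.3, w_cost=0.2) -> float:
    score = w_mon * n.monitoring - w_risk * n.risk - w_cost * n.cost
    if n.is_clusterhead:
        score += 0.25  # assumed bonus: reusing a clusterhead avoids re-routing
    return score

nodes = [Node(1, 0.9, 0.2, 0.3, True), Node(2, 0.8, 0.6, 0.2, False),
         Node(3, 0.4, 0.1, 0.1, True)]
for n in sorted(nodes, key=deployment_value, reverse=True):
    print(n.node_id, round(deployment_value(n), 3))
```

A purely power-based scheme, by contrast, would sort on remaining energy alone, which is the comparison baseline the study uses.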
125

An Anomaly Behavior Analysis Intrusion Detection System for Wireless Networks

Satam, Pratik January 2015 (has links)
Wireless networks have become ubiquitous, with a wide range of mobile devices connected to a larger network like the Internet via wireless communications. One widely used wireless communication standard is the IEEE 802.11 protocol, popularly called Wi-Fi. Over the years, 802.11 has been upgraded through different versions, but most of these upgrades have focused on improving the throughput of the protocol rather than enhancing its security, leaving the protocol vulnerable to attacks. The goal of this research is to develop and implement an intrusion detection system based on anomaly behavior analysis that can accurately detect attacks on Wi-Fi networks and track the location of the attacker. As part of this thesis we present two architectures for an anomaly-based intrusion detection system, one for single-access-point and one for distributed Wi-Fi networks. These architectures can detect attacks on Wi-Fi networks, classify the attacks, and track the location of the attacker once the attack has been detected. The system uses statistical and probability techniques associated with temporal wireless protocol transitions, which we refer to as Wireless Flows (Wflows). The Wflows are modeled and stored as a sequence of n-grams within a given period of analysis. We studied two approaches to tracking the location of the attacker. In the first approach, we use clustering to generate power maps that can be used to track the location of the user accessing the Wi-Fi network. In the second approach, we use classification algorithms to track the location of the user from a Central Controller Unit. Experimental results show that the attack detection and classification algorithms generate no false positives and no false negatives, even when the Wi-Fi network has high frame drop rates. The clustering approach to location tracking was highly accurate in static environments (81% accuracy), but its performance deteriorates rapidly with changes in the environment. The classification algorithm that tracks the user's location at the Central Controller/RADIUS server performed with lower accuracy than the clustering approach (76% accuracy), but its ability to track the location of the user deteriorated less rapidly with changes in the operating environment.
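A minimal sketch of the n-gram idea behind Wflows: 802.11 state transitions observed in an analysis window are cut into n-grams, and a window whose n-grams were rarely or never seen during attack-free training is flagged. The state names, choice of n, and scoring rule here are illustrative assumptions; the thesis's statistical machinery is richer.

```python
# Sketch: n-gram modeling of wireless protocol transition sequences.
from collections import Counter

def ngrams(seq, n=3):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

# training: a transition sequence captured from attack-free traffic
normal = ["probe", "auth", "assoc", "data", "data", "data", "deauth"]
model = Counter(ngrams(normal))

def anomaly_score(window, n=3):
    grams = ngrams(window, n)
    # fraction of observed n-grams never seen in the normal model
    unseen = sum(1 for g in grams if model[g] == 0)
    return unseen / max(len(grams), 1)

# deauth flood: repeated deauth frames never chain together in training
suspect = ["probe", "deauth", "deauth", "deauth", "deauth"]
print(anomaly_score(normal))   # 0.0  -> consistent with training
print(anomaly_score(suspect))  # 1.0  -> flag window for classification
```

In the full system a flagged window would then be handed to the attack-classification stage rather than raising an alert directly.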
126

An insider misuse threat detection and prediction language

Magklaras, Georgios Vasilios January 2012 (has links)
Numerous studies indicate that amongst the various types of security threats, the problem of insider misuse of IT systems can have serious consequences for the health of computing infrastructures. Although incidents of external origin are also dangerous, the insider IT misuse problem is difficult to address for a number of reasons. A fundamental reason that makes mitigation difficult relates to the level of trust legitimate users possess inside the organization. The trust factor makes it difficult to detect threats originating from the actions and credentials of individual users. An equally important difficulty in mitigating insider IT threats is the variability of the problem: the nature of insider IT misuse varies amongst organizations. Hence, expressing what constitutes a threat, as well as detecting and predicting it, are non-trivial tasks that add to the multifactorial nature of insider IT misuse. This thesis is concerned with systematizing the specification of insider threats, focusing on their system-level detection and prediction. Suitable user audit mechanisms are designed, and their semantics form a Domain Specific Language for detecting and predicting insider misuse incidents. As a result, the thesis proposes in detail ways to construct standardized descriptions (signatures) of insider threat incidents, as a means of helping researchers and IT system experts mitigate the problem of insider IT misuse. The produced audit engine (LUARM – Logging User Actions in Relational Mode) and the Insider Threat Prediction and Specification Language (ITPSL) are two utilities that can be added to the insider IT misuse mitigation arsenal. LUARM is a novel audit engine designed specifically to address the needs of monitoring insider actions, needs that cannot be met by traditional open source audit utilities. ITPSL is an XML-based markup that can standardize the description of incidents and threats and thus make use of the LUARM audit data. Its novelty lies in the fact that it can be used to detect as well as predict instances of threats, a task that no domain-specific language addressing threats had achieved to date. The research project evaluated the produced language using a cyber-misuse experiment approach derived from real-world misuse incident data. The results of the experiment showed that ITPSL and its associated audit engine LUARM provide a good foundation for insider threat specification and prediction. Some language deficiencies relate to the fact that the insider threat specification process requires good knowledge of the software applications used in a computer system. As the language is easily expandable, future developments to improve the language in this direction are suggested.
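ITPSL's concrete syntax is not reproduced in the abstract, so the sketch below only illustrates the general shape of an XML threat signature matched against audit records. The tag names, attributes, and the LUARM-like record format are all hypothetical, invented for the example.

```python
# Hypothetical XML threat signature evaluated against audit records;
# not actual ITPSL syntax.
import xml.etree.ElementTree as ET

SIGNATURE = """
<signature id="usb-exfiltration" severity="high">
  <event application="cp" target="/media/usb"/>
  <threshold count="3" window="60"/>
</signature>
"""

audit_log = [  # hypothetical LUARM-style records: (seconds, app, target)
    (10, "cp", "/media/usb"), (25, "cp", "/media/usb"),
    (40, "cp", "/media/usb"), (300, "vim", "/home/docs"),
]

def matches(signature_xml: str, log) -> bool:
    sig = ET.fromstring(signature_xml)
    ev, th = sig.find("event"), sig.find("threshold")
    count, window = int(th.get("count")), int(th.get("window"))
    hits = [t for t, app, target in log
            if app == ev.get("application")
            and target.startswith(ev.get("target"))]
    # fire if `count` matching events fall inside any `window`-second span
    return any(hits[i + count - 1] - hits[i] <= window
               for i in range(len(hits) - count + 1))

print(matches(SIGNATURE, audit_log))  # True: 3 copies to USB within 60 s
```

The prediction side of ITPSL would extend this pattern-matching idea with partially completed signatures, flagging threats before the final step occurs.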
127

Stream splitting in support of intrusion detection

Judd, John David 06 1900 (has links)
Approved for public release, distribution is unlimited / One of the most significant challenges with modern intrusion detection systems is the high rate of false alarms that they generate. In order to lower this rate, we propose to reduce the amount of traffic sent to a given intrusion detection system via a filtering process termed stream splitting. Each packet arriving at the system is treated as belonging to a connection. Each connection is then assigned to a network stream. A network stream can then be sent to an analysis engine tailored specifically for that type of data. To demonstrate a stream-splitting capability, both an extendable multi-threaded architecture and a prototype were developed. This system was tested to ensure its ability to capture traffic and was found to do so with minimal loss at network speeds up to 20 Mb/s, comparable to several open-source analysis programs. The stream splitter was also shown to correctly implement a traffic separation scheme. / Ensign, United States Navy
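A sketch of the splitting step described above: packets are keyed to a connection by the classic 5-tuple, connections are assigned to streams by a separation scheme, and each stream feeds its own analysis queue. The port-based scheme and queue names are illustrative assumptions, not the thesis's actual separation scheme.

```python
# Illustrative stream splitter: connection -> stream -> analysis queue.
import queue

analysis_engines = {  # stream -> queue consumed by a tailored analyzer
    "web": queue.Queue(), "dns": queue.Queue(), "other": queue.Queue(),
}

def stream_for(dst_port: int) -> str:
    # assumed separation scheme: split by destination port
    if dst_port in (80, 443):
        return "web"
    if dst_port == 53:
        return "dns"
    return "other"

def split(packet: dict) -> None:
    # connection key: the classic 5-tuple
    conn = (packet["src"], packet["sport"], packet["dst"],
            packet["dport"], packet["proto"])
    analysis_engines[stream_for(packet["dport"])].put((conn, packet))

split({"src": "10.0.0.5", "sport": 51000, "dst": "10.0.0.1",
       "dport": 443, "proto": "tcp"})
print(analysis_engines["web"].qsize())  # 1: routed to the web analyzer
```

Each queue would be drained by its own analysis thread in the multi-threaded architecture the thesis describes, which is what keeps per-engine traffic volume, and hence false alarms, down.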
128

The Human Analysis Element of Intrusion Detection: A Cognitive Task Model and Interface Design and Implications

Ellis, Brenda Lee 01 January 2009 (has links)
The use of monitoring and intrusion detection tools is common in today's network security architecture. The combination of tools generates an abundance of data, which can result in cognitive overload for those analyzing it. ID analysts initially review alerts generated by intrusion detection systems to determine the validity of the alerts. Since a large number of alerts are false positives, this analysis can greatly reduce the number of unnecessary and unproductive investigations; the problem remains that the process is resource intensive. To date, very little research has been done to clearly determine and document the process of intrusion detection. In order to rectify this problem, research was conducted in several phases. Fifteen individuals were selected to participate in a cognitive task analysis. The results of the cognitive task analysis were used to develop a prototype interface, which was tested by the participants. A test of the participants' knowledge after the use of the prototype revealed an increase in both effectiveness and efficiency in analyzing alerts. Specifically, the findings revealed an increase in effectiveness, as 72% of the participants made better determinations using the prototype interface. The results also showed an increase in efficiency, as 72% of the participants analyzed and validated alerts in less time while using the prototype interface. These findings, based on empirical data, showed that the use of the task diagram and prototype interface helped to reduce the amount of time it previously took to analyze alerts generated by intrusion detection systems.
129

Definition and assessment of a mechanism for the generation of environment-specific correlation rules

Godefroy, Erwan 30 September 2016 (has links)
In information systems, detection tools continuously produce a large number of alerts. Correlation tools reduce the number of alerts and synthesize, within meta-alerts, the information that matters to administrators. However, the complexity of correlation rules makes them difficult to write and maintain. This thesis therefore proposes a method for generating correlation rules semi-automatically from an attack scenario expressed in a high-level language. The method relies on the construction and use of a knowledge base that models the essential elements of the information system (for example, the nodes and the deployment of detection tools). The rule-generation process consists of several steps that progressively transform an attack tree into correlation rules. This work was evaluated in two ways. First, the method was applied to a use case involving a network representative of a small business system. Second, the influence of faults in the knowledge base on the generated correlation rules and on detection quality was measured.
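A sketch of the transformation idea: walking an attack tree and resolving its leaves against a knowledge base of deployed sensors to emit an ordered correlation rule. The tree shape, knowledge base contents, and output rule format are assumptions for illustration, not the thesis's actual pipeline or rule language.

```python
# Illustrative attack-tree-to-correlation-rule transformation.
attack_tree = ("AND", [
    ("leaf", "scan"),          # reconnaissance
    ("OR", [("leaf", "brute_force"), ("leaf", "exploit")]),
    ("leaf", "exfiltration"),
])

# knowledge base: which deployed sensor can observe each leaf action
kb = {"scan": "nids-1", "brute_force": "auth-log",
      "exploit": "nids-1", "exfiltration": "proxy-log"}

def to_rule(node) -> str:
    kind, body = node
    if kind == "leaf":
        # resolve the abstract action against the deployed sensors
        return f"alert(sensor={kb[body]}, event={body})"
    # AND nodes become an ordered sequence; OR nodes, an alternative
    joined = (" THEN " if kind == "AND" else " OR ").join(
        to_rule(child) for child in body)
    return f"({joined})"

print(to_rule(attack_tree))
```

A fault in the knowledge base, say `kb["exploit"]` pointing at a sensor that cannot actually see the exploit, would propagate into the generated rule, which is exactly the sensitivity the thesis's second evaluation measures.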
130

Comparison of systems to detect rogue access points

Lennartsson, Alexander, Melander, Hilda January 2019 (has links)
A hacker might use a rogue access point to gain access to a network; this poses a threat to the individuals connected to it. The hacker might have the potential to leak corporate data or steal private information. The detection of rogue access points is therefore important to prevent damage to both businesses and individuals. Comparing different software that detects rogue access points increases the chance of someone finding a solution that suits their network. The types of software compared are intrusion detection systems, wireless scanners, and a Cisco wireless LAN controller. The parameters compared are cost, compatibility, detection capability, and implementation difficulty. In order to obtain results, some of the parameters require testing; as there are three types of software, three experiment environments were set up. Our research indicates that already existing network equipment and the size of the network affect the results of the experiments.
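As a flavor of what the wireless-scanner category does, the sketch below sniffs beacon frames and flags any BSSID advertising the corporate SSID that is not on an authorized list. It assumes the scapy library and a wireless card in monitor mode; the SSID, whitelist, and interface name are illustrative, and this is a simplification of what the compared products do.

```python
# Rogue-AP beacon check sketch (requires scapy: pip install scapy,
# and a monitor-mode wireless interface).
from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

CORPORATE_SSID = "CorpNet"  # assumed network name
AUTHORIZED_BSSIDS = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}

def check_beacon(pkt):
    if not pkt.haslayer(Dot11Beacon):
        return
    ssid = pkt[Dot11Elt].info.decode(errors="ignore")  # first IE is the SSID
    bssid = pkt[Dot11].addr2                           # transmitter address
    if ssid == CORPORATE_SSID and bssid not in AUTHORIZED_BSSIDS:
        print(f"possible rogue AP: {bssid} advertising '{ssid}'")

# interface name is environment specific; "wlan0mon" is an assumption
sniff(iface="wlan0mon", prn=check_beacon, store=False)
```

Dedicated products such as a wireless LAN controller automate this kind of comparison continuously across many sensors, which is part of what the thesis's cost and implementation-difficulty parameters capture.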
