  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
341

Knowledge based anomaly detection

Prayote, Akara, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
Traffic anomaly detection is a standard task for network administrators, who with experience can generally differentiate anomalous traffic from normal traffic. Many approaches have been proposed to automate this task. Most of them attempt to develop a sufficiently sophisticated model to represent the full range of normal traffic behaviour. There are significant disadvantages to this approach. Firstly, a large amount of training data covering all acceptable traffic patterns is required to train the model. For example, it can be perfectly obvious to an administrator how traffic changes on public holidays, but very difficult, if not impossible, for a general model to learn to cover such irregular or ad-hoc situations. In contrast, in the proposed method a number of models are gradually created while in use, each covering a specific region of the problem space, so that novel or ad-hoc patterns can be covered easily. The underlying technique is a knowledge acquisition approach named Ripple Down Rules (RDR). In essence, we use Ripple Down Rules to partition the domain and add new partitions as new situations are identified. Within each supposedly homogeneous partition we use fairly simple statistical techniques to identify anomalous data. The special feature of these statistics is that they are reasonably robust with small amounts of data, which is the critical situation whenever a new partition is added. We have developed a two-knowledge-base approach: one knowledge base partitions the domain, and within each partition statistics are accumulated on a number of different parameters; the resultant data are passed to a second knowledge base, which decides whether enough parameters are anomalous to raise an alarm. We evaluated the approach on real network data. The results compare favourably with other techniques, with the advantage that the RDR approach allows new patterns of use to be rapidly added to the model. We also used the approach to extend previous work on prudent expert systems - expert systems that warn when a case is outside their range of experience. Of particular significance, we were able to reduce the false positive rate to about 5%.
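The two-knowledge-base structure can be sketched as follows. This is an illustrative reconstruction, not the thesis code: the partitioning rules, parameter names, and thresholds are hypothetical, and the real system builds its partitioner incrementally with Ripple Down Rules while the per-partition statistics are designed to stay usable with very few samples.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical partitioning rules standing in for the first (RDR) knowledge base:
# route each observation to a partition based on its context.
def partition_of(obs):
    if obs["is_holiday"]:
        return "holiday"
    return "weekday" if obs["weekday"] < 5 else "weekend"

class PartitionStats:
    """Per-partition, per-parameter running statistics (deliberately simple,
    so they remain usable when a partition has only a few observations)."""
    def __init__(self):
        self.values = defaultdict(list)

    def update(self, obs):
        for name in ("packets", "bytes", "flows"):   # hypothetical traffic parameters
            self.values[name].append(obs[name])

    def anomalous_parameters(self, obs, k=3.0, min_n=5):
        flagged = []
        for name, history in self.values.items():
            if len(history) < min_n:
                continue                              # too little data: stay silent
            m, s = mean(history), stdev(history)
            if s > 0 and abs(obs[name] - m) > k * s:
                flagged.append(name)
        return flagged

# Second knowledge base: decide whether enough parameters are anomalous.
def raise_alarm(flagged, threshold=2):
    return len(flagged) >= threshold

stats = defaultdict(PartitionStats)

def process(obs):
    p = partition_of(obs)                             # knowledge base 1: partition
    flagged = stats[p].anomalous_parameters(obs)
    stats[p].update(obs)
    return raise_alarm(flagged)                       # knowledge base 2: alarm decision
```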
343

Kvinnors upplevda otrygghet i Örebro kommun (Women's perceived lack of safety in Örebro Municipality)

Eklund, Linda January 2010 (has links)
No description available.
344

Towards Measurable and Tunable Security

Lundin, Reine January 2007 (has links)
Many security services today provide only one security configuration at run-time and therefore cannot exploit the trade-off between performance and security. To make use of this trade-off, tunable security services are needed that offer several security configurations selectable at run-time. To make intelligent choices about which security configuration to use in different situations, we need to know how good they are, i.e., we need to order the different security configurations with respect to each security attribute, using measures for both security and performance. A key issue, however, is that computer security, due to its complex nature, is hard to measure.

As the title of this thesis indicates, it discusses both security measures and tunable security services, and can thus be seen to consist of two parts. The first part, on security measures for tunable security services, investigates the security implications of selective encryption by using guesswork as a security measure. Building on this, the relationship between guesswork and entropy is investigated; the result shows that guesswork, after a minor redefinition, is equal to the sum of the entropy and the relative entropy.

The second part contributes to the area of tunable security services, i.e., services that provide several security configurations at run-time. In particular, we present the mobile Crowds (mCrowds) system, an anonymity technology for the mobile Internet developed at Karlstad University, and a tunable encryption service that is based on a selective encryption paradigm and designed as middleware. Finally, the tunable features provided by Mix-Nets and Crowds are investigated using a conceptual model for tunable security services.
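For reference, the quantities involved can be written out using their standard definitions (the precise redefinition of guesswork used in the thesis, and the distributions it relates, are not reproduced here). With the candidate probabilities sorted in decreasing order, guesswork is the expected number of guesses an optimal sequential attacker needs, while entropy and relative entropy are the usual Shannon quantities:

```latex
G(X) = \sum_{i} i\, p_i , \qquad p_1 \ge p_2 \ge \dots \qquad
H(X) = -\sum_{i} p_i \log_2 p_i , \qquad
D(p \,\|\, q) = \sum_{i} p_i \log_2 \frac{p_i}{q_i}
```

The thesis's result is then a statement of the form "redefined guesswork = entropy + relative entropy"; the exact redefinition and the reference distribution involved are specified in the thesis itself.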
345

A Methodology for Detecting and Classifying Rootkit Exploits

Levine, John G. (John Glenn) 18 March 2004 (has links)
164 pages. Directed by Dr. Henry L. Owen. We propose a methodology to detect and classify rootkit exploits. The goal of this research is to provide system administrators, researchers, and security personnel with the information necessary to take the best possible recovery actions concerning systems that are compromised by rootkits; no such methodology is available at present to perform this function. It may also help to detect and fingerprint additional instances and prevent further security incidents involving rootkits. A formal framework was developed to classify a rootkit exploit as an existing rootkit, a modification to an existing rootkit, or an entirely new rootkit. A methodology was then described for applying this framework to rootkits under investigation. We then proposed some new methods to detect and characterize specific types of rootkit exploits. These methods consist of identifying unique string signatures in binary executable files and examining the system call table within the system kernel. We established a Honeynet to aid our research efforts and then applied our methodology to a previously unseen rootkit that was targeted against the Honeynet. Using our methodology we were able to uniquely characterize this rootkit and identify some unique signatures that could be used to detect it. We applied our methodology against nine additional rootkit exploits and were able to identify unique characteristics for each of them. These characteristics could also be used in the prevention and detection of these rootkits.
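A minimal sketch of the string-signature side of such a methodology is shown below. It is an illustration only: the signature database, file names, and classification rules are hypothetical, and the thesis's second technique, inspection of the kernel system call table, is not reproduced here. The sketch extracts printable strings from a binary and matches them against signatures of known rootkits.

```python
import re
from pathlib import Path

# Hypothetical signature database: strings previously extracted from known rootkits.
KNOWN_SIGNATURES = {
    "examplekit": {b"/dev/.hidden", b"backdoor_sh"},
    "otherkit": {b"kload_hide", b"/tmp/.kit.conf"},
}

def printable_strings(data: bytes, min_len: int = 6):
    """Return printable ASCII runs of at least min_len bytes (like strings(1))."""
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)

def classify_binary(path: str):
    """Match the binary's strings against known rootkit signatures.
    Full overlap suggests an existing rootkit, partial overlap a modification,
    and no overlap a potentially new rootkit (or a clean binary)."""
    strings = set(printable_strings(Path(path).read_bytes()))
    report = {}
    for name, sigs in KNOWN_SIGNATURES.items():
        hits = sigs & strings
        if hits:
            report[name] = ("existing" if hits == sigs else "modified", sorted(hits))
    return report or {"unknown": ("new or clean", [])}

# Example (hypothetical path): classify_binary("/usr/sbin/suspect_ls")
```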
346

A statistical process control approach for network intrusion detection

Park, Yongro 13 January 2005 (has links)
Intrusion detection systems (IDS) have a vital role in protecting computer networks and information systems. In this thesis we applied a statistical process control (SPC) monitoring concept to a certain type of traffic data in order to detect a network intrusion. We developed a general SPC intrusion detection approach and described it along with the source and preparation of the data used in this thesis. We extracted sample data sets that represent various situations, calculated event intensities for each situation, and stored these sample data sets in a data repository for use in future research. A regular batch mean chart was used to remove the sample data's inherent 60-second cycles. However, this proved too slow in detecting a signal because the regular batch mean chart only monitors the statistic at the end of each batch. To obtain faster detection, a modified batch mean (MBM) chart was developed. Subsequently, we developed the Modified Batch Mean Shewhart chart, the Modified Batch Mean CUSUM chart, and the Modified Batch Mean EWMA chart and analyzed the performance of each on simulated data. The simulation studies showed that the MBM charts perform especially well with large signals - the type of signal typically associated with a DoS intrusion. The MBM charts can be applied in two ways: using actual control limits or using robust control limits. The actual control limits must be determined by simulation, whereas the robust control limits require nothing more than the use of the recommended limits. The robust MBM Shewhart chart was developed by choosing appropriate values based on batch size; the robust MBM CUSUM and robust MBM EWMA charts were developed by choosing appropriate values of the charting parameters.
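The batch-mean Shewhart idea underlying these charts can be sketched as follows. This is illustrative only: the thesis's modified chart evaluates the statistic within the batch rather than only at the batch boundary, and its control-limit constants come from simulation, so the batch size, limits, and simulated traffic values below are placeholders.

```python
import numpy as np

def batch_mean_shewhart(x, batch_size=60, mu0=None, sigma0=None, k=3.0):
    """Shewhart chart on batch means: signal when a batch mean falls outside
    mu0 +/- k * sigma0 / sqrt(batch_size). mu0 and sigma0 are the in-control
    mean and standard deviation (estimated from x if not supplied)."""
    x = np.asarray(x, dtype=float)
    mu0 = x.mean() if mu0 is None else mu0
    sigma0 = x.std(ddof=1) if sigma0 is None else sigma0
    half_width = k * sigma0 / np.sqrt(batch_size)

    alarms = []
    for b in range(len(x) // batch_size):
        batch = x[b * batch_size:(b + 1) * batch_size]
        # A "modified" variant would update this mean observation by observation
        # inside the batch and test it against suitably widened limits,
        # giving faster detection of large shifts.
        if abs(batch.mean() - mu0) > half_width:
            alarms.append(b)
    return alarms

# Example with simulated event intensities (values are invented):
# rng = np.random.default_rng(0)
# traffic = np.concatenate([rng.normal(100, 10, 3600), rng.normal(160, 10, 600)])
# print(batch_mean_shewhart(traffic, batch_size=60, mu0=100, sigma0=10))
```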
347

Developing a Risk Management System for Information Systems Security Incidents

Farahmand, Fariborz 22 November 2004 (has links)
The Internet and information systems have enabled businesses to reduce costs, attain greater market reach, and develop closer business partnerships along with improved customer relationships. However, using the Internet has led to new risks and concerns. This research provides a management perspective on the issues confronting CIOs and IT managers: it outlines the current state of the art of information security, the important issues confronting managers, security enforcement measures and techniques, and potential threats and attacks. It develops a model for the classification of threats and control measures, and a scheme for probabilistic evaluation of the impact of security threats, with some illustrative examples. This involves valuing information assets, estimating the probabilities of success of attacks on those assets in organizations, and evaluating the expected damages of these attacks. The research outlines some suggested control measures, presents cost models for quantifying damages from these attacks, and compares the tangible and intangible costs of these attacks. It also develops a risk management system for information systems security incidents in five stages: (1) resource and application value analysis; (2) vulnerability and risk analysis; (3) computation of losses due to threats and benefits of control measures; (4) selection of control measures; and (5) implementation of alternatives. The outcome of this research should help decision makers select the appropriate control measure(s) to minimize damage or loss due to security incidents. Finally, some recommendations for future work are provided to improve the management of security in organizations.
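In its simplest form, the probabilistic evaluation reduces to an expected-loss calculation of the kind sketched below. This is a hedged illustration: the asset values, success probabilities, and control costs are invented, and the thesis's actual cost models are richer than a single expected-value comparison.

```python
# Hypothetical assets: (value, probability that an attack on the asset succeeds per year)
assets = {
    "customer_db": (500_000, 0.10),
    "web_server": (120_000, 0.25),
    "mail_server": (60_000, 0.30),
}

# Hypothetical controls: annual cost and the fraction by which each reduces
# the attack success probability for a given asset.
controls = {
    "encryption": {"cost": 20_000, "asset": "customer_db", "risk_reduction": 0.7},
    "waf": {"cost": 15_000, "asset": "web_server", "risk_reduction": 0.5},
    "spam_filter": {"cost": 5_000, "asset": "mail_server", "risk_reduction": 0.4},
}

def expected_loss(value, p_success):
    """Expected annual damage for one asset."""
    return value * p_success

def net_benefit(control):
    """Benefit of a control = avoided expected loss minus its cost."""
    value, p = assets[control["asset"]]
    avoided = expected_loss(value, p) - expected_loss(value, p * (1 - control["risk_reduction"]))
    return avoided - control["cost"]

# Select the controls whose avoided loss exceeds their cost.
selected = {name: round(net_benefit(c)) for name, c in controls.items() if net_benefit(c) > 0}
print(selected)
```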
348

Secure Geolocation for Wireless Indoor Networks

Lim, Yu-Xi 12 April 2006 (has links)
The objective of this research is to develop an accurate system for indoor location estimation, using a secure architecture based on the IEEE 802.11 standard for infrastructure networks. Elements of this secure architecture include a server-oriented platform for greater trust and manageability; multiple wireless network parameters for improved accuracy; and Support Vector Regression (SVR) for accurate, high-resolution estimates. While these elements have been investigated individually in earlier research, none has combined them into a single security-oriented system. This research therefore investigates the feasibility of using these elements together.
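A minimal sketch of the SVR fingerprinting step could look like the following. It is not the thesis implementation: the access points, training data, and kernel parameters are assumptions. It regresses a 2-D position from received signal strengths observed at several access points.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# Training fingerprints: rows of RSSI readings (dBm) from three access points,
# with the known (x, y) position where each reading was taken. Values are invented.
rssi = np.array([[-40, -65, -70],
                 [-55, -50, -72],
                 [-62, -58, -48],
                 [-45, -70, -60],
                 [-58, -47, -66]])
positions = np.array([[1.0, 2.0],
                      [3.5, 2.5],
                      [6.0, 5.0],
                      [1.5, 4.0],
                      [4.0, 1.0]])   # metres

# One SVR per output coordinate, wrapped by MultiOutputRegressor.
model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(rssi, positions)

# Estimate the position for a new RSSI observation.
print(model.predict(np.array([[-50, -60, -65]])))
```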
349

Scalable and efficient distributed algorithms for defending against malicious Internet activity

Sung, Minho 31 July 2006 (has links)
The threat of malicious Internet activities such as Distributed Denial of Service (DDoS) attacks, spam emails, and Internet worms/viruses has been increasing in the last several years. The impact and frequency of these malicious activities are expected to grow unless they are properly addressed. In this thesis, we propose to design and evaluate a set of practical and effective protection measures against potential malicious activities in current and future networks. Our research objective is twofold. First, we design methods to defend against DDoS attacks. Our research focuses on two important issues related to DDoS defense mechanisms. One issue is how to trace the sources of attacking packets, known as IP traceback. We propose a novel packet-logging-based (i.e., hash-based) traceback scheme that uses only a one-bit marking field in the IP header. It reduces processing and storage cost by an order of magnitude compared with existing hash-based schemes, and is therefore scalable to much higher link speeds (e.g., OC-768). Next, we propose an improved traceback scheme with lower storage overhead that uses more marking space in the IP header. The other issue in DDoS defense is to investigate protocol-independent techniques for improving the throughput of legitimate traffic during DDoS attacks. We propose a novel technique that can effectively filter out the majority of DDoS traffic, thus improving the overall throughput of the legitimate traffic. Second, we investigate the problem of distributed network monitoring. We propose a set of novel distributed data streaming algorithms that allow scalable and efficient monitoring of aggregated traffic. Our algorithms target the specific network monitoring problem of finding common content in traffic traversing several nodes/links across the Internet. These algorithms find applications in network-wide intrusion detection, early warning for fast-propagating worms, and detection of hot objects and spam traffic.
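Hash-based (packet-logging) traceback relies on routers storing compact digests of forwarded packets so that a victim can later ask each router whether it saw a given attack packet. The sketch below illustrates only that digesting idea with a small Bloom filter; the one-bit IP-header marking and the storage reductions described in the thesis are not modeled, and the hash choices and sizes are arbitrary.

```python
import hashlib

class PacketDigestFilter:
    """Toy Bloom filter of packet digests kept at a router."""
    def __init__(self, n_bits=1 << 16, n_hashes=3):
        self.n_bits = n_bits
        self.n_hashes = n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, packet: bytes):
        for i in range(self.n_hashes):
            digest = hashlib.sha256(bytes([i]) + packet).digest()
            yield int.from_bytes(digest[:4], "big") % self.n_bits

    def log(self, packet: bytes):
        for pos in self._positions(packet):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def maybe_seen(self, packet: bytes) -> bool:
        # False positives are possible; false negatives are not.
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(packet))

router = PacketDigestFilter()
router.log(b"attack packet payload + invariant header fields")
print(router.maybe_seen(b"attack packet payload + invariant header fields"))  # True
print(router.maybe_seen(b"some other packet"))                                # almost surely False
```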
350

Multi-Gigahertz Encrypted Communication Using Electro-Optical Chaos Cryptography

Gastaud Gallagher, Nicolas Hugh René 16 October 2007 (has links)
Chaotic dynamics are at the center of multiple studies to perfect encrypted communication systems. Indeed, the particular time-evolution nature of chaotic signals forms the foundation of their application to secure telecommunications. The pseudo-random signal constitutes the carrier wave for the communication. The information coded onto the carrier wave can be extracted with knowledge of the system's dynamical evolution law. This evolution law is a second-order delay differential equation in which the various parameters of the physical system setup appear. The set of precise parameter values forms the key, in a cryptographic sense, of the encrypted transmission. This thesis work presents the implementation of an experimental encryption system using chaos. The optical intensity of the emitter fluctuates chaotically and serves as the carrier wave. A message of small amplitude, hidden inside the fluctuations of the carrier wave, is extracted from the transmitted signal by a properly tuned receiver. The influence of the message modulation format on the communication quality, both in the back-to-back case and after propagation, is investigated numerically.
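For orientation, electro-optic intensity chaos generators of this kind are commonly modeled by a delay integro-differential equation (equivalent to a second-order delay differential equation) of roughly the following generic form; this is a textbook form, not the specific equation or parameter values of the thesis:

```latex
\tau \frac{dx(t)}{dt} + x(t) + \frac{1}{\theta} \int_{t_0}^{t} x(s)\, ds
  = \beta \cos^{2}\!\bigl( x(t - T) + \varphi \bigr)
```

Here $\tau$ and $\theta$ are the fast and slow response times of the filtering stages, $T$ the feedback delay, $\beta$ the nonlinearity gain, and $\varphi$ the operating-point phase; parameters of this kind play the role of the cryptographic key described above.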
