841

Knowledge based anomaly detection

Prayote, Akara, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2007
Traffic anomaly detection is a standard task for network administrators, who with experience can generally differentiate anomalous traffic from normal traffic. Many approaches have been proposed to automate this task. Most of them attempt to develop a sufficiently sophisticated model to represent the full range of normal traffic behaviour. There are significant disadvantages to this approach. Firstly, a large amount of training data for all acceptable traffic patterns is required to train the model. For example, it can be perfectly obvious to an administrator how traffic changes on public holidays, but very difficult, if not impossible, for a general model to learn to cover such irregular or ad hoc situations. In contrast, the proposed method gradually creates a number of models, while in use, to cover the variety of patterns seen. Each model covers a specific region of the problem space, so novel or ad hoc patterns can be covered easily. The underlying technique is a knowledge acquisition approach named Ripple Down Rules (RDR). In essence, we use Ripple Down Rules to partition the domain and add new partitions as new situations are identified. Within each supposedly homogeneous partition we use fairly simple statistical techniques to identify anomalous data. The special feature of these statistics is that they are reasonably robust with small amounts of data, which is the critical situation whenever a new partition is added. We have developed a two-knowledge-base approach: one knowledge base partitions the domain, and within each partition statistics are accumulated on a number of different parameters. The resultant data are passed to a second knowledge base, which decides whether enough parameters are anomalous to raise an alarm. We evaluated the approach on real network data. The results compare favourably with other techniques, but with the advantage that the RDR approach allows new patterns of use to be rapidly added to the model. We also used the approach to extend previous work on prudent expert systems - expert systems that warn when a case is outside their range of experience. Of particular significance, we were able to reduce the false-positive rate to about 5%.
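As an editorial sketch of the two-knowledge-base architecture this abstract describes (one rule base to partition the domain, per-partition running statistics, and a second rule base to decide when to alarm), the following Python toy is not Prayote's implementation: the partitioning rules, traffic parameters, and thresholds are hypothetical, and a plain z-score test stands in for the thesis's small-sample-robust statistics.

```python
# Hypothetical two-knowledge-base detector in the spirit of the abstract above.
# KB1 partitions observations into contexts via simple rules; each partition
# keeps per-parameter running statistics; KB2 raises an alarm when enough
# parameters look anomalous. All rules, parameters and thresholds are made up.
import math
from collections import defaultdict

class RunningStats:
    """Welford's online mean/variance; usable as soon as a few samples exist."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x, z=3.0):
        # With very little data (a freshly added partition), stay conservative.
        if self.n < 5:
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        return std > 0 and abs(x - self.mean) / std > z

def kb1_partition(obs):
    """KB1: rule-based partitioning of the domain (hypothetical rules)."""
    if obs["is_holiday"]:
        return "holiday"
    if 9 <= obs["hour"] < 17:
        return "business-hours"
    return "off-hours"

def kb2_alarm(anomalous_params, min_count=2):
    """KB2: raise an alarm when enough parameters are anomalous."""
    return len(anomalous_params) >= min_count

stats = defaultdict(lambda: defaultdict(RunningStats))  # partition -> param -> stats
PARAMS = ["packets_per_sec", "bytes_per_sec", "flows_per_sec"]

def process(obs):
    partition = kb1_partition(obs)
    anomalous = [p for p in PARAMS if stats[partition][p].is_anomalous(obs[p])]
    for p in PARAMS:                      # learn the observation after testing it
        stats[partition][p].update(obs[p])
    return kb2_alarm(anomalous), partition, anomalous

if __name__ == "__main__":
    import random
    random.seed(0)
    for _ in range(200):                  # warm up one partition with normal traffic
        process({"is_holiday": False, "hour": 10,
                 "packets_per_sec": random.gauss(1000, 50),
                 "bytes_per_sec": random.gauss(8e5, 4e4),
                 "flows_per_sec": random.gauss(50, 5)})
    print(process({"is_holiday": False, "hour": 10,
                   "packets_per_sec": 5000, "bytes_per_sec": 4e6,
                   "flows_per_sec": 300}))  # -> (True, 'business-hours', [...])
```

In a full RDR system an administrator would additionally attach exception rules to `kb1_partition` and `kb2_alarm` as misclassified cases appear; the toy above omits that refinement loop.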
843

An adaptive wireless LAN MAC scheme to achieve maximum throughput and service differentiation

Zha, Wei. January 2005
Thesis (M.S.)--Mississippi State University, Department of Electrical and Computer Engineering. Title from title screen. Includes bibliographical references.
844

The performance of Group Diffie-Hellman paradigms: a software framework and analysis

Hagzan, Kieran S. January 2007
Thesis (M.S.)--Rochester Institute of Technology, 2007. Typescript. Includes bibliographical references (leaf 246).
845

Ad hoc wireless networks: flooding and statistical understanding of node movement

Mancera-Mendez, German Andres. January 2006
Thesis (M.S.E.C.E.)--University of Delaware, 2006. Principal faculty advisor: Leonard J. Cimini, Jr., Electrical and Computer Engineering. Includes bibliographical references.
846

The "Mobius Cube" : an interconnection network for parallel computation

Larson, Shawn M. 26 November 1990
Graduation date: 1991
847

Scalable and efficient distributed algorithms for defending against malicious Internet activity

Sung, Minho. 31 July 2006
The threat of malicious Internet activities such as Distributed Denial of Service (DDoS) attacks, spam email, and Internet worms/viruses has been increasing over the last several years. The impact and frequency of these malicious activities are expected to grow unless they are properly addressed. In this thesis, we propose to design and evaluate a set of practical and effective protection measures against potential malicious activities in current and future networks. Our research objective is twofold. First, we design methods to defend against DDoS attacks. Our research focuses on two important issues related to DDoS attack defense mechanisms. One issue is how to trace the sources of attack packets, a problem known as IP traceback. We propose a novel packet-logging-based (i.e., hash-based) traceback scheme that uses only a one-bit marking field in the IP header. It reduces processing and storage costs by an order of magnitude compared with existing hash-based schemes, and is therefore scalable to much higher link speeds (e.g., OC-768). Next, we propose an improved traceback scheme with lower storage overhead that uses more marking space in the IP header. The other issue in DDoS defense is protocol-independent techniques for improving the throughput of legitimate traffic during DDoS attacks. We propose a novel technique that can effectively filter out the majority of DDoS traffic, thus improving the overall throughput of the legitimate traffic. Second, we investigate the problem of distributed network monitoring. We propose a set of novel distributed data-streaming algorithms that allow scalable and efficient monitoring of aggregated traffic. Our algorithms target the specific network monitoring problem of finding common content in traffic traversing several nodes/links across the Internet. These algorithms find applications in network-wide intrusion detection, early warning for fast-propagating worms, and detection of hot objects and spam traffic.
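The abstract above spans several techniques. As a hedged, editorial illustration of just the last idea (finding content common to traffic observed at several monitoring points), the Python sketch below has each node summarise the payload fingerprints it sees in a Bloom filter, which a coordinator then intersects. This is not the thesis's streaming algorithm; the filter sizes, hash construction, and example payloads are all hypothetical.

```python
# Hypothetical sketch: per-node Bloom filters over payload fingerprints, with a
# coordinator that ANDs the filters and tests candidate payloads against the
# intersection to flag content seen at multiple monitoring points.
import hashlib

class BloomFilter:
    def __init__(self, m=1 << 16, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, data: bytes):
        # k independent bit positions derived from salted BLAKE2b digests.
        for i in range(self.k):
            h = hashlib.blake2b(data, digest_size=8, salt=bytes([i])).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, data: bytes):
        for pos in self._positions(data):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, data: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(data))

    def intersect(self, other: "BloomFilter") -> "BloomFilter":
        assert (self.m, self.k) == (other.m, other.k)
        out = BloomFilter(self.m, self.k)
        out.bits = bytearray(a & b for a, b in zip(self.bits, other.bits))
        return out

# Two monitoring nodes record fingerprints of payloads they observe.
node_a, node_b = BloomFilter(), BloomFilter()
for payload in [b"GET /index.html", b"WORM-PAYLOAD-XYZ", b"hello"]:
    node_a.add(payload)
for payload in [b"WORM-PAYLOAD-XYZ", b"unrelated traffic"]:
    node_b.add(payload)

# The coordinator intersects the filters; content present at both nodes
# (e.g. a fast-propagating worm payload) tests positive.
common = node_a.intersect(node_b)
print(b"WORM-PAYLOAD-XYZ" in common)   # True
print(b"hello" in common)              # False (with high probability)
```

The intersection over-approximates the truly common set (false positives only, no false negatives), which suits an early-warning setting where flagged candidates can be verified afterwards.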
848

On robustness in high load mobile ad hoc networks

Rogers, Paul Edward. January 2005
Thesis (Ph.D.)--State University of New York at Binghamton, Department of Computer Science, 2005. Includes bibliographical references (p. 129-136).
849

An adaptive approach for optimized opportunistic routing over Delay Tolerant Mobile Ad hoc Networks

Zhao, Xiaogeng. January 2007
Thesis (Ph.D. (Computer Science))--Rhodes University, 2008.
850

Performance evaluation of on demand multicast routing protocol for ad hoc wireless networks

Khan, Nabeel Pervaiz. January 2009
Thesis (M.S.)--University of Delaware, 2009. Principal faculty advisor: Charles G. Boncelet, Dept. of Computer & Information Sciences. Includes bibliographical references.
