  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Analyzing, modeling, and improving the performance of overlay networks

Thommes, Richard Winfried. January 2007 (has links)
No description available.
22

Using Plant Epidemiological Methods to Track Computer Network Worms

Pande, Rishikesh A. 28 May 2004 (has links)
Network worms that scan random computers have caused billions of dollars in damage to enterprises across the Internet. Earlier research has concentrated on using epidemiological models to predict the number of computers a worm will infect and how long it takes to do so. In this research, one possible approach is outlined for predicting the spatial flow of a worm within a local area network (LAN). The approach is based on the application of mathematical models and variables inherent in plant epidemiology. In particular, spatial autocorrelation has been identified as a candidate variable that helps predict the spread of a worm over a LAN. This research describes the application of spatial autocorrelation to the geography and topology of the LAN and the methods used to determine spatial autocorrelation. Also discussed are the data collection process and the methods used to extract pertinent information. Data collection and analyses are applied to the spread of three historical network worms on the Virginia Tech campus, and the results are described. Spatial autocorrelation exists in the spread of network worms across the Virginia Tech campus when the geographic aspect is considered. If a new network worm were to start spreading across Virginia Tech's campus, spatial autocorrelation would facilitate tracking the geographical locations of the spread. In addition, if an infection with a known value of spatial autocorrelation is detected, the characteristics of the worm can be identified without a complete analysis. / Master of Science
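The abstract does not name the statistic used, but spatial autocorrelation over a set of locations is classically measured with Moran's I. A minimal pure-Python sketch, where the building adjacency and infection counts are invented for illustration rather than taken from the thesis:

```python
def morans_i(values, weights):
    """Moran's I, the classic measure of spatial autocorrelation.

    values:  observations per location (e.g. infections per building)
    weights: weights[i][j] > 0 when locations i and j are neighbors
    Positive I indicates clustering; negative I indicates dispersion.
    """
    n = len(values)
    mean = sum(values) / n
    z = [v - mean for v in values]                 # deviations from the mean
    w_sum = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * z[i] * z[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in z)
    return (n / w_sum) * (num / den)

# Hypothetical example: four adjacent buildings, infections clustered at one end
infections = [9, 8, 1, 0]
adjacency = [[0, 1, 0, 0],
             [1, 0, 1, 0],
             [0, 1, 0, 1],
             [0, 0, 1, 0]]
print(round(morans_i(infections, adjacency), 3))
```

A positive result here reflects the geographic clustering the thesis reports for worm spread on the Virginia Tech campus.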
23

Overcoming Limitations in Computer Worm Models

Posluszny III, Frank S 31 January 2005 (has links)
In less than two decades, destruction and abuse caused by computer viruses and worms have grown from an anomaly to an everyday occurrence. In recent years, the Computer Emergency Response Team (CERT) has recorded a steady increase in software defects and vulnerabilities similar to those exploited by the Slammer and Code Red worms. In response to such threats, the academic community has started a set of research projects seeking to understand worm behavior through the creation of highly theoretical and generalized models. Staniford et al. created a model to explain the propagation behaviors of such worms in computer network environments. Their model applies the Kermack-McKendrick biological model of propagation to digital systems. Liljenstam et al. add a spatial perspective to this model, varying the infection rate by the scanning worms' source and destination groups. These models have been shown to describe generic Internet-scale behavior. However, they fall short from a localized (campus-scale) network perspective. We make the claim that certain real-world constraints, such as bandwidth and the heterogeneity of hosts, affect the propagation of worms and thus should not be ignored when creating models for analysis. In setting up a testing environment for this hypothesis, we have identified areas that need further work in the computer worm research community. These include the availability of real-world data, a generalized and behaviorally complete worm model, and packet-based simulations. The major contributions of this thesis are a parameterized, algorithmic worm model, an openly available worm simulation package (based on SSFNet and SSF.App.Worm), analysis of test results justifying our claim, and suggested future directions.
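For a random-scanning worm in a fixed population, the Kermack-McKendrick-style dynamics the cited models build on reduce to the simple epidemic equation dI/dt = βI(N − I)/N. A sketch with illustrative parameters (the population size and contact rate here are not taken from the thesis):

```python
def simulate_worm(n_hosts, beta, i0, steps, dt=0.01):
    """Euler integration of the simple epidemic model used for
    random-scanning worms: dI/dt = beta * I * (N - I) / N.

    Returns the infected-host curve over time: logistic growth with
    a slow start, an explosive middle phase, and saturation near N.
    """
    curve = [float(i0)]
    i = float(i0)
    for _ in range(steps):
        i += dt * beta * i * (n_hosts - i) / n_hosts
        curve.append(i)
    return curve

# Illustrative run: 350,000 vulnerable hosts, one initial infection
curve = simulate_worm(n_hosts=350_000, beta=8.0, i0=1, steps=2000)
```

The campus-scale critique above amounts to saying that β is not a constant in practice: bandwidth saturation and host heterogeneity make the effective contact rate vary, which this idealized model ignores.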
24

The Study on China Information operation

Tsao, Pang-Chuan 25 July 2001 (has links)
China has paid close attention to information operation since 1985, particularly after the Persian Gulf War in 1991. From the highest official levels downward, all military agencies of China have devoted themselves to research on information operation. Generally speaking, the military experts of China agree that information operation is a highly integrated form of warfare, and that all military actions that can disrupt the enemy's capability of controlling information fall into its category. This is a new wave of military revolution based on information technology. Militaries of different countries are embarking on this revolution. The foundation of the information industry, the progress of military theories, and the attitude towards military revolution will decide the order in which different nations complete the revolution. This thesis defines information operation in a way very similar to the definition adopted by the U.S. Army, and focuses on information operation at the national level. China's emphasis on information operation stems from several beliefs. China believes that a small-scale, localized war with the U.S. is unavoidable, and that the U.S. will either directly or indirectly intervene in any military conflict between China and Taiwan. China would like to disrupt the command and control system of the U.S. army and to balance the moderate and conservative factions within the party. Moreover, China believes that information operation has the advantages of launching a swift and precise attack while avoiding mass destruction of Taiwan's infrastructure and high-tech industry. It also has the benefits of low intensity, low loss, high efficiency, fast attack, and fast victory. In summary, information operation is regarded by China as a kind of warfare that conforms with both ancient war theory and modern economic demands.
In the face of China's development of information operation, Taiwan should think about how to make the best use of its advantages to confront China's threats and to gain a military edge over China. This thesis reaches nine conclusions: 1) China has placed more attention on offensive information operation; 2) China supports asymmetrical warfare; 3) China's rapid development in Internet technology is increasing its capability in information operation; 4) China has also shown emphasis and determination in the area of information security; 5) China will aggressively push for the establishment of a legal system for information security, and raise its defensive capability in information operation; 6) China will actively train skilled people to run information operation; 7) China's emphasis on conducting information operation during military exercises shows its determination to use it during a war; 8) China's current network military power is still inferior to that of the U.S.; 9) Taiwan has to look at China's development of information operation objectively.
25

Robust and efficient malware analysis and host-based monitoring

Sharif, Monirul Islam 15 November 2010 (has links)
Today, host-based malware detection approaches such as antivirus programs are severely lagging in their defense against malware. The overall effectiveness of malware detection depends on two aspects: the success of extracting information from malware through analysis to generate signatures, and the success of utilizing those signatures on target hosts with appropriate system monitoring techniques. Today's malware employs a vast array of anti-analysis and anti-monitoring techniques to deter analysis and to neutralize antivirus programs, reducing the overall success of malware detection. In this dissertation, we present a set of practical approaches to robust and efficient malware analysis and system monitoring that can help make malware detection on hosts more effective. First, we present a framework called Eureka, which efficiently deobfuscates single-pass and multi-pass packed binaries and restores obfuscated API calls, providing a basis for extracting comprehensive information from the malware using further static analysis. Second, we present the formal framework of transparent malware analysis and Ether, a dynamic malware analysis environment based on this framework that provides transparent fine-grained (single-instruction) and coarse-grained (system-call) tracing. Third, we introduce an input-based obfuscation technique that hides trigger-based behavior from any input-oblivious analyzer. Fourth, we present an approach that automatically reverse-engineers an emulator and extracts the syntax and semantics of its bytecode language, which helps construct control-flow graphs of the bytecode program and enables further analysis of the malicious code. Finally, we present Secure In-VM Monitoring, an approach for efficiently monitoring a target host while remaining robust against unknown malware that may attempt to neutralize security tools.
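The input-based obfuscation idea can be illustrated in miniature: hide a trigger condition behind a one-way hash, so an analyzer that does not explore the input space cannot recover which input activates the behavior. A hedged sketch — the trigger string and the check are invented for illustration, not taken from the dissertation's implementation:

```python
import hashlib

# Hypothetical trigger: only the digest is stored in the binary, never
# the plaintext, so static inspection reveals nothing invertible.
SECRET_DIGEST = hashlib.sha256(b"activate-2010").hexdigest()

def trigger_check(user_input: bytes) -> bool:
    """Input-based trigger gate: the hidden payload would run only when
    the input's hash matches the stored digest. An input-oblivious
    analyzer sees a comparison against an opaque constant and cannot
    invert the hash to discover the triggering input."""
    return hashlib.sha256(user_input).hexdigest() == SECRET_DIGEST

print(trigger_check(b"activate-2010"), trigger_check(b"anything-else"))
```

This is why the dissertation targets input-oblivious analyzers specifically: dynamic tracing of a single run never observes the payload unless the triggering input happens to be supplied.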
26

Using random projections for dimensionality reduction in identifying rogue applications

Atkison, Travis Levestis, January 2009 (has links)
Thesis (Ph.D.)--Mississippi State University. Department of Computer Science and Engineering. / Title from title screen. Includes bibliographical references.
27

Amber: a zero-interaction honeypot with distributed intelligence

Schoeman, Adam January 2015 (has links)
For the greater part, security controls are based on the principle of Decision through Detection (DtD). The exception is a honeypot, which analyses interactions between a third party and itself while occupying a piece of unused information space. As honeypots are not located on productive information resources, any interaction with them can be assumed to be non-productive. This allows the honeypot to make decisions based simply on the presence of data, rather than on the behaviour of the data. But due to limited human capital, honeypot uptake in the South African market has been underwhelming. Amber attempts to change this by offering a zero-interaction security system, which uses the honeypot approach of Decision through Presence (DtP) to generate a blacklist of third parties that can be passed on to a network enforcer. Empirical testing has demonstrated the usefulness of this alternative and low-cost approach in defending networks. The functionality of the system was also extended by installing nodes in different geographical locations and streaming their detections into the central Amber hive.
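The Decision-through-Presence rule reduces to a very small decision procedure: any source that touches unused address space is blacklisted, with no behavioral analysis required. A toy sketch of that rule — the addresses and event format are hypothetical, not Amber's actual implementation:

```python
def build_blacklist(events, dark_addresses):
    """Decision through Presence: any source that contacts an unused
    (dark) address is assumed hostile. Presence of traffic alone is the
    signal; no inspection of the traffic's behavior is needed."""
    blacklist = set()
    for src, dst in events:
        if dst in dark_addresses:
            blacklist.add(src)
    return blacklist

dark = {"10.0.9.1", "10.0.9.2"}           # unused information space (honeypot)
events = [("203.0.113.5", "10.0.9.1"),    # scanner probing the dark range
          ("198.51.100.7", "10.0.1.20"),  # normal traffic to a productive host
          ("203.0.113.9", "10.0.9.2")]    # another probe of the dark range
print(sorted(build_blacklist(events, dark)))
```

The resulting set is what would be streamed to a network enforcer; legitimate traffic to productive hosts never enters it.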
28

Computer Virus Spread Containment Using Feedback Control.

Yelimeli Guruprasad, Arun 12 1900 (has links)
In this research, a security architecture based on feedback control theory is proposed, and its first loop has been designed, developed, and tested. The architecture proposes a feedback model with many controllers located at different stages of the network. The controller at each stage gives feedback to the one at the next higher level, and a decision about network security is made. The first loop implemented in this thesis detects one important anomaly of a virus attack: the rate of outgoing connections. Though a virus attack has other anomalies, the rate of outgoing connections is an important one for containing the spread. Based on the feedback model, this symptom is fed back, and a state model using queuing theory is developed to delay connections and slow down the rate of outgoing connections. Under this model, whenever an infected machine tries to make connections at a speed not considered safe, the controller kicks in and sends those connections to a delay queue. Because connections are delayed, the rate of outgoing connections decreases; many delayed connections also time out and get dropped, further reducing the spread. A PID controller is implemented to decide the number of connections going to the safe or suspected queue. Multiple controllers can be implemented to control parameters such as delay and timeout. Control-theoretic analysis is performed on the system to test for stability, controllability, and observability, and sensitivity analysis determines the controller's sensitivity to the delay parameter. The first loop gives the proposed architecture feedback about symptoms of an attack at the node level. A controller still needs to be developed that receives information from the different controllers and makes quarantining decisions; this research provides the basic information that controller needs about what is happening at individual nodes of the network. This information can also be used to increase the sensitivity of other loops, improving the effectiveness of the feedback architecture.
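The delay-queue mechanism resembles connection-rate throttling schemes: connections beyond a safe rate are queued rather than sent, and queued connections that wait too long are dropped. A toy sketch of that queue-and-timeout behavior — the rates, timeout, and class names are illustrative, not the thesis's implementation:

```python
from collections import deque

class ConnectionThrottle:
    """Toy delay queue: connections beyond `safe_rate` per tick are
    queued instead of sent, and queued requests older than `timeout`
    ticks are dropped -- slowing a worm's burst of outgoing connections."""
    def __init__(self, safe_rate=3, timeout=2):
        self.safe_rate = safe_rate
        self.timeout = timeout
        self.delay_queue = deque()   # entries: (tick_enqueued, destination)
        self.sent, self.dropped = [], []

    def tick(self, now, requests):
        # Drop queued connections that have waited past the timeout
        while self.delay_queue and now - self.delay_queue[0][0] > self.timeout:
            self.dropped.append(self.delay_queue.popleft()[1])
        budget = self.safe_rate
        # Previously delayed connections get first claim on this tick's budget
        while self.delay_queue and budget:
            self.sent.append(self.delay_queue.popleft()[1])
            budget -= 1
        for dst in requests:
            if budget:
                self.sent.append(dst)
                budget -= 1
            else:
                self.delay_queue.append((now, dst))

throttle = ConnectionThrottle(safe_rate=3, timeout=2)
throttle.tick(0, [f"host{k}" for k in range(10)])  # worm-like burst of 10
for t in range(1, 10):
    throttle.tick(t, [])                           # quiet ticks drain the queue
print(len(throttle.sent), len(throttle.dropped))
```

The burst is smoothed to the safe rate, and the connection that waits past the timeout is dropped, mirroring the spread reduction the abstract describes.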
29

Modeling and Simulations of Worms and Mitigation Techniques

Abdelhafez, Mohamed 14 November 2007 (has links)
Internet worm attacks have become increasingly frequent and have had a major impact on the economy, making the detection and prevention of these attacks a top security concern. Several countermeasures have been proposed and evaluated in recent literature. However, the effect of these proposed defensive mechanisms on legitimate competing traffic has not been analyzed. The first contribution of this thesis is a comparative analysis of the effectiveness of several of these proposed mechanisms, including a measure of their effect on normal web browsing activities. In addition, we introduce a new defensive approach that can easily be implemented on existing hosts and that significantly reduces the rate of spread of worms using TCP connections to perform the infiltration. Our approach has no measurable effect on legitimate traffic. The second contribution is a variant of the flash worm that we term Compact Flash, or CFlash, which is capable of spreading even faster than its predecessor. We perform a comparative study between the flash worm and the CFlash worm using a full-detail packet-level simulator, and the results show the increase in propagation rate of the new worm given the same set of parameters. The third contribution is a study of the behavior of TCP-based worms in MANETs. We develop an analytical model for the spread of TCP worms in the MANET environment that accounts for payload size, bandwidth sharing, radio range, nodal density, and several other parameters specific to MANET topologies. We also present numerical solutions for the model and verify the results using packet-level simulations. The results show that the analytical model developed here matches the results of the packet-level simulation in most cases.
30

Robust and secure monitoring and attribution of malicious behaviors

Srivastava, Abhinav 08 July 2011 (has links)
Worldwide, computer systems continue to execute malicious software that degrades the systems' performance and consumes network capacity by generating high volumes of unwanted traffic. Network-based detectors can effectively identify machines participating in ongoing attacks by monitoring the traffic to and from those systems. But network detection alone is not enough; it does not improve the operation of the Internet or the health of other machines connected to the network. We must identify the malicious code running on infected systems that participates in global attack networks. This dissertation describes a robust and secure approach that identifies malware present on infected systems based on its undesirable use of the network. Our approach, using virtualization, attributes malicious traffic to the host-level processes responsible for that traffic. The attribution identifies on-host processes, but malware instances often exhibit parasitic behaviors to subvert the execution of benign processes. We therefore augment the attribution software with a host-level monitor that detects parasitic behaviors occurring at the user and kernel levels. User-level parasitic attack detection happens via the system-call interface because it is a non-bypassable interface for user-level processes. Because no such interface exists inside the kernel for drivers, we create a new driver monitoring interface inside the kernel to detect parasitic attacks occurring through it. Our attribution software relies on a guest kernel's data to identify on-host processes. To allow secure attribution, we prevent illegal modifications of critical kernel data by kernel-level malware. Together, our contributions produce a unified research outcome: an improved malicious-code identification system for user- and kernel-level malware.
