61

Overseas Incident Report Automation and Analysis

Lin, Char-Ming 01 October 2002 (has links)
This thesis develops an automatic incident report system with a built-in Whois search function, so incident report handlers can perform Whois lookups without external tools or time-consuming training. The system transforms incident report e-mails into database records, allowing TWCERT/CC staff to immediately analyze incident report data and attack trends. The thesis makes the following contributions: A. Reduced human and time costs: an organization using the incident report system can lower staff workload and help staff handle incident reports efficiently. B. Effective use of incident report information: by transforming e-mail messages into database records, the research makes it far more effective to compute a variety of statistics. C. Faster reaction time: processing incident reports manually requires heavy human effort; handling them automatically and promptly lets an organization speed up its reaction time.
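As an illustration of the e-mail-to-database step the abstract describes, here is a minimal Python sketch that parses an incident-report message into a SQLite row. The field names, the report layout, and the single-part plain-text assumption are all hypothetical; the actual TWCERT/CC format is not given in the abstract.

```python
import email
import re
import sqlite3

# Hypothetical field layout; real TWCERT/CC reports will differ.
FIELD_PATTERNS = {
    "source_ip":   re.compile(r"Source IP:\s*([\d.]+)"),
    "attack_type": re.compile(r"Attack Type:\s*(.+)"),
    "timestamp":   re.compile(r"Time:\s*(.+)"),
}

def ingest_report(raw_message: bytes, db_path: str = "incidents.db") -> None:
    """Parse one incident-report e-mail and store it as a database row."""
    msg = email.message_from_bytes(raw_message)
    # Assumes a single-part, plain-text message body.
    body = msg.get_payload(decode=True).decode(errors="replace")
    record = {"reporter": msg.get("From", "")}
    for field, pattern in FIELD_PATTERNS.items():
        match = pattern.search(body)
        record[field] = match.group(1).strip() if match else None

    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS incidents
                   (reporter TEXT, source_ip TEXT, attack_type TEXT, timestamp TEXT)""")
    con.execute("INSERT INTO incidents VALUES (:reporter, :source_ip, :attack_type, :timestamp)",
                record)
    con.commit()
    con.close()
```

Once reports live in a table like this, the per-attack-type and per-source statistics the abstract mentions reduce to simple SQL aggregates.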
62

Code Automation for Vulnerability Scanner

Wu, Ching-Chang 06 July 2003 (has links)
With enormous numbers of vulnerabilities being discovered and the Internet prevailing across the world, users confront an increasingly dangerous environment and need to understand the risks to their systems. A vulnerability scanner provides the functionality to check whether a system is vulnerable. Nessus is one such scanner: it offers customization through an attack language called NASL, with which users can write their own security checks. Before writing a security check, however, a user must understand the architecture of Nessus and learn how to write checks in NASL, and different vulnerabilities call for different detection approaches and communication methods. Users without this knowledge cannot write security checks. In this research, we develop an automatic code-generation mechanism for the Nessus scanner that produces security checks, offering two approaches. The first is modularization: function code is split into modules, and the modules are combined into a security check. The second is packaging: users never touch the attack code and only fill in parameters to produce the security check. This research proposes the design above and implements a system that generates attack code. It aims to reduce the knowledge users need to write security checks, reduce errors from human typos, and improve the efficiency and correctness of writing security checks.
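A minimal sketch of the "package" approach the abstract describes: a generator where the user only fills in parameters. The NASL built-ins referenced in the template (script_name, open_sock_tcp, recv, security_hole, and the >< substring operator) come from legacy NASL, but the template itself and the generator interface are illustrative assumptions, not the thesis's actual system.

```python
from string import Template

# Simplified NASL-like template for a banner-grabbing check; illustrative only.
CHECK_TEMPLATE = Template("""
if (description) {
    script_name("$name");
    script_family("$family");
    exit(0);
}
port = $port;
soc = open_sock_tcp(port);
if (!soc) exit(0);
banner = recv(socket: soc, length: 1024);
if ("$signature" >< banner) security_hole(port);
""")

def generate_check(name: str, family: str, port: int, signature: str) -> str:
    """Fill in parameters to produce a complete security check ("package" style)."""
    return CHECK_TEMPLATE.substitute(name=name, family=family,
                                     port=port, signature=signature)

print(generate_check("Example FTP banner check", "FTP", 21, "vulnerable-ftpd 1.0"))
```

The modularization approach would instead keep reusable fragments (port probing, banner matching, reporting) as separate pieces and concatenate the ones a given check needs.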
63

Efficient network camouflaging in wireless networks

Jiang, Shu 12 April 2006 (has links)
Camouflaging is about making something invisible or less visible. Network camouflaging is about hiding certain traffic information (e.g. traffic pattern, traffic flow identity, etc.) from internal and external eavesdroppers such that important information cannot be deduced from it for malicious use. It is one of the most challenging security requirements to meet in computer networks. Existing camouflaging techniques such as traffic padding, MIX-net, etc., incur significant performance degradation when the protected networks are wireless networks, such as sensor networks and mobile ad hoc networks. The reason is that wireless networks are typically subject to resource constraints (e.g. bandwidth, power supply) and possess some unique characteristics (e.g. broadcast, node mobility) that traditional wired networks do not. This necessitates developing new techniques that take account of the properties of wireless networks and achieve a good balance between performance and security. In this three-part dissertation we investigate techniques for providing network camouflaging services in wireless networks. In the first part, we address a specific problem in a hierarchical multi-task sensor network, i.e. hiding the links between observable traffic patterns and user interests. To solve the problem, a temporally constant traffic pattern, called a cover traffic pattern, is needed. We describe two traffic padding schemes that implement the cover traffic pattern and provide algorithms for achieving the optimal energy efficiency with each scheme. In the second part, we explore the design of a MIX-net based anonymity system in mobile ad hoc networks. The objective is to hide the source-destination relationship of each connection. We survey existing MIX route determination algorithms that do not account for dynamic network topology changes, which may result in high packet loss rate and large packet latency. We then introduce adaptive algorithms to overcome this problem. In the third part, we explore the notion of providing anonymity support at the MAC layer in wireless networks, which exploits the broadcast property of wireless transmission. We design an IEEE 802.11-compliant MAC protocol that provides receiver anonymity for unicast frames and offers better reliability than a pure broadcast protocol.
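To make the cover-traffic idea concrete, here is a toy Python sketch of a padding scheme that emits a temporally constant pattern of fixed-size frames per round. It assumes link-layer encryption, so dummy frames are indistinguishable from real ones; the constants, interface, and handling of overflow traffic are illustrative, not the dissertation's actual schemes.

```python
import os
import random

PAYLOAD_BYTES = 32          # fixed frame size, so length leaks nothing
SLOTS_PER_ROUND = 10        # constant number of transmissions per round

def pad_round(real_messages: list[bytes]) -> list[bytes]:
    """Emit a constant number of fixed-size frames per round: real traffic
    first, then dummy frames, so an eavesdropper sees the same pattern
    regardless of actual load. (Real messages beyond the per-round budget
    would be queued for the next round; queuing is omitted here.)"""
    frames = [m.ljust(PAYLOAD_BYTES, b"\x00")[:PAYLOAD_BYTES]
              for m in real_messages[:SLOTS_PER_ROUND]]
    while len(frames) < SLOTS_PER_ROUND:
        frames.append(os.urandom(PAYLOAD_BYTES))   # dummy; assumes encryption
    random.shuffle(frames)   # hide where real traffic sits within the round
    return frames
```

The energy-optimization question the abstract raises is then about choosing the rate and frame budget of this constant pattern so that padding overhead is minimized while the pattern still hides the underlying load.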
64

Design and Implementation of an Authentication and Authorization Framework for a Nomadic Service Delivery System

Das, Devaraj 12 1900 (has links)
The Internet has changed our lives. It has made the true distributed computing paradigm a reality and opened up many opportunities in both the research and business domains. One can now develop software and make it available to a large community of users. Hyper Text Transfer Protocol (HTTP), originally developed for requesting and transferring content (text, images, etc.), is now a standard for remotely invoking services and getting back results. Wireless technologies have also matured: 802.11 is the established standard for wireless communication in a LAN environment, and today even small computers like Personal Digital Assistants (PDAs) are wireless-enabled, making access to information and computing significantly more convenient. The Hotspot! server has been designed to provide connectivity and services in public places (called hotspots). It acts as a wireless Network Access Server (NAS) for users who want to obtain connectivity and services at public places. We believe the applications of primary importance and relevance in public places are Internet access and specific context-based or location-specific services, deployed by Internet Service Providers. Secure access is one of the primary concerns in public networks. We designed, developed, and tested a framework for secure access to HTTP-based services through the Hotspot! server; Internet access itself is a special case of an HTTP-based proxy service.
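A minimal sketch of the kind of token-based check such a framework might run on each proxied HTTP request, assuming a shared gateway secret and an HMAC-signed token. The token format, field names, and key handling are hypothetical, not the thesis's actual protocol.

```python
import hashlib
import hmac
import time

SECRET = b"shared-gateway-secret"   # hypothetical; a real NAS would manage keys properly

def issue_token(user: str, ttl_seconds: int = 3600) -> str:
    """Issue a signed, expiring token after the user authenticates."""
    expiry = str(int(time.time()) + ttl_seconds)
    mac = hmac.new(SECRET, f"{user}|{expiry}".encode(), hashlib.sha256).hexdigest()
    return f"{user}|{expiry}|{mac}"

def authorize_request(token: str) -> bool:
    """Gateway-side check run on every proxied HTTP request."""
    try:
        user, expiry, mac = token.split("|")
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{user}|{expiry}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected) and time.time() < int(expiry)
```

Separating one-time authentication (token issuance) from per-request authorization (token validation) keeps the per-request path cheap, which matters when every HTTP request through the gateway must be checked.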
65

Exact and Heuristic Algorithms for Solving the Generalized Minimum Filter Placement Problem

Mofya, Enock Chisonge January 2005 (has links)
We consider a problem of placing route-based filters in a communication network to limit the number of forged address attacks to a prescribed level. Nodes in the network communicate by exchanging packets along arcs, and the originating node embeds the origin and destination addresses within each packet that it sends. In the absence of a validation mechanism, one node can send packets to another node using a forged origin address to launch an attack against that node. Route-based filters can be established at various nodes on the communication network to protect against these attacks. A route-based filter examines each packet arriving at a node, and determines whether or not the origin address could be legitimate, based on the arc on which the packet arrives, the routing information, and possibly the destination. The problem we consider seeks to find a minimum cardinality subset of nodes to filter so that the prescribed level of security is achieved. The primary contributions of this dissertation are as follows. We formulate and discuss the modeling of this filter placement problem as a mixed-integer program. We then show the sensitivity of the optimal number of deployed filters as the required level of security changes, and demonstrate that current vertex cover-based heuristics are ineffective for problems with relaxed security levels. We identify a set of special network topologies on which the filter placement problem is solvable in polynomial time, focusing our attention on the development of a dynamic programming algorithm for solving this problem on tree networks. These results can in turn be used to derive valid inequalities for an integer programming model of the filter placement problem. Finally, we present heuristic algorithms based on the insights gained from our overall study for solving the problem, and evaluate their performance against the optimal solution provided by our integer programming model.
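For intuition, here is a toy greedy heuristic in Python for the relaxed problem: attack routes are given as node sets, and filters are placed until at most a prescribed number of routes evade filtering. This is an illustrative cover-style heuristic under those simplifying assumptions, not the dissertation's algorithms or its integer programming model.

```python
def greedy_filter_placement(attack_routes: list[set[str]], max_unblocked: int) -> set[str]:
    """Repeatedly place a filter on the node that blocks the most remaining
    attack routes, until at most `max_unblocked` routes evade filtering.
    (A route is blocked once any node on it hosts a filter.)"""
    filters: set[str] = set()
    remaining = [set(route) for route in attack_routes]
    while len(remaining) > max_unblocked:
        # Count how many still-unblocked routes each candidate node would block.
        counts: dict[str, int] = {}
        for route in remaining:
            for node in route:
                counts[node] = counts.get(node, 0) + 1
        best = max(counts, key=counts.get)
        filters.add(best)
        remaining = [r for r in remaining if best not in r]
    return filters

# Toy example: three spoofed-packet routes through a small network.
routes = [{"a", "b", "c"}, {"b", "d"}, {"c", "e"}]
print(greedy_filter_placement(routes, max_unblocked=1))   # e.g. {'b'}
```

Note how setting max_unblocked > 0 relaxes the problem away from a pure vertex cover, which is exactly the regime where the abstract reports that vertex cover-based heuristics become ineffective.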
66

An Investigation of Using Machine Learning with Distribution Based Flow Features for Classifying SSL Encrypted Network Traffic

Arndt, Daniel Joseph 13 August 2012 (has links)
Encrypted protocols, such as Secure Socket Layer (SSL), are becoming more prevalent because of the growing use of e-commerce, anonymity services, gaming and Peer-to-Peer (P2P) applications such as Skype and Gtalk. The objective of this work is two-fold. First, an investigation is provided into the identification of web browsing behaviour in SSL tunnels. To this end, C5.0, naive Bayesian, AdaBoost and Genetic Programming learning models are evaluated under training and test conditions from a network traffic capture. In these experiments flow-based features are employed without using Internet Protocol (IP) addresses, source/destination ports or payload information. Results indicate that it is possible to identify web browsing behaviour in SSL encrypted tunnels: test performance of ~95% detection rate and ~2% false positive rate is achieved with a C5.0 model for identifying SSL, and ~98% detection rate and ~3% false positive rate with an AdaBoost model for identifying web browsing within these tunnels. Second, the identifying characteristics of SSL traffic are investigated, whereby a new tool is introduced to generate new flow statistics that present the features in a unique way, using bins to represent distributions of measurements. These new features are tested using the best performers from the previous experiments, C5.0 and AdaBoost, and increase detection rates by up to 32.40%, and lower false positive rates by as much as 54.73%, on data sets that contain traffic from a different network than the one the training set was captured on. Furthermore, the new feature set outperforms the old feature set in every case.
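A minimal sketch of the distribution-based feature idea: a per-flow measurement series is reduced to a normalized histogram, one feature per bin. The bin edges and the choice of packet inter-arrival times as the measurement are illustrative assumptions, not the tool's actual configuration.

```python
def bin_distribution(values: list[float], edges: list[float]) -> list[float]:
    """Summarize a per-flow measurement (e.g. packet inter-arrival times in
    seconds) as a normalized histogram, one feature per bin."""
    counts = [0] * (len(edges) + 1)
    for v in values:
        i = 0
        while i < len(edges) and v >= edges[i]:
            i += 1
        counts[i] += 1
    total = len(values) or 1
    return [c / total for c in counts]

# One flow's inter-arrival times become a fixed-length feature vector
# suitable for a C5.0- or AdaBoost-style classifier.
iat = [0.001, 0.003, 0.004, 0.050, 0.300]
print(bin_distribution(iat, edges=[0.01, 0.1, 1.0]))   # [0.6, 0.2, 0.2, 0.0]
```

Compared with single summary statistics such as a mean or variance, the histogram preserves the shape of the distribution, which plausibly explains the cross-network generalization gains the abstract reports.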
67

Protecting Networked Systems from Malware Threats

Shin, Seungwon 16 December 2013 (has links)
Currently, networks and networked systems are essential media for communicating with other people, accessing resources, and sharing information. Reading (or sending) emails, navigating web sites, and uploading pictures to social media are common network behaviors. Networks and networked systems are also used to store and access sensitive or private information, and major economic activities, such as buying food and selling used cars, are conducted over networks. In short, we live with networks and networked systems. As network usage grows more widespread and popular, people face the problem of network attacks. Attackers can steal people's private information, mislead people into paying for fake products, and threaten those who operate online commercial sites by disrupting their services. Many more diverse types of network attacks plague network users, and the situation remains serious. This dissertation starts from two research questions: (i) what kinds of network attack are prevalent, and how can we investigate them, and (ii) how can we protect our networks and networked systems from these attacks. The dissertation therefore spans two main areas, one for each question. First, we analyze the behaviors and characteristics of large-scale bot-infected hosts, which yields new findings about network malware and new insights useful for detecting (or defeating) recent network threats. To do this, we investigate the characteristics of victims infected by recent popular botnets: Conficker, MegaD, and Srizbi. In addition, we propose a method to detect these bots by correlating network and host features. Second, we propose new frameworks to secure networks based on the new network technology of Software Defined Networking (SDN). SDN is widely considered a major future network trend, and it can dynamically program networks as we want. Our frameworks make it easy to devise network security applications for SDN, and we also provide an approach to making SDN technology itself secure.
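To illustrate the kind of security application an SDN framework makes easy to write, here is a toy Python callback that decides a flow rule from a security policy. The FlowEvent class, the blacklist, and the rule strings are simplified stand-ins for a real controller interface (e.g. an OpenFlow-based one), not the dissertation's frameworks.

```python
from dataclasses import dataclass

# A deliberately simplified stand-in for an SDN controller's flow-event API.
@dataclass
class FlowEvent:
    src_ip: str
    dst_ip: str
    dst_port: int

BOT_CC_BLACKLIST = {"198.51.100.7", "203.0.113.9"}   # hypothetical C&C addresses

def on_new_flow(event: FlowEvent) -> str:
    """Security-application logic run when the controller sees a new flow:
    return the rule ("drop" or "forward") to install in the switch."""
    if event.dst_ip in BOT_CC_BLACKLIST:
        return "drop"          # quarantine suspected bot traffic
    if event.dst_port == 25 and event.src_ip.startswith("10.0."):
        return "drop"          # e.g. block direct-to-MX spam from internal hosts
    return "forward"

print(on_new_flow(FlowEvent("10.0.0.5", "203.0.113.9", 443)))   # drop
```

The point of a security framework on top of SDN is that policy logic like this can be expressed in a few lines and pushed into the network dynamically, instead of being baked into middleboxes.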
68

Scalable and adaptable security modelling and analysis

Hong, Jin Bum January 2015 (has links)
Modern networked systems are so complex that assessing their security is a difficult task. Security models, which can evaluate the complex relationships between network components, are widely used to analyse the security of these systems. Security models can be generated by identifying vulnerabilities, threats (e.g., cyber attacks), network configurations, and the reachability of network components; these are then combined into a single model to evaluate how an attacker may penetrate the networked system. Further, countermeasures can be enforced to minimise cyber attacks based on the security analysis. However, modern networked systems are becoming large and dynamic (e.g., Cloud Computing systems). As a result, existing security models suffer from a scalability problem, and it becomes infeasible to use them for modern networked systems that contain hundreds or thousands of hosts and vulnerabilities. Moreover, the dynamic nature of modern networked systems requires responsive updates to the security model to monitor how changes affect security, but existing security models lack the capabilities to manage these changes efficiently. In addition, existing security models do not provide functionalities to capture and analyse the security of unknown attacks, where the combined effects of both known and unknown attacks can create unforeseen attack scenarios that may not be detected or mitigated. Therefore, the three goals of this thesis are to (i) develop security modelling and analysis methods that scale to a large number of network components and adapt to changes in the networked system; (ii) develop efficient security assessment methods to formulate countermeasures; and (iii) develop models and metrics to incorporate and assess the security of unknown attacks. A lifecycle of security models is introduced in this thesis to concisely describe the performance and functionalities of modern security models. The five phases in the lifecycle of security models are: (1) Preprocessing, (2) Generation, (3) Representation, (4) Evaluation, and (5) Modification. To achieve goal (i), a hierarchical security model is developed to reduce the computational cost of assessing security while maintaining all security information, where each layer captures different security information. A comparative analysis is then presented to show the scalability and adaptability of security models. The complexity analysis showed that the hierarchical security model has better or equivalent complexity in all phases of the lifecycle in comparison to existing security models, while the performance analysis showed that it is in fact much more scalable in practical network scenarios. To achieve goal (ii), security assessment methods based on importance measures are developed. Network centrality measures are used to identify important hosts in the networked system, and security metrics are used to identify important vulnerabilities on each host. New network centrality measures are also developed to remedy the inaccuracy of existing measures when attack scenarios involve attackers located inside the networked system. Important hosts and vulnerabilities are identified using efficient algorithms with polynomial time complexity, and experiments show that the accuracy of these algorithms is nearly equivalent to that of the naive method, which has exponential complexity.
To achieve goal (iii), unknown attacks are incorporated into the hierarchical security model and the combined effects of both known and unknown attacks are analysed. Algorithms taking into account all possible attack scenarios associated with unknown attacks are used to identify significant hosts and vulnerabilities, and approximation algorithms based on dynamic programming and greedy algorithms are developed to improve performance. Mitigation strategies to minimise the effects of unknown attacks are formulated on the basis of the significant hosts and vulnerabilities identified in the analysis. Results show that mitigation strategies formulated this way can significantly reduce the system risk in comparison to randomly applied mitigations. In summary, the contributions of this thesis are: (1) the development and evaluation of the hierarchical security model to enhance the scalability and adaptability of security modelling and analysis; (2) a comparative analysis of security models taking into account scalability and adaptability; (3) the development of security assessment methods based on importance measures to identify important hosts and vulnerabilities in the networked system, and an evaluation of their efficiency in terms of accuracy and performance; and (4) the development of security analysis taking unknown attacks into account, evaluating the combined effects of both known and unknown attacks.
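A toy two-layer sketch of the hierarchical idea: an upper layer captures host reachability, a lower layer captures per-host vulnerabilities, and path risk is computed by combining the layers. The topology, probabilities, and independence assumption are illustrative, not the thesis's model.

```python
# Upper layer: directed host reachability. Lower layer: per-vulnerability
# exploit probabilities on each host. Illustrative values only.
REACHABLE = {
    "internet": ["web"],
    "web": ["app"],
    "app": ["db"],
    "db": [],
}
VULN_PROB = {
    "web": [0.8, 0.5],
    "app": [0.4],
    "db": [0.6, 0.3],
}

def host_compromise_prob(host: str) -> float:
    """Probability at least one vulnerability on the host is exploited,
    assuming independent vulnerabilities."""
    p_safe = 1.0
    for p in VULN_PROB.get(host, []):
        p_safe *= (1.0 - p)
    return 1.0 - p_safe

def attack_path_prob(path: list[str]) -> float:
    """Probability an attacker compromises every host along a path."""
    prob = 1.0
    for a, b in zip(path, path[1:]):
        assert b in REACHABLE[a], f"{a} cannot reach {b}"
        prob *= host_compromise_prob(b)
    return prob

print(attack_path_prob(["internet", "web", "app", "db"]))   # 0.2592
```

The scalability benefit comes from this separation: topology changes touch only the upper layer, and patching a vulnerability touches only one host's lower-layer entry, instead of regenerating one monolithic attack graph.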
69

Efficient, Reliable and Secure Content Delivery

Lin, Yin January 2014 (has links)
Delivering content of interest to clients is one of the most important tasks of the Internet and an enduring research question in networking. Content distribution networks (CDNs) emerged in response to content providers' rising demand to deliver content to clients efficiently, reliably, and securely at relatively low cost. This dissertation explores how CDNs can achieve major performance benefits by adopting better caching strategies without changing the network, or by collaborating with ISPs and taking advantage of their better knowledge of network status and topology. It discusses the emerging trend of hybrid CDN architectures and solutions to the reliability problems they introduce. Finally, it demonstrates how CDNs could better protect both content providers and consumers from attacks and other malicious behaviors.
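As one classic baseline for the caching strategies mentioned above, here is a small LRU edge-cache sketch in Python; it is a textbook eviction policy offered for illustration, not a strategy proposed by the dissertation.

```python
from __future__ import annotations
from collections import OrderedDict

class LRUCache:
    """Least-recently-used eviction: a classic CDN edge-cache baseline."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store: OrderedDict[str, bytes] = OrderedDict()

    def get(self, url: str) -> bytes | None:
        if url not in self.store:
            return None                      # cache miss: fetch from origin
        self.store.move_to_end(url)          # mark as recently used
        return self.store[url]

    def put(self, url: str, body: bytes) -> None:
        self.store[url] = body
        self.store.move_to_end(url)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict the least recently used
```

Better strategies in the dissertation's sense would improve on a baseline like this along hit rate and origin load without requiring any change to the network itself.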
70

Empowering bystanders to facilitate Internet censorship measurement and circumvention

Burnett, Samuel Read 27 August 2014 (has links)
Free and open exchange of information on the Internet is at risk: more than 60 countries practice some form of Internet censorship, and both the number of countries practicing censorship and the proportion of Internet users who are subject to it are on the rise. Understanding and mitigating these threats to Internet freedom is a continuous technological arms race with many of the most influential governments and corporations. By its very nature, Internet censorship varies drastically from region to region, which has impeded nearly all efforts to observe and fight it on a global scale. Researchers and developers in one country may find it very difficult to study censorship in another; this is particularly true for those in North America and Europe attempting to study notoriously pervasive censorship in Asia and the Middle East. This dissertation develops techniques and systems that empower users in one country, or bystanders, to assist in the measurement and circumvention of Internet censorship in another. Our work builds from the observation that there are people everywhere who are willing to help us if only they knew how. First, we develop Encore, which allows webmasters to help study Web censorship by collecting measurements from their sites' visitors. Encore leverages weaknesses in cross-origin security policy to collect measurements from a far more diverse set of vantage points than previously possible. Second, we build Collage, a technique that uses the pervasiveness and scalability of user-generated content to disseminate censored content. Collage's novel communication model is robust against censorship that is significantly more powerful than governments use today. Together, Encore and Collage help people everywhere study and circumvent Internet censorship.
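Collage hides censored content inside innocuous user-generated content. As a simplified instance of that general idea (not Collage's actual encoding), here is a least-significant-bit steganography sketch in Python operating on raw pixel bytes.

```python
def embed(cover: bytearray, message: bytes) -> bytearray:
    """Hide `message` in the least-significant bits of cover bytes
    (e.g. raw pixel values of a photo destined for a sharing site)."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(cover), "cover too small"
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit
    return stego

def extract(stego: bytearray, length: int) -> bytes:
    """Recover `length` hidden bytes from the stego cover."""
    out = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (stego[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = bytearray(range(256)) * 4            # stand-in for pixel data
stego = embed(cover, b"censored article")
print(extract(stego, len(b"censored article")))   # b'censored article'
```

Because the carrier looks like ordinary user-generated content hosted on popular platforms, a censor must either tolerate the channel or block the platforms wholesale, which is the robustness property the abstract claims for Collage.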
