261
Improving the Efficiency and Robustness of Intrusion Detection Systems. Fogla, Prahlad, 20 August 2007
With the increase in the complexity of computer systems, existing security measures are not enough to prevent attacks, and intrusion detection systems have become an integral part of computer security for detecting attempted intrusions. Intrusion detection systems need to be fast in order to detect intrusions in real time, and they need to be robust against attacks that are disguised to evade them.
We improve the runtime complexity and space requirements of a host-based anomaly detection system that uses q-gram matching. q-gram matching is often used for approximate substring matching in a wide range of application areas, including intrusion detection. During the text pre-processing phase, we store all the q-grams present in the text in a tree, and use a tree-redundancy pruning algorithm to reduce the size of the tree without losing any information. We also use suffix links for fast, linear-time q-gram search during query matching. We compare our work with the Rabin-Karp-based hash-table technique commonly used for multiple q-gram matching.
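To make the comparison concrete, here is a minimal sketch (in Python, with illustrative names; not code from the thesis) of the Rabin-Karp-style hash-table baseline: hash every q-gram of the text into a table during pre-processing, then slide a rolling hash over the query.

```python
def qgram_hashes(data: bytes, q: int, base: int = 257, mod: int = (1 << 61) - 1):
    """Yield a Rabin-Karp rolling hash for every q-gram in `data`."""
    if len(data) < q:
        return
    power = pow(base, q - 1, mod)          # weight of the byte leaving the window
    h = 0
    for b in data[:q]:                     # hash the first q-gram
        h = (h * base + b) % mod
    yield h
    for i in range(q, len(data)):          # slide the window one byte at a time
        h = ((h - data[i - q] * power) * base + data[i]) % mod
        yield h

def count_matching_qgrams(text: bytes, query: bytes, q: int) -> int:
    """Pre-process `text` into a hash table, then scan `query` against it.

    Hash collisions are ignored here; a real implementation would verify matches.
    """
    table = set(qgram_hashes(text, q))
    return sum(1 for h in qgram_hashes(query, q) if h in table)

normal = b"GET /index.html HTTP/1.1"
print(count_matching_qgrams(normal, b"GET /index.php HTTP/1.1", q=3))
```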
To analyze the robustness of network anomaly detection systems, we develop a new class of polymorphic attacks, called polymorphic blending attacks, that can effectively evade payload-based network anomaly IDSs by carefully matching the statistics of the mutated attack instances to the normal profile. Using the PAYL anomaly detection system as our case study, we show that these attacks are practically feasible. We develop a formal framework to analyze polymorphic blending attacks against several network anomaly detection systems, and show that generating an optimal polymorphic blending attack is NP-hard for these systems. However, near-optimal attacks can still be generated using the proposed approximation algorithms. The framework can also be used to improve the robustness of an intrusion detector, and we suggest possible countermeasures one can take to harden an intrusion detection system against polymorphic blending attacks.
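As a toy illustration of the blending idea (not the thesis's algorithm), the sketch below pads an attack payload with greedily chosen bytes so that its 1-gram frequencies drift toward a normal profile of the kind PAYL learns. A real attack must additionally keep the payload functional, for example via a self-decrypting encoding, which this ignores.

```python
from collections import Counter

def blend(attack: bytes, profile: dict, pad_len: int) -> bytes:
    """Greedily append bytes so the padded payload's 1-gram frequencies
    drift toward `profile` (byte value -> target relative frequency)."""
    counts = Counter(attack)
    total = len(attack)
    padded = bytearray(attack)
    for _ in range(pad_len):
        total += 1
        # append whichever byte is most under-represented vs. the profile
        best = max(profile, key=lambda b: profile[b] - counts[b] / total)
        padded.append(best)
        counts[best] += 1
    return bytes(padded)

toy_profile = {b: 1 / 256 for b in range(256)}   # stand-in "normal" profile
print(len(blend(b"\x90\x90\x90ATTACK", toy_profile, pad_len=64)))
```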
262
Modeling and Simulations of Worms and Mitigation Techniques. Abdelhafez, Mohamed, 14 November 2007
Internet worm attacks have become increasingly frequent and have had a major impact on the economy, making the detection and prevention of these attacks a top security concern. Several countermeasures have been proposed and evaluated in recent literature. However, the effect of these proposed defensive mechanisms on legitimate competing traffic has not been analyzed.
The first contribution of this thesis is a comparative analysis of the effectiveness of several of these proposed mechanisms, including a measure of their effect on normal web browsing activities. In addition, we introduce a new defensive approach that can easily be implemented on existing hosts and that significantly reduces the rate of spread of worms that use TCP connections to perform the infiltration. Our approach has no measurable effect on legitimate traffic.
The second contribution is a variant of the flash worm, which we term Compact Flash or CFlash, that is capable of spreading even faster than its predecessor. We perform a comparative study between the flash worm and the CFlash worm using a full-detail packet-level simulator, and the results show the increase in propagation rate of the new worm given the same set of parameters.
The third contribution is the study of the behavior of TCP-based worms in MANETs. We develop an analytical model for the spread of TCP worms in MANET environments that accounts for payload size, bandwidth sharing, radio range, nodal density, and several other parameters specific to MANET topologies. We also present numerical solutions for the model and verify the results using packet-level simulations. The results show that the analytical model developed here matches the results of the packet-level simulation in most cases.
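For intuition, the sketch below integrates the classic susceptible-infected (SI) equation that analytical worm-propagation models typically build on; a MANET-specific model like the one described above folds payload size, bandwidth sharing, radio range, and node density into the effective contact rate, which here is just a free parameter. Values are arbitrary.

```python
import numpy as np

def si_model(n_hosts: int, beta: float, i0: int, t_end: float, dt: float = 0.01):
    """Integrate dI/dt = beta * I * (N - I) / N with forward Euler."""
    steps = int(t_end / dt)
    t = np.linspace(0.0, t_end, steps)
    infected = np.empty(steps)
    infected[0] = i0
    for k in range(1, steps):
        i = infected[k - 1]
        # beta would itself depend on radio range, node density, and
        # shared bandwidth in a MANET-specific model
        infected[k] = i + dt * beta * i * (n_hosts - i) / n_hosts
    return t, infected

t, i = si_model(n_hosts=10_000, beta=2.0, i0=1, t_end=10.0)
print(f"infected after {t[-1]:.0f} time units: {i[-1]:.0f}")
```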
263
Design and Implementation of an Unauthorized Internet Access Blocking System Validating the Source Information in Internet Access Logs. Uzunay, Yusuf, 01 September 2006
Internet access logs in a local area network are the most prominent records when the source of an Internet event is traced back. Especially when an illegal activity originating from one's local area network is of concern, it is highly desirable to provide the court with sound records that include the source user and machine identity of the log record in question. Establishing the validity of the user and machine identity in the log records is known as source authentication.
In our study, after the problem of source authentication in each layer is discussed in detail, we argue that the only way to establish secure source authentication is to implement a system model that unifies low-level and upper-level defense mechanisms. Hence, in this thesis we propose an unauthorized Internet access blocking system that validates the source information in Internet access logs. The first version of our proposed system, UNIDES, is a proxy-based system incorporating advanced switches that mostly deals with the low-level source authentication problems. In the second version, we extend our system with SIACS, an Internet access control system that deals with the user-level source authentication problems. By supplementing the classical username-password authentication mechanism with SSL client authentication, SIACS integrates a robust user-level authentication scheme into the proposed solution.
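The following minimal sketch shows what server-side enforcement of SSL/TLS client authentication, the mechanism SIACS layers on top of username-password, looks like with Python's standard ssl module. File names and the port are placeholders, not artifacts of the thesis.

```python
import socket, ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
ctx.load_verify_locations(cafile="trusted_clients_ca.pem")
ctx.verify_mode = ssl.CERT_REQUIRED        # reject clients without a valid cert

with socket.create_server(("0.0.0.0", 8443)) as srv:
    with ctx.wrap_socket(srv, server_side=True) as tls_srv:
        conn, addr = tls_srv.accept()       # blocks; handshake verifies the cert
        peer = conn.getpeercert()           # identity to record in the access log
        print(addr, peer.get("subject"))
```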
264
Real-time analysis of aggregate network traffic for anomaly detection. Kim, Seong Soo, 29 August 2005
Frequent and large-scale network attacks have led to an increased need for techniques for analyzing network traffic. If efficient analysis tools were available, it would become possible to detect attacks and anomalies, and to take appropriate action to contain them before they have had time to propagate across the network.
In this dissertation, we suggest a technique for traffic anomaly detection based on analyzing the correlation of destination IP addresses and the distribution of an image-based signal, both postmortem and in real time, by passively monitoring packet headers of traffic. These address correlation data are transformed using the discrete wavelet transform for effective detection of anomalies through statistical analysis. Results from trace-driven evaluation suggest that the proposed approach can provide an effective means of detecting anomalies close to the source. We also present a multidimensional indicator using the correlation of port numbers as a means of detecting anomalies.
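As an illustration of the wavelet step, here is a hand-rolled single-level Haar DWT applied to a synthetic traffic series with an injected spike; thresholding the detail coefficients is one simple form of the statistical analysis described above. The data and threshold are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

def haar_dwt(signal: np.ndarray):
    """One level of the Haar DWT: returns (approximation, detail)."""
    s = signal[: len(signal) // 2 * 2]            # truncate to even length
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)     # low-pass: local averages
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)     # high-pass: local differences
    return approx, detail

rng = np.random.default_rng(0)
traffic = rng.poisson(100, 256).astype(float)     # synthetic per-bin packet counts
traffic[181] += 500                               # injected anomaly spike
_, detail = haar_dwt(traffic)
threshold = detail.mean() + 3 * detail.std()      # crude statistical test
print("anomalous windows:", np.flatnonzero(np.abs(detail) > threshold))
```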
We also present a network measurement approach that can simultaneously detect, identify, and visualize attacks and anomalous traffic in real time. We propose to represent samples of network packet header data as frames or images. With such a formulation, a series of samples can be seen as a sequence of frames, or video. This enables techniques from image processing and video compression, such as the DCT, to be applied to the packet header data to reveal interesting properties of traffic. We show that "scene change analysis" can reveal sudden changes in traffic behavior or anomalies, and that "motion prediction" techniques can be employed to understand the patterns of some of the attacks. We also show that it may be feasible to represent multiple pieces of data as different colors of an image, enabling a uniform treatment of multidimensional packet header data.
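A sketch of what "scene change analysis" on packet-header frames can look like, assuming SciPy is available: each frame is a per-bin histogram of some header field, summarized by its low-frequency DCT coefficients, and a large distance between consecutive coefficient vectors flags a sudden change. Data and threshold are invented for illustration.

```python
import numpy as np
from scipy.fft import dct

def scene_changes(frames: np.ndarray, keep: int = 8, thresh: float = 50.0):
    """frames: (n_frames, n_bins) header histograms; returns change indices."""
    coeffs = dct(frames, axis=1, norm="ortho")[:, :keep]    # low-frequency summary
    dist = np.linalg.norm(np.diff(coeffs, axis=0), axis=1)  # frame-to-frame distance
    return np.flatnonzero(dist > thresh) + 1

rng = np.random.default_rng(1)
frames = rng.poisson(20, (100, 64)).astype(float)  # e.g. packets per port bucket
frames[60:, :8] += 100                             # traffic shifts at frame 60
print(scene_changes(frames))                       # -> [60]
```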
Measurement-based techniques for analyzing network traffic treat traffic volume and traffic header data as signals or images in order to make the analysis feasible. In this dissertation, we propose an approach based on the classical Neyman-Pearson test from signal detection theory to evaluate these different strategies. We use both analytical models and trace-driven experiments to compare the performance of the different strategies. Our evaluations on real traces reveal differences in the effectiveness of different traffic header data as potential signals for traffic analysis, in terms of their detection rates and false alarm rates. Our results show that address distributions and the number of flows are better signals than traffic volume for anomaly detection. They also show that statistical techniques can sometimes be more effective than the NP test when attack patterns change over time.
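For the Gaussian mean-shift case, the Neyman-Pearson likelihood-ratio test reduces to a simple threshold, as the sketch below shows; the traffic levels and distributions are illustrative assumptions, not values from the dissertation.

```python
from scipy.stats import norm

mu0, mu1, sigma = 100.0, 130.0, 15.0   # assumed normal vs. attack traffic levels

def np_decide(x: float, alpha: float = 0.01) -> bool:
    """Flag an attack while holding the false-alarm rate at alpha.

    For a Gaussian mean shift, the likelihood-ratio test reduces to a
    threshold on the observation itself."""
    threshold = mu0 + sigma * norm.ppf(1 - alpha)
    return x > threshold

threshold = mu0 + sigma * norm.ppf(1 - 0.01)
detection_rate = 1 - norm.cdf(threshold, loc=mu1, scale=sigma)
print(f"false-alarm rate 0.01 -> detection rate {detection_rate:.2f}")
```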
265
Wavelet methods and statistical applications: network security and bioinformatics. Kwon, Deukwoo, 01 November 2005
Wavelet methods possess versatile properties for statistical applications. We explore the advantages of using wavelets in two different research areas. First, we develop an integrated tool for online detection of network anomalies. We consider statistical change-point detection algorithms, both for local changes in the variance and for jump detection, and propose modified versions of these algorithms based on moving-window techniques. We investigate their performance on simulated data and on network traffic data with several superimposed attacks. All detection methods are based on wavelet packet transformations.
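A minimal sketch of a moving-window test for a local change in variance: compare the sample variance in two adjacent windows and flag points where the ratio is extreme. The window size and threshold are invented, and the thesis applies such tests to wavelet packet coefficients rather than to the raw series as done here.

```python
import numpy as np

def variance_change_points(x: np.ndarray, w: int = 64, ratio: float = 3.0):
    """Flag indices where adjacent windows differ sharply in variance."""
    hits = []
    for t in range(w, len(x) - w):
        left = x[t - w : t].var()
        right = x[t : t + w].var()
        if max(left, right) / max(min(left, right), 1e-12) > ratio:
            hits.append(t)
    return hits

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(0, 3, 500)])  # variance jump at 500
hits = variance_change_points(x)
print("change detected near:", hits[0] if hits else None)
```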
We also propose a Bayesian model for the analysis of high-throughput data where the outcome of interest has a natural ordering. The method provides a unified approach for identifying relevant markers and predicting class memberships. This is accomplished by building a stochastic search variable selection method into an ordinal model. We apply the methodology to the analysis of proteomic studies in prostate cancer. We explore wavelet-based techniques to remove noise from the protein mass spectra. The goal is to identify protein markers associated with prostate-specific antigen (PSA) level, an ordinal diagnostic measure currently used to stratify patients into different risk groups.
266
Practical authentication in large-scale Internet applications. Dacosta, Italo, 03 July 2012
Due to their massive user base and request load, large-scale Internet applications have mainly focused on goals such as performance and scalability. As a result, many of these applications rely on weaker but more efficient and simpler authentication mechanisms. However, as recent incidents have demonstrated, powerful adversaries are exploiting the weaknesses in such mechanisms. While more robust authentication mechanisms exist, most of them fail to address the scale and security needs of these large-scale systems. In this dissertation we demonstrate that by taking into account the specific requirements and threat model of large-scale Internet applications, we can design authentication protocols for such applications that are not only more robust but also have low impact on performance, scalability and existing infrastructure. In particular, we show that there is no inherent conflict between stronger authentication and other system goals. For this purpose, we have designed, implemented and experimentally evaluated three robust authentication protocols: Proxychain, for SIP-based VoIP authentication; One-Time Cookies (OTC), for Web session authentication; and Direct Validation of SSL/TLS Certificates (DVCert), for server-side SSL/TLS authentication. These protocols not only offer better security guarantees, but they also have low performance overheads and do not require additional infrastructure. In so doing, we provide robust and practical authentication mechanisms that can improve the overall security of large-scale VoIP and Web applications.
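To illustrate the idea behind one-time session tokens such as OTC (this is a conceptual sketch, not the OTC protocol specification), each request can carry a fresh HMAC computed over a per-request counter, so a token captured on the wire cannot be replayed:

```python
import hmac, hashlib

def make_token(session_key: bytes, counter: int, request_line: bytes) -> str:
    """Derive a per-request token from the session secret and a counter."""
    msg = counter.to_bytes(8, "big") + request_line
    return hmac.new(session_key, msg, hashlib.sha256).hexdigest()

def verify(session_key: bytes, counter: int, request_line: bytes,
           token: str, last_seen: int) -> bool:
    expected = make_token(session_key, counter, request_line)
    # constant-time compare, and reject replayed or stale counters
    return counter > last_seen and hmac.compare_digest(expected, token)

key = b"per-session secret negotiated at login"
t = make_token(key, 42, b"GET /inbox HTTP/1.1")
print(verify(key, 42, b"GET /inbox HTTP/1.1", t, last_seen=41))  # True
print(verify(key, 42, b"GET /inbox HTTP/1.1", t, last_seen=42))  # False: replay
```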
267
A Modified Genetic Algorithm and Switch-Based Neural Network Model Applied to Misuse-Based Intrusion Detection. Stewart, Ian, 17 March 2009
As our reliance on the Internet continues to grow, the need for secure, reliable networks also increases. Using a modified genetic algorithm and a switch-based neural network model, this thesis outlines the creation of a powerful intrusion detection system (IDS) capable of detecting network attacks.
The new genetic algorithm is tested against traditional and other modified genetic algorithms using common benchmark functions, and is found to produce better results in less time and with less human interaction. The IDS is tested using the standard benchmark data collection for intrusion detection: the KDD99 set, derived from the 1998 DARPA evaluation data. Results are found to be comparable to those achieved using ant colony optimization, and superior to those obtained with support vector machines and other genetic algorithms. Thesis (Master, Computing), Queen's University, 2009.
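For reference, here is a compact generic GA on the sphere benchmark, the kind of setup used when comparing GA variants on common benchmark functions; the operators and parameters below are common defaults, not the thesis's modifications.

```python
import numpy as np

def sphere(x):                      # benchmark: global minimum 0 at the origin
    return np.sum(x * x, axis=1)

def ga(dim=10, pop_size=50, gens=200, mut_sigma=0.1, elite=5, seed=3):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (pop_size, dim))
    for _ in range(gens):
        fit = sphere(pop)
        order = np.argsort(fit)                 # lower fitness is better
        parents = pop[order[: pop_size // 2]]   # truncation selection
        # uniform crossover between random parent pairs
        a = parents[rng.integers(len(parents), size=pop_size - elite)]
        b = parents[rng.integers(len(parents), size=pop_size - elite)]
        mask = rng.random((pop_size - elite, dim)) < 0.5
        children = np.where(mask, a, b)
        children += rng.normal(0, mut_sigma, children.shape)   # Gaussian mutation
        pop = np.vstack([pop[order[:elite]], children])        # elitism
    return pop[np.argmin(sphere(pop))]

best = ga()
print("best fitness:", float(np.sum(best * best)))
```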
268
Security challenges within Software Defined Networks. Sund, Gabriel; Ahmed, Haroon, January 2014
A large amount of today's communication occurs within data centers, where large numbers of virtual servers (each running one or more virtual machines) provide service providers with the infrastructure needed for their applications and services. In this thesis, we look at the next step in the virtualization revolution: the virtualized network. Software-defined networking (SDN) is a relatively new concept that is moving the field towards a more software-based solution to networking. Today, when a packet is forwarded through a network of routers, each router decides which router is the next hop for the packet. With SDN these decisions are made by a centralized SDN controller, which decides upon the best path and instructs the devices along this path as to what action each should perform. Taking SDN to its extreme minimizes the physical network components and increases the number of virtualized ones. The reasons behind this trend are several; the most prominent are simplified processing and network administration, a greater degree of automation, increased flexibility, and shorter provisioning times. This in turn reduces operating and capital expenditures for data center owners, both of which drive the further development of this technology.

Virtualization has been gaining ground in the last decade, although its initial introduction dates to the 1970s, when server virtualization offered the ability to create several virtual server instances on one physical server. Today we have already taken small steps towards a virtualized network through virtualization of network equipment such as switches, routers, and firewalls. Common to all of these virtualization technologies is that, in their early stages, they have encountered trust issues and general concerns about whether software-based solutions are as rugged and reliable as hardware-based ones. SDN has also encountered these issues, and discussion continues among both believers and skeptics. Concerns about trust remain a problem for the growing number of cloud-based services, where multi-tenant deployments may lead to loss of personal integrity and other security risks. As a relatively new and still immature technology, SDN has a number of vulnerabilities, and as with most software-based solutions, the potential for security risks increases.

This thesis investigates, in text and via simulations, how denial-of-service (DoS) attacks affect an SDN environment with a single-threaded controller. The results of our investigation of trust in a multi-tenancy SDN environment suggest that standardization and clear service level agreements are necessary to consolidate customers' confidence. Attracting small groups of customers to participate in user cases in the initial stages of implementation can generate valuable support for a broader implementation of SDN in the underlying infrastructure. With regard to denial-of-service attacks, our conclusion is that attackers can target the centralized SDN controller and thereby negatively affect most of the network infrastructure, because the entire infrastructure directly depends upon a functioning SDN controller. SDN introduces new vulnerabilities, which is natural for a relatively new technology; it therefore needs to be thoroughly tested and examined before a widespread deployment.
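One commonly discussed mitigation for such controller-targeted floods is per-switch rate limiting of events sent up to the controller; the token-bucket sketch below is a generic illustration of that idea, not a mechanism proposed in the thesis.

```python
import time
from collections import defaultdict

class PacketInLimiter:
    """Token bucket per switch, so one flooded switch cannot monopolize
    a single-threaded controller."""

    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst          # tokens/sec, bucket size
        self.tokens = defaultdict(lambda: burst)
        self.stamp = defaultdict(time.monotonic)

    def allow(self, switch_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.stamp[switch_id]
        self.stamp[switch_id] = now
        self.tokens[switch_id] = min(self.burst,
                                     self.tokens[switch_id] + elapsed * self.rate)
        if self.tokens[switch_id] >= 1.0:
            self.tokens[switch_id] -= 1.0
            return True
        return False   # drop or queue the event instead of processing it

limiter = PacketInLimiter(rate=100.0, burst=200.0)
print(limiter.allow("switch-1"))
```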
269
Information Hiding in Networks: Covert Channels. Ríos del Pozo, Rubén, January 2007
Covert channels have existed for more than twenty years. Although they did not receive much attention in their early years, they are being studied more and more nowadays. This work focuses on network covert channels; it gives an overview of their basics and then analyses several existing implementations that may compromise the security perimeter of a corporate network. The features under study are the bandwidth provided by the channel and the ease of detection. The studied tools have turned out to be unreliable in most cases and easy to detect with current detection techniques, and the bandwidth they provide is usually moderate, but they might pose a threat if not taken into consideration.
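As a concrete example of the class of tools surveyed, the toy covert timing channel below hides one bit in each inter-packet gap (long gap = 1, short gap = 0); it shows in miniature why such channels offer only moderate bandwidth and leave a detectable statistical fingerprint. The delay values are arbitrary assumptions.

```python
SHORT, LONG = 0.05, 0.15          # seconds; illustrative values

def encode(payload: bytes):
    """Yield the inter-packet delays that would carry `payload`."""
    for byte in payload:
        for i in range(8):
            yield LONG if (byte >> (7 - i)) & 1 else SHORT

def decode(delays):
    """Recover bytes by thresholding each observed gap."""
    bits = [1 if d > (SHORT + LONG) / 2 else 0 for d in delays]
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits) - 7, 8))

delays = list(encode(b"hi"))
print(decode(delays))                                   # b'hi'
print(f"throughput: {8 / sum(delays[:8]):.0f} bit/s")   # very low bandwidth
```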
270
Uma proposta para medição de complexidade e estimação de custos de segurança em procedimentos de tecnologia da informação / An approach to measure the complexity and estimate the cost associated with Information Technology Security Procedures. Moura, Giovane Cesar Moreira, January 2008
IT security has become a major concern for organizations over recent years. However, it does not come without large investments, both in acquiring tools to satisfy particular security requirements and in complex procedures to deploy and maintain a protected infrastructure. The scientific community has recently proposed models and techniques to estimate the complexity of configuration procedures, aware that they represent a significant operational cost, often dominating total cost of ownership. However, despite the central role played by security within this context, it has not been the subject of any investigation to date. To address this issue, we apply a model of configuration complexity proposed in the literature in order to estimate the impact of security on the complexity of IT procedures. Our proposal has been materialized through a prototypical implementation of a complexity scorer system called the Security Complexity Analyzer (SCA). To prove the concept and technical feasibility of our proposal, we have used the SCA to evaluate real-life security scenarios. In addition, we have conducted a study of the relation between the metrics proposed in the model and the time spent by the administrator while executing security procedures, with a quantitative model built using multiple regression analysis, in order to predict the costs associated with security.
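A sketch of the kind of quantitative model described: ordinary least squares over complexity metrics to predict administrator execution time. The metric names and data below are invented for illustration, not the SCA's.

```python
import numpy as np

# columns: number of steps, context switches, parameters to supply (hypothetical)
X = np.array([[ 5, 1,  3],
              [12, 4,  9],
              [ 8, 2,  5],
              [20, 6, 14],
              [15, 3, 10]], dtype=float)
minutes = np.array([11.0, 34.0, 19.0, 58.0, 40.0])   # observed execution times

X1 = np.hstack([np.ones((len(X), 1)), X])            # add intercept column
coef, *_ = np.linalg.lstsq(X1, minutes, rcond=None)  # OLS fit

new_procedure = np.array([1, 10, 2, 7], dtype=float) # intercept + metric values
print(f"predicted time: {new_procedure @ coef:.1f} minutes")
```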