261

Real-time analysis of aggregate network traffic for anomaly detection

Kim, Seong Soo 29 August 2005 (has links)
Frequent, large-scale network attacks have led to an increased need for techniques for analyzing network traffic. If efficient analysis tools were available, it could become possible to detect attacks and anomalies and to take appropriate action to contain them before they have had time to propagate across the network. In this dissertation, we suggest a technique for traffic anomaly detection based on analyzing the correlation of destination IP addresses and the distribution of image-based signals, both postmortem and in real time, by passively monitoring the packet headers of traffic. These address correlation data are transformed using the discrete wavelet transform for effective detection of anomalies through statistical analysis. Results from trace-driven evaluation suggest that the proposed approach could provide an effective means of detecting anomalies close to the source. We present a multidimensional indicator using the correlation of port numbers as a means of detecting anomalies. We also present a network measurement approach that can simultaneously detect, identify and visualize attacks and anomalous traffic in real time. We propose to represent samples of network packet header data as frames or images. With such a formulation, a series of samples can be seen as a sequence of frames, or video. This enables techniques from image processing and video compression, such as the DCT, to be applied to the packet header data to reveal interesting properties of traffic. We show that "scene change analysis" can reveal sudden changes in traffic behavior or anomalies. We show that "motion prediction" techniques can be employed to understand the patterns of some of the attacks. We show that it may be feasible to represent multiple pieces of data as different colors of an image, enabling a uniform treatment of multidimensional packet header data.
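The wavelet idea above can be illustrated with a minimal sketch (not the dissertation's actual pipeline): a one-level Haar discrete wavelet transform of a synthetic traffic signal, whose detail coefficients capture abrupt local changes, followed by a simple statistical threshold. The signal, the threshold factor `k`, and the function names are our own illustrative choices.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficients; the detail
    coefficients capture abrupt local changes in the signal."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def detect_anomalies(signal, k=3.0):
    """Flag sample pairs whose detail coefficient deviates more than
    k standard deviations from the mean detail coefficient."""
    _, detail = haar_dwt(signal)
    mu, sigma = detail.mean(), detail.std()
    return np.where(np.abs(detail - mu) > k * sigma)[0]

# A flat "address correlation" signal with one injected burst.
traffic = np.ones(64)
traffic[40] = 25.0  # sudden spike, e.g. a scanning attack
anomalies = detect_anomalies(traffic)
print(anomalies)  # -> [20]  (sample pair containing index 40)
```

Each detail coefficient covers a pair of consecutive samples, so the spike at index 40 shows up as pair index 20; a real deployment would apply this per time window over live packet-header counts.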
Measurement-based techniques for analyzing network traffic treat traffic volume and traffic header data as signals or images in order to make the analysis feasible. In this dissertation, we propose an approach based on the classical Neyman-Pearson test employed in signal detection theory to evaluate these different strategies. We use both analytical models and trace-driven experiments to compare the performance of the different strategies. Our evaluations on real traces reveal differences in the effectiveness of different traffic header data as potential signals for traffic analysis in terms of their detection rates and false alarm rates. Our results show that address distributions and the number of flows are better signals than traffic volume for anomaly detection. Our results also show that statistical techniques can sometimes be more effective than the NP test when attack patterns change over time.
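A Neyman-Pearson-style detector can be sketched as follows: for a Gaussian mean shift with known variance, the window mean is the likelihood-ratio test statistic, and the detection threshold is chosen to cap the false-alarm rate at a target level, here calibrated empirically from normal-traffic windows. All numbers below are invented for illustration; the dissertation's signals and traces are not reproduced.

```python
import random

random.seed(7)

def calibrate_threshold(h0_windows, alpha):
    """Pick the threshold whose false-alarm rate on normal (H0)
    windows is at most alpha: the Neyman-Pearson criterion with the
    window mean as the test statistic."""
    means = sorted(sum(w) / len(w) for w in h0_windows)
    return means[int((1 - alpha) * len(means))]

def detect(window, threshold):
    """Declare an attack when the window mean exceeds the threshold."""
    return sum(window) / len(window) > threshold

# Normal traffic: packet counts ~ N(100, 10); an attack adds +50.
normal = [[random.gauss(100, 10) for _ in range(20)] for _ in range(500)]
thr = calibrate_threshold(normal, alpha=0.01)
attack = [x + 50 for x in normal[0]]
print(detect(attack, thr))  # True
```

Swapping the window mean for another statistic (flow counts, address distributions) is exactly the kind of strategy comparison the abstract describes.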
262

Wavelet methods and statistical applications: network security and bioinformatics

Kwon, Deukwoo 01 November 2005 (has links)
Wavelet methods possess versatile properties for statistical applications. We explore the advantages of using wavelets in analyses in two different research areas. First, we develop an integrated tool for online detection of network anomalies. We consider statistical change point detection algorithms, both for local changes in variance and for jump detection, and propose modified versions of these algorithms based on moving-window techniques. We investigate their performance on simulated data and on network traffic data with several superimposed attacks. All detection methods are based on wavelet packet transformations. We also propose a Bayesian model for the analysis of high-throughput data where the outcome of interest has a natural ordering. The method provides a unified approach for identifying relevant markers and predicting class memberships. This is accomplished by building a stochastic search variable selection method into an ordinal model. We apply the methodology to the analysis of proteomic studies in prostate cancer. We explore wavelet-based techniques to remove noise from the protein mass spectra. The goal is to identify protein markers associated with prostate-specific antigen (PSA) level, an ordinal diagnostic measure currently used to stratify patients into different risk groups.
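The moving-window change-in-variance idea can be sketched as follows (a plain variance-ratio scan, without the wavelet packet machinery of the thesis): two adjacent windows slide over the series, and positions where the right window's variance greatly exceeds the left's are reported. Window size and ratio are arbitrary illustrative values.

```python
import random
import statistics

random.seed(1)

def variance_change_points(series, window=20, ratio=3.0):
    """Report indices where the variance of the window ahead exceeds
    the variance of the window behind by more than `ratio`, a
    moving-window analogue of local change-in-variance detection."""
    points = []
    for i in range(window, len(series) - window + 1):
        left = statistics.pvariance(series[i - window:i])
        right = statistics.pvariance(series[i:i + window])
        if left > 0 and right / left > ratio:
            points.append(i)
    return points

calm = [random.gauss(0, 1) for _ in range(100)]
burst = [random.gauss(0, 5) for _ in range(100)]  # variance jump at t=100
cps = variance_change_points(calm + burst)
print(cps[:3])
```

In the thesis the same scan would run over wavelet packet coefficients of the traffic rather than the raw series, which localizes the change in both time and frequency.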
263

Practical authentication in large-scale internet applications

Dacosta, Italo 03 July 2012 (has links)
Due to their massive user base and request load, large-scale Internet applications have mainly focused on goals such as performance and scalability. As a result, many of these applications rely on weaker but more efficient and simpler authentication mechanisms. However, as recent incidents have demonstrated, powerful adversaries are exploiting the weaknesses in such mechanisms. While more robust authentication mechanisms exist, most of them fail to address the scale and security needs of these large-scale systems. In this dissertation we demonstrate that by taking into account the specific requirements and threat model of large-scale Internet applications, we can design authentication protocols for such applications that are not only more robust but also have low impact on performance, scalability and existing infrastructure. In particular, we show that there is no inherent conflict between stronger authentication and other system goals. For this purpose, we have designed, implemented and experimentally evaluated three robust authentication protocols: Proxychain, for SIP-based VoIP authentication; One-Time Cookies (OTC), for Web session authentication; and Direct Validation of SSL/TLS Certificates (DVCert), for server-side SSL/TLS authentication. These protocols not only offer better security guarantees, but they also have low performance overheads and do not require additional infrastructure. In so doing, we provide robust and practical authentication mechanisms that can improve the overall security of large-scale VoIP and Web applications.
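The One-Time Cookies idea can be sketched in a few lines: instead of a static session cookie, each request carries a credential computed from a session secret, bound to that one request. This is a simplified construction in the spirit of OTC, not the published protocol; the function names and token format are ours.

```python
import hmac
import hashlib
import secrets

def issue_session():
    """Server issues a session secret once, at login; it is stored
    client-side and never sent over the wire again."""
    return secrets.token_bytes(32)

def make_token(session_secret, request_line, counter):
    """Per-request credential: an HMAC binding the token to one
    specific request and counter, so a captured token cannot be
    replayed for a different request."""
    msg = f"{counter}:{request_line}".encode()
    return hmac.new(session_secret, msg, hashlib.sha256).hexdigest()

def verify(session_secret, request_line, counter, token):
    expected = make_token(session_secret, request_line, counter)
    return hmac.compare_digest(expected, token)

secret = issue_session()
t = make_token(secret, "GET /inbox", 1)
print(verify(secret, "GET /inbox", 1, t))  # True
print(verify(secret, "GET /admin", 1, t))  # False: replay fails
```

Because verification needs only an HMAC over material already at hand, the per-request cost stays low, which is the thesis's point that stronger authentication need not conflict with scalability.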
264

A Modified Genetic Algorithm and Switch-Based Neural Network Model Applied to Misuse-Based Intrusion Detection

Stewart, Ian 17 March 2009 (has links)
As our reliance on the Internet continues to grow, the need for secure, reliable networks also increases. Using a modified genetic algorithm and a switch-based neural network model, this thesis outlines the creation of a powerful intrusion detection system (IDS) capable of detecting network attacks. The new genetic algorithm is tested against traditional and other modified genetic algorithms using common benchmark functions, and is found to produce better results in less time, and with less human interaction. The IDS is tested using the standard benchmark data collection for intrusion detection: the DARPA 98 KDD99 set. Results are found to be comparable to those achieved using ant colony optimization, and superior to those obtained with support vector machines and other genetic algorithms. / Thesis (Master, Computing) -- Queen's University, 2009-03-03 13:28:23.787
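For reference, a baseline genetic algorithm of the kind such benchmark comparisons start from can be sketched as follows: tournament selection, uniform crossover, Gaussian mutation and elitism, minimizing the classic sphere benchmark. This is the plain baseline, not the modified operators of the thesis; all parameter values are illustrative.

```python
import random

random.seed(3)

def sphere(x):
    """Classic GA benchmark: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def genetic_minimize(fitness, dim=5, pop=40, gens=200):
    """Generational GA: tournament selection, uniform crossover,
    Gaussian mutation of a single gene, elitism of the best genome."""
    population = [[random.uniform(-5, 5) for _ in range(dim)]
                  for _ in range(pop)]
    for _ in range(gens):
        def tournament():
            a, b = random.sample(population, 2)
            return a if fitness(a) < fitness(b) else b
        children = [min(population, key=fitness)[:]]  # keep the elite
        while len(children) < pop:
            p1, p2 = tournament(), tournament()
            child = [random.choice(genes) for genes in zip(p1, p2)]
            if random.random() < 0.2:
                i = random.randrange(dim)
                child[i] += random.gauss(0, 0.3)
            children.append(child)
        population = children
    return min(population, key=fitness)

best = genetic_minimize(sphere)
print(sphere(best))  # close to 0
```

The thesis's modifications target exactly the knobs visible here (selection, crossover, mutation schedule) to reach good solutions in fewer generations and with less hand-tuning.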
265

Security challenges within Software Defined Networks

Sund, Gabriel, Ahmed, Haroon January 2014 (has links)
A large amount of today's communication occurs within data centers, where a large number of virtual servers (each running one or more virtual machines) provide service providers with the infrastructure needed for their applications and services. In this thesis, we look at the next step in the virtualization revolution: the virtualized network. Software-defined networking (SDN) is a relatively new concept that is moving the field towards a more software-based solution to networking. Today, when a packet is forwarded through a network of routers, decisions are made at each router as to which router is the next-hop destination for the packet. With SDN, these decisions are made by a centralized SDN controller that decides upon the best path and instructs the devices along this path as to what action each should perform. Taking SDN to its extreme minimizes the number of physical network components and increases the number of virtualized ones. The reasons behind this trend are several, the most prominent being simplified processing and network administration, a greater degree of automation, increased flexibility, and shorter provisioning times. This in turn reduces both operating and capital expenditures for data center owners, which drives the further development of this technology. Virtualization has been gaining ground in the last decade, although it was first introduced in the 1970s with server virtualization, which offered the ability to create several virtual server instances on one physical server. Today we have already taken small steps towards a virtualized network through the virtualization of network equipment such as switches, routers, and firewalls. Common to all of these virtualization technologies is that, in their early stages, they have encountered trust issues and general concerns as to whether software-based solutions are as rugged and reliable as hardware-based ones.
SDN has also encountered these issues, and discussion of them continues among both believers and skeptics. Concerns about trust remain a problem for the growing number of cloud-based services, where multi-tenant deployments may lead to loss of personal integrity and other security risks. As a relatively new technology, SDN is still immature and has a number of vulnerabilities; as with most software-based solutions, the potential for security risks increases. This thesis investigates how denial-of-service (DoS) attacks affect an SDN environment with a single-threaded controller, described in text and via simulations. The results of our investigations concerning trust in a multi-tenancy SDN environment suggest that standardization and clear service level agreements are necessary to consolidate customers' confidence. Attracting small groups of customers to participate in use cases in the initial stages of implementation can generate valuable support for a broader implementation of SDN in the underlying infrastructure. With regard to denial-of-service attacks, our conclusion is that attackers can target the centralized SDN controller and thus negatively affect most of the network infrastructure, because the entire infrastructure directly depends upon a functioning SDN controller. SDN introduces new vulnerabilities, which is natural as it is a relatively new technology. Therefore, SDN needs to be thoroughly tested and examined before widespread deployment.
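Why a flood against the controller hurts the whole network can be shown with a toy discrete-time queue model (our own simplification, not the thesis's simulation setup): a single-threaded controller serves a fixed number of flow-setup requests per tick, and any backlog beyond its queue limit is dropped, which in a real network would stall new flows everywhere.

```python
from collections import deque

def simulate_controller(arrival_rates, service_rate, queue_limit):
    """Toy model of a single-threaded SDN controller: each tick it
    serves up to `service_rate` flow-setup requests; arrivals beyond
    `queue_limit` of backlog are dropped. Returns total drops."""
    queue = deque()
    dropped = 0
    for rate in arrival_rates:
        for req in range(rate):
            if len(queue) < queue_limit:
                queue.append(req)
            else:
                dropped += 1
        for _ in range(min(service_rate, len(queue))):
            queue.popleft()
    return dropped

normal = [80] * 50                  # below capacity: no loss
flood = [80] * 25 + [500] * 25      # DoS burst overwhelms the controller
normal_drops = simulate_controller(normal, service_rate=100, queue_limit=200)
flood_drops = simulate_controller(flood, service_rate=100, queue_limit=200)
print(normal_drops, flood_drops)
```

Under normal load nothing is lost; once the flood exceeds the controller's service rate, most flow-setup requests, legitimate ones included, are dropped, which is the single-point-of-failure argument made above.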
266

Information Hiding in Networks : Covert Channels

Ríos del Pozo, Rubén January 2007 (has links)
<p>Covert channels have existed for more than twenty years now. Although they received little attention in their early years, they are being studied more and more nowadays. This work focuses on network covert channels: it attempts to give an overview of their basics and then analyses several existing implementations that may compromise the security perimeter of a corporate network. The features under study are the bandwidth provided by the channel and the ease of detection. The studied tools have turned out to be unreliable and easy to detect with current detection techniques in most cases, and the bandwidth they provide is usually moderate, but they might pose a threat if not taken into consideration.</p>
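The simplest network covert channel discussed in such surveys is a timing channel: information is smuggled in the gaps between packets rather than in their contents. A minimal sketch (pure simulation, no actual network traffic; delay values are arbitrary):

```python
def encode_delays(bits, short=0.01, long=0.05):
    """Encode each bit as an inter-packet delay:
    a short gap means 0, a long gap means 1."""
    return [long if b else short for b in bits]

def decode_delays(delays, threshold=0.03):
    """Receiver recovers bits by comparing observed gaps
    against a midpoint threshold."""
    return [1 if d > threshold else 0 for d in delays]

message = [1, 0, 1, 1, 0, 0, 1]
channel = encode_delays(message)
# A real sender would sleep() between packets; path jitter is what
# makes such channels both unreliable and detectable in practice.
print(decode_delays(channel) == message)  # True
```

The two properties the work measures fall straight out of this model: bandwidth is roughly one bit per inter-packet gap (moderate at best), and the artificial bimodal delay distribution is exactly what detection tools look for.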
267

Uma proposta para medição de complexidade e estimação de custos de segurança em procedimentos de tecnologia da informação / An approach to measure the complexity and estimate the cost associated to Information Technology Security Procedures

Moura, Giovane Cesar Moreira January 2008 (has links)
IT security has become a major concern for organizations over recent years. However, it does not come without large investments, in both the acquisition of tools to satisfy particular security requirements and the complex procedures needed to deploy and maintain a protected infrastructure.
The scientific community has proposed in the recent past models and techniques to estimate the complexity of configuration procedures, aware that they represent a significant operational cost, often dominating the total cost of ownership. However, despite the central role played by security within this context, it has not been subject to any investigation to date. To address this issue, we apply a model of configuration complexity proposed in the literature in order to estimate the impact of security on the complexity of IT procedures. Our proposal has been materialized through a prototype implementation of a complexity scorer system called Security Complexity Analyzer (SCA). To prove the concept and technical feasibility of our proposal, we have used the SCA to evaluate real-life security scenarios. In addition, we have conducted a study to investigate the relation between the metrics proposed in the model and the time spent by administrators while executing security procedures, using a quantitative model built with multiple regression analysis, in order to predict the costs associated with security.
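The regression step above can be sketched with ordinary least squares on a single predictor (the thesis uses multiple regression over several complexity metrics; the data points here are entirely hypothetical):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for one predictor:
    returns (intercept, slope) minimizing squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

# Hypothetical data: complexity score of a security procedure vs.
# minutes an administrator needed to execute it.
complexity = [3, 5, 8, 12, 15]
minutes = [11, 17, 26, 38, 47]
b0, b1 = fit_linear(complexity, minutes)
predict = lambda c: b0 + b1 * c
print(round(predict(10), 1))  # -> 32.0
```

Once fitted, the model turns a complexity score computed by an SCA-like tool into a time (and hence cost) estimate before the procedure is ever executed.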
268

Design and performance analysis of optical attocell networks

Yin, Liang January 2018 (has links)
The exponentially increasing demand for high-speed wireless communications will no longer be satisfied by traditional radio frequency (RF) technology in the near future, due to its limited and over-utilized spectrum. To resolve this imminent issue, industrial and research communities have been looking into alternative communication technologies. Among them, visible light communication (VLC) has attracted much attention because it utilizes an unlicensed, free and safe spectrum whose bandwidth is thousands of times larger than the entire RF spectrum. Moreover, VLC can be integrated into existing lighting systems to offer a dual-purpose, cost-effective and energy-efficient solution for next-generation small-cell networks (SCNs), giving birth to the concept of optical attocell networks. Most relevant works in the literature rely on system simulations to quantify the performance of attocell networks, which suffer from high computational complexity and provide limited insight into the network. Mathematical tools, on the other hand, are more tractable and scalable, and are shown to closely approximate practical systems. The present work utilizes stochastic geometry for the downlink evaluation of optical attocell networks, where co-channel interference (CCI) surpasses noise and becomes the limiting factor of link throughput. By studying the moment generating function (MGF) of the aggregate interference, a theoretical framework for modeling the distribution of the signal-to-interference-plus-noise ratio (SINR) is presented, which allows important performance metrics such as the coverage probability and link throughput to be derived. Depending on the source of interference, CCI can be classified into two categories: inter-cell interference (ICI) and intra-cell interference.
In this work, both types of interference are characterized, and based on this characterization, effective interference mitigation techniques such as coordinated multipoint (CoMP), power-domain multiplexing and successive interference cancellation (SIC) are devised. The proposed mathematical framework is applicable to attocell networks with and without such interference mitigation techniques. Compared to RF networks, optical attocell networks are inherently more secure at the physical layer because visible light does not penetrate opaque walls. This work analytically quantifies the physical-layer security of attocell networks from an information-theoretic point of view. Secrecy enhancement techniques such as AP cooperation and eavesdropper-free protected zones are also discussed. It is shown that, compared to AP cooperation, implementing secrecy protected zones is more effective and can contribute significantly to network security.
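The SINR coverage probability that the framework derives analytically can also be approximated by Monte Carlo simulation. The toy model below is our own drastic simplification (a 1-D line of ceiling APs, a (d^2 + h^2)^-2 received-power law standing in for the Lambertian LOS model, arbitrary AP spacing and mounting height), meant only to show how coverage probability is estimated:

```python
import random

random.seed(2)

def coverage_probability(ap_spacing, threshold_db, noise, trials=2000):
    """Monte Carlo SINR coverage for a 1-D line of optical APs:
    the user connects to the nearest AP and all others interfere.
    Received power follows a toy (d^2 + h^2)^-2 law with h = 2 m."""
    threshold = 10 ** (threshold_db / 10)
    covered = 0
    for _ in range(trials):
        # User uniformly placed inside the central attocell.
        u = random.uniform(-ap_spacing / 2, ap_spacing / 2)
        aps = [k * ap_spacing for k in range(-10, 11)]
        powers = [((a - u) ** 2 + 4) ** -2 for a in aps]
        signal = max(powers)                 # nearest AP serves
        interference = sum(powers) - signal  # the rest interfere
        if signal / (interference + noise) > threshold:
            covered += 1
    return covered / trials

p = coverage_probability(ap_spacing=4.0, threshold_db=0, noise=1e-6)
print(p)
```

Even in this toy model the abstract's point is visible: with negligible noise, coverage is limited by co-channel interference from neighboring attocells, which is what CoMP and SIC are devised to mitigate.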
270

Establishing the Software-Defined Networking Based Defensive System in Clouds

January 2014 (has links)
Cloud computing is regarded as one of the most revolutionary technologies of the past decades. It provides scalable, flexible and secure resource provisioning services, which is why users prefer to migrate their locally processed workloads onto remote clouds. Besides commercial cloud systems (e.g., Amazon EC2), ProtoGENI and PlanetLab have further improved the current Internet-based resource provisioning system by allowing end users to construct a virtual networking environment. Achieving a similar goal, but with more flexible and efficient performance, I present the design and implementation of MobiCloud, a geo-distributed mobile cloud computing platform, and G-PLaNE, which focuses on how to construct the virtual networking environment upon a self-designed resource provisioning system consisting of multiple geo-distributed clusters. Furthermore, I conduct a comprehensive study to lay out existing Mobile Cloud Computing (MCC) service models and corresponding representative related work. A new user-centric mobile cloud computing service model is proposed to advance existing mobile cloud computing research. After building MobiCloud and G-PLaNE and studying the MCC model, I use Software Defined Networking (SDN) approaches to enhance system security in the cloud virtual networking environment. I present an OpenFlow-based IPS solution called SDNIPS that includes a new IPS architecture based on Open vSwitch (OVS) in the cloud software-based networking environment. It is enabled with elastic service provisioning and Network Reconfiguration (NR) features based on the POX controller. Finally, SDNIPS demonstrates its feasibility and shows more efficiency than traditional approaches through a thorough evaluation.
Lastly, I propose an OpenFlow-based defensive module composition framework called CloudArmour that is able to perform query, aggregation, analysis, and control functions over distributed OpenFlow-enabled devices. I propose several modules and use the DDoS attack as an example to illustrate how to compose a comprehensive defensive solution based on the CloudArmour framework. I introduce a total of 20 Python-based CloudArmour APIs. Finally, evaluation results prove the feasibility and efficiency of the CloudArmour framework. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2014
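The query-aggregate-analyze pattern such a framework applies to OpenFlow statistics can be sketched as follows. CloudArmour's actual APIs are not reproduced here; the function name, record format, and rate limit below are our own illustrative stand-ins.

```python
from collections import Counter

def ddos_suspects(flow_records, rate_limit):
    """Aggregation-and-analysis step over flow statistics: count
    packets per source address and flag sources exceeding a rate
    limit. A control module would then push OpenFlow drop rules
    for the flagged sources."""
    counts = Counter(src for src, _dst in flow_records)
    return sorted(src for src, n in counts.items() if n > rate_limit)

# Flow records as (source, destination) pairs polled from switches.
records = [("10.0.0.1", "h1")] * 5 + [("10.0.0.66", "h1")] * 500
suspects = ddos_suspects(records, rate_limit=100)
print(suspects)  # ['10.0.0.66']
```

Splitting the pipeline into query, aggregation, analysis and control stages is what lets independently written modules be composed into a full defense, which is the framework's central design idea.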
