521

Identifying Resilience Against Social Engineering Attacks

Cerovic, Lazar January 2020 (has links)
Social engineering (SE) attacks are among the most common cyber attacks and frauds, causing large economic damage to individuals, companies and governments alike. The attacks are hard to protect against, since SE attacks are based on exploiting human weaknesses. The goal of this study is to identify indicators of resilience against SE attacks from individual computer-space data, such as network settings, social media profiles, web browsing behaviour and more. This study is based on qualitative methods to collect, analyse and evaluate data. Resilience is evaluated with models such as the theory of planned behaviour and the big five personality traits, as well as personal and demographic information. Indicators of resilience were found in network settings such as service set identifiers (SSID) and routers, web history, social media use and more. The framework developed in this study could be expanded with more aspects of individual data and different evaluation criteria. Important societal events and contexts could also be an interesting factor related to the subject. Further studies could be done on this subject with tools such as artificial intelligence and machine learning.
522

War in Pakistan: The Effects of the Pakistani-American War on Terror in Pakistan

Qureshi, Akhtar 01 May 2011 (has links)
This research paper investigates the current turmoil in Pakistan and how much of it has been caused by the joint American-Pakistani War on Terror. The United States' portion of the War on Terror is in Afghanistan, against the Al-Qaeda and Taliban forces, and began after the September 11th attacks in 2001; it also extends into Pakistan with unmanned drone attacks. Pakistan's portion of this war includes support to the U.S. in Afghanistan and military campaigns within its own borders against Taliban forces. Taliban forces have fought back against Pakistan with terrorist attacks and bombings that continue to ravage the nation. There have been a number of consequences of this war for Pakistani society; one of particular importance to the U.S. is increased anti-American sentiment. The war has also resulted in weak and widely unpopular leaders. The final major consequence this study examines is the increased conflict among the many ethnicities within Pakistan. The consequences of this war have had an effect on local, regional, American, and international politics.
523

BDD Based Synthesis Flow for Design of DPA Resistant Cryptographic Circuits

Chakkaravarthy, Manoj 19 April 2012 (has links)
No description available.
524

Practical Mitigations Against Memory Corruption and Transient Execution Attacks

Ismail, Mohannad Adel Abdelmoniem Ahmed 31 May 2024 (has links)
Memory corruption attacks have existed in C and C++ for more than 30 years, and over the years many defenses have been proposed. In addition, a new class of attacks, Spectre, has emerged that abuses speculative execution to leak secrets and sensitive data through micro-architectural side channels. Many defenses have been proposed to mitigate Spectre as well. However, with every new defense a new attack emerges, and then a new defense is proposed; this is an ongoing cycle between attackers and defenders. Many defenses exist for many different attack avenues, but many suffer from either practicality or effectiveness issues, and security researchers need to balance these compromises. Recently, many hardware vendors, such as Intel and ARM, have realized the extent of the memory corruption problem and have developed hardware security mechanisms that can be utilized to defend against these attacks. ARM, in particular, has released a mechanism called Pointer Authentication, whose main intended use is to protect the integrity of pointers by generating a Pointer Authentication Code (PAC), using a cryptographic hash function as a Message Authentication Code (MAC), and placing it in the top unused bits of a 64-bit pointer. Placing the PAC in the top unused bits of the pointer changes its semantics, and the pointer cannot be used unless it is properly authenticated. Hardware security features such as PAC are merely mechanisms, not full-fledged defenses, and their effectiveness and practicality depend on how they are utilized. Naive use of these features does not alleviate the issues that exist in many state-of-the-art software defenses. A defense that utilizes these hardware security features must be designed with both practicality and effectiveness in mind, and having both is now a realistic possibility with these new hardware security features. This dissertation describes how hardware security features, namely ARM PAC, can be used to build effective and practical defense mechanisms. It first describes my past work, PACTight, a PAC-based defense mechanism that defends against control-flow hijacking attacks. PACTight defines three security properties of a pointer that, if achieved, prevent pointers from being tampered with: 1) unforgeability: a pointer p should always point to its legitimate object; 2) non-copyability: a pointer p can only be used when it is at its specific legitimate location; 3) non-dangling: a pointer p cannot be used after it has been freed. PACTight tightly seals pointers and guarantees that a sealed pointer cannot be forged, copied, or dangling. PACTight protects all sensitive pointers, that is, code pointers and pointers that point to code pointers. This completely prevents control-flow hijacking attacks, all while having low performance overhead. In addition, this dissertation proposes Scope-Type Integrity (STI), a new defense policy that enforces that pointers conform to the programmer's intent by utilizing scope, type, and permission information. STI collects information offline about the type, scope, and permission (read/write) of every pointer in the program. This information can then be used at runtime to ensure that pointers comply with their intended purpose. This allows STI to defeat advanced pointer attacks, since these attacks typically violate either the scope, type, or permission. We present Runtime Scope-Type Integrity (RSTI).
RSTI leverages ARM Pointer Authentication (PA) to generate Pointer Authentication Codes (PACs), based on the information from STI, and places these PACs in the top bits of the pointer. At runtime, the PACs are then checked to ensure that pointer usage complies with STI. RSTI overcomes two drawbacks present in PACTight: 1) PACTight relied on large external metadata for protection, whereas RSTI uses very little metadata; 2) PACTight protected only a subset of pointers, whereas RSTI protects all pointers in a program. RSTI has large coverage with relatively low overhead. This dissertation also proposes sPACtre, a novel defense mechanism that aims to prevent Spectre control-flow attacks on existing hardware. sPACtre is an ARM-based defense mechanism that prevents Spectre control-flow attacks by relying on ARM's Pointer Authentication hardware security feature, on annotations added to the program identifying the secrets that need to be protected from leakage, and on a dynamic tag-based bounds-checking mechanism for arrays. We show that sPACtre can defend against these attacks. We evaluate sPACtre on a variety of cryptographic libraries with several cryptographic algorithms, as well as a synthetic benchmark, and show that it is efficient and has low performance overhead. Finally, this dissertation explains a new direction for utilizing hardware security features to protect energy-harvesting devices from checkpoint-recovery errors and malicious attackers. / Doctor of Philosophy / In recent years, cyber-threats against computer systems have become more and more prevalent. In spite of many recent advancements in defenses, these attacks are becoming more threatening. However, many of these defenses are not deployed in the real world because of their high performance overhead; this limited efficiency is not acceptable in practice. In addition, many of these defenses have limited coverage and do not cover a wide variety of attacks, which makes the performance tradeoff even less convincing. Thus, there is a need for effective and practical defenses that can cover a wide variety of attacks. This dissertation first provides a comprehensive overview of the current state-of-the-art and most dangerous attacks. More specifically, three types of attacks are examined. First, control-flow hijacking attacks, which divert the proper execution of a program to a malicious execution. Second, data-oriented attacks, which leak sensitive data in a program. Third, Spectre attacks, which rely on supposedly hidden processor features to leak sensitive data; these "hidden" features are not entirely hidden. This dissertation explains these attacks in detail, along with the corresponding state-of-the-art defenses that have been proposed by the security research community to mitigate them. The dissertation then discusses effective and practical defense mechanisms that can mitigate these attacks. It discusses past work, PACTight, as well as its contributions, RSTI and sPACtre, presenting the full design, threat model, implementation, security evaluation and performance evaluation of each of these mechanisms. The dissertation relies on insights derived from the nature of the attacks and on compiler techniques. A compiler is a tool that transforms human-written code into machine code that is understandable by the computer; the compiler can be modified and used to make programs more secure.
The past work, PACTight, is a defense mechanism that defends against the first type of attack, control-flow hijacking, by preventing an attacker from abusing specific code in the program to divert the program to a malicious execution. This dissertation then presents RSTI, a new defense mechanism that overcomes the limitations of PACTight and extends it to cover data-oriented attacks, preventing attackers from leaking sensitive data from the program. In addition, this dissertation presents sPACtre, a novel defense mechanism that defends against Spectre attacks and prevents an attacker from abusing a processor's hidden features. Finally, this dissertation briefly discusses a possible future direction to protect a different class of devices, referred to as energy-harvesting devices, from attackers.
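To make the pointer-authentication idea in the entry above more concrete, here is a purely illustrative software sketch. It is not the dissertation's implementation and not how real ARM PAC computes its MAC in hardware; the key, bit layout and context values are all made-up assumptions. The sketch packs a truncated keyed MAC over a pointer's address and a context value into the unused top bits of a 64-bit pointer and re-checks it before use.

```python
# Illustrative sketch only: emulate the idea of a Pointer Authentication Code (PAC)
# by packing a truncated keyed MAC into the unused top bits of a 64-bit pointer.
# Real ARM PAC is computed in hardware with per-context keys; the key, bit layout
# and context values below are made up for illustration.
import hashlib
import hmac

KEY = b"demo-key"        # hypothetical signing key
ADDR_BITS = 48           # assume a 48-bit virtual address space
ADDR_MASK = (1 << ADDR_BITS) - 1
PAC_MASK = ((1 << 64) - 1) ^ ADDR_MASK   # the unused top 16 bits

def sign_pointer(ptr: int, context: int) -> int:
    """Return ptr with a truncated MAC over (address, context) placed in its top bits."""
    msg = (ptr & ADDR_MASK).to_bytes(8, "little") + context.to_bytes(8, "little")
    mac = int.from_bytes(hmac.new(KEY, msg, hashlib.sha256).digest()[:8], "little")
    return (ptr & ADDR_MASK) | ((mac << ADDR_BITS) & PAC_MASK)

def authenticate_pointer(signed: int, context: int) -> int:
    """Strip and verify the PAC; raise if the pointer was forged or used in the wrong context."""
    plain = signed & ADDR_MASK
    if sign_pointer(plain, context) != signed:
        raise ValueError("pointer authentication failure")
    return plain

p = 0x0000_7FFF_DEAD_BEEF
signed = sign_pointer(p, context=0x1234)            # context could encode scope/type
assert authenticate_pointer(signed, context=0x1234) == p
# authenticate_pointer(signed, context=0x9999)      # wrong context -> ValueError
```

On real AArch64 hardware, dedicated sign and authenticate instructions perform this step transparently, and the PAC width depends on the configured virtual-address size.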
525

Anonymizing Faces without Destroying Information

Rosberg, Felix January 2024 (has links)
Anonymization is a broad term, meaning that personal data, or rather data that identifies a person, is redacted or obscured. In the context of video and image data, the most palpable information is the face. Faces barely change compared to other aspects of a person, such as clothes, and we as people already have a strong sense for recognizing faces. Computers are also adroit at recognizing faces, with facial recognition models being exceptionally powerful at identifying and comparing faces. Therefore it is generally considered important to obscure the faces in video and image data when aiming to keep them anonymized. Traditionally this is simply done through blurring or masking, but this destroys useful information such as eye gaze, pose, expression and the fact that it is a face. This is a particular issue, as today our society is data-driven in many aspects. One obvious such aspect is autonomous driving and driver monitoring, where necessary algorithms such as object detectors rely on deep learning to function. Due to the data hunger of deep learning, in conjunction with society's call for privacy and integrity through regulations such as the General Data Protection Regulation (GDPR), anonymization that preserves useful information becomes important. This Thesis investigates the potential and possible limitations of anonymizing faces without destroying the aforementioned useful information. The base approach to achieve this is through face swapping and face manipulation, where the current research focuses on changing the face (or identity) while keeping the original attribute information, all while remaining incorporated and consistent in an image and/or video. Specifically, this Thesis demonstrates how target-oriented and subject-agnostic face swapping methodologies can be utilized for realistic anonymization that preserves attributes. Through this, the Thesis presents several approaches that are: 1) controllable, meaning the proposed models do not naively change the identity; the kind and magnitude of the identity change is adjustable, and thus tunable to guarantee anonymization; 2) subject-agnostic, meaning that the models can handle any identity; 3) fast, meaning that the models run efficiently and thus have the potential of running in real time. The end product consists of an anonymizer that achieved state-of-the-art performance on identity transfer, pose retention and expression retention while providing realism. Apart from identity manipulation, the Thesis demonstrates potential security issues, specifically reconstruction attacks, where a bad-actor model learns convolutional traces/patterns in the anonymized images in such a way that it is able to completely reconstruct the original identity. The bad-actor network is able to do this with simple black-box access to the anonymization model, by constructing a pair-wise dataset of unanonymized and anonymized faces. To alleviate this issue, different defense measures that disrupt the traces in the anonymized image were investigated. The main takeaway is that what qualitatively looks like a convincing way of hiding an identity is not necessarily so, making robust quantitative evaluation important.
526

Scheduling Distributed Real-Time Tasks in Unreliable and Untrustworthy Systems

Han, Kai 06 May 2010 (has links)
In this dissertation, we consider scheduling distributed soft real-time tasks in unreliable (e.g., those with arbitrary node and network failures) and untrustworthy systems (e.g., those with Byzantine node behaviors). We present a distributed real-time scheduling algorithm called Gamma. Gamma considers a distributed (i.e., multi-node) task model where tasks are subject to Time/Utility Function (or TUF) end-to-end time constraints, and the scheduling optimality criterion of maximizing the total accrued utility. The algorithm makes three novel contributions. First, Gamma uses gossip for reliably propagating task scheduling parameters and for discovering task execution nodes. Second, Gamma achieves distributed real-time mutual exclusion in unreliable environments. Third, the algorithm guards against potential disruption of message propagation due to Byzantine attacks using a mechanism called Launcher-Attacker-Infective-Susceptible-Immunized-Removed-Consumer (or LAISIRC). By doing so, the algorithm schedules tasks with probabilistic termination-time satisfactions, despite system unreliability and untrustworthiness. We analytically establish several timeliness and non-timeliness properties of the algorithm including probabilistic end-to-end task termination time satisfactions, optimality of message overheads, mutual exclusion guarantees, and the mathematical model of the LAISIRC mechanism. We conducted simulation-based experimental studies and compared Gamma with its competitors. Our experimental studies reveal that Gamma's scheduling algorithm accrues greater utility and satisfies a greater number of deadlines than do competitor algorithms (e.g., HVDF) by as much as 47% and 45%, respectively. LAISIRC is more tolerant to Byzantine attacks than competitor protocols (e.g., Path Verification) by obtaining as much as 28% higher correctness ratio. Gamma's mutual exclusion algorithm accrues greater utility than do competitor algorithms (e.g., EDF-Sigma) by as much as 25%. Further, we implemented the basic Gamma algorithm in the Emulab/ChronOS 250-node testbed, and measured the algorithm's performance. Our implementation measurements validate our theoretical analysis and the algorithm's effectiveness and robustness. / Ph. D.
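The Time/Utility Function (TUF) model that Gamma schedules against can be pictured with a small, hypothetical sketch; this is not the Gamma algorithm itself, and the `Task` fields and the step-shaped TUF below are illustrative assumptions. The idea is that each task accrues utility only if it completes before its termination time, and a utility-accrual scheduler greedily picks the pending task with the highest potential utility density.

```python
# Illustrative sketch only: a toy Time/Utility Function (TUF) and a greedy
# utility-density pick, loosely in the spirit of utility-accrual scheduling.
# All names and shapes here are hypothetical, not taken from the Gamma algorithm.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    remaining_cost: float    # estimated remaining execution time
    termination_time: float  # absolute end-to-end termination time of the TUF
    max_utility: float       # utility accrued if finished before termination_time

def tuf(task: Task, completion_time: float) -> float:
    """Downward-step TUF: full utility before the termination time, zero after."""
    return task.max_utility if completion_time <= task.termination_time else 0.0

def pick_next(tasks: list[Task], now: float):
    """Greedily pick the task with the highest potential utility density
    (expected utility per unit of remaining execution time)."""
    best, best_density = None, 0.0
    for t in tasks:
        expected_utility = tuf(t, now + t.remaining_cost)
        density = expected_utility / max(t.remaining_cost, 1e-9)
        if density > best_density:
            best, best_density = t, density
    return best

if __name__ == "__main__":
    pending = [Task("A", 2.0, 10.0, 8.0), Task("B", 1.0, 3.0, 5.0)]
    print(pick_next(pending, now=0.0).name)  # picks "B" (higher utility density)
```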
527

Usability-Driven Security Enhancements in Person-to-Person Communication

Yadav, Tarun Kumar 01 February 2024 (has links) (PDF)
In the contemporary digital landscape, ensuring secure communication amid widespread data exchange is imperative. This dissertation focuses on enhancing the security and privacy of end-to-end encryption (E2EE) applications while maintaining or improving usability. The dissertation first investigates and proposes improvements in two areas of existing E2EE applications: countering man-in-the-middle and impersonation attacks through automated key verification and studying user perceptions of cryptographic deniability. Insights from privacy-conscious users reveal concerns about the lack of E2EE support, app siloing, and data accessibility by client apps. To address these issues, we propose an innovative user-controlled encryption system, enabling encryption before data reaches the client app. Finally, the dissertation evaluates local threats in the FIDO2 protocol and devises defenses against these risks. Additionally, it explores streamlining FIDO2 authentication management across multiple websites for user convenience and security.
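As a rough illustration of the kind of key comparison that the automated verification mentioned above is meant to streamline (this is not the dissertation's protocol), the sketch below derives a short, human-comparable code from both parties' public identity keys, in the spirit of the "safety numbers" found in some E2EE messengers. The key values and code format are made up.

```python
# Illustrative sketch only: derive a short comparison code from two identity
# public keys, the kind of value E2EE apps ask users to verify out-of-band.
# Keys and formatting here are hypothetical.
import hashlib

def safety_code(key_a: bytes, key_b: bytes, digits: int = 12) -> str:
    """Order-independent short code both parties can read aloud and compare."""
    material = b"".join(sorted((key_a, key_b)))
    digest = hashlib.sha256(material).digest()
    number = int.from_bytes(digest[:8], "big") % (10 ** digits)
    code = f"{number:0{digits}d}"
    return " ".join(code[i:i + 4] for i in range(0, digits, 4))

alice_pub = bytes.fromhex("aa" * 32)   # placeholder public keys
bob_pub = bytes.fromhex("bb" * 32)
print(safety_code(alice_pub, bob_pub))  # both sides compute the same code
```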
528

Data Analytics for Statistical Learning

Komolafe, Tomilayo A. 05 February 2019 (has links)
The prevalence of big data has rapidly changed the usage and mechanisms of data analytics within organizations. Big data is a widely used term without a clear definition. The difference between big data and traditional data can be characterized by four Vs: velocity (speed at which data is generated), volume (amount of data generated), variety (the data can take on different forms), and veracity (the data may be of poor/unknown quality). As many industries begin to recognize the value of big data, organizations try to capture it through means such as side-channel data in a manufacturing operation, unstructured text data reported by healthcare personnel, various demographic information of households from census surveys, and the range of communication data that defines communities and social networks. Big data analytics generally follows this framework: first, a digitized process generates a stream of data; this raw data stream is pre-processed to convert the data into a usable format; finally, the pre-processed data is analyzed using statistical tools. In this stage, called statistical learning of the data, analysts have two main objectives: (1) develop a statistical model that captures the behavior of the process from a sample of the data, and (2) identify anomalies in the process. However, several open challenges still exist in this framework for big data analytics. Recently, data types such as free-text data are also being captured. Although many established processing techniques exist for other data types, free-text data comes from a wide range of individuals and is subject to syntax, grammar, language, and colloquialisms that require substantially different processing approaches. Once the data is processed, open challenges still exist in the statistical learning step of understanding the data. Statistical learning aims to satisfy two objectives: (1) develop a model that highlights general patterns in the data, and (2) create a signaling mechanism to identify whether outliers are present in the data. Statistical modeling is widely utilized, as researchers have created a variety of statistical models to explain everyday phenomena such as energy usage behavior, traffic patterns, and stock market behavior, among others. However, new applications of big data with increasingly varied designs present interesting challenges. Consider the example of free-text analysis posed above: there is renewed interest in modeling free-text narratives from sources such as online reviews, customer complaints, or patient safety event reports into intuitive themes or topics. As previously mentioned, documents describing the same phenomena can vary widely in their word usage and structure. Another recent area of interest in statistical learning is using the environmental conditions in which people live, work, and grow to infer their quality of life. It is well established that social factors play a role in overall health outcomes; however, clinical application of these social determinants of health is a recent and open problem. These examples are just a few of many wherein new applications of big data pose complex challenges requiring thoughtful and inventive approaches to processing, analyzing, and modeling data. Although a large body of research exists in the area of anomaly detection, increasingly complicated data sources (such as side-channel data or network-based data) present equally convoluted challenges.
For effective anomaly detection, analysts define parameters and rules so that, when large collections of raw data are aggregated, pieces of data that do not conform are easily noticed and flagged. In this work, I investigate the different steps of the data analytics framework and propose improvements for each step, paired with practical applications, to demonstrate the efficacy of my methods. This work focuses on the healthcare, manufacturing and social-networking industries, but the materials are broad enough to have wide applications across data analytics generally. My main contributions can be summarized as follows:
• In the big data analytics framework, raw data initially goes through a pre-processing step. Although many pre-processing techniques exist, there are several challenges in pre-processing text data, and I develop a pre-processing tool for text data.
• In the next step of the data analytics framework, there are challenges in both statistical modeling and anomaly detection.
  o I address the research area of statistical modeling in two ways:
    - There are open challenges in defining models to characterize text data. I introduce a community extraction model that autonomously aggregates text documents into intuitive communities/groups.
    - In health care, it is well established that social factors play a role in overall health outcomes; however, developing a statistical model that characterizes these relationships is an open research area. I developed statistical models for generalizing relationships between the social determinants of health of a cohort and general medical risk factors.
  o I address the research area of anomaly detection in two ways:
    - A variety of anomaly detection techniques already exist; however, some of these methods lack a rigorous statistical investigation, making them ineffective for a practitioner. I identify critical shortcomings of a proposed network-based anomaly detection technique and introduce methodological improvements.
    - Manufacturing enterprises, which are now more connected than ever, are vulnerable to anomalies in the form of cyber-physical attacks. I developed a sensor-based side-channel technique for anomaly detection in a manufacturing process.
/ PHD / The prevalence of big data has rapidly changed the usage and mechanisms of data analytics within organizations. The fields of manufacturing and healthcare are two examples of industries that are currently undergoing significant transformations due to the rise of big data. The addition of large sensory systems is changing how parts are being manufactured and inspected, and the prevalence of Health Information Technology (HIT) systems is also changing the way healthcare services are delivered. These industries are turning to big data analytics in the hope of acquiring many of the benefits other sectors are experiencing, including reducing cost, improving safety, and boosting productivity. However, many challenges exist along the framework of big data analytics, from pre-processing raw data, to statistical modeling of the data, to identifying anomalies present in the data or process. This work offers significant contributions in each of the aforementioned areas and includes practical real-world applications. Big data analytics generally follows this framework: first, a digitized process generates a stream of data; this raw data stream is pre-processed to convert the data into a usable format; finally, the pre-processed data is analyzed using statistical tools.
In this stage, called ‘statistical learning of the data’, analysts have two main objectives: (1) develop a statistical model that captures the behavior of the process from a sample of the data, and (2) identify anomalies or outliers in the process. In this work, I investigate the different steps of the data analytics framework and propose improvements for each step, paired with practical applications, to demonstrate the efficacy of my methods. This work focuses on the healthcare and manufacturing industries, but the materials are broad enough to have wide applications across data analytics generally. My main contributions can be summarized as follows:
• In the big data analytics framework, raw data initially goes through a pre-processing step. Although many pre-processing techniques exist, there are several challenges in pre-processing text data, and I develop a pre-processing tool for text data.
• In the next step of the data analytics framework, there are challenges in both statistical modeling and anomaly detection.
  o I address the research area of statistical modeling in two ways:
    - There are open challenges in defining models to characterize text data. I introduce a community extraction model that autonomously aggregates text documents into intuitive communities/groups.
    - In health care, it is well established that social factors play a role in overall health outcomes; however, developing a statistical model that characterizes these relationships is an open research area. I developed statistical models for generalizing relationships between the social determinants of health of a cohort and general medical risk factors.
  o I address the research area of anomaly detection in two ways:
    - A variety of anomaly detection techniques already exist; however, some of these methods lack a rigorous statistical investigation, making them ineffective for a practitioner. I identify critical shortcomings of a proposed network-based anomaly detection technique and introduce methodological improvements.
    - Manufacturing enterprises, which are now more connected than ever, are vulnerable to anomalies in the form of cyber-physical attacks. I developed a sensor-based side-channel technique for anomaly detection in a manufacturing process.
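As a toy example of the rule-based anomaly flagging described in the entry above (this is not a method from the dissertation), the sketch below aggregates a batch of readings and flags any value whose modified z-score exceeds a threshold; the threshold and the readings are made-up assumptions.

```python
# Illustrative sketch only: a simple rule for flagging non-conforming points in
# aggregated raw data, using a robust (median/MAD) z-score. Threshold and data
# are hypothetical.
import statistics

def flag_anomalies(values, threshold=3.5):
    """Return indices of values whose modified z-score exceeds the threshold."""
    median = statistics.median(values)
    mad = statistics.median(abs(v - median) for v in values) or 1e-9
    flagged = []
    for i, v in enumerate(values):
        z = 0.6745 * (v - median) / mad   # modified z-score
        if abs(z) > threshold:
            flagged.append(i)
    return flagged

readings = [10.1, 9.8, 10.3, 10.0, 10.2, 42.0, 9.9]   # one obvious outlier
print(flag_anomalies(readings))  # -> [5]
```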
529

Digital Battlegrounds: Evaluating the Impact of Cyber Warfare on International Humanitarian Law in the Russian-Ukraine War

Broekstra, Aaron January 2024 (has links)
This study investigates the legal and ethical challenges posed by cyber warfare in the ongoing Russian-Ukraine war. Cyber warfare represents a departure from traditional conflict dynamics, impacting civilian populations and national security without direct physical confrontation. The significance of this research lies in the inadequacy of the current legal norms governing the rapidly evolving techniques of cyber-attack, which challenge established norms of International Humanitarian Law (IHL). Hence, the research question explores how cyber warfare challenges existing legal and ethical norms for civilian protection, and what the broader implications are for the regulation of modern conflicts. Through a qualitative case study approach, the thesis analyses three cases of Russian cyber-attacks on Ukrainian civilian infrastructure: the 2015 attack on the Ukrainian power grid, the 2023 cyber-attack on Kyivstar, and the 2022 Asylum Ambuscade. Within the simplified legal framework of Hoffman and Rumsey, these cases were analysed using the Tallinn Manual and Mary Kaldor's New Wars theory to highlight the challenges and violations of IHL. The findings conclude that the IHL framework is insufficient for the unique challenges of cyber warfare. Moreover, the study calls for the re-evaluation and updating of international legal norms to keep pace with the constant development of cyber warfare. In all, this thesis showcases the need for enhanced legal standards that can safeguard civilian populations and maintain international security, contributing to the fields of international law and conflict resolution.
530

Using network resources to mitigate volumetric DDoS / Utiliser les ressources réseaux pour atténuer les attaques DDoS volumétriques

Fabre, Pierre-Edouard 13 December 2018 (has links)
Massive Denial of Service attacks represent a genuine threat to Internet services, but they also significantly impact network service providers and even threaten the stability of the Internet. There is a pressing need to control the damage caused by such attacks. Numerous works have been carried out, but none has been able to combine the need for mitigation, the obligation to provide continuity of service, and network constraints. Proposed countermeasures focus on authenticating legitimate traffic, filtering malicious traffic, making better use of the interconnections between network equipment, or absorbing the attack with the help of available resources. In this thesis, we propose a damage control mechanism against volumetric Denial of Service attacks. Based on a novel attack signature and with the help of Multiprotocol Label Switching (MPLS) network functions, we isolate malicious from legitimate traffic and apply constraint-based forwarding to the malicious traffic. The goal is to discard enough attack traffic to sustain network stability while preserving legitimate traffic. The mechanism is aware not only of attack details but also of network resources, especially available bandwidth. Since network operators do not have equal visibility into their networks, we also study the impact of operational constraints on the efficiency of a commonly recommended countermeasure, namely blacklist filtering. The operational criteria are the level of information about the attack and about the traffic inside the network. We then formulate scenarios with which operators can identify. We demonstrate that the blacklist generation algorithm should be carefully chosen to fit the operator context while maximizing the filtering efficiency.
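As a rough, hypothetical illustration of the blacklist trade-off studied in the thesis above (this is not the thesis's algorithm, and all source addresses, traffic volumes and the capacity value are made up), the sketch below fills a limited number of filter entries by greedily selecting the sources that carry the most attack traffic relative to legitimate traffic.

```python
# Illustrative sketch only: greedy blacklist generation under a limited number of
# filter entries, trading discarded attack volume against legitimate collateral.
# Source addresses, volumes, and the capacity value are hypothetical.

def build_blacklist(traffic, capacity):
    """traffic: {source: (attack_bps, legit_bps)}; returns up to `capacity` sources."""
    def score(item):
        _, (attack, legit) = item
        return attack - legit            # favor sources that are mostly attack traffic
    ranked = sorted(traffic.items(), key=score, reverse=True)
    return [src for src, (attack, legit) in ranked[:capacity] if attack > legit]

observed = {
    "198.51.100.7": (900_000, 1_000),     # almost pure attack traffic
    "203.0.113.42": (400_000, 350_000),   # mixed: blacklisting costs legitimate traffic
    "192.0.2.9":    (5_000, 200_000),     # mostly legitimate
}
print(build_blacklist(observed, capacity=2))  # -> ['198.51.100.7', '203.0.113.42']
```

In practice, as the thesis argues, how well such a heuristic performs depends heavily on how much the operator actually knows about the attack and about the traffic inside the network.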
