1

Detecting Drive-by Download Based on Reputation System

Huang, Jhe-Jhun 10 January 2012 (has links)
Drive-by download is a class of network attack that uses a variety of techniques to plant malicious code on victims' computers. It renders traditional intrusion detection systems and firewalls ineffective, because those devices cannot detect web-based threats. Many studies have proposed crawler-based approaches to discover drive-by download sites, but a crawler cannot simulate the real browsing behavior of a user at the moment a drive-by download attack occurs. This study therefore proposes a new approach that detects drive-by downloads by sniffing HTTP flows. It uses a reputation system to improve the efficiency of client honeypots and adapts the honeypots to process raw HTTP flow data. In an experiment conducted in a real network environment, a single client honeypot processed an average of 560,000 successful HTTP access logs per day. Even at peak traffic, the mechanism reduced processing time to 22 hours and detected drive-by download sites that users were actually browsing. The reputation system is applicable to a wide variety of domain names because it does not rely on the online WHOIS database; instead, it builds a machine-learning classification model on 12 features and achieves a correct classification rate of 90.9%. Unlike other reputation-system studies, this work extracts features not only from DNS A records but also from DNS NS records; the experimental results show that the error rate of the new NS-record features is only 19.03%.
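
The abstract describes a reputation system built as a 12-feature machine-learning classifier over DNS A- and NS-record data, but it does not name the algorithm or the individual features. A minimal Python sketch of that kind of pipeline, with placeholder feature names and an assumed random-forest classifier, might look like this:

# Hedged sketch: training a domain-reputation classifier on DNS-derived features.
# The 12 features used in the thesis are not enumerated in the abstract, so the
# feature layout below (record counts, TTL statistics, ...) is an illustrative
# placeholder, and the random-forest model is an assumption, not the author's choice.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Each row: features extracted from one domain's DNS A and NS records, e.g.
# [a_record_count, a_ttl_mean, a_ttl_std, ns_record_count, ns_ttl_mean, ...]
X = np.load("dns_features.npy")   # shape (n_domains, 12), assumed precomputed
y = np.load("dns_labels.npy")     # 1 = drive-by (malicious) domain, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print("correct classification rate:", accuracy_score(y_test, clf.predict(X_test)))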
2

Malicious Web Page Detection Based on Anomaly Behavior

Tsai, Wan-yi 04 February 2009 (has links)
Because of the convenience of the Internet, we rely heavily on it for information searching and sharing, forum discussion, and online services. However, most of the websites we visit are developed by people with limited security knowledge, which leaves many vulnerabilities in web applications. Unfortunately, hackers have successfully exploited these vulnerabilities to inject malicious JavaScript into compromised web pages and trigger drive-by download attacks. Our long-term observation of malicious web pages shows that they exhibit unusual, evasion-oriented behavior that distinguishes them from normal pages. We therefore propose a client-side malicious web page detection mechanism named Web Page Checker (WPC), which traces and analyzes anomalous behavior to identify malicious pages. The experimental results show that our method can identify malicious web pages and alert website visitors efficiently.
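
The abstract does not list the concrete behaviors WPC traces; the following Python sketch only illustrates the general idea of client-side anomaly checks, using common drive-by heuristics (hidden iframes, eval/unescape obfuscation, long encoded blobs) that are assumptions rather than the author's actual feature set:

# Hedged sketch of simple client-side anomaly checks on a fetched page.
# The indicators below are generic drive-by heuristics chosen for illustration.
import re
import requests

SUSPICIOUS_PATTERNS = {
    "hidden_iframe": re.compile(r'<iframe[^>]+(width|height)\s*=\s*["\']?0', re.I),
    "eval_unescape": re.compile(r'\beval\s*\(\s*unescape\s*\(', re.I),
    "long_encoded_blob": re.compile(r'(%u[0-9a-f]{4}){40,}', re.I),
    "document_write_script": re.compile(r'document\.write\s*\(\s*["\']<script', re.I),
}

def score_page(url: str) -> dict:
    """Fetch a page and count how many suspicious behaviors it exhibits."""
    html = requests.get(url, timeout=10).text
    hits = {name: bool(p.search(html)) for name, p in SUSPICIOUS_PATTERNS.items()}
    hits["anomaly_score"] = sum(hits.values())
    return hits

if __name__ == "__main__":
    print(score_page("http://example.com"))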
3

A Multidimensional Analysis of Malicious and Compromised Websites

Canali, Davide 12 February 2014 (has links)
The incredible growth of the World Wide Web has allowed society to create new jobs and marketplaces, as well as new ways of sharing information and money. Unfortunately, however, the web also attracts miscreants who see it as a means of making money by abusing services and other people's property. In this dissertation, we perform a multidimensional analysis of attacks involving malicious or compromised websites, observing that, while web attacks can be very complex in nature, they generally involve four main actors: the attackers, the vulnerable websites hosted on the premises of hosting providers, the web users who end up being victims of attacks, and the security companies that scan the Internet trying to block malicious or compromised websites. We first analyze web attacks from a hosting provider's point of view, showing that, while simple and free security measures should make it possible to detect basic signs of compromise on customers' websites, most hosting providers fail to do so. Second, we switch our point of view to the attackers, studying their modus operandi and goals in a distributed experiment that collected attacks performed against hundreds of vulnerable websites. Third, we observe the behavior of victims of web attacks, based on an analysis of their browsing habits, to assess whether it would be feasible to build risk profiles for web users, similarly to what insurance companies do. Finally, we adopt the point of view of security companies and focus on finding an efficient solution for detecting web attacks that spread through compromised websites and infect thousands of web users every day.
4

Anomaly Detection Through System and Program Behavior Modeling

Xu, Kui 15 December 2014 (has links)
Various vulnerabilities in software applications become easy targets for attackers. The trend constantly observed in the evolution of modern exploits is their growing sophistication in stealthy attacks. Code-reuse attacks such as return-oriented programming allow intruders to execute mal-intended instruction sequences on a victim machine without injecting external code. Successful exploitation leads to hijacked applications or the download of malicious software (drive-by download attacks), which usually happens without users' notice or permission. In this dissertation, we address the problem of host-based system anomaly detection, specifically by predicting the expected behaviors of programs and detecting run-time deviations and anomalies. We first introduce an approach for detecting drive-by download attacks, one of the major vectors for malware infection. Our tool enforces the dependencies between user actions and system events, such as file-system access and process execution. It can be used to provide real-time protection of a personal computer, as well as to diagnose and evaluate untrusted websites for forensic purposes. We perform an extensive experimental evaluation, including a user study with 21 participants, thousands of legitimate websites (for testing false alarms), 84 malicious websites in the wild, and lab-reproduced exploits. Our solution demonstrates a usable host-based framework for controlling and enforcing access to system resources. Second, we present a new anomaly-based detection technique that probabilistically models and learns a program's control flows for high-precision behavioral reasoning and monitoring. Existing solutions suffer either from incomplete behavioral modeling (dynamic models) or from overestimating the likelihood of call occurrences (static models). We introduce a new probabilistic anomaly detection method for modeling program behaviors; its uniqueness is the ability to quantify the static control flow in programs and to integrate that control-flow information into probabilistic machine learning algorithms, which significantly improves detection accuracy. We observed an 11- to 28-fold improvement in detection accuracy compared to state-of-the-art HMM-based anomaly models. We further integrate context information into our detection model, achieving both strong flow-sensitivity and context-sensitivity. Our context-sensitive approach gives, on average, more than a 10-fold improvement for system-call monitoring and three orders of magnitude for library-call monitoring over existing regular HMM methods. Evaluated with a large number of program traces and real-world exploits, our findings confirm that probabilistic modeling of program dependences provides a significant source of behavior information for building high-precision models for real-time system monitoring. Abnormal traces (obtained by reproducing exploits and synthesizing abnormal traces) are well distinguished from normal traces by our model.
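
As a rough illustration of probabilistic behavior modeling over system-call traces, the following Python sketch trains a first-order transition model on normal traces and scores new traces by average negative log-likelihood. This is a simplified baseline in the spirit of the HMM models the abstract compares against, not the thesis's control-flow-aware method:

# Hedged sketch: first-order Markov model over system-call sequences for
# anomaly scoring. Simplified baseline for illustration only.
import math
from collections import defaultdict

def train_transition_model(traces):
    """Estimate P(next_call | current_call) from a list of normal syscall traces."""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for cur, nxt in zip(trace, trace[1:]):
            counts[cur][nxt] += 1
    model = {}
    for cur, nxts in counts.items():
        total = sum(nxts.values())
        model[cur] = {nxt: n / total for nxt, n in nxts.items()}
    return model

def anomaly_score(model, trace, floor=1e-6):
    """Average negative log-likelihood of a trace; higher means more anomalous."""
    nll = 0.0
    for cur, nxt in zip(trace, trace[1:]):
        p = model.get(cur, {}).get(nxt, floor)
        nll -= math.log(p)
    return nll / max(len(trace) - 1, 1)

# Toy usage; real input would be recorded system-call sequences.
normal = [["open", "read", "close"], ["open", "read", "read", "close"]]
model = train_transition_model(normal)
print(anomaly_score(model, ["open", "read", "close"]))    # low score, expected behavior
print(anomaly_score(model, ["open", "exec", "socket"]))   # high score, deviating behavior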
5

Detection of Malicious Websites using Machine Learning

Šulák, Ladislav January 2018 (has links)
This thesis deals with the problem of malicious code on the web, focusing on the analysis and detection of malicious client-side JavaScript using machine learning. The proposed approach uses both known and novel observations about the differences between malicious and legitimate samples, and it has the potential to detect new exploits as well as zero-day attacks. A detection system based on machine-learning models was implemented, and the performance of the models was evaluated with the F1 score over several experiments. According to the experiments, decision trees proved to be the most effective option. The most effective model was an AdaBoost classifier, which achieved an F1 score of 99.16% while working with 200 instances of randomized decision trees based on the Extra-Trees algorithm. A multi-layer perceptron was the second-best model, with an F1 score of 97.94%.
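
The best-performing configuration reported above (AdaBoost over 200 Extra-Trees-style randomized decision trees, evaluated with the F1 score) could be set up in scikit-learn roughly as follows; the JavaScript feature extraction is not described in the abstract, so the input arrays are placeholders and the toolkit choice is an assumption:

# Hedged sketch: AdaBoost ensemble of 200 randomized (Extra-Trees-style) trees,
# scored with the F1 metric as in the abstract. Feature extraction is assumed
# to have been done elsewhere.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import ExtraTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X = np.load("js_features.npy")   # per-sample features extracted from JavaScript code
y = np.load("js_labels.npy")     # 1 = malicious, 0 = legitimate

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Note: scikit-learn versions before 1.2 name this parameter `base_estimator`.
clf = AdaBoostClassifier(estimator=ExtraTreeClassifier(), n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print("F1 score:", f1_score(y_test, clf.predict(X_test)))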
