21

A MACHINE LEARNING BASED WEB SERVICE FOR MALICIOUS URL DETECTION IN A BROWSER

Hafiz Muhammad Junaid Khan (8119418) 12 December 2019 (has links)
Malicious URLs pose serious cyber-security threats to Internet users, so it is critical to detect them and block user access. In recent years, several machine learning techniques have been proposed to differentiate malicious URLs from benign ones: machine learning algorithms learn trends and patterns in a dataset and use them to identify anomalies. In this work, we attempt to find generic features for detecting malicious URLs by analyzing two publicly available malicious URL datasets. We first identify a list of substantial features that can be used to classify all types of malicious URLs, then select the most significant lexical features using Chi-Square and ANOVA based statistical tests. The effectiveness of these feature sets is tested with a combination of single and ensemble machine learning algorithms. We build a real-time, machine learning based malicious URL detection system as a web service, together with a Chrome extension that intercepts the browser's URL requests and sends them to the web service for analysis; the service classifies each URL as benign or malicious using the saved ML model. We also evaluate the performance of the web service to test whether it is scalable.
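The entry does not enumerate the lexical features the thesis selects. As an illustrative sketch only, the following shows the kind of simple lexical features (lengths, digit counts, subdomain counts) commonly extracted from a URL before statistical feature selection; the feature set and names here are assumptions, not the thesis's actual list.

```python
from urllib.parse import urlparse

def lexical_features(url: str) -> dict:
    """Extract simple lexical features of the kind often fed to
    malicious-URL classifiers (illustrative, not exhaustive)."""
    parsed = urlparse(url if "://" in url else "http://" + url)
    host = parsed.hostname or ""
    return {
        "url_length": len(url),
        "host_length": len(host),
        "num_digits": sum(c.isdigit() for c in url),
        "num_special": sum(not c.isalnum() for c in url),
        "num_subdomains": max(host.count(".") - 1, 0),
        "has_ip_host": host.replace(".", "").isdigit(),
        "path_depth": parsed.path.count("/"),
    }

feats = lexical_features("http://198.51.100.7/login/update/verify.php?id=42")
```

A vector of such features per URL is what a Chi-Square or ANOVA test would then rank for significance against the class labels.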
22

Using Machine Learning to Detect Malicious URLs

Cheng, Aidan 01 January 2017 (has links)
There is a need for a better predictive model to reduce the number of malicious URLs being sent through emails. Such a system should learn from existing metadata about URLs, and the ideal solution would also learn from its own predictions: for example, if it predicts a URL to be malicious and the sandboxing environment later deems that URL safe, the predictor should refine its model to account for this data. The problem, then, is to construct a model with these characteristics that can make predictions for the vast number of URLs being processed. Given that the current system does not employ machine learning methods, we investigate multiple such models and summarize which of them might be worth pursuing at large scale.
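The feedback loop described above — predict, receive the sandbox's verdict, refine — can be sketched as a tiny online learner. This is a minimal perceptron-style illustration of the idea, not the model the thesis evaluates; the feature encoding and learning rate are assumptions.

```python
class OnlineURLPredictor:
    """Tiny online perceptron: predicts from numeric URL features, then
    updates its weights when the sandbox returns the true verdict."""

    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        score = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1 if score > 0 else 0   # 1 = malicious, 0 = benign

    def feedback(self, x, verdict):
        """Refine the model with the sandbox's ground-truth verdict."""
        error = verdict - self.predict(x)
        if error:
            self.w = [wi + self.lr * error * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * error

p = OnlineURLPredictor(2)
p.feedback([1.0, 0.0], 1)   # sandbox says malicious
p.feedback([0.0, 1.0], 0)   # sandbox says safe
```

After the two corrections, the predictor separates the two example feature vectors, which is exactly the "learn from its predictions" behavior the abstract calls for.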
23

Dynamic Analysis of Malicious Software

Calvet, Joan 23 August 2013 (has links)
The main goal of this thesis is the development of malware analysis methods that help human analysts better comprehend the threat malware represents. The first contribution is a large-scale, in-depth analysis of malware protection techniques: we studied hundreds of malware samples, carefully selected according to their threat level. By automatically measuring a set of original characteristics, we demonstrate the existence of a particularly prevalent protection model in these programs, based on self-modifying code and a strict delimitation between protection code and payload code. We then developed a method for identifying cryptographic implementations in protected machine-language programs. We validated the approach by identifying several implementations of cryptographic algorithms, the majority of which are invisible to existing tools, even inside particularly obscure malware protection schemes. Finally, we developed what is, to our knowledge, the first emulation environment for botnets involving several thousand machines. With it, we showed that exploiting a vulnerability in the peer-to-peer protocol of the Waledac botnet makes it possible to take control of the network.
24

Detecting Malicious Campaigns in Crowdsourcing Platforms

Choi, Hongkyu 01 May 2017 (has links)
Crowdsourcing systems create new opportunities for requesters with limited funds to accomplish various tasks using human computation. However, the power of human computation is abused by malicious requesters, who create malicious campaigns to manipulate information in web systems such as social networking sites, online review sites, and search engines. To mitigate the impact and reach of these malicious campaigns on targeted sites, we propose and evaluate a machine learning based classification approach for detecting malicious campaigns in crowdsourcing platforms as a first line of defense, and build a malicious campaign blacklist service for targeted site providers, researchers, and users. Specifically, we (i) conduct a comprehensive analysis to understand the characteristics of malicious and legitimate campaigns in crowdsourcing platforms, (ii) propose various features to distinguish between them, (iii) evaluate a classification approach against baselines, and (iv) build a malicious campaign blacklist service. Our experimental results show that the proposed approaches effectively detect malicious campaigns with low false negative and false positive rates.
25

Building a System for Identifying Malicious Users on the Internet

Βήττας, Ιωάννης 08 March 2010 (has links)
This thesis studies methods for building a system that identifies malicious (spammer) users on the Internet. In particular, we focus on Social Bookmarking Systems, one of the key areas of the Internet today. The methods used are drawn from the field of Machine Learning. Given a real dataset describing one of the most popular Social Bookmarking websites, BibSonomy, semantic features are extracted and fed to classifiers in order to investigate their performance and determine the best settings for distinguishing spammers from legitimate users.
26

Runtime Analysis of Malware

Iqbal, Muhammad Shahid, Sohail, Muhammad January 2011 (has links)
Context: Every day a growing number of malware samples spread around the world, infecting not only end users but also large organizations. This results in massive security threats to private data and expensive computing resources. A great deal of research is devoted to coping with this volume of malicious software, and researchers and practitioners have developed many new methods to deal with it. One of the most effective methods for capturing malicious software is dynamic malware analysis, but the dynamic analysis methods used today are very time consuming and resource greedy: it can take days, or at least several hours, to analyze a single instance of suspected software. This is not good enough given the number of attacks occurring every day. Objective: To save the time and expensive resources spent on these analyses, AMA, an automated malware analysis system, is developed to analyze large numbers of suspected software samples. Analyzing any software inside AMA produces a detailed report of its behavior, including changes made to the file system, registry, and processes, and the network traffic consumed. The main focus of this study is to develop a model that automates the runtime analysis of software and provides a detailed analysis report, and to evaluate its effectiveness. Methods: A thorough background study is conducted to gain knowledge about malicious software and its behavior. Software analysis techniques are then studied to develop a model that automates the runtime analysis of software. A prototype system is developed and a quasi-experiment is performed on malicious and benign software to evaluate the accuracy of the new system; the generated reports are compared with Norman and Anubis. Results: Based on the background study, an automated runtime analysis model is developed and a quasi-experiment is performed using the implemented prototype on selected malicious and benign software. The experiment results show that AMA captures more detailed software behavior than Norman and Anubis and could be used to better classify software. Conclusions: We conclude that AMA captures more detailed behavior of the analyzed software and gives a more accurate classification. We also see from the experiment results that there are no concrete factors distinguishing the general behaviors of the two types of software; however, by digging a bit deeper into the analysis report one can understand the intentions of the software. This means the reports generated by AMA provide enough information about software behavior to draw correct conclusions.
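The file-system portion of a behavior report like AMA's boils down to diffing a snapshot taken before execution against one taken after. The sketch below is a heavily simplified illustration of that idea (paths and sizes only, no registry, process, or network tracking), not AMA's actual implementation.

```python
import os

def snapshot(root: str) -> dict:
    """Record (path -> size) for every file under root."""
    state = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            state[path] = os.path.getsize(path)
    return state

def diff(before: dict, after: dict) -> dict:
    """Summarize file-system changes between two snapshots."""
    return {
        "created": sorted(set(after) - set(before)),
        "deleted": sorted(set(before) - set(after)),
        "modified": sorted(p for p in before.keys() & after.keys()
                           if before[p] != after[p]),
    }

# Snapshots shown as literal dicts for illustration; in practice they
# come from snapshot() runs before and after executing the sample.
before = {"/tmp/a": 1, "/tmp/b": 2}
after = {"/tmp/a": 1, "/tmp/b": 3, "/tmp/c": 4}
report = diff(before, after)
```

The resulting `report` is the kind of "changes made to the file system" entry a behavior report would contain.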
27

HTTP botnet detection using passive DNS analysis and application profiling

Alenazi, Abdelrahman Aziz 15 December 2017 (has links)
HTTP botnets are currently the most popular form of botnet, compared to IRC and P2P botnets, because they are not only easier to implement, operate, and maintain, but can also easily evade detection. HTTP botnet flows can easily be buried in the huge volume of legitimate HTTP traffic occurring in many organizations, which makes detection harder. In this thesis, a new detection framework involving three detection models is proposed; the models can run independently or in tandem. The first detector profiles individual applications based on their interactions and isolates the malicious ones accordingly. The second detector tracks the regularity in the timing of a bot's DNS queries and uses this as the basis for detection. The third detector analyzes the characteristics of the domain names involved in the DNS traffic and identifies algorithmically generated and fast-flux domains, which are staples of typical HTTP botnets. Several machine learning classifiers are investigated for each of the detectors. Experimental evaluation using public datasets and datasets collected in our testbed yields very encouraging performance results.
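The second detector's premise — that bots query DNS on a much more regular schedule than humans — can be illustrated with a simple coefficient-of-variation test on inter-query gaps. The statistic and threshold below are illustrative assumptions, not the classifiers the thesis actually trains.

```python
from statistics import mean, stdev

def timing_regularity(timestamps):
    """Coefficient of variation of inter-query intervals. Bots polling
    a C&C on a fixed schedule produce values near 0; human browsing
    produces bursty, irregular gaps and much higher values."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return None
    return stdev(gaps) / mean(gaps)

def looks_botlike(timestamps, threshold=0.1):
    cv = timing_regularity(timestamps)
    return cv is not None and cv < threshold

bot_queries = [0, 60, 120, 180, 240]      # one query every 60 s
human_queries = [0, 5, 47, 48, 300]       # bursty, irregular
```

A real detector would feed such per-domain regularity scores, among other features, into the machine learning classifiers the thesis evaluates.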
28

Exploring Data Security Management Strategies for Preventing Data Breaches

Ofori-Duodu, Michael Samuel 01 January 2019 (has links)
Insider threats continue to pose a risk to organizations and, in some cases, to the country at large, and data breach events show the risk has not subsided. This qualitative case study explored the data security management strategies used by database and system administrators to prevent data breaches by malicious insiders. The study population consisted of database administrators and system administrators from a government contracting agency in the northeastern region of the United States. General systems theory, developed by von Bertalanffy, was used as the conceptual framework. Data collection involved interviews with database and system administrators (n = 8), review of organizational documents and processes (n = 6), and direct observation of a training meeting (n = 3). Methodological triangulation and member checking of the interviews and direct observation were used to enhance the validity of the findings. Through thematic analysis, 4 major themes emerged: enforcement of organizational security policy through training, use of multifaceted identity and access management techniques, use of security frameworks, and use of strong technical control operations mechanisms. The findings may benefit database and system administrators by enhancing their data security management strategies to prevent data breaches by malicious insiders. Enhanced data security management strategies may contribute to social change by protecting organizational and customer data from malicious insiders, which could otherwise lead to espionage, identity theft, trade secret exposure, and cyber extortion.
29

A Novel Approach for Analyzing and Classifying Malicious Web Pages

Hiremath, Panchakshari N 18 May 2021 (has links)
No description available.
30

Detection of Malicious Domains Using Passive DNS Analysis

Doležal, Jiří January 2014 (has links)
This thesis deals with the detection of malicious domains through passive DNS traffic analysis, and with the design and implementation of a custom detection system. DNS traffic is a target of many attackers, who exploit the fact that the DNS service is essential to the functioning of the Internet: almost every Internet communication begins with a DNS query and response. Abuse of the DNS service, or exploitation of its weaknesses, manifests itself as anomalous behavior in DNS traffic. The thesis describes various methods used for uncovering anomalies and malicious domains in DNS data. The main part of the work is the design and implementation of a system for detecting malicious domains. The implemented system was tested on DNS data obtained from real traffic.
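One common signal in the kind of malicious-domain detection described here is the character entropy of the domain label: algorithmically generated (DGA) names tend to look random, while benign names draw on dictionary words. The sketch below is an illustrative heuristic with an assumed threshold and made-up example domains, not the detection method implemented in the thesis.

```python
import math
from collections import Counter

def char_entropy(label: str) -> float:
    """Shannon entropy of the character distribution; algorithmically
    generated labels tend to score higher than dictionary-based names."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def likely_dga(domain: str, threshold: float = 3.5) -> bool:
    """Flag long, high-entropy leftmost labels as possible DGA output."""
    label = domain.split(".")[0]
    return len(label) >= 10 and char_entropy(label) > threshold
```

In practice a detector would combine entropy with other features (n-gram frequencies, label length, query volume) rather than rely on a single threshold.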
