21

Using Machine Learning to Detect Malicious URLs

Cheng, Aidan 01 January 2017 (has links)
There is a need for a better predictive model that reduces the number of malicious URLs being sent through email. Such a system should learn from existing metadata about URLs, and the ideal solution would also learn from its own predictions: for example, if it predicts a URL to be malicious and that URL is then deemed safe by the sandboxing environment, the predictor should refine its model to account for this data. The problem, then, is to construct a model with these characteristics that can make predictions for the vast number of URLs being processed. Given that the current system does not employ machine learning methods, we investigate multiple such models and summarize which of them might be worth pursuing at scale.
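The feedback loop the abstract describes, predict first and refine the model when the sandbox verdict disagrees, can be sketched as a small online learner. Everything below (the toy metadata features, the perceptron-style update, and the sample stream) is an illustrative assumption, not the system the thesis proposes.

```python
# Minimal sketch: an online URL classifier that updates on sandbox feedback.
# Features and data are invented for illustration.

def extract_features(url_meta):
    # Toy metadata features: scaled length, digit density, suspicious-TLD flag.
    url = url_meta["url"]
    return [len(url) / 100.0,
            sum(c.isdigit() for c in url) / 10.0,
            1.0 if url.rsplit(".", 1)[-1] in {"xyz", "top", "zip"} else 0.0]

class OnlineUrlClassifier:
    """Perceptron-style learner: predict, then refine on the sandbox verdict."""
    def __init__(self, n_features=3, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        score = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if score > 0 else 0   # 1 = malicious

    def feedback(self, x, true_label):
        # Update weights only when the sandbox verdict contradicts us.
        err = true_label - self.predict(x)
        if err:
            self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * err

clf = OnlineUrlClassifier()
stream = [({"url": "login-update.example.xyz"}, 1),
          ({"url": "docs.python.org"}, 0),
          ({"url": "verify-account.example.top"}, 1)]
for meta, verdict in stream:
    x = extract_features(meta)
    clf.predict(x)            # triage decision at delivery time
    clf.feedback(x, verdict)  # sandbox verdict arrives later
```

After the stream above, the weight on the suspicious-TLD feature is positive, so similarly named URLs are flagged without a sandbox round-trip.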
22

Analyse dynamique de logiciels malveillants / Dynamic Analysis of Malicious Software

Calvet, Joan 23 August 2013 (has links)
The main goal of this thesis is to develop methods for understanding malware, in order to help the human analyst better comprehend this threat. The first contribution is a large-scale, in-depth analysis of malware protection techniques: we studied hundreds of malware samples, carefully selected for their threat level. By automatically measuring a set of original characteristics, we demonstrated the existence of a particularly prevalent protection model in these programs, based on self-modifying code and on a strict separation between protection code and payload code. Next, we developed a method for identifying cryptographic implementations that is suited to protected machine-language programs. We validated our approach by identifying numerous implementations of cryptographic algorithms, the majority of them invisible to existing tools, even inside particularly obscure malware protection schemes. Finally, we built what is, to our knowledge, the first emulation environment for botnets involving several thousand machines. Using it, we showed that exploiting a vulnerability in the peer-to-peer protocol of the Waledac botnet makes it possible to take control of the network.
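The protection model described here, self-modifying code with a strict split between protection and payload, can be illustrated with a toy trace analysis: an unpacker writes decrypted payload bytes into memory and later executes them. The trace format and addresses below are invented for the example; the thesis works on real machine-code executions.

```python
# Hedged illustration: detect self-modification as "execute a previously
# written address" in a simplified (event, address) trace.

def find_self_modification(trace):
    """Return instruction addresses that were written earlier in the trace.

    `trace` is a list of (event, address) pairs, where event is "write"
    (a byte stored at address) or "exec" (an instruction fetched from it).
    Executing a previously written address marks a new self-modified layer.
    """
    written, layers = set(), []
    for event, addr in trace:
        if event == "write":
            written.add(addr)
        elif event == "exec" and addr in written:
            layers.append(addr)
            written.discard(addr)   # report each written byte once
    return layers

# The unpacker stub (0x1000-0x1001) writes payload bytes, then jumps into them.
trace = [("exec", 0x1000), ("write", 0x2000), ("write", 0x2001),
         ("exec", 0x1001), ("exec", 0x2000), ("exec", 0x2001)]
print([hex(a) for a in find_self_modification(trace)])  # ['0x2000', '0x2001']
```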
23

Detecting Malicious Campaigns in Crowdsourcing Platforms

Choi, Hongkyu 01 May 2017 (has links)
Crowdsourcing systems enable new opportunities for requesters with limited funds to accomplish various tasks using human computation. However, the power of human computation is abused by malicious requesters who create malicious campaigns to manipulate information in web systems such as social networking sites, online review sites, and search engines. To mitigate the impact and reach of these malicious campaigns to targeted sites, we propose and evaluate a machine learning based classification approach for detecting malicious campaigns in crowdsourcing platforms as a first line of defense, and build a malicious campaign blacklist service for targeted site providers, researchers and users. Specifically, we (i) conduct a comprehensive analysis to understand the characteristics of malicious campaigns and legitimate campaigns in crowdsourcing platforms, (ii) propose various features to distinguish between malicious campaigns and legitimate campaigns, (iii) evaluate a classification approach against baselines, and (iv) build a malicious campaign blacklist service. Our experimental results show that our proposed approaches effectively detect malicious campaigns with low false negative and false positive rates.
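The kind of campaign features mentioned above can be sketched as a small extraction step feeding a classifier. The feature names, keyword list, and campaign record format below are illustrative guesses, not the paper's actual feature set.

```python
# Hedged sketch: map a crowdsourcing campaign record to a feature vector
# that a classifier could use to separate malicious from legitimate campaigns.

TARGET_KEYWORDS = {"review", "upvote", "follow", "like", "search and click"}

def campaign_features(campaign):
    """Extract a few simple signals from a campaign description."""
    text = campaign["description"].lower()
    return {
        "asks_for_account": int("account" in text or "sign up" in text),
        "targets_web_system": int(any(k in text for k in TARGET_KEYWORDS)),
        "reward_per_task": campaign["reward_usd"],
        "mentions_url": int("http" in text),
    }

feats = campaign_features({
    "description": "Post a 5-star review of our app and include this link: http://...",
    "reward_usd": 0.25,
})
print(feats["targets_web_system"], feats["mentions_url"])  # 1 1
```

A campaign that pays per review of a specific target site scores on several of these signals at once, which is the pattern such a classifier would learn.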
24

Κατασκευή συστήματος αναγνώρισης κακόβουλων χρηστών στο διαδίκτυο / Building a System for Identifying Malicious Users on the Internet

Βήττας, Ιωάννης 08 March 2010 (has links)
This thesis studies methods for building a system that identifies malicious (spammer) users on the Internet. In particular, we focus on social bookmarking systems, which form one of the key areas of today's Web. The methods are based on the field of machine learning. Given a real dataset describing one of the most popular social bookmarking websites, BibSonomy, semantic features are extracted and fed to classifiers in order to investigate their performance and determine the best settings for distinguishing spammers from legitimate users.
25

Runtime Analysis of Malware

Iqbal, Muhammad Shahid, Sohail, Muhammad January 2011 (has links)
Context: Every day an increasing number of malware samples spread around the world, infecting not only end users but also large organizations. This creates a massive security threat to private data and expensive computing resources. A great deal of research aims to cope with this flood of malicious software, and researchers and practitioners have developed many new methods to deal with it. One of the most effective methods for capturing malicious software is dynamic malware analysis. However, the dynamic analysis methods used today are time consuming and resource hungry: analyzing a single instance of suspected software normally takes hours, or even days. This is not good enough given the number of attacks occurring every day. Objective: To save the time and expensive resources consumed by these analyses, AMA, an automated malware analysis system, was developed to analyze large numbers of suspected programs. Analyzing any software inside AMA produces a detailed report of its behavior, including changes made to the file system, registry, and processes, and the network traffic consumed. The main focus of this study is to develop a model that automates the runtime analysis of software, provides a detailed analysis report, and to evaluate its effectiveness. Methods: A thorough background study was conducted to gain knowledge about malicious software and its behavior. Software analysis techniques were then studied to devise a model that automates the runtime analysis of software. A prototype system was developed and a quasi-experiment performed on malicious and benign software to evaluate the accuracy of the new system; the generated reports were compared with those of Norman and Anubis. Results: The experiment results show that AMA captured more detailed software behavior than Norman and Anubis, and that it could be used to classify software more accurately. Conclusions: We conclude that AMA captures more detailed behavior of the analyzed software and yields a more accurate classification. The experiment results also show that there is no single concrete factor distinguishing the general behavior of the two types of software; however, by digging deeper into the analysis report one can understand the software's intentions. The reports generated by AMA therefore provide enough information about software behavior to draw correct conclusions.
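The report AMA generates rests on a before/after comparison of monitored system state. A hedged sketch of that idea: snapshot the file system, registry, and process sets, run the sample, snapshot again, and report the differences. The category names and paths below are invented for the example.

```python
# Toy behavioral diff: what appeared and disappeared in each monitored
# category while the sample ran. Real systems hook these events live.

def diff_state(before, after):
    """Report items added and removed in each monitored category."""
    return {cat: {"added": sorted(after[cat] - before[cat]),
                  "removed": sorted(before[cat] - after[cat])}
            for cat in before}

before = {"files": {r"C:\Windows\notepad.exe"},
          "registry": {r"HKLM\Software\Run"},
          "processes": {"explorer.exe"}}
after = {"files": {r"C:\Windows\notepad.exe", r"C:\Temp\dropper.tmp"},
         "registry": {r"HKLM\Software\Run", r"HKLM\Software\Run\updater"},
         "processes": {"explorer.exe", "updater.exe"}}

report = diff_state(before, after)
print(report["files"]["added"])  # ['C:\\Temp\\dropper.tmp']
```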
26

HTTP botnet detection using passive DNS analysis and application profiling

Alenazi, Abdelrahman Aziz 15 December 2017 (has links)
HTTP botnets are currently the most popular form of botnet, compared to IRC and P2P botnets, because they are not only easier to implement, operate, and maintain, but can also easily evade detection. Likewise, HTTP botnet flows can easily be buried in the huge volume of legitimate HTTP traffic in many organizations, which makes detection harder. In this thesis, a new detection framework involving three detection models is proposed; the models can run independently or in tandem. The first detector profiles individual applications based on their interactions and accordingly isolates the malicious ones. The second detector tracks the regularity in the timing of a bot's DNS queries and uses this as the basis for detection. The third detector analyzes the characteristics of the domain names involved in DNS traffic and identifies algorithmically generated and fast-flux domains, which are staples of typical HTTP botnets. Several machine learning classifiers are investigated for each detector. Experimental evaluation using public datasets and datasets collected in our testbed yields very encouraging performance results.
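The second detector's idea, that bots issue DNS queries at regular intervals while human-driven lookups are bursty, can be sketched with a simple regularity statistic. The coefficient-of-variation test and its threshold below are illustrative assumptions, not the thesis's actual detector.

```python
# Hedged sketch: flag a host whose DNS inter-query gaps are suspiciously
# regular (low coefficient of variation), a common beaconing signature.

from statistics import mean, pstdev

def is_regular(query_times, cv_threshold=0.1):
    """True if the inter-query intervals are near-constant."""
    gaps = [b - a for a, b in zip(query_times, query_times[1:])]
    if len(gaps) < 2:
        return False
    return pstdev(gaps) / mean(gaps) < cv_threshold

bot   = [0, 60, 120, 180, 240, 300]        # beacon every 60 s
human = [0, 2, 3, 45, 46, 300]             # bursty browsing
print(is_regular(bot), is_regular(human))  # True False
```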
27

Exploring Data Security Management Strategies for Preventing Data Breaches

Ofori-Duodu, Michael Samuel 01 January 2019 (has links)
Insider threats continue to pose a risk to organizations and, in some cases, to the country at large; data breach events continue to show that the insider threat has not subsided. This qualitative case study explored the data security management strategies used by database and system administrators to prevent data breaches by malicious insiders. The study population consisted of database administrators and system administrators from a government contracting agency in the northeastern region of the United States. The general systems theory developed by von Bertalanffy served as the conceptual framework for the study. Data collection involved interviews with database and system administrators (n = 8), review of organizational documents and processes (n = 6), and direct observation of a training meeting (n = 3). Methodological triangulation and member checking of the interviews and direct observations were used to enhance the validity of the findings. Through thematic analysis, 4 major themes emerged: enforcement of organizational security policy through training, use of multifaceted identity and access management techniques, use of security frameworks, and use of strong technical control operations mechanisms. The findings may benefit database and system administrators by enhancing their data security management strategies for preventing breaches by malicious insiders. Enhanced data security management strategies may contribute to social change by protecting organizational and customer data from malicious insiders, whose breaches could otherwise lead to espionage, identity theft, trade secret exposure, and cyber extortion.
28

A Novel Approach for Analyzing and Classifying Malicious Web Pages

Hiremath, Panchakshari N 18 May 2021 (has links)
No description available.
29

Detekce škodlivých domén za pomoci analýzy pasivního DNS provozu / Detection of Malicious Domains Using Passive DNS Analysis

Doležal, Jiří January 2014 (has links)
This master's thesis deals with the detection of malicious domains using passive DNS traffic analysis, and with the design and implementation of a custom detection system. DNS traffic has become a target for many attackers, who exploit the fact that the DNS service is essential to the functioning of the Internet: almost every Internet communication begins with a DNS query and response. Abusing the DNS service, or exploiting its weaknesses, manifests as anomalous behavior in DNS traffic. The thesis describes various methods used to detect anomalies and malicious domains in DNS data. The main part of the work is the design and implementation of a system for detecting malicious domains. The implemented system was tested on DNS data captured from real traffic.
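One widely used signal in this setting is that algorithmically generated (DGA) domain labels tend to have higher character entropy than human-chosen names. The sketch below illustrates that signal only; the entropy threshold is an illustrative assumption, not a value from the thesis.

```python
# Hedged sketch: score the leftmost DNS label by Shannon entropy and flag
# high-entropy labels as likely machine-generated.

from collections import Counter
from math import log2

def label_entropy(domain):
    """Shannon entropy (bits per character) of the leftmost DNS label."""
    label = domain.split(".")[0]
    counts = Counter(label)
    return -sum(c / len(label) * log2(c / len(label)) for c in counts.values())

def looks_generated(domain, threshold=3.5):
    return label_entropy(domain) > threshold

print(looks_generated("x3b9kq0zj2f8wlp4.com"), looks_generated("google.com"))
# True False
```

In practice this would be one feature among several (label length, n-gram frequency, TLD), since short or dictionary-based DGAs evade a pure entropy test.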
30

Malicious user attacks in decentralised cognitive radio networks

Sivakumaran, Arun January 2020 (has links)
Cognitive radio networks (CRNs) have emerged as a solution for the looming spectrum crunch caused by the rapid adoption of wireless devices over the previous decade. This technology enables efficient spectrum utilisation by dynamically reusing existing spectral bands. A CRN achieves this by requiring its users – called secondary users (SUs) – to measure and opportunistically utilise the band of a legacy broadcaster – called a primary user (PU) – in a process called spectrum sensing. Sensing requires the distribution and fusion of measurements from all SUs, which is facilitated by a variety of architectures and topologies. CRNs possessing a central computation node are called centralised networks, while CRNs composed of multiple computation nodes are called decentralised networks. While simpler to implement, centralised networks are reliant on the central node: the entire network fails if this node is compromised. In contrast, decentralised networks require more sophisticated protocols to implement, while offering greater robustness to node failure. Relay-based networks, a subset of decentralised networks, distribute the computation over a number of specialised relay nodes; little research exists on spectrum sensing using these networks. CRNs are vulnerable to unique physical layer attacks targeted at their spectrum sensing functionality. One such attack is the Byzantine attack, which occurs when malicious SUs (MUs) alter their sensing reports to achieve some goal (e.g. exploiting the CRN's resources or degrading its sensing performance). Mitigation strategies for Byzantine attacks vary with the CRN's network architecture, so defence algorithms need to be explored for all architectures. Because of the sparse literature on relay-based networks, a novel algorithm suitable for such networks is proposed in this work.
The proposed algorithm performs joint MU detection and secure sensing by large-scale probabilistic inference of a statistical model. Its development is separated into two parts.
• The first part involves the construction of a probabilistic graphical model representing the likelihood of all possible outcomes in the sensing process of a relay-based network. This is done by discovering the conditional dependencies between the variables of the model. Various candidate graphical models are explored, and the mathematical description of the chosen model is determined.
• The second part involves the extraction of information from the graphical model to provide utility for sensing. Marginal inference enables this extraction, and belief propagation is used to infer the graphical model efficiently. Sensing is performed by exchanging the intermediate belief propagation computations between the relays of the CRN.
In a performance evaluation, the proposed algorithm was found to be resistant to probabilistic MU attacks of all frequencies and proportions. Sensing performance was highly sensitive to the placement of the relays and honest SUs, and improved as the number of relays increased. The transient behaviour of the algorithm was evaluated in terms of its dynamics and computational complexity, with satisfactory results. Finally, an analysis of the effectiveness of the graphical model's components showed that a few components account for most of the performance, implying that further simplifications of the proposed algorithm are possible. / Dissertation (MEng)--University of Pretoria, 2020.
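The dissertation's algorithm performs full belief-propagation inference; as a much simpler illustration of the Byzantine-resilient fusion problem it solves, the sketch below weights each SU's binary report by a reputation score learned from past agreement with the fused decision. This is a stand-in technique with invented parameters, not the proposed algorithm.

```python
# Hedged sketch: reputation-weighted fusion of binary sensing reports.
# An always-lying malicious user (MU) loses influence over time.

def fuse(reports, reputation):
    """Weighted vote over binary sensing reports (1 = PU present)."""
    score = sum(reputation[su] * (1 if r else -1) for su, r in reports.items())
    return score > 0

def update_reputation(reports, decision, reputation, step=0.1):
    """Nudge each SU's reputation toward agreement with the fused decision."""
    for su, r in reports.items():
        agree = (r == decision)
        reputation[su] = min(1.0, max(0.0, reputation[su] + (step if agree else -step)))

reputation = {su: 0.5 for su in ["su1", "su2", "su3", "mu"]}
# The malicious user always reports the opposite of the true channel state.
for truth in [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]:
    reports = {"su1": truth, "su2": truth, "su3": truth, "mu": 1 - truth}
    decision = fuse(reports, reputation)
    update_reputation(reports, decision, reputation)
```

After a few rounds the MU's reputation collapses toward zero while the honest SUs' reputations saturate, so the MU's reports stop affecting the fused decision.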
