1

State of the Art Botnet-Centric Honeynet Design

Syers, John, III 16 January 2010 (has links)
The problem of malware has escalated at a rate that security professionals and researchers have been unable to keep pace with. Attackers savage the information technology (IT) infrastructure of corporations and governments with impunity. Of particular significance is the rise of botnets within the past ten years. In response, honeypots and honeynets were developed to gain critical intelligence on attackers and ultimately to neutralize their threats. Unfortunately, the malware community has adapted, and strategies used in the first half of the decade have diminished significantly in effectiveness. This thesis explores the design characteristics necessary to create a honeynet capable of reversing the current trend and defeating botnet countermeasures. This thesis finds that anti-virtual-machine-detection techniques, along with appropriate failsafes, are essential to analyzing modern botnet binaries.
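As a rough illustration of what such a honeynet must defeat, the sketch below shows one common way malware probes for a virtual machine on Linux: inspecting DMI identification strings for hypervisor fingerprints. The file paths and vendor strings are common conventions, not taken from the thesis.

```python
# Illustrative sketch (not the thesis's method): detecting common hypervisor
# artifacts on Linux by inspecting DMI identification strings.
from pathlib import Path

# Vendor substrings commonly left behind by popular hypervisors.
HYPERVISOR_MARKERS = ("vmware", "virtualbox", "qemu", "kvm", "xen",
                      "microsoft corporation")

def looks_like_vm() -> bool:
    """Return True if DMI strings suggest the host is a virtual machine."""
    for dmi_file in ("sys_vendor", "product_name", "board_vendor"):
        path = Path("/sys/class/dmi/id") / dmi_file
        try:
            value = path.read_text().strip().lower()
        except OSError:
            continue  # file may be absent or unreadable; skip it
        if any(marker in value for marker in HYPERVISOR_MARKERS):
            return True
    return False

if __name__ == "__main__":
    print("VM artifacts found" if looks_like_vm() else "no obvious VM artifacts")
```

A honeynet that wants bots to detonate must either hide these artifacts or run on bare metal with failsafes, which is the design space the thesis explores.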
2

Detecting Networks Employing Algorithmically Generated Domain Names

Ashwath Kumar Krishna Reddy August 2010 (has links)
Recent botnets such as Conficker, Kraken and Torpig have used DNS-based "domain fluxing" for command-and-control, where each bot queries for the existence of a series of domain names and the owner has to register only one such domain name. In this report, we develop a methodology to detect such "domain fluxes" in DNS traffic by looking for patterns inherent to domain names that are generated algorithmically, in contrast to those generated by humans. In particular, we look at the distribution of alphanumeric characters, as well as bigrams, in all domains that are mapped to the same set of IP addresses. We present and compare the performance of several distance metrics, including Kullback-Leibler (KL) distance and edit distance. We train using a "good" data set of domains obtained via a crawl of domains mapped to the entire IPv4 address space, and we model "bad" data sets based on behaviors seen so far and those we expect. We also apply our methodology to packet traces collected at two Tier-1 ISPs and show that we can automatically detect domain fluxing as used by the Conficker botnet with minimal false positives. We are also able to detect new botnets and other malicious networks using our method.
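A minimal sketch of the character-distribution idea follows; the domain lists and smoothing constant are illustrative, not the report's actual data or parameters.

```python
# Sketch: Kullback-Leibler divergence between the alphanumeric character
# distribution of a suspect domain group and a benign baseline.
import math
from collections import Counter

ALPHANUMERIC = "abcdefghijklmnopqrstuvwxyz0123456789"

def char_distribution(domains, smoothing=1e-6):
    """Smoothed frequency of each alphanumeric character across the domains."""
    counts = Counter(c for d in domains for c in d.lower() if c in ALPHANUMERIC)
    total = sum(counts.values())
    return {c: (counts[c] + smoothing) / (total + smoothing * len(ALPHANUMERIC))
            for c in ALPHANUMERIC}

def kl_divergence(p, q):
    """KL(p || q); both distributions are smoothed, so no zero divisions."""
    return sum(p[c] * math.log(p[c] / q[c]) for c in ALPHANUMERIC)

# Hypothetical example groups: human-registered vs. algorithmically generated.
benign = ["google", "facebook", "wikipedia", "amazon", "twitter"]
suspect = ["xj3k9fqz", "q8w2ne7r", "zk5m1xv9", "p0o3iu8y"]
print(kl_divergence(char_distribution(suspect), char_distribution(benign)))
```

A large divergence from the benign baseline is the kind of signal that flags an algorithmically generated group; the report's actual pipeline also uses bigrams and edit distance.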
3

Statistical Assessment of Peer-to-Peer Botnet Features

Godkin, Teghan 17 April 2013 (has links)
Botnets are collections of compromised machines which are controlled by a remotely located adversary. Botnets are of significant interest to cybersecurity researchers as they are a core mechanism that allows adversarial groups to gain control over large-scale computing resources. Recent botnets have become increasingly complex, relying on Peer-to-Peer (P2P) protocols for botnet command and control (C&C). In this work, a packet-level simulation of a Kademlia-based P2P botnet is used in conjunction with a statistical analysis framework to investigate how measured botnet features change over time and across an ensemble of simulations. The simulation results include non-stationary and non-ergodic behaviours, illustrating the complex nature of botnet operation and highlighting the need for rigorous statistical analysis as part of the engineering process.
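As a rough illustration of the kind of ensemble analysis described (the feature values below are synthetic; the thesis's actual framework and statistical tests are more involved):

```python
# Sketch: per-timestep ensemble statistics for a botnet feature measured
# across repeated simulation runs, as a first check for non-stationarity.
import random
from statistics import mean, stdev

random.seed(42)
NUM_RUNS, NUM_STEPS = 30, 100

# Synthetic stand-in for a measured feature (e.g., peer-list churn per step),
# with a drifting mean to mimic non-stationary behaviour.
runs = [[random.gauss(10 + 0.05 * t, 2.0) for t in range(NUM_STEPS)]
        for _ in range(NUM_RUNS)]

for t in range(0, NUM_STEPS, 25):
    sample = [run[t] for run in runs]  # ensemble slice at time t
    print(f"t={t:3d}  mean={mean(sample):6.2f}  stdev={stdev(sample):5.2f}")
# A drifting ensemble mean across t is a simple symptom of non-stationarity,
# which is why single-run summary statistics can mislead.
```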
4

Infrastructure distribuée permettant la détection d'attaques logicielles [A distributed infrastructure for detecting software attacks]

Deneault, Sébastien January 2013 (has links)
The number of computer systems grows by the day, and many malicious entities try to abuse their vulnerabilities. One scourge has been raging for several years and causes great difficulty for computer security experts: botnets. Armies of infected computers are assembled, then rented out and used for unsavory ends. Society faces a problem: it is very difficult to stop these armies, and harder still to find their coordinators. The objective of this research is to develop tools to identify these entities and to help dismantle these networks. More precisely, this project concerns the design of a distributed platform for pre-processing data collected on various networks and distributing it to an analysis system. The platform will be open source, easily adaptable and flexible. Moreover, it must be able to process a large quantity of data in a short time. This system will be distinctive in that it will be distributed across several networks under a client-server model and will collaborate toward finding the coordinators of these botnets.
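The thesis describes the platform's architecture rather than code; the following is a loose sketch of the client-server pattern it outlines, in which sensors pre-process locally collected records and forward compact batches to an analysis server. All names, fields and the address are hypothetical, and a collector must be listening for the call to succeed.

```python
# Loose sketch (hypothetical names): sensors pre-process locally collected
# records and forward compact batches to a central analysis server.
import json
import socket

ANALYSIS_SERVER = ("127.0.0.1", 9099)  # hypothetical collector address

def preprocess(record: dict) -> dict:
    """Keep only the fields the analysis system needs (illustrative)."""
    return {"src": record["src"], "dst": record["dst"], "bytes": record["bytes"]}

def forward_batch(records) -> None:
    """Send one pre-processed batch; assumes a collector is listening."""
    batch = [preprocess(r) for r in records]
    with socket.create_connection(ANALYSIS_SERVER) as sock:
        sock.sendall(json.dumps(batch).encode() + b"\n")

# Example: forward two locally observed flow records.
forward_batch([
    {"src": "10.0.0.5", "dst": "203.0.113.7", "bytes": 512, "extra": "..."},
    {"src": "10.0.0.8", "dst": "198.51.100.2", "bytes": 2048, "extra": "..."},
])
```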
5

Alcance infraccional de las botnets utilizadas para la comisión de delitos conforme a la actual legislación chilena [The infringement scope of botnets used to commit crimes under current Chilean legislation]

Maldonado Cárcamo, Danic January 2017 (has links)
Thesis (Master of Laws, specialization in law and new technologies) / This thesis presents an overview of the infringement scope of using malicious code — in this particular case, botnets — to commit crimes in the information technology domain. In this context, there are major technical challenges in devising countermeasures that efficiently mitigate threats in this field, but there is also another very relevant challenge concerning the difficulties courts face in the criminal prosecution of computer crime. Along the same lines, it is necessary to consider the current legislation establishing penalties for actions against transactional systems and the data they contain, given that two decades have now passed without any update to that legislation. This means that certain actions cannot be adequately prosecuted because they do not fit the criminal definition, or cannot be punished because, quite simply, no applicable rule exists. The recent launch of the national cybersecurity policy and Chile's accession to the Budapest Convention will undoubtedly mark a true turning point, raising current standards to international levels and building networks that unify efforts in the criminal prosecution of computer crimes. Since their reach knows no borders, without joint work in international cooperation and the harmonization of legal frameworks, a determined fight against computer crime at the global level will not be possible.
6

Fast Identification of Structured P2P Botnets Using Community Detection Algorithms

Venkatesh, Bharath January 2013 (has links) (PDF)
Botnets are a global problem, and effective botnet detection requires the cooperation of large Internet Service Providers, allowing near-global visibility of traffic that can be exploited to detect them. This global visibility comes with huge challenges, especially in the amount of data that has to be analysed. To handle such large volumes of data, a robust and effective detection method is needed, and it must rely primarily on a reduced or abstracted form of the data, such as a graph of hosts with an edge between two hosts if there is any data communication between them. Such an abstraction is easy to construct and store, as very little of each packet needs to be examined. Structured P2P command and control has been shown to be robust against targeted and random node failures, and is thus an ideal mechanism for botmasters to organize and command their botnets effectively. This thesis therefore develops a scalable, efficient and robust algorithm for the detection of structured P2P botnets in large traffic graphs. It draws on advances in the state of the art in community detection, which aims to partition a graph into dense communities. Popular community detection algorithms with low theoretical time complexities, such as Label Propagation, Infomap and the Louvain method, have been implemented and compared on large LFR benchmark graphs to study their efficiency. The Louvain method is found to be capable of handling graphs of millions of vertices and billions of edges. This thesis analyses the performance of this method with two objective functions, Modularity and Stability, and finds that neither of them is robust and general. To overcome the limitations of these objective functions, a third objective function proposed in the literature is considered. This objective function has previously been used successfully on protein interaction networks, and is used in this thesis to detect structured P2P botnets for the first time. Further, the differences in the topological properties (assortativity and density) of structured P2P botnet communities and benign communities are discussed. To exploit these differences, a novel measure based on mean regular degree is proposed, which captures both the assortativity and the density of a graph, and its properties are studied. This thesis proposes a robust and efficient algorithm that combines greedy community detection with community filtering using the proposed mean-regular-degree measure. The proposed algorithm is tested extensively on a large number of datasets and is found to be comparable in performance to an existing botnet detection algorithm called BotGrep in most cases, while being significantly faster.
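A minimal sketch of the detect-then-filter pipeline follows, using NetworkX's Louvain implementation. The plain mean intra-community degree used for filtering here is a simplification of the thesis's mean-regular-degree measure, and the threshold and toy graph are arbitrary.

```python
# Sketch: partition a traffic graph with the Louvain method, then flag dense
# communities via mean intra-community degree (a simplification of the
# thesis's mean-regular-degree measure).
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Toy traffic graph: a dense "botnet-like" clique plus a sparse random part.
G = nx.complete_graph(12)                                   # nodes 0..11
G = nx.disjoint_union(G, nx.gnm_random_graph(50, 60, seed=1))

communities = louvain_communities(G, seed=7)

for comm in communities:
    sub = G.subgraph(comm)
    mean_degree = sum(d for _, d in sub.degree()) / len(comm)
    flagged = mean_degree > 5.0  # arbitrary illustrative threshold
    print(f"size={len(comm):3d}  mean_degree={mean_degree:5.2f}  flagged={flagged}")
```

On this toy graph, the 12-node clique stands out with a mean intra-community degree of 11, while the sparse communities stay well below the threshold, mirroring the density gap the thesis exploits.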
7

Improving detection and annotation of malware downloads and infections through deep packet inspection

Nelms, Terry Lee 27 May 2016 (has links)
Malware continues to be one of the primary tools employed by attackers. It is used in attacks ranging from click fraud to nation-state espionage. Malware infects hosts over the network through drive-by downloads and social engineering. These infected hosts communicate with remote command and control (C&C) servers to perform tasks and exfiltrate data. Malware's reliance on the network provides an opportunity for the detection and annotation of malicious communication. This thesis presents four main contributions. First, we design and implement a novel incident investigation system, named WebWitness. It automatically traces back and labels the sequence of events (e.g., visited web pages) preceding malware downloads to highlight how users reach attack pages on the web, providing a better understanding of current attack trends and aiding in the development of more effective defenses. Second, we conduct the first systematic study of modern web-based social engineering malware download attacks. From this study we develop a categorization system for classifying social engineering downloads and use it to measure attack properties. From these measurements we show that it is possible to detect the majority of social engineering downloads using features from the download path. Third, we design and implement ExecScent, a novel system for mining new malware C&C domains from live networks. ExecScent automatically learns C&C traffic models that can adapt to the deployment network's traffic. This adaptive approach allows us to greatly reduce the false positives while maintaining a high number of true positives. Lastly, we develop a new packet scheduling algorithm for deep packet inspection that maximizes throughput by optimizing for cache affinity. By scheduling for cache affinity, we are able to deploy our systems on multi-gigabit networks.
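As a toy illustration of cache-affinity scheduling for deep packet inspection (flow-consistent hashing is a standard technique, not necessarily the thesis's exact algorithm):

```python
# Sketch: assign packets to worker queues by hashing the flow 5-tuple, so all
# packets of a flow hit the same worker and its warm per-flow state and caches.
import hashlib

NUM_WORKERS = 4
queues = [[] for _ in range(NUM_WORKERS)]

def worker_for(src_ip, src_port, dst_ip, dst_port, proto):
    """Deterministically map a flow 5-tuple to a worker index."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}/{proto}".encode()
    digest = hashlib.blake2b(key, digest_size=4).digest()
    return int.from_bytes(digest, "big") % NUM_WORKERS

packets = [
    ("10.0.0.5", 44321, "203.0.113.7", 80, "tcp"),
    ("10.0.0.5", 44321, "203.0.113.7", 80, "tcp"),   # same flow, same worker
    ("10.0.0.9", 51000, "198.51.100.2", 443, "tcp"),
]
for pkt in packets:
    queues[worker_for(*pkt)].append(pkt)
print([len(q) for q in queues])
```

A production scheduler would also normalize the tuple so both directions of a flow hash to the same worker, e.g. by sorting the (ip, port) endpoints before hashing.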
8

A self-healing framework to combat cyber attacks: analysis and development of a self-healing mitigation framework against controlled malware attacks for enterprise networks

Alhomoud, Adeeb M. January 2014 (has links)
Cybercrime causes total losses of about $338 billion annually, which makes it one of the most profitable criminal activities in the world. Controlled malware (botnets) is one of the most prominent tools used by cybercriminals to infect and compromise computer networks and to steal important information. Infecting a computer is relatively easy nowadays with malware that propagates through social networking in addition to traditional methods like SPAM messages and email attachments. In fact, more than a quarter of all computers in the world are infected by malware, which makes them viable for botnet use. This thesis proposes, implements and presents a self-healing framework that takes inspiration from the human immune system. The designed self-healing framework utilises the key characteristics and attributes of the natural immune system to reverse botnet infections. It employs its main components to heal the infected nodes. If the healing process is not successful for any reason, it immediately moves the infected node from the enterprise's network to a quarantined network to avoid any further botnet propagation, and alerts the administrators for human intervention. The designed self-healing framework was tested and validated using different experiments, and the results show that it efficiently heals infected workstations in an enterprise network.
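A schematic sketch of the heal-or-quarantine decision flow described follows; every function here is a hypothetical placeholder, not the framework's actual API.

```python
# Schematic sketch of the heal-or-quarantine flow; every hook below is a
# hypothetical placeholder, not the framework's actual API.

def try_heal(node: str) -> bool:
    """Placeholder: attempt to remove the bot from the node."""
    return False  # a real implementation would run its healing routines here

def move_to_quarantine_vlan(node: str) -> None:
    """Placeholder: isolate the node from the production network."""
    print(f"{node}: moved to quarantine network")

def alert_administrators(node: str, reason: str) -> None:
    """Placeholder: request human intervention."""
    print(f"{node}: alerting administrators ({reason})")

def handle_infected_node(node: str) -> str:
    if try_heal(node):
        return "healed"
    # Healing failed: isolate immediately to stop further propagation.
    move_to_quarantine_vlan(node)
    alert_administrators(node, "automatic healing failed")
    return "quarantined"

print(handle_infected_node("workstation-042"))
```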
9

Scalable Techniques for Anomaly Detection

Yadav, Sandeep 14 March 2013 (has links)
Computer networks are constantly being attacked by malicious entities for various reasons. Network-based attacks include, but are not limited to, Distributed Denial of Service (DDoS), DNS-based attacks, and Cross-site Scripting (XSS). Such attacks exploit either network protocol or end-host software vulnerabilities. Current network traffic analysis techniques employed for the detection and/or prevention of these anomalies suffer from significant delay or have only limited scalability because of their huge resource requirements. This dissertation proposes more scalable techniques for network anomaly detection. We propose using DNS analysis for detecting a wide variety of network anomalies. The use of DNS is motivated by the fact that DNS traffic comprises only 2-3% of total network traffic, reducing the burden on anomaly detection resources. Our motivation additionally follows from the observation that almost any Internet activity (legitimate or otherwise) is marked by the use of DNS. We propose several techniques for DNS traffic analysis to distinguish anomalous DNS traffic patterns, which in turn identify different categories of network attacks. First, we present MiND, a system to detect misdirected DNS packets arising from poisoned name server records or from local infections such as those caused by worms like DNSChanger. MiND validates misdirected DNS packets using an externally collected database of authoritative name servers for second- or third-level domains. We deploy this tool at the edge of a university campus network for evaluation. Second, we focus on domain-fluxing botnet detection by exploiting the high entropy inherent in the set of domains used for locating the command and control (C&C) server. We apply three metrics, namely the Kullback-Leibler divergence, the Jaccard index, and the edit distance, to different groups of domain names present in Tier-1 ISP DNS traces obtained from South Asia and South America. Our evaluation successfully detects existing domain-fluxing botnets such as Conficker and also recognizes new botnets. We extend this approach by utilizing DNS failures to improve the latency of detection. Alternatively, we propose a system which uses temporal and entropy-based correlation between successful and failed DNS queries for fluxing botnet detection. We also present an approach which computes the reputation of domains in a bipartite graph of the hosts within a network and the domains accessed by them. The inference technique utilizes belief propagation, an approximation algorithm for marginal probability estimation. The computation of reputation scores is seeded through a small fraction of domains found in blacklists and whitelists. An application of this technique on an HTTP-proxy dataset from a large enterprise shows a high detection rate with low false positive rates.
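A small sketch of one of the metrics mentioned, the Jaccard index over domain-name bigrams, follows; the domain lists are illustrative, not from the dissertation's traces.

```python
# Sketch: Jaccard index between the bigram sets of two domain-name groups;
# algorithmically generated names tend to share fewer bigrams with benign ones.
def bigrams(domains):
    """Set of all 2-character substrings across a group of domain names."""
    return {name[i:i + 2] for name in domains for i in range(len(name) - 1)}

def jaccard(a, b):
    """|A ∩ B| / |A ∪ B|, defined as 0.0 when both sets are empty."""
    return len(a & b) / len(a | b) if a | b else 0.0

benign = ["google", "youtube", "wikipedia", "reddit"]
suspect = ["xq9zvt", "kk3j8w", "zzq0pl"]  # hypothetical fluxing names
print(jaccard(bigrams(suspect), bigrams(benign)))
```

A low index against a benign baseline is one vote toward flagging a domain group as algorithmically generated; the dissertation combines this with KL divergence and edit distance.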
10

Honeynet design and implementation

Artore, Diane 20 December 2007 (has links)
Over the past decade, web criminality has become a real issue. Because they allow botmasters to control hundreds to millions of machines, botnets have become the first-choice attack platform for network attackers to launch distributed denial of service attacks, steal sensitive information, and send spam emails. This work aims at designing and implementing a honeynet specific to IRC bots. Our system works in three phases: (1) binary collection, (2) simulation, and (3) activity capturing and monitoring. Our phase-2 simulation extracts the connection information through an IRC redirection (using a DNS redirection and a "fake server"). In phase 3, we use the information previously extracted to launch our honeyclient, which captures and monitors the traffic on the C&C channel. Using our honeynet, we create a database of the activity of IRC botnets (their connection characteristics and the commands issued on the C&C channel), and hope to learn more about their behavior and the underground market they create.
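As a toy sketch of the phase-2 idea, here is a minimal "fake server" that accepts redirected IRC connections and logs the registration commands a bot sends on connect. The port and the parsed command set are illustrative; the thesis's implementation details are its own.

```python
# Toy sketch of a "fake server": accept redirected IRC connections and log the
# registration lines (PASS/NICK/USER/JOIN) a bot sends when it connects.
import socketserver

class FakeIRCHandler(socketserver.StreamRequestHandler):
    def handle(self):
        peer = self.client_address[0]
        for raw in self.rfile:  # one IRC line per iteration
            line = raw.decode(errors="replace").strip()
            if line.split(" ", 1)[0].upper() in ("PASS", "NICK", "USER", "JOIN"):
                print(f"{peer}: {line}")  # connection info reused in phase 3

if __name__ == "__main__":
    # 6667 is the conventional IRC port; DNS redirection points bots here.
    with socketserver.ThreadingTCPServer(("0.0.0.0", 6667), FakeIRCHandler) as srv:
        srv.serve_forever()
```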
