1

Improving detection and annotation of malware downloads and infections through deep packet inspection

Nelms, Terry Lee 27 May 2016
Malware continues to be one of the primary tools employed by attackers. It is used in attacks ranging from click fraud to nation-state espionage. Malware infects hosts over the network through drive-by downloads and social engineering. These infected hosts communicate with remote command and control (C&C) servers to perform tasks and exfiltrate data. Malware's reliance on the network provides an opportunity for the detection and annotation of malicious communication. This thesis presents four main contributions. First, we design and implement a novel incident investigation system, named WebWitness. It automatically traces back and labels the sequence of events (e.g., visited web pages) preceding malware downloads to highlight how users reach attack pages on the web, providing a better understanding of current attack trends and aiding in the development of more effective defenses. Second, we conduct the first systematic study of modern web-based social engineering malware download attacks. From this study we develop a categorization system for classifying social engineering downloads and use it to measure attack properties. From these measurements we show that it is possible to detect the majority of social engineering downloads using features from the download path. Third, we design and implement ExecScent, a novel system for mining new malware C&C domains from live networks. ExecScent automatically learns C&C traffic models that can adapt to the deployment network's traffic. This adaptive approach allows us to greatly reduce false positives while maintaining a high number of true positives. Lastly, we develop a new packet scheduling algorithm for deep packet inspection that maximizes throughput by optimizing for cache affinity. By scheduling for cache affinity, we are able to deploy our systems on multi-gigabit networks.
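
The cache-affinity idea behind the last contribution can be illustrated with a small sketch: hash each packet's flow identifier to a fixed worker, so every packet of a flow lands on the same core and that core's per-flow DPI state stays cache-resident. This is a minimal sketch of the general technique under assumed field names and worker count, not the thesis's actual scheduler.

```python
import hashlib
from collections import namedtuple

Packet = namedtuple("Packet", ["src_ip", "dst_ip", "src_port", "dst_port", "proto", "payload"])

NUM_WORKERS = 8  # hypothetical: one DPI worker pinned per CPU core

def flow_key(pkt: Packet) -> tuple:
    # Order the endpoints so both directions of a connection map to one key.
    a = (pkt.src_ip, pkt.src_port)
    b = (pkt.dst_ip, pkt.dst_port)
    return (min(a, b), max(a, b), pkt.proto)

def schedule(pkt: Packet) -> int:
    # Stable hash of the flow key -> worker index. All packets of a flow
    # go to the same worker, keeping its flow state hot in that core's cache.
    # (A production scheduler would use a cheaper hash such as Toeplitz/CRC.)
    digest = hashlib.sha1(repr(flow_key(pkt)).encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_WORKERS

pkt = Packet("10.0.0.1", "93.184.216.34", 49152, 443, "tcp", b"...")
print(schedule(pkt))  # same worker id for every packet of this flow
```
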
2

An Analysis and Comparison of The Security Features of Firewalls and IDSs

Sulaman, Sardar Muhammad January 2011
In the last few years we have observed a significant increase in the usage of computing devices and in their capabilities to communicate with each other. With this increase in usage and communication comes a need for a higher level of network security. Today the main devices used for network security are firewalls and IDSs/IPSs, which provide perimeter defense. The two devices provide many overlapping security features, but they have different aims and different protection potential, and they need to be used together. A firewall is an active device that implements ACLs and restricts unauthorized access to protected resources. An IDS only provides information for further necessary actions, not necessarily perimeter related; some of these actions can be automated, such as having the firewall automatically block attacking sites, which yields an IPS. This thesis analyzes some common firewall and IDS products and describes their security features, functionalities, and limitations in detail. It also compares the security features of the two device classes. Firewalls and IDSs perform different functions for network security, so they should be used in a layered defense architecture. Passwords, firewalls, IDSs/IPSs, and physical security together provide a layered defense and complement each other. A firewall or an IDS alone cannot offer sufficient protection against network attacks; they should be used together as part of a defense-in-depth, or layered, approach.
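
As a minimal illustration of the "active device implementing ACLs" point above, the sketch below evaluates a first-match rule list; the rule shapes, networks, and ports are hypothetical, not drawn from any product analyzed in the thesis.

```python
import ipaddress

# A minimal first-match ACL: (action, source network, destination port).
# None as the port means "any port". Rules are hypothetical examples.
RULES = [
    ("allow", ipaddress.ip_network("192.168.1.0/24"), 22),   # admin SSH
    ("allow", ipaddress.ip_network("0.0.0.0/0"), 443),       # public HTTPS
    ("deny",  ipaddress.ip_network("0.0.0.0/0"), None),      # default deny
]

def check(src_ip: str, dst_port: int) -> str:
    ip = ipaddress.ip_address(src_ip)
    for action, net, port in RULES:
        if ip in net and (port is None or port == dst_port):
            return action  # first matching rule wins
    return "deny"          # implicit default if no rule matches

print(check("192.168.1.10", 22))   # allow
print(check("203.0.113.5", 22))    # deny
```
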
3

AETOS: An Architecture for Offloading Core LTE Traffic Using Software Defined Networking Concepts

Nasim, Kamraan January 2016
It goes without saying that cellular users of today have an insatiable appetite for bandwidth and data. Data-intensive applications, such as video on demand, online gaming, and video conferencing, have gained prominence. This, coupled with recent innovations in the mobile network such as LTE/4G, poses a unique challenge to network operators: how to extract the most value from their deployments while reducing their Total Cost of Operations (TCO). To this end, a number of enhancements have been proposed to the "conventional" LTE mobile network. Most of these recognize the monolithic and non-elastic nature of the mobile backend and propose complementing core functionality with concepts borrowed from Software Defined Networking (SDN). In this thesis we explore some existing options within the LTE standard to mitigate large traffic churns. We then review some SDN-enabled alternatives and derive a proof-based critique of their merits and drawbacks.
4

Collecting and analyzing Tor exit node traffic

Jonsson, Torbjörn, Edeby, Gustaf January 2021
Background. With increased Internet usage occurring across the world, journalists, dissidents, and criminals have moved their operations online, and in turn governments and law enforcement have increased their surveillance of their countries' networks. This has increased the popularity of programs that mask users' identities online, such as the Tor Project. By encrypting the traffic and routing it through several nodes, the user's identity is hidden. But how are Tor users utilizing the network, and is any of the traffic sent in plain text despite the dangers of doing so? How has the usage of Tor changed compared to 11 years ago?
Objectives. The thesis objective is to analyze captured Tor network traffic to reveal what data is sent through the network. The collected data helps draw conclusions about Tor usage and is compared with previous studies.
Methods. Three Tor exit nodes are set up and operated for one week in the US, Germany, and Japan. We deploy packet sniffers performing deep packet inspection on each traffic flow to identify attributes such as the application protocol, the number of bytes sent in a flow, and the content type if the traffic was sent in plain text. All stored data is anonymized.
Results. The results show that 100.35 million flows were recorded, with 32.47% of them sending 4 or fewer packets in total. The most used application protocol was TLS, with 55.03% of total traffic; HTTP accounted for 15.91%, and 16% was unknown protocol(s). The country receiving the most traffic was the US with over 45% of all traffic, followed by the Netherlands, the UK, and Germany, each with less than 10% of recorded traffic as its destination. The most frequently used destination ports were 443 at 49.5%, 5222 at 12.7%, 80 at 11.9%, and 25 at 9.3%.
Conclusions. The experiment shows that it is possible to perform traffic analysis on the Tor network and acquire significant data. It shows that the Tor network is widely used across the world, with the US and Europe accounting for most of the traffic. As expected, there has been a shift from HTTP to HTTPS traffic compared to previous research. However, there is still unencrypted traffic on the network, some of which could be explained by automated tools like web crawlers. Tor users need to be more aware of what traffic they send through the network, as a user with malicious intent could perform the same experiment and potentially acquire unencrypted sensitive data.
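
The per-flow tallying behind percentages like these can be sketched as follows; the flow-record schema and sample values are illustrative assumptions, not the study's actual data.

```python
from collections import Counter

# Hypothetical flow records: (application protocol, destination port,
# packets in flow). Field names and values are illustrative only.
flows = [
    ("tls", 443, 12), ("http", 80, 7), ("xmpp", 5222, 3),
    ("smtp", 25, 2), ("unknown", 443, 1),
]

proto_counts = Counter(proto for proto, _, _ in flows)
port_counts = Counter(port for _, port, _ in flows)
short_flows = sum(1 for _, _, pkts in flows if pkts <= 4)

total = len(flows)
for proto, n in proto_counts.most_common():
    print(f"{proto}: {100 * n / total:.1f}% of flows")
for port, n in port_counts.most_common(3):
    print(f"port {port}: {100 * n / total:.1f}% of flows")
print(f"short flows (<=4 packets): {100 * short_flows / total:.1f}%")
```
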
5

Hardwarové předzpracování paketů pro urychlení síťových aplikací / Hardware Packet Preprocessing for Acceleration of Network Applications

Vondruška, Lukáš Unknown Date
This thesis deals with the design and implementation of an FPGA unit that performs hardware-accelerated extraction of header fields from network packets. Using the NetCOPE platform, a flexible and effective high-performance solution for high-speed networks is proposed. The theoretical part presents the classical protocol model and an analysis of Internet traffic. The main part of the thesis then focuses on the key issues in hardware packet preprocessing, such as packet classification and deep packet inspection. The author also discusses possible technology platforms that can be used to accelerate network applications.
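
A software analogue of the header field extraction performed by the FPGA unit might look like the sketch below, which pulls MAC addresses, EtherType, protocol number, and IP addresses out of a raw Ethernet/IPv4 frame. It is a simplified illustration (no VLAN tags, IP options, or IPv6), not the thesis's hardware design.

```python
import struct

def extract_headers(frame: bytes) -> dict:
    # Ethernet header: dst MAC (6 bytes), src MAC (6 bytes), EtherType (2).
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fields = {
        "dst_mac": dst.hex(":"),
        "src_mac": src.hex(":"),
        "ethertype": hex(ethertype),
    }
    if ethertype == 0x0800:  # IPv4 payload follows
        # Fixed 20-byte IPv4 header (ignoring options for simplicity).
        ver_ihl, tos, length, ident, flags, ttl, proto, csum, s, d = \
            struct.unpack("!BBHHHBBH4s4s", frame[14:34])
        fields.update({
            "ihl_words": ver_ihl & 0x0F,
            "protocol": proto,                 # 6 = TCP, 17 = UDP
            "src_ip": ".".join(map(str, s)),
            "dst_ip": ".".join(map(str, d)),
        })
    return fields
```
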
6

An approach for profiling distributed applications through network traffic analysis

Vieira, Thiago Pereira de Brito 05 March 2013
Distributed systems have been adopted for building modern Internet services and cloud computing infrastructures in order to obtain services with high performance, scalability, and reliability. Cloud computing SLAs require problems in a production infrastructure to be identified, diagnosed, and solved quickly, in order to avoid negative impacts on the quality of service provided to clients. Thus, the detection of error causes and the diagnosis and reproduction of errors are challenges that motivate efforts towards the development of less intrusive mechanisms for monitoring and debugging distributed applications at runtime. Network traffic analysis is one option for measuring distributed systems, although there are limitations on the capacity to process large amounts of network traffic in a short time and on scalability when resource demand varies. The goal of this dissertation is to analyze the processing-capacity problem of measuring distributed systems through network traffic analysis, in order to evaluate the performance of distributed systems at a data center using commodity hardware and cloud computing services in a minimally intrusive way. We propose a new approach based on MapReduce for deep inspection of distributed application traffic, in order to evaluate the performance of distributed systems at runtime using commodity hardware. In this dissertation we evaluate the effectiveness of MapReduce for a deep packet inspection algorithm: its processing capacity, completion time speedup, processing-capacity scalability, and the behavior of the MapReduce phases when applied to deep packet inspection for extracting indicators of distributed applications.
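
The MapReduce formulation of deep packet inspection can be sketched as a map phase that inspects each payload and emits (indicator, count) pairs, and a reduce phase that aggregates them. The payloads and indicator names below are illustrative assumptions, and a real deployment would distribute the phases across a cluster rather than run in-process.

```python
from collections import defaultdict

# Hypothetical captured payloads from a distributed application's traffic.
packets = [b"GET /status HTTP/1.1", b"HTTP/1.1 500 Internal Server Error",
           b"GET /data HTTP/1.1", b"HTTP/1.1 200 OK"]

def map_phase(pkt: bytes):
    # Deep-inspect the payload and emit (indicator, 1) pairs,
    # here keyed by HTTP response status code.
    if pkt.startswith(b"HTTP/1.1"):
        status = pkt.split(b" ")[1].decode()
        yield (f"status_{status}", 1)

def reduce_phase(pairs):
    # Sum the counts per indicator key.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

pairs = (pair for pkt in packets for pair in map_phase(pkt))
print(reduce_phase(pairs))  # e.g. {'status_500': 1, 'status_200': 1}
```
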
7

Detekce P2P sítí / Detection of P2P Networks

Březina, Matej January 2008
This thesis deals with the design, implementation, and testing of a software system for detecting p2p (peer-to-peer) networks, based on a combination of BPF prefiltering and POSIX regular-expression matching of packet payloads against known p2p protocol communications. The proposed detection system includes a database of rules for the most widespread p2p protocols, in a format resembling the definitions used by the L7-filter classifier. The application is implemented in C, runs in userspace, and targets all POSIX-compatible platforms. Combined with user-attached QoS control, the detector forms a complete solution for reducing traffic from common p2p protocols.
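
A minimal sketch of the payload-matching stage might look as follows; the two signatures are simplified stand-ins in the spirit of L7-filter patterns, not the rule database shipped with the thesis, and the BPF prefilter is only alluded to in a comment.

```python
import re

# Hypothetical, simplified payload signatures for two p2p protocols.
SIGNATURES = {
    "bittorrent": re.compile(rb"^\x13BitTorrent protocol"),
    "gnutella":   re.compile(rb"^GNUTELLA (CONNECT|OK)"),
}

def classify(payload: bytes) -> str:
    # In the full system, a BPF prefilter would already have narrowed the
    # traffic to candidate flows before this payload scan runs.
    for name, pattern in SIGNATURES.items():
        if pattern.search(payload):
            return name
    return "unknown"

print(classify(b"\x13BitTorrent protocol..."))  # bittorrent
```
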
8

Eingriffe in den Internet-Datenverkehr zur Durchsetzung des Urheberrechts / Interventions in Internet Traffic to Enforce Copyright

Fokken, Martin 28 October 2021
The fundamental right to property, which is guaranteed at Member State and EU level, requires that copyright be effectively protected. Technical measures implemented or required by states, such as IP/DNS blocking or Deep Packet Inspection, enable, inter alia, the targeted blocking of transmissions of data whose unlicensed exchange over the internet – e.g. via streaming portals – infringes copyrights. Without such technical measures there is an enforcement deficit on the internet, as the direct ("content providers") and indirect providers ("host providers") of the content often cannot be effectively held liable; the technical operators of internet infrastructure ("internet service providers"), on the other hand, cannot evade governmental intervention. The technical measures mentioned, however, affect various fundamental rights of the German Constitution (the "Grundgesetz") and the Charter of Fundamental Rights of the European Union. The rights affected are, in particular, the Freedom to Conduct a Business (Article 16 of the Charter) of internet service providers, the Freedom of Information (Article 11(1) of the Charter), the Right to Respect for Communications (Article 7 of the Charter) and the Right to Protection of Personal Data (Article 8(1) of the Charter) of internet users, and the respective Member State equivalents of these fundamental rights. The subject matter of this thesis is to examine whether the use of technological measures to enforce copyright complies with European primary law and the German Grundgesetz.
9

Aspekte van regsbeheer in die konteks van die Internet / Aspects of legal regulation in the context of the Internet

Gordon, Barrie James 06 1900
The world is currently structured into different states, and this is premised on the international-law concept of sovereignty. States have the capacity to structure their own affairs, but the development of the Internet as a globally distributed network has violated this principle. It seemed that the development of the Internet would mean the end of sovereignty and statehood. A historical overview shows that regulators were initially unsure of how this new medium should be dealt with. It appeared that new technologies that fragment the Internet could be used to enforce state-bound law. Several states of the world have used different methodologies in trying to regulate the Internet at state level, and this has led to the haphazard way in which the Internet is currently regulated. This study examines various aspects of legal regulation in the context of the Internet and determines how the Internet is currently regulated. Relevant legislation of several states is discussed throughout the study. Four prominent states that have made several important interventions regarding the regulation of the Internet are highlighted further: the United States, the People's Republic of China, the European Union as the representative of European countries, and South Africa.
Aspects that need to be addressed at the level of international law, such as international organizations and international legal theories regarding the regulation of the Internet, are also discussed. The findings that follow from this study are used to make several recommendations, which in turn are used to construct a new model for a more meaningful way in which the Internet could be regulated. Since the present study is undertaken in the context of international law, it concludes with a discussion of cyber sovereignty, an exposition of how sovereignty should be applied with regard to the Internet. The conclusion is enlightening: the development of the Internet does not signal the end of sovereignty, but rather confirms it. / Criminal and Procedural Law / LLD
10

Internet censorship in the European Union

Ververis, Vasilis 30 August 2023
This is a thesis on Internet censorship in the European Union (EU), specifically regarding the technical implementation of blocking methodologies and filtering infrastructure in various EU countries. The analysis examines the use of this infrastructure for information controls and for blocking access to websites and other network services available on the Internet. The thesis follows a three-part structure. First, it examines cases of Internet censorship in various EU countries, specifically Greece, Cyprus, and Spain. It then presents a new testing methodology for determining censorship of applications available in mobile stores. Finally, it analyzes all 27 EU countries using historical network measurements collected by Open Observatory of Network Interference (OONI) volunteers from around the world, publicly available blocklists used by EU member states, and reports issued by network regulators in each country.
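
One analysis described here, cross-referencing measured anomalies against member-state blocklists, can be sketched as below; the domains, record format, and verdict labels are illustrative assumptions rather than OONI's actual data model.

```python
# Hypothetical published blocklist for one member state.
blocklist = {"example-casino.test", "blocked-stream.test"}

# Hypothetical measurement records: (domain, anomaly_observed) from probes.
measurements = [
    ("example-casino.test", True),
    ("news-site.test", True),
    ("blocked-stream.test", False),
]

for domain, anomaly in measurements:
    listed = domain in blocklist
    if anomaly and listed:
        verdict = "confirmed block"
    elif anomaly:
        verdict = "anomaly, not on blocklist (possible unreported blocking)"
    elif listed:
        verdict = "listed but reachable (stale or partial enforcement)"
    else:
        verdict = "reachable"
    print(f"{domain}: {verdict}")
```
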
