About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
11

A Distributed Approach to Passively Gathering End-to-End Network Performance Measurements

Simpson, Charles Robert, Jr. 12 April 2004 (has links)
NETI@home is an open-source software package that collects network performance statistics from end-systems. It has been written for and tested on the Windows, Solaris, and Linux operating systems, with testing for other operating systems to be completed soon. NETI@home is designed to run on end-user machines and collect various statistics about Internet performance. These statistics are then sent to a server at the Georgia Institute of Technology, where they are collected and made publicly available. This tool gives researchers much-needed data on the end-to-end performance of the Internet, as measured by end-users. NETI@home's basic approach is to sniff packets sent from and received by the host and to infer performance metrics from these observed packets. NETI@home users can select a privacy level that determines which types of data are gathered and which are not reported. NETI@home is designed to be an unobtrusive software system that runs quietly in the background with little or no intervention by the user, using few resources.
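The passive-inference approach described above can be sketched briefly. The following is an illustrative sketch only, not NETI@home's actual code: it assumes the Python scapy library and permission to capture on the local interface, and it estimates TCP handshake round-trip time from the gap between an outgoing SYN and the matching SYN-ACK.

# Minimal sketch (assumed setup: scapy installed, run with sniffing privileges).
# Purely passive: no probe traffic is injected, mirroring the unobtrusive,
# packet-sniffing design described above.
from scapy.all import sniff, TCP, IP

pending_syns = {}  # (peer IP, peer port, initial seq) -> time the SYN was seen

def handle(pkt):
    if not (IP in pkt and TCP in pkt):
        return
    flags = pkt[TCP].flags
    if flags == "S":      # outgoing SYN: remember when it was sent
        pending_syns[(pkt[IP].dst, pkt[TCP].dport, pkt[TCP].seq)] = pkt.time
    elif flags == "SA":   # SYN-ACK: match it to the SYN it acknowledges
        key = (pkt[IP].src, pkt[TCP].sport, pkt[TCP].ack - 1)
        sent = pending_syns.pop(key, None)
        if sent is not None:
            print(f"handshake RTT to {pkt[IP].src}: {(pkt.time - sent) * 1000:.1f} ms")

sniff(filter="tcp", prn=handle, store=False)  # store=False keeps memory use low

A full system would also track throughput, loss, and retransmissions, and would apply the user's chosen privacy level before reporting anything.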
12

Towards Ideal Network Traffic Measurement: A Statistical Algorithmic Approach

Zhao, Qi 03 October 2007 (has links)
With the emergence of computer networks as one of the primary platforms of communication, and with their adoption for an increasingly broad range of applications, there is a growing need for high-quality network traffic measurements to better understand, characterize, and engineer network behavior. Because the original design of the Internet lacks fine-grained measurement capabilities, it does not provide enough data to compute, or even approximate, some traffic statistics such as traffic matrices and per-link delay. While it is possible to infer these statistics from indirect aggregate measurements that are widely supported by network measurement devices (e.g., routers), obtaining the best possible inferences is often a challenging research problem. We call this the "too little data" problem, after its root cause. Interestingly, while "too little data" is clearly a problem, "too much data" is not a blessing either. With the rapid increase in network link speeds, even keeping sampled, summarized network traffic (for inferring various network statistics) at low sampling rates produces too much data to store, process, and transmit on measurement devices. In summary, high-quality measurement in today's Internet is very challenging due to resource limitations and a lack of built-in support, manifested as either "too little data" or "too much data". We present new practices and proposals to alleviate these two problems. The contribution is fourfold: i) designing universal methodologies towards ideal network traffic measurements; ii) providing accurate estimations for several critical traffic statistics guided by the proposed methodologies; iii) offering multiple useful and extensible building blocks which can be used to construct a universal network measurement system in the future; iv) leading to some notable mathematical results, such as a new large deviation theorem that finds applications in various areas.
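To make the "too much data" side concrete, the following is a hedged illustration, not taken from the thesis: the standard sampling-and-scaling idea keeps each packet independently with probability p and scales the sampled totals by 1/p to obtain an approximately unbiased estimate of per-flow byte counts. The flow identifiers and toy packet stream are invented for the example.

import random
from collections import defaultdict

def sampled_flow_bytes(packets, p=0.01, seed=0):
    """Estimate bytes per flow from a p-sampled packet stream."""
    rng = random.Random(seed)
    sampled = defaultdict(int)
    for flow_id, size in packets:   # (flow identifier, packet size in bytes)
        if rng.random() < p:        # keep roughly a fraction p of the packets
            sampled[flow_id] += size
    # Inverse-probability scaling: dividing by p estimates the true volume;
    # the estimate's variance shrinks as p grows, which is the storage/accuracy
    # trade-off the "too much data" problem forces measurement devices to make.
    return {flow: total / p for flow, total in sampled.items()}

# Toy stream: one heavy flow and one light flow.
stream = [("10.0.0.1->10.0.0.2", 1500)] * 100_000 + [("10.0.0.3->10.0.0.4", 64)] * 5_000
print(sampled_flow_bytes(stream, p=0.01))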
13

Towards Realistic Datasets for Classification of VPN Traffic: The Effects of Background Noise on Website Fingerprinting Attacks

Sandquist, Christoffer, Ersson, Jon-Erik January 2023 (has links)
Virtual Private Networks (VPNs) are a booming business with significant margins once a solid user base has been established, and big VPN providers are putting considerable amounts of money into marketing. However, there exist Website Fingerprinting (WF) attacks that can correctly predict which website a user is visiting based on web traffic, even when it passes through a VPN tunnel. These attacks are fairly accurate in closed-world scenarios, but those scenarios are still far from capturing typical user behaviour.

In this thesis, we explore and build tools that can collect VPN traffic from different sources. This traffic can then be combined into more realistic datasets on which we evaluate the accuracy of WF attacks. We hope that these datasets will help us and others better simulate more realistic scenarios.

Over the course of the project we developed automation scripts and data-processing tools using Bash and Python. Traffic was collected on a server provided by our university using a combination of containerisation, the scripts we developed, Unix tools, and Wireshark. After some manual data cleaning we combined our captured traffic with a provided dataset of web traffic and created a new dataset that we used to evaluate the accuracy of three WF attacks.

By the end we had collected 1345 capture files of VPN traffic, all of it from the popular livestreaming website twitch.tv. Livestreaming channels were picked from the twitch.tv front page, and we ended up with 245 unique channels in our dataset. Using our dataset we managed to decrease the accuracy of all three tested WF attacks from 90% down to 47% with a WF attack confidence threshold of 0.0, and from 74% down to 17% with a confidence threshold of 0.99. Even though this is a significant decrease in accuracy, it comes with a roughly tenfold increase in the number of captured packets for the WF attacker. Although we only collected traffic from twitch.tv, we still obtained some interesting results and would welcome continued research in this area.

Thesis artifacts are available at github.com/C-Sand/rds-collect.
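As an illustration of the confidence-threshold evaluation mentioned above, the following hedged sketch is not the thesis code: it assumes a classifier that outputs per-class probabilities (scikit-learn style), and a prediction only counts if its top-class probability reaches the threshold, so a threshold of 0.0 accepts every prediction while 0.99 keeps only very confident ones.

import numpy as np

def thresholded_accuracy(probs, y_true, threshold):
    """Accuracy over the samples where the classifier's confidence reaches threshold."""
    confidence = probs.max(axis=1)          # probability of the predicted class
    predictions = probs.argmax(axis=1)
    kept = confidence >= threshold          # predictions the attacker acts on
    if not kept.any():
        return float("nan")                 # nothing confident enough to score
    return float((predictions[kept] == y_true[kept]).mean())

# Toy example: 3 traces, 2 candidate websites (labels 0 and 1).
probs = np.array([[0.55, 0.45], [0.995, 0.005], [0.30, 0.70]])
y_true = np.array([1, 0, 1])
print(thresholded_accuracy(probs, y_true, 0.0))   # all predictions count: 2/3 correct
print(thresholded_accuracy(probs, y_true, 0.99))  # only the confident one: 1/1 correct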
14

Internet censorship in the European Union

Ververis, Vasilis 30 August 2023 (has links)
This is a thesis on Internet censorship in the European Union (EU), specifically regarding the technical implementation of blocking methodologies and filtering infrastructure in various EU countries. The analysis examines the use of this infrastructure for information controls and the blocking of access to websites and other network services available on the Internet. The thesis follows a three-part structure. Firstly, it examines cases of Internet censorship in various EU countries, specifically Greece, Cyprus, and Spain. Subsequently, it presents a new testing methodology for determining censorship of applications available in mobile stores. Additionally, it analyzes all 27 EU countries using historical network measurements collected by Open Observatory of Network Interference (OONI) volunteers from around the world, publicly available blocklists used by EU member states, and reports issued by network regulators in each country.
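As a rough illustration of the measurement analysis described, the following hedged sketch is not the thesis code: it tallies confirmed-blocked, anomalous, and unblocked results per country from OONI-style web-connectivity records. The field names follow OONI's published measurement metadata, but the records here are invented.

from collections import Counter, defaultdict

def summarise(measurements):
    """Count blocked / anomalous / ok results per country code."""
    per_country = defaultdict(Counter)
    for m in measurements:
        country = m.get("probe_cc", "??")
        if m.get("confirmed"):       # blocking confirmed, e.g. a known blockpage
            per_country[country]["blocked"] += 1
        elif m.get("anomaly"):       # unexpected result that needs closer inspection
            per_country[country]["anomalous"] += 1
        else:
            per_country[country]["ok"] += 1
    return per_country

records = [  # invented examples in OONI's style
    {"probe_cc": "GR", "input": "http://example.org/", "confirmed": True,  "anomaly": True},
    {"probe_cc": "ES", "input": "http://example.org/", "confirmed": False, "anomaly": True},
    {"probe_cc": "CY", "input": "http://example.org/", "confirmed": False, "anomaly": False},
]
for cc, counts in summarise(records).items():
    print(cc, dict(counts))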
