1

WBG (Whois Based Geolocation): uma estratégia para localização geográfica de hosts na Internet / WBG (Whois Based Geolocation): a strategy for the geographic location of Internet hosts

Endo, Patricia Takako 31 January 2008 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Based, for example, on the geographic location of a given host on the Internet, specialized services can be offered, such as: a) web pages with regional preferences (e.g. online users can receive targeted advertising or have the language used to present content selected automatically); b) control of data availability according to the user's location (e.g. access to certain data can be restricted through regional policies, and transactions can be authorized only from pre-established locations); c) study and analysis of geolocated traffic, to understand who communicates with whom at the level of users, regions and countries, and to identify routing anomalies. What these applications have in common is their dependence on strategies known as geolocation. However, some of these mechanisms have low accuracy or produce geographic location estimates that are unacceptable for certain applications. Studies that improve the precision, as well as the completeness, of the strategies used to infer the geolocation of Internet hosts are therefore of great importance. The main objectives of this work are the study of existing geolocation strategies; the proposal of a strategy that improves both the precision of geographic location inferences for Internet hosts and the completeness of the results; and the study of geolocated traffic from a database of the academic network of the State of Pernambuco. The strategy developed, named WBG (Whois Based Geolocation), is based on online whois lookups and includes a heuristic based on the traceroute tool.
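The abstract stops at naming the building blocks (online whois lookups plus a traceroute-based heuristic). As a minimal, hedged sketch of the whois half only, the Python snippet below shells out to a system `whois` client and scrapes a `country:` field; the availability of the `whois` binary and the exact field name are assumptions, not details from the thesis.

```python
import re
import subprocess
from typing import Optional


def whois_country(ip: str) -> Optional[str]:
    """Run the system whois client for an address and scrape a country code, if any."""
    result = subprocess.run(["whois", ip], capture_output=True, text=True, timeout=30)
    match = re.search(r"^country:\s*([A-Za-z]{2})", result.stdout,
                      re.IGNORECASE | re.MULTILINE)
    return match.group(1).upper() if match else None


if __name__ == "__main__":
    print(whois_country("8.8.8.8"))  # e.g. 'US', or None if no country field is present
```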
2

AI-Powered Network Traffic Prediction / AI baserad prediktering av nätverkstraffik

Bolakhrif, Amin January 2021 (has links)
In this Internet and big data era, resource management has become a crucial task to ensure the quality of service for users in modern wireless networks. Accurate and rapid Internet traffic data is essential for many applications in computer networking to enable high networking performance. Such applications facilitate admission control, congestion control, anomaly detection, and bandwidth allocation. In radio networks, these mechanisms are typically handled by features such as Carrier Aggregation, Inter-Frequency Handover, and Predictive Scheduling. Since these mechanisms often take time and cost radio resources, it is desirable to only enable them for users expected to gain from them. The problem of network traffic flow prediction is forecasting aspects of an ongoing traffic flow to mobilize networking mechanisms that ensure both user experience quality and resource management. The expected size of an active traffic flow, its expected duration, and the anticipated number of packets within the flow are some of these aspects. Additionally, forecasting individual packet sizes and arrival times can also be beneficial. The widespread availability of Internet flow data allows machine learning algorithms to learn the complex relationships in network traffic and form models capable of forecasting traffic flows. This study proposes a deep-learning-based flow prediction method, established using a residual neural network (ResNet) for regression. The proposed model architecture demonstrates the ability to accurately predict the packet count, size, and duration of flows using only the information available at the arrival of the first packet. The proposed method also manages to outperform traditional machine learning methods such as linear regression and decision trees, as well as conventional deep neural networks. The results indicate that the proposed method is able to predict the general magnitude of flows with high accuracy, providing precise magnitude classifications. / I denna Internet och data era har resurshantering blivit allt mer avgörande för att säkerställa tjänstekvaliteten för användare i moderna trådlösa nätverk. Noggrann och hastig Internet-trafikinformation är avgörande för många applikationer inom datanätverk för att möjliggöra hög nätverksprestanda. Sådana applikationer underlättar kontroll av behörighet, kontroller av trängsel, detektering av avvikelser och allokering av bandbredd. I radionätverk hanteras dessa mekanismer vanligtvis av funktioner som Carrier Aggregation, Inter-Frequency Handover och Predictive Scheduling. Eftersom dessa funktioner ofta tar tid och kostar resurser så är det önskvärt att nätverk endast möjliggör sådana funktioner för användare som förväntas dra nytta av dem. Prediktering av trafikflöden i nätverk grundar sig i att förutsäga aspekter av ett pågående trafikflöde för att kunna mobilisera nätverksfunktioner som säkerställer både kvaliteten för användare samt resurshantering. Den förväntade storleken på ett aktivt trafikflöde, dess varaktighet och mängden paket inom flödet är några av dessa aspekter. Det kan dessutom vara fördelaktigt att förutsäga individuella paketstorlekar och ankomsttider. Den stora tillgången till data med nätverks-flöden gör det möjligt för maskininlärningsmetoder att lära sig de komplexa förhållandena i nätverkstrafik och därigenom formulera modeller som kan förutsäga flöden i nätverk.
Denna studie föreslår en djupinlärningsbaserad metod för att prediktera flöden i nätverk, med hjälp av ett anpassat neuralt nätverk som utnyttjar genvägar i modellens konstruktion (ResNet). Den föreslagna modell-arkitekturen visar sig nöjaktigt kunna förutsäga antalet paket, storlek och varaktighet för flöden med endast den information som är tillgänglig från det första paketet. Dessutom lyckas den föreslagna metoden att överträffa både traditionella maskininlärningsmetoder som linjär regression och beslutsträd, samt konventionella djupa neurala nätverk. Resultaten indikerar att den föreslagna metoden kan förutsäga den allmänna storleken på flödens egenskaper med hög noggrannhet, givet att IP-adresser är tillgängliga.
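The abstract names the architecture (a ResNet used for regression) but not its exact shape. The sketch below shows one plausible fully connected residual regressor in PyTorch that maps first-packet features to packet count, byte size and duration; the feature count, layer widths and block count are invented for illustration, not taken from the thesis.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Fully connected residual block: two linear layers wrapped by a skip connection."""

    def __init__(self, dim: int):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.activation = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.activation(x + self.body(x))


class FlowRegressor(nn.Module):
    """Maps first-packet features to three targets: packet count, byte size, duration."""

    def __init__(self, n_features: int = 16, hidden: int = 64, n_blocks: int = 3):
        super().__init__()
        self.stem = nn.Linear(n_features, hidden)
        self.blocks = nn.Sequential(*(ResidualBlock(hidden) for _ in range(n_blocks)))
        self.head = nn.Linear(hidden, 3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.blocks(torch.relu(self.stem(x))))


if __name__ == "__main__":
    model = FlowRegressor()
    dummy_batch = torch.randn(8, 16)   # 8 flows, 16 invented first-packet features
    print(model(dummy_batch).shape)    # torch.Size([8, 3])
```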
3

EXAMINING FACTORS INFLUENCING NETWORK EXPERT'S DECISION WHETHER TO RECOMMEND INTERNET PROTOCOL (IP) MIGRATION (IPV4 to IPV6) OR NOT IN ORGANIZATIONS

Kibru Shomoro, Abenezer January 2014 (has links)
The findings of this research provide a clear understanding of why organizational technology (network technology) adoption decision makers decide to recommend, or not to recommend, an Internet Protocol migration from IPv4 to IPv6 (adoption of the latest protocol) to their organizations. A meticulous review of the literature on the technology adoption practices of various organizations served as a base for developing the research questions and corresponding hypotheses. The research hypotheses were developed to examine how an organization's technology (network technology) adoption decision maker's perception of IPv6 (its quality of service, auto-configuration capability, security, mobility, address abundance and cost effectiveness) affects their decision. The results indicate that network experts' decisions to recommend a new network technology adoption, specifically an Internet Protocol migration from IPv4 to IPv6, are highly influenced by their perception of these factors; their perception of the aforementioned factors is therefore instrumental in whether they recommend encouraging the migration. It is also implied that top-level managers can make a technology adoption or migration decision based on the experts' recommendation, knowing that the experts' decision is highly influenced by their perception of the capabilities and functionality of the new IP (IPv6). Beyond its contribution to organizations of the kind in which the study was conducted, the result of this study also helps organizations engaged in other kinds of business activity, such as network infrastructure manufacturers and application developers, by providing essential information regarding which functionalities and capabilities play a major role in an organization's choice of a certain network infrastructure. / Program: Masterutbildning i Informatik
4

Analýza zdrojů nevyžádané elektronické pošty / Analysis of email spam sources

Caha, Tomáš January 2019 (has links)
The first part of this thesis deals with an approach to designing and developing an application for IP geolocation using various geolocation databases. Methods for the geographical location of network devices are presented, together with the amount of data made available by the chosen commercial and freely accessible geolocation databases. The data are summarized with a focus on the methods of obtaining information about IP addresses from the various databases. The paper also describes how the Python application and its parts were developed so that they can easily be reused in other programs. A command-line program was created to demonstrate that all parts of the developed application work properly. The whole application is freely accessible under the conditions of the MIT license, published on GitHub (https://github.com/tomas-net/ip2geotools/) and in the Python Package Index (PyPI). The second part of this thesis deals with the description of cyberthreats and the use of the developed application for mass geolocation of IP addresses listed in chosen freely accessible lists as sources of email spam or cyberattacks. Geographical analysis of the cyberattack sources shows their countries of origin. The results of the analysis are presented in graphs and maps.
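For orientation, a minimal usage sketch of the published package follows; the backend class, method and attribute names follow the project's documented example and may differ between package versions.

```python
# pip install ip2geotools
from ip2geotools.databases.noncommercial import DbIpCity

# Free tier of the db-ip.com backend; names assumed from the project's README.
response = DbIpCity.get("147.229.2.90", api_key="free")
print(response.country, response.city)
print(response.latitude, response.longitude)
print(response.to_json())
```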
5

Vyhledávání podobností v síťových bezpečnostních hlášeních / Similarity Search in Network Security Alerts

Štoffa, Imrich January 2020 (has links)
Network monitoring systems generate a high number of alerts reporting on anomalies and suspicious activity of IP addresses. Of this huge number of alerts, only a small fraction is high priority and relevant for human evaluation; the rest is likely to be neglected. Assume that by analyzing large volumes of these low-priority alerts we can discover valuable information, namely coordinated IP addresses and the types of alerts likely to be correlated. This knowledge improves situational awareness in the field of network monitoring and reflects a requirement of security analysts: to make informed decisions, they need proper tools for retrieving contextual information about events on the network. To validate the assumption, a new method is introduced to discover groups of coordinated IP addresses that exhibit temporal correlation in the arrival pattern of their events. The method is evaluated on real-world data from a sharing platform that accumulates 2.2 million alerts per day. The results show that the method indeed detects truly correlated groups of IP addresses.
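The abstract does not spell out the detection algorithm. A naive illustration of temporal correlation over alert arrival patterns is sketched below: each address's alerts are binned into a count series over a shared time axis and the series are compared pairwise. The bin size and correlation threshold are arbitrary choices, not values from the thesis.

```python
import numpy as np


def correlated_ip_pairs(events, bin_seconds=300, threshold=0.9):
    """events: dict mapping an IP address to a sorted array of alert timestamps (seconds).

    Bins each address's alerts into a count series over a shared time axis and
    returns pairs whose series have a Pearson correlation above the threshold.
    """
    start = min(ts[0] for ts in events.values())
    end = max(ts[-1] for ts in events.values())
    edges = np.arange(start, end + 2 * bin_seconds, bin_seconds)
    series = {ip: np.histogram(ts, bins=edges)[0].astype(float)
              for ip, ts in events.items()}

    ips = list(series)
    pairs = []
    for i in range(len(ips)):
        for j in range(i + 1, len(ips)):
            a, b = series[ips[i]], series[ips[j]]
            if a.std() == 0 or b.std() == 0:  # constant series: correlation undefined
                continue
            r = float(np.corrcoef(a, b)[0, 1])
            if r >= threshold:
                pairs.append((ips[i], ips[j], r))
    return pairs
```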
6

Webová aplikace zobrazující polohu IP stanic / Web application for getting location of IP nodes

Modrák, Zdeněk January 2015 (has links)
The thesis deals with geolocation on the Internet. It describes the available geolocation approaches and focuses mainly on passive geolocation methods, which include location databases; these are covered both theoretically and in the practical part of the thesis. In the practical part, a comprehensive system for geolocation in the Internet environment is built that uses paid and free geolocation databases, with WHOIS as an additional database. Data from the paid databases are processed and the accuracy of the databases is evaluated.
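The abstract does not state how database accuracy is evaluated; a common metric, sketched here, is the great-circle (haversine) distance between a database's position estimate and a reference location, with the Earth radius taken as 6371 km.

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))


# Example: estimate placed in Prague for a host actually in Brno, roughly 185 km off
print(haversine_km(49.1951, 16.6068, 50.0755, 14.4378))
```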
7

Registrační databáze IP adres / Registration database for IP nodes

Smrčka, Lukáš January 2015 (has links)
This thesis is focused on finding the physical location of stations by passive geolocation techniques, particularly using the registration databases of IP addresses. The first two parts focus on a theoretical analysis of this problem; the next two parts deal with its solution and a discussion of the results.
8

Longitudinal Characterization of the IP Allocation of the Major Cloud Providers and Other Popular Service Providers : An analysis of the Internet Plane projects collection / Karaktärisering av IP adresser allokerade till moln- och andra populära tjänstleverantörer

Girma Abera, Hyab, Grikainis, Gasparas January 2022 (has links)
With the growth of the internet and the exhaustion of IPv4 addresses, the allocation of IP addresses and the routing between autonomous systems are important factors in which paths are taken on the internet. Paths to different destinations are shaped by neighbouring autonomous systems, and their relations with each other matter when finding an optimal route from source to destination. In this thesis we look at the longitudinal change in IP addresses observed on the internet that are owned by large organizations. To achieve this we build tools for extracting and parsing data from an iPlane dataset, which we then compare to the largest domains and cloud providers. From our results we conclude that large domains and cloud providers are observed more often as time passes and that they do not seem to peer with each other. We also find that routing policies vary between autonomous systems.
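One building block for this kind of analysis, sketched below with Python's standard ipaddress module, is mapping an observed address to the provider whose announced prefix contains it. The prefix list is a placeholder built from documentation ranges, not data from the thesis or from iPlane.

```python
import ipaddress
from typing import Optional

# Placeholder prefixes (203.0.113.0/24 and 2001:db8::/32 are documentation ranges);
# a real analysis would load the providers' published address ranges instead.
PROVIDER_PREFIXES = {
    "example-cloud": [ipaddress.ip_network("203.0.113.0/24"),
                      ipaddress.ip_network("2001:db8::/32")],
}


def owning_provider(ip: str) -> Optional[str]:
    """Return the provider whose prefix list contains the address, if any."""
    addr = ipaddress.ip_address(ip)
    for provider, prefixes in PROVIDER_PREFIXES.items():
        if any(addr.version == p.version and addr in p for p in prefixes):
            return provider
    return None


if __name__ == "__main__":
    print(owning_provider("203.0.113.42"))  # example-cloud
    print(owning_provider("192.0.2.1"))     # None
```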
9

Anonymizace PCAP souborů / Anonymization of PCAP Files

Navrátil, Petr January 2020 (has links)
This diploma thesis deals with the design and implementation of an application suitable for the anonymization of PCAP files. The thesis presents the TCP/IP model and, for each layer, highlights attributes that can be used to identify real people or organizations. Anonymization methods suitable for modifying the highlighted attributes and sensitive data are described. The implemented application uses the TShark tool to parse the byte data of the PCAP format into the JSON format used inside the application. TShark supports a large number of network protocols, which allows the application to anonymize various attributes. The anonymization process is controlled by anonymization policies that can be customized by adding new attributes or anonymization methods.
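The decoding step described above can be reproduced roughly as follows, assuming a local TShark installation; the anonymization policies themselves are specific to the implemented application and are not sketched here.

```python
import json
import subprocess


def pcap_to_json(path: str) -> list:
    """Decode a capture file into TShark's JSON packet representation."""
    completed = subprocess.run(
        ["tshark", "-r", path, "-T", "json"],  # -T json: full protocol decode as JSON
        capture_output=True, text=True, check=True,
    )
    return json.loads(completed.stdout)


if __name__ == "__main__":
    packets = pcap_to_json("capture.pcap")  # hypothetical input file
    print(len(packets), "packets decoded")
```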
10

La protection des libertés individuelles sur le réseau internet / The protection of Individuals rights on the internet

Criqui-Barthalais, Géraldine 07 December 2018 (has links)
Cette étude envisage le réseau internet comme un nouvel espace invitant à réinterpréter les libertés de la personne physique. Au titre de celles-ci, sont protégées la liberté individuelle, entendue comme le fait de ne pouvoir être arbitrairement détenu et la liberté d’aller et venir. Il doit en aller de même sur le réseau. Etablissant une analogie avec ces libertés, la première partie de la thèse consacre deux libertés : la liberté d’accès au réseau et la liberté de naviguer sur le web. La première implique de définir le contenu d’un service public de l’accès. De plus, il faut affirmer que la coupure d’accès au réseau doit être envisagée comme une mesure privative de liberté ; elle ne peut donc être décidée que par le juge judiciaire. L’affirmation de la liberté de naviguer sur le web conduit à envisager le régime du blocage des sites, une mesure qui ne peut intervenir que dans le cadre d’une police administrative spéciale. Dans la seconde partie il apparaît que ces deux libertés n’ont toutefois de sens que si l’individu a accès au réseau anonymement et n’est pas surveillé arbitrairement quand il navigue sur le web. Cette étude cherche ainsi à préciser le régime devant encadrer le mécanisme d’adressage du réseau. Sont définies les conditions du contrôle de l’identité de l’internaute à partir de son adresse IP. Enfin, il est soutenu qu’un principe général d’effacement des données révélant les sites visités doit être affirmé, principe qui s’applique aux différents acteurs du réseau, notamment les moteurs de recherche. L’interception de ces données ne peut procéder que d’un pouvoir sécuritaire ou hiérarchique sur l’internaute. / This study considers the internet as a new territory where rights guaranteed to each individual in physical space can be promoted; not only free speech and privacy, but also the Habeas Corpus prerogative writ, which protects against unlawful imprisonment, and the right to freedom of movement. Thus, proceeding by analogy, the dissertation intends to promote two specific digital rights: the freedom to connect to the internet and the freedom to surf on the web. The freedom to connect should be part of a public service which promotes this access through public policies. Moreover, barring someone from using the internet can only be decided by a judge. The freedom to surf should protect web users against unreasonable restrictions. Thus, measures blocking illegal websites should not come through self-regulation but through a legal framework which defines how administrative authorities are entitled to decide such restrictions. The protection of these two rights entails further obligations. Individuals must be able to access the internet anonymously and they must be aware of how the government monitors their actions on the web. This study tries to outline the content of measures aiming to frame network addressing mechanisms. Identity checks based on the IP address should be subject to a strict legal regime. The study concludes that individuals have to be protected from surveillance when data reveal their choices among websites while they are connected. Internet access providers, but also search engines and browsers, must delete this data. Only special measures taken by a public entity or someone entitled to control the web users may lead to this kind of data retention.
