11

Nalezení fyzické pozice stanice v síti Internet / Location of node real position on the Internet

Kopeček, Tomáš January 2010 (has links)
This thesis focuses on finding the position of computers on the Internet, a need that has arisen over the last several years with the creation of overlay networks. Many algorithms exist for this task. The thesis describes the King method, which estimates the distance between communicating stations by using the domain name system. The aim of the work is to verify the accuracy of the King method in the experimental PlanetLab network, which provides access to more than 1000 stations worldwide.
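The King technique itself is only named in the abstract; as a rough orientation, a sketch of its core idea (estimate the latency between two hosts from the difference between a recursive and a direct DNS query to one host's name server) might look as follows. This is an illustrative assumption-laden sketch, not the thesis's code; it assumes dnspython's dns.message/dns.query interfaces and ignores DNS caching, which the real King method works around with unique query names.

```python
import statistics
import time

import dns.flags
import dns.message
import dns.query

def dns_rtt(server_ip, qname, recursive=False):
    # Time a single A query sent to the given name server (milliseconds).
    q = dns.message.make_query(qname, "A")
    if recursive:
        q.flags |= dns.flags.RD    # ask the server to recurse on our behalf
    else:
        q.flags &= ~dns.flags.RD   # plain query the server answers locally
    start = time.perf_counter()
    dns.query.udp(q, server_ip, timeout=2.0)
    return (time.perf_counter() - start) * 1000.0

def king_estimate(ns_a, name_in_a_zone, name_in_b_zone, probes=10):
    # Direct RTT to host A's name server: a query it can answer itself.
    direct = [dns_rtt(ns_a, name_in_a_zone) for _ in range(probes)]
    # Recursive query: NS_A must contact B's authoritative name server,
    # so the response additionally includes the NS_A <-> NS_B round trip.
    recursive = [dns_rtt(ns_a, name_in_b_zone, recursive=True) for _ in range(probes)]
    # The difference of medians approximates the latency between the two
    # name servers, which King uses as the estimate of the latency A <-> B.
    return statistics.median(recursive) - statistics.median(direct)
```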
12

Lokalizace stanic v Internetu pomocí metody Vivaldi s adaptivním časovým krokem / Localization of nodes in Internet using Vivaldi system with adaptive time step

Mašín, Jan January 2011 (has links)
The aim of this thesis was to become familiar with the principles of estimating the position of stations on the Internet, to study the Vivaldi localization algorithm with an adaptive time step, and subsequently to implement it on the GNU/Linux operating system (CentOS distribution). A further goal was to become acquainted with the experimental PlanetLab network (http://www.planet-lab.org/), transfer the created application to selected nodes of that network, verify its function on real servers located at various places around the globe, and assess the accuracy achieved in estimating the distance between stations in the PlanetLab network. Within this scope, an application was created that measures delay and predicts it using the Vivaldi algorithm with an adaptive time step. It operates on the client-server principle: the client performs the steps of the Vivaldi algorithm, while the server only listens, collects the resulting data of the Vivaldi algorithm and stores them neatly in a file. Furthermore, an application for direct measurement of delay was developed, which also works as a client-server pair. These applications were transferred to selected nodes of the experimental PlanetLab network and were then run on those nodes to carry out the necessary measurements. The resulting values were compiled into tables using Microsoft Excel and then compared with the direct measurements and with the competing King localization method. The Vivaldi localization method with an adaptive time step and the King method were compared on the basis of the calculated estimation errors with respect to the real values and by means of the distribution function of the relative errors of both methods. All this information was evaluated to compare the accuracy of both localization methods against direct measurements.
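The abstract does not reproduce the algorithm itself; as a rough orientation, a minimal sketch of a 2-D Vivaldi node with the adaptive time step (delta proportional to the node's relative error) could look like the following. The tuning constants are assumptions taken from the published Vivaldi description, not from the thesis.

```python
import random

CE, CC = 0.25, 0.25  # error-smoothing and time-step constants (assumed values)

class VivaldiNode:
    """Minimal 2-D Vivaldi node with an adaptive time step."""

    def __init__(self):
        self.coord = [0.0, 0.0]   # synthetic network coordinates
        self.error = 1.0          # local error estimate, starts pessimistic

    def update(self, rtt_ms, remote_coord, remote_error):
        # Vector and distance from the remote node's coordinates to ours.
        dx = [a - b for a, b in zip(self.coord, remote_coord)]
        dist = sum(d * d for d in dx) ** 0.5
        if dist == 0.0:
            # Coincident coordinates: pick a random direction to push along.
            dx = [random.random() - 0.5, random.random() - 0.5]
            dist = sum(d * d for d in dx) ** 0.5
        unit = [d / dist for d in dx]
        # Weight the sample by how uncertain we are relative to the peer.
        w = self.error / (self.error + remote_error)
        # Relative error of this sample, blended into the local error.
        es = abs(dist - rtt_ms) / rtt_ms
        self.error = es * CE * w + self.error * (1 - CE * w)
        # Adaptive time step: large while uncertain, small once converged.
        delta = CC * w
        # Spring relaxation step toward (or away from) the remote node.
        self.coord = [c + delta * (rtt_ms - dist) * u
                      for c, u in zip(self.coord, unit)]
```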
13

Využití znalosti topologie páteřních sítí pro určování fyzické polohy stanic v síti Internet / Geolocation in Internet using network topologies

Dvořák, Filip January 2012 (has links)
The thesis discusses modern geolocation methods and describes the basic principles of their operation. The work is divided into two parts, a theoretical one and a practical one. The first part focuses on the description of these methods and explains the basic concepts used for determining the physical position of a station from its IP address. The second, more extensive part describes the implementation of the Octant method's algorithm in the Java programming language and its use in the experimental PlanetLab network. An important step is the creation of a set of reference points and targets, which are necessary for testing the whole Octant algorithm. The accuracy of the target locations estimated by the Octant method, and its comparison with the results obtained by the active methods CBG and SOI and by the passive method GeoIP, are presented at the end of the work.
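Octant combines positive constraints (the target lies within some distance of a landmark) and negative constraints (it lies outside some distance), intersecting them into a feasible region; the full method uses weighted, curve-bounded regions that the abstract does not detail. A deliberately simplified, grid-based sketch of the positive/negative constraint intersection, with assumed distance bounds, might be:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def feasible_region(landmarks, grid_step=0.5):
    """landmarks: list of (lat, lon, d_min_km, d_max_km) constraints.
    Returns grid points satisfying every positive (<= d_max) and negative
    (>= d_min) constraint -- a crude stand-in for Octant's regions."""
    points, lat = [], -60.0
    while lat <= 75.0:
        lon = -180.0
        while lon < 180.0:
            if all(d_min <= haversine_km(lat, lon, la, lo) <= d_max
                   for la, lo, d_min, d_max in landmarks):
                points.append((lat, lon))
            lon += grid_step
        lat += grid_step
    return points  # the position estimate can be taken as the centroid of these points
```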
14

Nalezení fyzické polohy stanic v síti Internet pomocí měření přenosového zpoždění / Geolocation in Internet using latency measurements

Harth, Petr January 2012 (has links)
This diploma thesis is concerned with the practical realization of the CBG (Constraint-Based Geolocation) algorithm, which is one of the IP (Internet Protocol) geolocation techniques. IP geolocation determines the location of a computer station on the basis of its IP address. The factors causing delays in data transfer are discussed first, followed by a discussion of the issues involved in measuring these delays. A detailed explanation of IP geolocation follows, where its context as well as the active geolocation techniques (techniques based on the delay measurements mentioned above) are described. After that, a brief description of the experimental PlanetLab network, which was used for measuring the geolocation techniques, is presented, followed by a section explaining the creation of reference points and targets, which are another necessary prerequisite for the practical realization of the method. The practical realization is then explained in the form of the CBGfinder program, together with its verification on artificial input data and an actual example of IP geolocation of a point in the Internet. Last but not least, the measurement results of the CBG algorithm are introduced, based on an analysis of the bestline parameters of one of the PlanetLab nodes measured over a period of one month, followed by a discussion of the inaccuracy of the estimated geographic position and of the computation speed. The cumulative distribution function as well as the kernel density estimation are also described. The final part of the thesis consists of a discussion of the measured results compared with the results of other geolocation techniques implemented by colleagues of the author. The results are compared on the basis of the average inaccuracy of the geographic position estimates and its median; the computation time, the cumulative distribution function and the kernel density estimation are also taken into account.
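The bestline parameters mentioned in the abstract are central to CBG: each landmark fits a line lying below all of its delay-versus-distance calibration measurements and then inverts it to turn a measured delay to the target into an upper bound on distance. A hedged sketch of that calibration step, using a brute-force search in place of the linear program actually used by CBG, could be:

```python
from itertools import combinations

def bestline(samples):
    """samples: (distance_km, delay_ms) pairs measured from one landmark to
    hosts of known position.  Returns (slope, intercept) of a line lying
    below every sample while minimising the summed vertical gap -- a
    simplified stand-in for CBG's linear-programming formulation."""
    best = None
    for (x1, y1), (x2, y2) in combinations(samples, 2):
        if x1 == x2:
            continue
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        if m <= 0:
            continue  # delay must grow with distance
        if all(y >= m * x + b - 1e-9 for x, y in samples):
            gap = sum(y - (m * x + b) for x, y in samples)
            if best is None or gap < best[0]:
                best = (gap, m, b)
    if best is None:
        raise ValueError("no feasible bestline found")
    return best[1], best[2]

def distance_upper_bound(delay_ms, slope, intercept):
    # Invert the bestline: the target can be at most this many km away.
    return max(0.0, (delay_ms - intercept) / slope)
```

Intersecting such distance bounds from several landmarks yields the region in which the target must lie; the centroid of that region is the usual CBG position estimate.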
15

Verifikace pozice serverů sítě PlanetLab / Verification of PlanetLab servers location

Pružinský, Ján January 2014 (has links)
The main objective of this thesis is to analyze the nodes of the PlanetLab network. The analysis focuses mainly on verifying the availability of the nodes and on verifying their physical position. Individual nodes are tested for availability over the ICMP and SSH protocols; ICMP availability is verified using the ping program. The main part of the thesis is devoted to verifying the addresses of the nodes. The identified addresses are compared with the registered addresses, and the resulting conformity is evaluated at the level of state, county, city and street. The precision of the stated address is calculated on the basis of the given GPS coordinates. The thesis also deals with dividing the nodes into groups based on a calculated usability index and accuracy index. The theoretical part contains a description of the experimental PlanetLab network and of the Google Geocoding API.
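For illustration only, one way such a check might be scripted: geocode the node's registered street address with the Google Geocoding API and compare the result against the GPS coordinates published for the node. The request format follows the public Geocoding API documentation; the API key, the tolerance and the exact response handling are assumptions, not the thesis's implementation.

```python
import math
import requests

GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"

def geocode(address, api_key):
    # Ask the Google Geocoding API for the coordinates of a street address.
    resp = requests.get(GEOCODE_URL, params={"address": address, "key": api_key})
    resp.raise_for_status()
    results = resp.json().get("results", [])
    if not results:
        return None
    loc = results[0]["geometry"]["location"]
    return loc["lat"], loc["lng"]

def distance_km(a, b):
    # Haversine great-circle distance between two (lat, lon) pairs.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def check_node(address, published_gps, api_key, tolerance_km=25.0):
    coords = geocode(address, api_key)
    if coords is None:
        return False
    return distance_km(coords, published_gps) <= tolerance_km
```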
16

Large scale platform : Instantiable models and algorithmic design of communication schemes / Modélisation des communications sur plates-formes à grande echelles

Uznanski, Przemyslaw 11 October 2013 (has links)
The increasing popularity of Internet bandwidth-intensive applications prompts us to consider the following problem: how to compute efficient collective communication schemes on a large-scale platform? The issue of designing a collective communication in the context of a large-scale distributed network is a difficult and multi-level problem. A lot of solutions have been extensively studied and proposed, but a new, comprehensive and systematic approach is required, one that combines network models and the algorithmic design of solutions. In this work we advocate the use of models that are able to capture real-life network behavior, but are also simple enough that a mathematical analysis of their properties and the design of optimal algorithms are achievable. First, we consider the problem of measuring the available bandwidth for a given point-to-point connection. We discuss how to obtain reliable datasets of bandwidth measurements using the PlanetLab platform, and we provide our own datasets together with the distributed software (bedibe) used to obtain them. While those datasets are not a part of our model per se, they are necessary when evaluating the performance of various network algorithms. Such datasets are common for latency-related problems, but very rare when dealing with bandwidth-related ones. Then, we advocate a model that tries to accurately capture the capabilities of a network, named the LastMile model. This model assumes that congestion essentially happens at the edges connecting machines to the wide Internet, and it leads naturally to a bandwidth prediction algorithm. Using the datasets described earlier, we show that this algorithm is able to predict the available bandwidth between two given nodes with an accuracy comparable to the best known network prediction algorithm (Distributed Matrix Factorization). While we were unable to improve upon the DMF algorithm for point-to-point prediction, we show that our algorithm has a clear advantage coming from its simplicity: it naturally extends to network predictions under a congestion scenario (multiple connections sharing the bandwidth of a single link). We are actually able to show, using the PlanetLab datasets, that the LastMile prediction is better in such scenarios. In the third chapter, we propose new algorithms for solving the large-scale broadcast problem. We assume that the network is modeled by the LastMile model, and we show that under this assumption we are able to provide algorithms with provable, strong approximation ratios. Taking advantage of the simplicity and elasticity of the model, we can even extend it so that it captures connectivity artifacts, in our case firewalls preventing some nodes from communicating directly with each other. In the extended case we are also able to provide approximation algorithms with provable performance. Chapters 1 to 3 form three successful steps of our program: to develop from scratch a mathematical network communication model, to validate it experimentally, and to show that it can be applied to develop algorithms solving hard problems related to the design of communication schemes in networks. In chapter 4 we show how, under different network cost models and using some simplifying assumptions on the structure of the network and the queries, one can design very efficient communication schemes using simple combinatorial techniques.
This work is complementary to the previous chapter in the sense that previously, when designing communication schemes, we assumed atomicity of connections, i.e. that we have no control over the routing of simple connections. In chapter 4 we show how to solve the problem of efficient routing of network requests, given that we know the topology of the network. It shows the importance of instantiating the parameters and the structure of the network in the context of designing efficient communication schemes.
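The abstract describes the LastMile model only in words: congestion is assumed to occur on the access links, so a path's bandwidth is limited by the sender's uplink and the receiver's downlink, and concurrent connections share those edge capacities. A hedged sketch of that prediction rule, with an equal-share congestion assumption that may differ from the thesis's exact formulation, could be:

```python
def predict_bandwidth(uplink, downlink, src, dst):
    """Point-to-point LastMile prediction: the core is assumed uncongested,
    so the path is limited only by the two access links."""
    return min(uplink[src], downlink[dst])

def predict_under_congestion(uplink, downlink, connections):
    """Simplified congestion variant: every connection gets an equal share
    of each access link it uses (an assumption, not the thesis's exact rule)."""
    out_deg, in_deg = {}, {}
    for s, d in connections:
        out_deg[s] = out_deg.get(s, 0) + 1
        in_deg[d] = in_deg.get(d, 0) + 1
    return {
        (s, d): min(uplink[s] / out_deg[s], downlink[d] / in_deg[d])
        for s, d in connections
    }

# Example with made-up capacities (Mbit/s):
uplink = {"a": 10.0, "b": 50.0}
downlink = {"a": 100.0, "b": 20.0}
print(predict_bandwidth(uplink, downlink, "a", "b"))                     # 10.0
print(predict_under_congestion(uplink, downlink, [("a", "b"), ("a", "a")]))
```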
17

Virtuální prostředí přístupu k uzlům v PlanetLab / Virtual Access to Nodes in PlanetLab

Fic, Jiří January 2008 (has links)
PlanetLab, as a distributed-systems testbed, offers a unique opportunity for developing and testing new applications useful for the future Internet. This work proposes a design and a solution to the problem of giving a larger group of students access to PlanetLab, e.g. for the purpose of solving their coursework. The designed system enables its administrator to create and control virtual user accounts which allow all of its users to connect to selected nodes in PlanetLab.
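PlanetLab nodes are normally reached over SSH, logging in with the slice name as the user and authenticating with the slice's key, so a virtual-account gateway essentially wraps that step on behalf of its users. A hedged sketch of such a wrapper (the slice name, key path and node name are placeholders, not values from the thesis) might be:

```python
import subprocess

def run_on_node(node, command, slice_name="example_slice",
                key_path="/home/gateway/keys/slice_key"):
    """Run a command on one PlanetLab node over SSH.
    slice_name, key_path and node are placeholders for illustration."""
    ssh_cmd = [
        "ssh",
        "-i", key_path,          # private key belonging to the slice
        "-l", slice_name,        # PlanetLab slices log in as the slice name
        "-o", "StrictHostKeyChecking=no",
        node,
        command,
    ]
    return subprocess.run(ssh_cmd, capture_output=True, text=True, timeout=30)

# Example: check that a node is reachable and report its hostname.
# result = run_on_node("planetlab1.example.org", "hostname")
# print(result.returncode, result.stdout.strip())
```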
18

Zjištění fyzické pozice počítače v Internetu / Establishing the physical position of a computer in the Internet

Relovský, Josef January 2008 (has links)
This master's thesis forms part of a research project whose analysis uses the worldwide experimental network PlanetLab. The problem area is IPTV technology. IPTV makes it possible to transfer television content over the Internet to end users: the server is the source, and the data take the form of a video and audio signal (stream) that has to be delivered to the end users. Because the technology places high demands on the network, a structure representing the interconnection of the participating computers has to be established so that the most suitable path between the source and the destination can be found; designing this structure is the objective of this work. The principle of branching a signal from one node to several nodes, or rather from one node to a set of nodes (a group), is called multicast. In IPTV, each individual program is carried by one multicast group, and the end users (recipients) are members of one or several available multicast groups; switching between programs requires changing from one multicast group to another. The analysis uses the worldwide experimental network PlanetLab, which was founded by three American universities in 2002 and nowadays comprises more than 800 nodes distributed around the world. PlanetLab is used by multinational companies such as Intel or Hewlett-Packard and is intended for testing and scientific purposes. I wrote Linux scripts for building the interconnection structure; everything is created under Linux because all PlanetLab nodes run the Linux operating system. The main quantity measured during the process is the response time, which I investigate with the ping command. With the help of ping I obtain the active nodes and their response times. From the response times I build distance vectors, which are used to find a node's location with respect to reference points determined beforehand: according to the similarity of these vectors, the end point is assigned to the closest reference point.
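As a rough illustration of the described approach (not the thesis's own scripts), one might build RTT distance vectors with ping and match them against vectors measured for known reference points; the host names, the ping output parsing and the similarity metric (Euclidean distance) are assumptions.

```python
import re
import subprocess
from math import sqrt, inf

def ping_rtt(host, count=3):
    """Return the average RTT to host in ms using the system ping, or None."""
    try:
        out = subprocess.run(["ping", "-c", str(count), host],
                             capture_output=True, text=True, timeout=30).stdout
    except subprocess.TimeoutExpired:
        return None
    m = re.search(r"= [\d.]+/([\d.]+)/", out)  # min/avg/max/mdev summary line
    return float(m.group(1)) if m else None

def distance_vector(landmarks):
    # One RTT measurement per landmark forms the node's distance vector.
    return [ping_rtt(h) for h in landmarks]

def closest_reference(vector, references):
    """references: {name: distance vector measured from a reference point}.
    Returns the reference whose vector is most similar (Euclidean distance)."""
    best, best_d = None, inf
    for name, ref in references.items():
        pairs = [(a, b) for a, b in zip(vector, ref) if a is not None and b is not None]
        if not pairs:
            continue
        d = sqrt(sum((a - b) ** 2 for a, b in pairs))
        if d < best_d:
            best, best_d = name, d
    return best
```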
19

Grafické zobrazení relací mezi počítači v Internetu / Visualization of relations between computers in the Internet

Cimbálek, Přemysl January 2008 (has links)
Internet Protocol Television (IPTV) transmits the television signal over the TCP/IP family of protocols. One of its advantages is that the transmission is not only one-way as in "classical" TV broadcasting but can also carry feedback, enabling interactivity. There are, however, problems that hinder its development, for example the low channel capacity of access networks, which is why new methods for making IPTV transmission more efficient are being proposed. The main task of this diploma thesis is to visualize the tree structure of relations between nodes in the network, based on an understanding of the principles of hierarchical summarization and IPTV transmission. The nodes in the tree structure compute and summarize the data carried in the feedback channel, which contains data coming from the end users. The first part of the thesis explains the principle of IPTV and its differences from classical TV broadcasting; it also covers the supported services, advantages and disadvantages. Data compression with the MPEG-2 and MPEG-4 standards is explained, together with the problems of access networks known as the "last mile problem". For transmitting data IPTV uses Source-Specific Multicast: each user joins the multicast session carrying the requested TV program. Feedback is provided by unicast, and the feedback network uses the principle of hierarchical summarization to reduce the amount of data. This problem, together with the RTP, RTCP and TTP protocols, is described in the work as well. The theoretical part also mentions the international experimental network PlanetLab, in which the proposed structure of the new protocol and the applications, including the visualization for IPTV broadcasting, are tested. The practical part discusses possible methods of visualization and data storage. Web technologies were chosen because of their high availability and flexibility: MySQL is used for data storage, the tree model is implemented in Java, and the visualization source code is generated dynamically by scripts in JSP (Java Server Pages). Graphical output is provided in the vector format SVG (Scalable Vector Graphics), which was created for graphics on the web and on mobile phones. Thanks to its ability to cooperate with JavaScript, an interactive web application was created that visualizes the relation tree of the nodes. The work explains the basics of all the technologies used, gives reasons for the chosen methods and formats, and presents examples and interesting parts of the solution.
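To illustrate the hierarchical-summarization principle the abstract refers to (each intermediate node aggregates the feedback of its children before passing it upstream, so the source never sees per-user reports), a minimal sketch with an invented report format could be:

```python
def summarize(tree, reports, node):
    """tree: {node: [children]}; reports: per-end-user feedback, e.g.
    {"lost": packets_lost, "receivers": 1}.  Each inner node merges its
    children's summaries, so the amount of data shrinks toward the root."""
    children = tree.get(node, [])
    if not children:  # leaf = end user
        return dict(reports.get(node, {"lost": 0, "receivers": 1}))
    total = {"lost": 0, "receivers": 0}
    for child in children:
        child_summary = summarize(tree, reports, child)
        total["lost"] += child_summary["lost"]
        total["receivers"] += child_summary["receivers"]
    return total

# Example topology: the source feeds two regional nodes, each with two viewers.
tree = {"source": ["r1", "r2"], "r1": ["u1", "u2"], "r2": ["u3", "u4"]}
reports = {"u1": {"lost": 3, "receivers": 1}, "u2": {"lost": 0, "receivers": 1},
           "u3": {"lost": 7, "receivers": 1}, "u4": {"lost": 1, "receivers": 1}}
print(summarize(tree, reports, "source"))  # {'lost': 11, 'receivers': 4}
```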
20

Vzdálené připojení na virtualizovaný operační systém / Remote connection to virtualized operating system

Veselý, Marek January 2015 (has links)
The thesis focuses on virtualization, a major current trend in IT. It describes the types of virtualization as well as its different implementations. The following chapters describe the TCP and ICMP protocols and the SSH service. The last part of the work is dedicated to measurements that highlight the advantages and disadvantages of virtualization.
