About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Analyzing Cross-layer Interaction in Overlay Networks

Seetharaman, Srinivasan 14 November 2007 (has links)
Overlay networks have recently gained popularity as a viable alternative for overcoming functionality limitations of the Internet (e.g., lack of QoS or multicast routing). They offer enhanced functionality to end-users by forming an independent, customizable virtual network over the native network. Typically, routing at the overlay layer operates independently of routing at the underlying native layer. This approach has several potential problems, because overlay networks are selfish entities chiefly concerned with achieving the routing objectives of their own users. The result is complex cross-layer interaction between the native and overlay layers, which often degrades the performance achieved at both layers. As overlay applications proliferate and the amount of selfish overlay traffic surges, there is a clear need to understand these interactions and to develop strategies for managing them appropriately. Our work addresses these issues in the context of "service overlay networks": virtual networks formed of persistent nodes that collaborate to offer improved services to actual end-systems. Typically, service overlays alter the routes between overlay nodes dynamically in order to satisfy a selfish objective. The objective of this thesis is to improve the stability and performance of overlay routing in this multi-layer environment.

We investigate the common problems of functionality overlap, lack of cross-layer awareness, mismatch or misalignment of routing objectives, and contention for native resources between the two layers. These problems often degrade performance for end-users. This thesis presents an analysis of the cross-layer interaction during fault recovery, inter-domain policy enforcement, and traffic engineering in the multi-layer context. Based on our characterization of the interaction, we propose effective strategies that improve overall routing performance with minimal side-effects on other traffic. These strategies typically 1) increase layer-awareness (awareness of information about the other layer) at each layer, 2) introduce better control over routing dynamics, and 3) offer improved overlay node placement options. Our results demonstrate how applying these strategies leads to better management of the cross-layer interaction, which in turn improves routing performance for end-users.
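One of the strategy classes above, better control over routing dynamics, can be illustrated with a toy two-layer simulation (a minimal sketch under an assumed latency/load model, not the thesis's actual model): an overlay that re-routes whenever an alternative path looks marginally better keeps oscillating, because its own traffic re-loads whichever native path it chooses, while a hysteresis margin on path switching damps the oscillation.

```python
import random

def simulate(hysteresis: float, rounds: int = 50, seed: int = 1) -> int:
    """Count overlay path switches under a toy two-path latency/load model
    (assumed for illustration): the path the overlay uses accumulates load,
    so greedy re-routing chases its own traffic and oscillates."""
    rng = random.Random(seed)
    current, switches, load = 0, 0, [0.0, 0.0]
    for _ in range(rounds):
        load[current] = 0.9 * load[current] + 10.0   # our traffic loads the chosen path
        load[1 - current] *= 0.9                     # the unused path drains
        latency = [20.0 + load[p] + rng.uniform(0, 2) for p in (0, 1)]
        other = 1 - current
        # Switch only if the alternative wins by a clear margin (damping).
        if latency[other] + hysteresis < latency[current]:
            current, switches = other, switches + 1
    return switches

print("switches, greedy re-routing:", simulate(hysteresis=0.0))
print("switches, damped re-routing:", simulate(hysteresis=15.0))
```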
42

Conception d'une architecture Pair-à-Pair orientée opérateur de services / Design of a Service-Provider-Oriented Peer-to-Peer Architecture

Saad, Radwane 17 September 2010 (has links) (PDF)
Peer-to-peer (P2P) paradigms and architectures are central to building large-scale applications of all kinds. It is necessary to integrate a level of control over such applications, which will then be operated and overseen by a service provider. In current practice, the peer entities sharing resources place themselves randomly across a large physical (IP) network. We propose the design of a global architecture for deploying such applications on P2P platforms. In this paradigm, three main components can be isolated: the first concerns the application service, the second routing (or search), and the third data transport. This work consists of optimizing each component of the P2P model. These studies allow us to specify structures for three main contributions. The first aims to partition P2P traffic and, after generalization, to apply a context-aware algorithm in which each group of peers (belonging to the same autonomous system, for example) is based on a DHT (Distributed Hash Table). The second is to accelerate data transfer using an FEC (Forward Error Correction) mechanism. The third is to integrate a control/management entity. BitTorrent is the protocol chosen at the transport level of an architecture integrating these contributions. The SPOP (Service Oriented Provider P2P) architecture was validated by simulation and through a security application defending against DDoS attacks.
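A minimal sketch of the FEC idea behind the second contribution (assuming a simple single-parity code for illustration; the abstract does not specify the exact scheme, and practical systems typically use Reed-Solomon or fountain codes): adding one XOR parity block per group lets a receiver rebuild any single lost block without waiting for a retransmission round-trip.

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def fec_encode(data_blocks: list[bytes]) -> list[bytes]:
    """Append one XOR parity block; any single lost block is then recoverable."""
    return data_blocks + [xor_blocks(*data_blocks)]

def fec_recover(received: dict[int, bytes], n: int) -> dict[int, bytes]:
    """Rebuild a single missing block (index in 0..n-1) from the others."""
    missing = [i for i in range(n) if i not in received]
    if len(missing) == 1:
        received[missing[0]] = xor_blocks(*received.values())
    return received

coded = fec_encode([b"aaaa", b"bbbb", b"cccc"])      # 3 data blocks + 1 parity block
arrived = {0: coded[0], 2: coded[2], 3: coded[3]}    # block 1 lost in transit
print(fec_recover(arrived, 4)[1])                    # b'bbbb', no retransmission needed
```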
43

Long-Term Location-Independent Research Data Dissemination Using Persistent Identifiers

Wannenwetsch, Oliver 11 January 2017 (has links)
No description available.
44

On the Resilience of Network Coding in Peer-to-Peer Networks and its Applications

Niu, Di 14 July 2009 (has links)
Most current-generation P2P content distribution protocols use fine-granularity blocks to distribute content in a decentralized fashion. Such systems often suffer from significant variation in block distributions, such that certain blocks become rare or even unavailable, adversely affecting content availability and download efficiency. This phenomenon is further aggravated by peer dynamics, which are inherent in P2P networks. In this thesis, we quantitatively analyze how network coding may improve block availability and introduce resilience to peer dynamics. Since, in practice, network coding can only be performed within segments, each containing a subset of blocks, we explore the fundamental tradeoff between the resilience gain of network coding and its inherent coding complexity, as the number of blocks in a segment varies. As another application of the resilience of network coding, we also devise an indirect data collection scheme based on network coding for the purpose of large-scale network measurements.
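A minimal sketch of segment-based random linear network coding (assuming coding over GF(2) for brevity; deployed systems typically use GF(2^8), and the helper names below are illustrative): each coded block is a random XOR of the segment's k original blocks, and the segment becomes decodable once the received coefficient vectors reach rank k. Growing k improves resilience to rare blocks but raises coding cost, which is the tradeoff the thesis studies.

```python
import random

def coded_block(blocks: list[bytes], rng: random.Random):
    """Random linear combination over GF(2): XOR a random non-empty subset."""
    coeffs = [rng.randint(0, 1) for _ in blocks]
    if not any(coeffs):
        coeffs[rng.randrange(len(blocks))] = 1       # avoid the useless all-zero combo
    out = bytearray(len(blocks[0]))
    for coeff, block in zip(coeffs, blocks):
        if coeff:
            for i, byte in enumerate(block):
                out[i] ^= byte
    return coeffs, bytes(out)

def decodable(coeff_vectors: list[list[int]], k: int) -> bool:
    """Gaussian elimination over GF(2): do the received vectors reach rank k?"""
    rows = [int("".join(map(str, v)), 2) for v in coeff_vectors]
    rank = 0
    for bit in reversed(range(k)):
        pivot = next((i for i, r in enumerate(rows) if (r >> bit) & 1), None)
        if pivot is None:
            continue
        p = rows.pop(pivot)
        rows = [r ^ p if (r >> bit) & 1 else r for r in rows]
        rank += 1
    return rank == k

k, rng = 4, random.Random(7)
segment = [bytes([i]) * 64 for i in range(k)]        # one segment of k blocks
received = [coded_block(segment, rng) for _ in range(k + 1)]
print(decodable([c for c, _ in received], k))        # True w.h.p. with one extra block
```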
45

Scalable download protocols

Carlsson, Niklas 15 December 2006 (has links)
Scalable on-demand content delivery systems, designed to effectively handle increasing request rates, typically use service aggregation or content replication techniques. Service aggregation relies on one-to-many communication techniques, such as multicast, to efficiently deliver content from a single sender to multiple receivers. With replication, multiple geographically distributed replicas of the service or content share the load of processing client requests and enable delivery from a nearby server.

Previous scalable protocols for downloading large, popular files from a single server include batching and cyclic multicast. Analytic lower bounds developed in this thesis show that neither of these protocols consistently yields performance close to optimal. New hybrid protocols are proposed that achieve within 20% of the optimal delay in homogeneous systems, as well as within 25% of the optimal maximum client delay in all heterogeneous scenarios considered.

In systems utilizing both service aggregation and replication, well-designed policies determining which replica serves each request must balance the objectives of achieving high locality of service and high efficiency of service aggregation. By comparing classes of policies, using both analysis and simulations, this thesis shows that there are significant performance advantages in using current system state information (rather than only proximities and average loads) and in deferring selection decisions when possible. Most of these performance gains can be achieved using only local (rather than global) request information.

Finally, this thesis proposes adaptations of previously proposed peer-assisted download techniques to support a streaming (rather than download) service, enabling playback to begin well before the entire media file is received. These protocols split each file into pieces, which can be downloaded from multiple sources, including other clients downloading the same file. Using simulations, a candidate protocol is presented and evaluated. The protocol includes both a piece selection technique that effectively mediates the conflict between achieving high piece diversity and the in-order requirements of media file playback, and a simple on-line rule for deciding when playback can safely commence.
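The piece-selection conflict described in the final paragraph can be sketched as follows (an assumed policy for illustration, not necessarily the thesis's exact rule): restrict selection to a sliding window just ahead of the playback point and pick the rarest piece within it, so the window size trades in-order delivery against piece diversity.

```python
def select_piece(have: set[int], availability: list[int],
                 playback_pos: int, window: int = 16) -> int | None:
    """Rarest-first piece selection restricted to a window just ahead of the
    playback point (assumed policy for illustration)."""
    n = len(availability)
    candidates = [i for i in range(playback_pos, min(playback_pos + window, n))
                  if i not in have]
    if not candidates:                   # window complete: aid diversity elsewhere
        candidates = [i for i in range(n) if i not in have]
    if not candidates:
        return None                      # download finished
    return min(candidates, key=lambda i: (availability[i], i))

copies_seen = [3, 1, 4, 1, 5, 9, 2, 6]   # per-piece availability among known peers
print(select_piece({0}, copies_seen, playback_pos=0, window=4))   # -> 1, rarest in window
```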
48

Peer-to-peer distribution of web content using WebRTC within a web browser

Ersson, Kerstin; Persson, Siri January 2015 (has links)
The aim of this project was to investigate whether it is possible to host websites using the BitTorrent protocol, a protocol for distributing data on the web. This was done using several Node.js modules (packages of JavaScript code), such as Browserify and a modified version of WebTorrent, in which technologies like WebSockets and WebRTC are implemented. The project resulted in a working WebTorrent module, deployed on the website www.peerweb.io. However, the module still needs optimization to reduce the time it takes to set up a WebRTC peer connection. With these modifications, we believe that hosting websites via peer-to-peer networks will be the future of the web.
49

Detekce dynamických síťových aplikací / Detection of Dynamic Network Applications

Burián, Pavel January 2013 (has links)
This thesis deals with the detection of dynamic network applications. It describes some of the existing protocols and methods for identifying them from IP flows and packet contents. It presents the design of a detection system based on the automatic creation of regular expressions and describes its implementation. It presents the regular expressions created for the BitTorrent and eDonkey protocols and compares their quality with the L7-filter solution.
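A flavour of such payload signatures (hand-written patterns for illustration, whereas the thesis generates its expressions automatically): the BitTorrent peer-wire handshake starts with the byte 0x13 followed by the ASCII string "BitTorrent protocol", which a regular expression can anchor on.

```python
import re

# Illustrative payload signatures keyed by protocol; real signature sets
# cover many more variants (eDonkey, encrypted handshakes, and so on).
SIGNATURES = {
    "bittorrent": re.compile(rb"^\x13BitTorrent protocol"),
    "http":       re.compile(rb"^(GET|POST|HEAD) \S+ HTTP/1\.[01]"),
}

def classify(payload: bytes) -> str:
    """Return the first protocol whose signature matches the payload start."""
    for proto, pattern in SIGNATURES.items():
        if pattern.match(payload):
            return proto
    return "unknown"

# A BitTorrent handshake: length byte 19, the protocol string, 8 reserved
# bytes, a 20-byte info_hash and a 20-byte peer_id.
handshake = b"\x13BitTorrent protocol" + bytes(8) + bytes(20) + bytes(20)
print(classify(handshake))                      # bittorrent
print(classify(b"GET /index.html HTTP/1.1"))    # http
```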
50

Towards Automation in Digital Investigations : Seeking Efficiency in Digital Forensics in Mobile and Cloud Environments

Homem, Irvin January 2016 (has links)
Cybercrime and related malicious activity in our increasingly digital world have become more prevalent and sophisticated, evading traditional security mechanisms. Digital forensics has been proposed to help investigate, understand and eventually mitigate such attacks. The practice of digital forensics, however, is still fraught with challenges, among the most prominent of which are the increasing amounts of data and the diversity of digital evidence sources appearing in digital investigations. Mobile devices and cloud infrastructures are an interesting specimen, as they inherently exhibit these challenging circumstances and are becoming more prevalent in digital investigations today. They also embody further characteristics such as large volumes of data from multiple sources, dynamic sharing of resources, limited individual device capabilities and the presence of sensitive data. This combined set of circumstances makes digital investigations in mobile and cloud environments particularly challenging. The problem is compounded by the fact that digital forensics today still involves manual, time-consuming tasks when identifying evidence, performing evidence acquisition and correlating multiple diverse sources of evidence in the analysis phase. Furthermore, industry-standard tools are largely evidence-oriented, have limited support for evidence integration and automate only certain precursory tasks, such as indexing and text searching.

This study seeks efficiency, in the form of reduced time and human labour, in digital investigations in highly networked environments through the automation of certain activities in the digital forensic process. To this end, requirements are outlined and an architecture is designed for an automated system that performs digital forensics in highly networked mobile and cloud environments. Part of the remote evidence acquisition activity of this architecture is built and tested on several mobile devices in terms of speed and reliability. A method for integrating multiple diverse evidence sources in an automated manner, supporting correlation and automated reasoning, is developed and tested. Finally, the proposed architecture is reviewed and enhancements are proposed to further automate it by introducing decentralization, particularly within the storage and processing functionality. This decentralization also improves machine-to-machine communication, supporting several of the digital investigation processes enabled by the architecture by harnessing the properties of various peer-to-peer overlays.

Remote evidence acquisition improves the efficiency (time and effort involved) of digital investigations by removing the need for proximity to the evidence. Experiments show that a single-TCP-connection client-server paradigm does not offer the required scalability and reliability for remote evidence acquisition, and that a multi-TCP-connection paradigm is required. The automated integration, correlation and reasoning over multiple diverse evidence sources demonstrated in the experiments improves speed and reduces the human effort needed in the analysis phase by removing the need for time-consuming manual correlation. Finally, informed by published scientific literature, the proposed enhancements for further decentralizing the Live Evidence Information Aggregator (LEIA) architecture offer a platform for increased machine-to-machine communication, thereby enabling automation and reducing the need for manual human intervention.
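The multi-TCP-connection acquisition finding suggests a chunked, parallel transfer. A minimal sketch follows (the endpoint URL, image size and chunk layout are hypothetical, and a real agent would also authenticate the client and hash each chunk; this is not the LEIA implementation):

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

# Hypothetical acquisition endpoint and image size; a real deployment would
# negotiate both with (and authenticate to) the agent on the target device.
URL, SIZE, CHUNK = "http://evidence-agent.example/disk.img", 1 << 30, 1 << 22

def fetch(byte_range: tuple[int, int]) -> tuple[int, bytes]:
    """Fetch one chunk; each request opens its own TCP connection."""
    start, end = byte_range
    req = urllib.request.Request(URL, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return start, resp.read()

ranges = [(off, min(off + CHUNK, SIZE) - 1) for off in range(0, SIZE, CHUNK)]
with ThreadPoolExecutor(max_workers=8) as pool:   # several connections in parallel
    for start, data in pool.map(fetch, ranges):
        pass  # write `data` at offset `start`; hash each chunk for integrity
```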
