1

Energy management in content distribution network servers / Gestion d'énergie dans les serveurs de réseau de distribution de contenu

Islam, Saif Ul 15 January 2015 (has links)
The explosive growth of Internet infrastructure and the installation of energy-hungry devices, driven by the huge increase in Internet users and the competition to offer efficient Internet services, are causing a sharp rise in energy consumption. Energy management in large-scale distributed systems plays an important role in minimizing the contribution of the Information and Communication Technology (ICT) industry to the global CO2 (carbon dioxide) footprint and in decreasing the energy cost of a product or service. Content Distribution Networks (CDNs) are among the most popular large-scale distributed systems; in a CDN, client requests are forwarded towards servers and are fulfilled either by surrogate servers or by the origin server, depending on content availability and the CDN redirection policy. Our main goal is therefore to propose and develop simulation-based, principled mechanisms for the design of CDN redirection policies that make dynamic decisions to reduce CDN energy consumption, and then to analyze the impact of those decisions on user experience. We started by modeling surrogate server utilization and derived a surrogate server energy consumption model based on that utilization. We targeted CDN redirection policies by proposing and developing load-balance and load-unbalance policies, using a Zipfian distribution, to redirect client requests to servers. We took into account two energy reduction techniques, Dynamic Voltage and Frequency Scaling (DVFS) and server consolidation, applied them in the context of a CDN at the surrogate server level, and injected them into the load-balance and load-unbalance policies to obtain energy savings. To evaluate the proposed policies and mechanisms, we examined how efficiently CDN resources are utilized, at what energy cost, and with what impact on user experience and on the quality of infrastructure management. For that purpose, we considered surrogate server utilization, energy consumption, energy per request, mean response time, hit ratio, and failed requests as evaluation metrics; energy consumption, mean response time, and failed requests are the most important parameters for analyzing energy reduction and its impact on user experience. We transformed the discrete-event simulator CDNsim into Green CDNsim and evaluated our work in different CDN scenarios by varying the CDN surrogate infrastructure (number of surrogate servers), the traffic load (number of client requests), and the traffic intensity (client request frequency), taking into account the evaluation metrics discussed above. We are the first to propose DVFS, and the combination of DVFS and consolidation, in a CDN simulation environment that considers load-balance and load-unbalance policies. We concluded that the energy reduction techniques offer considerable energy savings but degrade user experience. We showed that server consolidation performs better at reducing energy when surrogate servers are lightly loaded, whereas the impact of DVFS on energy gains is more considerable when surrogate servers are well loaded. The impact of DVFS on user experience is smaller than that of server consolidation. Combining the two (DVFS and server consolidation) yields greater energy savings, but at a higher cost in user experience degradation, than using either technique individually.
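To make the mechanism concrete, here is a minimal sketch of a Zipf-weighted (load-unbalance) redirector combined with a linear utilization-to-power model. The power figures, the Zipf exponent, and all names are illustrative assumptions, not the thesis's actual models.

```python
import random

# Hypothetical linear power model: idle draw plus a utilization-proportional
# dynamic part. The 100 W / 250 W figures are placeholder assumptions.
P_IDLE, P_PEAK = 100.0, 250.0

def power(utilization):
    """Power draw (watts) of a surrogate server at utilization in [0, 1]."""
    return P_IDLE + (P_PEAK - P_IDLE) * utilization

def zipf_weights(n, alpha=0.8):
    """Zipf-like weights over n surrogate servers (rank 1 gets the most load)."""
    raw = [1.0 / rank ** alpha for rank in range(1, n + 1)]
    total = sum(raw)
    return [w / total for w in raw]

def redirect(servers, weights):
    """Load-unbalance policy: skew requests toward low-rank servers so that
    high-rank servers stay lightly loaded and can be consolidated (switched off)."""
    return random.choices(servers, weights=weights, k=1)[0]

servers = [f"surrogate-{i}" for i in range(8)]
weights = zipf_weights(len(servers))
print(redirect(servers, weights))          # 'surrogate-0' most of the time
print(f"{power(0.35):.1f} W at 35% load")  # 152.5 W under the assumed model
```

A load-balance policy would simply use uniform weights; the energy saving of consolidation comes from the skew, which empties the tail servers.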
2

Bedarfsgesteuerte Verteilung von Inhaltsobjekten in Rich Media Collaboration Applications

Schuster, Daniel 29 November 2007 (has links) (PDF)
IP-based conferencing and collaboration systems are evolving more and more towards rich media collaboration, i.e., they combine audio and video conferencing functionality with instant messaging and collaborative features such as presentation sharing and application sharing. Besides the live media streams, content objects such as presentation slides or document pages must also be distributed in real time within a session. In contrast to the classical 1:n push scheme, this thesis presents an approach for random access to content objects hosted by the participants themselves, i.e., n:m pull distribution. In application scenarios with peers of equal standing, such as virtual meetings of project teams, this approach has significant performance advantages over the traditional approaches. With the Content Sharing Protocol (CSP), a protocol engine consisting of nine micro-protocols was developed, implemented, and evaluated. Besides the core functionality of content delivery, it includes support for caching, prefetching, and data adaptation, as well as dynamic prioritization of data transfers and interaction support.
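As a rough illustration of the n:m pull idea (class and method names are assumptions, not the CSP wire format): every participant both hosts and fetches content objects, and an object is pulled on demand from whichever peer announces it.

```python
# Toy n:m pull distribution: each participant hosts its own objects and pulls
# others' objects on demand. Illustrative only; the real CSP engine consists
# of nine micro-protocols with caching, prefetching, and adaptation.

class Participant:
    def __init__(self, name):
        self.name = name
        self.hosted = {}   # object_id -> data this peer hosts itself
        self.cache = {}    # object_id -> data pulled from other peers

    def announce(self):
        """Advertise which object ids this participant can serve."""
        return set(self.hosted) | set(self.cache)

    def pull(self, object_id, session):
        """Fetch an object on demand from any session peer that has it."""
        if object_id in self.hosted:
            return self.hosted[object_id]
        if object_id in self.cache:
            return self.cache[object_id]
        for peer in session:
            if peer is not self and object_id in peer.announce():
                data = (peer.hosted[object_id] if object_id in peer.hosted
                        else peer.cache[object_id])
                self.cache[object_id] = data   # cached copies serve later pulls
                return data
        raise KeyError(object_id)

alice, bob, carol = Participant("alice"), Participant("bob"), Participant("carol")
alice.hosted["slide-1"] = b"...slide data..."
session = [alice, bob, carol]
bob.pull("slide-1", session)             # pulled from the hosting peer
print(carol.pull("slide-1", session))    # now either alice or bob can serve it
```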
3

Network Coding in Multihop Wireless Networks: Throughput Analysis and Protocol Design

Yang, Zhenyu 29 April 2011 (has links)
Multi-hop wireless networks have been widely considered promising approaches for providing more convenient Internet access, thanks to their easy deployment, extended coverage, and low deployment cost. However, providing high-speed and reliable services in these networks is challenging due to unreliable wireless links, the broadcast nature of wireless transmissions, and frequent topology changes. On the other hand, network coding (NC) is a technique that can significantly improve network throughput and transmission reliability by allowing intermediate nodes to combine received packets. The more recently proposed symbol-level network coding (SLNC), which combines packets at the finer symbol scale, is an even more powerful technique for mitigating the impact of lossy links and packet collisions in wireless networks. NC, and especially SLNC, is thus a particularly effective approach to providing higher data rates and better transmission reliability for applications such as mobile content distribution in multihop wireless networks. This dissertation focuses on exploiting NC in multihop wireless networks. We studied the unique features of NC and designed a suite of distributed and localized algorithms and protocols for content distribution networks using NC and SLNC. We also carried out a theoretical study of the network capacity and performance bounds achievable by SLNC in mobile wireless networks. We proposed CodeOn and CodePlay for popular content distribution and live multimedia streaming (LMS) in vehicular ad hoc networks (VANETs), respectively, taking many important practical factors into consideration, including vehicle distribution, mobility pattern, channel fading, and packet collision. Specifically, CodeOn is a novel push-based popular content distribution scheme based on SLNC, in which contents are actively broadcast to vehicles from roadside access points and further distributed among vehicles using a cooperative VANET. In order to fully enjoy the benefits of SLNC, we proposed a suite of techniques to maximize the downloading rate, including a prioritized and localized relay selection mechanism, where the selection criterion is the usefulness of the contents possessed by vehicles, and a lightweight medium access protocol that naturally exploits the abundant concurrent transmission opportunities. CodePlay is designed for LMS applications in VANETs and takes full advantage of SLNC through a coordinated local push mechanism. Streaming contents are actively disseminated from dedicated sources to interested vehicles via local coordination of distributively selected relays, each of which ensures smooth playback for nearby vehicles. CodeOn pursues the single objective of maximizing downloading rate, while CodePlay improves LMS service in terms of streaming rate, service delivery delay, and bandwidth efficiency simultaneously. CodeOn and CodePlay are among the first works that exploit the features of SLNC to simplify protocol design while achieving better performance. We also developed an analytical framework to compute the expected achievable throughput of mobile content distribution in VANETs using SLNC. We presented a general analytical model of the expected achievable throughput of SLNC in a static wireless network, based on flow network theory and queueing theory, and then extended the model to derive the expected achievable accumulated throughput of a vehicle driving through the area of interest under a given mobility pattern. Our framework captures the effects of multiple practical factors, including vehicle distribution and mobility pattern, channel fading, and packet collision, and we characterized the impact of those factors on the expected achievable throughput. The results of this research are not only of theoretical interest but also provide insights and guidelines for protocol design in SLNC-based networks.
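As a loose illustration of the coding principle underlying SLNC (SLNC itself combines sub-packet symbols and exploits partially received packets, which this toy omits), here is a random linear network coding sketch over GF(2): relays emit random XOR combinations of source packets, and a receiver decodes by Gaussian elimination once it holds enough independent combinations.

```python
import random

def encode(packets):
    """Emit one coded packet: a random GF(2) combination (XOR) of the sources."""
    coeffs = [random.randint(0, 1) for _ in packets]
    if not any(coeffs):
        coeffs[random.randrange(len(packets))] = 1   # avoid the useless zero combo
    payload = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            payload = bytes(a ^ b for a, b in zip(payload, p))
    return coeffs, payload

def decode(coded, n):
    """Gauss-Jordan elimination over GF(2). Assumes the received combinations
    include n linearly independent ones (overwhelmingly likely with extras)."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(n):
        pivot = next(r for r in range(col, len(rows)) if rows[r][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytearray(a ^ b for a, b in zip(rows[r][1], rows[col][1])))
    return [bytes(rows[i][1]) for i in range(n)]

source = [b"pkt0c0de", b"pkt1c0de", b"pkt2c0de"]   # equal-length source packets
coded = [encode(source) for _ in range(8)]         # redundancy against loss
print(decode(coded, len(source)) == source)        # True
```

Practical systems use GF(256) coefficients rather than GF(2) so that random combinations are independent with near certainty; the decoder's structure is the same.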
4

Bedarfsgesteuerte Verteilung von Inhaltsobjekten in Rich Media Collaboration Applications

Schuster, Daniel 05 November 2007 (has links)
IP-based conferencing and collaboration systems are evolving more and more towards rich media collaboration, i.e., they combine audio and video conferencing functionality with instant messaging and collaborative features such as presentation sharing and application sharing. Besides the live media streams, content objects such as presentation slides or document pages must also be distributed in real time within a session. In contrast to the classical 1:n push scheme, this thesis presents an approach for random access to content objects hosted by the participants themselves, i.e., n:m pull distribution. In application scenarios with peers of equal standing, such as virtual meetings of project teams, this approach has significant performance advantages over the traditional approaches. With the Content Sharing Protocol (CSP), a protocol engine consisting of nine micro-protocols was developed, implemented, and evaluated. Besides the core functionality of content delivery, it includes support for caching, prefetching, and data adaptation, as well as dynamic prioritization of data transfers and interaction support.
5

Reducing the cumulative file download time and variance in a P2P overlay via proximity based peer selection

Carasquilla, Uriel J. 01 January 2013 (has links)
The time it takes to download a file in a peer-to-peer (P2P) overlay network depends on several factors: the quality of the network between peers (e.g., packet loss, latency, and link failures), distance, the peer selection technique, and packet loss caused by Internet Service Providers (ISPs) engaging in traffic shaping. Recent research shows that P2P download time is adversely impacted by the presence of distant peers, particularly when traffic crosses an ISP that may be throttling P2P traffic. It has also been observed that additional delays are introduced when distant candidate nodes for exchanging data are included during the formation of a P2P network overlay. Researchers have therefore shifted their attention to the peer selection mechanism, questioning the random technique because it ignores the location of nodes in the topology of the underlying physical network. Selecting nodes for interaction in a distributed system based on their position in the network thus continues to be an active area of research. The goal of this work was to reduce the cumulative file download time and its variance for the majority of participating peers in a P2P network by using a peer selection mechanism that favors nearby nodes. In the proposed proximity strategy, the Internet address space is separated into IP blocks that belong to different Autonomous Systems (ASes). IP blocks are further broken up into subsets named zones. Each zone is given a landmark (a.k.a. beacon) with a known geographical location, for example a router or DNS server. When peers joined the network, they were grouped into zones based on their geographical distance to the selected beacons, and peers in the same zone were put at the top of the list of available nodes for interaction during the formation of the overlay. Experiments were conducted to compare the proposed proximity-based peer selection strategy to the random strategy. The results indicate that the proximity technique outperforms the random approach both in a network with low packet loss and latency and in a more realistic network subject to packet loss, traffic shaping, and long distances. However, this improved performance came at the cost of additional memory (230 megabytes) and, to a lesser extent, some additional CPU cycles to run the subroutines needed to group peers into zones. The framework and algorithms developed for this work made it possible to implement a fully functioning prototype of the proximity strategy. This prototype enabled high-fidelity testing with a real client implementation in real networks, including the Internet, so the hypothesis could be tested without relying exclusively on event-driven simulations.
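A minimal sketch of the zone mechanism (beacon coordinates, the distance metric, and all names are illustrative assumptions): each joining peer is assigned to the zone of its geographically nearest beacon, and same-zone peers are moved to the top of the candidate list used to form the overlay.

```python
import math

# Hypothetical beacons with known locations (e.g. well-placed routers or DNS
# servers); coordinates are (latitude, longitude) placeholders.
BEACONS = {"zone-east": (40.7, -74.0), "zone-west": (37.8, -122.4)}

def nearest_zone(lat, lon):
    """Assign a joining peer to the zone of its nearest beacon."""
    return min(BEACONS, key=lambda z: math.dist((lat, lon), BEACONS[z]))

def candidate_list(me, peers):
    """Proximity policy: same-zone peers come first in the interaction list."""
    return sorted(peers, key=lambda p: p["zone"] != me["zone"])

peers = [
    {"id": "p1", "zone": nearest_zone(40.6, -73.9)},   # near the east beacon
    {"id": "p2", "zone": nearest_zone(37.9, -122.3)},  # near the west beacon
    {"id": "p3", "zone": nearest_zone(40.8, -74.1)},   # near the east beacon
]
me = {"id": "p0", "zone": nearest_zone(40.7, -74.2)}
print([p["id"] for p in candidate_list(me, peers)])    # ['p1', 'p3', 'p2']
```

The random baseline corresponds to shuffling the list instead of sorting it; the memory cost reported above is presumably dominated by the IP-block and zone tables.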
6

Scalable video streaming with prioritised network coding on end-system overlays

Sanna, Michele January 2014 (has links)
Distribution over the Internet is destined to become a standard approach for live broadcasting of TV or events of nation-wide interest. The demand for high-quality live video tailored to personal requirements is expected to grow exponentially over the next few years. End-system multicast is a desirable option for relieving the content server of bandwidth bottlenecks and computational load, since it allows decentralised allocation of resources to users and distributed service management. Network coding provides innovative solutions to a multitude of issues in multi-user content distribution, such as the coupon-collector problem and allocation and scheduling procedures. This thesis tackles the problem of streaming scalable video on end-system multicast overlays with prioritised push-based streaming. We analyse the characteristics arising from a random coding process acting as a linear channel operator, and present a novel error detection and correction system for error-resilient decoding, providing one of the first practical frameworks for joint source-channel-network coding. Our system outperforms both network error correction and traditional FEC coding performed separately. We then present a content distribution system based on end-system multicast. Our data exchange protocol uses network coding to deliver data collaboratively to several peers. Prioritised streaming is performed by means of hierarchical network coding and dynamic chunk selection for optimised rate allocation based on goodput statistics at the application layer. We show, through simulated experiments, the efficient allocation of resources for adaptive video delivery. Finally, we describe the implementation of our coding system: we highlight the use of rateless coding properties, discuss applications in collaborative and distributed coding systems, and provide an optimised implementation of the decoding algorithm using advanced CPU instructions. We analyse computational load and packet-loss protection via lab tests and simulations, complementing the overall analysis of the video streaming system in all its components.
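To illustrate the prioritised chunk selection (layer rates and the budget rule are assumed figures, not the thesis's scheduler): with hierarchical coding, an enhancement layer is useless without all layers below it, so the sender picks the largest layer prefix that fits the goodput measured at the application layer.

```python
# Toy layer-priority selector for scalable video. Illustrative only.
LAYER_RATES_KBPS = [400, 300, 300]   # base + two enhancement layers (assumed)

def layers_to_send(goodput_kbps):
    """Largest prefix of layers that fits the measured goodput; hierarchical
    network coding makes layer k undecodable without layers 0..k-1."""
    chosen, budget = [], goodput_kbps
    for layer, rate in enumerate(LAYER_RATES_KBPS):
        if rate > budget:
            break
        chosen.append(layer)
        budget -= rate
    return chosen

print(layers_to_send(450))    # [0]        -> base quality only
print(layers_to_send(1100))   # [0, 1, 2]  -> full quality
```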
7

Leveraging relations among objects to improve the performance of information-centric networks / Utilizando relações entre objetos para melhorar o desempenho de redes orientadas a conteúdo

Antunes, Rodolfo Stoffel January 2016 (has links)
Information-Centric Networking (ICN) is a communication paradigm created to align network infrastructures with the needs of content distribution systems. ICN employs routing and caching mechanisms tailored to fulfill requests for uniquely identified data objects not associated with a fixed locator. So far, research on ICN has focused primarily on evaluating architectural aspects, such as the performance of different routing and caching schemes. However, the method applied to distribute data using the concept of objects can also impact communications in an ICN. In this thesis, we explore a model that enables the distribution of contents as multiple data objects. We employ the concept of relations, defined as links between two objects indicating that the data from one complements in some way the data from the other. Our model based on relations enables clients to identify and retrieve the data pieces required to reconstruct a content. It is application-agnostic, supports different relation structures, and is backward-compatible with current ICN specifications. We also discuss the main design aspects related to the implementation of the model in the Named Data Networking (NDN) architecture. To evaluate how relations impact network and application performance, we perform a series of experiments with two case studies based on relevant scenarios from the current Internet: multimedia content and Web pages. The multimedia case study explores a favorable scenario in which relations present negligible overhead in contrast to the high volume of content data. Results from this case study show that, compared to the standard NDN implementation, relations can reduce download times by 34% and network traffic by 43%. In turn, the Web pages case study explores a scenario in which relations generate a non-negligible impact on the network and applications. The analysis of this scenario shows that, even with the additional overhead incurred by relations, the mechanism can reduce client download time by 28% and global network traffic by 34% on average.
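A minimal sketch of the relation mechanism (the object table and function names are hypothetical, not NDN's actual packet format): starting from one named object, the client follows relation links to discover and fetch every piece needed to reconstruct the content.

```python
# Toy relation-based retrieval. Each object carries links to the objects that
# complement it; a stand-in dict plays the role of the network. Illustrative only.

OBJECTS = {  # name -> (data, names of related objects)
    "/video/intro":       (b"base track",  ["/video/intro/audio", "/video/intro/subs"]),
    "/video/intro/audio": (b"audio track", []),
    "/video/intro/subs":  (b"subtitles",   []),
}

def fetch_content(root):
    """Follow relations from the root object, collecting all required pieces."""
    pieces, pending, seen = {}, [root], set()
    while pending:
        name = pending.pop()
        if name in seen:
            continue
        seen.add(name)
        data, related = OBJECTS[name]   # one Interest/Data exchange in real NDN
        pieces[name] = data
        pending.extend(related)
    return pieces

print(sorted(fetch_content("/video/intro")))   # all three pieces retrieved
```

Because each piece is a separately named object, routers can cache popular pieces independently of the rest, which is plausibly where the reported traffic savings come from.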
8

An analytical model for pedestrian content distribution in a grid of streets

Vukadinovic, Vladimir, Karlsson, Gunnar, Helgason, Ólafur January 2012 (has links)
Mobile communication devices may be used for spreading multimedia data without the support of an infrastructure. Such a scheme, where data is carried by people walking around and relayed from device to device by means of short-range radio, could potentially form a public content distribution system spanning vast urban areas. The transport mechanism is the flow of people, and it can be studied but not engineered. We study the efficiency of pedestrian content distribution by modeling the mobility of people moving around a city, constrained by a given topology. The model is supplemented by simulations of similar or related scenarios for validation and extension. The results show that content spreads well at pedestrian speeds even at low arrival rates into the studied region. Our contributions are both the queueing-analytic model that captures the flow of people and the results on the feasibility of pedestrian content distribution.
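One back-of-the-envelope reading of why the arrival rate matters (a simple Little's-law estimate, not the paper's full queueing model; all numbers are assumptions):

```python
# Little's law sketch: pedestrians arrive at rate lam (1/s) into a street of
# length L (m) and walk at speed v (m/s), so each spends T = L / v seconds in
# the segment and on average N = lam * T are present to carry and relay content.

lam = 0.05           # assumed arrivals per second into the street segment
L, v = 200.0, 1.3    # assumed segment length (m) and walking speed (m/s)

T = L / v            # mean sojourn time in the segment, about 154 s
N = lam * T          # mean number of pedestrians present (Little's law)
print(f"~{N:.1f} pedestrians in the segment on average")   # ~7.7
```

Even a modest arrival rate thus keeps several carriers in a street at once, consistent with the finding that content spreads well already at low arrival rates.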
9

A Security Analysis of Some Physical Content Distribution Systems

Jiayuan, Sui January 2008 (has links)
Content distribution systems are essentially content protection systems that protect premium multimedia content from being illegally distributed. Physical content distribution systems form a subset of content distribution systems in which the content is distributed via physical media such as CDs, Blu-ray discs, etc. This thesis studies physical content distribution systems. Specifically, we concentrate on the design and analysis of three key components of the system: broadcast encryption for stateless receivers, mutual authentication with key agreement, and traitor tracing. The context in which we study these components is the Advanced Access Content System (AACS). We identify weaknesses present in AACS and propose improvements to make the original system more secure, flexible, and efficient.
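As a rough illustration of broadcast encryption for stateless receivers, here is a sketch of the classic complete-subtree method; AACS actually uses the more efficient subset-difference scheme, so this is a simpler relative, and all ids here are toy assumptions. Devices are leaves of a binary key tree and store the keys on their leaf-to-root path; to revoke devices, the broadcaster encrypts the media key under the roots of the maximal subtrees that cover exactly the non-revoked leaves.

```python
# Complete-subtree cover computation (toy). Heap numbering: root = 1, children
# of node n are 2n and 2n+1; with DEPTH = 3 the 8 devices are leaves 8..15.

DEPTH = 3
LEAF0 = 1 << DEPTH

def subtree_leaves(node):
    """Leaf ids under a node, derived from its depth in the tree."""
    span = 1 << (DEPTH - (node.bit_length() - 1))
    return range(node * span, (node + 1) * span)

def cover(node, revoked):
    """Minimal subtree roots covering every non-revoked leaf under node."""
    if not (set(subtree_leaves(node)) & revoked):
        return [node]            # clean subtree: one key encryption suffices
    if node >= LEAF0:
        return []                # a revoked leaf covers nothing
    return cover(2 * node, revoked) + cover(2 * node + 1, revoked)

revoked = {9, 13}                # two compromised devices
print(cover(1, revoked))         # [8, 5, 12, 7]: four header encryptions
```

No revoked device's path passes through a cover node, so revoked devices cannot recover the media key from the broadcast header even by pooling their keys.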
10

A Content Delivery Model for Online Video

Yuan, Liang 09 October 2009 (has links)
Online video accounts for a large and growing portion of all Internet traffic. In order to cut bandwidth costs, it is necessary to use the available bandwidth of users to offload video downloads. Assuming that each user can keep and distribute only one video at any given time, the problem is to determine the global user cache distribution that maximizes peer traffic. The system model contains three different parties: viewers, idlers, and servers. Viewers are peers who are currently viewing a video. Idlers are peers who are not currently viewing a video but are available to upload to others. Finally, servers can upload any video to any user and have infinite capacity. Every video maintains a first-in-first-out viewer queue containing all the viewers of that video. Each viewer downloads from the peer that arrived before it, with the earliest-arriving peer downloading from the server; thus, the server must upload to one peer whenever the viewer queue is not empty. The aim of the idlers is to act as the server for a particular video, thereby eliminating all server traffic for that video. Using the popularity of videos, the number of idlers, and some assumptions on the viewer arrival process, the optimal global video distribution across the user caches can be determined.
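A minimal sketch of the viewer-queue mechanics (class and method names are assumptions): each video keeps a FIFO of its current viewers, each arriving viewer downloads from its predecessor, and the head of the queue downloads from the server unless an idler has taken over that video.

```python
from collections import deque

class VideoSwarm:
    """FIFO viewer queue for one video. Each viewer downloads from the peer
    that arrived just before it; the earliest viewer downloads from the
    server unless an idler caching this video stands in for it. Toy model."""

    def __init__(self, video_id):
        self.video_id = video_id
        self.viewers = deque()
        self.idler = None            # an idle peer holding this video, if any

    def join(self, peer):
        """Register a new viewer and return its upload source."""
        source = self.viewers[-1] if self.viewers else (self.idler or "server")
        self.viewers.append(peer)
        return source

    def leave_head(self):
        """Head finished viewing; its successor re-attaches upstream."""
        self.viewers.popleft()

swarm = VideoSwarm("v42")
print(swarm.join("alice"))    # 'server' -- first viewer, no idler yet
swarm.idler = "bob-idler"
print(swarm.join("carol"))    # 'alice'  -- downloads from her predecessor
```

Assigning idlers to videos in proportion to popularity, under the stated one-video-per-cache constraint, is then the optimization the model solves.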
