11

Request Routing In Content Delivery Networks

Alzoubi, Hussein A. 06 February 2015 (has links)
No description available.
12

Scalable Content Delivery Without a Middleman

Xu, Junbo 30 August 2017 (has links)
No description available.
13

Adaptive Multimedia Content Delivery for Scalable Web Servers

Pradhan, Rahul 02 May 2001 (has links)
The phenomenal growth in the use of the World Wide Web often places a heavy load on networks and servers, threatening to increase Web server response time and raising scalability issues for both the network and the server. With advances in optical networking and the increasing use of broadband technologies like cable modems and DSL, the server, not the network, is more likely to be the bottleneck. Many clients are willing to receive a degraded, less resource-intensive version of the requested content as an alternative to connection failures. In this thesis, we present an adaptive content delivery system that transparently switches content depending on the load on the server in order to serve more clients. Our system is designed to work for dynamic Web pages and streaming multimedia traffic, which are not currently supported by other adaptive content approaches. We have designed a system capable of quantifying the load on the server and then performing the necessary adaptation, and a streaming MPEG server and client that can react to the server load by scaling the quality of the transmitted frames. The main benefits of our approach are transparent content switching for content adaptation, alleviating server load through a graceful degradation of server performance, and no required modification to existing server software, browsers, or the HTTP protocol. We experimentally evaluate our adaptive server system and compare it with a non-adaptive server. We find that adaptive content delivery can support as many as 25% more static requests, 15% more dynamic requests, and twice as many multimedia requests as a non-adaptive server. Our client-side experiments performed on the Internet show that the response-time savings from our system are quite significant.
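As a rough illustration of the kind of load-driven adaptation the abstract describes (not code from the thesis; the load metric, thresholds, and variant names are assumptions), a server might pick a content variant like this:

```python
import os


def server_load(active_requests: int, max_requests: int) -> float:
    """Return a load estimate in [0, 1] combining the CPU load average
    (Unix-only) and the fraction of request capacity currently in use."""
    one_min_load = os.getloadavg()[0] / (os.cpu_count() or 1)
    request_load = active_requests / max_requests
    return min(1.0, max(one_min_load, request_load))


def select_variant(load: float) -> str:
    """Pick a progressively lighter version of the content as load rises,
    so more clients can be served instead of seeing connection failures."""
    if load < 0.5:
        return "full"      # original page / full-quality MPEG frames
    elif load < 0.8:
        return "reduced"   # smaller images, lower frame quality
    else:
        return "minimal"   # text-only page, key frames only


if __name__ == "__main__":
    load = server_load(active_requests=180, max_requests=200)
    print(f"load={load:.2f} -> serve '{select_variant(load)}' variant")
```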
14

Livraison de contenus sur un réseau hybride satellite / terrestre / Content delivery on a hybrid satellite / terrestrial network

Bouttier, Elie Bernard 05 July 2018 (has links)
The growth and intensification of Internet usage make it necessary to evolve existing networks. However, there are strong inequalities between urban areas, which are well served and concentrate most of the investment, and rural areas, which are underserved and neglected. Faced with this situation, users in underserved areas turn to other means of access, notably satellite Internet access. The latter, however, suffers from one limitation: the long delay induced by signal propagation between the earth and the geostationary orbit. In this thesis, we study the simultaneous use of a terrestrial access network, characterized by low delay and low throughput, and a satellite access network, characterized by high latency and higher throughput. Content Delivery Networks (CDNs), consisting of a large number of cache servers, offer an answer to the growth in traffic and to latency and throughput requirements. However, located in core networks, cache servers remain far from end users and do not reach the access networks. Internet Service Providers (ISPs) have therefore taken an interest in deploying such servers within their own networks, which are then referred to as TelCo CDNs. Content delivery ideally requires interconnection between CDN operators and TelCo CDNs, allowing delivery to be delegated to the latter, which can then optimize delivery over networks they know better. We therefore study the optimization of content delivery over a hybrid satellite / terrestrial network integrated into a CDN delivery chain. We first describe an architecture that, through CDN interconnection, handles content delivery over the hybrid network. We then study the value of the information provided by the CDN context for routing over such an architecture, and propose a routing mechanism based on content size. Finally, we show that our approach outperforms the multipath transport protocol MP-TCP.
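A minimal sketch of a size-based routing rule of the kind proposed here (the link figures and the completion-time estimate are illustrative assumptions, not measurements or the thesis mechanism itself):

```python
from dataclasses import dataclass


@dataclass
class Link:
    name: str
    rtt_s: float      # round-trip time in seconds
    rate_bps: float   # usable throughput in bits per second


# Illustrative figures only: ~550 ms RTT for a GEO satellite hop,
# a few Mbit/s for a rural terrestrial access link.
TERRESTRIAL = Link("terrestrial", rtt_s=0.050, rate_bps=2e6)
SATELLITE = Link("satellite", rtt_s=0.550, rate_bps=20e6)


def completion_time(link: Link, size_bytes: int) -> float:
    """Crude estimate: one round trip to fetch plus serialization time."""
    return link.rtt_s + (size_bytes * 8) / link.rate_bps


def route_by_size(size_bytes: int) -> Link:
    """Send each object over the link with the lower estimated completion time:
    small objects take the low-delay terrestrial path, large ones the satellite."""
    return min((TERRESTRIAL, SATELLITE), key=lambda l: completion_time(l, size_bytes))


if __name__ == "__main__":
    for size in (20_000, 200_000, 5_000_000):   # 20 kB, 200 kB, 5 MB
        print(f"{size:>9} bytes -> {route_by_size(size).name}")
```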
15

Making Coding Practical: From Servers to Smartphones

Shojania, Hassan 01 September 2010 (has links)
The fundamental insight behind the use of coding in computer networks is that the information to be transmitted from the source in a session can be inferred, or decoded, by the intended receivers, and does not have to be transmitted verbatim. Several coding techniques have gained popularity in recent years. Among them is random network coding with random linear codes, in which a node in a network topology transmits a linear combination of incoming, or source, packets on its outgoing links. The high computational complexity of random linear codes (RLC) is well known in theory and is used to motivate the application of more efficient codes, such as traditional Reed-Solomon (RS) codes and, more recently, fountain codes (LT codes). Factors like computational complexity, network overhead, and deployment flexibility can make one coding scheme more attractive than another for a given application. While there is no one-size-fits-all coding solution, random linear coding is very flexible, well known to achieve optimal flow rates in multicast sessions, and universally adopted in proposed protocols using network coding. Its practicality, however, has been questioned due to its high computational complexity, and to date no commercial real-world system that takes advantage of the power of network coding has been reported in the literature. This research represents the first attempt towards a high-performance design and implementation of network coding. The objective of this work is to explore the computational limits of network coding on off-the-shelf modern processors, and to provide a solid reference implementation to facilitate commercial deployment of network coding. We promote the development of new coding-based systems and protocols through a comprehensive toolkit whose coding implementations are not just reference implementations: they attain the performance and flexibility needed for widespread adoption. The final work, packaged as a toolkit code-named Tenor, includes high-performance implementations of a number of coding techniques: random linear network coding (RLC), fountain codes (LT codes), and Reed-Solomon (RS) codes on CPUs (single and multiple cores, for both the Intel x86 and IBM POWER families), GPUs (single and multiple), and mobile/embedded devices based on ARMv6 and ARMv7 cores. Tenor is cross-platform, with support for Linux, Windows, Mac OS X, and iPhone OS on both 32-bit and 64-bit platforms, and comprises some 23K lines of C++ code. To validate the effectiveness of the Tenor toolkit, we build coding-based on-demand media streaming systems with GPU-based servers, thousands of clients emulated on a cluster of computers, and a small number of actual iPhone devices. To facilitate such large experiments, we develop Blizzard, a high-performance framework with two main goals: 1) emulating hundreds of client/peer applications on each physical node; and 2) facilitating scalable servers that can efficiently communicate with thousands of clients. Our experiences offer an illustration of Tenor components in action and of their benefits for rapid system development. With Tenor, it is trivial to switch from one coding technique to another, scale up to thousands of clients, and deliver actual video that can be played back even on mobile devices.
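For orientation, a naive sketch of random linear coding over GF(2^8) is shown below; it is purely illustrative and bears no relation to the optimized CPU/GPU/ARM implementations in Tenor:

```python
import random

GF_POLY = 0x11B  # field polynomial x^8 + x^4 + x^3 + x + 1


def gf_mul(a: int, b: int) -> int:
    """Multiply two GF(2^8) elements using shift-and-xor reduction."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= GF_POLY
        b >>= 1
    return result


def rlc_encode(source_packets: list[bytes]) -> tuple[list[int], bytes]:
    """Produce one coded packet: a random linear combination of the
    source packets, returned together with its coefficient vector."""
    coeffs = [random.randrange(256) for _ in source_packets]
    length = len(source_packets[0])
    coded = bytearray(length)
    for c, packet in zip(coeffs, source_packets):
        for i in range(length):
            coded[i] ^= gf_mul(c, packet[i])   # addition in GF(2^8) is XOR
    return coeffs, bytes(coded)


if __name__ == "__main__":
    sources = [bytes([i] * 16) for i in range(1, 5)]   # four 16-byte source packets
    coeffs, coded = rlc_encode(sources)
    print("coefficients:", coeffs)
    print("coded packet:", coded.hex())
```

A receiver that collects as many independently coded packets as there are source packets can recover the originals by Gaussian elimination over the same field.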
17

Study on Adaptive Learning based on Short-term Memory Capacity in Mobile Learning Environment

Hsieh, Sheng-wen 10 March 2006 (has links)
In this era of mobile society and information explosion, people continuously receive many kinds of information representations anytime and anywhere, and quickly learning and absorbing this information to turn it into one's own knowledge is an important challenge for modern people. Owing to the rapid advancement of mobile communication and wireless transmission technology, many scholars believe that these new technologies will have a great impact on the way of learning in the future. By applying the short-message services provided by mobile phone systems, namely SMS and MMS, as learning content delivery (LCD) methods for different learning content representation (LCR) types, Mobile Learning (M-learning) can be implemented accordingly. The key issue, however, is whether M-learning based on these LCD methods and LCR types can really achieve good learning outcomes and be accepted by mobile learners. In this research we explore the constraint that short-term memory (STM) capacity places on the psychological learning process, through technology-mediated learning theory, when assessing learning outcomes in an M-learning environment. The finding of this study is that matching different LCR types and LCD methods to learners' different STM capacities leads to higher learning outcomes in an M-learning environment. We therefore suggest that for learners with lower verbal and lower nonverbal STM capacity, the most suitable support is to provide the basic learning materials only; for learners with higher verbal and lower nonverbal STM capacity, additional written annotations help them learn better; for learners with lower verbal and higher nonverbal STM capacity, additional pictorial annotations help them learn better; and for learners with higher verbal and higher nonverbal STM capacity, the best approach is to provide both written and pictorial annotations.
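The suggested matching rule can be paraphrased as a simple lookup (an illustration of the abstract's recommendation, not the study's implementation):

```python
def annotations_for(verbal_stm_high: bool, nonverbal_stm_high: bool) -> str:
    """Map a learner's verbal/nonverbal short-term memory capacity to the
    suggested learning-content representation, per the abstract's findings."""
    if verbal_stm_high and nonverbal_stm_high:
        return "written + pictorial annotations"
    if verbal_stm_high:
        return "written annotations"
    if nonverbal_stm_high:
        return "pictorial annotations"
    return "basic learning materials only"


if __name__ == "__main__":
    print(annotations_for(verbal_stm_high=True, nonverbal_stm_high=False))
```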
18

A Study Report on Content Distribution Network’s Technology & Financial Market

Mughal, Muhammad Irfan Younas, Khan, Mustafa January 2009 (has links)
With the advancement of the Internet age, the need to distribute more and more data to different users on different types of networks, in a short time and at a nominal cost, has increased significantly. Several technologies with different kinds of implementations have been used to achieve these objectives, but only a few survive in today's very competitive financial market. The objective of our thesis is to study the technology and the financial market of the Content Distribution Network (CDN), which has so far proven to be a very good and effective way to meet the ever-increasing demands of the rapidly developing Internet age. In this thesis, we discuss not only the taxonomies of the CDN, its different types and implementations, but also its financial issues and its performance in the financial market. The aim of our project is to study and understand the technology of the CDN, the problems related to its implementation, the research work around it, and its financial aspects.
19

Inteligentní distribuce souborů v CDN / Intelligent File Distribution in CDN

Kaleta, Marek January 2014 (has links)
This work deals with algorithms for distributing and mapping content onto nodes in a CDN system. It compares local and global algorithms for loading files onto origin and edge servers, and a high-level CDN simulator is developed. A matrix-based approach for mapping content onto CDN servers is proposed, along with a transformation that allows the mapping to be optimized through genetic algorithms.
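A toy sketch of a matrix-based placement optimized with a genetic algorithm (the cost function, penalties, and operators are invented for illustration and are not taken from the thesis):

```python
import random

N_FILES, N_SERVERS = 8, 3
DEMAND = [random.randint(1, 10) for _ in range(N_FILES)]  # requests per file
STORAGE_COST, MISS_PENALTY = 1.0, 5.0


def cost(placement: list) -> float:
    """Placement is an N_FILES x N_SERVERS 0/1 matrix. Pay for every stored
    copy, plus a demand-weighted penalty for files held on no server."""
    total = 0.0
    for f in range(N_FILES):
        copies = sum(placement[f])
        total += STORAGE_COST * copies
        if copies == 0:
            total += MISS_PENALTY * DEMAND[f]
    return total


def random_placement():
    return [[random.randint(0, 1) for _ in range(N_SERVERS)] for _ in range(N_FILES)]


def mutate(placement, rate=0.05):
    # Flip each bit with a small probability.
    return [[bit ^ (random.random() < rate) for bit in row] for row in placement]


def crossover(a, b):
    # Row-wise crossover: each file's placement row comes from one parent.
    return [random.choice((ra, rb)) for ra, rb in zip(a, b)]


def evolve(generations=200, pop_size=30):
    population = [random_placement() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=cost)
        parents = population[: pop_size // 2]          # keep the cheaper half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=cost)


if __name__ == "__main__":
    best = evolve()
    print("best cost:", cost(best))
```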
20

Evolution du plan de commande pour les futurs services de distribution de contenu / Evolution of the control plane for future content distribution services

Ibrahim, Ghida 18 June 2014 (has links)
Content distribution services are evolving fast in several directions. A major one is the federation of distinct CDN providers, which pool their respective resources and act as a single entity towards content providers. We introduce a technical solution based on a centralized architecture that supports both static decisions of federation establishment and provisioning and dynamic decisions for controlling an established federation. Static decision-making is enabled through an optimization model that we apply to concrete federation scenarios of interest to the market. We demonstrate that, when market demand for content delivery is high, CDN providers always have an economic interest in federating; in particular, some CDN providers can double their economic gains through federation. In the context of dynamic federation control, we focus on the control of peak events within a federation of CDNs and introduce different control frameworks at this level. We conduct trace-driven simulations in order to compare these frameworks, and demonstrate that when a joint approach to peak-event control is adopted within a federation of CDNs, the federation is more resilient to such events. This translates into fewer rejected sessions, a higher hit ratio for the federation, and a better video resolution experienced by end users. Our work on CDN federation leads us to focus on the role of a Telco in this context. In particular, we identify three added-value services that a Telco can propose to a federation of CDNs or to individual over-the-top providers (OTTs), and we suggest enhancements to the Telco control infrastructure and new Telco APIs to enable the proposed services.
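To illustrate why a joint approach helps during a peak event, a toy comparison with invented demand and capacity figures (not the thesis model):

```python
# Toy comparison: sessions rejected during a peak when each CDN handles
# its own demand vs. when the federation pools spare capacity.
capacities = {"cdn_a": 100, "cdn_b": 100, "cdn_c": 100}
peak_demand = {"cdn_a": 160, "cdn_b": 60, "cdn_c": 50}   # cdn_a is hit by the peak

# Standalone: each CDN rejects whatever exceeds its own capacity.
standalone_rejected = sum(max(0, peak_demand[c] - capacities[c]) for c in capacities)

# Joint control: the federation serves demand up to its pooled capacity.
joint_rejected = max(0, sum(peak_demand.values()) - sum(capacities.values()))

print(f"standalone rejected: {standalone_rejected} sessions")  # 60
print(f"joint rejected:      {joint_rejected} sessions")       # 0
```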
