1

Energy management in content distribution network servers / Gestion d'énergie dans les serveurs de réseau de distribution de contenu

Islam, Saif Ul 15 January 2015 (has links)
The explosive growth of Internet infrastructure and the installation of energy-hungry devices, driven by the huge increase in Internet users and the competition to offer efficient Internet services, are causing a large increase in energy consumption. Energy management in large-scale distributed systems plays an important role in minimizing the Information and Communication Technology (ICT) industry's contribution to the global CO2 (carbon dioxide) footprint and in decreasing the energy cost of a product or service. Content Distribution Networks (CDNs) are among the most popular large-scale distributed systems; client requests are forwarded towards servers and are fulfilled either by surrogate servers or by the origin server, depending on content availability and on the CDN redirection policy. Our main goal is therefore to propose and develop simulation-based, principled mechanisms for the design of CDN redirection policies that make dynamic decisions to reduce CDN energy consumption, and then to analyze the impact of those decisions on user experience. We start by modeling surrogate server utilization and derive a surrogate server energy consumption model based on that utilization. We target CDN redirection policies by proposing and developing load-balance and load-unbalance policies, based on a Zipfian distribution, for redirecting client requests to servers. We consider two energy reduction techniques, Dynamic Voltage and Frequency Scaling (DVFS) and server consolidation, apply them in the context of a CDN at the surrogate server level, and inject them into the load-balance and load-unbalance policies to obtain energy savings. To evaluate the proposed policies and mechanisms, we examine how efficiently CDN resources are utilized, at what energy cost, and with what impact on user experience and on the quality of infrastructure management. For that purpose, we consider surrogate server utilization, energy consumption, energy per request, mean response time, hit ratio, and failed requests as evaluation metrics; among these, energy consumption, mean response time, and failed requests are the most important for analyzing energy reduction and its impact on user experience. We transformed the discrete event simulator CDNsim into Green CDNsim and evaluated our work in different CDN scenarios, varying the CDN surrogate infrastructure (number of surrogate servers), the traffic load (number of client requests), and the traffic intensity (client request frequency) while tracking the evaluation metrics above. We are the first to propose DVFS, and the combination of DVFS with consolidation, in a CDN simulation environment that considers load-balance and load-unbalance policies. We conclude that energy reduction techniques offer considerable energy savings but degrade user experience. Server consolidation performs better at reducing energy when surrogate servers are lightly loaded, whereas the impact of DVFS on energy gains is more considerable when surrogate servers are well loaded; the impact of DVFS on user experience is smaller than that of server consolidation. Combining the two (DVFS and server consolidation) yields greater energy savings, but at a higher cost in user experience degradation, than using either technique individually.
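The abstract names its ingredients (a utilization-based power model, Zipf-weighted redirection, DVFS, consolidation) without reproducing them. The following is a minimal Python sketch of how such pieces might fit together; the linear power model, the cubic DVFS term, the threshold, and all constants are common textbook assumptions for illustration, not the thesis's actual formulas.

```python
import random

# Assumed linear utilization-to-power model, common in the energy literature:
# an idle server still draws P_IDLE watts; power grows linearly to P_MAX at u = 1.
P_IDLE, P_MAX = 70.0, 250.0

def server_power(utilization, freq_scale=1.0):
    """Power draw (watts) of one surrogate server.

    freq_scale < 1.0 mimics DVFS: dynamic power is assumed to shrink roughly
    with the cube of frequency (P_dyn ~ f * V^2 with V ~ f for CMOS logic).
    """
    dynamic = (P_MAX - P_IDLE) * utilization * freq_scale ** 3
    return P_IDLE + dynamic

def zipf_weights(n_servers, alpha=0.8):
    """Zipfian weights for a load-unbalance policy: server k receives a share
    proportional to 1 / k^alpha, deliberately concentrating load."""
    raw = [1.0 / (k ** alpha) for k in range(1, n_servers + 1)]
    total = sum(raw)
    return [w / total for w in raw]

def redirect(weights):
    """Pick a surrogate server for one client request under the policy."""
    return random.choices(range(len(weights)), weights=weights, k=1)[0]

# Consolidation sketch: servers whose share of the load is negligible are
# switched off entirely, so their idle power is no longer paid.
weights = zipf_weights(8)
active = [k for k, w in enumerate(weights) if w > 0.10]
print("active after consolidation:", active)
print("request routed to server", redirect(weights))
print("half-loaded server under DVFS:",
      round(server_power(0.5, freq_scale=0.8), 1), "W")
```

Under a model of this shape, consolidation removes the idle term of switched-off servers, which explains its advantage at light load, while DVFS trims the dynamic term that dominates at high load; this is consistent with the abstract's conclusions.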
2

Reducing the cumulative file download time and variance in a P2P overlay via proximity based peer selection

Carasquilla, Uriel J. 01 January 2013 (has links)
The time it takes to download a file in a peer-to-peer (P2P) overlay network depends on several factors. These factors include the quality of the network between peers (e.g. packet loss, latency, and link failures), distance, the peer selection technique, and packet loss caused by Internet Service Providers (ISPs) engaging in traffic shaping. Recent research shows that P2P download time is adversely impacted by the presence of distant peers, particularly when traffic crosses an ISP that may be throttling P2P traffic. It has also been observed that additional delays are introduced when distant candidate nodes for exchanging data are included during the formation of a P2P network overlay. Researchers have therefore shifted their attention to the mechanism for peer selection, questioning the random technique because it ignores the location of nodes in the topology of the underlying physical network. Selecting nodes for interaction in a distributed system based on their position in the network thus continues to be an active area of research. The goal of this work was to reduce the cumulative file download time and its variance for the majority of participating peers in a P2P network by using a peer selection mechanism that favors nearby nodes. In the proposed proximity strategy, the Internet address space is separated into IP blocks that belong to different Autonomous Systems (ASes). IP blocks are further broken up into subsets named zones. Each zone is assigned a landmark (a.k.a. beacon), for example a router or a DNS server, with a known geographical location. When peers joined the network, they were grouped into zones based on their geographical distance to the selected beacons. Peers that ended up in the same zone were put at the top of the list of available nodes for interactions during the formation of the overlay. Experiments were conducted to compare the proposed proximity-based peer selection strategy to the random peer selection strategy. The results indicate that the proximity technique outperforms the random approach in a network with low packet loss and latency, and also in a more realistic network subject to packet loss, traffic shaping, and long distances. However, this improved performance came at the cost of additional memory (230 megabytes) and, to a lesser extent, some additional CPU cycles to run the subroutines needed to group peers into zones. The framework and algorithms developed for this work made it possible to build a fully functioning prototype of the proximity strategy, enabling high-fidelity testing with a real client implementation in real networks, including the Internet, rather than relying exclusively on event-driven simulations to prove the hypothesis.
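As an illustration of the zone mechanism described above, the sketch below assigns a joining peer to the zone of its nearest landmark and ranks same-zone peers first when forming the overlay. The landmark coordinates, the distance metric, and all names are assumptions for illustration, not the dissertation's implementation.

```python
import math

# Hypothetical landmarks (beacons) with known locations, e.g. routers or
# DNS servers; one landmark defines one zone. Coordinates are illustrative.
LANDMARKS = {
    "zone-us-east": (40.7, -74.0),
    "zone-eu-west": (51.5, -0.1),
    "zone-ap-south": (1.35, 103.8),
}

def distance(a, b):
    """Planar distance on (lat, lon); crude, but sufficient for choosing
    the nearest landmark in this sketch."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def assign_zone(peer_location):
    """Assign a joining peer to the zone of its nearest landmark."""
    return min(LANDMARKS, key=lambda z: distance(peer_location, LANDMARKS[z]))

def candidate_list(peer_zone, known_peers):
    """Order candidate peers for overlay formation: same-zone peers first,
    which is the proximity-based selection the dissertation evaluates."""
    same = [p for p, z in known_peers if z == peer_zone]
    other = [p for p, z in known_peers if z != peer_zone]
    return same + other

peers = [("peer-a", "zone-eu-west"), ("peer-b", "zone-us-east"),
         ("peer-c", "zone-eu-west")]
print(candidate_list(assign_zone((48.8, 2.3)), peers))  # Paris -> eu-west peers first
```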
3

Content-aware Caching and Traffic Management in Content Distribution Networks

Amble, Meghana Mukund December 2010 (has links)
The rapid increase of content delivery over the Internet has led to the proliferation of content distribution networks (CDNs). Managing a CDN requires algorithms for request routing, content placement, and eviction such that user delays are small. Our objective in this work is to design feasible algorithms that solve this trio of problems. We abstract the system of front-end source nodes and back-end caches of the CDN in the likeness of the input and output nodes of a switch. In this model, queues of requests for different pieces of content build up at the source nodes, which route these requests to a cache that contains the content. For each request that is routed to a cache, a corresponding data file is transmitted back to the source across links of finite capacity. Caches are of finite size, and their contents can be refreshed periodically. A requested but missing item is fetched into the cache from the media vault of the CDN. If the cache lacks adequate space, an existing, unrequested item may be evicted to accommodate the new item. Every such cache refresh or media vault access incurs a finite cost, so the refresh periodicity allowed to the system represents our system cost. To obtain small user delays, our algorithms must consider the lengths of the request queues that build up at the nodes. Stable policies ensure that the request queues remain finite, while good policies also lead to short queue lengths. We first design a throughput-optimal algorithm that solves the routing-placement-eviction problem using instantaneous system state information. The design yields insight into the impact of different cache refresh and eviction policies on queue length, which we use to construct throughput-optimal algorithms that engender short queue lengths. We then propose a regime of algorithms that remedies the inherent problem of wasted capacity. We also develop heuristic variants and study their performance. We illustrate the potential of our approach and validate all our claims and results through simulations on different CDN topologies.
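The abstract describes the switch-like model but not the decision rule itself; throughput-optimal policies for such models typically make max-weight-style decisions on instantaneous queue state. The sketch below shows the simpler join-the-shortest-queue flavor of such a rule, with all data structures assumed for illustration; it is not the thesis's actual algorithm.

```python
# cache_contents[cache_id] = set of content ids currently stored.
cache_contents = {0: {"a", "b"}, 1: {"b", "c"}}
# backlog[cache_id] = requests already routed to that cache, not yet served.
backlog = {0: 3, 1: 1}

def route_request(content):
    """Route a request to the least-backlogged cache holding the content;
    on a miss, fall back to a media-vault fetch (which incurs refresh cost)."""
    holders = [c for c, stored in cache_contents.items() if content in stored]
    if not holders:
        return fetch_from_vault(content)
    best = min(holders, key=lambda c: backlog[c])
    backlog[best] += 1
    return best

def fetch_from_vault(content, capacity=2):
    """Fetch a missing item into the least-backlogged cache, evicting an
    arbitrary (unrequested) resident item if the cache is full."""
    cache = min(cache_contents, key=lambda c: backlog[c])
    if len(cache_contents[cache]) >= capacity:
        cache_contents[cache].pop()  # evict one resident item
    cache_contents[cache].add(content)
    backlog[cache] += 1
    return cache

print(route_request("b"))  # -> 1: both caches hold "b"; cache 1 has less backlog
print(route_request("z"))  # miss -> vault fetch into the least-backlogged cache
```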
4

Modeling performance of internet-based services using causal reasoning

Tariq, Muhammad Mukarram Bin 06 April 2010 (has links)
The performance of Internet-based services depends on many server-side, client-side, and network-related factors. Often, the interaction among the factors, or their effect on service performance, is not known or well understood. The complexity of these services makes it difficult to develop analytical models, and the lack of models impedes network management tasks such as predicting performance while planning changes to the service infrastructure, or diagnosing causes of poor performance. We posit that statistical causal methods can be used to model the performance of Internet-based services and to facilitate performance-related network management tasks. Internet-based services are well suited to statistical learning because the inherent variability in the many factors that affect performance allows us to collect comprehensive datasets covering service performance under a wide variety of conditions. From such datasets we can estimate conditional distributions that represent the functions governing service performance and the dependencies inherent in the service infrastructure. These functions and dependencies are accurate and can be used in lieu of analytical models to reason about system performance: predicting the performance of a service when some factors change, finding causes of poor performance, or isolating the contribution of individual factors to observed performance. We present three systems, What-if Scenario Evaluator (WISE), How to Improve Performance (HIP), and Network Access Neutrality Observatory (NANO), that use statistical causal methods to facilitate network management tasks. WISE predicts performance for what-if configuration and deployment questions for content distribution networks. For this, WISE learns the causal dependency structure among the latency-causing factors, and when one or more factors is changed, WISE estimates the effect on the other factors using that dependency structure. HIP extends WISE and uses the causal dependency structure to invert the performance function, find causes of poor performance, and help answer questions about how to improve performance or achieve performance goals. NANO uses causal inference to quantify the impact of ISPs' discrimination policies on service performance; it is the only tool to date for detecting destination-based discrimination techniques that ISPs may use. We have evaluated these tools by applying them to large-scale Internet-based services and through experiments on the wide-area Internet. WISE is actively used at Google to predict network-level and browser-level response times for Web search for new datacenter deployments. We have used HIP to find causes of high-latency Web search transactions at Google, and identified many cases where high-latency transactions can be significantly mitigated with simple infrastructure changes. We have evaluated NANO through experiments on the wide-area Internet and have made the tool publicly available to recruit users and deploy NANO at a global scale.
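WISE's actual machinery is a learned causal dependency graph over latency-causing factors. As a crude stand-in for the underlying idea, the sketch below answers a what-if deployment question by conditioning collected observations on the hypothetical configuration; the dataset fields, values, and the single-factor intervention are illustrative assumptions only, not WISE's algorithm.

```python
import statistics

# Illustrative observations: (client_region, datacenter, rtt_ms, response_ms).
# In WISE these would be comprehensive service logs covering many conditions.
observations = [
    ("eu", "dc-us", 120, 480), ("eu", "dc-us", 130, 510),
    ("eu", "dc-eu", 30, 210),  ("eu", "dc-eu", 35, 230),
    ("us", "dc-us", 25, 190),  ("us", "dc-us", 28, 205),
]

def what_if_response_time(client_region, new_datacenter):
    """Predict response time for client_region if its traffic were served
    from new_datacenter, by conditioning on observations that already match
    the hypothetical configuration (a crude stand-in for WISE's causal graph)."""
    sample = [resp for (region, dc, _rtt, resp) in observations
              if region == client_region and dc == new_datacenter]
    if not sample:
        raise ValueError("no observations cover this what-if scenario")
    return statistics.mean(sample)

# What if European clients were served from the European datacenter?
print(what_if_response_time("eu", "dc-eu"))  # -> 220.0 ms
```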
5

Disponibilidade de conteúdo em sistemas CDN assistidos por redes P2P / Content availability in CDN systems assisted by P2P networks

Oliveira, Jhonathan Araújo 24 September 2013 (has links)
Scalability and high demand for resources are the main challenges that content providers face in network-based multimedia applications. On YouTube, for example, one of the most popular video-on-demand delivery systems, users upload 100 hours of video to its servers every minute, and more than four billion hours of video are watched every month. CDN-P2P systems are widely regarded as a scalable alternative for multimedia content delivery on the Internet. In these systems, the peers of a peer-to-peer (P2P) network share their resources, reducing the demands on the content delivery network (CDN) infrastructure, while the CDN servers guarantee content availability when peer contributions are limited by churn or when the content is new to the peers of the P2P network. However, CDN-P2P systems alone do not guarantee effective service, since the departure of peers that are the sole holders of particular content can generate congestion around the CDN server and degrade the users' quality of experience. This dissertation investigates the contribution of stable peers to content availability in the P2P portion of a CDN-P2P system designed to distribute videos similar to those distributed by YouTube. To that end, real data were collected from the YouTube Web site, exploring the behavior of users who access playlists in order to characterize peer stability in the system. The assumption that playlist viewers are effective for content availability rests on the longer time these users stay connected to the system and on the likely popularity of the content they share. We found that when a large number of playlist-playing peers spend long sessions connected, content availability improved by 60%; even in scenarios with low playlist-player participation, the improvement exceeded 20%. We also evaluated how mesh-construction policies impact the distribution system when peers are grouped and identified as stable or ordinary. These policies structure the P2P portion of the system through criteria applied to the arrival, maintenance, and management of peer connections, thereby reducing the demands on the CDN server.
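The mesh-construction policies themselves are not detailed in the abstract. The sketch below shows the general mechanism it describes: classify peers as stable from observed session behavior (playlist viewers staying connected longer) and let a joining peer's connection list favor stable peers. The threshold and record fields are assumptions for illustration.

```python
# Illustrative peer records: (peer_id, session_minutes, plays_playlists).
peers = [
    ("p1", 95, True), ("p2", 12, False), ("p3", 70, True), ("p4", 8, False),
]

STABLE_SESSION_MIN = 60  # assumed threshold; the dissertation characterizes
                         # stability empirically from YouTube playlist traces

def is_stable(session_minutes, plays_playlists):
    """Classify a peer as stable: long sessions, typically playlist viewers."""
    return plays_playlists and session_minutes >= STABLE_SESSION_MIN

def neighbor_candidates(all_peers, max_neighbors=3):
    """Mesh-construction policy sketch: rank stable peers first so that a
    joining peer's connections favor likely-long-lived content holders."""
    ranked = sorted(all_peers,
                    key=lambda p: is_stable(p[1], p[2]), reverse=True)
    return [p[0] for p in ranked[:max_neighbors]]

print(neighbor_candidates(peers))  # -> ['p1', 'p3', 'p2'] (stable peers first)
```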
6

Gerenciamento de conteúdo multimídia em redes CDN-P2P / Multimedia content management in CDN-P2P networks

Libório Filho, João da Mata 22 March 2012 (has links)
Scalability and high demand for resources are the main challenges that content providers face in deploying video-on-demand (VoD) applications. The most popular video sharing site, YouTube, has over 4 billion video views a day, and 60 hours of video are uploaded every minute. Hybrid (CDN-P2P) systems have been proposed as a scalable and cost-effective solution for VoD distribution. In these systems, peers share their resources, decreasing the demand on the content distribution network (CDN) infrastructure, while the CDN's servers guarantee content availability when peer contributions are limited by churn. However, the content distributed in these systems must be managed so that the workload on the CDN servers is minimized. One issue to be investigated is the impact of churn, i.e. the effect of the cycle of peers joining and leaving, on object management policies. Our studies showed that the performance of the policies improves as the storage capacity of peers increases; however, this increase in capacity does not improve the policies' performance proportionately. We then proposed, implemented, and evaluated four object management policies derived from real data obtained from YouTube video collections. These policies use information left by users or generated by the video distribution system to measure the value of objects; the rationale for valuing objects with this information is the influence of YouTube's recommendation system on content access, since that system uses the same information to suggest videos to users. The proposed policies improved content availability by more than 70% compared to the LFU policy, and by more than 50% compared to the GDSP policy.
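The four policies are not specified in the abstract beyond using user-generated information to value objects. As a minimal sketch of that idea, the cache below evicts the lowest-valued object, where the value function and its weights are purely illustrative assumptions; LFU, the baseline, would instead evict the least-frequently accessed object.

```python
# Value-based eviction sketch: an object's value is computed from metadata of
# the kind YouTube exposes (views, ratings, comments). The weighting below is
# an assumption for illustration, not one of the thesis's actual policies.
def object_value(meta):
    return meta["views"] + 50 * meta["ratings"] + 20 * meta["comments"]

class ValueCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # video_id -> metadata dict

    def insert(self, video_id, meta):
        if video_id in self.store:
            self.store[video_id] = meta
            return
        if len(self.store) >= self.capacity:
            # Evict the object with the lowest value; LFU would instead
            # evict the least frequently accessed object.
            victim = min(self.store, key=lambda v: object_value(self.store[v]))
            del self.store[victim]
        self.store[video_id] = meta

cache = ValueCache(capacity=2)
cache.insert("v1", {"views": 1000, "ratings": 10, "comments": 5})
cache.insert("v2", {"views": 200, "ratings": 2, "comments": 1})
cache.insert("v3", {"views": 5000, "ratings": 40, "comments": 30})
print(sorted(cache.store))  # -> ['v1', 'v3']; low-value 'v2' was evicted
```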
7

Content Distribution in Social Groups

Aggarwal, Saurabh January 2014 (has links) (PDF)
We study Social Groups consisting of self-interested, inter-connected nodes looking for common content. Social Groups can be observed in various socio-technological networks, such as Cellular Network assisted Device-to-Device communications, Cloud assisted Peer-to-Peer Networks, hybrid Peer-to-Peer Content Distribution Networks, and Direct Connect Networks. Each node wants to acquire a universe of segments at the least cost. Nodes can either access an expensive link to the content distributor to download data segments, or use the well-connected, low-cost inter-node network to exchange segments among themselves. Activating an inter-node link requires cooperation among the participating nodes and reduces the nodes' downloading cost. However, due to uploading costs, Non-Reciprocating Nodes are reluctant to upload segments, in spite of their interest in downloading segments from others. We define the Give-and-Take (GT) criterion, which prohibits non-reciprocating behaviour in Social Groups for all nodes at all instants. In the “Full Exchange” case studied here, two nodes can exchange copies of their entire segment sets if each node gains at least one new segment from the other. Incorporating the GT criterion in the Social Group, we study the problem of downloading the universe at the least cost from the perspective of a new node having no data segments. We analyze this NP-hard problem and propose algorithms for choosing the initial segments to download from the content distributor and the sequence of nodes for exchange. We compare the performance of these algorithms with a few existing P2P downloading strategies in terms of cost and running time. In the second problem, we attempt to reduce the load on the content distributor by choosing a schedule of inter-node link activations that maximizes the number of nodes holding the universe. Link activation decisions are taken by a central entity, the facilitator, to achieve the social optimum. We present an asymptotically optimal Randomized algorithm, along with other algorithms, such as the Greedy Links algorithm and the Polygon algorithm, which are optimal under special scenarios of interest. We compare the performance of all proposed algorithms with the optimal value of the objective and observe that computationally intensive algorithms exhibit better performance. Further, we consider the problem of decentralized scheduling of links, where link activation decisions are made by the participating nodes in a distributed manner. While conforming to the GT criterion for inter-node exchanges, each node's objective is to maximize its own utility. Each node tries to find a pairing partner by preferentially exploring nodes for link formation. Unpaired nodes choose to download a segment over the expensive link with a Segment Aggressiveness Probability (SAP). We present linear-complexity decentralized algorithms for nodes to choose their best strategy, as well as a decentralized randomized algorithm that works in the absence of the facilitator and performs close to optimal for a large number of nodes. We define the Price of Choice to benchmark the performance of Social Groups (consisting of non-aggressive nodes) against the optimal. We evaluate the performance of the various algorithms and characterize the behavioural regime that yields the best results for both the node and the Social Group.
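A minimal sketch of the Give-and-Take criterion in the Full Exchange case follows; the set representation and the link-activation check are a straightforward reading of the definition above, not code from the thesis.

```python
# Give-and-Take (GT) criterion, "Full Exchange" case: two nodes may exchange
# copies of their entire segment sets only if each side gains at least one
# segment it does not already have.
def gt_full_exchange(segments_a, segments_b):
    """Return the post-exchange segment sets, or None if GT forbids the link.

    GT prohibits non-reciprocating behaviour: a link activates only when both
    nodes benefit, i.e. neither segment set contains the other.
    """
    a_gains = segments_b - segments_a
    b_gains = segments_a - segments_b
    if not a_gains or not b_gains:
        return None  # one side would give without receiving
    union = segments_a | segments_b
    return union, union  # both nodes end up with the combined set

print(gt_full_exchange({1, 2}, {2, 3}))  # allowed -> both get {1, 2, 3}
print(gt_full_exchange({1, 2, 3}, {2}))  # None: the second node cannot reciprocate
```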
