1

Utility-oriented internetworking of content delivery networks

Pathan, Al-Mukaddim Khan January 2009 (has links)
Today's Internet content providers primarily use Content Delivery Networks (CDNs) to deliver content to end-users, with the aim of enhancing their Web access experience. Yet the prevalent commercial CDNs, operating in isolation, often face resource over-provisioning, degraded performance, and Service Level Agreement (SLA) violations, thus incurring high operational costs and limiting the scope and scale of their services.

To move beyond these shortcomings, this thesis sets out to establish the basis for developing advanced and efficient content delivery solutions that are scalable, high-performance, and cost-effective. It introduces techniques to enable coordination and cooperation between multiple content delivery services, termed "CDN peering". In this context, the thesis addresses five key issues: when to peer (triggering circumstances), how to peer (interaction strategies), whom to peer with (resource discovery), how to manage and enforce operational policies (request-redirection and load sharing), and how to demonstrate peering applicability (measurement study and proof-of-concept implementation).

Thesis contributions: To support the thesis that the resource over-provisioning and degraded performance problems of existing CDNs can be overcome, thus improving the Web access experience of Internet end-users, we have:

- identified the key research challenges and core technical issues for CDN peering, along with a systematic understanding of the CDN space covering relevant applications, features, and implementation techniques, captured in a comprehensive taxonomy of CDNs;
- developed a novel architectural framework that provides the basis for CDN peering, formed by a set of autonomous CDNs that cooperate through an interconnection mechanism, providing the infrastructure and facilities to virtualize the service of multiple providers;
- devised Quality-of-Service (QoS)-oriented analytical performance models to demonstrate the effects of CDN peering and predict end-user perceived performance, thus enabling a CDN provider to make concrete QoS performance guarantees;
- developed enabling techniques, i.e. resource discovery, server selection, and request-redirection algorithms, for CDN peering to achieve service responsiveness. These techniques are exercised to alleviate imbalanced load conditions while minimizing redirection cost;
- introduced a utility model for CDN peering that measures its content-serving ability by capturing the traffic activities in the system, evaluated through extensive discrete-event simulation analysis. The findings of this study provide incentives for exploiting critical parameters in a better CDN peering system design; and
- demonstrated a proof-of-concept implementation of the utility model and an empirical measurement study on MetaCDN, a global overlay for Cloud-based content delivery, aided with a utility-based redirection scheme to improve the traffic activities in the world-wide distributed network of MetaCDN.
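The abstract names a utility-based redirection scheme but gives no formula. As a hedged illustration only, the sketch below picks a peering target using a hypothetical utility that trades off idle capacity against redirection cost; the weights, the overload threshold, and the `PeerCDN` fields are all invented for this example, not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class PeerCDN:
    name: str
    load: float              # current utilization in [0, 1]
    redirection_cost: float  # e.g., normalized network distance to the client

def utility(peer: PeerCDN, w_load: float = 0.7, w_cost: float = 0.3) -> float:
    # Hypothetical content-serving utility: reward idle capacity,
    # penalize redirecting traffic to a distant peer. Weights are invented.
    return w_load * (1.0 - peer.load) - w_cost * peer.redirection_cost

def redirect(peers: list, overload: float = 0.8) -> PeerCDN:
    # Prefer peers that are not themselves overloaded; fall back to all.
    candidates = [p for p in peers if p.load < overload]
    return max(candidates or peers, key=utility)

peers = [PeerCDN("cdn-a", load=0.9, redirection_cost=0.1),
         PeerCDN("cdn-b", load=0.4, redirection_cost=0.3),
         PeerCDN("cdn-c", load=0.5, redirection_cost=0.6)]
print(redirect(peers).name)  # cdn-b: lightly loaded and reasonably close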
2

Resource allocation in large-scale multi-server systems

Moharir, Sharayu Arun 09 February 2015 (has links)
The focus of this dissertation is the task of resource allocation in multi-server systems arising from two applications – multi-channel wireless communication networks and large-scale content delivery networks. The unifying theme behind all the problems studied in this dissertation is the large-scale nature of the underlying networks, which necessitates the design of algorithms which are simple/greedy and therefore scalable, and yet have good performance guarantees.

For the multi-channel multi-hop wireless communication networks we consider, the goal is to design scalable routing and scheduling policies which stabilize the system and perform well from a queue-length and end-to-end delay perspective. We first focus on relay-assisted downlink networks, where it is well understood that the BackPressure algorithm is stabilizing, but its delay performance can be poor. We propose an alternative algorithm – an iterative MaxWeight algorithm – and show that it stabilizes the system and outperforms the BackPressure algorithm. Next, we focus on wireless networks which serve mobile users via a wide-area base-station and multiple densely deployed short-range access nodes (e.g., small cells). We show that traditional algorithms that forward each packet at most once, either to a single access node or a mobile user, do not have good delay performance, and we propose an algorithm (a distributed scheduler – DIST) and show that it can stabilize the system and performs well from a queue-length/delay perspective.

In content delivery networks, each arriving job can only be served by servers storing the requested content piece. Motivated by this, we consider two settings. In the first setting, each job, on arrival, reveals a deadline and a subset of servers that can serve it, and the goal is to maximize the fraction of jobs that are served before their deadlines. We propose an online load balancing algorithm which uses correlated randomness and prove its optimality. In the second setting, we study content placement in a content delivery network where a large number of servers serve a correspondingly large volume of content requests arriving according to an unknown stochastic process. The main takeaway from our results for this setting is that separating the estimation of demands from the subsequent use of the estimates to design optimal content placement policies (the learn-and-optimize approach) is suboptimal. In addition, we study two simple adaptive content replication policies and show that they outperform all learning-based static storage policies.
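BackPressure and MaxWeight are standard queue-length-driven schedulers. As a reference point for the algorithms named above, here is a minimal single-hop MaxWeight sketch; the dissertation's iterative variant and the DIST scheduler are not reproduced here, and the toy link set below is invented.

```python
def maxweight_schedule(queues, rates, feasible_sets):
    """Single-hop MaxWeight: among the feasible activation sets
    (e.g., sets of non-interfering links), pick the one maximizing
    the sum of queue_length * service_rate."""
    def weight(s):
        return sum(queues[link] * rates[link] for link in s)
    return max(feasible_sets, key=weight)

# Toy example: 3 links, at most one may transmit per slot.
queues = {0: 8, 1: 3, 2: 5}          # packets waiting per link
rates  = {0: 1.0, 1: 2.0, 2: 1.5}    # current channel rates
feasible = [set(), {0}, {1}, {2}]    # interference constraint
print(maxweight_schedule(queues, rates, feasible))  # {0}: weight 8.0
```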
3

A Study Report on Content Distribution Network’s Technology & Financial Market

Mughal, Muhammad Irfan Younas, Khan, Mustafa January 2009 (has links)
With the advancement of the Internet age, the need to distribute ever more data to different users, on different types of networks, in a short time and at nominal cost has increased significantly. Several technologies with different kinds of implementations have been used to achieve these objectives, but only a few survive in today's highly competitive financial market. The objective of our thesis is to study the technology and the financial market of the Content Distribution Network, which has so far proven to be a very effective way to meet the ever-increasing demands of the rapidly developing Internet age. In this thesis, we not only discuss the taxonomies of the Content Distribution Network (CDN) and its different types and implementations, but also focus on its financial issues and its performance in the financial market. The aim of our project is to study and understand the technology of the CDN, the problems related to its implementation, the associated research work, and its financial aspects.
4

Analyse, Modellierung und Verfahren zur Kompensation von CDN-bedingten Verkehrslastverschiebungen in ISP-Netzen

Windisch, Gerd 17 March 2017 (has links)
A large share of the traffic in Internet Service Provider (ISP) networks today is generated by Content Delivery Networks (CDNs). CDN operators use load-balancing mechanisms to even out the utilization of their CDN infrastructure, and they do so without coordinating with the ISP operators. Large traffic-load shifts can therefore occur both within an ISP network and on the interconnection links between the ISP network and the CDNs. This thesis investigates which non-cooperative options an ISP has to counteract or mitigate traffic shifts caused by load-balancing mechanisms inside a CDN. The investigation is grounded in an analysis of the server selection behavior of the YouTube CDN. To this end, an active measurement method was developed to determine the spatial and temporal behavior of YouTube's server selection. Two measurement studies examine server selection in German and European ISP networks. Based on these studies, a traffic model is developed that captures the traffic shifts caused by changes in YouTube's server selection. The traffic model in turn forms the basis for computing optimal routes in the ISP network that are highly robust against CDN-induced traffic shifts (alpha-robust routing optimization). To solve the robust routing optimization problem, an iterative procedure is developed and a compact reformulation is presented. The performance of alpha-robust routing is evaluated on three example network topologies, and the new method is compared with alternative robust routing methods and a non-robust method. Beyond the robust routing optimization, the thesis presents three further ideas for non-cooperative methods (BGP-, IP-prefix-, and DNS-based) to counteract CDN-induced traffic shifts.
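The thesis's active measurement method is not specified in this abstract. As a hedged illustration of the general idea, the sketch below polls DNS-based server selection for a hostname over time using only the Python standard library; the hostname, port, and sampling interval are placeholders, not the ones used in the study, and a real campaign would measure from many vantage points.

```python
import socket, time, datetime

def sample_server_selection(hostname: str, samples: int = 5, interval_s: float = 60.0):
    """Record which server IPs a hostname resolves to over time.
    Changes in the returned sets hint at CDN server-selection shifts."""
    history = []
    for _ in range(samples):
        infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
        ips = sorted({info[4][0] for info in infos})
        history.append((datetime.datetime.now(datetime.timezone.utc), ips))
        time.sleep(interval_s)
    return history

# Placeholder hostname; a real study would resolve the CDN's media hostnames
# repeatedly and correlate the observed shifts with routing and traffic data.
for ts, ips in sample_server_selection("www.youtube.com", samples=2, interval_s=1.0):
    print(ts.isoformat(), ips)
```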
5

Social network support for data delivery infrastructures

Sastry, Nishanth Ramakrishna January 2011 (has links)
Network infrastructures often need to stage content so that it is accessible to consumers. The standard solution, deploying the content on a centralised server, can be inadequate in several situations. Our thesis is that information encoded in social networks can be used to tailor content staging decisions to the user base and thereby build better data delivery infrastructures. This claim is supported by two case studies, which apply social information in challenging situations where traditional content staging is infeasible. Our approach works by examining empirical traces to identify relevant social properties, and then exploits them.

The first study looks at cost-effectively serving the "Long Tail" of rich-media user-generated content, which needs to be staged close to viewers to control latency and jitter. Our traces show that a preference for the unpopular tail items often spreads virally and is localised to some part of the social network. Exploiting this, we propose Buzztraq, which decreases replication costs by selectively copying items to locations favoured by viral spread. We also design SpinThrift, which separates popular and unpopular content based on the relative proportion of viral accesses, and opportunistically spins down disks containing unpopular content, thereby saving energy.

The second study examines whether human face-to-face contacts can efficiently create paths over time between arbitrary users. Here, content is staged by spreading it through intermediate users until the destination is reached. Flooding every node minimises delivery times but is not scalable. We show that the human contact network is resilient to individual path failures and, for unicast paths, can efficiently approximate flooding in delivery-time distribution simply by randomly sampling a handful of the paths it finds. Multicast by contained flooding within a community is also efficient. However, connectivity relies on rare contacts, and frequent contacts are often not useful for data delivery. Also, periods of similar duration can achieve different levels of connectivity; we devise a test to identify good periods. We finish by discussing how these properties influence routing algorithms.
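SpinThrift's popularity split is described only qualitatively above. A minimal sketch of that idea follows, under the assumption that each item carries counts of viral and total accesses; the threshold, item names, and counts are invented for illustration.

```python
def partition_by_virality(items, viral_threshold=0.5):
    """Split content items by the proportion of accesses arriving via
    social/viral referrals. Items dominated by viral accesses are treated
    as unpopular-tail candidates whose disks may be opportunistically
    spun down; the rest stay on always-on disks."""
    spun_down, always_on = [], []
    for name, viral_accesses, total_accesses in items:
        ratio = viral_accesses / total_accesses if total_accesses else 0.0
        (spun_down if ratio >= viral_threshold else always_on).append(name)
    return always_on, spun_down

items = [("clip-a", 5, 1000),   # broadly popular: few viral accesses
         ("clip-b", 80, 100),   # tail item spreading through friends
         ("clip-c", 9, 10)]
popular, tail = partition_by_virality(items)
print(popular, tail)  # ['clip-a'] ['clip-b', 'clip-c']
```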
6

Estudio, análisis y desarrollo de una red de distribución de contenido y su algoritmo de redirección de usuarios para servicios web y streaming

Molina Moreno, Benjamin 02 September 2013 (has links)
This thesis was produced within the research line on Content Distribution Mechanisms in IP Networks, which has carried out its activity in several research projects and in the course "Mecanismos de Distribución de Contenidos en Redes IP" of the doctoral programme "Telecomunicaciones" taught by the Department of Communications of the UPV, and currently in the Master's programme in "Tecnologías, Sistemas y Redes de Comunicación". The growth of the Internet is widely known, both in number of clients and in generated traffic. This brings a multimedia interface close to clients, where data, voice, video, music, etc. can converge. While this represents a business opportunity along multiple dimensions, scalability must be addressed seriously, the aim being that the average performance of a system is not degraded as the number of clients or the volume of requested information grows. The study and analysis of web and streaming content distribution using CDNs is the object of this project. The approach is a generalist one, ignoring network-layer solutions such as IP multicast, as well as resource reservation, since they are not natively available in the Internet infrastructure. This leads to the introduction of the application layer as the coordinating framework for content distribution. Among these networks, also called overlay networks, the use of a Content Delivery Network (CDN) has been chosen. This type of application-level network is highly scalable and allows full control over the resources and functionality of all the elements of its architecture. This makes it possible to evaluate the performance of a CDN distributing multimedia content in terms of: required bandwidth, response time obtained by clients, perceived quality, distribution mechanisms, time-to-live when using caching, etc. CDNs were born at the end of the nineties with the main objective of eliminating or attenuating the so-called flash-crowd effect, caused by a massive influx of clients. Currently, this type of network directs most of its efforts towards the capability of offering streaming media over the Internet. For a thorough analysis, this thesis proposes an initial simplified CDN model, both theoretical and practical. On the theoretical side, a mathematical model is presented that allows a CDN to be evaluated analytically. This model becomes considerably more complex as new functionality is introduced, so a simulation model is formulated and developed that makes it possible, on the one hand, to verify the validity of the mathematical framework and, on the other, to establish a comparative framework for the practical implementation of the CDN, a task carried out in the final phase of the thesis. In this way, the results obtained span theory, simulation, and practice.
Molina Moreno, B. (2013). Estudio, análisis y desarrollo de una red de distribución de contenido y su algoritmo de redirección de usuarios para servicios web y streaming [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31637
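The abstract does not reproduce the analytical model. As a purely illustrative stand-in, not the thesis's model, a textbook M/M/1 approximation shows the kind of scalability question such a model answers: how mean response time changes when load is split across N surrogates. All rates and counts below are invented.

```python
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time of an M/M/1 queue, T = 1 / (mu - lambda).
    Valid only while lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: lambda >= mu")
    return 1.0 / (service_rate - arrival_rate)

lam, mu = 80.0, 100.0   # requests/s arriving at the origin, origin capacity
t_origin = mm1_response_time(lam, mu)

n = 4                   # surrogates; each sees lam/n with half the origin's capacity
t_cdn = mm1_response_time(lam / n, mu / 2.0)

print(f"origin alone: {t_origin*1000:.1f} ms, {n} surrogates: {t_cdn*1000:.1f} ms")
# origin alone: 50.0 ms, 4 surrogates: 33.3 ms
```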
7

Supporting data center and Internet video applications with stringent performance needs: measurements and design

Ehab Mohammad Ghabashneh 28 March 2024 (has links)
Ensuring a high quality of experience for Internet applications is challenging owing to the significant variability (e.g., of traffic patterns) inherent to both cloud data-center networks and wide-area networks. This thesis focuses on optimizing application performance by both conducting measurements to characterize traffic variability and designing applications that can perform well in the face of variability. On the data center side, a key aspect that impacts performance is traffic burstiness at fine-granular time scales, yet little is known about traffic burstiness and how it impacts application loss. On the wide-area side, we focus on video applications as a major traffic driver. While optimizing traditional video traffic remains a challenge, new forms of video such as 360° video introduce additional challenges, such as responsiveness, on top of the bandwidth-uncertainty challenge. In this thesis, we make three contributions.

First, for data center networks, we present Millisampler, a lightweight network traffic characterization tool for continual monitoring, which operates at fine configurable time scales and is deployed across all servers in a large real-world data center network. Millisampler takes a host-centric perspective to characterize traffic across all servers within a data center rack at the same time. Next, we present a data-center-scale joint analysis of burstiness, contention, and loss. Our results show that (i) bursts are likely to encounter contention; (ii) contention varies significantly over short timescales; and (iii) higher contention need not lead to more loss, and the interplay with workload and burst properties matters.

Second, we consider challenges with traditional video in wide-area networks. We take a step towards understanding the interplay between Content Delivery Networks (CDNs) and video performance through end-to-end measurements. Our results show that (i) video traffic in a session can be sourced from multiple CDN layers, and (ii) throughput can vary significantly based on the traffic source. Next, we evaluate the potential benefits of exposing CDN information to the client Adaptive Bit Rate (ABR) algorithm. Emulation experiments show the approach has the potential to reduce prediction inaccuracies and enhance video quality of experience (QoE).

Third, for 360° videos, we argue for a new streaming model which is explicitly designed for continuous, rather than stalling, playback to preserve interactivity. Next, we propose Dragonfly, a new 360° system that leverages the additional degrees of freedom provided by this design point. Dragonfly proactively skips tiles (i.e., spatial segments of the video) using a model that defines an overall utility function capturing factors relevant to user experience. We conduct a user study showing that the majority of the interactivity feedback indicates Dragonfly is highly reactive, while the majority of the feedback for the state of the art indicates those systems are slow to react. Further, extensive emulations show Dragonfly improves image quality significantly without stalling playback.
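Dragonfly's utility model is only named above. A hedged sketch of the general idea follows: greedily choose which tiles to send under a per-deadline byte budget so playback never stalls. The viewing utilities, tile sizes, and budget are all invented, and the real system's utility function is richer than this.

```python
def pick_tiles(tiles, byte_budget):
    """Greedy knapsack-style selection: send tiles in decreasing
    utility-per-byte order until the budget for this playback deadline
    is exhausted; remaining tiles are skipped so playback continues."""
    ranked = sorted(tiles, key=lambda t: t["utility"] / t["bytes"], reverse=True)
    chosen, spent = [], 0
    for t in ranked:
        if spent + t["bytes"] <= byte_budget:
            chosen.append(t["id"])
            spent += t["bytes"]
    return chosen

# "utility" here stands in for view probability times quality gain.
tiles = [{"id": "t0", "utility": 0.90, "bytes": 300_000},
         {"id": "t1", "utility": 0.60, "bytes": 250_000},
         {"id": "t2", "utility": 0.20, "bytes": 200_000},
         {"id": "t3", "utility": 0.05, "bytes": 180_000}]
print(pick_tiles(tiles, byte_budget=600_000))  # ['t0', 't1']
```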
8

Towards improvements in resource management for content delivery networks

Rodrigues, Moisés Bezerra Estrela 03 March 2016 (has links)
During the last decades, the World Wide Web went from a way to connect a handful of nodes to the means by which people cooperate in search of knowledge, social interaction, and entertainment. Furthermore, our homes and workstations are not the only places where we are connected: the mobile broadband market is present and changing the way we interact with the web. According to Cisco, global network traffic will be three times higher in 2018 than it was in 2013. Real-time entertainment has been and will remain an important part of this growth. However, the Internet was not designed to handle such demand, and therefore there is a need for new technologies to overcome these challenges. Content Delivery Networks (CDNs) are an alternative for doing so. The basic concept is to distribute replica servers geographically, keeping content close to end users. Following CDNs' popularity, an increasing number of CDNs, most of them highly localized, began to be deployed. Furthermore, Cloud Computing emerged, making software and hardware accessible as resources through well-defined interfaces. Using Cloud services, such as distributed IaaS, one can deploy complex CDNs. Despite being the best technology for scaling content distribution, there are scenarios where CDNs may perform poorly, such as flash crowd events. We therefore need to study content delivery techniques that efficiently accompany the ever-increasing demand for content while contemplating new possibilities, such as the growing number of smaller localized CDNs and Cloud Computing.

Examining these issues, this work presents strategies towards improvements in Content Delivery Networks (CDNs). We do so by proposing and evaluating algorithms, models, and a prototype demonstrating possible uses of such new technologies to improve CDN resource management. We present P2PCDNSim, a comprehensive CDN simulator designed to assist researchers in the process of planning and evaluating new strategies. Furthermore, we propose a new dynamic Replica Placement Algorithm (RPA), based on the count of data flows through network nodes, that maintains similar Quality of Experience (QoE) while decreasing cross traffic during flash crowd events. We also propose an SDN-based solution to improve replica placement flexibility in the mobile backhaul. Our experimental results show that the delay introduced by the developed module is less than 5 ms for 99% of the packets, which is negligible in today's LTE networks, and the slight negative impact on streaming rate selection is easily outweighed by the increased flexibility.
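The RPA above is described only as "based on the count of data flows through network nodes". A hedged sketch of that idea follows, placing replicas at the nodes currently traversed by the most request flows; the topology, flow paths, and replica count are invented for illustration.

```python
from collections import Counter

def place_replicas(flow_paths, k=2):
    """Count how many active request flows traverse each node and place
    k replicas at the busiest nodes, so that future requests are served
    before crossing the rest of the network."""
    counts = Counter(node for path in flow_paths for node in path)
    return [node for node, _ in counts.most_common(k)]

# Each path lists the nodes a client request crosses to reach the origin.
flows = [["edge1", "agg1", "core"],
         ["edge2", "agg1", "core"],
         ["edge3", "agg2", "core"],
         ["edge1", "agg1", "core"]]
print(place_replicas(flows))  # ['core', 'agg1']: traversed by the most flows
```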
