  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Implementace CDN a clusteringu v prostředí GNU/Linux s testy výkonnosti. / CDN and clustering in GNU/Linux with performance testing

Mikulka, Pavel January 2008 (has links)
Fault tolerance is essential in production-grade service delivery networks. One solution is to build a clustered environment that keeps system failures to a minimum. This thesis examines the use of high-availability and load-balancing services built with open-source tools on GNU/Linux. It discusses general high-availability techniques such as virtualization, synchronization and mirroring. DRBD, a tool for building synchronized Linux block devices, is well suited to building relatively cheap high-availability clusters. The thesis also examines the Linux-HA project, the Red Hat Cluster Suite, LVS, and related tools. Content Delivery Networks (CDNs) replicate content over several mirrored web servers strategically placed at various locations in order to deal with flash crowds. A CDN combines a request-routing mechanism with a replication mechanism, and thus offers fast and reliable applications and services by distributing content to cache servers located close to end users. This work examines the open-source CDNs Globule and CoralCDN and tests their performance in a global deployment.
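
The synchronous mirroring idea behind DRBD can be sketched in a few lines: a primary node acknowledges a write only after both its own copy and the peer's copy are updated, which is what keeps the two block devices interchangeable at failover time. The sketch below is a toy model of that idea, not DRBD's actual protocol or API; the class and method names are invented for illustration.

```python
# Toy illustration of synchronous (DRBD "protocol C"-style) block mirroring:
# a write is acknowledged only after both replicas have stored it.
# This is NOT DRBD's real protocol or API; names are made up for the sketch.

class BlockDevice:
    """In-memory stand-in for a block device."""
    def __init__(self, num_blocks: int):
        self.blocks = [b""] * num_blocks

    def write(self, index: int, data: bytes) -> None:
        self.blocks[index] = data

class MirroredPrimary:
    """Primary node that mirrors every write to a secondary before acking."""
    def __init__(self, local: BlockDevice, peer: BlockDevice):
        self.local = local
        self.peer = peer

    def write(self, index: int, data: bytes) -> bool:
        self.local.write(index, data)   # local write
        self.peer.write(index, data)    # replicate to the secondary
        return True                     # ack only after both copies are updated

if __name__ == "__main__":
    primary_disk, secondary_disk = BlockDevice(8), BlockDevice(8)
    node = MirroredPrimary(primary_disk, secondary_disk)
    node.write(0, b"superblock")
    # After a crash of the primary, the secondary holds identical data.
    assert primary_disk.blocks[0] == secondary_disk.blocks[0] == b"superblock"
```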
32

Stabilisation et optimisation des réseaux de diffusion de contenu / Stabilizing and optimizing content delivery networks

Benchaita, Walid 09 December 2016 (has links)
Un content delivery network (CDN), ou réseau de diffusion de contenu, est considéré comme la solution potentielle pour délivrer le volume de contenu croissant. Bien que les solutions CDN soient progressivement intégrées à l'infrastructure réseau, elles montrent toujours des limites technologiques pour faire face au nombre croissant d'applications exigeantes et gourmandes en bande passante. Dans cette thèse, la principale cible de nos contributions est le routage des requêtes, qui est un mécanisme de livraison de contenu qui a un impact clé sur l'échelle et la performance du CDN, ainsi que sur la qualité de l'expérience perçue par l'utilisateur. Nous présentons tout d'abord un schéma flexible et un algorithme d'optimisation, basé sur la théorie de Lyapunov, pour le routage des requêtes dans les CDN. Notre approche en ligne fournit une qualité de service stable aux clients, tout en améliorant les délais de livraison de contenu. Elle réduit également les coûts de transport des données pour les opérateurs et surpasse les techniques existantes en termes de gestion du trafic de pointe. Deuxièmement, pour surmonter les limites du mécanisme de redirection utilisé dans les solutions de routage de requêtes, nous introduisons une nouvelle approche de diffusion de contenu intégrant des principes de réseau centré sur l'information ou Information-centric networking (ICN) sans nécessiter de changement dans le réseau sous-jacent. Cette solution améliore les performances de diffusion de contenu et permet la mise en œuvre de stratégies de routage de requêtes rentables. / Today, many devices are capable of capturing full HD video and use their network connections to access the Internet. The popularization of these devices and continuous efforts to increase network quality have brought a proper environment for the rise of live streaming. Associated with the large scale of User-Generated Content (UGC), live streaming presents new challenges. Content Delivery Networks (CDNs) are considered the potential solution to deliver this rising content volume. Although CDN solutions are progressively integrated with the network infrastructure, they still show technological limitations in dealing with the increasing number of bandwidth-hungry and demanding applications. In this thesis, the main target of our contributions is request routing, a content delivery mechanism that has a key impact on the scale and performance of the CDN, as well as on the perceived Quality of Experience (QoE). First, we present a flexible scheme and an optimization algorithm, based on Lyapunov theory, for request routing in CDNs. Our online approach provides a stable quality of service to clients while improving content delivery delays. It also reduces data transport costs for operators and outperforms existing techniques in terms of peak traffic management. Second, to overcome the limitations of the redirection mechanism used in current request routing solutions, we introduce a new approach to content delivery incorporating Information-Centric Networking (ICN) principles without requiring any change in the underlying network. This solution improves content delivery performance and enables the implementation of cost-efficient request routing strategies.
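
The abstract does not spell out the algorithm, but Lyapunov-based request routing is commonly realized with a drift-plus-penalty rule: keep a virtual backlog per server and send each request to the server minimizing backlog plus V times its delivery cost. The sketch below illustrates that generic pattern under assumed per-server costs and service rates; it is not the thesis's actual scheme, and all names and parameters are illustrative.

```python
# Generic drift-plus-penalty request routing sketch (not the thesis's algorithm).
# Each server keeps a virtual backlog queue Q; a request goes to the server
# minimizing Q[s] + V * cost[s], then queues are updated with arrivals/service.
import random

def route_requests(servers, cost, service_rate, num_requests, V=10.0, seed=1):
    rng = random.Random(seed)
    Q = {s: 0.0 for s in servers}          # virtual backlog per server
    assignment = []
    for _ in range(num_requests):
        # drift-plus-penalty rule: small backlog and small cost are both good
        chosen = min(servers, key=lambda s: Q[s] + V * cost[s])
        assignment.append(chosen)
        Q[chosen] += 1.0                    # the new request adds backlog
        for s in servers:                   # each server drains at its rate
            Q[s] = max(0.0, Q[s] - service_rate[s] * rng.uniform(0.5, 1.5))
    return assignment, Q

if __name__ == "__main__":
    servers = ["edge-A", "edge-B", "origin"]
    cost = {"edge-A": 1.0, "edge-B": 1.2, "origin": 3.0}   # transport cost
    rate = {"edge-A": 0.8, "edge-B": 0.8, "origin": 2.0}   # service capacity
    plan, backlog = route_requests(servers, cost, rate, num_requests=200)
    print({s: plan.count(s) for s in servers}, backlog)
```

Larger V weights cost (and thus operator expenses) more heavily, while smaller V keeps backlogs, and hence delays, lower.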
33

Scalable download protocols

Carlsson, Niklas 15 December 2006
Scalable on-demand content delivery systems, designed to effectively handle increasing request rates, typically use service aggregation or content replication techniques. Service aggregation relies on one-to-many communication techniques, such as multicast, to efficiently deliver content from a single sender to multiple receivers. With replication, multiple geographically distributed replicas of the service or content share the load of processing client requests and enable delivery from a nearby server.
Previous scalable protocols for downloading large, popular files from a single server include batching and cyclic multicast. Analytic lower bounds developed in this thesis show that neither of these protocols consistently yields performance close to optimal. New hybrid protocols are proposed that achieve within 20% of the optimal delay in homogeneous systems, as well as within 25% of the optimal maximum client delay in all heterogeneous scenarios considered.
In systems utilizing both service aggregation and replication, well-designed policies determining which replica serves each request must balance the objectives of achieving high locality of service, and high efficiency of service aggregation. By comparing classes of policies, using both analysis and simulations, this thesis shows that there are significant performance advantages in using current system state information (rather than only proximities and average loads) and in deferring selection decisions when possible. Most of these performance gains can be achieved using only local (rather than global) request information.
Finally, this thesis proposes adaptations of already proposed peer-assisted download techniques to support a streaming (rather than download) service, enabling playback to begin well before the entire media file is received. These protocols split each file into pieces, which can be downloaded from multiple sources, including other clients downloading the same file. Using simulations, a candidate protocol is presented and evaluated. The protocol includes both a piece selection technique that effectively mediates the conflict between achieving high piece diversity and the in-order requirements of media file playback, as well as a simple on-line rule for deciding when playback can safely commence.
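
As a rough illustration of the piece-selection tension described above, the sketch below scores each missing piece by a blend of rarity (which favors piece diversity in the swarm) and playback urgency (which favors in-order delivery) and requests the highest-scoring one. The scoring function and weight are assumptions made for illustration, not the selection technique proposed in the thesis.

```python
# Illustrative piece selection for peer-assisted streaming (not the thesis's rule):
# balance rarity (helps piece diversity) against urgency (helps in-order playback).

def select_piece(have, availability, playback_pos, total_pieces, urgency_weight=0.7):
    """Return the index of the next piece to request, or None if nothing is needed."""
    best, best_score = None, float("-inf")
    for piece in range(total_pieces):
        if piece in have or availability.get(piece, 0) == 0:
            continue
        rarity = 1.0 / availability[piece]            # rarer pieces score higher
        distance = piece - playback_pos
        urgency = 1.0 / (1 + max(0, distance))        # near-deadline pieces score higher
        score = urgency_weight * urgency + (1 - urgency_weight) * rarity
        if score > best_score:
            best, best_score = piece, score
    return best

if __name__ == "__main__":
    have = {0, 1, 2}
    availability = {3: 5, 4: 1, 5: 2, 6: 4, 7: 1}     # copies seen in the swarm
    # Piece 3 wins here: it is common, but playback needs it next.
    print(select_piece(have, availability, playback_pos=3, total_pieces=8))
```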
35

Service quality assurance for the IPTV networks

Azgin, Aytac 17 September 2013 (has links)
The objective of the proposed research is to design and evaluate end-to-end solutions to support the Quality of Experience (QoE) for the Internet Protocol Television (IPTV) service. IPTV is a system that integrates voice, video, and data delivery into a single Internet Protocol (IP) framework to enable interactive broadcasting services for subscribers. It promises significant advantages for both service providers and subscribers. For instance, unlike conventional broadcasting systems, IPTV broadcasts will not be restricted by the limited number of channels in the broadcast/radio spectrum. Furthermore, IPTV will provide its subscribers with the opportunity to access and interact with a wide variety of high-quality on-demand video content over the Internet. However, these advantages come at the expense of stricter quality of service (QoS) requirements than traditional Internet applications. Since IPTV is considered a real-time broadcast service over the Internet, the success of the IPTV service depends on the QoE perceived by the end users. The characteristics of the video traffic as well as the high-quality requirements of the IPTV broadcast impose strict requirements on transmission delay. The IPTV framework has to provide mechanisms to satisfy the stringent delay, jitter, and packet loss requirements of the IPTV service over lossy transmission channels with varying characteristics. The proposed research focuses on error recovery and channel change latency problems in IPTV networks. Our specific aim is to develop a content delivery framework that integrates content features, IPTV application requirements, and network characteristics in such a way that network resource utilization can be optimized for the given constraints on user-perceived service quality. To achieve the desired QoE levels, the proposed research focuses on the design of resource-optimal server-based and peer-assisted delivery techniques. First, by analyzing the tradeoffs in the use of proactive and reactive repair techniques, a solution that optimizes the error recovery overhead is proposed. Further analysis of the proposed solution is performed, focusing also on the use of multicast error recovery techniques. By investigating the tradeoffs in the use of network-assisted and client-based channel change solutions, distributed content delivery frameworks are proposed to optimize the error recovery performance. Next, the bandwidth and latency tradeoffs associated with the use of concurrent delivery streams to support IPTV channel change are analyzed, and the results are used to develop a resource-optimal channel change framework that greatly improves the latency performance in the network. For both problems studied in this research, scalability concerns for the IPTV service are addressed by properly integrating peer-based delivery techniques into server-based solutions.
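
As a hedged illustration of the proactive/reactive repair trade-off mentioned above, the sketch below compares the expected overhead of always sending FEC redundancy against NACK-driven retransmissions under an independent-loss assumption and picks the cheaper scheme. The model is deliberately simplistic (it ignores repair latency, which matters greatly for IPTV) and is not the optimization developed in the thesis.

```python
# Simplistic expected-overhead comparison between proactive FEC and reactive
# retransmission for one block of k packets under independent loss (probability p).
# Illustrative model only, not the thesis's optimization.

def fec_overhead(k: int, r: int) -> float:
    """Proactive: r repair packets are always sent per k-packet block."""
    return r / k

def retransmission_overhead(k: int, p: float) -> float:
    """Reactive: each lost packet is retransmitted; retransmissions can be lost too."""
    expected_tx_per_packet = 1.0 / (1.0 - p)   # geometric number of attempts
    return expected_tx_per_packet - 1.0        # extra transmissions per source packet

def cheaper_scheme(k: int, r: int, p: float) -> str:
    return "FEC" if fec_overhead(k, r) <= retransmission_overhead(k, p) else "retransmission"

if __name__ == "__main__":
    # At low loss rates reactive repair wins; as loss grows, proactive FEC wins.
    for p in (0.001, 0.01, 0.05, 0.1):
        print(p, cheaper_scheme(k=100, r=5, p=p))
```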
36

Resource allocation in cloud and Content Delivery Network (CDN) / Allocation des ressources dans le cloud et les réseaux de diffusion de contenu

Ahvar, Shohreh 10 July 2018 (has links)
L'objectif de cette thèse est de présenter de nouveaux algorithmes de répartition des ressources sous la forme de machines virtuelles (VM) et de fonctions de réseau virtuel (VNF) dans les clouds et les réseaux de diffusion de contenu (CDN). La thèse comprend deux principales parties : la première se concentre sur la rentabilité des clouds distribués, et développe ensuite les raisons d'optimiser les coûts ainsi que les émissions de carbone. Cette partie comprend quatre contributions. La première contribution est une étude de l'état de l'art sur la répartition des coûts et des émissions de carbone dans les environnements de clouds distribués. La deuxième contribution propose une méthode d'allocation des ressources, appelée NACER, pour les clouds distribués. La troisième contribution présente une méthode de placement de VM efficace en termes de coûts et de carbone (appelée CACEV) pour les clouds distribués verts. Pour obtenir une meilleure performance, la quatrième contribution propose une méthode dynamique de placement de VM (D-CACEV) pour les clouds distribués. La deuxième partie propose des algorithmes de placement de VNF dans les clouds et les CDN pour optimiser les coûts. Cette partie comprend cinq contributions. Une étude de l'état de l'art sur les solutions proposées est le but de la première contribution. La deuxième contribution propose une méthode d'allocation des ressources, appelée CCVP, pour le provisionnement de services réseau dans les clouds et les réseaux d'ISP. La troisième contribution implémente le résultat de l'algorithme CCVP dans une plateforme réelle. La quatrième contribution considère l'effet de la permutation de VNF dans les chaînes de services et la cinquième contribution explique le placement de VNF pour les services à valeur ajoutée dans les CDN. / High energy costs and carbon emissions are two significant problems in the distributed computing domain, including distributed clouds and Content Delivery Networks (CDNs). Resource allocation methods (e.g., in the form of Virtual Machine (VM) or Virtual Network Function (VNF) placement algorithms) have a direct effect on cost, carbon emission and Quality of Service (QoS). This thesis includes three related parts. First, it targets the problem of resource allocation (i.e., in the form of network-aware VM placement algorithms) for distributed clouds and proposes cost- and carbon-emission-efficient resource allocation algorithms for green distributed clouds. Due to the similarity of the network-aware VM placement problem in distributed clouds to the VNF placement problem, the second part of the thesis, building on the first, proposes a new cost-efficient resource allocation algorithm (i.e., VNF placement) for network service provisioning in data centers and Internet Service Provider (ISP) networks. Finally, the last part of the thesis presents new cost-efficient resource allocation algorithms (i.e., VNF placement) for value-added service provisioning in NFV-based CDNs.
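
A minimal sketch of the kind of cost- and carbon-aware placement the thesis studies: greedily assign each VM to the data center minimizing a weighted sum of price and carbon intensity, subject to capacity. The data-center attributes, the weight alpha, and the greedy rule are illustrative assumptions only; the CACEV and D-CACEV algorithms themselves are more elaborate.

```python
# Greedy cost- and carbon-aware VM placement sketch (illustration only,
# not the CACEV/D-CACEV algorithms from the thesis).

def place_vms(vm_demands, datacenters, alpha=0.5):
    """Assign each VM (CPU demand) to the DC minimizing alpha*cost + (1-alpha)*carbon."""
    placement = {}
    free = {name: dc["capacity"] for name, dc in datacenters.items()}
    for vm, demand in vm_demands.items():
        candidates = [n for n in datacenters if free[n] >= demand]
        if not candidates:
            raise RuntimeError(f"no capacity left for {vm}")
        chosen = min(
            candidates,
            key=lambda n: alpha * datacenters[n]["cost"] + (1 - alpha) * datacenters[n]["carbon"],
        )
        placement[vm] = chosen
        free[chosen] -= demand
    return placement

if __name__ == "__main__":
    dcs = {
        "dc-hydro": {"capacity": 8, "cost": 1.2, "carbon": 0.1},   # green but pricier
        "dc-coal":  {"capacity": 16, "cost": 0.8, "carbon": 0.9},
        "dc-mixed": {"capacity": 8, "cost": 1.0, "carbon": 0.5},
    }
    vms = {"vm1": 4, "vm2": 4, "vm3": 4, "vm4": 4}
    # With alpha=0.3 the carbon term dominates, so green sites fill up first.
    print(place_vms(vms, dcs, alpha=0.3))
```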
37

SUPPORTING DATA CENTER AND INTERNET VIDEO APPLICATIONS WITH STRINGENT PERFORMANCE NEEDS: MEASUREMENTS AND DESIGN

Ehab Mohammad Ghabashneh (18257911) 28 March 2024 (has links)
<p dir="ltr">Ensuring a high quality of experience for Internet applications is challenging owing to the significant variability (e.g., of traffic patterns) inherent to both cloud data-center networks and wide area networks. This thesis focuses on optimizing application performance by both conducting measurements to characterize traffic variability, and designing applications that can perform well in the face of variability. On the data center side, a key aspect that impacts performance is traffic burstiness at fine granular time scales. Yet, little is know about traffic burstiness and how it impacts application loss. On the wide area side, we focus on video applications as a major traffic driver. While optimizing traditional videos traffic remains a challenge, new forms of video such as 360◦ introduce additional challenges such as respon- siveness in addition to the bandwidth uncertainty challenge. In this thesis, we make three contributions.</p><p dir="ltr"><b>First</b>, for data center networks, we present Millisampler, a lightweight network traffic char- acterization tool for continual monitoring which operates at fine configurable time scales, and deployed across all servers in a large real-world data center networks. Millisampler takes a host-centric perspective to characterize traffic across all servers within a data center rack at the same time. Next, we present data-center-scale joint analysis of burstiness, contention, and loss. Our results show (i) bursts are likely to encounter contention; (ii) contention varies significantly over short timescales; and (iii) higher contention need not lead to more loss, and the interplay with workload and burst properties matters.</p><p dir="ltr"><b>Second</b>, we consider challenges with traditional video in wide area networks. We take a step towards understanding the interplay between Content-Delivery-Networks (CDNs), and video performance through end-to-end measurements. Our results show that (i) video traffic in a session can be sourced from multiple CDN layers, and (ii) throughput can vary signifi- cantly based on the traffic source. Next we evaluate the potential benefits of exposing CDN information to the client Adaptive-Bit-Rate (ABR) algorithm. Emulation experiments show the approach has the potential to reduce prediction inaccuracies, and enhance video quality of experience (QoE).</p><p dir="ltr"><b>Third</b>, for 360◦ videos, we argue for a new streaming model which is explicitly designed for continuous, rather than stalling, playback to preserve interactivity. Next, we propose Dragonfly, a new 360° system that leverages the additional degrees of freedom provided by this design point. Dragonfly proactively skips tiles (i.e., spatial segment of the video) using a model that defines an overall utility function that captures factors relevant to user experience. We conduct a user study which shows that majority of interactivity feedback indicating Dragonfly being highly reactive, while the majority of state-of-the-art’s feedback indicates the systems are slow to react. Further, extensive emulations show Dragonfly improves the image quality significantly without stalling playback.</p>
38

Amélioration de la qualité d'expérience vidéo en combinant streaming adaptif, caching réseau et multipath / Combining in-network caching, HTTP adaptive streaming and multipath to improve video quality of experience

Poliakov, Vitalii 11 December 2018 (has links)
Le trafic vidéo s'est considérablement accru et est prévu de doubler pour représenter 82% du trafic Internet d'ici 2021. Une telle croissance surcharge les fournisseurs de services Internet (ISP), nuisant à la Qualité d'Expérience (QoE) perçue par les utilisateurs. Cette thèse vise à améliorer la QoE des utilisateurs de streaming vidéo sans hypothèse de changement d'infrastructure physique des opérateurs. Pour cela, nous combinons les technologies de caching réseau, de streaming HTTP adaptatif (HAS), et de transport multipath. Nous explorons d'abord l'interaction entre HAS et caching, pour montrer que les algorithmes d'adaptation de qualité vidéo ont besoin de savoir qu'il y a un cache et ce qui y est stocké, et proposons des algorithmes bénéficiant de cette connaissance. Concluant sur la difficulté d'obtenir la connaissance de l'état du cache, nous étudions ensuite un système de distribution vidéo à large échelle, où les caches sont représentés par un réseau de distribution du contenu (CDN). Un CDN déploie des caches à l'intérieur des réseaux des ISP, et dispose de ses propres serveurs externes. L'originalité du problème vient de l'hypothèse que nous faisons que l'utilisateur est simultanément connecté à 2 ISP. Ceci lui permet d'accéder en multipath aux serveurs externes aux ISP (pouvant ainsi accroître le débit mais chargeant plus les ISP), ou streamer le contenu depuis un cache plus proche mais avec un seul chemin. Ce désaccord entre les objectifs du CDN et de l'ISP conduit à des performances sous-optimales. Nous développons un schéma de collaboration entre ISP et CDN qui permet de nous rapprocher de l'optimal dans certains cas, et discutons l'implémentation pratique. / Video traffic volume has grown considerably in recent years and is forecast to reach 82% of total Internet traffic by 2021, doubling its net volume compared to today. Such growth overloads Internet Service Providers' (ISPs') networks, which negatively impacts users' Quality of Experience (QoE). This thesis attempts to tackle the problem of improving users' video QoE without relying on network upgrades. For this, we have chosen to combine technologies such as in-network caching, HTTP Adaptive Streaming (HAS), and multipath data transport. We start by exploring the interaction between HAS and caching; we confirm the need for cache-awareness in quality adaptation algorithms and propose such an extension to a state-of-the-art optimisation-based algorithm. Concluding on the difficulty of achieving cache-awareness, we take a step back to study a video delivery system on a large scale, where in-network caches are represented by Content Delivery Networks (CDNs). CDNs deploy caches inside ISPs and also operate their own video servers outside the ISPs' networks. As a novelty, we consider users to have simultaneous connectivity to several ISP networks. This allows video clients either to access the outside servers over multiple paths with aggregate bandwidth (which may increase their QoE but also brings more traffic into the ISPs), or to stream their content from a closer cache through a single path (bringing less traffic into the ISP). This disagreement between ISP and CDN objectives leads to suboptimal system performance. In response, we develop a collaboration scheme between the two actors whose performance can approach the optimal boundary for certain settings, and discuss its practical implementation.
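
The cache-aware adaptation idea can be sketched as a simple rate picker: estimate each representation's download time from the throughput of wherever it would actually be served (edge cache vs. origin) and choose the highest bitrate that keeps the playback buffer safe. The throughput figures, safety margin, and the assumption that the client knows what is cached are all illustrative; the thesis extends an optimisation-based algorithm rather than using this heuristic.

```python
# Cache-aware bitrate selection sketch for HTTP adaptive streaming.
# Illustrative only: assumes the client somehow knows which representations are
# cached and the typical cache/origin throughput, which is the hard part in practice.

def pick_bitrate(bitrates_kbps, cached, cache_tput_kbps, origin_tput_kbps,
                 segment_s=4.0, buffer_s=12.0, safety=0.8):
    """Return the highest bitrate whose download keeps the buffer from draining."""
    best = min(bitrates_kbps)                        # always have a fallback
    for rate in sorted(bitrates_kbps):
        tput = cache_tput_kbps if rate in cached else origin_tput_kbps
        download_s = rate * segment_s / tput         # time to fetch one segment
        if download_s <= safety * min(segment_s, buffer_s):
            best = rate
    return best

if __name__ == "__main__":
    ladder = [500, 1500, 3000, 6000]                 # kbps representations
    cached = {3000}                                  # only the 3 Mbps copy is at the edge
    print(pick_bitrate(ladder, cached, cache_tput_kbps=20000, origin_tput_kbps=3000))
```

In this toy run the client picks the cached 3000 kbps representation; without cache knowledge it would have to assume origin throughput for every representation and settle for 1500 kbps.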
39

Towards improvements in resource management for content delivery networks

RODRIGUES, Moisés Bezerra Estrela 03 March 2016 (has links)
During the last decades, the world wide web went from a way to connect a handful of nodes to the means by which people cooperate in search of knowledge, social interaction, and entertainment. Furthermore, our homes and workstations are not the only places where we are connected; the mobile broadband market is well established and is changing the way we interact with the web. According to Cisco, global network traffic will be three times higher in 2018 than it was in 2013. Real-time entertainment has been and will remain an important part of this growth. However, the Internet was not designed to handle such demand and, therefore, there is a need for new technologies to overcome those challenges. Content Delivery Networks (CDNs) prove to be an alternative for overcoming those challenges. The basic concept is to distribute geographically scattered replica servers, keeping content close to end users. Following CDNs' popularity, an increasing number of CDNs, most of them extremely localized, began to be deployed. Furthermore, Cloud Computing emerged, making software and hardware accessible as resources through well-defined interfaces. Using Cloud services, such as distributed IaaS, one can deploy complex CDNs. Although CDNs are the best technology for scaling content distribution, there are some scenarios where they may perform poorly, such as flash crowd events. Therefore, we need to study content delivery techniques that efficiently accompany the ever-increasing demand for content while contemplating new possibilities, such as the growing number of smaller localized CDNs and Cloud Computing. Examining these issues, this work presents strategies towards improvements in Content Delivery Networks (CDNs). We do so by proposing and evaluating algorithms, models and a prototype demonstrating possible uses of such new technologies to improve CDN resource management. We present P2PCDNSim, a comprehensive CDN simulator designed to assist researchers in the process of planning and evaluating new strategies. Furthermore, we propose a new dynamic Replica Placement Algorithm (RPA), based on the count of data flows through network nodes, that maintains similar Quality of Experience (QoE) while decreasing cross traffic during flash crowd events. Also, we propose an SDN-based solution to improve replica placement flexibility in the mobile backhaul. Our experimental results show that the delay introduced by the developed module is less than 5 ms for 99% of the packets, which is negligible in today's LTE networks, and the slight negative impact on streaming rate selection is easily outweighed by the increased flexibility. / Durante a última década, a rede mundial de computadores evoluiu de um meio de conexão para um pequeno grupo de nós para o meio pelo qual pessoas obtêm conhecimento, interação social e entretenimento. Além disso, nossas casas e estações de trabalho não são nossos únicos pontos de acesso à rede.
De acordo com a Cisco, o tráfego global da rede em 2018 será três vezes maior do que era em 2013. Entretenimento em tempo real tem sido e continuará sendo uma parte importante nesse crescimento. No entanto, a rede não foi projetada para lidar com essa demanda, portanto, existe a necessidade de novas tecnologias para superar tais desafios. Content Delivery Networks (CDN) se mostram como uma boa alternativa para superar esses desafios. Seu conceito básico é distribuir servidores de réplica geograficamente, mantendo assim o conteúdo próximo aos usuários. Seguindo sua popularidade, um número crescente de CDNs, em sua maioria locais, começaram a ser implementadas. Além disso, computação em nuvem surgiu, tornando software e hardware recursos acessíveis através de interfaces bem definidas. Os serviços na nuvem, tais como Infrastructure as a Service (IaaS) distribuídos, tornam possível a implementação de CDNs complexas. Apesar de ser a melhor tecnologia para entrega de conteúdo em termos de escalabilidade, existem cenários que ainda desafiam as CDNs, como eventos de flash crowd. Portanto, precisamos estudar estratégias de entrega de conteúdo para acompanhar de maneira eficiente o constante crescimento na necessidade por conteúdo, aproveitando também as novas possibilidade como, o crescimento de CDNs localizadas e popularização da computação em nuvem. Examinando os problemas levantados, essa tese apresenta estratégias no sentido de melhorar Content Delivery Networks (CDN). Fazemos isso propondo e avaliando algoritmos, modelos e um protótipo demonstrando possíveis usos de tais tecnologias para melhorar o gerenciamento de recursos das CDNs. Apresentamos o P2PCDNSim, um simulador de CDNs planejado para auxiliar pesquisadores no processo de planejamento e avaliação de novas estratégias. Além disso, propomos uma nova estratégia de posicionamento de réplicas dinâmica, baseada na contagem de fluxos de dados passando pelos nós, que mantém uma Quality of Experience (QoE) similar enquanto diminui tráfego entre Autonomous System (AS). Ademais, propomos uma solução baseada em Software Defined Networks (SDN) que aumenta a flexibilidade de posicionamento de servidores réplica dentro do backhaul móvel. Nossos resultados experimentais mostram que o atraso introduzido pelo nosso módulo é menor que 5ms em 99% dos pacotes transmitidos, atraso mínimo nas redes Long-Term Evolution (LTE) atuais.
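
A rough sketch of the flow-counting placement idea: route each active client-to-origin flow along a shortest path, count how many flows traverse each intermediate node, and place the k replicas at the most-traversed nodes. The toy topology, BFS routing, and top-k rule are illustrative assumptions, not the exact replica placement algorithm proposed in the thesis.

```python
# Flow-count based replica placement sketch (illustration, not the thesis's RPA).
# Count how many client->origin flows traverse each node on shortest (BFS) paths
# and place replicas at the k busiest nodes.
from collections import deque, Counter

def shortest_path(adj, src, dst):
    """BFS shortest path in an unweighted graph; returns a list of nodes or None."""
    prev, seen, q = {}, {src}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = [u]
            while u != src:
                u = prev[u]
                path.append(u)
            return path[::-1]
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                prev[v] = u
                q.append(v)
    return None

def place_replicas(adj, flows, k):
    counts = Counter()
    for client, origin in flows:
        path = shortest_path(adj, client, origin) or []
        counts.update(path[1:-1])            # count intermediate nodes only
    return [node for node, _ in counts.most_common(k)]

if __name__ == "__main__":
    adj = {
        "c1": ["r1"], "c2": ["r1"], "c3": ["r2"],
        "r1": ["c1", "c2", "core"], "r2": ["c3", "core"],
        "core": ["r1", "r2", "origin"], "origin": ["core"],
    }
    flows = [("c1", "origin"), ("c2", "origin"), ("c3", "origin")]
    print(place_replicas(adj, flows, k=1))    # -> ['core'] in this toy topology
```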
40

Incentivizing user participation in cooperative content delivery for wireless networks

Barua, B. (Bidushi) 04 May 2018 (has links)
Abstract The aim of this thesis is to propose an array of novel cooperative content delivery (CCD) methods and related incentive mechanisms for future fifth-generation (5G) and beyond networks. CCD using multiple air interfaces is a powerful solution to mitigate the problem of congestion in wireless networks, in which the available multiple air interfaces on smart devices are utilized intelligently to distribute data content among a group of users that are in the vicinity of one another. The requirements for higher capacity, reliability, and energy efficiency in the 5G networks have warranted the development of methods focusing on CCD. Moreover, critical to the efficiency of a CCD process are incentive mechanisms to induce cooperation among the mobile users engaged in CCD. The first part of the thesis studies an ideal condition of reliable and error-free distribution of content using cellular and short-range links. The main contribution is to introduce different device selection CCD methods that take into account only the link quality of the devices’ primary (cellular) interfaces. The proposed methods provide frequency carrier savings for the operator while allowing users to enjoy higher downlink rates. The second part of the thesis studies a more realistic CCD situation where users with low data rate wireless links can be a bottleneck in terms of CCD performance. The main contribution is to propose a novel device selection CCD method that considers the link quality of both primary (cellular) and secondary (short-range) interfaces of the devices. Additionally, a carrier aggregation-based incentive mechanism for the proposed method is introduced to address the challenge of selfish deviating users. The proposed mechanism maximizes individual and network payoffs, and is an equilibrium against unilateral selfish deviations. The third part of the thesis addresses the adverse selection problem in CCD scenarios. The operator is assumed to have incomplete information about the willingness of the users to participate in CCD. The main contribution is to introduce contract-based methods through which the operator could motivate users to reveal their true willingness towards participation. The proposed methods incentivize users according to their willingness and improve system performance in terms of the utility of the operator and the users. / Tiivistelmä Tämän väitöskirjan tavoitteena on kehittää menetelmiä yhteistyössä tapahtuvaan sisällön jakamiseen (cooperative content delivery, CCD) sekä siihen liittyviä kannustinmekanismeja viidennen sukupolven (5G) ja sen jälkeisille matkaviestinverkoille. CCD:n käyttö hyödyntämällä älylaitteessa olevia useita ilmarajapintoja on tehokas ratkaisu välttää langattomien verkkojen ruuhkautumista. CCD-menetelmissä laiteen ilmarajapintoja käytetään älykkäästi datan jakamiseen käyttäjäryhmälle, kun käyttäjät ovat lähellä toisiaan. 5G-verkkojen vaatimukset korkeammalle kapasiteetille, luotettavuudelle ja energiatehokkuudelle ovat motivoineet CCD-menetelmien kehitystyötä. Erityisen tärkeää CCD-menetelmien tehokkuudelle on kannustinmekanismien kehittäminen mahdollistamaan yhteistyö mobiilikäyttäjien välillä. Väitöskirjatyön ensimmäinen osuus käsittelee ideaalista tilannetta luotettavalle ja virheettömälle sisällön jakamiselle hyödyntämällä solukkoverkkoa ja lyhyen kantaman linkkejä. Tässä osuudessa päätuloksena on kehitetty käyttäjien valinnalle menetelmiä, jotka huomioivat linkin laadun solukkoverkon ilmarajapinnassa. 
Ehdotetut menetelmät tuovat operaattorille säästöjä taajuusresurssien käytön osalta ja käyttäjät saavuttavat korkeampia laskevan siirtotien datanopeuksia. Työn toinen osuus tutkii todenmukaisempaa CCD-tilannetta, jossa alhaisen datanopeuden linkkien käyttäjät voivat olla pullonkaula CCD:n suorituskyvylle. Päätulos tässä on uusi käyttäjien valintamenetelmä, joka ottaa huomioon linkkien laadun sekä solukkoverkossa että lyhyen kantaman linkeissä. Lisäksi esitellään eri taajuuksien yhdistämistä hyödyntävä kannustinmenetelmä, joka ottaa huomioon itsekkäiden käyttäjien aiheuttamat ongelmat. Ehdotettu mekanismi maksimoi yksittäisen käyttäjän ja verkon hyödyt ja saavuttaa tasapainotilan käyttäjien yksipuolista itsekkyyttä vastaan. Väitöskirjan kolmannessa osuudessa tutkitaan haitallisen valikoitumisen mahdollisuutta CCD:ssä. Operaattorilla oletetaan olevan epätäydellistä tietoa käyttäjien halukkuudesta osallistua yhteistyöhön CCD:ssä. Tämän osuuden päätulos on esitellä sopimuksiin perustuvia kannustinmenetelmiä, joiden avulla operaattori voi motivoida käyttäjiä paljastamaan heidän todellinen tahtotilansa osallistua yhteistyöhön. Ehdotetut menetelmä kannustavat käyttäjiä heidän todellisen tahtotilan perusteella ja parantavat järjestelmän suorituskykyä operaattorin ja käyttäjien saavuttamien hyötyjen osalta.
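
To illustrate the device-selection theme of the thesis, the sketch below scores each candidate device by the weaker of its cellular and short-range links (the bottleneck when relaying content to the group) and picks the strongest devices as receivers/relays. The scoring rule and the example figures are invented for illustration and are not the thesis's selection methods or incentive mechanisms.

```python
# Device selection sketch for cooperative content delivery (illustration only,
# not the selection methods or incentive mechanisms proposed in the thesis).

def select_relays(devices, num_relays):
    """Pick devices whose bottleneck link (cellular vs. short-range) is strongest.

    devices: dict name -> (cellular_mbps, short_range_mbps)
    """
    def bottleneck(name):
        cellular, short_range = devices[name]
        return min(cellular, short_range)     # the weaker link limits relaying
    ranked = sorted(devices, key=bottleneck, reverse=True)
    return ranked[:num_relays]

if __name__ == "__main__":
    group = {
        "phone-A": (40.0, 200.0),   # strong cellular, strong short-range link
        "phone-B": (5.0, 250.0),    # weak cellular: poor downloader
        "phone-C": (60.0, 8.0),     # strong cellular but weak short-range link
        "phone-D": (30.0, 150.0),
    }
    print(select_relays(group, num_relays=2))   # -> ['phone-A', 'phone-D']
```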
