About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Signal Processing Aspects of Cell-Free Massive MIMO

Interdonato, Giovanni January 2018 (has links)
The fifth generation of mobile communication systems (5G) promises unprecedented levels of connectivity and quality of service (QoS) to satisfy the incessant growth in the number of mobile smart devices and the huge increase in data demand. One of the primary ways this will be accomplished is through network densification, namely increasing the number of antennas per site and deploying smaller and smaller cells. Massive MIMO, where MIMO stands for multiple-input multiple-output, is widely expected to be a key enabler of 5G. This technology leverages aggressive spatial multiplexing, enabled by a large number of transmit/receive antennas, to multiply the capacity of a wireless channel. A massive MIMO base station (BS) is equipped with a large number of antennas, much larger than the number of active users. The users are coherently served by all the antennas, in the same time-frequency resources, but separated in the spatial domain by receiving very directive signals. By supporting such highly spatially-focused transmission (precoding), massive MIMO provides higher spectral and energy efficiency, and reduces inter-cell interference compared to existing mobile systems. Inter-cell interference is, however, becoming the major bottleneck as networks densify. It cannot be removed as long as we rely on a network-centric implementation, since the concept of inter-cell interference is inherent to the cellular paradigm. Cell-free massive MIMO refers to a massive MIMO system where the BS antennas, herein referred to as access points (APs), are geographically spread out. The APs are connected, through a fronthaul network, to a central processing unit (CPU) which is responsible for coordinating the coherent joint transmission. Such a distributed architecture provides additional macro-diversity, and the co-processing at multiple APs entirely suppresses the inter-cell interference.
Each user is surrounded by serving APs and experiences no cell boundaries. This user-centric approach, combined with the system scalability that characterizes the massive MIMO design, constitutes a paradigm shift compared to conventional centralized and distributed wireless communication systems. On the other hand, such a distributed system requires higher-capacity back/front-haul connections, and the signal co-processing increases the signaling overhead. In this thesis, we focus on some signal processing aspects of cell-free massive MIMO. More specifically, we first investigate whether downlink channel estimation via downlink pilots brings gains to cell-free massive MIMO, or whether statistical channel state information (CSI) at the users is enough to reliably perform data decoding, as in conventional co-located massive MIMO. Allocating downlink pilots is costly resource-wise, so we also propose resource-saving strategies for downlink pilot assignment. Second, we study fully distributed and scalable precoding schemes that outperform cell-free massive MIMO in its canonical form, which consists of single-antenna APs implementing conjugate beamforming (also known as maximum ratio transmission).
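As a concrete picture of the canonical form mentioned above, the sketch below implements conjugate beamforming (maximum ratio transmission) for a toy cell-free setup in numpy. The array sizes, the i.i.d. Rayleigh channel model, and the per-AP power normalization are illustrative assumptions, not details taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 16, 4                       # APs and users (illustrative sizes)
# i.i.d. Rayleigh channel between every AP and every user
G = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

# Conjugate beamforming (MRT): AP m weights user k's symbol by
# conj(g_mk); here each AP's weights are normalized to unit power.
W = np.conj(G)
W /= np.linalg.norm(W, axis=1, keepdims=True)

symbols = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), K) / np.sqrt(2)
tx = W @ symbols                   # signal radiated by each AP
rx = G.T @ tx                      # noiseless signal at each user

# Effective channel: the diagonal (own-signal) entries are real and
# positive, i.e. the contributions of all APs add up coherently at the
# intended user, without the APs exchanging any instantaneous CSI.
H_eff = G.T @ W
print(np.all(np.diag(H_eff).real > 0))   # True
```

Each AP only needs its own channel estimates to form its weights, which is what makes the scheme fully distributed.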
102

Block Diagonalization Based Beamforming

Patil, Darshan January 2017 (has links)
With increasing mobile penetration, multi-user multi-antenna wireless communication systems are needed to ensure higher per-user data rates along with higher system capacity, by exploiting the excess degrees of freedom that additional antennas at the receiver provide for spatial multiplexing. The rising popularity of "Gigabit-LTE" and "Massive-MIMO" or "FD-MIMO" illustrates this demand for high data rates, especially in the forward link. In this thesis we study the MU-MIMO communication setup and attempt to solve the problem of system sum-rate maximization in the downlink data transmission (also known as the forward link) under limited availability of transmit power at the base station. In contrast to the uplink, in the downlink every user in the system is required to cancel interference due to signals intended for other co-users. As mobile terminals have strict restrictions on power availability and physical dimensions, their processing capabilities are extremely limited (relative to the base station). Therefore, we study solutions from the literature in which most of the interference cancellation can instead be performed by the base station (precoding). While doing so we maximize the sum-rate and also respect the restrictions on the total transmit power available at the base station. In this thesis, we also study and evaluate different conventional linear precoding schemes and how they relate to the optimal structure of the solution that maximizes the effective Signal to Interference plus Noise Ratio (SINR) at every receiver output. We also study one of the suboptimal precoding solutions, known as block diagonalization (BD), applicable in the case where a receiver has multiple receive antennas, and compare their performance. Finally, we notice that in spite of the promising results in terms of system sum-rate performance, these schemes are not deployed in practice. The reason is that classic BD schemes are computationally heavy.
In this thesis we attempt to reduce the complexity of BD schemes by exploiting channel coherence and using perturbation theory. We make use of OFDM technology and efficient linear algebra methods to update the beamforming weights incrementally rather than recomputing them from scratch, such that the overall complexity of the BD technique is reduced by at least an order of magnitude. The results are simulated using the exponential correlation channel model and the LTE 3D spatial channel model standardized by 3GPP. The simulated environment consists of a single-cell MU-MIMO system in a standardized urban macro environment with up to 100 transmit antennas at the BS and 2 receive antennas per user. We observe that with increasing spatial correlation and in high-SNR regions, BD outperforms the other precoding schemes discussed in this thesis, and the developed low-complexity BD precoding solution can be considered as an alternative in a more general framework with multiple antennas at the receiver.
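The core of block diagonalization, precoding each user in the null space of all the other users' channels so that inter-user interference vanishes, can be sketched with an SVD. The dimensions and the i.i.d. channel model below are illustrative assumptions, and no effort is made here to reduce complexity as the thesis does:

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, K, Nr = 8, 3, 2      # transmit antennas, users, rx antennas per user
H = [rng.standard_normal((Nr, Nt)) for _ in range(K)]   # per-user channels

def bd_precoders(H):
    """Per-user precoder basis lying in null(stack of other users' channels)."""
    F = []
    for k in range(len(H)):
        H_bar = np.vstack([H[j] for j in range(len(H)) if j != k])
        # Right singular vectors beyond rank(H_bar) span its null space.
        _, _, Vt = np.linalg.svd(H_bar)
        F.append(Vt[H_bar.shape[0]:].T)    # Nt x (Nt - rank) basis
    return F

F = bd_precoders(H)
# Inter-user interference is (numerically) zero: H_j @ F_k = 0 for j != k,
# so each user sees a clean block of the overall channel matrix.
leak = max(np.abs(H[j] @ F[k]).max() for j in range(K) for k in range(K) if j != k)
print(leak < 1e-10)   # True
```

The per-user SVDs above are exactly the cost that motivates the thesis's incremental weight updates: recomputing them from scratch on every coherence interval is what makes classic BD heavy.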
103

Real-Time Search in Large Networks and Clouds

Uddin, Misbah January 2013 (has links)
Networked systems, such as telecom networks and cloud infrastructures, hold and generate vast amounts of configuration and operational data, only a small portion of which is used today by management applications. The overall goal of this work is to make all this data available through a real-time search process named network search, where queries are invoked without giving the location or the format of the data, similar to web search. Such a capability will simplify many management applications and enable new classes of real-time management solutions. The fundamental problems in network search relate to search in a vast and dynamic information space and the fact that the information is distributed across a very large system. The thesis contains several contributions towards engineering a network search system. We present a weakly-structured information model, which enables representation of heterogeneous network data, a keyword-based search language, which supports location- and schema-oblivious search queries, and a distributed search mechanism, which is based on an echo protocol and supports a range of matching and ranking options. The search is performed in a peer-to-peer fashion in a network of search nodes. Each search node maintains a local real-time database of locally sensed configuration and operational information. Many of the concepts we developed for network search are based on results from the fields of information retrieval, web search, and very large databases. The key feature of our solution is that the search process and the computation of the query results are performed on local data inside the network or the cloud. We have built a prototype of the system on a cloud testbed and developed applications that use network search functionality.
The performance measurements suggest that it is feasible to engineer a network search system that processes queries at low latency and low overhead, and that can scale to a very large system on the order of 100,000 nodes.
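To make the notion of location- and schema-oblivious queries concrete, here is a deliberately naive sketch of keyword matching over weakly-structured records. The record fields and the matching rule are invented for illustration and are far simpler than the search language the thesis actually defines:

```python
# Each search node holds records as flat attribute maps; a query is a
# bag of keywords that must all appear somewhere in a record, whether
# as a key or as a value, with no schema or location given.

records = [
    {"type": "vm", "host": "server-12", "os": "linux", "cpu-load": "0.73"},
    {"type": "flow", "src": "10.0.0.5", "dst": "10.0.0.9", "proto": "tcp"},
    {"type": "vm", "host": "server-7", "os": "windows", "cpu-load": "0.11"},
]

def matches(record, keywords):
    # A record matches if every keyword equals some key or some value.
    tokens = set(record) | set(record.values())
    return all(kw in tokens for kw in keywords)

def search(records, query):
    return [r for r in records if matches(r, query.split())]

print(len(search(records, "vm linux")))   # 1
```

In the actual system each node would evaluate this kind of predicate against its local real-time database while an echo protocol aggregates the results across the peer-to-peer network.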
104

Real-Time Monitoring of Global Variables in Large-Scale Dynamic Systems

Wuhib, Fetahi Zebenigus January 2007 (has links)
Large-scale dynamic systems, such as the Internet, as well as emerging peer-to-peer networks and computational grids, require a high level of awareness of the system state in real-time for proper and reliable operation. A key challenge is to develop monitoring functions that are efficient, scalable, robust and controllable. The thesis addresses this challenge by focusing on engineering protocols for distributed monitoring of global state variables. The global variables are network-wide aggregates, computed from local device variables using aggregation functions such as SUM, MAX, AVERAGE, etc. Furthermore, it addresses the problem of detecting threshold crossing of such aggregates. The design goals for the protocols are efficiency, quality, scalability, robustness and controllability. The work presented in this thesis has resulted in two novel protocols: a gossip-based protocol for continuous monitoring of aggregates called G-GAP, and a tree-based protocol for detecting threshold crossings of aggregates called TCA-GAP. The protocols have been evaluated against the design goals through three complementary evaluation methods: theoretical analysis, simulation study and testbed implementation.
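The flavor of gossip-based aggregation can be conveyed with a synchronous push-sum sketch for the AVERAGE aggregate. This is a textbook illustration on a ring of four nodes, not the G-GAP protocol itself, which additionally handles churn and message loss:

```python
# Push-sum: every node keeps a (sum, weight) pair and in each round
# keeps half of it and pushes the other half to a neighbor. The ratio
# sum/weight at every node converges to the network-wide average of
# the local variables, with no central coordinator.

n = 4
values = [10.0, 20.0, 30.0, 40.0]     # local device variables
s, w = values[:], [1.0] * n           # running sums and weights

for _ in range(100):                  # gossip rounds
    ns, nw = [0.0] * n, [0.0] * n
    for i in range(n):
        ns[i] += s[i] / 2             # keep half ...
        nw[i] += w[i] / 2
        j = (i + 1) % n               # ... push half to ring neighbor
        ns[j] += s[i] / 2
        nw[j] += w[i] / 2
    s, w = ns, nw

estimates = [si / wi for si, wi in zip(s, w)]
print(round(estimates[0], 6))   # 25.0
```

Because the total (sum, weight) mass is conserved by every exchange, the protocol is self-correcting: the true average is encoded in the invariant, not in any single node's state.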
105

Design and evaluation of network processor systems and forwarding applications

Fu, Jing January 2006 (has links)
During recent years, both the Internet traffic and packet transmission rates have been growing rapidly, and new Internet services such as VPNs, QoS and IPTV have emerged. To meet increasing line-speed requirements and to support current and future Internet services, improvements and changes are needed in current routers, both with respect to hardware architectures and forwarding applications. High-speed routers are nowadays mainly based on application-specific integrated circuits (ASICs), which are custom made and not flexible enough to support diverse services. General-purpose processors offer flexibility, but have difficulty handling high data rates. A number of software IP-address lookup algorithms have therefore been developed to enable fast packet processing in general-purpose processors. Network processors have recently emerged to provide the performance of ASICs combined with the programmability of general-purpose processors. This thesis provides an evaluation of router design including both hardware architectures and software applications. The first part of the thesis contains an evaluation of various network processor system designs. We introduce a model for network processor systems which is used as a basis for a simulation tool. Thereafter, we study two ways to organize processing elements (PEs) inside a network processor to achieve parallelism: a pipelined and a pooled organization. The impact of using multiple threads inside a single PE is also studied. In addition, we study the queueing behavior and packet delays in such systems. The results show that parallelism is crucial to achieving high performance, but the pipelined and the pooled processing-element topologies achieve comparable performance. The detailed queueing behavior and packet delay results have been used to dimension queues, and can serve as guidelines for designing memory subsystems and queueing disciplines.
The second part of the thesis contains a performance evaluation of an IP-address lookup algorithm, the LC-trie. The study considers trie search depth, prefix vector access behavior, cache behavior, and packet lookup service time. For the packet lookup service time, the evaluation contains both experimental results and results obtained from a model. The results show that the LC-trie is an efficient route lookup algorithm for general-purpose processors, capable of performing 20 million packet lookups per second on a Pentium 4, 2.8 GHz computer, which corresponds to a 40 Gb/s link for average-sized packets. Furthermore, the results show the importance of the choice of packet traces when evaluating IP-address lookup algorithms: real-world and synthetically generated traces may have very different behaviors. The results presented in the thesis are obtained through studies of both hardware architectures and software applications. They could be used to guide the design of next-generation routers.
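The LC-trie applies level and path compression to a binary trie; the uncompressed idea, walking the destination address bit by bit while remembering the last next-hop seen, looks like this (the prefixes and next hops are invented for illustration):

```python
# Longest-prefix matching on a plain binary trie. The LC-trie that the
# thesis evaluates compresses this structure, but the lookup semantics
# are the same: the deepest stored prefix along the path wins.

routes = {            # prefix (as a bit string) -> next hop
    "": "default",
    "1010": "A",
    "101010": "B",
    "0110": "C",
}

def insert(trie, prefix, hop):
    node = trie
    for b in prefix:
        node = node.setdefault(b, {})
    node["hop"] = hop

def lookup(trie, addr_bits):
    node, best = trie, None
    for b in addr_bits:
        if "hop" in node:
            best = node["hop"]       # longest match seen so far
        if b not in node:
            return best
        node = node[b]
    return node.get("hop", best)

trie = {}
for p, h in routes.items():
    insert(trie, p, h)

print(lookup(trie, "10101011"))   # 'B' (101010 is the longest matching prefix)
```

Level compression replaces the first k levels of such a trie with a single 2^k-way array index, which is what brings the memory-access count per lookup down to the few accesses that make 20 million lookups per second possible.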
106

Access selection in multi-system architectures : cooperative and competitive contexts

Hultell, Johan January 2007 (has links)
Future wireless networks will be composed of multiple radio access technologies (RATs). To benefit from these, users must utilize the appropriate RAT and access points (APs). In this thesis we evaluate the efficiency of selection criteria that, in addition to path loss and system bandwidth, also consider load. The problem is studied for closed as well as open systems. In the former, both terminals and infrastructure are controlled by a single actor (e.g., a mobile operator), while the latter refers to situations where each terminal selfishly decides which AP it wants to use (as in a common marketplace). We divide the overall problem into the prioritization between available RATs and, within a RAT, between the APs. The results from our studies suggest that data users, in general, should be served by the RAT offering the highest peak data rate. As this can be estimated by terminals, the benefits from centralized RAT selection are limited. Within a subsystem, however, load-sensitive AP selection criteria can increase data rates. The highest gains are obtained when the subsystem is noise-limited, deployment is unplanned, and the relative difference in the number of users per AP is significant. Under these circumstances the maximum supported load can be increased by an order of magnitude. Decentralized AP selection, where greedy autonomous terminal-based agents are in charge of the selection, was also shown to give these gains as long as the agents accounted for load. We also developed a game-theoretic framework, where users competed for wireless resources by bidding in a proportionally fair divisible auction. The framework was applied to a scenario where revenue-seeking APs competed for traffic by selecting an appropriate price. Compared to when APs cooperated, modelled by the Nash bargaining solution, our results suggest that a competitive access market, where infrastructure is shared implicitly, generally offers users better service at a lower cost.
Although AP revenues are reduced, the reduction is relatively small and was shown to decrease with the concavity of demand. Lastly, we studied whether data services could be offered in a discontinuous high-capacity network by letting a terminal-based agent pre-fetch information that its user may request at some future time instant. This decouples the period in which the information is transferred from the time instant at which it is consumed. Our results show that above some critical AP density, considerably lower than that required for continuous coverage, such services start to perform well.
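The proportionally fair divisible auction mentioned above has a simple core: each bidder receives a share of the divisible resource proportional to its bid. A minimal sketch, with invented bids and capacity:

```python
# Proportionally fair divisible auction: the AP splits its capacity in
# proportion to the money bid by each terminal, so a user's rate is
# (own bid / sum of all bids) * capacity. Bids and capacity are
# illustrative numbers, not values from the thesis.

def allocate(bids, capacity):
    total = sum(bids.values())
    return {user: capacity * b / total for user, b in bids.items()}

bids = {"u1": 2.0, "u2": 1.0, "u3": 1.0}
shares = allocate(bids, capacity=8.0)     # e.g. Mb/s of AP capacity
print(shares["u1"])   # 4.0
```

The appeal of this mechanism in a competitive access market is that raising one's bid buys a larger share at the expense of everyone else, which is exactly the coupling the game-theoretic analysis exploits.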
107

Cost efficient provisioning of wireless access : infrastructure cost modeling and multi-operator resource sharing

Johansson, Klas January 2005 (has links)
108

Communication-Computation Efficient Federated Learning over Wireless Networks

Mahmoudi, Afsaneh January 2023 (has links)
With the introduction of the Internet of Things (IoT) and 5G cellular networks, edge computing will substantially alleviate the quality-of-service shortcomings of cloud computing. With the advancements in edge computing, machine learning (ML) has played a significant role in analyzing the data produced by IoT devices. These advancements have mainly enabled the proliferation of ML in distributed optimization algorithms. Such algorithms aim to improve training and testing performance for prediction and inference tasks, such as image classification. However, state-of-the-art ML algorithms demand massive communication and computation resources that are not readily available on wireless devices. Accordingly, there is a significant need to extend ML algorithms to wireless communication scenarios to cope with the resource limitations of the devices and the networks. Federated learning (FL) is one of the most prominent such algorithms, with data distributed across devices. FL reduces communication overhead by avoiding data exchange between wireless devices and the server. Instead, each wireless device executes some local computations and communicates the local parameters to the server using wireless communications. Accordingly, every communication iteration of FL incurs costs such as computation, latency, communication resource utilization, bandwidth, and energy. Since the devices' communication and computation resources are limited, resource shortages may prevent FL training from completing. The main goal of this thesis is to develop cost-efficient approaches that alleviate the resource constraints of devices in FL training. In the first chapter of the thesis, we give an overview of ML and discuss the relevant communication- and computation-efficient works for training FL models. Next, a comprehensive literature review of cost-efficient FL methods is conducted, and the limitations of the existing literature in this area are identified.
We then present the central focus of our research, which is a causal approach that eliminates the need for future FL information in the design of communication- and computation-efficient FL. Finally, we summarize the key contributions of each paper within the thesis. In the second chapter, the thesis presents the articles on which it is based in their original format of publication or submission. A multi-objective optimization problem, incorporating FL loss and iteration cost functions, is proposed in which communication between devices and the server is regulated by the slotted-ALOHA wireless protocol. The effect of the contention level in CSMA/CA on the causal solution of the proposed optimization is also investigated. Furthermore, the multi-objective optimization problem is extended to cover general scenarios in wireless communication, including convex and non-convex loss functions. Novel results are compared with well-known communication-efficient methods, such as lazily aggregated quantized gradients (LAQ), to further improve the communication efficiency of FL over wireless networks.
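The FL iteration pattern whose per-iteration costs the thesis studies, local computation on each device followed by parameter aggregation at the server, can be sketched as federated averaging on a toy least-squares problem. The model, data, step sizes, and iteration counts below are illustrative assumptions, not the thesis's experimental setup:

```python
import numpy as np

# One communication iteration of federated averaging: each device takes
# a few local gradient steps on its private data, then the server
# averages the local parameters. Only parameters cross the wireless
# link, never data; each such round incurs the communication and
# computation costs the thesis models.

rng = np.random.default_rng(7)
w_true = np.array([2.0, -1.0])
devices = []
for _ in range(5):                        # 5 devices with private data
    X = rng.standard_normal((50, 2))
    y = X @ w_true                        # noiseless linear targets
    devices.append((X, y))

w = np.zeros(2)                           # global model at the server
for _ in range(100):                      # communication iterations
    local_models = []
    for X, y in devices:
        wk = w.copy()
        for _ in range(5):                # local computation on-device
            grad = X.T @ (X @ wk - y) / len(y)
            wk -= 0.1 * grad
        local_models.append(wk)
    w = np.mean(local_models, axis=0)     # server aggregates parameters

print(np.allclose(w, w_true, atol=1e-2))  # True
```

Every pass through the outer loop is one "communication iteration"; trading more local steps for fewer such rounds is the basic communication/computation knob that cost-efficient FL designs turn.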
109

Distributed resource allocation in networked systems using decomposition techniques

Johansson, Björn January 2006 (has links)
The Internet and power distribution grids are examples of ubiquitous systems that are composed of subsystems that cooperate using a communication network. We loosely define such systems as networked systems. These systems are usually designed by trial and error. With this thesis, we aim to fill some of the many gaps in the diverse theory of networked systems. Therefore, we cast resource allocation in networked systems as optimization problems, and we investigate a versatile class of optimization problems. We then use decomposition methods to devise decentralized algorithms that solve these optimization problems. The thesis consists of four main contributions: First, we review decomposition methods that can be used to devise decentralized algorithms for solving the posed optimization problems. Second, we consider cross-layer optimization of communication networks. Network performance can be increased if the traditionally separated network layers are jointly optimized. We investigate the interplay between the data sending rates and the allocation of resources for the communication links. The communication networks we consider have links where the data transferring capacity can be controlled. Decomposition methods are applied to the design of fully distributed protocols for two wireless network technologies: networks with orthogonal channels and network-wide resource constraints, as well as wireless networks using spatial-reuse time division multiple access. Third, we consider the problem of designing a distributed control strategy such that a linear combination of the states of a number of vehicles coincides at a given time. The vehicles are described by linear difference equations and are subject to convex input constraints.
It is demonstrated how primal decomposition techniques and incremental subgradient methods allow us to find a solution in which each vehicle performs individual planning of its trajectory and exchanges critical information with neighbors only. We explore various communication, computation, and control structures. Fourth, we investigate the resource allocation problem for large-scale server clusters with quality-of-service objectives, in which key functions are decentralized. Specifically, the problem of selecting which services the servers should provide is posed as a discrete utility maximization problem. We develop an efficient centralized algorithm that solves this problem, and we propose three suboptimal schemes that operate with local information.
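The decomposition idea can be illustrated with a dual-decomposition sketch for a toy resource allocation problem: a price coordinates otherwise independent local solves. The utilities, step size, and iteration count are invented for illustration; this is one standard member of the family of methods the thesis reviews, not a method taken verbatim from it:

```python
# Dual decomposition for: maximize sum_i u_i * log(x_i)
# subject to sum_i x_i <= C. Given a price lam on the shared resource,
# each subsystem solves its own problem in closed form (x_i = u_i/lam),
# and a master updates the price by a subgradient step on the
# constraint violation. No entity needs global knowledge of the u_i.

C = 10.0
u = [1.0, 2.0, 2.0]          # private utility weights per subsystem
lam = 1.0                     # initial resource price
for _ in range(500):
    x = [ui / lam for ui in u]                    # independent local solves
    lam = max(1e-6, lam + 0.01 * (sum(x) - C))    # price (subgradient) update

# The optimum splits the resource as x_i = u_i * C / sum(u) = [2, 4, 4].
print([round(xi, 2) for xi in x])   # [2.0, 4.0, 4.0]
```

The same pattern, local optimization plus a lightweight coordination signal, underlies the cross-layer protocols, the vehicle trajectory planning, and the server cluster allocation studied in the thesis.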
110

Hybrid cellular-broadcasting infrastructure systems : radio resource management issues

Bria, Aurelian January 2006 (has links)
This thesis addresses the problem of low-cost multicast delivery of multimedia content in future mobile networks. The trend towards reusing existing cellular and broadcasting infrastructure for building new systems is critically examined with respect to the opportunities for low-cost service provision and scalable deployment of networks. The studies outline a significant potential of hybrid cellular-broadcasting infrastructure to deliver lower-cost mobile multimedia, compared to conventional telecom or broadcasting systems. Even with simple interworking techniques the achievable cost savings can be significant, at least under some specific settings. The work starts with a foresight study shaped around four scenarios of the future, and continues with the introduction of a high-level framework for radio resource management in Ambient Networks. Two approaches to the hybrid system architecture are considered. The first assumes different degrees of interworking between conventional cellular and broadcasting systems, in single- and multi-operator environments. The second is a broadcast-only system where cellular sites are used as synchronized, complementary transmitters for the broadcasting site. In the first approach, the key issue is multi-radio resource management, which is strongly affected by the degree of integration between the two networks. Two case studies deal with the problem of delivering a data item to a certain number of recipient users at the lowest cost. A flexible broadcasting air interface, which offers several transmission data rates that can be dynamically changed, is demonstrated to significantly increase cost efficiency under certain conditions. An interesting result is that real-time monitoring of the user reception conditions is not needed, at least when the multicast group is large. This indicates that a high degree of integration between cellular and broadcasting networks may not be generally justified by visible cost savings.
Scalability of the hybrid infrastructure deployment is the main topic of the second approach. For a DVB-H type of network, the numerical evaluations show that achieving economies of scale while increasing network capacity and coverage, by employing higher modulation and coding rates or installing new transmission sites, is difficult. Therefore, a technique based on application-layer forward error correction with Raptor coding is suggested for enabling a flexible trade-off between system capacity, perceived coverage and delay in the case of mobile users.
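Raptor codes themselves are beyond a short sketch, but the application-layer FEC principle they serve can be shown with a single XOR parity packet: the receiver recovers the content from a sufficiently large subset of packets rather than needing every specific one, which is what decouples perceived coverage from per-packet reliability. This toy example illustrates the principle only, not Raptor coding:

```python
from functools import reduce

# Application-layer FEC in miniature: send k source packets plus one
# repair packet (the XOR of all of them). Any single lost source packet
# can then be reconstructed from the rest, with no retransmission.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

source = [b"ABCD", b"EFGH", b"IJKL"]      # equal-length source packets
parity = reduce(xor, source)               # one repair packet

# Packet 1 is lost in transit; XOR of everything received recovers it.
received = {0: source[0], 2: source[2], "parity": parity}
recovered = xor(xor(received[0], received[2]), received["parity"])
print(recovered)   # b'EFGH'
```

A fountain code such as Raptor generalizes this: it produces a practically unlimited stream of repair packets, and any subset slightly larger than the source block suffices to decode, which is the flexibility the capacity/coverage/delay trade-off relies on.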
