61.
Opportunistic Content Distribution: Feasibility, Design and Performance Evaluation. Helgason, Ólafur. January 2010.
Originally, the main purpose of the wired Internet was to interconnect hosts on different networks so that they could communicate. Although this is still an important service, the Internet has evolved, and today its predominant application is to provide users with access to content. The same evolution is taking place in the wireless domain, driven to a large extent by advances in mobile platforms, which now commonly have advanced multimedia capabilities. Content is therefore both consumed and produced by users on the move, which calls for efficient dissemination of information to and from mobile devices. This thesis considers content-centric networking, particularly in the context of mobile wireless networks. The main focus is on opportunistic distribution of content, where mobile nodes exchange content items directly when they are within communication range. This opportunistic communication mode allows networking in the absence of fixed infrastructure and has further benefits in terms of scalability, network neutrality and locality. The contributions of this thesis lie in three areas. First, we study the feasibility of opportunistic content distribution among mobile nodes in urban areas using both analytic models and simulations. Our findings show that if nodes cooperate by sharing, even in a limited manner, content can spread efficiently in a number of common scenarios. Second, we present the design of PodNet: a middleware architecture for mobile peer-to-peer content distribution. On the Internet, PodNet uses single-source multicast to implement scalable and efficient delivery of published content to subscribers over the current, unchanged Internet architecture. In the wireless domain, PodNet uses a decentralized content-solicitation scheme that allows content to be distributed between mobile devices without requiring Internet connectivity or infrastructure support.
Key components of the design are the content structure, multicast distribution, the solicitation protocol, service discovery and an API based on the publish/subscribe paradigm. Third, we perform a thorough system evaluation to dimension important system parameters and to assess system performance. The evaluation uses both experimental connectivity traces and a detailed simulator implementation with a realistic mobility model of pedestrians in an urban area.
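The feasibility results above concern epidemic-style spreading of content through pairwise opportunistic contacts. A minimal sketch of such a process (the contact model, parameters and sharing probability below are toy assumptions for illustration, not the analytic models of the thesis):

```python
import random

def simulate_spread(n_nodes=100, contacts_per_step=50, share_prob=0.5,
                    steps=200, seed=1):
    """Toy model of opportunistic content spreading: in each step a number
    of random node pairs meet, and if exactly one node of a pair holds the
    content it is forwarded with probability share_prob. Returns the
    fraction of nodes holding the content after each step."""
    rng = random.Random(seed)
    has_content = [False] * n_nodes
    has_content[0] = True          # a single initial seeder
    coverage = []
    for _ in range(steps):
        for _ in range(contacts_per_step):
            a, b = rng.sample(range(n_nodes), 2)
            if has_content[a] != has_content[b] and rng.random() < share_prob:
                has_content[a] = has_content[b] = True
        coverage.append(sum(has_content) / n_nodes)
    return coverage
```

Even with a sharing probability well below one, coverage in this kind of model typically saturates quickly, which is the qualitative effect the feasibility study quantifies.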
62.
Design of Reliable Communication Solutions for Wireless Sensor Networks: Managing Interference in Unlicensed Bands. Stabellini, Luca. January 2009.
Recent surveys conducted in the context of industrial automation have highlighted that reliability concerns are today one of the major barriers to the diffusion of wireless communications for sensing and control applications: this limits the potential of wireless sensor networks and slows down the adoption of this new technology. Overcoming these limitations requires creating awareness of the causes of unreliability and of the possible solutions. In this respect, the main factor responsible for the perceived unreliability is radio interference: the low-power communications of sensor nodes are very sensitive to bad channel conditions and can easily be corrupted by the transmissions of other co-located devices. In this thesis we investigate different techniques that can be exploited to avoid interference or mitigate its effects. We first consider interference avoidance through dynamic spectrum access: more specifically, we focus on the idea of channel surfing and design algorithms that allow sensor nodes to identify interfered channels, discover their neighbors and maintain a connected topology in multi-channel environments. Our investigation shows that detecting, and thus avoiding, interference is a feasible task that can be performed by complexity- and power-constrained devices. In the context of spectrum sharing, we further consider the case of networked estimation and aim at quantifying the effects of intra-network interference, induced by contention-based medium access, on the performance of an estimation system. We show that by choosing their transmission probability appropriately, sensors belonging to a networked control system can minimize the average distortion of state estimates. In the second part of this thesis we focus on frequency-hopping techniques and propose a new adaptive hopping algorithm.
This implements a new approach to frequency hopping: rather than removing bad channels from the adopted hopset, our algorithm uses all the available frequencies, but with probabilities that depend on the experienced channel conditions. Our performance evaluation shows that this approach outperforms traditional frequency-hopping schemes as well as the adaptive implementation included in the IEEE 802.15.1 radio standard, leading to a lower packet error rate. Finally, we consider the problem of sensor network reprogramming and propose a way of engineering a coding solution, based on fountain codes, that is suitable for this challenging task. Using an original genetic approach we optimize the degree distribution of the codes so as to achieve both low overhead and low decoding complexity. We further engineer the implementation of the fountain codes to allow the recovery of corrupted information through overhearing, improving the resilience of the considered reprogramming protocol to channel errors.
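The adaptive hopping idea, keeping every channel in the hopset but visiting good channels more often, can be sketched as follows (the PER-based weighting and the floor parameter are illustrative assumptions, not the algorithm developed in the thesis):

```python
import random

def hop_weights(packet_error_rates, floor=0.01):
    """Turn per-channel packet-error-rate estimates into hopping
    probabilities: all channels remain usable, but channels with a lower
    estimated PER are selected more often. `floor` keeps a minimum weight
    so that bad channels are still probed occasionally."""
    weights = [max(1.0 - per, floor) for per in packet_error_rates]
    total = sum(weights)
    return [w / total for w in weights]

def next_channel(probs, rng):
    """Draw the next hop channel according to the adapted probabilities."""
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]
```

In contrast to classical adaptive frequency hopping, nothing is ever removed from the hopset; channel quality only reshapes the visiting distribution.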
63.
Microwave substrate filter implementation for an advanced 5G antenna system. M Bashar, Arhayem. January 2021.
This master thesis was carried out at Ericsson AB (2018) in collaboration with Leonardo Padial Torán. Microwave filters for future 5G systems have become an increasingly interesting research topic. The conventional design theory of microwave filters is no longer sufficient: new models have to be developed that meet the challenges imposed by demands for compact size, low insertion loss, low cost and high performance. In this thesis work, stripline band-pass filters with a 28 GHz center frequency have been designed and simulated for 5G wireless communication systems. Several design techniques have been investigated in order to reach the target requirements and to overcome the limitations of the design rules. In particular, two different filter categories have been designed and simulated. Uniform Impedance Resonator (UIR) configurations: there are various possible topologies for implementing UIR filters in stripline technology, such as parallel-coupled, hairpin, interdigital and combline filters. Here, a parallel-coupled λ/2-line band-pass filter was chosen and designed using the insertion-loss method. This filter, however, displayed an undesired pass band at twice the design center frequency. Step Impedance Resonator (SIR) configurations: in order to suppress this second harmonic, three different types of SIR band-pass filters have been investigated: the parallel-coupled SIR filter, the hairpin SIR filter and, finally, a filter comprising four cross-coupled hairpin SIR filters. Advanced Design System (ADS) has been used for the initial filter design and Ansys HFSS has been used for the electromagnetic (EM) simulation, including parameter optimization.
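The undesired pass band at twice the center frequency arises because a uniform half-wave resonator also resonates at integer multiples of its fundamental; SIR geometries shift these spurious resonances away. The basic length calculation behind a λ/2 stripline resonator can be sketched as follows (the substrate permittivity used below is an assumed example value, not the one from the thesis):

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def halfwave_resonator_length(f_hz, eps_r):
    """Physical length of a half-wavelength stripline resonator. In
    stripline the fields are fully embedded in the dielectric, so the
    guided wavelength is the free-space wavelength divided by
    sqrt(eps_r). A uniform resonator of this length also resonates at
    2*f, which is the spurious pass band that SIR designs suppress."""
    return C0 / (2.0 * f_hz * math.sqrt(eps_r))

# eps_r = 3.5 is an assumed substrate permittivity for illustration:
length_m = halfwave_resonator_length(28e9, 3.5)   # roughly 2.9 mm
```

At 28 GHz the resonator is only a few millimetres long, which is why fabrication design rules become a real constraint at these frequencies.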
64.
Wave Propagation Models in the Troposphere for Long-Range UHF/SHF Radio Connections. Lindquist, Tim. January 2020.
No description available.
65.
Signal Processing Aspects of Cell-Free Massive MIMO. Interdonato, Giovanni. January 2018.
The fifth generation of mobile communication systems (5G) promises unprecedented levels of connectivity and quality of service (QoS) to satisfy the incessant growth in the number of mobile smart devices and the huge increase in data demand. One of the primary ways 5G will be accomplished is through network densification, namely increasing the number of antennas per site and deploying smaller and smaller cells. Massive MIMO, where MIMO stands for multiple-input multiple-output, is widely expected to be a key enabler of 5G. This technology leverages aggressive spatial multiplexing, using a large number of transmit/receive antennas, to multiply the capacity of a wireless channel. A massive MIMO base station (BS) is equipped with a large number of antennas, much larger than the number of active users. The users are coherently served by all the antennas, in the same time-frequency resources, but are separated in the spatial domain by receiving very directive signals. By supporting such highly spatially focused transmission (precoding), massive MIMO provides higher spectral and energy efficiency, and reduces inter-cell interference compared to existing mobile systems. Inter-cell interference is, however, becoming the major bottleneck as we densify the networks. It cannot be removed as long as we rely on a network-centric implementation, since the concept of inter-cell interference is inherent to the cellular paradigm. Cell-free massive MIMO refers to a massive MIMO system where the BS antennas, herein referred to as access points (APs), are geographically spread out. The APs are connected, through a fronthaul network, to a central processing unit (CPU) which is responsible for coordinating the coherent joint transmission. Such a distributed architecture provides additional macro-diversity, and the co-processing at multiple APs entirely suppresses inter-cell interference.
Each user is surrounded by serving APs and experiences no cell boundaries. This user-centric approach, combined with the system scalability that characterizes the massive MIMO design, constitutes a paradigm shift compared to conventional centralized and distributed wireless communication systems. On the other hand, such a distributed system requires higher-capacity back/front-haul connections, and the signal co-processing increases the signaling overhead. In this thesis, we focus on some signal processing aspects of cell-free massive MIMO. More specifically, we first investigate whether downlink channel estimation, via downlink pilots, brings gains to cell-free massive MIMO, or whether statistical channel state information (CSI) at the users is enough to reliably perform data decoding, as in conventional co-located massive MIMO. Allocating downlink pilots is costly resource-wise, so we also propose resource-saving strategies for downlink pilot assignment. Second, we study fully distributed and scalable precoding schemes that outperform cell-free massive MIMO in its canonical form, which consists of single-antenna APs implementing conjugate beamforming (also known as maximum ratio transmission).
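The canonical form mentioned last, single-antenna APs applying conjugate beamforming, can be sketched in a few lines (the per-AP power normalization below is one common convention, assumed here for illustration):

```python
import numpy as np

def conjugate_beamforming(H, power_per_ap=1.0):
    """Maximum-ratio (conjugate) precoding sketch for a cell-free system:
    H[m, k] is the channel from single-antenna AP m to user k. Each AP
    simply transmits the conjugate of its own channel estimates, scaled
    to meet a per-AP power budget. The scheme is fully distributed: row m
    uses only AP m's local CSI, so no precoder exchange over the
    fronthaul is needed."""
    W = np.conj(H)
    # Per-AP normalization so that sum_k |w_mk|^2 = power_per_ap.
    row_norms = np.linalg.norm(W, axis=1, keepdims=True)
    return np.sqrt(power_per_ap) * W / row_norms
```

The appeal of this precoder is exactly the locality visible above; the thesis then asks how far more capable, but still distributed and scalable, schemes can go beyond it.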
66.
Block Diagonalization Based Beamforming. Patil, Darshan. January 2017.
With increasing mobile penetration, multi-user multi-antenna wireless communication systems are needed to ensure higher per-user data rates and higher system capacity, by exploiting the excess degrees of freedom that additional antennas provide for spatial multiplexing. The rising popularity of "Gigabit-LTE", "Massive-MIMO" and "FD-MIMO" illustrates this demand for high data rates, especially in the forward link. In this thesis we study the MU-MIMO communication setup and attempt to solve the problem of system sum-rate maximization in downlink data transmission (also known as the forward link) under a limited transmit-power budget at the base station. In contrast to the uplink, in the downlink every user in the system must suppress interference from signals intended for other co-users. As mobile terminals have strict restrictions on power availability and physical dimensions, their processing capabilities are very limited (relative to the base station). Therefore, we study solutions from the literature in which most of the interference cancellation can instead be performed at the base station (precoding). While doing so, we maximize the sum rate under the restriction on the total transmit power available at the base station. We also study and evaluate different conventional linear precoding schemes and how they relate to the optimal structure of the solution that maximizes the effective signal-to-interference-plus-noise ratio (SINR) at every receiver output. We further study a suboptimal precoding solution known as block diagonalization (BD), applicable when a receiver has multiple receive antennas, and compare its performance. Finally, we notice that despite the promising sum-rate results, such schemes are not deployed in practice. The reason is that classic BD schemes are computationally heavy.
In this thesis we attempt to reduce the complexity of the BD schemes by exploiting channel coherence and using perturbation theory. We make use of OFDM technology and efficient linear-algebra methods to update the beamforming weights incrementally, rather than recomputing them from scratch, so that the overall complexity of the BD technique is reduced by at least an order of magnitude. The results are simulated using the exponential-correlation channel model and the LTE 3D spatial channel model standardized by 3GPP. The simulated environment consists of a single-cell MU-MIMO system in a standardized urban macro environment with up to 100 transmit antennas at the BS and 2 receive antennas per user. We observe that with increasing spatial correlation and in high-SNR regions, BD outperforms the other precoding schemes discussed in this thesis, and that the developed low-complexity BD precoding solution can be considered an alternative in a more general framework with multiple antennas at the receiver.
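The null-space construction at the heart of block diagonalization can be sketched as follows (a generic SVD-based version; the thesis's contribution is a low-complexity incremental update of such precoders, which is not shown here):

```python
import numpy as np

def bd_precoders(channels):
    """Block-diagonalization sketch: channels[k] is the Nr x Nt channel
    matrix of user k. The columns of user k's precoder are taken from the
    null space of the other users' stacked channels, so inter-user
    interference is cancelled at the transmitter. Requires that Nt is at
    least the total number of the other users' receive antennas."""
    precoders = []
    for k, Hk in enumerate(channels):
        others = np.vstack([H for j, H in enumerate(channels) if j != k])
        # Right singular vectors with zero singular value span null(others).
        _, s, Vh = np.linalg.svd(others)
        rank = int(np.sum(s > 1e-10))
        null_basis = Vh[rank:].conj().T          # Nt x (Nt - rank)
        precoders.append(null_basis[:, :Hk.shape[0]])
    return precoders
```

Every precoder update requires an SVD of a stacked channel matrix per user, which is what makes classic BD computationally heavy and motivates the perturbation-based updates studied in the thesis.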
67.
Real-Time Search in Large Networks and Clouds. Uddin, Misbah. January 2013.
Networked systems, such as telecom networks and cloud infrastructures, hold and generate vast amounts of configuration and operational data, only a small portion of which is used today by management applications. The overall goal of this work is to make all this data available through a real-time search process called network search, in which queries are invoked without giving the location or the format of the data, similar to web search. Such a capability will simplify many management applications and enable new classes of real-time management solutions. The fundamental problems in network search relate to searching a vast and dynamic information space, and to the fact that the information is distributed across a very large system. The thesis contains several contributions towards engineering a network search system. We present a weakly structured information model, which enables the representation of heterogeneous network data; a keyword-based search language, which supports location- and schema-oblivious search queries; and a distributed search mechanism, which is based on an echo protocol and supports a range of matching and ranking options. The search is performed in a peer-to-peer fashion in a network of search nodes. Each search node maintains a local real-time database of locally sensed configuration and operational information. Many of the concepts we developed for network search are based on results from the fields of information retrieval, web search, and very large databases. The key feature of our solution is that the search process and the computation of the query results are performed on local data inside the network or the cloud. We have built a prototype of the system on a cloud testbed and developed applications that use the network search functionality.
The performance measurements suggest that it is feasible to engineer a network search system that processes queries at low latency and low overhead, and that can scale to very large systems on the order of 100,000 nodes.
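The local matching and ranking step on a single search node might look as follows (the weakly structured records, the match rule and the scoring are assumptions made here for illustration, not the thesis's actual matching and ranking options):

```python
def search_node(records, query_terms):
    """Local step of a network-search node sketch: records are weakly
    structured (flat dicts of name/value strings) and a query is a bag of
    keywords with no location or schema attached. A record matches if
    every term appears in some name=value field; matched records are
    ranked by how many distinct fields the terms hit."""
    results = []
    for rec in records:
        fields = [f"{k}={v}".lower() for k, v in rec.items()]
        hits = sum(any(t.lower() in f for f in fields) for t in query_terms)
        if hits == len(query_terms):
            score = sum(any(t.lower() in f for t in query_terms) for f in fields)
            results.append((score, rec))
    return [rec for _, rec in sorted(results, key=lambda x: -x[0])]
```

In the full system, an echo protocol would fan such a query out to all search nodes and aggregate the locally ranked results on the way back.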
68.
Real-Time Monitoring of Global Variables in Large-Scale Dynamic Systems. Wuhib, Fetahi Zebenigus. January 2007.
Large-scale dynamic systems, such as the Internet, as well as emerging peer-to-peer networks and computational grids, require a high level of real-time awareness of the system state for proper and reliable operation. A key challenge is to develop monitoring functions that are efficient, scalable, robust and controllable. The thesis addresses this challenge by focusing on engineering protocols for distributed monitoring of global state variables. The global variables are network-wide aggregates, computed from local device variables using aggregation functions such as SUM, MAX and AVERAGE. Furthermore, the thesis addresses the problem of detecting threshold crossings of such aggregates. The design goals for the protocols are efficiency, quality, scalability, robustness and controllability. The work presented in this thesis has resulted in two novel protocols: a gossip-based protocol for continuous monitoring of aggregates, called G-GAP, and a tree-based protocol for detecting threshold crossings of aggregates, called TCA-GAP. The protocols have been evaluated against the design goals through three complementary evaluation methods: theoretical analysis, simulation study and testbed implementation.
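Gossip-based computation of a network-wide aggregate, as in G-GAP, can be illustrated with the classic push-sum scheme for AVERAGE (a textbook variant shown for intuition, not the G-GAP protocol itself):

```python
def push_sum_average(values, rounds, peers):
    """Push-sum gossip sketch for the AVERAGE aggregate: every node keeps
    a (sum, weight) pair, retains half of it each round and pushes the
    other half to a peer; sum/weight then converges to the network-wide
    average at every node. peers[i] is the node that i pushes to (fixed
    here for reproducibility; a real gossip protocol picks peers at
    random each round)."""
    n = len(values)
    s = [float(v) for v in values]
    w = [1.0] * n
    for _ in range(rounds):
        inbox_s = [0.0] * n
        inbox_w = [0.0] * n
        for i in range(n):
            s[i] *= 0.5
            w[i] *= 0.5
            inbox_s[peers[i]] += s[i]
            inbox_w[peers[i]] += w[i]
        for i in range(n):
            s[i] += inbox_s[i]
            w[i] += inbox_w[i]
    return [si / wi for si, wi in zip(s, w)]
```

Because total mass is conserved and only halved and forwarded, every node converges to the same estimate without any central coordinator, which is the core robustness property that makes gossip attractive for monitoring.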
69.
Design and evaluation of network processor systems and forwarding applications. Fu, Jing. January 2006.
During recent years, both Internet traffic and packet transmission rates have been growing rapidly, and new Internet services such as VPNs, QoS and IPTV have emerged. To meet increasing line-speed requirements and to support current and future Internet services, improvements and changes are needed in current routers, both with respect to hardware architectures and forwarding applications. High-speed routers are nowadays mainly based on application-specific integrated circuits (ASICs), which are custom-made and not flexible enough to support diverse services. General-purpose processors offer flexibility, but have difficulty handling high data rates. A number of software IP-address lookup algorithms have therefore been developed to enable fast packet processing on general-purpose processors. Network processors have recently emerged to provide the performance of ASICs combined with the programmability of general-purpose processors. This thesis provides an evaluation of router design covering both hardware architectures and software applications. The first part of the thesis contains an evaluation of various network processor system designs. We introduce a model for network processor systems which is used as the basis for a simulation tool. Thereafter, we study two ways to organize processing elements (PEs) inside a network processor to achieve parallelism: a pipelined and a pooled organization. The impact of using multiple threads inside a single PE is also studied. In addition, we study the queueing behavior and packet delays in such systems. The results show that parallelism is crucial to achieving high performance, but the pipelined and the pooled processing-element topologies achieve comparable performance. The detailed queueing behavior and packet-delay results have been used to dimension queues and can serve as guidelines for designing memory subsystems and queueing disciplines.
The second part of the thesis contains a performance evaluation of an IP-address lookup algorithm, the LC-trie. The study considers trie search depth, prefix vector access behavior, cache behavior, and packet lookup service time. For the packet lookup service time, the evaluation contains both experimental results and results obtained from a model. The results show that the LC-trie is an efficient route-lookup algorithm for general-purpose processors, capable of performing 20 million packet lookups per second on a 2.8 GHz Pentium 4 computer, which corresponds to a 40 Gb/s link at average packet sizes. Furthermore, the results show the importance of the choice of packet traces when evaluating IP-address lookup algorithms: real-world and synthetically generated traces may behave very differently. The results presented in the thesis are obtained through studies of both hardware architectures and software applications, and can be used to guide the design of next-generation routers.
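As background to this evaluation, the longest-prefix-match operation that the LC-trie accelerates can be sketched with a plain binary trie (illustration only; the LC-trie itself applies path and level compression on top of this structure to reduce search depth):

```python
class PrefixTrie:
    """Minimal binary trie for IPv4 longest-prefix matching. Prefixes and
    addresses are 32-bit integers; each trie node is a dict keyed by bit
    value, with an optional "nh" entry holding a next hop."""

    def __init__(self):
        self.root = {}

    def insert(self, prefix, length, next_hop):
        # Walk the first `length` bits of the prefix, creating nodes.
        node = self.root
        for i in range(length):
            bit = (prefix >> (31 - i)) & 1
            node = node.setdefault(bit, {})
        node["nh"] = next_hop

    def lookup(self, addr):
        # Follow address bits, remembering the last next hop seen,
        # so the longest matching prefix wins.
        node, best = self.root, None
        for i in range(32):
            if "nh" in node:
                best = node["nh"]
            bit = (addr >> (31 - i)) & 1
            if bit not in node:
                return best
            node = node[bit]
        return node.get("nh", best)
```

Search depth in this naive trie is up to 32 bit steps per lookup; level compression is precisely what brings the per-packet cost down to the rates quoted above.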
70.
Access selection in multi-system architectures: cooperative and competitive contexts. Hultell, Johan. January 2007.
Future wireless networks will be composed of multiple radio access technologies (RATs). To benefit from these, users must utilize the appropriate RAT and access point (AP). In this thesis we evaluate the efficiency of selection criteria that, in addition to path loss and system bandwidth, also consider load. The problem is studied for closed as well as open systems. In the former, both terminals and infrastructure are controlled by a single actor (e.g., a mobile operator), while the latter refers to situations where terminals selfishly decide which AP to use (as in a common marketplace). We divide the overall problem into the prioritization between available RATs and, within a RAT, between APs. The results of our studies suggest that data users should, in general, be served by the RAT offering the highest peak data rate. As this can be estimated by the terminals, the benefits of centralized RAT selection are limited. Within a subsystem, however, load-sensitive AP selection criteria can increase data rates. The highest gains are obtained when the subsystem is noise-limited, the deployment unplanned, and the relative difference in the number of users per AP significant. Under these circumstances the maximum supported load can be increased by an order of magnitude. Decentralized AP selection, where greedy autonomous terminal-based agents are in charge of the selection, was also shown to give these gains as long as the agents accounted for load. We also developed a game-theoretic framework in which users compete for wireless resources by bidding in a proportionally fair divisible auction. The framework was applied to a scenario where revenue-seeking APs compete for traffic by selecting an appropriate price. Compared to when the APs cooperate, modelled by the Nash bargaining solution, our results suggest that a competitive access market, where infrastructure is shared implicitly, generally offers users better service at a lower cost.
Although AP revenues are reduced, the reduction is relatively small and was shown to decrease with the concavity of demand. Lastly, we studied whether data services could be offered in a discontinuous high-capacity network by letting a terminal-based agent pre-fetch information that its user may request at some future time. This decouples the period during which the information is transferred from the instant at which it is consumed. Our results show that above a certain critical AP density, considerably lower than that required for continuous coverage, such services start to perform well.
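The proportionally fair divisible auction used in the game-theoretic framework allocates the shared resource in proportion to the submitted bids; a minimal sketch of the allocation rule (the strategic analysis and pricing game from the thesis are not reproduced here):

```python
def proportional_allocation(bids, capacity=1.0):
    """Proportionally fair divisible auction sketch: each user i submits
    a bid b_i (its willingness to pay) and receives the share
    b_i / sum(b) of the divisible resource, paying its bid. Doubling your
    bid never doubles your share unless others bid nothing, which is what
    tempers overbidding in this mechanism."""
    total = sum(bids)
    if total == 0:
        return [0.0] * len(bids)
    return [capacity * b / total for b in bids]
```

With revenue-seeking APs, each AP's price effectively scales the bids it attracts, which is how price competition between APs enters the model.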