71

On the interaction of cooperation techniques with channel coding and ARQ in wireless communications / Interactions de la coopération, des techniques ARQ et du codage canal dans le contexte des communications sans fil

Maliqi, Faton 19 December 2017 (has links)
Nowadays, mobile communications are characterized by a fast-increasing demand for internet-based services (voice, video, data). Video services constitute a large fraction of internet traffic today; according to a report by Cisco, 75% of the world's mobile data traffic will be video-based by 2020. This ever-increasing demand has been the main driver for the development of the 4G digital cellular network, where packet-switched services are the primary design target. In particular, the overall system needs to ensure high peak data rates to the user and low delay in the delivery of content, in order to support real-time applications such as video streaming and gaming. This has motivated, in the last decade, a renewed and rising interest in wireless radio access technology. The wireless channel suffers from various physical phenomena such as path loss, shadowing, fading and interference. In the most recent technologies, these effects are counteracted using the Automatic Repeat reQuest (ARQ) protocol, which consists of retransmitting the same signal from the same node. ARQ is usually combined with channel codes at the physical layer, a combination known as Hybrid ARQ (HARQ). Another improvement for communication over wireless channels is achieved when a Relay is used as an intermediate node helping the communication between a Source and a Destination, which is known as cooperative communication. Both techniques, cooperation and HARQ, significantly improve the performance of the communication system when applied individually. One open question is whether their combination would bring the sum of the individual improvements, or would be only marginally beneficial. The literature contains many studies on the combination of these two techniques, but in this thesis we focus mainly on their interaction at the physical layer (PHY) and the medium access control (MAC) layer. We use example protocols on a network of three nodes (Source, Destination and Relay). For the theoretical analysis of these systems we rely on Finite State Markov Chains (FSMC). We discuss the case where the Relay works in Decode-and-Forward (DCF) mode, which is very common in the literature, but our analysis focuses more strongly on the case where the Relay works in Demodulate-and-Forward (DMF) mode, because of its simplicity of implementation and its efficiency. This case is addressed much more rarely in the available literature, because of the higher complexity required by its analysis. Usually, the interaction between the two techniques has been studied using deterministic protocols; in our analysis we consider both deterministic and probabilistic protocols.
So far, probabilistic protocols, where the retransmitting node is chosen with a given probability, have been proposed mainly for the higher layers of communication systems. In contrast, this thesis studies probabilistic protocols at the PHY and MAC layers, which gives more insight into the analysis and performance optimization. The probabilistic protocol contains only two parameters, which can be optimized for best performance. These parameters can be computed to mimic the behavior of a given deterministic protocol, so the optimized probabilistic protocol can only improve upon it. Moreover, the performance of our optimized probabilistic protocol is checked against results from the literature, and the comparison shows that our protocol performs better. Finally, the issue of relay selection is also discussed: in a scenario with several candidate Relays, we propose a criterion for choosing the best Relay, and the performance obtained with this criterion is compared to that obtained with reference criteria from the literature.
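To make the probabilistic protocol concrete, the following is a minimal Monte Carlo sketch under simplifying assumptions: i.i.d. packet-erasure links with hypothetical erasure probabilities p_sd, p_sr, p_rd, and a single retransmission probability p_relay standing in for the protocol's parameters. It illustrates the idea only and is not the FSMC model analyzed in the thesis.

```python
# Minimal Monte Carlo sketch of a probabilistic cooperative HARQ protocol
# (illustrative only: i.i.d. packet-erasure links and a hypothetical
# parameter p_relay stand in for the thesis's FSMC model).
import random

def simulate(p_sd=0.6, p_sr=0.3, p_rd=0.2, p_relay=0.7, n_packets=100_000, max_tx=8):
    """Return average transmissions per delivered packet and delivery rate."""
    total_tx, delivered = 0, 0
    for _ in range(n_packets):
        relay_has_copy = False
        for _tx in range(max_tx):
            total_tx += 1
            # Choose the transmitting node: the Relay retransmits with
            # probability p_relay once it holds a copy, otherwise the Source.
            use_relay = relay_has_copy and random.random() < p_relay
            p_err = p_rd if use_relay else p_sd
            if random.random() > p_err:           # Destination decodes
                delivered += 1
                break
            if not use_relay and random.random() > p_sr:
                relay_has_copy = True             # Relay overhears the Source
    return total_tx / n_packets, delivered / n_packets

if __name__ == "__main__":
    avg_tx, rate = simulate()
    print(f"avg transmissions/packet: {avg_tx:.2f}, delivery rate: {rate:.3f}")
```

Sweeping p_relay in such a sketch is the simplest way to see how a single probabilistic parameter trades off the weaker direct link against the stronger Relay-Destination link.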
72

Throughput improvements for FHMA wireless data networks employing variable rate channel coding

Park, Andrew S. 02 February 2010 (has links)
Master of Science
73

Integer-forcing in multiterminal coding: uplink-downlink duality and source-channel duality

He, Wenbo 05 November 2016 (has links)
Interference is considered to be a major obstacle to wireless communication. Popular approaches, such as the zero-forcing receiver for the MIMO (multiple-input, multiple-output) multiple-access channel (MAC) and zero-forcing (ZF) beamforming for the MIMO broadcast channel (BC), eliminate the interference first and decode each codeword separately using a conventional single-user decoder. Recently, a transceiver architecture called integer-forcing (IF) has been proposed in the context of the Gaussian MIMO multiple-access channel to exploit integer-linear combinations of the codewords. Instead of treating other codewords as interference, the integer-forcing approach decodes linear combinations of the codewords from different users and then solves for the desired codewords. Integer-forcing can closely approach the performance of the optimal joint maximum-likelihood decoder, and an advanced version called successive integer-forcing can achieve the sum capacity of the MIMO MAC. Several extensions of integer-forcing have been developed in various scenarios, such as integer-forcing for the Gaussian MIMO broadcast channel, integer-forcing for Gaussian distributed source coding, and integer-forcing interference alignment for the Gaussian interference channel. This dissertation demonstrates duality relationships for integer-forcing among three different channel models. We explore two distinct duality types in detail: uplink-downlink duality and source-channel duality. Uplink-downlink duality is established for integer-forcing between the Gaussian MIMO multiple-access channel and its dual Gaussian MIMO broadcast channel: we show that, under a total power constraint, integer-forcing can achieve the same sum rate in both cases. We further develop a dirty-paper integer-forcing scheme for the Gaussian MIMO BC and show an uplink-downlink duality with successive integer-forcing for the Gaussian MIMO MAC. The source-channel duality is established for integer-forcing between the Gaussian MIMO multiple-access channel and its dual Gaussian distributed source coding problem. We extend previous results for integer-forcing source coding to allow for successive cancellation. For integer-forcing without successive cancellation in both channel coding and source coding, we show that the rates in the two scenarios lie within a constant gap of one another. We further show that there exists a successive cancellation scheme such that integer-forcing channel coding and integer-forcing source coding achieve the same rate tuple.
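For background, a small numerical sketch of the standard integer-forcing achievable rate for a real-valued MIMO MAC (the textbook expression, not code from the dissertation): for an integer vector a, each stream is decodable at rate 0.5*log2(1 / (a' (I + P H'H)^-1 a)), and the equal-rate scheme achieves the minimum of this quantity over the rows of a full-rank integer matrix A. Channel, power, and the small search range below are hypothetical.

```python
# Illustrative computation of the integer-forcing achievable sum rate for a
# real-valued 2x2 MIMO MAC. Small integer vectors are searched exhaustively.
import itertools
import numpy as np

def if_rate(H, P, a):
    """Per-stream IF rate: 0.5*log2(1 / (a^T (I + P H^T H)^-1 a)), clipped at 0."""
    M = np.linalg.inv(np.eye(H.shape[1]) + P * H.T @ H)
    return max(0.0, 0.5 * np.log2(1.0 / (a @ M @ a)))

def best_if_sum_rate(H, P, coeff_range=3):
    """Search full-rank integer matrices A with entries in [-coeff_range, coeff_range]."""
    n = H.shape[1]
    vecs = [np.array(v) for v in
            itertools.product(range(-coeff_range, coeff_range + 1), repeat=n) if any(v)]
    best = 0.0
    for rows in itertools.combinations(vecs, n):
        A = np.stack(rows)
        if abs(np.linalg.det(A)) < 1e-9:
            continue                      # A must be full rank
        best = max(best, n * min(if_rate(H, P, a) for a in rows))
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = rng.standard_normal((2, 2))       # i.i.d. real Gaussian channel
    P = 10.0                              # per-user SNR
    print(f"integer-forcing sum rate: {best_if_sum_rate(H, P):.2f} bits/channel use")
```

Restricting the search to A = I recovers a zero-forcing-like receiver, which makes the gain from allowing non-trivial integer combinations easy to see numerically.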
74

Efficient Resource Allocation for Wireless Networks

Eric J Ruzomberka (13145559) 26 July 2022 (has links)
The complex and distributed nature of wireless networks has traditionally made the allocation of network resources between network stakeholders a challenging task. In the next generation of wireless networks, allocation mechanisms must be able to address these traditional challenges while also addressing new ones. New challenges arise as networks adopt changing business relationships between existing stakeholders, introduce new stakeholders with diverse interests, integrate intelligent and autonomous systems, and contend with emerging security threats. To address these new challenges, wireless network engineers will require a fundamental understanding of systems consisting of strategic decision makers with competing interests. Our contribution to this understanding is threefold: First, we study a novel moral hazard that can occur when payment mechanisms are used to incentivize cooperation between multi-hop network nodes. Second, we introduce a network sharing framework that enables 5G/beyond-5G mobile operators to split shared infrastructure costs subject to a regulatory constraint on the cost structure of the shared network. Lastly, we study reliable communication over an adversarial channel in which the adversary can compute side-information subject to a practical computational bound. For each of the above three topics, we provide both analytical and numerical studies from which we derive insights into the design of allocation mechanisms.
75

Low-delay sensing and transmission in wireless sensor networks

Karlsson, Johannes January 2008 (has links)
With the increasing popularity and relevance of ad-hoc wireless sensor networks, cooperative transmission is more relevant than ever. In this thesis, we consider methods for optimizing cooperative transmission schemes in wireless sensor networks. We are particularly interested in communication schemes that can be used in delay-critical applications, such as networked control, and propose suitable candidate joint source-channel coding schemes. We show that, in many cases, there are significant gains if the parts of the system are jointly optimized for the current source and channel. We focus especially on two means of cooperative transmission, namely distributed source coding and relaying. In the distributed source coding case, we consider the transmission of correlated continuous sources and propose an algorithm for designing simple and energy-efficient sensor nodes. In particular, the cases of the binary symmetric channel and the additive white Gaussian noise channel are studied. The system works on a sample-by-sample basis, yielding very low encoding complexity at an insignificant delay. Due to the source correlation, the resulting quantizers use the same indices for several separated intervals in order to reduce the quantization distortion. For the case of relaying, we study the transmission of a continuous Gaussian source and of a uniformly distributed discrete source. In both situations, we propose algorithms for designing low-delay source-channel and relay mappings. We show that there can be significant power savings if the optimized systems are used instead of more traditional systems. By studying the structure of the optimized source-channel and relay mappings, we provide useful insights into how the optimized systems work. Interestingly, the design algorithm generally produces relay mappings with a structure that resembles Wyner-Ziv compression.
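The index-reuse property mentioned above, where one transmitted index is shared by several separated quantizer cells, can be illustrated with a toy sketch assuming a uniform fine quantizer folded modulo M and strongly correlated side information at the decoder. The step size and M below are hypothetical and not the thesis's optimized design.

```python
# Toy illustration of index reuse in distributed quantization: encode x with a
# fine uniform quantizer, transmit only the index modulo M, and let correlated
# side information y at the decoder pick the most likely candidate interval.
import numpy as np

STEP, M = 0.25, 4                      # quantizer step and number of reused indices

def encode(x):
    return int(np.round(x / STEP)) % M # many separated cells share one index

def decode(idx, y, search=40):
    # Candidate reconstruction levels whose fine index folds to idx.
    k = np.arange(-search, search + 1)
    candidates = (k * M + idx) * STEP
    return candidates[np.argmin(np.abs(candidates - y))]  # closest to side info

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.standard_normal(10_000)
    y = x + 0.05 * rng.standard_normal(10_000)   # strongly correlated side info
    x_hat = np.array([decode(encode(xi), yi) for xi, yi in zip(x, y)])
    print(f"rate: {np.log2(M):.1f} bits/sample, MSE: {np.mean((x - x_hat) ** 2):.4f}")
```

The encoder spends only log2(M) bits per sample yet achieves roughly the distortion of the fine quantizer, because the side information resolves which of the separated cells produced the transmitted index.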
76

A Practical Coding Scheme For Broadcast Channel

Sun, Wenbo 10 1900 (has links)
In this thesis, a practical superposition coding scheme based on multilevel low-density parity-check (LDPC) codes is proposed for discrete memoryless broadcast channels. The simulation results show that the performance of the proposed scheme approaches the information-theoretic limits. We also propose a method for optimizing the degree distribution of multilevel LDPC codes based on the analysis of EXIT functions. / Master of Applied Science (MASc)
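For context, a minimal sketch of superposition coding for the standard two-user degraded Gaussian broadcast channel. This is textbook background with hypothetical power and noise values, not the discrete memoryless setting or the multilevel LDPC construction of the thesis.

```python
# Background sketch: superposition coding on a two-user degraded Gaussian
# broadcast channel. A fraction alpha of the power P carries the strong user's
# layer (noise N1); the weak user (noise N2 > N1) treats that layer as noise,
# while the strong user removes the weak user's layer before decoding its own.
import numpy as np

def superposition_rates(P, N1, N2, alpha):
    """Rate pair (R1, R2) in bits/channel use for power split alpha -> user 1."""
    R1 = 0.5 * np.log2(1 + alpha * P / N1)                      # strong user, after SIC
    R2 = 0.5 * np.log2(1 + (1 - alpha) * P / (alpha * P + N2))  # weak user
    return R1, R2

if __name__ == "__main__":
    for alpha in (0.1, 0.3, 0.5):
        r1, r2 = superposition_rates(P=10.0, N1=1.0, N2=4.0, alpha=alpha)
        print(f"alpha={alpha:.1f}: R1={r1:.2f}, R2={r2:.2f}")
```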
77

A Source-Channel Separation Theorem with Application to the Source Broadcast Problem

Khezeli, Kia 11 1900 (has links)
A converse method is developed for the source broadcast problem. Specifically, it is shown that the separation architecture is optimal for a variant of the source broadcast problem and the associated source-channel separation theorem can be leveraged, via a reduction argument, to establish a necessary condition for the original problem, which unifies several existing results in the literature. Somewhat surprisingly, this method, albeit based on the source-channel separation theorem, can be used to prove the optimality of non-separation based schemes and determine the performance limits in certain scenarios where the separation architecture is suboptimal. / Thesis / Master of Applied Science (MASc)
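For reference, the classical point-to-point separation theorem that such converse arguments build on can be stated as follows; this is standard background only, not the thesis's broadcast-setting result.

```latex
% Classical point-to-point source-channel separation (background only):
% a source with rate-distortion function R(D) can be transmitted over a
% channel of capacity C, using \kappa channel uses per source symbol, at
% distortion at most D essentially if and only if
\[
  R(D) \;\le\; \kappa\, C .
\]
% Sufficiency follows from separate source and channel codes; necessity is
% the converse that the reduction argument above generalizes.
```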
78

Implementation of Parallel and Serial Concatenated Convolutional Codes

Wu, Yufei 27 April 2000 (has links)
Parallel concatenated convolutional codes (PCCCs), called "turbo codes" by their discoverers, have been shown to perform close to the Shannon bound at bit error rates (BERs) between 1e-4 and 1e-6. Serial concatenated convolutional codes (SCCCs), which perform better than PCCCs at BERs lower than 1e-6, were developed borrowing the same principles as PCCCs, including code concatenation, pseudorandom interleaving and iterative decoding. The first part of this dissertation introduces the fundamentals of concatenated convolutional codes. The theoretical and simulated BER performance of PCCCs and SCCCs is discussed. Encoding and decoding structures are explained, with emphasis on the Log-MAP decoding algorithm and the general soft-input soft-output (SISO) decoding module. Sliding-window techniques, which can be employed to reduce memory requirements, are also briefly discussed. The second part of this dissertation presents four major contributions to the field of concatenated convolutional coding developed through this research. First, the effects of quantization and fixed-point arithmetic on the decoding performance are studied. Analytic bounds and modular renormalization techniques are developed to improve the efficiency of the SISO module implementation without compromising performance. Second, a new stopping criterion, SDR, is introduced; when its complexity and performance are evaluated against existing criteria, it is found to perform well at the lowest cost. Third, a new type-II code-combining automatic repeat request (ARQ) technique is introduced which makes use of the related PCCC and SCCC. Fourth, a new code-assisted synchronization technique is presented, which uses a list approach to leverage the simplicity of the correlation technique and the soft information of the decoder. In particular, the variant that uses the SDR criterion achieves superb performance with low complexity. Finally, the third part of this dissertation discusses the FPGA-based implementation of the turbo decoder, which is the fruit of cooperation with fellow researchers.
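The Log-MAP recursions mentioned above are built from the Jacobian logarithm ("max-star") operation; the following is a minimal illustration of that primitive and of the Max-Log approximation, not code from the dissertation.

```python
# The max* (Jacobian logarithm) primitive at the heart of Log-MAP decoding:
#   max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^{-|a-b|}).
# Max-Log-MAP drops the correction term, trading accuracy for complexity.
import math
from functools import reduce

def max_star(a, b):
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_n(values):
    """Fold max* over a list, e.g. to marginalize branch metrics in the log domain."""
    return reduce(max_star, values)

if __name__ == "__main__":
    metrics = [-1.2, 0.4, -0.3]
    exact = math.log(sum(math.exp(v) for v in metrics))
    print(f"max*          : {max_star_n(metrics):.4f} (exact log-sum-exp: {exact:.4f})")
    print(f"max-log approx: {max(metrics):.4f}")
```

The correction term ln(1 + e^{-|a-b|}) is what fixed-point implementations typically tabulate, which is where the quantization effects studied in the dissertation enter.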
79

Error Correction and Concealment of Block Based, Motion-Compensated Temporal Prediction, Transform Coded Video

Robie, David Lee 30 March 2005 (has links)
The use of the Internet and wireless networks to bring multimedia to the consumer continues to expand. The transmission of these products is always subject to corruption due to errors such as bit errors or lost and ill-timed packets; however, in many cases, such as real-time video transmission, retransmission via automatic repeat request (ARQ) is not practical. Therefore receivers must be capable of recovering from corrupted data. Errors can be mitigated using forward error correction in the encoder or error concealment techniques in the decoder. This thesis investigates the use of forward error correction (FEC) techniques in the encoder and error concealment in the decoder in block-based, motion-compensated, temporal-prediction, transform codecs. It shows improvement over standard FEC applications and improvements in error concealment relative to the Motion Picture Experts Group (MPEG) standard. To this end, this dissertation describes the following contributions and proofs-of-concept in the area of error concealment and correction in block-based video transmission: a temporal error concealment algorithm which uses motion-compensated macroblocks from previous frames; a spatial error concealment algorithm which uses the Hough transform to detect edges in both foreground and background colors and applies directional interpolation or directional filtering to provide improved edge reproduction; a codec which uses data hiding to transmit error correction information; an enhanced codec which builds upon the last by improving performance in the error-free environment while maintaining excellent error recovery capabilities; and a method to allocate Reed-Solomon (R-S) packet-based forward error correction that decreases distortion (using a PSNR metric) at the receiver compared to standard FEC techniques. Finally, under the constraint of a constant bit rate, the tradeoff between traditional R-S FEC and alternate forward concealment information (FCI) is evaluated. Each of these developments is compared and contrasted with state-of-the-art techniques and shows improvements using widely accepted metrics. The dissertation concludes with a discussion of future work.
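A toy sketch of the temporal-concealment idea above, assuming grayscale frame arrays, a 16x16 macroblock, and a small hypothetical set of candidate motion vectors scored by boundary matching; the dissertation's algorithm is more elaborate.

```python
# Toy temporal error concealment (illustrative only): replace a lost 16x16
# macroblock with a motion-compensated block from the previous frame, trying a
# few candidate motion vectors and scoring them by boundary matching against
# the surviving pixels just above the lost block in the current frame.
import numpy as np

B = 16  # macroblock size

def conceal_block(prev, curr, top, left,
                  candidates=((0, 0), (0, 4), (4, 0), (-4, 0), (0, -4))):
    """Fill curr[top:top+B, left:left+B] from prev using the best candidate MV."""
    best_cost, best_patch = np.inf, None
    for dy, dx in candidates:
        y, x = top + dy, left + dx
        if y < 0 or x < 0 or y + B > prev.shape[0] or x + B > prev.shape[1]:
            continue
        patch = prev[y:y + B, x:x + B]
        # Boundary matching: compare the patch's top row with the received row
        # directly above the lost block.
        cost = np.abs(patch[0, :].astype(int)
                      - curr[top - 1, left:left + B].astype(int)).sum()
        if cost < best_cost:
            best_cost, best_patch = cost, patch
    curr[top:top + B, left:left + B] = best_patch
    return curr

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    prev = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    curr = prev.copy()
    curr[32:48, 32:48] = 0            # pretend this macroblock was lost
    conceal_block(prev, curr, top=32, left=32)
    print("lost macroblock concealed from the previous frame")
```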
80

Coding with side information

Cheng, Szeming 01 November 2005 (has links)
Source coding and channel coding are two important problems in communications. Although side information exists in everyday scenarios, its effect is not taken into account in the conventional setups. In this thesis, we focus on the practical design of two interesting coding problems with side information: Wyner-Ziv coding (WZC, source coding with side information at the decoder) and Gel'fand-Pinsker coding (GPC, channel coding with side information at the encoder). For WZC, we split the design problem into the two cases where the distortion of the reconstructed source is zero and where it is not. We review that the first case, commonly called Slepian-Wolf coding (SWC), can be implemented using conventional channel coding. Then, we detail the SWC design using low-density parity-check (LDPC) codes. To facilitate SWC design, we justify a necessary requirement that the SWC performance should be independent of the input source. We show that a sufficient condition for this requirement is that the hypothetical channel between the source and the side information satisfies a symmetry condition dubbed dual symmetry. Furthermore, under that dual symmetry condition, the SWC design problem can simply be treated as an LDPC code design over the hypothetical channel. When the distortion of the reconstructed source is non-zero, we propose a practical WZC paradigm called Slepian-Wolf coded quantization (SWCQ) by combining SWC and nested lattice quantization. We point out an interesting analogy between SWCQ and entropy-coded quantization in classic source coding. Furthermore, a practical scheme of SWCQ using 1-D nested lattice quantization and LDPC codes is implemented. For GPC, since the actual design procedure relies on the more precise setting of the problem, we choose to investigate the design of GPC in the form of a digital watermarking problem, as digital watermarking is the precise dual of WZC. We then introduce an enhanced version of the well-known spread-spectrum watermarking technique. Two applications related to digital watermarking are presented.
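The claim that Slepian-Wolf coding can be implemented with conventional channel coding can be illustrated with a toy syndrome-based sketch built on the (7,4) Hamming code; the thesis instead uses LDPC codes with belief-propagation decoding, so the brute-force decoder below is purely illustrative.

```python
# Toy Slepian-Wolf coding via syndromes of the (7,4) Hamming code. The encoder
# sends only the 3-bit syndrome s = H x (rate 3/7); the decoder, holding
# correlated side information y that differs from x in at most one bit,
# recovers x as the word in the coset of s closest to y.
import itertools
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)   # parity-check matrix

def encode(x):
    return H @ x % 2                               # 3-bit syndrome

def decode(s, y):
    # Brute force over all 2^7 words: pick the one with syndrome s closest to y.
    best, best_d = None, 8
    for w in itertools.product((0, 1), repeat=7):
        w = np.array(w)
        d = int(np.sum(w ^ y))
        if d < best_d and np.array_equal(H @ w % 2, s):
            best, best_d = w, d
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    x = rng.integers(0, 2, 7)
    y = x.copy()
    y[rng.integers(0, 7)] ^= 1                     # side info differs in one bit
    x_hat = decode(encode(x), y)
    print("recovered correctly:", np.array_equal(x, x_hat))
```

Because the Hamming code has minimum distance 3, any single-bit discrepancy between x and y is always corrected, which is the channel-coding view of SWC that the LDPC-based design in the thesis scales up.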
