11

Evaluation of Word Length Effects on Multistandard Soft Decision Viterbi Decoding

Salim, Ahmed January 2011
Many parity-inducing techniques, such as Forward Error Correction (FEC), have been proposed to cope with channel-induced errors, mitigating them to a large extent if not eradicating them completely. Convolutional codes are widely recognized as being among the most efficient of the known channel coding techniques. However, decoding a convolutionally encoded data stream at the receiving node can be quite complex, time consuming and memory inefficient. This thesis outlines the implementation of a multistandard soft-decision Viterbi decoder and the effects of word length on it. The classic Viterbi algorithm, its soft-decision variant, and zero-tail and tail-biting trellis termination are discussed. For the final implementation in the C language, the zero-tail termination approach with soft-decision Viterbi decoding is adopted. This memory-efficient implementation is flexible with respect to code rate and constraint length. The results obtained are compared with a MATLAB reference decoder. Simulation results are provided that show the performance of the decoder and reveal the trade-off between finite word length and system performance. Such an investigation is valuable for the hardware design of communication systems, and is of particular interest for the Viterbi algorithm because convolutional codes have been selected for several well-known standards such as WiMAX, EDGE, IEEE 802.11a, GPRS, WCDMA, GSM, CDMA 2000 and 3GPP-LTE.
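The word-length trade-off described in this abstract can be made concrete with a small sketch. The following C program (illustrative only, not the thesis implementation) decodes the rate-1/2, constraint-length-3 convolutional code with generators (7, 5) octal using soft-decision Viterbi decoding and zero-tail termination, after quantizing the received soft values to QBITS bits; the code, quantizer scaling and test message are assumed example choices.

/* Minimal illustrative sketch (not the thesis implementation): soft-decision
 * Viterbi decoding of the rate-1/2, K = 3 convolutional code with generators
 * (7, 5) octal and zero-tail termination.  Received soft values are quantized
 * to QBITS bits so the word-length / performance trade-off can be explored. */
#include <stdio.h>
#include <string.h>

#define K        3                   /* constraint length (assumed)          */
#define NSTATES  (1 << (K - 1))      /* 4 trellis states                     */
#define MSGLEN   8                   /* information bits before the tail     */
#define TOTLEN   (MSGLEN + K - 1)    /* zero-tail terminated length          */
#define QBITS    3                   /* soft-decision word length under test */
#define QMAX     ((1 << (QBITS - 1)) - 1)

/* Encoder output pair for current state s (two previous bits) and input b. */
static void enc_out(int s, int b, int c[2])
{
    int s1 = (s >> 1) & 1, s0 = s & 1;
    c[0] = b ^ s1 ^ s0;              /* generator 111 (7 octal)              */
    c[1] = b ^ s0;                   /* generator 101 (5 octal)              */
}

static int next_state(int s, int b) { return ((b << 1) | (s >> 1)) & (NSTATES - 1); }

static int quantize(double x)        /* scale and clip a soft value          */
{
    int q = (int)(x * QMAX + (x >= 0 ? 0.5 : -0.5));
    return q > QMAX ? QMAX : (q < -QMAX ? -QMAX : q);
}

int main(void)
{
    const int msg[TOTLEN] = { 1, 0, 1, 1, 0, 0, 1, 0,  0, 0 };  /* msg + tail */
    double soft[TOTLEN][2];
    int q[TOTLEN][2], c[2];

    /* Encode and BPSK-map (bit 0 -> +1.0, bit 1 -> -1.0); a real test bench
     * would add channel noise here before quantization. */
    for (int t = 0, s = 0; t < TOTLEN; t++) {
        enc_out(s, msg[t], c);
        soft[t][0] = 1.0 - 2.0 * c[0];
        soft[t][1] = 1.0 - 2.0 * c[1];
        s = next_state(s, msg[t]);
    }
    for (int t = 0; t < TOTLEN; t++)
        for (int i = 0; i < 2; i++) q[t][i] = quantize(soft[t][i]);

    /* Viterbi recursion with a correlation branch metric on quantized values. */
    int pm[NSTATES], npm[NSTATES], surv[TOTLEN][NSTATES], dec[TOTLEN];
    for (int s = 0; s < NSTATES; s++) pm[s] = (s == 0) ? 0 : -1000;
    for (int t = 0; t < TOTLEN; t++) {
        for (int s = 0; s < NSTATES; s++) npm[s] = -100000;
        for (int s = 0; s < NSTATES; s++)
            for (int b = 0; b < 2; b++) {
                enc_out(s, b, c);
                int ns = next_state(s, b);
                int m  = pm[s] + q[t][0] * (1 - 2 * c[0]) + q[t][1] * (1 - 2 * c[1]);
                if (m > npm[ns]) { npm[ns] = m; surv[t][ns] = (s << 1) | b; }
            }
        memcpy(pm, npm, sizeof pm);
    }

    /* Trace back from state 0, guaranteed by the zero tail. */
    for (int t = TOTLEN - 1, s = 0; t >= 0; t--) {
        dec[t] = surv[t][s] & 1;
        s      = surv[t][s] >> 1;
    }
    for (int t = 0; t < MSGLEN; t++) printf("%d", dec[t]);
    printf("  <- decoded message with %d-bit soft inputs\n", QBITS);
    return 0;
}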
12

Identification aveugle de codes correcteurs d'erreurs basés sur des grands corps de Galois et recherche d'algorithmes de type décision souple pour les codes convolutifs / Blind identification of error-correcting codes based on large Galois fields and investigation of soft-decision algorithms for convolutional codes

Zrelli, Yasamine 10 December 2013 (has links)
The first part of this thesis focuses on the blind identification of non-binary error-correcting codes over the Galois field GF(2^m). A study of the properties of Galois fields and non-binary codes is presented in order to obtain the elements essential for the blind identification of non-binary code parameters. From the knowledge of the received symbols alone, methods are developed that identify the code parameters in the case of a noiseless transmission, and the relevance of this approach is highlighted when the parameters of the Galois field used at the transmitter are known to the receiver. A thorough theoretical study of the behaviour of the rank criterion is also presented to justify its use by most existing identification methods. For the case of a noisy transmission, three algorithms are developed for the blind identification of the codeword size of binary and non-binary codes. To identify a basis of the dual code, an existing technique for binary codes based on hard-decision demodulation is generalized to non-binary codes; its detection performance is then improved by introducing an iterative process based on the joint use of a soft-decision demodulator and a soft-decision decoding algorithm. In the second part of the thesis, a mathematical formalism is proposed to investigate the impact of mapping-demapping functions on linear-algebra computations and properties over Galois fields in the case of non-binary error-correcting codes. This formalism is then exploited to detect and/or correct some digital transmission problems, such as bad synchronization. Finally, the impact of some mapping-demapping functions on the blind identification of non-binary error-correcting code parameters is studied.
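As a concrete illustration of the rank criterion mentioned above, the sketch below (binary case only; the thesis works over GF(2^m)) reshapes an intercepted bit stream into matrices for candidate codeword lengths and flags the lengths at which the matrix becomes rank deficient over GF(2). The toy (6,3) code, stream length and helper names are assumptions made for the example.

/* Illustrative sketch of the rank criterion for blind codeword-length
 * identification (binary case; the thesis generalizes the idea to GF(2^m)).
 * The intercepted bit stream is reshaped into a matrix for each candidate
 * length L; at the true length the rows all lie in the k-dimensional code
 * subspace, so the matrix is rank deficient over GF(2). */
#include <stdio.h>

#define NBITS  240                        /* length of the intercepted stream  */
#define MAXL   16                         /* largest candidate codeword length */

/* Rank over GF(2) by Gaussian elimination; each row is stored as a bit mask. */
static int gf2_rank(unsigned rows[], int nrows, int ncols)
{
    int rank = 0;
    for (int col = 0; col < ncols && rank < nrows; col++) {
        int piv = -1;
        for (int r = rank; r < nrows; r++)
            if (rows[r] & (1u << col)) { piv = r; break; }
        if (piv < 0) continue;
        unsigned tmp = rows[rank]; rows[rank] = rows[piv]; rows[piv] = tmp;
        for (int r = 0; r < nrows; r++)
            if (r != rank && (rows[r] & (1u << col))) rows[r] ^= rows[rank];
        rank++;
    }
    return rank;
}

int main(void)
{
    /* Toy "intercepted" stream: noiseless codewords of an assumed (6,3) linear
     * code, c = (m0, m1, m2, m0^m1, m1^m2, m0^m2). */
    int bits[NBITS];
    unsigned seed = 0xACE1u;
    for (int i = 0; i < NBITS; i += 6) {
        int m[3];
        for (int j = 0; j < 3; j++) {
            seed = seed * 1103515245u + 12345u;          /* crude PRNG        */
            m[j] = (seed >> 16) & 1;
            bits[i + j] = m[j];
        }
        bits[i + 3] = m[0] ^ m[1];
        bits[i + 4] = m[1] ^ m[2];
        bits[i + 5] = m[0] ^ m[2];
    }

    /* Scan candidate codeword lengths and report any rank deficiency. */
    for (int L = 2; L <= MAXL; L++) {
        int nrows = NBITS / L;
        if (nrows < L) break;                            /* need >= L rows    */
        unsigned rows[NBITS];
        for (int r = 0; r < nrows; r++) {
            rows[r] = 0;
            for (int c = 0; c < L; c++)
                rows[r] |= (unsigned)bits[r * L + c] << c;
        }
        int rank = gf2_rank(rows, nrows, L);
        printf("L = %2d : rank %2d of %2d %s\n", L, rank, L,
               rank < L ? "<-- rank deficient, candidate length" : "");
    }
    return 0;
}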
13

Parallelized Architectures For Low Latency Turbo Structures

Gazi, Orhan 01 January 2007
In this thesis, we present low-latency general concatenated code structures suitable for parallel processing. We propose parallel decodable serially concatenated codes (PDSCCs), a general structure from which many variants of serially concatenated codes can be constructed. Using this general structure we derive parallel decodable serially concatenated convolutional codes (PDSCCCs). Convolutional product codes, which are instances of PDSCCCs, are studied in detail. PDSCCCs have much lower decoding latency and show almost the same performance compared to classical serially concatenated convolutional codes. Using the same idea, we propose parallel decodable turbo codes (PDTCs), a general structure for constructing parallel concatenated codes. PDTCs have much lower latency than classical turbo codes while achieving similar performance. We extend the approach proposed for the construction of parallel decodable concatenated codes to trellis coded modulation, turbo channel equalization, and space-time trellis codes, and show that low-latency systems can be constructed using the same idea. Parallel decoding introduces new implementation problems. One such problem is memory collision, which occurs when multiple decoder units attempt to access the same memory device. We propose novel interleaver structures which prevent the memory collision problem while achieving performance close to that of other interleavers.
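The memory collision problem mentioned above can be illustrated with a short sketch: given an interleaver and a bank mapping, check whether two parallel decoder units ever address the same memory bank in the same step. The bank mapping, block size and the structured interleaver below are illustrative assumptions, not the constructions proposed in the thesis.

/* Illustrative sketch of the memory collision problem in parallel turbo
 * decoding: a block of N extrinsic values is stored in P banks (assumed bank
 * mapping: bank = address / M), and at step i parallel SISO unit p accesses
 * interleaved address pi[p*M + i].  A collision occurs whenever two units hit
 * the same bank in the same step.  Sizes and interleavers are toy examples. */
#include <stdio.h>
#include <stdlib.h>

#define N  64                        /* block length (toy value)              */
#define P  4                         /* number of parallel decoder units      */
#define M  (N / P)                   /* sub-block length handled per unit     */

static int bank_of(int addr) { return addr / M; }     /* assumed bank mapping */

static int count_collisions(const int pi[N])
{
    int collisions = 0;
    for (int i = 0; i < M; i++) {            /* one parallel access step      */
        int used[P] = { 0 };
        for (int p = 0; p < P; p++) {
            int b = bank_of(pi[p * M + i]);
            if (used[b]++) collisions++;     /* bank already accessed         */
        }
    }
    return collisions;
}

int main(void)
{
    int pi[N];

    /* A pseudo-random interleaver: typically produces collisions. */
    for (int i = 0; i < N; i++) pi[i] = i;
    srand(7);
    for (int i = N - 1; i > 0; i--) {        /* Fisher-Yates shuffle          */
        int j = rand() % (i + 1), t = pi[i];
        pi[i] = pi[j];
        pi[j] = t;
    }
    printf("random interleaver:      %d collisions\n", count_collisions(pi));

    /* A structured interleaver: at step i the units hit banks (0+i)%P,
     * (1+i)%P, ..., which are all distinct, so no collisions occur (an
     * illustrative construction only). */
    for (int p = 0; p < P; p++)
        for (int i = 0; i < M; i++)
            pi[p * M + i] = ((p + i) % P) * M + (5 * i + 3) % M;
    printf("structured interleaver:  %d collisions\n", count_collisions(pi));
    return 0;
}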
14

Advanced Techniques for Achieving Near Maximum-Likelihood Soft Detection in MIMO-OFDM Systems and Implementation Aspects for LTE/LTE-A

Aubert, Sébastien 23 September 2011
This thesis deals with spatially multiplexed MIMO systems combined with OFDM modulation. The study focuses in particular on 4x4 systems, which are included in, or under study for, the 3GPP LTE and 3GPP LTE-A standards. These dimensions call for a careful receiver design study, in particular the proposal of detectors that combine good performance, low latency and a computational complexity that is feasible in an embedded system. The challenge, more specifically, is to propose a detector offering near-optimal performance while requiring only polynomial computational complexity. Particular attention is paid to implementation issues; preference is therefore given to fixed-complexity algorithms that allow operations to be performed in parallel. In response to these issues, the detector architecture requires particular attention, and the strategic choice adopted is to transfer as much computational complexity as possible to the preprocessing stage, which does not depend on the data. After an introduction of the general context and the main prerequisites, the major trends in the literature on hard-decision detectors are surveyed. These detectors constitute the core of the subject, and an original detector is proposed, notably including lattice reduction and sphere decoding aspects. Its advantage over existing techniques is demonstrated, and the promising results are maintained when it is extended to soft decisions. As expected, the choice of transferring computational complexity to the preprocessing stage proves to be a winning one; the resulting reduction in computational complexity is presented in this thesis. Among the main results, this work led to the proposal of an original detector that achieves an efficient trade-off between performance and computational complexity: it requires an almost constant computational complexity across constellation sizes while offering performance close to maximum likelihood. Consequently, the proposed soft-decision detector positions itself with respect to the state of the art as a highly efficient solution for 4x4 systems.
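For context on why near-ML detection at polynomial complexity matters, the sketch below (not from the thesis) implements the brute-force maximum-likelihood detector for a small 2x2 QPSK spatial-multiplexing system; its exhaustive search over all candidate symbol vectors is exactly the exponential cost that sphere decoding and lattice-reduction-aided detectors aim to avoid. The channel matrix and noise values are arbitrary example numbers.

/* Illustrative sketch: brute-force maximum-likelihood (ML) detection for a
 * 2x2 spatially multiplexed MIMO system with QPSK.  The exhaustive search
 * over all Q^NT candidate vectors is the exponential-cost baseline that
 * near-ML detectors try to approach at polynomial cost. */
#include <stdio.h>
#include <complex.h>

#define NT 2                                 /* transmit antennas             */
#define NR 2                                 /* receive antennas              */
#define Q  4                                 /* QPSK constellation size       */

int main(void)
{
    const double complex qpsk[Q] = { 1 + 1*I, -1 + 1*I, -1 - 1*I, 1 - 1*I };
    /* Example flat-fading channel matrix H (NR x NT) and transmitted vector. */
    const double complex H[NR][NT] = { { 0.8 + 0.3*I, -0.2 + 0.5*I },
                                       { 0.1 - 0.6*I,  0.9 + 0.1*I } };
    const double complex x[NT] = { qpsk[2], qpsk[1] };
    double complex y[NR];

    /* y = H x + n, with a small fixed "noise" term for illustration. */
    for (int r = 0; r < NR; r++) {
        y[r] = 0.05 - 0.03*I;
        for (int t = 0; t < NT; t++) y[r] += H[r][t] * x[t];
    }

    /* Exhaustive ML search: argmin over all Q^NT candidates of ||y - H s||^2. */
    double best = 1e30;
    int best_idx[NT] = { 0 };
    for (int i = 0; i < Q; i++)
        for (int j = 0; j < Q; j++) {
            const double complex s[NT] = { qpsk[i], qpsk[j] };
            double d = 0.0;
            for (int r = 0; r < NR; r++) {
                double complex e = y[r];
                for (int t = 0; t < NT; t++) e -= H[r][t] * s[t];
                d += creal(e) * creal(e) + cimag(e) * cimag(e);
            }
            if (d < best) { best = d; best_idx[0] = i; best_idx[1] = j; }
        }
    printf("ML decision: antenna 0 -> symbol %d, antenna 1 -> symbol %d (metric %.4f)\n",
           best_idx[0], best_idx[1], best);
    return 0;
}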
15

Performance analysis of the IEEE 802.11A WLAN standard optimum and sub-optimum receiver in frequency-selective, slowly fading Nakagami channels with AWGN and pulsed noise jamming

Kalogrias, Christos 03 1900
Approved for public release, distribution is unlimited / Wireless local area networks (WLAN) are increasingly important in meeting the needs of next-generation broadband wireless communications systems for both commercial and military applications. Under the IEEE 802.11a 5 GHz WLAN standard, OFDM was chosen as the modulation scheme for transmission because of its well-known ability to avoid multipath effects while achieving high data rates. The objective of this thesis is to investigate the performance of the IEEE 802.11a WLAN standard receiver over flat fading Nakagami channels in a worst-case pulse-noise jamming environment, for the different combinations of modulation type (binary and non-binary modulation) and code rate specified by the WLAN standard. Receiver performance with Viterbi soft decision decoding (SDD) is analyzed for additive white Gaussian noise (AWGN) alone and for AWGN plus pulse-noise jamming. Moreover, the performance of the IEEE 802.11a WLAN standard receiver is examined both in the scenario where perfect side information is available (optimum receiver) and when it is not (sub-optimum receiver). In the sub-optimum receiver scenario, the receiver performance is examined both with and without noise normalization. The receiver performance is severely affected by the pulse-noise jamming environment, especially in the sub-optimum receiver scenario; however, the sub-optimum receiver performance improves significantly when noise normalization is implemented. / Lieutenant, Hellenic Navy
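The noise-normalization idea evaluated in this thesis can be sketched briefly: each soft value is divided by an estimate of the noise power in its symbol interval, so that symbols hit by the pulse jammer are de-weighted in the soft-decision metric. The C sketch below uses assumed signal, jammer and estimator parameters and is illustrative only.

/* Illustrative sketch of noise normalization for soft-decision decoding under
 * pulse-noise jamming: each soft value is divided by an estimate of the noise
 * power in its symbol interval, so symbols hit by the jammer contribute less
 * to the Viterbi path metric.  The signal model, jammer duty cycle and
 * estimator below are assumed example values. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define NSYM    16
#define RHO     0.25              /* jammer duty cycle (fraction of symbols)  */
#define SIGMA2  0.1               /* thermal (AWGN) noise power               */
#define JAM2    4.0               /* additional jammer noise power            */

static double gauss(void)         /* crude N(0,1) via central limit theorem   */
{
    double s = 0.0;
    for (int i = 0; i < 12; i++) s += (double)rand() / RAND_MAX;
    return s - 6.0;
}

int main(void)
{
    srand(1);
    for (int k = 0; k < NSYM; k++) {
        int    bit    = rand() & 1;
        int    jammed = ((double)rand() / RAND_MAX) < RHO;
        double np     = SIGMA2 + (jammed ? JAM2 : 0.0);   /* true noise power */
        double r      = (bit ? -1.0 : 1.0) + sqrt(np) * gauss();

        /* In practice np must be estimated (e.g. from the observed energy in
         * the symbol interval); here the true value stands in for that
         * estimate.  The normalized metric de-weights jammed symbols. */
        double metric_plain      = r;
        double metric_normalized = r / np;
        printf("sym %2d  bit %d  %s  raw %+6.2f  normalized %+6.2f\n",
               k, bit, jammed ? "JAMMED" : "      ",
               metric_plain, metric_normalized);
    }
    return 0;
}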
16

Dekodovanje MTR kodova principom finog odlučivanja na kanalima za magnetsko memorisanje informacija / Soft-decision decoding of MTR codes over magnetic recording channels

Đurić Nikola 20 November 2009
This thesis presents novel soft-decision techniques for decoding maximum transition run (MTR) codes. The performance of these techniques is analyzed in combination with an error-correcting LDPC code over magnetic recording channels, with particular attention to the two-track, two-head channel model. Ideal E2PR4 track equalization, suitable for high-density magnetic recording, is used in the channel model.
17

Low-power discrete Fourier transform and soft-decision Viterbi decoder for OFDM receivers

Suh, Sangwook 31 August 2011
The purpose of this research is to present a low-power wireless communication receiver with enhanced performance by relieving the system complexity and performance degradation imposed by the quantization process. With the overwhelming demand for more reliable communication systems, the complexity required of modern communication systems has increased accordingly. A byproduct of this increase in complexity is a commensurate increase in the power consumption of these systems. Since Shannon's era, the mainstream methodologies for ensuring the high reliability of communication systems have been based on the principle that the information signals flowing through the system are represented in digits. Consequently, systems have been heavily driven toward implementation with digital circuits, which is generally beneficial over analog implementations when digitally stored information is locally accessible, such as in memory systems. In communication systems, however, the receiver does not have direct access to the originally transmitted information. Since the signals received from a noisy channel are already continuous values with continuous probability distributions, we suggest a mixed-signal system in which the received continuous signals are fed directly into the analog demodulator and the subsequent soft-decision Viterbi decoder without any quantization involved. In this way, we claim that the redundant system complexity caused by the quantization process is eliminated, giving better power efficiency in wireless communication systems, especially for battery-powered mobile devices. This is also beneficial from a performance perspective, as it takes full advantage of the soft information flowing through the system.
18

Rate Flexible Soft Decision Viterbi Decoder using SiLago

Baliga, Naveen Bantwal January 2021
The IEEE 802.11a protocol is part of the IEEE 802 family of protocols for implementing WLAN Wi-Fi computer communications in various frequency bands. These protocols find applications worldwide, covering a wide range of devices such as mobile phones, computers, laptops and household appliances. Since wireless communication is used, the transmitted data is susceptible to noise. As a means to recover from noise, the transmitted data is encoded using convolutional encoding and correspondingly decoded on the receiver side. The decoder used is the Viterbi decoder, in the PHY layer of the protocol. This thesis investigates soft-decision Viterbi decoder implementations that meet the requirements of the IEEE 802.11a protocol. It aims to implement a rate-flexible design as a coarse-grain re-configurable architecture using the SiLago framework. SiLago is a modular approach towards ASIC design. Components are designed as hardened blocks, which means they are synthesised and pre-verified. Each block is also abuttable, like LEGO blocks, which allows users to connect compatible blocks and make designs specific to their requirements while getting performance similar to that of traditional ASICs. This approach significantly reduces design costs, as verification is a one-time task. The thesis discusses the strongly connected trellis Viterbi decoding algorithm and proposes a design for a soft-decision Viterbi decoder. The proposed design meets the throughput requirements of the communication protocol and can be reconfigured to work for 45 different code rates, with programmable soft-decision width and parallelism. The algorithm used is compared against MATLAB for its BER performance. Results from RTL simulations and the advantages and disadvantages of the proposed design are discussed, and recommendations for future improvements are made.
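Rate flexibility around a Viterbi decoder is commonly obtained by puncturing a rate-1/2 mother code and depuncturing at the receiver, inserting neutral (erasure) soft values at the punctured positions before decoding. The sketch below illustrates this with the IEEE 802.11a rate-3/4 puncturing pattern; the function and array names are assumptions, and the thesis design may realize its 45 code rates differently.

/* Illustrative sketch of depuncturing for rate flexibility: the encoder
 * punctures (deletes) selected coded bits of a rate-1/2 mother code, and the
 * receiver re-inserts neutral soft values (0, i.e. erasures) at those
 * positions before soft-decision Viterbi decoding. */
#include <stdio.h>

#define PERIOD 6                 /* puncturing period in mother-code bits     */
static const int pattern[PERIOD] = { 1, 1, 1, 0, 0, 1 };   /* 802.11a r=3/4   */

/* Expand n_rx received soft values into mother-code soft values. */
static int depuncture(const double *rx, int n_rx, double *out, int max_out)
{
    int i = 0, n_out = 0;
    while (i < n_rx && n_out < max_out) {
        if (pattern[n_out % PERIOD])
            out[n_out++] = rx[i++];   /* transmitted position: keep the value */
        else
            out[n_out++] = 0.0;       /* punctured position: neutral erasure  */
    }
    return n_out;
}

int main(void)
{
    /* Example soft values as they might leave the demapper (signs carry the
     * hard decisions, magnitudes the reliabilities). */
    const double rx[] = { +0.9, -1.1, +0.4, -0.8, +1.2, -0.3, +0.7, -0.9 };
    double mother[32];
    int n = depuncture(rx, sizeof rx / sizeof rx[0], mother, 32);

    for (int k = 0; k < n; k++)
        printf("%+4.1f%s", mother[k], (k % PERIOD == PERIOD - 1) ? "\n" : " ");
    printf("\n%d mother-code soft values fed to the Viterbi decoder\n", n);
    return 0;
}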
19

Advanced techniques to improve the performance of OFDM Wireless LAN

Segkos, Michail 06 1900
Approved for public release; distribution is unlimited / OFDM systems have received increased attention in recent years and have found applications in a number of diverse areas including telephone-line-based ADSL links, digital audio and video broadcasting systems, and wireless local area networks (WLAN). Orthogonal frequency-division multiplexing (OFDM) is a powerful technique for high data-rate transmission over fading channels. However, to deploy OFDM in a WLAN environment, precise frequency synchronization must be maintained and tricky frequency offsets must be handled. In this thesis, various techniques to improve the data throughput of OFDM WLAN are investigated. A simulation tool was developed in MATLAB to evaluate the performance of the IEEE 802.11a physical layer. We proposed a rapid time and frequency synchronization algorithm using only the short training sequence of the IEEE 802.11a standard, thus reducing the training overhead to 50%. Particular attention was paid to channel coding, block interleaving and antenna diversity. Computer simulation showed that a drastic improvement in error-rate performance is achievable when these techniques are deployed. / Lieutenant, Hellenic Navy
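The synchronization approach described above exploits the periodicity of the 802.11a short training sequence. The sketch below (a generic delay-and-correlate estimator, not the thesis's exact algorithm) correlates samples spaced one short-symbol apart to locate the preamble and uses the phase of the correlation as a coarse carrier-frequency-offset estimate; the preamble model, offset and constants are assumed example values.

/* Illustrative sketch of delay-and-correlate synchronization on a periodic
 * short training sequence of 16-sample symbols: the autocorrelation between
 * samples spaced 16 apart peaks over the preamble (coarse timing), and its
 * phase yields a coarse carrier-frequency-offset (CFO) estimate. */
#include <stdio.h>
#include <math.h>
#include <complex.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define L        16                 /* short training symbol length           */
#define NREP     10                 /* repetitions in the short preamble      */
#define NSAMP    (NREP * L + 64)    /* preamble plus some trailing samples    */
#define FS       20e6               /* 802.11a sampling rate, Hz              */

int main(void)
{
    double complex r[NSAMP];
    double f_off = 120e3;           /* example carrier frequency offset, Hz   */

    /* Build a toy received signal: a 16-sample periodic preamble, rotated by
     * the frequency offset, followed by non-periodic "data-like" samples. */
    for (int n = 0; n < NSAMP; n++) {
        double complex base;
        if (n < NREP * L)
            base = cexp(2.0 * M_PI * I * ((n % L) * 3.0 / L));  /* periodic   */
        else
            base = cexp(2.0 * M_PI * I * (0.37 * n * n));       /* aperiodic  */
        r[n] = base * cexp(2.0 * M_PI * I * f_off * n / FS);
    }

    /* Delay-and-correlate: P(d) = sum_m conj(r[d+m]) * r[d+m+L]. */
    int    best_d = 0;
    double best_m = 0.0;
    double complex best_p = 0.0;
    for (int d = 0; d + 2 * L < NSAMP; d++) {
        double complex p = 0.0;
        double e = 0.0;
        for (int m = 0; m < L; m++) {
            p += conj(r[d + m]) * r[d + m + L];
            e += cabs(r[d + m + L]) * cabs(r[d + m + L]);
        }
        double metric = cabs(p) * cabs(p) / (e * e);
        if (metric > best_m) { best_m = metric; best_d = d; best_p = p; }
    }
    /* The phase of P advances by 2*pi*f_off*L/FS over the L-sample delay. */
    double cfo_hat = carg(best_p) * FS / (2.0 * M_PI * L);
    printf("timing metric peak at d = %d (metric %.3f)\n", best_d, best_m);
    printf("estimated CFO %.1f kHz (true %.1f kHz)\n", cfo_hat / 1e3, f_off / 1e3);
    return 0;
}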
20

Viterbi Decoded Linear Block Codes for Narrowband and Wideband Wireless Communication Over Mobile Fading Channels

Staphorst, Leonard 08 August 2005
Since the frantic race towards the Shannon bound [1] commenced in the early 1950s, linear block codes have become integral components of most digital communication systems. Both binary and non-binary linear block codes have proven themselves as formidable adversaries against the impediments presented by wireless communication channels. However, prior to the landmark 1974 paper [2] by Bahl et al. on the optimal Maximum a-Posteriori Probability (MAP) trellis decoding of linear block codes, practical linear block code decoding schemes were not only based on suboptimal hard decision algorithms, but also code-specific in most instances. In 1978 Wolf expedited the work of Bahl et al. by demonstrating the applicability of a block-wise Viterbi Algorithm (VA) to Bahl-Cocke-Jelinek-Raviv (BCJR) trellis structures as a generic optimal soft decision Maximum-Likelihood (ML) trellis decoding solution for linear block codes [3]. This study, largely motivated by code implementers’ ongoing search for generic linear block code decoding algorithms, builds on the foundations established by Bahl, Wolf and other contributing researchers by thoroughly evaluating the VA decoding of popular binary and non-binary linear block codes on realistic narrowband and wideband digital communication platforms in lifelike mobile environments. Ideally, generic linear block code decoding algorithms must not only be modest in terms of computational complexity, but they must also be channel aware. Such universal algorithms will undoubtedly be integrated into most channel coding subsystems that adapt to changing mobile channel conditions, such as the adaptive channel coding schemes of current Enhanced Data Rates for GSM Evolution (EDGE), 3rd Generation (3G) and Beyond 3G (B3G) systems, as well as future 4th Generation (4G) systems. In this study, classic BCJR linear block code trellis construction is annotated and applied to contemporary binary and non-binary linear block codes. Since BCJR trellis structures are inherently sizable and intricate, rudimentary trellis complexity calculation and reduction algorithms are also presented and demonstrated. The block-wise VA for BCJR trellis structures, initially introduced by Wolf in [3], is revisited and improved to incorporate Channel State Information (CSI) during its ML decoding efforts. In order to accurately appraise the Bit-Error-Rate (BER) performances of VA decoded linear block codes in authentic wireless communication environments, Additive White Gaussian Noise (AWGN), flat fading and multi-user multipath fading simulation platforms were constructed. Included in this task was the development of baseband complex flat and multipath fading channel simulator models, capable of reproducing the physical attributes of realistic mobile fading channels. Furthermore, a complex Quadrature Phase Shift Keying (QPSK) system was employed as the narrowband communication link of choice for the AWGN and flat fading channel performance evaluation platforms. The versatile B3G multi-user multipath fading simulation platform, however, was constructed using a wideband RAKE receiver-based complex Direct Sequence Spread Spectrum Multiple Access (DS/SSMA) communication system that supports unfiltered and filtered Complex Spreading Sequences (CSS).
This wideband platform is not only capable of analysing the influence of frequency selective fading on the BER performances of VA decoded linear block codes, but also the influence of the Multi-User Interference (MUI) created by other users active in the Code Division Multiple Access (CDMA) system. CSS families considered during this study include Zadoff-Chu (ZC) [4, 5], Quadriphase (QPH) [6], Double Sideband (DSB) Constant Envelope Linearly Interpolated Root-of-Unity (CE-LI-RU) filtered Generalised Chirp-like (GCL) [4, 7-9] and Analytical Bandlimited Complex (ABC) [7, 10] sequences. Numerous simulated BER performance curves, obtained using the AWGN, flat fading and multi-user multipath fading channel performance evaluation platforms, are presented in this study for various important binary and non-binary linear block code classes, all decoded using the VA. Binary linear block codes examined include Hamming and Bose-Chaudhuri-Hocquenghem (BCH) codes, whereas popular burst error correcting non-binary Reed-Solomon (RS) codes receive special attention. Furthermore, a simple cyclic binary linear block code is used to validate the viability of employing the reduced trellis structures produced by the proposed trellis complexity reduction algorithm. The simulated BER performance results shed light on the error correction capabilities of these VA decoded linear block codes when influenced by detrimental channel effects, including AWGN, Doppler spreading, diminished Line-of-Sight (LOS) signal strength, multipath propagation and MUI. The study also investigates the impact of other pertinent communication system configuration alternatives, including channel interleaving, code puncturing, the quality of the CSI available during VA decoding, RAKE diversity combining approaches and CSS correlation characteristics. From these simulated results it can not only be gathered that the VA is an effective generic optimal soft input ML decoder for both binary and non-binary linear block codes, but also that the inclusion of CSI during VA metric calculations can fortify the BER performances of such codes beyond that attainable by classic ML decoding algorithms. / Dissertation (MEng(Electronic))--University of Pretoria, 2006. / Electrical, Electronic and Computer Engineering / unrestricted
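The benefit of including CSI in the VA metric calculations can be seen from the underlying maximum-likelihood metric: for BPSK over a flat fading channel, the ML metric weights each received soft value by the fading amplitude of its bit, so bits received in deep fades count for less. The sketch below demonstrates this with a brute-force search over the codewords of a (7,4) Hamming code standing in for the trellis-based VA; the received values and fading amplitudes are made-up example numbers.

/* Illustrative sketch of CSI-weighted soft maximum-likelihood decoding over a
 * flat fading channel.  With r_i = a_i * s_i + n_i and s_i = 1 - 2*c_i, the
 * ML metric for a candidate codeword is sum_i a_i * r_i * s_i, i.e. each soft
 * value is weighted by its fading amplitude a_i (the CSI). */
#include <stdio.h>

#define N 7
#define K 4

/* Systematic (7,4) Hamming encoder (one valid choice of parity equations). */
static void encode(int msg, int c[N])
{
    int m[K];
    for (int i = 0; i < K; i++) m[i] = (msg >> (K - 1 - i)) & 1;
    for (int i = 0; i < K; i++) c[i] = m[i];
    c[4] = m[0] ^ m[1] ^ m[2];
    c[5] = m[1] ^ m[2] ^ m[3];
    c[6] = m[0] ^ m[1] ^ m[3];
}

int main(void)
{
    /* Example received soft values and per-bit fading amplitudes: bits 1 and
     * 4 sit in deep fades, so their unreliable signs should be discounted. */
    const double r[N] = { +0.9, -0.2, +0.7, +0.8, +0.1, -1.1, +0.6 };
    const double a[N] = {  1.0,  0.1,  0.9,  1.0,  0.15, 1.1,  0.8 };

    int best_plain = 0, best_csi = 0, c[N];
    double m_plain = -1e9, m_csi = -1e9;
    for (int msg = 0; msg < (1 << K); msg++) {
        encode(msg, c);
        double mp = 0.0, mc = 0.0;
        for (int i = 0; i < N; i++) {
            double s = 1.0 - 2.0 * c[i];          /* BPSK symbol for bit c_i  */
            mp += r[i] * s;                       /* metric without CSI       */
            mc += a[i] * r[i] * s;                /* CSI-weighted ML metric    */
        }
        if (mp > m_plain) { m_plain = mp; best_plain = msg; }
        if (mc > m_csi)   { m_csi   = mc; best_csi   = msg; }
    }
    printf("decision without CSI: message 0x%X\n", best_plain);
    printf("decision with CSI:    message 0x%X\n", best_csi);
    return 0;
}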
