About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Design of Low-Cost Low-Density Parity-Check Code Decoder

Liao, Wei-Chung 06 September 2005 (has links)
With the enormous growth of mobile communication applications, reducing the power dissipation of wireless communication has become an important issue that attracts much attention. One of the key techniques to achieve low-power transmission is to develop a powerful channel coding scheme that provides good error-correcting capability even at low signal-to-noise ratio. In recent years, the trend in error control code development has been toward iterative decoding algorithms, which can lead to higher coding gain. In particular, the rediscovered low-density parity-check (LDPC) code has become the most prominent code after the introduction of the Turbo code, since it is the code closest to the well-known Shannon limit. However, because the block size used in LDPC is usually very large and the parity matrix is quite random, hardware implementation of LDPC is very difficult: it may require a significant number of arithmetic units as well as a very complex routing topology. Therefore, this thesis addresses several design issues of the LDPC decoder. First, for the case where no SNR estimation is available, simulation results of several LDPC architectures are provided, showing that some architectures can achieve performance close to those with SNR estimation. Secondly, a novel message quantization method is proposed and applied in the LDPC decoder design to reduce the memory and table sizes as well as the routing complexity. Finally, several early termination schemes for LDPC are considered, and it is found that up to 42% of the bit-node operations can be saved.
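As a rough sketch of the early-termination idea mentioned in the abstract (not code from the thesis), an iterative LDPC decoder can stop as soon as the hard-decision word satisfies every parity check; the parity-check matrix H and the per-iteration update function below are placeholder inputs:

    import numpy as np

    def syndrome_ok(H, hard_bits):
        # All parity checks satisfied <=> all-zero syndrome over GF(2)
        return not np.any(H.dot(hard_bits) % 2)

    def decode_with_early_termination(H, llr_init, update_fn, max_iters=50):
        # update_fn stands in for one full round of check/bit-node message passing
        llr = llr_init.copy()
        for it in range(max_iters):
            llr = update_fn(llr)
            hard = (llr < 0).astype(np.uint8)   # negative LLR -> bit 1
            if syndrome_ok(H, hard):
                return hard, it + 1             # stop early; remaining bit-node work is saved
        return hard, max_iters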
22

Coding and error correction in multi-level NAND memories

Ευταξιάδης, Ευστράτιος, Μπίκας, Γεώργιος 09 October 2014 (has links)
MLC NAND Flash memories play a leading role in data storage, as they offer large storage capacity thanks to their high density, low cost, and low power requirements. For these reasons, it has become possible to move from hard disk drives (HDDs) to the era of Solid State Drives (SSDs), which are a major step toward storing data efficiently and reliably. However, the presence of errors in MLC NAND Flash memories, due to phenomena such as the aging of the material, makes the application of error-correcting codes (ECC) necessary in order to keep reliability at the desired level. The goal of this thesis is, first, the development of a parameterizable MLC NAND Flash memory model for simulating the occurrence of errors; then, the use of soft-decision Low Density Parity Check (LDPC) codes for error correction in a way that extends the memory's lifetime; and finally, the computation of the Life Time Capacity, which is the total amount of information that can be stored in a memory over its entire lifetime.
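The Life Time Capacity idea can be illustrated with a small back-of-the-envelope sketch (all parameters and the wear model here are invented, not taken from the thesis): keep accumulating written data over program/erase cycles for as long as the assumed raw bit error rate stays within what the ECC can correct.

    def lifetime_capacity(page_bits, pages_per_cycle, rber_of_cycle, rber_limit, max_cycles):
        # rber_of_cycle is an assumed wear model, e.g. lambda c: 1e-4 * (1 + c / 3000)
        total_bits = 0
        for cycle in range(1, max_cycles + 1):
            if rber_of_cycle(cycle) > rber_limit:   # decoder assumed to fail beyond this point
                break
            total_bits += page_bits * pages_per_cycle
        return total_bits

    # Example with made-up numbers: 16 KiB pages, 256 pages written per cycle
    cap = lifetime_capacity(16 * 8192, 256, lambda c: 1e-4 * (1 + c / 3000), 4e-3, 100000)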
23

Study of the behavior of LDPC decoders in the Error Floor region

Γιαννακοπούλου, Γεωργία 07 May 2015 (has links)
In BER plots, which are used to evaluate a decoding system, the Error Floor region is sometimes observed at low noise levels; there, the decoder performance no longer improves as noise is reduced. When a simulation is executed in software, the Error Floor region is usually not visible, so the main goal is the prediction of the decoder's behavior, as well as the general improvement of its performance in that region. In this thesis, we study the conditions which result in a decoding failure for specific codewords and in the activation of Trapping Sets. Trapping Sets are structures in a code that seem to be the main cause of the Error Floor in BER plots. For the purpose of our study, we use the AWGN channel model and a linear block code with a low-density parity-check (LDPC) matrix, while iterative decoding simulations are executed by splitting the parity-check matrix into layers (Layered Decoding) and by using Message Passing algorithms. We propose and analyze three new modified algorithms and study the effects caused by data quantization. Finally, we determine the noise effects on the decoding procedure and develop a semi-analytical model for calculating the probability of a Trapping Set activation and the error probability during transmission.
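For context only, the sketch below shows a generic (unmodified) min-sum check-node update, one building block of the layered message-passing decoders studied here; it is not one of the modified algorithms proposed in the thesis.

    import numpy as np

    def min_sum_check_update(incoming_llrs):
        # For each edge, the outgoing message carries the sign product and the
        # minimum magnitude of all *other* incoming messages.
        v = np.asarray(incoming_llrs, dtype=float)
        signs = np.sign(v)
        signs[signs == 0] = 1.0
        total_sign = np.prod(signs)
        mags = np.abs(v)
        order = np.argsort(mags)
        min1, min2 = mags[order[0]], mags[order[1]]
        out_mags = np.where(np.arange(v.size) == order[0], min2, min1)
        return total_sign * signs * out_mags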
24

Implementation of iterative decoding of LDPC codes for wireless MIMO receivers

Φρέσκος, Σταμάτιος 08 March 2010 (has links)
Within the scope of this thesis we studied coding methods that use large parity-check matrices, as applied in previous studies. We chose to design a decoder based on the WiMAX IEEE 802.16e transmission standard, specifically using a transmitter and a receiver with more than one antenna. We present the theory related to this topic, both from the coding point of view and from the point of view of wireless MIMO transmission and the WiMAX standard. We analyze each part of the simulated system and present the simulation results.
25

Iterative Decoding Beyond Belief Propagation of Low-Density Parity-Check Codes

Planjery, Shiva Kumar January 2013 (has links)
The recent renaissance of one particular class of error-correcting codes called low-density parity-check (LDPC) codes has revolutionized the area of communications leading to the so-called field of modern coding theory. At the heart of this theory lies the fact that LDPC codes can be efficiently decoded by an iterative inference algorithm known as belief propagation (BP) which operates on a graphical model of a code. With BP decoding, LDPC codes are able to achieve an exceptionally good error-rate performance as they can asymptotically approach Shannon's capacity. However, LDPC codes under BP decoding suffer from the error floor phenomenon, an abrupt degradation in the error-rate performance of the code in the high signal-to-noise ratio region, which prevents the decoder from achieving very low error-rates. It arises mainly due to the sub-optimality of BP decoding on finite-length loopy graphs. Moreover, the effects of finite precision that stem from hardware realizations of BP decoding can further worsen the error floor phenomenon. Over the past few years, the error floor problem has emerged as one of the most important problems in coding theory with applications now requiring very low error rates and faster processing speeds. Further, addressing the error floor problem while taking finite precision into account in the decoder design has remained a challenge. In this dissertation, we introduce a new paradigm for finite precision iterative decoding of LDPC codes over the binary symmetric channel (BSC). These novel decoders, referred to as finite alphabet iterative decoders (FAIDs), are capable of surpassing the BP in the error floor region at a much lower complexity and memory usage than BP without any compromise in decoding latency. The messages propagated by FAIDs are not quantized probabilities or log-likelihoods, and the variable node update functions do not mimic the BP decoder. Rather, the update functions are simple maps designed to ensure a higher guaranteed error correction capability which improves the error floor performance. We provide a methodology for the design of FAIDs on column-weight-three codes. Using this methodology, we design 3-bit precision FAIDs that can surpass the BP (floating-point) in the error floor region on several column-weight-three codes of practical interest. While the proposed FAIDs are able to outperform the BP decoder with low precision, the analysis of FAIDs still proves to be a difficult issue. Furthermore, their achievable guaranteed error correction capability is still far from what is achievable by the optimal maximum-likelihood (ML) decoding. In order to address these two issues, we propose another novel class of decoders called decimation-enhanced FAIDs for LDPC codes. For this class of decoders, the technique of decimation is incorporated into the variable node update function of FAIDs. Decimation, which involves fixing certain bits of the code to a particular value during decoding, can significantly reduce the number of iterations required to correct a fixed number of errors while maintaining the good performance of a FAID, thereby making such decoders more amenable to analysis. We illustrate this for 3-bit precision FAIDs on column-weight-three codes and provide insights into the analysis of such decoders. We also show how decimation can be used adaptively to further enhance the guaranteed error correction capability of FAIDs that are already good on a given code. 
The proposed adaptive decimation scheme adds only marginal complexity but can significantly increase the slope of the error floor in the error-rate performance of a particular FAID. On certain high-rate column-weight-three codes of practical interest, we show that adaptive decimation-enhanced FAIDs can achieve a guaranteed error-correction capability that is close to the theoretical limit achieved by ML decoding.
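As a loose illustration of the interface such a variable-node map has on a column-weight-three code (two extrinsic incoming messages plus the channel value, mapped onto a 7-level alphabet), the sketch below uses a placeholder quantized weighted sum; the actual FAID maps in the dissertation are table-based functions chosen for error-floor performance, not this linear rule.

    import numpy as np

    LEVELS = np.array([-3, -2, -1, 0, 1, 2, 3])   # 7-level alphabet of a 3-bit FAID

    def faid_vn_update(m1, m2, channel, omega=0.6):
        # channel is +1 or -1 (BSC observation); omega is a placeholder weight
        s = m1 + m2 + omega * channel * LEVELS[-1]
        return LEVELS[np.argmin(np.abs(LEVELS - np.clip(s, -3, 3)))]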
26

Protograph-Based Generalized LDPC Codes: Enumerators, Design, and Applications

Abu-Surra, Shadi Ali January 2009 (has links)
Among the recent advances in the area of low-density parity-check (LDPC) codes, protograph-based LDPC codes have the advantages of a simple design procedure and highly structured encoders and decoders. These advantages can also be exploited in the design of protograph-based generalized LDPC (G-LDPC) codes. In this dissertation we provide analytical tools which aid the design of protograph-based LDPC and G-LDPC codes. Specifically, we propose a method for computing the codeword-weight enumerators for finite-length protograph-based G-LDPC code ensembles, and then we consider the asymptotic case when the block length goes to infinity. These results help the designer identify ensembles of protograph-based G-LDPC codes that are good in the minimum-distance sense (i.e., ensembles whose minimum distances grow linearly with the code length). Furthermore, good code ensembles can be characterized by good stopping-set, trapping-set, or pseudocodeword properties, which assist in the design of G-LDPC codes with low floors. We leverage our method for computing codeword-weight enumerators to compute stopping-set and pseudocodeword enumerators for the finite-length and asymptotic ensembles of protograph-based G-LDPC codes. Moreover, we introduce a method for computing trapping-set enumerators for finite-length (and asymptotic) protograph-based LDPC code ensembles. Trapping-set enumeration for G-LDPC codes represents a more complex problem which we do not consider here. Inspired by our method for computing trapping-set enumerators for protograph-based LDPC code ensembles, we develop an algorithm for estimating the trapping-set enumerators of a specific LDPC code given its parity-check matrix. We use this algorithm to enumerate trapping sets for several LDPC codes from communication standards. Finally, we study coded-modulation schemes with LDPC codes and pulse position modulation (LDPC-PPM) over the free-space optical channel. We present three different decoding schemes and compare their performance. In addition, we develop a new density evolution tool for use in the design of LDPC codes with good performance over this channel.
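As background for the protograph construction (a standard copy-and-permute lifting, not the enumerator machinery of the dissertation), a parity-check matrix of one ensemble member can be obtained by replacing each base-matrix entry with a permutation block:

    import numpy as np

    def lift_protograph(base, Z, seed=None):
        # Each nonzero proto-matrix entry becomes a random Z x Z permutation block,
        # each zero a Z x Z zero block. (Parallel edges would use sums of permutations.)
        rng = np.random.default_rng(seed)
        rows, cols = base.shape
        H = np.zeros((rows * Z, cols * Z), dtype=np.uint8)
        for i in range(rows):
            for j in range(cols):
                if base[i, j]:
                    H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.eye(Z, dtype=np.uint8)[rng.permutation(Z)]
        return H

    # Example: a small 2 x 4 proto-matrix lifted by Z = 8
    H = lift_protograph(np.array([[1, 1, 1, 0], [0, 1, 1, 1]]), Z=8, seed=0)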
27

Capacity estimation and code design principles for continuous phase modulation (CPM)

Ganesan, Aravind 30 September 2004 (has links)
Continuous Phase Modulation is a popular digital modulation scheme for systems which have tight spectral efficiency and Peak-to-Average Ratio (PAR) constraints. In this thesis we propose a method of estimating the capacity of a Continuous Phase Modulation (CPM) system and also describe techniques for the design of codes for this system. We note that the CPM modulator can be decomposed into a trellis code followed by a memoryless modulator. This decomposition enables us to perform iterative demodulation of the signal and improve the performance of the system. Thus we have the option of either iterative demodulation, where the channel decoder and the demodulator are invoked in an iterative fashion, or non-iterative demodulation, where the demodulation is performed only once, followed by the decoding of the message. We highlight the recent results in the estimation of capacity for channels with memory and apply them to a CPM system. We estimate two different types of capacity of the CPM system over an Additive White Gaussian Noise (AWGN) channel. The first capacity assumes that optimum demodulation and decoding are performed, and the second one assumes that the demodulation is done only once. Having obtained the capacity of the system, we try to approach this capacity by designing outer codes matched to the CPM system. We utilize LDPC codes, since they can be designed to perform very close to the capacity limit of the system. The design complexity for LDPC codes can be reduced by assuming that the input to the decoder is Gaussian distributed. We explore three different ways of approximating the CPM demodulator output by a Gaussian distribution and use them to design LDPC codes for a Bit Interleaved Coded Modulation (BICM) system. Finally, we describe the design of Multi Level Codes (MLC) for CPM systems using the capacity matching rule.
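One common way to estimate such a capacity numerically (a generic Monte-Carlo estimate for binary-input symmetric channels with LLR outputs, not the specific procedure of the thesis) is I ~ 1 - E[log2(1 + exp(-(1 - 2b) L))], where b is the transmitted bit and L the demodulator LLR; the consistent-Gaussian LLR model in the example mirrors the Gaussian approximation discussed above.

    import numpy as np

    def mutual_info_from_llrs(llrs, bits):
        # Monte-Carlo mutual-information estimate for a symmetric binary-input channel
        x = (1 - 2 * np.asarray(bits)) * np.asarray(llrs, dtype=float)
        return 1.0 - np.mean(np.logaddexp(0.0, -x)) / np.log(2.0)

    # Example under the consistent-Gaussian assumption L | b ~ N((1 - 2b)*m, 2m)
    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, 100_000)
    m = 2.0
    llrs = (1 - 2 * bits) * m + rng.normal(0.0, np.sqrt(2 * m), bits.size)
    print(mutual_info_from_llrs(llrs, bits))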
28

Modern coding schemes for unequal error protection

Deetzen, Neele von January 2009 (has links)
Also published as doctoral dissertation, University of Bremen, 2009
29

Iterative coding and equalization methods for optical communications

Schorr, Torsten January 2006 (has links)
Also published as doctoral dissertation, Technical University of Kaiserslautern, 2006
30

Optimized belief propagation decoding for low delay applications in digital communications

Hehn, Thorsten. January 2009 (has links)
Also published as doctoral dissertation, University of Erlangen-Nuremberg, 2009.
