21 |
Linear Interactive Encoding and Decoding Schemes for Lossless Source Coding with Decoder Only Side InformationMeng, Jin January 2008 (has links)
Near-lossless source coding with side information only at the decoder was first considered by Slepian and Wolf in the 1970s and has recently been rediscovered owing to applications such as sensor networks and distributed video coding. Suppose X is a source and Y is the side information. The coding scheme proposed by Slepian and Wolf, called SW coding, in which information flows only from the encoder to the decoder, was shown to achieve the rate H(X|Y) asymptotically for stationary ergodic source pairs, but, as shown by Yang and He, not for the non-ergodic case. Recently, a new source coding paradigm called interactive encoding and decoding (IED) was proposed for near-lossless coding with side information only at the decoder, where information flows both ways, from the encoder to the decoder and vice versa.
The results of Yang and He show that IED schemes are much more appealing than SW coding schemes for applications where interaction between the encoder and the decoder is possible. However, the IED schemes proposed by Yang and He do not have an intrinsic structure that is amenable to practical design and implementation. Towards practical design, we restrict the encoding method to linear block codes, resulting in linear IED schemes. It is then shown that this restriction does not undermine the asymptotic performance of IED. Another step towards practical IED schemes is to make the computational complexity incurred by encoding and decoding feasible. In the framework of linear IED, a scheme can be conveniently described by parity-check matrices, and we obtain an interesting trade-off between the density of the associated parity-check matrices and the resulting symbol error probability.
To implement the idea of linear IED and follow the insight provided by the result above, Low-Density Parity-Check (LDPC) codes and Belief Propagation (BP) decoding are utilized. A successive LDPC code is proposed, along with a new BP decoding algorithm that applies to the case where the correlation between Y and X can be modeled as a finite-state channel. Finally, simulation results show that linear IED schemes are indeed superior to SW coding schemes.
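As a concrete illustration of the H(X|Y) benchmark that both SW coding and IED aim for, the following sketch (not from the thesis; it assumes the simple case of a uniform binary source X whose side information Y is X observed through a binary symmetric channel) compares the minimum rate with and without decoder side information.

```python
import numpy as np

def binary_entropy(p: float) -> float:
    """Entropy in bits of a Bernoulli(p) source."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Hypothetical correlation model: Y = X xor Z, Z ~ Bernoulli(p), X uniform.
# Then H(X|Y) = h(p), while coding X without side information needs H(X) = 1 bit.
for p in (0.01, 0.05, 0.11):
    rate_with_si = binary_entropy(p)      # Slepian-Wolf / IED benchmark H(X|Y)
    rate_without_si = 1.0                 # H(X) for a uniform binary source
    print(f"p = {p:.2f}:  H(X|Y) = {rate_with_si:.3f} b/symbol  "
          f"vs  H(X) = {rate_without_si:.1f} b/symbol")
```

For p = 0.05, for example, decoder side information reduces the minimum rate from 1 bit to roughly 0.29 bits per symbol, which is the kind of gain both SW coding and IED exploit.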
|
23 |
Multiterminal source coding: sum-rate loss, code designs, and applications to video sensor networksYang, Yang 15 May 2009 (has links)
Driven by a host of emerging applications (e.g., sensor networks and wireless video), distributed source coding (i.e., Slepian-Wolf coding, Wyner-Ziv coding, and various other forms of multiterminal source coding) has recently become a very active research area. This dissertation focuses on the multiterminal (MT) source coding problem and consists of three parts. The first part studies the sum-rate loss of an important special case of quadratic Gaussian multiterminal source coding, where all sources are positively symmetric and all target distortions are equal. We first give the minimum sum-rate for joint encoding of Gaussian sources in the symmetric case, and then show that the supremum of the sum-rate loss due to distributed encoding in this case is (1/2) log2(5/4) = 0.161 b/s when L = 2 and increases on the order of (√L/2) log2(e) b/s as the number of terminals L goes to infinity. The supremum sum-rate loss of 0.161 b/s in the symmetric case equals that in general quadratic Gaussian two-terminal source coding without the symmetric assumption. It is conjectured that this equality holds for any number of terminals.
In the second part, we present two practical MT coding schemes under the framework of Slepian-Wolf coded quantization (SWCQ) for both direct and indirect MT problems. The first, the asymmetric SWCQ scheme, relies on quantization and Wyner-Ziv coding, and it is implemented via source splitting to achieve any point on the sum-rate bound. In the second, conceptually simpler scheme, symmetric SWCQ, the two quantized sources are compressed using symmetric Slepian-Wolf coding via a channel code partitioning technique that is capable of achieving any point on the Slepian-Wolf sum-rate bound. Our practical designs employ trellis-coded quantization and turbo/LDPC codes for both asymmetric and symmetric Slepian-Wolf coding. Simulation results show a gap of only 0.139-0.194 bit per sample away from the sum-rate bound for both direct and indirect MT coding problems.
The third part applies the above two MT coding schemes to two practical sources, i.e., stereo video sequences, to save the sum rate over independent coding of both sequences. Experiments with both schemes on stereo video sequences using H.264, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly smaller sum rate than separate H.264 coding of both sequences at the same video quality.
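The two quantities quoted for the symmetric case can be reproduced directly; the short check below (an illustration added here, not part of the dissertation) evaluates the two-terminal supremum (1/2) log2(5/4) and the stated (√L/2) log2(e) growth order for a few values of L.

```python
import math

# Supremum of the sum-rate loss for L = 2 terminals: (1/2) * log2(5/4).
two_terminal_loss = 0.5 * math.log2(5 / 4)
print(f"L = 2 supremum sum-rate loss: {two_terminal_loss:.3f} b/s")   # ~0.161 b/s

# Growth order quoted for large L: (sqrt(L) / 2) * log2(e) b/s.
# These are asymptotic orders only, not exact losses at finite L.
for L in (10, 100, 1000):
    order = (math.sqrt(L) / 2) * math.log2(math.e)
    print(f"L = {L:4d}: (sqrt(L)/2) * log2(e) = {order:.2f} b/s")
```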
|
24 |
Design And Performance Of Capacity Approaching Irregular Low-density Parity-check CodesBardak, Erinc Deniz 01 September 2009 (has links) (PDF)
In this thesis, design details of binary irregular Low-Density Parity-Check (LDPC) codes are investigated. We especially focus on the trade-off between the average variable node degree, wa, and the number of length-6 cycles of an irregular code. We observe that the performance of the irregular code improves with increasing wa up to a critical value, but deteriorates for larger wa because of the exponential increase in the number of length-6 cycles. We have designed an irregular code of length 16,000 bits with average variable node degree wa=3.8, which we call '2/3/13' since it has some variable nodes of degree 2 and 13 in addition to the majority of degree-3 nodes. The observed performance is found to be very close to that of capacity-approaching commercial codes. The time spent decoding 50,000 codewords of length 1800 at Eb/No=1.6 dB for an irregular 2/3/13 code is measured to be 19% less than that of the regular (3, 6) code, mainly because of the smaller number of decoding failures.
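To make the '2/3/13' notion concrete, the sketch below (an added illustration; the node fractions are hypothetical, since the thesis abstract does not give them) shows how an average variable node degree such as wa = 3.8 arises from a mix of degree-2, degree-3, and degree-13 variable nodes.

```python
# Hypothetical fractions of variable nodes of each degree; they must sum to 1.
# These specific numbers are illustrative only -- the real 2/3/13 design in the
# thesis may use a different split that also averages to 3.8.
degree_fractions = {2: 0.02, 3: 0.898, 13: 0.082}

assert abs(sum(degree_fractions.values()) - 1.0) < 1e-9

w_a = sum(d * f for d, f in degree_fractions.items())
print(f"average variable node degree w_a = {w_a:.2f}")   # prints 3.80
```

Raising the fraction of degree-13 nodes is what pushes wa upward, which, according to the abstract, is also what eventually inflates the number of length-6 cycles and hurts performance.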
|
25 |
Analysis Techniques for LDPC Codes for the Detection of Trapping Sets, with Application to the Codes of the IEEE 802.11n StandardΒασιλόπουλος, Χρήστος 09 October 2014 (has links)
Today the demands for both the volume of information to be transmitted and the reliable, protected transmission of that information are particularly high. Error detection and correction plays a decisive role here: error-correcting codes, which appear in nearly every aspect of everyday life and beyond, protect data from corruption and are used, for example, in storage devices, mobile telephony, wireless networks, and even satellite communication. LDPC codes are one such family of codes with a wide range of applications, and they rank among the best in the field of error detection and correction. To keep the information intact, however, reliable and successful decoding after reception of the data is essential.
The problem with iterative decoding of LDPC codes arises when there are cycles in the parity-check matrix and the Tanner graph; structures called trapping sets then appear, which cause the bit error rate curve to deviate from its expected behavior. In these cases the curve changes slope beyond a certain point, and the error floor is raised. The approach followed in this thesis is to study the characteristics of codes by counting their trapping sets.
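For readers unfamiliar with the (a, b) terminology commonly used for trapping sets, the sketch below (an illustration added here, not code from the thesis) classifies subsets of variable nodes of a toy parity-check matrix: a is the number of variable nodes in the set and b is the number of check nodes connected to the set an odd number of times.

```python
import numpy as np
from itertools import combinations

def classify_trapping_set(H: np.ndarray, var_nodes: tuple) -> tuple:
    """Return (a, b) for the subgraph induced by the given variable nodes:
    a = |set|, b = number of check nodes with odd degree in that subgraph."""
    sub = H[:, list(var_nodes)]
    odd_checks = int(np.sum(sub.sum(axis=1) % 2 == 1))
    return len(var_nodes), odd_checks

# Toy parity-check matrix (not an 802.11n matrix -- just a small example).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1],
              [0, 0, 0, 1, 1, 1]])

# Brute-force scan of all 3-variable subsets, reporting those with few odd checks.
for subset in combinations(range(H.shape[1]), 3):
    a, b = classify_trapping_set(H, subset)
    if b <= 1:
        print(f"candidate ({a},{b}) trapping set: variable nodes {subset}")
```

Sets with small a and small b are the structures of interest, since only a few parity checks ever signal that something is wrong inside them.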
|
26 |
EXPERIMENTAL DEMONSTRATION OF MITIGATION OF LINEAR AND NONLINEAR IMPAIRMENTS IN FIBER-OPTIC COMMUNICATION SYSTEMS BY LDPC-CODED TURBO EQUALIZATIONMinkov, Lyubomir L. January 2009 (has links)
The ever-increasing demand for transmission capacity is driving the rapid evolution of optical communication systems. Channel transmission at 100 Gb/s is already being considered by network operators. The major transmission impairments at these high rates are intra-channel and inter-channel nonlinearities, nonlinear phase noise, and polarization mode dispersion. By implementing LDPC-coded modulation schemes with soft decoding and the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm for equalization, we have demonstrated significant improvements in the performance of systems experiencing several impairments simultaneously. The new turbo-equalization scheme is used as a means to simultaneously mitigate both linear and nonlinear impairments. This approach is general and applicable to both direct and coherent detection. We provide a comprehensive study of LDPC codes suitable for implementation in high-speed optical transmission systems. We determine the channel capacity based on the forward step of the BCJR algorithm and show that by using LDPC codes we can closely approach the maximum achievable transmission capacity. We propose a multilevel maximum a posteriori probability (MAP) turbo-equalization scheme based on a multilevel BCJR algorithm and an LDPC decoder, which treats independent symbols transmitted over both polarizations as two-dimensional super-symbols. The use of multilevel modulation schemes provides higher spectral efficiency, while all related signal processing is performed at lower symbol rates, where PMD compensation and fiber nonlinearity mitigation are more manageable. We show a significant improvement in system performance over a system employing an equalizer that treats symbols transmitted in different polarizations as independent.
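The BCJR equalizer at the core of the turbo-equalization scheme can be illustrated on a much simpler toy setting than the optical channels studied in the dissertation. The sketch below (added for illustration; the two-tap real ISI channel, BPSK mapping, and noise variance are all assumptions, not the dissertation's channel model) runs one probability-domain forward-backward pass and returns the a posteriori probability of each transmitted symbol.

```python
import numpy as np

def bcjr_equalize(y, h, sigma2):
    """MAP symbol detection for y[k] = h[0]*x[k] + h[1]*x[k-1] + noise,
    with BPSK symbols x in {+1, -1} and equiprobable inputs.
    Returns P(x[k] = +1 | y) for every k.  State = previous symbol."""
    n = len(y)
    states = (+1, -1)                            # state is x[k-1]

    def gamma(k, s_prev, x):
        mean = h[0] * x + h[1] * s_prev
        return 0.5 * np.exp(-(y[k] - mean) ** 2 / (2 * sigma2))

    # Forward recursion (alpha), assuming the channel starts in state +1.
    alpha = np.zeros((n + 1, 2))
    alpha[0] = [1.0, 0.0]
    for k in range(n):
        for j, s_next in enumerate(states):      # s_next equals x[k]
            alpha[k + 1, j] = sum(alpha[k, i] * gamma(k, s_prev, s_next)
                                  for i, s_prev in enumerate(states))
        alpha[k + 1] /= alpha[k + 1].sum()       # normalize against underflow

    # Backward recursion (beta), with no constraint on the final state.
    beta = np.zeros((n + 1, 2))
    beta[n] = [0.5, 0.5]
    for k in range(n - 1, -1, -1):
        for i, s_prev in enumerate(states):
            beta[k, i] = sum(beta[k + 1, j] * gamma(k, s_prev, s_next)
                             for j, s_next in enumerate(states))
        beta[k] /= beta[k].sum()

    # A posteriori probabilities of x[k] = +1.
    app = np.zeros(n)
    for k in range(n):
        num = sum(alpha[k, i] * gamma(k, s_prev, +1) * beta[k + 1, 0]
                  for i, s_prev in enumerate(states))
        den = num + sum(alpha[k, i] * gamma(k, s_prev, -1) * beta[k + 1, 1]
                        for i, s_prev in enumerate(states))
        app[k] = num / den
    return app

# Toy example: known symbols sent over the assumed 2-tap channel with mild noise.
rng = np.random.default_rng(0)
x = np.array([+1, -1, -1, +1, +1, -1])
h, sigma2 = (1.0, 0.5), 0.2
y = h[0] * x + h[1] * np.concatenate(([+1], x[:-1])) + rng.normal(0, np.sqrt(sigma2), x.size)
print(np.round(bcjr_equalize(y, h, sigma2), 3))  # high where x = +1, low where x = -1 (for most noise draws)
```

In an actual turbo-equalization loop, such a posteriori probabilities would be converted to extrinsic information and exchanged with the LDPC decoder over several iterations.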
|
27 |
Iterative Decoding of Codes on GraphsSankaranarayanan, Sundararajan January 2006 (has links)
The growing popularity of a class of linear block codes called low-density parity-check (LDPC) codes can be attributed to the low complexity of their iterative decoders and their potential to achieve performance very close to the Shannon capacity. This makes them an attractive candidate for ECC applications in communication systems. This report proposes methods to systematically construct regular and irregular LDPC codes. A class of regular LDPC codes is constructed from incidence structures in finite geometries, such as projective and affine geometries. A class of irregular LDPC codes is constructed by systematically splitting blocks of balanced incomplete block designs to achieve desired weight distributions. These codes are decoded iteratively using message-passing algorithms, and their performance over various channels is presented in this report. The application of iterative decoders is generally limited to codes whose graph representations are free of small cycles. Unfortunately, the large class of conventional algebraic codes, such as RS codes, has many four-cycles in its graph representations. This report proposes an algorithm that aims to alleviate this drawback by constructing an equivalent graph representation that is free of four-cycles. It is shown theoretically that the four-cycle-free representation is better suited to iterative erasure decoding than the conventional representation. The new representation is also exploited to realize, with limited success, iterative decoding of Reed-Solomon codes over the additive white Gaussian noise channel. Wiberg, Forney, Richardson, Koetter, and Vontobel have made significant contributions in developing theoretical frameworks that facilitate finite-length analysis of codes. With the exception of Richardson's, most of these frameworks are best suited to the analysis of short codes. In this report, we further the understanding of the failures of iterative decoders on the binary symmetric channel. The failures of the decoder are classified into two categories by defining trapping sets and propagating sets. Such a classification leads to a successful estimation of the performance of codes under the Gallager B decoder. In particular, the estimation techniques show great promise in the high signal-to-noise ratio regime, where simulation techniques are less feasible.
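A quick way to see whether a graph representation is free of four-cycles, in the sense used above, is to check the parity-check matrix directly: a four-cycle exists exactly when two rows of H share ones in two or more columns. The sketch below (an added illustration with small made-up matrices, not one of the report's constructions) performs that check.

```python
import numpy as np

def has_four_cycle(H: np.ndarray) -> bool:
    """A Tanner graph has a cycle of length 4 iff some pair of rows of H
    overlaps in at least two column positions, i.e. some off-diagonal
    entry of H @ H.T is >= 2."""
    overlap = H @ H.T
    np.fill_diagonal(overlap, 0)
    return bool((overlap >= 2).any())

# Two rows sharing columns 0 and 1 -> a four-cycle.
H_bad = np.array([[1, 1, 0, 0],
                  [1, 1, 1, 0],
                  [0, 0, 1, 1]])

# No pair of rows shares more than one column -> four-cycle free (girth >= 6).
H_good = np.array([[1, 1, 0, 0],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1]])

print(has_four_cycle(H_bad), has_four_cycle(H_good))   # True False
```

The report's algorithm seeks an equivalent representation for which this test passes even though the conventional parity-check matrix of an RS code fails it.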
|
28 |
Analysis of Failures of Decoders for LDPC CodesChilappagari, Shashi Kiran January 2008 (has links)
Ever since the publication of Shannon's seminal work in 1948, the search for capacity-achieving codes has led to many interesting discoveries in channel coding theory. Low-density parity-check (LDPC) codes, originally proposed in 1963, were largely forgotten and have been rediscovered recently. The significance of LDPC codes lies in their capacity-approaching performance even when decoded using low-complexity sub-optimal decoding algorithms. Iterative decoders are one such class of decoders; they work on a graphical representation of a code known as the Tanner graph. Their properties are well understood in the asymptotic limit of the code length going to infinity. However, the behavior of various decoders for a given finite-length code remains largely unknown. An understanding of the failures of the decoders is vital for the error floor analysis of a given code. Broadly speaking, the error floor is the abrupt degradation in the frame error rate (FER) performance of a code in the high signal-to-noise ratio domain. Since the error floor phenomenon manifests itself in regions not reachable by Monte-Carlo simulations, analytical methods are necessary for characterizing the decoding failures. In this work, we consider hard-decision decoders for transmission over the binary symmetric channel (BSC). For column-weight-three codes, we provide tight upper and lower bounds on the guaranteed error correction capability of a code under the Gallager A algorithm by studying combinatorial objects known as trapping sets. For higher column weight codes, we establish bounds on the minimum number of variable nodes that achieve a certain expansion as a function of the girth of the underlying Tanner graph, thereby obtaining lower bounds on the guaranteed error correction capability. We explore the relationship between a class of graphs known as cage graphs and trapping sets to establish upper bounds on the error correction capability. We also propose an algorithm to identify the most probable noise configurations, also known as instantons, that lead to the error floor for linear programming (LP) decoding over the BSC. With the insight gained from the above analysis techniques, we propose novel code construction techniques that result in codes with superior error floor performance.
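To give a flavor of hard-decision decoding over the BSC, the sketch below implements a serial bit-flipping decoder — a deliberately simpler relative of the Gallager A/B decoders analyzed in this dissertation, added here only as an illustration. It uses the (7,4) Hamming parity-check matrix as a small stand-in code and repeatedly flips the bit that participates in the most unsatisfied checks.

```python
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],      # (7,4) Hamming code, used here only
              [0, 1, 1, 0, 0, 1, 1],      # as a small stand-in parity-check matrix
              [0, 0, 0, 1, 1, 1, 1]])

def serial_bit_flip(H, r, max_iters=20):
    """Hard-decision decoding over the BSC: repeatedly flip the single bit that
    sits in the largest number of unsatisfied checks, until the syndrome is zero."""
    x = r.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x, True
        unsat_counts = H.T @ syndrome          # per-bit count of unsatisfied checks
        x[np.argmax(unsat_counts)] ^= 1        # flip the worst offender
    return x, False

# Try every single-bit error pattern on the all-zero codeword.
for pos in range(7):
    r = np.zeros(7, dtype=int)
    r[pos] = 1                                  # BSC flips exactly one bit
    decoded, ok = serial_bit_flip(H, r)
    print(f"error at bit {pos}: corrected = {ok and not decoded.any()}")
```

Failures of decoders of this kind on longer codes are precisely what the trapping-set and expansion arguments above aim to bound.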
|
29 |
Error Correction Using RS-LDPC CodesΓκίκα, Ζαχαρούλα 07 June 2013 (has links)
Nowadays, almost every telecommunication system that aims at high transmission rates has adopted error correction codes in order to increase its reliability while decreasing the required transmission power. The information signal is transmitted over a communication channel in the presence of noise, and error correction codes allow the system to detect and correct the errors that the channel introduces into the transmitted information. LDPC (Low Density Parity Check) codes form a large family of error-correcting linear block codes with excellent performance, close to the Shannon limit.
In this thesis we study LDPC codes and the corresponding hardware designs. LDPC codes are increasingly used in applications that require reliable and highly efficient transmission in the presence of strong noise. Their construction is based on low-density parity-check matrices, and decoding is performed with iterative belief propagation algorithms. They perform very well at high noise levels, but at lower noise levels they suffer from the error floor effect. We first present a thorough analysis of an algebraic method for constructing regular LDPC codes based on Reed-Solomon codes with two information symbols. This method yields a parity-check matrix H whose Tanner graph contains no cycles of length 4 (so the girth is at least 6); short cycles in the Tanner graph can trap the decoder in states from which it cannot detect and correct the errors introduced during transmission. Using this method we can therefore build simply structured codes which, combined with iterative decoding algorithms, lead to decoders with excellent error-correcting capability and an error floor at very low BER values. Furthermore, parity-check matrices of this type impose a specific structure on the generator matrix G used for encoding, so we also study how to construct a systematic generator matrix G, which greatly simplifies the encoding process. All of the above steps are carried out to construct the (2048, 1723) RS-LDPC code, a rate-0.84 code adopted in the IEEE 802.3an standard for 10GBASE-T Ethernet and of particular interest because of its performance. For this code we propose a design for the encoder, the decoder, and all the additional circuits required to build a complete system for transmitting, receiving, and correcting data.
With this background in place, we implemented the system design in VHDL and ran the necessary simulations (ModelSim). We then synthesized the design (XST tool, Xilinx ISE) and implemented it on an FPGA (Virtex 5 XC5VLX330T-1FF1738), which allowed much faster simulations, especially at low noise levels, than the corresponding software implementations (MATLAB). By running experiments in hardware we evaluate the error-correcting capability of the decoding algorithm and compare the results with those of the software implementations. We also study how the error-correcting capability of the algorithm varies with the number of iterations it performs. Finally, we measure the decoder throughput, so that if a specific data-processing rate is required, the number of decoders needed to achieve it can be estimated.
|
30 |
FPGA Implementation of Low Density Parity Check Codes DecoderVijayakumar, Suresh 08 1900 (has links)
Reliable communication over noisy channels has become one of the major concerns in the field of digital wireless communications. Low-density parity-check (LDPC) codes have recently gained a lot of attention because of their excellent error-correcting capability. They were first proposed by Robert G. Gallager in 1960. LDPC codes belong to the class of linear block codes, and near-capacity performance is achievable in a large collection of data transmission and storage applications. In my thesis I have focused on the hardware implementation of (3, 6)-regular LDPC codes. A fully parallel decoder would require overly complex hardware, whereas a partly parallel decoder offers an effective compromise between decoding throughput and hardware complexity. Decoding of the codeword follows the belief propagation (also known as probability propagation) algorithm in the log domain. A 9216-bit, (3, 6)-regular LDPC code with code rate 1/2 was implemented on an FPGA targeting the Xilinx Virtex 4 XC4VLX80 device with package FF1148. This decoder achieves a maximum throughput of 82 Mbps. The entire model was designed in VHDL in the Xilinx ISE 9.2 environment.
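The belief propagation decoding implemented in the thesis hardware can be prototyped in floating point before committing to VHDL. The sketch below (an added reference-model illustration; the small parity-check matrix and channel LLRs are made up, and the real design uses a 9216-bit (3, 6)-regular code) runs log-domain sum-product message passing until the syndrome is satisfied.

```python
import numpy as np

def bp_decode(H, llr_ch, max_iters=20):
    """Log-domain sum-product decoding.  llr_ch[i] = log P(bit i = 0)/P(bit i = 1).
    Returns the hard decisions and whether a valid codeword was found."""
    m, n = H.shape
    edges = [(c, v) for c in range(m) for v in range(n) if H[c, v]]
    msg_vc = {e: llr_ch[e[1]] for e in edges}          # variable -> check messages
    msg_cv = {e: 0.0 for e in edges}                   # check -> variable messages

    for _ in range(max_iters):
        # Check-node update (tanh rule).
        for (c, v) in edges:
            prod = 1.0
            for v2 in range(n):
                if H[c, v2] and v2 != v:
                    prod *= np.tanh(msg_vc[(c, v2)] / 2.0)
            prod = np.clip(prod, -0.999999, 0.999999)  # keep arctanh finite
            msg_cv[(c, v)] = 2.0 * np.arctanh(prod)

        # Variable-node update and tentative hard decision.
        total = llr_ch.astype(float)
        for (c, v) in edges:
            total[v] += msg_cv[(c, v)]
        for (c, v) in edges:
            msg_vc[(c, v)] = total[v] - msg_cv[(c, v)]

        hard = (total < 0).astype(int)                 # LLR < 0 -> decide bit 1
        if not (H @ hard % 2).any():
            return hard, True
    return hard, False

# Toy example: (7,4) Hamming parity checks and hand-picked channel LLRs in which
# bit 6 looks (weakly) like a 1 even though the all-zero codeword was sent.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
llr_ch = np.array([4.0, 4.0, 3.2, 4.0, 4.0, 4.0, -0.8])

decoded, ok = bp_decode(H, llr_ch)
print("decoded:", decoded, "valid codeword:", ok)      # expect the all-zero word
```

A hardware version such as the partly parallel decoder described above would typically implement the same message-passing schedule with quantized messages.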
|