  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

The Design of Rate-Compatible Structured Low-Density Parity-Check Codes

Kim, Jaehong 14 November 2006 (has links)
The main objective of our research is to design practical low-density parity-check (LDPC) codes that provide a wide range of code rates in a rate-compatible fashion. To this end, we first propose a rate-compatible puncturing algorithm for LDPC codes at short block lengths (up to several thousand symbols). The proposed algorithm is based on the claim that a punctured LDPC code with a smaller level of recoverability has better performance. We verify the algorithm by comparing intentionally punctured LDPC codes (produced by the proposed algorithm) with randomly punctured ones; the intentionally punctured codes show better bit error rate (BER) performance at practically short block lengths. Even though the proposed puncturing algorithm performs well, two problems remain for our research objective: first, how to design an LDPC code whose structure is well suited to the puncturing algorithm; second, how to provide a wide range of rates, given the puncturing limit of the proposed algorithm. To attack these problems, we propose a new class of LDPC codes, called efficiently-encodable rate-compatible (E2RC) codes, in which the proposed puncturing concept is embedded. The E2RC codes have several strong points. First, they can be encoded efficiently: we present a low-complexity encoder implementation based on shift-register circuits, and we show that a simple erasure decoder can also be used for linear-time encoding, so a message-passing decoder can be shared between encoding and decoding in transceiver systems that require an encoder/decoder pair. Second, we show that the non-systematic part of the parity-check matrix is cycle-free, which ensures good code characteristics. Finally, E2RC codes, with their systematic rate-compatible puncturing structure, show better puncturing performance than other LDPC codes over the whole range of code rates.
The throughput of incremental redundancy (IR) hybrid automatic repeat request (HARQ) systems depends strongly on the performance of the high-rate codes. Since E2RC codes puncture well over the whole range of code rates, and especially at high puncturing rates, we verify that E2RC codes achieve higher throughput than other LDPC codes in IR-HARQ systems.
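To illustrate the rate-compatibility idea (the code parameters below are hypothetical, not from the thesis): puncturing p parity bits of an (n, k) mother code yields an effective (n − p, k) code, so a single mother code and one decoder serve a whole family of rates.

```python
# Sketch: effective rate of a punctured LDPC code. Puncturing p parity
# positions of an (n, k) mother code leaves an (n - p, k) code.

def punctured_rate(n: int, k: int, p: int) -> float:
    """Rate of an (n, k) mother code after puncturing p parity bits."""
    if p >= n - k:
        raise ValueError("cannot puncture all parity bits")
    return k / (n - p)

# A rate-compatible family from a hypothetical rate-1/2 mother code
# of length 1200:
family = [round(punctured_rate(1200, 600, p), 3) for p in (0, 200, 400, 500)]
# rates 0.5, 0.6, 0.75, 0.857
```

Which parity positions to puncture is exactly what the proposed algorithm decides; the arithmetic above only shows why one mother code covers the rate range.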
122

Circuit Design of LDPC Decoder for IEEE 802.16e systems

Wang, Jhih-hao 29 March 2010 (has links)
A circuit design of a Low-Density Parity-Check (LDPC) decoder for IEEE 802.16e systems with a new overlapped method is proposed in this thesis. The circuit supports the 19 modes corresponding to block sizes of 576, …, 2304. LDPC decoders are implemented by iterating variable-node and check-node processes. With our proposed overlapped method, the hardware utilization ratio can be raised from 50% to 100%, better than the traditional overlapped method: in [2], the traditional method raises the utilization ratio only from 50% to 75% for an IEEE 802.16e LDPC decoder with code rate 1/2. Under the same operating frequency, our proposed method therefore gains a further 25% over the traditional overlapped method [2]. We also propose two circuit techniques to increase the operating frequency: a faster comparison circuit in the comparison unit [1], and a carry-save adder (CSA) [8] in place of the common adder unit. The circuit is implemented in the TSMC 0.18 μm CMOS 1P6M process with a chip area of 3.11 x 3.08 mm². In gate-level simulation, the output data rate of the circuit exceeds 78.4 MHz, so the circuit meets the requirement of the IEEE 802.16e system.
123

Multiterminal source coding: sum-rate loss, code designs, and applications to video sensor networks

Yang, Yang 15 May 2009 (has links)
Driven by a host of emerging applications (e.g., sensor networks and wireless video), distributed source coding (i.e., Slepian-Wolf coding, Wyner-Ziv coding, and various other forms of multiterminal source coding) has recently become a very active research area. This dissertation focuses on the multiterminal (MT) source coding problem and consists of three parts. The first part studies the sum-rate loss of an important special case of quadratic Gaussian MT source coding, in which all sources are positively symmetric and all target distortions are equal. We first give the minimum sum-rate for joint encoding of Gaussian sources in the symmetric case, and then show that the supremum of the sum-rate loss due to distributed encoding in this case is (1/2) log2(5/4) = 0.161 b/s when L = 2 and increases on the order of (L/2) log2 e b/s as the number of terminals L goes to infinity. The supremum sum-rate loss of 0.161 b/s in the symmetric case equals that in general quadratic Gaussian two-terminal source coding without the symmetry assumption; we conjecture that this equality holds for any number of terminals. In the second part, we present two practical MT coding schemes under the framework of Slepian-Wolf coded quantization (SWCQ) for both direct and indirect MT problems. The first, asymmetric SWCQ, relies on quantization and Wyner-Ziv coding and is implemented via source splitting to achieve any point on the sum-rate bound. In the second, conceptually simpler scheme, symmetric SWCQ, the two quantized sources are compressed using symmetric Slepian-Wolf coding via a channel code partitioning technique that can achieve any point on the Slepian-Wolf sum-rate bound. Our practical designs employ trellis-coded quantization and turbo/LDPC codes for both asymmetric and symmetric Slepian-Wolf coding. Simulation results show a gap of only 0.139-0.194 bit per sample from the sum-rate bound for both direct and indirect MT coding problems.
The third part applies the two MT coding schemes to a practical source, stereo video sequences, to save sum rate over independent coding of the two sequences. Experiments with both schemes on stereo video, using H.264, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients, give a slightly smaller sum rate than separate H.264 coding of both sequences at the same video quality.
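The closed-form two-terminal figure quoted in the abstract can be evaluated directly:

```python
import math

# The supremum sum-rate loss for L = 2 quoted in the abstract:
# (1/2) * log2(5/4) bits per sample.
loss = 0.5 * math.log2(5 / 4)  # ≈ 0.161 b/s
```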
124

Design of Low-Cost Low-Density Parity-Check Code Decoder

Liao, Wei-Chung 06 September 2005 (has links)
With the enormous growth of mobile communication applications, reducing the power dissipation of wireless communication has become an important issue that attracts much attention. One key technique for low-power transmission is a powerful channel coding scheme with good error-correcting capability even at low signal-to-noise ratio (SNR). In recent years, error control code development has centered on iterative decoding algorithms, which yield higher coding gain. In particular, the rediscovered low-density parity-check (LDPC) code has become the most celebrated code after the introduction of the Turbo code, since it comes closest to the well-known Shannon limit. However, since the block size used in LDPC codes is usually very large and the parity-check matrix is quite random, hardware implementation of LDPC is very difficult: it may require a significant number of arithmetic units as well as a very complex routing topology. This thesis therefore addresses several design issues of an LDPC decoder. First, simulation results of several LDPC architectures operating without SNR estimation are provided, showing that some architectures achieve performance close to those with SNR estimation. Second, a novel message quantization method is proposed and applied in the LDPC design to reduce the memory and table sizes as well as the routing complexity. Finally, several early-termination schemes for LDPC decoding are considered, and it is found that up to 42% of the bit-node operations can be saved.
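The early-termination idea mentioned at the end can be sketched as the standard syndrome check (the toy matrix below is ours, not from the thesis): decoding stops as soon as the hard decisions satisfy every parity check, instead of running a fixed iteration count.

```python
import numpy as np

# Sketch: syndrome-based early termination. Iterative decoding stops once
# the hard-decision word x satisfies H @ x = 0 over GF(2).

def syndrome_ok(H: np.ndarray, x: np.ndarray) -> bool:
    """True if hard decisions x satisfy all checks of parity matrix H."""
    return not np.any((H @ x) % 2)

# Toy parity-check matrix of a (7, 4) code:
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
```

In a decoder loop, this test runs after each iteration; entries 42% figure refers to the node operations such a check avoids.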
125

Design And Performance Of Capacity Approaching Irregular Low-density Parity-check Codes

Bardak, Erinc Deniz 01 September 2009 (has links) (PDF)
In this thesis, design details of binary irregular Low-Density Parity-Check (LDPC) codes are investigated. We especially focus on the trade-off between the average variable node degree, wa, and the number of length-6 cycles of an irregular code. We observe that the performance of the irregular code improves with increasing wa up to a critical value, but deteriorates for larger wa because of the exponential increase in the number of length-6 cycles. We have designed an irregular code of length 16,000 bits with average variable node degree wa = 3.8, which we call '2/3/13' since it has some variable nodes of degree 2 and 13 in addition to the majority of degree-3 nodes. The observed performance is very close to that of capacity-approaching commercial codes. The time spent decoding 50,000 codewords of length 1800 at Eb/No = 1.6 dB with the irregular 2/3/13 code is measured to be 19% less than with the regular (3, 6) code, mainly because of the smaller number of decoding failures.
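The average variable node degree of such a mixed ensemble is just the fraction-weighted mean of the node degrees. The node fractions below are hypothetical (only wa = 3.8 and the degree set {2, 3, 13} come from the abstract):

```python
# Sketch: node-perspective average variable node degree of a '2/3/13'
# ensemble (degrees 2, 3, 13). The fractions here are one hypothetical
# mix that yields the abstract's w_a = 3.8.

def average_degree(fractions: dict) -> float:
    """Average degree: sum of degree * node fraction."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return sum(d * f for d, f in fractions.items())

mix = {2: 0.2, 3: 0.7, 13: 0.1}
wa = average_degree(mix)  # 0.4 + 2.1 + 1.3 = 3.8
```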
126

Code optimization and analysis for multiple-input and multiple-output communication systems

Yue, Guosen 01 November 2005 (has links)
Design and analysis of random-like codes for various multiple-input multiple-output (MIMO) communication systems are addressed in this work. Random-like codes have drawn significant interest because they offer capacity-achieving performance. We first consider the analysis and design of low-density parity-check (LDPC) codes for turbo multiuser detection in multipath CDMA channels. We develop techniques for computing the probability density function (pdf) of the extrinsic messages at the output of the soft-input soft-output (SISO) multiuser detectors as a function of the pdf of the input extrinsic messages, the user spreading codes, the channel impulse responses, and the signal-to-noise ratios. Using these techniques, we can accurately compute thresholds for LDPC codes and design good irregular LDPC codes. We then apply density evolution with mixture-Gaussian approximations to optimize irregular LDPC codes and to compute minimum operational signal-to-noise ratios for ergodic MIMO OFDM channels. In particular, the optimization is carried out for various MIMO OFDM system configurations, including different numbers of antennas, different channel models, and different demodulation schemes. We also study the coding-spreading trade-off in LDPC-coded CDMA systems employing multiuser joint decoding. We solve the coding-spreading optimization based on the extrinsic-information SNR evolution curves of the SISO multiuser detectors and the SISO LDPC decoders. Both single-cell and multi-cell scenarios are considered; for each case, we characterize the extrinsic information both for finite-size systems and for so-called large systems, where asymptotic performance results must be invoked. Finally, we consider the design optimization of irregular repeat-accumulate (IRA) codes for MIMO communication systems employing iterative receivers. We present a density-evolution-based procedure with Gaussian approximation for optimizing the IRA code ensemble, and adopt an approximation method based on linear programming to design an IRA code whose extrinsic information transfer (EXIT) chart is matched to that of the soft MIMO demodulator.
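To give the flavor of density evolution: on the binary erasure channel it reduces to a one-dimensional recursion (a much simpler stand-in for the mixture-Gaussian evolution over AWGN-like channels used in this work). For a (3,6)-regular LDPC ensemble the erasure probability at iteration l+1 is x = eps * (1 - (1 - x)^5)^2, and the threshold is the largest channel erasure rate eps for which x converges to zero.

```python
# Sketch: scalar density evolution for a (3,6)-regular LDPC ensemble on
# the binary erasure channel. Below the threshold (about eps ≈ 0.429),
# the erasure probability is driven to zero.

def converges(eps: float, iters: int = 2000) -> bool:
    x = eps
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** 5) ** 2
    return x < 1e-6
```

The thesis performs the analogous (but multidimensional) computation for SISO detector/decoder pairs; the BEC case only illustrates the fixed-point style of the analysis.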
127

SOQPSK with LDPC: Spending Bandwidth to Buy Link Margin

Hill, Terry, Uetrecht, Jim 10 1900 (has links)
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV / Over the past decade, SOQPSK has been widely adopted by the flight test community, and low-density parity-check (LDPC) codes are now in widespread use in many applications. This paper defines the waveform and presents the bit error rate (BER) performance of SOQPSK coupled with a rate-2/3 LDPC code. The scheme described here expands the transmission bandwidth by approximately 56% (still 22% less than the legacy PCM/FM modulation), for the benefit of improving link margin by over 10 dB at BER = 10⁻⁶.
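The bandwidth figure can be sanity-checked from the code rate alone; the gap between the raw 50% from coding and the quoted ~56% presumably reflects framing or synchronization overhead (our assumption, not stated in the abstract):

```python
# Sketch: bandwidth expansion attributable to the rate-2/3 code alone.
# A rate-R code sends 1/R coded bits per information bit.
R = 2 / 3
expansion = 1 / R - 1  # ≈ 0.5, i.e. 50% more bandwidth from coding
```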
128

Coding and error correction in multi-level NAND memories

Ευταξιάδης, Ευστράτιος, Μπίκας, Γεώργιος 09 October 2014 (has links)
MLC NAND flash memories play a leading role in data storage, since they combine high storage capacity (thanks to their high density), low cost, and low power requirements. For these reasons, the transition from hard disk drives (HDDs) to solid-state drives (SSDs) became feasible, a big step toward storing data efficiently and reliably. However, the presence of errors in MLC NAND flash memories, due to effects such as wear-out of the material, makes the application of error-correcting codes (ECC) necessary in order to keep reliability at the desired level. The goal of this thesis is, first, the development of a parameterizable MLC NAND flash memory model for simulating the occurrence of errors; then, the use of soft-decision Low-Density Parity-Check (LDPC) codes for error correction in a way that extends the memory's lifetime; and finally, the computation of the Life Time Capacity, the total amount of information that can be stored in a memory over its entire lifetime.
129

Study of the behavior of LDPC decoders in the Error Floor region

Γιαννακοπούλου, Γεωργία 07 May 2015 (has links)
In BER plots, which are used to evaluate a decoding system, the Error Floor region is sometimes observed at low noise levels: the decoder performance no longer improves as noise is reduced. When a simulation is executed in software, the Error Floor region is usually not visible, so the main goal is the prediction of the decoder's behavior, as well as the improvement of its performance in that particular region. In this thesis, we study the conditions which result in a decoding failure for specific codewords and in a Trapping Set activation. Trapping Sets are structures in a code which seem to be the main cause of the Error Floor presence in BER plots.
For the purpose of our study, we use the AWGN channel model and a linear block code with a low-density parity-check (LDPC) matrix, while iterative decoding simulations are executed by splitting the parity-check matrix into layers (layered decoding) and by using message-passing algorithms. We propose and analyze three new modified algorithms and study the effects caused by data quantization. Finally, we determine the noise effects on the decoding procedure and develop a semi-analytical model for calculating the probability of a Trapping Set activation and the error probability during transmission.
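For readers unfamiliar with the term: an (a, b) trapping set is a set of a variable nodes whose induced subgraph leaves b check nodes with odd degree — the checks that stay unsatisfied when exactly those a variables are in error. A minimal sketch (the toy matrix is ours, not from the thesis):

```python
import numpy as np

# Sketch: compute the (a, b) profile of a candidate trapping set, i.e.
# the number of variable nodes and the number of induced odd-degree
# (unsatisfied) check nodes.

def trapping_set_profile(H: np.ndarray, var_set: list) -> tuple:
    """Return (a, b) for the given variable-node (column) indices."""
    deg = H[:, var_set].sum(axis=1)   # check-node degrees in the subgraph
    b = int(np.sum(deg % 2 == 1))     # odd-degree checks stay unsatisfied
    return len(var_set), b

H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 0],
              [1, 0, 1, 1]])
profile = trapping_set_profile(H, [0, 1])  # a (2, 2) trapping set
```

Small b relative to a is what makes such sets hard for message-passing decoders to escape, hence their role in the error floor.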
130

Hypermap-Homology Quantum Codes

Leslie, Martin P. January 2013 (has links)
We introduce a new type of sparse CSS quantum error-correcting code based on the homology of hypermaps. Sparse quantum error-correcting codes are of interest in the building of quantum computers due to their ease of implementation and the possibility of developing fast decoders for them. Codes based on the homology of embeddings of graphs, such as Kitaev's toric code, have been discussed widely in the literature, and our class of codes generalizes these. We use embedded hypergraphs, which generalize graphs by allowing edges connected to more than two vertices. We develop theorems and examples of our hypermap-homology codes, especially in the case that we choose a special type of basis in our homology chain complex. In particular, the most straightforward generalization of the m × m toric code to hypermap-homology codes gives a [(3/2)m², 2, m] code, compared to the toric code, which is a [2m², 2, m] code. Thus we can protect the same amount of quantum information, with the same error-correcting capability, using fewer physical qubits.
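The qubit saving quoted in the abstract is concrete arithmetic; for example (m = 8 is our choice of illustration):

```python
# Sketch: physical-qubit counts from the abstract for distance-m codes
# protecting 2 logical qubits.

def toric_qubits(m: int) -> int:
    return 2 * m * m            # [[2m^2, 2, m]] toric code

def hypermap_qubits(m: int) -> int:
    return (3 * m * m) // 2     # [[(3/2)m^2, 2, m]] hypermap-homology code

m = 8
saving = toric_qubits(m) - hypermap_qubits(m)  # 128 - 96 = 32 fewer qubits
```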
