301 |
Generalizing binary quadratic residue codes to higher power residues over larger fields
Charters, Philippa Liana, 13 June 2011 (has links)
In this paper, we provide a generalization of binary quadratic residue codes to the case of higher power prime residues over the finite field of the same order, which we call qth power residue codes. We find generating polynomials for such codes, define a new notion corresponding to the binary concept of an idempotent, and use it to establish a square-root lower bound on the codeword weight of the duals of such codes, which in turn yields a lower bound on the weight of the codewords themselves. In addition, we construct a family of asymptotically bad qth power residue codes.
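The binary setting being generalized can be illustrated with a short sketch (not drawn from the thesis): for a prime p with 2 a quadratic residue mod p, the binary quadratic residue code of length p has a generator polynomial whose roots are indexed by the set of quadratic residues. A minimal stdlib Python check of the residue classes:

```python
def quadratic_residues(p):
    """Return the set of nonzero quadratic residues modulo prime p."""
    return {pow(x, 2, p) for x in range(1, p)}

p = 23  # the classical binary Golay code arises from length p = 23
qr = quadratic_residues(p)
nonresidues = set(range(1, p)) - qr

assert len(qr) == (p - 1) // 2   # residues and nonresidues split evenly
assert len(nonresidues) == (p - 1) // 2
assert 2 in qr                   # required for a *binary* QR code to exist
print(sorted(qr))                # exponent set defining one generator
```

The qth power construction in the thesis replaces the squaring map with a qth power map and moves from GF(2) to the field of order q; the even split of nonzero elements into residue classes is the structural fact both constructions rely on.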
|
302 |
The Original View of Reed-Solomon Coding and the Welch-Berlekamp Decoding Algorithm
Mann, Sarah Edge, January 2013 (has links)
Reed-Solomon codes are a class of maximum distance separable error-correcting codes with known fast error correction algorithms. They have been widely used to assure data integrity for data stored on compact discs, DVDs, and in RAID storage systems, for digital communications channels such as DSL internet connections, and for deep-space communications on the Voyager mission. The recent explosion of storage needs for "Big Data" has generated renewed interest in large storage systems with extended error correction capacity, and Reed-Solomon codes have been suggested as one potential solution. This dissertation reviews the theory of Reed-Solomon codes from the perspective taken in Reed and Solomon's original paper. It then derives the Welch-Berlekamp algorithm for solving certain polynomial equations and connects this algorithm to the problem of error correction. The discussion is mathematically rigorous and provides a complete and consistent treatment of the error correction process. Numerous algorithms for encoding, decoding, erasure recovery, error detection, and error correction are provided, and their computational cost is analyzed and discussed, allowing this dissertation to serve as a manual for engineers interested in implementing Reed-Solomon coding.
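The "original view" the dissertation refers to, message symbols as coefficients of a low-degree polynomial and codeword symbols as its evaluations, can be sketched as follows. This is an illustrative toy over the prime field GF(929) (the field choice is an assumption made for clarity; real deployments typically use GF(2^m)); any k intact symbols recover the message by Lagrange interpolation, which is the erasure-recovery case:

```python
P = 929  # a prime larger than the code length n

def poly_eval(coeffs, x, p=P):
    """Evaluate a polynomial (lowest-degree coefficient first) at x mod p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def rs_encode(msg, n, p=P):
    """Encode k message symbols as evaluations at the points 0..n-1."""
    return [poly_eval(msg, x, p) for x in range(n)]

def rs_recover(points, k, p=P):
    """Lagrange-interpolate the k message coefficients from k (x, y) pairs."""
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        basis = [1]   # build the i-th Lagrange basis polynomial
        denom = 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            new = [0] * (len(basis) + 1)   # multiply basis by (x - xj)
            for d, c in enumerate(basis):
                new[d + 1] = (new[d + 1] + c) % p
                new[d] = (new[d] - xj * c) % p
            basis = new
            denom = denom * (xi - xj) % p
        scale = yi * pow(denom, -1, p) % p
        for d in range(k):
            coeffs[d] = (coeffs[d] + scale * basis[d]) % p
    return coeffs

msg = [5, 17, 42]                  # k = 3 message symbols
code = rs_encode(msg, n=7)         # n = 7, so up to n - k = 4 erasures
survivors = [(x, code[x]) for x in (0, 3, 6)]  # any 3 surviving symbols
assert rs_recover(survivors, k=3) == msg
```

Error correction (as opposed to erasure recovery) is where the Welch-Berlekamp machinery enters: when some surviving symbols are wrong rather than missing, simple interpolation no longer suffices.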
|
303 |
Iterative Decoding of Codes on Graphs
Sankaranarayanan, Sundararajan, January 2006 (has links)
The growing popularity of a class of linear block codes called low-density parity-check (LDPC) codes can be attributed to the low complexity of their iterative decoders and their potential to achieve performance very close to the Shannon capacity. This makes them an attractive candidate for ECC applications in communication systems. This report proposes methods to systematically construct regular and irregular LDPC codes. A class of regular LDPC codes is constructed from incidence structures in finite geometries, such as projective and affine geometries. A class of irregular LDPC codes is constructed by systematically splitting blocks of balanced incomplete block designs to achieve desired weight distributions. These codes are decoded iteratively using message-passing algorithms, and their performance over various channels is presented in this report.
The application of iterative decoders is generally limited to codes whose graph representations are free of small cycles. Unfortunately, the large class of conventional algebraic codes, such as RS codes, has many four-cycles in its graph representations. This report proposes an algorithm that alleviates this drawback by constructing an equivalent graph representation that is free of four-cycles. It is shown theoretically that the four-cycle-free representation is better suited to iterative erasure decoding than the conventional representation. The new representation is also exploited to realize, with limited success, iterative decoding of Reed-Solomon codes over the additive white Gaussian noise channel.
Wiberg, Forney, Richardson, Koetter, and Vontobel have made significant contributions in developing theoretical frameworks that facilitate finite-length analysis of codes. With the exception of Richardson's, most of these frameworks are best suited to the analysis of short codes.
In this report, we further the understanding of the failures of iterative decoders on the binary symmetric channel. The failures are classified into two categories by defining trapping sets and propagating sets. This classification leads to a successful estimation of the performance of codes under the Gallager B decoder. In particular, the estimation techniques show great promise in the high signal-to-noise ratio regime, where simulation techniques are less feasible.
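The flavor of hard-decision iterative decoding on a graph over the binary symmetric channel can be sketched with a toy bit-flipping decoder. This is a simplified majority-flip rule in the spirit of Gallager's hard-decision algorithms, not the report's exact Gallager B message-passing schedule, and the small parity-check matrix is a made-up example:

```python
H = [  # a small illustrative parity-check matrix (hypothetical code)
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def bit_flip_decode(word, H, max_iters=20):
    """Repeatedly flip the bit involved in the most unsatisfied checks."""
    word = list(word)
    for _ in range(max_iters):
        syndrome = [sum(h[j] * word[j] for j in range(len(word))) % 2
                    for h in H]
        if not any(syndrome):
            return word  # all parity checks satisfied
        # Count unsatisfied checks touching each bit; flip the worst one.
        votes = [sum(s * h[j] for s, h in zip(syndrome, H))
                 for j in range(len(word))]
        word[votes.index(max(votes))] ^= 1
    return word

received = [0, 1, 0, 0, 0, 0]      # all-zero codeword with one BSC flip
assert bit_flip_decode(received, H) == [0, 0, 0, 0, 0, 0]
```

Trapping-set failures of the kind the report classifies show up in exactly this setting: certain error patterns leave the flipping rule oscillating or stuck even though the codeword distance would permit correction.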
|
304 |
Soft-decision decoding of Reed-Solomon codes for mobile messaging systems
Kosmach, James J., 12 1900 (has links)
No description available.
|
305 |
Design of effective decoding techniques in network coding networks / Suné von Solms
Von Solms, Suné, January 2013 (has links)
Random linear network coding is widely proposed as the solution for practical network coding
applications due to its robustness to random packet loss, packet delays, and changes in network
topology and capacity. In order to implement random linear network coding in practical scenarios
where the encoding and decoding methods perform efficiently, the computationally complex coding
algorithms associated with random linear network coding must be overcome.
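For context, the baseline decoding step whose computational cost motivates this work can be sketched as Gauss-Jordan elimination over GF(2): each received packet carries a random coefficient vector, and the sources are recovered once those vectors span the space. This is a generic illustration, not the thesis's proposed method, and packet payloads are reduced to single integers for brevity:

```python
import random

def rlnc_encode(sources, count, rng):
    """Emit `count` packets, each a random GF(2) combination of sources."""
    out = []
    for _ in range(count):
        coeffs = [rng.randint(0, 1) for _ in sources]
        payload = 0
        for c, s in zip(coeffs, sources):
            if c:
                payload ^= s  # GF(2) addition is XOR
        out.append((coeffs, payload))
    return out

def rlnc_decode(packets, k):
    """Gauss-Jordan elimination over GF(2); returns the sources or None."""
    pivots = {}  # leading column -> (row coefficients, payload)
    for coeffs, payload in packets:
        coeffs = list(coeffs)
        for col, (pc, pp) in pivots.items():  # reduce by known pivots
            if coeffs[col]:
                coeffs = [a ^ b for a, b in zip(coeffs, pc)]
                payload ^= pp
        lead = next((i for i, c in enumerate(coeffs) if c), None)
        if lead is not None:
            pivots[lead] = (coeffs, payload)
    if len(pivots) < k:
        return None  # coefficient vectors do not yet span the space
    for col in sorted(pivots, reverse=True):  # back-substitution
        pc, pp = pivots[col]
        for col2 in sorted(pivots):
            if col2 > col and pc[col2]:
                qc, qp = pivots[col2]
                pc = [a ^ b for a, b in zip(pc, qc)]
                pp ^= qp
        pivots[col] = (pc, pp)
    return [pivots[i][1] for i in range(k)]

rng = random.Random(1)
sources = [0xDE, 0xAD, 0xBE, 0xEF]
packets = []
while True:  # collect coded packets until the receiver reaches full rank
    packets += rlnc_encode(sources, 4, rng)
    decoded = rlnc_decode(packets, len(sources))
    if decoded is not None:
        break
assert decoded == sources
```

The cubic cost of this elimination step, and the fact that no source is released until full rank is reached, are the complexity and delay problems the thesis's structured transmission patterns aim to avoid.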
This research contributes to the field of practical random linear network coding by presenting
new, low complexity coding algorithms with low decoding delay. In this thesis we contribute to this
research field by building on the current solutions available in the literature through the utilisation
of familiar coding schemes combined with methods from other research areas, as well as developing
innovative coding methods.
We show that by transmitting source symbols in predetermined and constrained patterns from
the source node, the causality of the random linear network coding network can be used to create
structure at the receiver nodes. This structure enables us to introduce an innovative decoding
scheme with low decoding delay. The method also proves resilient to the effects of packet loss
on the structure of the received packets. Its low decoding delay and resilience to packet
erasures make it an attractive option for use in multimedia multicasting.
We show that fountain codes can be implemented in RLNC networks without changing their
complete coding structure. By implementing an adapted encoding algorithm at strategic
intermediate nodes in the network, the receiver nodes can obtain encoded packets that
approximate the degree distribution required for successful belief propagation decoding.
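Belief-propagation decoding of fountain-coded packets reduces, in the erasure setting, to a peeling process: a degree-one packet releases a source symbol, which is then substituted into the remaining packets, possibly creating new degree-one packets. A minimal sketch (illustrative only, with hand-built packets rather than packets sampled from a degree distribution):

```python
def peel_decode(encoded, k):
    """encoded: list of (set of source indices, XOR of those sources)."""
    pending = [[set(idx), val] for idx, val in encoded]
    recovered = {}
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for pkt in pending:
            idx, val = pkt
            for i in [i for i in idx if i in recovered]:
                idx.discard(i)            # substitute known sources
                val ^= recovered[i]
            pkt[1] = val
            if len(idx) == 1:             # degree-one packet releases a source
                i = next(iter(idx))
                if i not in recovered:
                    recovered[i] = val
                    progress = True
    if len(recovered) < k:
        return None  # peeling stalled; a denser packet set is needed
    return [recovered[i] for i in range(k)]

sources = [3, 1, 4, 1]
encoded = [({0}, 3), ({0, 1}, 3 ^ 1), ({1, 2}, 1 ^ 4), ({2, 3}, 4 ^ 1)]
assert peel_decode(encoded, len(sources)) == sources
```

The decoder succeeds only if degree-one packets keep appearing, which is exactly why the degree distribution seen at the receivers, the quantity the adapted intermediate-node encoding shapes, matters.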
Previous work showed that the redundant packets generated by RLNC networks can be used for
error detection at the receiver nodes. This error detection method can be implemented without
an outer code; thus, it does not require any additional network resources. We analyse this
method and show that it is effective only for single error detection, not correction.
In this thesis the current body of knowledge and technology in practical random linear network
coding is extended through the contribution of effective decoding techniques for practical
network coding networks. We present both analytical and simulation results to show that the
developed techniques yield low-complexity coding algorithms with low decoding delay in RLNC networks. / Thesis (PhD (Computer Engineering))--North-West University, Potchefstroom Campus, 2013
|
307 |
The hybrid list decoding and Chase-like algorithm of Reed-Solomon codes
Jin, Wei, January 2005 (links)
Reed-Solomon (RS) codes are powerful error-correcting codes that can be found in a
wide variety of digital communications and digital data-storage systems. A classical
hard decoder of an RS code can correct t = ⌊(dmin - 1)/2⌋ errors, where dmin = n - k + 1
is the minimum distance of the code, n is the length of the codeword and k is the
dimension of the code. Maximum likelihood decoding (MLD) performs better than classical
decoding, and therefore how to approach the performance of MLD with less complexity is
a subject that has been researched extensively. Applying the bit reliability obtained
from the channel to a conventional decoding algorithm is an efficient technique for
approaching the performance of MLD, although an exponential increase in complexity is
always concomitant. Further performance enhancement can be achieved if we apply the bit
reliability to an enhanced algebraic decoding algorithm that is more powerful than a
conventional decoding algorithm.
In 1997 Madhu Sudan, building on previous work of Welch, Berlekamp, and others,
discovered a polynomial-time algorithm for decoding low-rate Reed-Solomon codes
beyond the classical error-correcting bound t = ⌊(dmin - 1)/2⌋. Two years later
Guruswami and Sudan published a significantly improved version of Sudan's algorithm
(GS), but these papers did not focus on devising practical implementations.
Koetter, Roth and Ruckenstein were able to find realizations for the key steps in
the GS algorithm, thus making it a practical instrument in transmission systems.
The Gross list algorithm, a simplified variant with lower decoding complexity
realized by a re-encoding scheme, is also considered in this dissertation. The
fundamental idea of the GS algorithm is to use an interpolation step to produce an
interpolation polynomial from support symbols, received symbols and their
corresponding multiplicities. The GS algorithm then performs a factorization step
to find the roots of the interpolation polynomial. After comparing the reliabilities
of the codewords output by the factorization, the GS algorithm outputs the most
likely one. The support set, received set and multiplicity set are created by the
Koetter-Vardy (KV) front-end algorithm. In the GS list decoding algorithm, the number
of errors that can be corrected increases to tGS = n - 1 - ⌊√((k - 1)n)⌋. It is easy
to show that the GS list decoding algorithm is capable of correcting more errors
than a conventional decoding algorithm.
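The gap between the two decoding radii is easy to tabulate. A small sketch comparing the classical bound t = ⌊(dmin - 1)/2⌋ with the GS list-decoding radius tGS = n - 1 - ⌊√((k - 1)n)⌋ for a few assumed (n, k) parameters:

```python
import math

def classical_radius(n, k):
    """t = floor((dmin - 1)/2), with dmin = n - k + 1 for an MDS code."""
    return (n - k) // 2

def gs_radius(n, k):
    """Guruswami-Sudan radius: tGS = n - 1 - floor(sqrt((k - 1) * n))."""
    return n - 1 - math.isqrt((k - 1) * n)

for n, k in [(255, 32), (255, 128), (31, 15)]:
    t, t_gs = classical_radius(n, k), gs_radius(n, k)
    print(f"RS({n},{k}): classical t = {t}, list-decoding tGS = {t_gs}")
    assert t_gs >= t  # list decoding never corrects fewer errors
```

The advantage is largest at low rate, which is consistent with the abstract's remark that Sudan's original algorithm targeted low-rate codes.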
In this dissertation, we present two hybrid list decoding and Chase-like algorithms.
We apply the Chase algorithm to the KV soft-decision front end and are consequently
able to provide a more reliable input to the KV list algorithm. In applying the
Chase-like algorithm, we take two conditions into consideration so that an error
floor does not occur and additional coding gain is possible. As the number of bits
chosen by the Chase algorithm increases, the complexity of the hybrid algorithm
increases exponentially. To address this, an adaptive algorithm is applied to the
hybrid algorithm, based on the fact that as the signal-to-noise ratio (SNR)
increases the received bits become more reliable, and not every received sequence
requires the full fixed number of test error patterns from the Chase algorithm. We
set a threshold according to the given SNR and use it to decide which unreliable
bits are picked up by the Chase algorithm. However, the performance of the adaptive
hybrid algorithm at high SNRs decreases as the complexity decreases, which means
that the adaptive algorithm alone is not a sufficient mechanism for eliminating
redundant test error patterns.
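The test-pattern generation at the heart of a Chase-like front end can be sketched as follows: flip every subset of the η least-reliable positions, producing 2^η candidate hard-decision inputs for the underlying decoder. The reliability values and η below are illustrative assumptions, not parameters from the dissertation:

```python
from itertools import product

def chase_test_patterns(hard_bits, reliabilities, eta):
    """Yield candidate words: every flip pattern on the eta weakest bits."""
    order = sorted(range(len(hard_bits)), key=lambda i: reliabilities[i])
    weak = order[:eta]  # the eta least-reliable positions
    for flips in product((0, 1), repeat=eta):
        cand = list(hard_bits)
        for pos, f in zip(weak, flips):
            cand[pos] ^= f
        yield cand

hard = [1, 0, 1, 1, 0, 0, 1]
rel = [0.9, 0.1, 0.8, 0.2, 0.95, 0.7, 0.6]  # |LLR|-style reliabilities
cands = list(chase_test_patterns(hard, rel, eta=2))
assert len(cands) == 2 ** 2     # 2^eta candidate sequences
assert hard in cands            # the all-zero flip pattern is included
```

The adaptive variant described above would additionally compare each reliability against an SNR-dependent threshold, flipping only positions below it, which shrinks the pattern set at high SNR.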
The performance of the adaptive hybrid algorithm at high SNRs motivates us to find
another way to reduce complexity without loss of performance. We consider the
following two problems. First, can we find a termination condition that identifies
the most likely codeword for the received sequence before all candidates have been
tested? Second, can we eliminate the test error patterns that cannot produce
codewords more likely than those already generated? In our final algorithm, an
optimality lemma from the Kaneko algorithm solves the first problem, and the second
is solved by a ruling-out scheme for the reduced list decoding algorithm. The Gross
list algorithm is also applied in our final hybrid algorithm. With both problems
solved, the final hybrid algorithm has performance comparable with the hybrid
algorithm combining the KV list decoding algorithm and the Chase algorithm, but
with much less complexity at high SNRs. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, 2005
|
308 |
Performance analysis of a LINK-16/JTIDS compatible waveform with noncoherent detection, diversity and side information
Kagioglidis, Ioannis, January 2009 (has links) (PDF)
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, September 2009. / Thesis Advisor(s): Robertson, R. Clark. "September 2009." Description based on title screen as viewed on 6 November 2009. Author(s) subject terms: Link-16/JTIDS, (31, 15) Reed-Solomon (RS) coding, 32-ary Orthogonal signaling, Additive White Gaussian Noise (AWGN), Pulse-Noise Interference (PNI), Perfect Side Information (PSI). Includes bibliographical references (p. 49-51). Also available in print.
|
309 |
Performance analysis of the link-16/JTIDS waveform with concatenated coding
Koromilas, Ioannis, January 2009 (has links) (PDF)
Thesis (M.S. in Electronic Warfare Systems Engineering)--Naval Postgraduate School, September 2009. / Thesis Advisor(s): Robertson, Ralph C. "September 2009." Description based on title screen as viewed on 5 November 2009. Author(s) subject terms: Link-16/JTIDS, Reed-Solomon (RS) coding, Cyclic Code-Shift Keying (CCSK), Minimum-Shift Keying (MSK), convolutional codes, concatenated codes, perfect side information (PSI), Pulsed-Noise Interference (PNI), Additive White Gaussian Noise (AWGN), coherent detection, noncoherent detection. Includes bibliographical references (p. 79). Also available in print.
|
310 |
Robust high throughput space-time block coded MIMO systems : a thesis submitted in fulfilment of the requirements for the degree of Doctor of Philosophy in Electrical and Computer Engineering from the University of Canterbury, Christchurch, New Zealand / Pau, Nicholas S. J., January 1900 (has links)
Thesis (Ph. D.)--University of Canterbury, 2007. / Typescript (photocopy). "June 2007." Includes bibliographical references (p. 159-166). Also available via the World Wide Web.
|