Whenever data is stored or transmitted, it inevitably encounters noise that can corrupt it. Communication technologies rely on Error Correcting Codes (ECCs) to rectify this noise and retrieve the original message. Maximum Likelihood (ML) decoding is provably optimal in accuracy, but it has not been adopted because its computational complexity has precluded a feasible implementation: ML decoding of arbitrary linear codes has been shown to be Nondeterministic Polynomial-time hard (NP-hard). As a result, many code-specific decoders have been developed as approximations of an ML decoder. This code-centric decoding approach leads to hardware implementations that are tightly coupled to a specific code structure. The recently proposed Guessing Random Additive Noise Decoding (GRAND) offers a solution by establishing a noise-centric decoding approach: rather than exploiting the structure of a particular code, it guesses the noise patterns most likely to have corrupted the received word, making it a universal ML decoder. Both the hard-detection and soft-detection variants of GRAND have been shown to be capacity-achieving for arbitrary moderate-redundancy codes.
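The noise-centric principle can be made concrete with a toy example. The following is a minimal sketch of hard-detection GRAND, assuming a binary symmetric channel; the (7,4) Hamming parity-check matrix is used purely as a codebook-membership test, which is the only way the code enters the algorithm. It is an illustrative sketch, not the architecture implemented in the thesis.

```python
# Minimal sketch of hard-detection GRAND on a binary symmetric channel (BSC).
# The (7,4) Hamming parity-check matrix H serves only as a codebook-membership
# test; GRAND never exploits the code structure beyond that test.
import itertools
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])  # parity-check matrix, illustrative choice


def is_codeword(word):
    """Membership test: the syndrome H * w^T must be all-zero (mod 2)."""
    return not np.any((H @ word) % 2)


def grand_decode(received, max_weight=3):
    """Guess noise patterns from most to least likely (for a BSC with
    crossover probability < 0.5, that means increasing Hamming weight) and
    return the first candidate that is a codeword -- the ML estimate."""
    n = received.size
    for weight in range(max_weight + 1):
        for positions in itertools.combinations(range(n), weight):
            noise = np.zeros(n, dtype=int)
            noise[list(positions)] = 1
            candidate = (received + noise) % 2   # strip the guessed noise
            if is_codeword(candidate):
                return candidate
    return None                                  # abandon: report a failure


# Example: a single bit flip is corrected after a handful of guesses.
codeword = np.array([1, 0, 1, 1, 0, 1, 0])       # a valid (7,4) codeword
received = codeword.copy()
received[2] ^= 1                                  # channel flips one bit
assert np.array_equal(grand_decode(received), codeword)
```

Each guess is an independent membership check, which is what makes the algorithm amenable to the parallel, throughput-scalable hardware architectures described in the thesis.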
This thesis claims that GRAND can be efficiently implemented in hardware with low complexity while offering significantly higher energy efficiency than state-of-the-art code-centric decoders. In addition to being hardware-friendly, GRAND offers a degree of parallelism that can be chosen according to the throughput requirement, making it flexible for a wide range of applications. To support this claim, this thesis presents custom-designed, energy-efficient integrated circuits and hardware architectures for the family of GRAND algorithms. The universality of the algorithm is demonstrated through measurements across various codebooks under different channel conditions. Furthermore, we employ the noise-recycling technique in both hard-detection and soft-detection scenarios to improve decoding by exploiting temporal noise correlations. Using the fabricated chips, we demonstrate that employing noise recycling with GRAND significantly reduces energy and latency, while providing additional gains in decoding performance.
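The abstract does not detail the recycling mechanism, so the following is only a conceptual sketch, assuming noise that is correlated across consecutive blocks: the noise estimate recovered from the previous block is tested first, before falling back to the generic weight-ordered search of grand_decode() above. The function name and interface are hypothetical.

```python
def grand_decode_with_recycling(received, recycled_noise, max_weight=3):
    """Conceptual noise recycling: when noise is temporally correlated, the
    previous block's noise estimate is itself a highly likely pattern, so it
    is tested before the generic weight-ordered search."""
    candidate = (received + recycled_noise) % 2
    if is_codeword(candidate):
        return candidate, recycled_noise          # decoded with a single guess
    codeword_hat = grand_decode(received, max_weight)
    if codeword_hat is None:
        return None, recycled_noise               # abandoned; keep the old estimate
    noise_hat = (received + codeword_hat) % 2     # estimate to recycle next block
    return codeword_hat, noise_hat
```

Under this view, fewer guesses per decoded block translate directly into the energy and latency reductions reported from the fabricated chips.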
Efficient integrated architectures of GRAND will significantly reduce hardware complexity while future-proofing a device so that it can decode any forthcoming code. The noise-centric decoding approach removes the need for code standardization, making it adaptable to a wide range of applications. A single GRAND chip can replace all existing decoders, offering competitive decoding performance while also providing significantly higher energy and area efficiency.
Identifier | oai:union.ndltd.org:bu.edu/oai:open.bu.edu:2144/48868 |
Date | 24 May 2024 |
Creators | Riaz, Arslan |
Contributors | Yazicigil, Rabia T. |
Source Sets | Boston University |
Language | en_US |
Detected Language | English |
Type | Thesis/Dissertation |
Rights | Attribution 4.0 International, http://creativecommons.org/licenses/by/4.0/ |