21 |
VLSI design and implementation of non-linear decoder combined with linear decompressor to achieve greater test vector compression. Doddi, Srujana, 26 October 2010 (has links)
This paper investigates the cost trade-offs of implementing a test data compression technique previously presented in [Lee 06], which uses a small non-linear decoder combined with a linear decompressor to achieve greater test data compression. The non-linear decoder is a sequential non-linear decompressor that exploits bit-wise and pattern-wise correlations in test vectors. This paper further emphasizes the design and implementation side of the proposed test data compressor. The linear decompressor used in this design is a Linear Feedback Shift Register (LFSR) which, given a suitably chosen seed, can produce the correct care-bit values while filling the don't-care bits with pseudo-random values. Experimental results show that the presented compression scheme significantly improves the overall compression. Area and power results are presented for the experiments carried out on the given design.
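To make the linear-decompression idea concrete, here is a minimal software sketch (not the circuit from the thesis): an LFSR is loaded with a seed and clocked to expand it into a longer scan pattern, and a seed is acceptable if the expanded pattern reproduces every care bit of the test cube while the don't-care positions take whatever pseudo-random values fall out. The tap positions, register width, and example test cube are arbitrary assumptions.

```python
# Minimal sketch of LFSR-based test-pattern decompression (illustrative only).
# Taps, register width, and the example test cube are arbitrary assumptions.

def lfsr_expand(seed_bits, length, taps=(0, 2)):
    """Expand a seed into `length` pseudo-random bits with a Fibonacci LFSR."""
    state = list(seed_bits)
    out = []
    for _ in range(length):
        out.append(state[-1])                       # output the last stage
        fb = 0
        for t in taps:                              # XOR the tapped stages
            fb ^= state[t]
        state = [fb] + state[:-1]                   # shift in the feedback bit
    return out

def matches_cube(pattern, cube):
    """Check that the pattern reproduces every care bit ('0'/'1'); 'X' is free."""
    return all(c == 'X' or int(c) == p for p, c in zip(pattern, cube))

# Example: search for a seed that reproduces the care bits of one test cube.
cube = "1XX0XXX1"                                    # hypothetical test cube
for s in range(16):                                  # try all 4-bit seeds
    seed = [(s >> i) & 1 for i in range(4)]
    pattern = lfsr_expand(seed, len(cube))
    if matches_cube(pattern, cube):
        print("seed", seed, "->", pattern)
        break
```

In a real flow the seed search is done by solving linear equations over GF(2) rather than by enumeration; the brute-force loop above is only meant to show the care-bit matching condition.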
|
22 |
High-Performance Decoder Architectures For Low-Density Parity-Check Codes. Zhang, Kai, 09 January 2012 (has links)
Low-Density Parity-Check (LDPC) codes, invented by Gallager in the 1960s, have attracted considerable attention recently. Compared with other error correction codes, LDPC codes are well suited for wireless, optical, and magnetic recording systems due to their near-Shannon-limit error-correcting capability, high intrinsic parallelism, and high throughput potential. With these remarkable characteristics, LDPC codes have been adopted in several recent communication standards such as 802.11n (Wi-Fi), 802.16e (WiMax), 802.15.3c (WPAN), DVB-S2, and CMMB. This dissertation is devoted to exploring efficient VLSI architectures for high-performance LDPC decoders and LDPC-like detectors for sparse inter-symbol interference (ISI) channels. The performance of an LDPC decoder is mainly evaluated by area efficiency, error-correcting capability, throughput, and rate flexibility. In this work we investigate trade-offs among these four performance aspects and develop several decoder architectures that improve one or more aspects while maintaining acceptable values for the others.
Firstly, we present a high-throughput decoder design for Quasi-Cyclic (QC) LDPC codes. Two new techniques are proposed: a parallel layered decoding architecture (PLDA) and critical-path splitting. The parallel layered decoding architecture enables parallel processing for all layers by establishing dedicated message-passing paths among them, so the decoder avoids a large crossbar-based interconnect network. The critical-path splitting technique is based on careful adjustment of the starting point of each layer to maximize the time intervals between adjacent layers, so that the critical-path delay can be split into pipeline stages. Furthermore, min-sum and loosely coupled algorithms are employed for area efficiency. As a case study, a rate-1/2, 2304-bit irregular LDPC decoder is implemented as an ASIC in a 90 nm CMOS process. The decoder achieves an input throughput of 1.1 Gbps, a three- to four-fold improvement over state-of-the-art LDPC decoders, while maintaining a comparable chip size of 2.9 mm^2.
Secondly, we present a high-throughput decoder architecture for rate-compatible (RC) LDPC codes which supports arbitrary code rates between the rate of the mother code and 1. While the original PLDA lacks rate flexibility, the problem is solved gracefully by incorporating a puncturing scheme. Simulation results show that the selected puncturing scheme introduces a BER performance degradation of less than 0.2 dB compared with the dedicated codes for different rates specified in the IEEE 802.16e (WiMax) standard. PLDA is then employed for the high-throughput decoder design. As a case study, an RC-LDPC decoder based on the rate-1/2 WiMax LDPC code is implemented in a 90 nm CMOS process. The decoder achieves an input throughput of 975 Mbps and supports any rate between 1/2 and 1.
Thirdly, we develop a low-complexity VLSI architecture and implementation for the LDPC decoder used in China Multimedia Mobile Broadcasting (CMMB) systems. An area-efficient layered decoding architecture based on the min-sum algorithm is incorporated in the design. A novel split-memory architecture is developed to efficiently handle the weight-2 submatrices that are rarely seen in conventional LDPC decoders. In addition, the check-node processing unit is highly optimized to minimize complexity and computing latency while facilitating a reconfigurable decoding core.
Finally, we propose an LDPC-decoder-like channel detector for sparse ISI channels using belief propagation (BP). The computational cost of BP-based detection depends only on the number of nonzero interferers, making it well suited to sparse ISI channels, which are characterized by long delays but only a small fraction of nonzero interferers. The layered decoding algorithm, which is popular in LDPC decoding, is also adopted here; simulation results show that layered decoding doubles the convergence speed of the iterative belief propagation process. Exploiting the special structure of the connections between the check nodes and the variable nodes on the factor graph, we propose an effective detector architecture for generic sparse ISI channels to facilitate the practical application of the proposed detection algorithm. The proposed architecture is also reconfigurable, so that the connections on the factor graph can be switched flexibly for time-varying ISI channels.
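As a purely algorithmic reference for the min-sum layered decoding mentioned above (not the hardware architecture of the dissertation), a compact sketch of layered min-sum decoding might look as follows; the tiny parity-check matrix and channel LLRs are made-up examples, and no offset or scaling correction is applied.

```python
import numpy as np

# Toy layered min-sum LDPC sketch (illustrative only; H and the LLRs are made up).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
llr = np.array([-1.2, 0.8, -0.3, 2.1, -1.9, 0.5])    # channel LLRs (made up)

c2v = np.zeros_like(H, dtype=float)                   # check-to-variable messages

for it in range(10):                                  # decoding iterations
    for row in range(H.shape[0]):                     # one "layer" per check row
        cols = np.flatnonzero(H[row])
        v2c = llr[cols] - c2v[row, cols]              # subtract old messages
        for i, col in enumerate(cols):                # min-sum check-node update
            others = np.delete(v2c, i)
            mag = np.min(np.abs(others))
            sign = np.prod(np.sign(others))
            c2v[row, col] = sign * mag
        llr[cols] = v2c + c2v[row, cols]              # update posterior LLRs
    hard = (llr < 0).astype(int)                      # hard decision
    if not np.any(H @ hard % 2):                      # all parity checks satisfied
        break

print("decoded bits:", hard)
```

Because each layer updates the posterior LLRs immediately, later layers in the same iteration already see refined messages, which is the reason layered schedules converge in roughly half the iterations of a flooding schedule.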
|
23 |
Detection for multiple input multiple output channels : analysis of sphere decoding and semidefinite relaxation. Jaldén, Joakim, January 2006 (has links)
The problem of detecting a vector of symbols, drawn from a finite alphabet and transmitted over a multiple-input multiple-output (MIMO) channel with Gaussian noise, is of central importance in digital communications and is encountered in several different applications. Examples include, but are not limited to, the detection of symbols spatially multiplexed over a multiple-antenna channel and the multiuser detection problem in a code division multiple access (CDMA) system. Two algorithms previously proposed in the literature are considered and analyzed. Both algorithms have their origin in other fields of science but have gained mainstream recognition as efficient algorithms for the detection problem considered herein. Specifically, we consider the sphere decoder and the semidefinite relaxation detector. By incorporating assumptions applicable in the communications context, the performance of the two algorithms is addressed. The first algorithm, the sphere decoder, offers optimal performance in terms of its error probability. Further, the algorithm has proved extremely efficient in terms of computational complexity for moderately sized problems at high signal-to-noise ratio (SNR). Although it is recognized that the algorithm has an exponential worst-case complexity, there has been a widespread belief that the algorithm has a polynomial average complexity at high SNR. A contribution made herein is to show that this is incorrect and that the average complexity, like the worst-case complexity, is exponential in the number of symbols detected. Instead, another explanation of the observed efficiency of the algorithm is offered by deriving the exponential rate of growth and showing that this rate, although strictly positive for finite SNR, is small in the high-SNR regime. The second algorithm, the semidefinite relaxation (SDR) detector, offers polynomial complexity at the expense of suboptimal performance in terms of error probability. Nevertheless, previous numerical observations suggest that the error probability of the SDR algorithm is close to that of the optimal detector. Herein, the near optimality of the SDR algorithm is given a precise meaning by studying the diversity of the SDR algorithm when applied to the (real-valued) i.i.d. Rayleigh fading channel, and it is shown that the SDR algorithm achieves the same diversity order as the optimal detector. Further, criteria under which the SDR estimates coincide with the optimal estimates are derived and discussed. / A fundamental problem encountered in digital communications is the detection of a vector of symbols, belonging to a finite symbol alphabet, transmitted over a MIMO (multiple-input multiple-output) channel with Gaussian noise. This problem arises, for example, when symbols are transmitted over a wireless channel with multiple antennas at the receiver and the transmitter, and when several users in a CDMA system are to be decoded simultaneously. This thesis treats two receiver algorithms designed for this purpose. The algorithms have their background in other research areas but can by now be said to be very well known within the communications field. They are usually referred to as the sphere decoder and the semidefinite relaxation detector. In this thesis the algorithms are analyzed mathematically by introducing simplifying assumptions that are relevant and applicable to the communication problems of interest.
The first algorithm, the sphere decoder, solves these detection problems optimally in the sense that it minimizes the probability that the detector makes an erroneous decision about the transmitted message (the symbol vector). As regards the complexity of the algorithm, simulations have shown it to be unexpectedly low, at least at high signal-to-noise ratios (SNR). Although it is well known that the algorithm has exponential worst-case complexity, this has led to the widespread belief that the average (or expected) complexity is only polynomial at high SNR. One of the main contributions of this thesis is to show that this belief is incorrect and that the average complexity also grows exponentially in the number of simultaneously detected symbols. A further contribution is an alternative explanation of the observed low average complexity: it is shown that the exponential rate at which the complexity grows depends on the SNR, and that this rate is small at high SNR. The second algorithm, the semidefinite relaxation detector, offers polynomial complexity at a somewhat higher error probability. Interestingly, however, simulations have previously shown this error probability to be only marginally higher than that of the optimal receiver. The contribution relating to the semidefinite relaxation detector lies both in explaining and in giving a specific, quantifiable meaning to the statement that the error probability is only marginally higher. To this end, the diversity order of the detector is studied, and it is proved that the diversity order of the semidefinite relaxation detector is the same as that of the optimal receiver. In addition, the conditions that must be satisfied for the detector to find the optimal solution are characterized.
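For readers unfamiliar with the sphere decoder, a minimal depth-first sketch of the idea is given below: after a QR factorization of the channel, candidate symbol vectors are enumerated from the last component upward, and a branch is pruned as soon as its partial distance exceeds the best radius found so far. The problem size, alphabet, and random channel are illustrative assumptions, and the sketch makes no claim about the complexity results discussed above.

```python
import numpy as np

# Minimal depth-first sphere decoder sketch for y = H s + n with a {-1, +1}
# alphabet (illustrative only; sizes and the random channel are assumptions).
rng = np.random.default_rng(0)
n = 4                                            # number of symbols
H = rng.standard_normal((n, n))                  # i.i.d. real-valued channel
s_true = rng.choice([-1.0, 1.0], size=n)
y = H @ s_true + 0.1 * rng.standard_normal(n)

Q, R = np.linalg.qr(H)                           # ||y - Hs||^2 = ||Q^T y - Rs||^2
z = Q.T @ y
alphabet = [-1.0, 1.0]
best = {"cost": np.inf, "s": None}

def search(level, s_partial, cost):
    """Recurse from the last symbol to the first, pruning on partial cost."""
    if cost >= best["cost"]:                     # outside current sphere radius
        return
    if level < 0:                                # full vector enumerated
        best["cost"], best["s"] = cost, s_partial.copy()
        return
    for sym in alphabet:
        s_partial[level] = sym
        resid = z[level] - R[level, level:] @ s_partial[level:]
        search(level - 1, s_partial, cost + resid ** 2)

search(n - 1, np.zeros(n), 0.0)
print("transmitted:", s_true, "detected:", best["s"])
```

The pruning test is what makes the average behavior so sensitive to SNR: with little noise, the first candidate already yields a small radius and most branches are cut immediately, while at low SNR many branches survive and the search approaches exhaustive enumeration.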
|
24 |
Evaluation of the Turbo-decoder Coprocessor on a TMS320C64x Digital Signal Processor. Ahlqvist, Johan, January 2011 (has links)
One technique used to reduce the errors introduced into signals transmitted over noisy channels is error control coding. One such coding scheme with good performance is turbo coding. Some of the TMS320C64x digital signal processors have a built-in coprocessor that performs turbo decoding. This thesis was carried out on behalf of Communication Development within Saab AB and presents an evaluation of this coprocessor. The evaluation covers both memory consumption and data rate. The result is also compared with an implementation of turbo coding that does not use the coprocessor. / A technique used to reduce the errors a signal is subjected to during transmission over a noisy channel is error-correcting coding. One example of such coding that gives very good results is turbo coding. Some digital signal processors of the TMS320C64x family contain a built-in coprocessor that performs turbo decoding. This thesis was carried out for Communication Development within Saab AB and presents an evaluation of this coprocessor. The evaluation covers both memory consumption and data rate, and also includes a comparison with an implementation of turbo coding that does not use the coprocessor.
|
25 |
A Low-power Convolutional Decoder with Error Detection Ability. Yeh, Wei-ting, 03 August 2010 (has links)
In wireless communication systems we encounter many problems, one of the main issues being noise interference. To overcome this problem, the sender can encode the data with a convolutional code, and the receiver can use the Viterbi algorithm for decoding and error correction. Due to the high complexity of the Viterbi algorithm, the VLSI implementation of a Viterbi decoder consumes a large amount of power, giving portable devices short standby times and high operating temperatures. To address these problems, a low-power decoder must be designed.
In fact, the Viterbi decoder can be shut down when no noise interference exists. We therefore use a detection circuit to determine whether the signal has been affected by noise. If the signal has been interfered with, the Viterbi decoder performs the decoding; otherwise, a low-cost decoder is used to reduce the power consumed at the receiver end.
In addition, dynamic adjustment of the SMU module is developed and implemented in the proposed decoder. The SMU module consumes the most power in a Viterbi decoder, so our goal is to reduce its usage. If the noise distribution is not dense, high decoding capability is not needed to decode a section of data, and the number of registers in the SMU can be decreased. A clock gating technique is adopted in this thesis to shut down these idle registers and reduce the power consumption of the SMU.
The proposed decoder has been implemented and synthesized using the Artisan TSMC 0.13 μm standard cell library. Compared with a traditional Viterbi decoder, the proposed decoder achieves 25% and nearly 60% power savings at SNRs of 1 dB and 8 dB, respectively, together with a 6% area reduction. These experimental results show that the proposed decoder effectively reduces power consumption.
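For reference, the add-compare-select recursion at the heart of the Viterbi algorithm can be sketched in software as follows, here for the common rate-1/2, constraint-length-3 convolutional code with generators (7, 5) in octal; this is an algorithmic illustration only, not the low-power decoder architecture of the thesis.

```python
# Hard-decision Viterbi sketch for the rate-1/2, K=3 code with generators
# (7, 5) octal. Algorithmic illustration only, not the thesis hardware.

G = (0b111, 0b101)                          # generator polynomials
N_STATES = 4                                # 2^(K-1) states

def encode_bit(state, bit):
    """Return (output bits, next state) for one input bit."""
    reg = (bit << 2) | state                # newest bit in the MSB of a 3-bit reg
    out = tuple(bin(reg & g).count("1") & 1 for g in G)
    return out, reg >> 1                    # drop the oldest bit

def viterbi_decode(received):
    """received: list of 2-bit tuples. Returns the ML input bit sequence."""
    INF = float("inf")
    metrics = [0] + [INF] * (N_STATES - 1)  # start in state 0
    paths = [[] for _ in range(N_STATES)]
    for r in received:
        new_metrics = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for state in range(N_STATES):
            for bit in (0, 1):              # add-compare-select over both inputs
                out, nxt = encode_bit(state, bit)
                branch = sum(a != b for a, b in zip(out, r))  # Hamming distance
                cand = metrics[state] + branch
                if cand < new_metrics[nxt]:
                    new_metrics[nxt] = cand
                    new_paths[nxt] = paths[state] + [bit]
        metrics, paths = new_metrics, new_paths
    return paths[metrics.index(min(metrics))]

# Round-trip check on a short message.
msg = [1, 0, 1, 1, 0, 0]                    # includes 2 tail zeros to flush
state, coded = 0, []
for b in msg:
    out, state = encode_bit(state, b)
    coded.append(out)
coded[2] = (coded[2][0] ^ 1, coded[2][1])   # inject one channel bit error
print("decoded:", viterbi_decode(coded))
```

The survivor paths kept per state in this sketch correspond to the SMU registers in hardware, which is why reducing their number or gating their clocks has such a direct effect on power.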
|
26 |
Performance Evaluation of Turbo code in LTE system. Wu, Han-Ying, 25 July 2011 (has links)
With the increasing demand for high data-rate multimedia services in wireless broadband access, advanced wireless communication technologies have been developed rapidly. Long-Term Evolution (LTE) is the new standard for wireless broadband access recently specified by the 3GPP (3rd Generation Partnership Project) on the way towards fourth-generation mobile systems. In this thesis, we are interested in the 3GPP-LTE technology and focus on the turbo coding technique used therein. Employing MATLAB/Simulink, we build a turbo codec simulation platform for the 3GPP-LTE system. Two convolutional encoders that realize the concept of parallel concatenated convolutional codes (PCCCs) and a quadratic permutation polynomial (QPP) interleaver are used to implement the turbo encoder. The a posteriori probability (APP) decoder built into Simulink is utilized to design the decoder, which performs the soft-input soft-output Viterbi algorithm (SOVA). A zero-order hold block is used to control the number of decoding iterations in the iterative decoding process. We evaluate the 3GPP-LTE turbo codec performance over the AWGN channel on the developed platform. Various cases with different data lengths, numbers of decoding iterations, interleavers, and decoding algorithms are simulated. The simulation results are compared with those of the Xilinx 3GPP-LTE turbo codec; the comparison shows that our turbo codec works properly and meets the LTE standard.
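The QPP interleaver mentioned above has a simple closed form, pi(i) = (f1*i + f2*i^2) mod K. A minimal sketch follows; the coefficients (f1, f2) = (3, 10) for block length K = 40 are quoted from memory of the 3GPP-LTE interleaver table and should be treated as an assumption, although the built-in check confirms that they at least yield a valid permutation.

```python
# Minimal QPP interleaver sketch: pi(i) = (f1*i + f2*i^2) mod K.
# K = 40 with (f1, f2) = (3, 10) is believed to match the 3GPP-LTE table,
# but the coefficients here should be treated as an assumption.

def qpp_permutation(K, f1, f2):
    return [(f1 * i + f2 * i * i) % K for i in range(K)]

def interleave(bits, perm):
    return [bits[p] for p in perm]

K, f1, f2 = 40, 3, 10
perm = qpp_permutation(K, f1, f2)
assert sorted(perm) == list(range(K))      # a valid QPP is a true permutation
data = list(range(K))
print(interleave(data, perm)[:10])
```

The quadratic form is what makes the interleaver contention-free for parallel decoding, which is one of the reasons LTE replaced the earlier 3GPP interleaver with a QPP design.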
|
27 |
Low-Power Adaptive Viterbi Decoder with Section Error Identification. Li, Shih-Jie, 28 July 2011 (has links)
In wireless communication systems, convolutional coding is often used to encode the data. For decoding convolutional codes (CC), the Viterbi algorithm is considered the best mechanism, and the Viterbi decoder (VD) was developed to execute this algorithm more effectively on mobile devices. This decoder is often used in 2G and 3G mobile phones. However, on 2G phones the VD consumes about one third of the total power consumption of the signal receiver, so it is very important to reduce the power consumption of the VD on 2G and 3G phones.
The VD uses a large number of registers in the survivor metric unit (SMU) so that the decoder can receive enough coded data and converge automatically. The goal of this thesis is to decrease the power consumption of the SMU by using a path metric compare unit (PMCU) to find the best state in the path metric unit (PMU). This approach halves the registers and multiplexers required in the SMU, leading to a significant area reduction in the decoder. During signal transmission in wireless communication, different causes such as the atmosphere, outer-space radiation, and man-made sources interfere with the signal to different degrees; the stronger the noise, the more interference the coded data suffers.
An error detection circuit marks the sections with noise interference before the coded data enters the VD. If a section has been interfered with, it is decoded by the full VD; otherwise, it is decoded by a low-power decoder, in which the controller applies a clock gating mechanism to the SMU to shut down blocks that consume power unnecessarily.
The power consumption of the proposed adaptive Viterbi decoder varies according to the degree of interference. When the degree of interference is high, the power consumption is 21% lower than that of a conventional VD; when the interference is low, it is 44% lower. The results show that the proposed method can effectively reduce the power consumption of the VD.
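The abstract does not detail the detector or the gating controller, but the power-saving rationale can be illustrated with a toy behavioral model: survivor registers are clocked in full only for sections flagged as noisy, and a reduced set is clocked otherwise. All section lengths, register counts, and flags below are hypothetical.

```python
# Behavioral sketch of section-based clock gating for the survivor registers
# (illustrative; the section length, flag criterion, and savings model are
# assumptions, not the circuit of the thesis).

def gated_register_updates(section_flags, regs_full=64, regs_reduced=32,
                           section_len=32):
    """Count survivor-register clock events with and without gating."""
    baseline = 0
    gated = 0
    for noisy in section_flags:
        baseline += regs_full * section_len          # always clock every register
        active = regs_full if noisy else regs_reduced
        gated += active * section_len                # gate idle registers off
    return baseline, gated

flags = [True, False, False, True, False, False, False, False]
base, gated = gated_register_updates(flags)
print(f"register clock events: {base} -> {gated} "
      f"({100 * (base - gated) / base:.0f}% fewer)")
```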
|
28 |
Robust Design of Precoder and Decoder for Relay-Assisted Decorrelating CDMA Systems with Imperfect CSI. Tsai, Yong-Chun, 25 August 2012 (has links)
In this paper, we explore a cooperative code-division multiple-access (CDMA) network in which users cooperate by forwarding each other's messages toward the destination. For simplicity, we assume that signal reception at the destination is well synchronized. Due to practical design issues of CDMA systems, the spreading waveforms allocated to users are in general not perfectly orthogonal, which results in multiple-access interference (MAI) at the relays and the destination. In CDMA uplink networks, one common approach is to adopt decorrelating multiuser detection, but this leads to noise amplification [16,17]. Therefore, we employ a relay-assisted decorrelating multiuser detector (RAD-MUD) to mitigate MAI [1] by performing half of the decorrelation at the relays and half at the destination. Based on the availability of channel state information (CSI) at the relays, we can further adopt cooperative strategies, such as transmit beamforming and selective relaying, to improve performance. The destination uses a minimum mean-square error (MMSE) detector to demodulate the source symbols. In the existing literature, CSI is assumed to be perfectly known at the relays and the destination. In practice, CSI is obtained from channel estimation, which usually contains estimation errors. To alleviate the effects of imperfect channel estimation, one goal of this thesis is to design a robust system: using the estimated CSI and the statistical properties of the channel estimation errors, we design a robust precoder and detector for the relays and the destination. It is shown that, even with distorted channel estimates, the system still achieves excellent transmission efficiency. The simulation results show that the robust design outperforms a system that does not consider channel estimation errors and effectively mitigates the effects of imperfect CSI.
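As a small numerical illustration of the two detection ideas referenced above, the sketch below compares a decorrelating detector with an MMSE detector for a synchronous CDMA uplink r = Sb + n; the spreading codes, noise level, and bits are made up, and the relay-assisted splitting of the decorrelation into two halves is not modeled.

```python
import numpy as np

# Sketch of decorrelating vs. MMSE multiuser detection for a synchronous CDMA
# uplink r = S b + n (spreading codes, noise level, and bits are made up;
# the relay-assisted splitting of the decorrelator is not modeled here).
rng = np.random.default_rng(1)
S = np.array([[+1, +1, +1, +1],
              [+1, -1, +1, +1],
              [+1, +1, -1, +1]], dtype=float).T / 2.0  # 4-chip codes, 3 users
b = rng.choice([-1.0, 1.0], size=3)                    # BPSK bits, one per user
sigma = 0.4
r = S @ b + sigma * rng.standard_normal(4)

R = S.T @ S                                             # code correlation matrix
decorrelator = np.linalg.solve(R, S.T @ r)              # (S^T S)^-1 S^T r
mmse = np.linalg.solve(R + sigma**2 * np.eye(3), S.T @ r)

print("sent bits:   ", b)
print("decorrelator:", np.sign(decorrelator))
print("MMSE:        ", np.sign(mmse))
```

The codes are deliberately non-orthogonal so that R is not the identity; the decorrelator removes the MAI exactly but inflates the noise by R^-1, which is the noise-amplification problem cited above, while the MMSE detector trades a little residual interference for less noise enhancement.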
|
29 |
Design and Implementation of Simulator Mechanisms of Architecture Description Language. Liu, Yi-ting, 13 September 2012 (has links)
In the age of system-on-chip designs, system design complexity continues to increase, which makes design convergence difficult. In design exploration of system architectures, we need to design, specify, and verify system designs effectively. By employing an architecture description language (ADL), we can effectively support the specification and verification of system-level designs. Existing ADLs have certain deficiencies in their specification capabilities. We designed and improved the specification capabilities of our architecture description language; its specification techniques include behavioral description, structural description, regular structure description, built-in architecture feature description, and data integration description. In this thesis research, we focus on supporting the verification capability of our ADL. We designed a simulator for the ADL. Its simulation mechanisms include language input design, simulation data structure construction, behavioral simulation, structural simulation, regular architecture simulation, built-in architecture feature simulation, and a data integration mechanism. With the ADL simulator, we can verify the functionality and performance of architecture designs specified in the ADL. Simulation results can thus be used to guide design exploration and help design convergence.
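The ADL itself is not reproduced in this abstract, so the sketch below only illustrates the general shape of a cycle-based simulation loop over components given by behavioral functions and structural connections; all component names and behaviors are hypothetical and unrelated to the thesis simulator.

```python
# Toy cycle-based simulation loop in the spirit of simulating a structural +
# behavioral description (all component names and behaviors are hypothetical;
# this is not the ADL or simulator of the thesis).

class Component:
    def __init__(self, name, inputs, output, behavior):
        self.name, self.inputs, self.output, self.behavior = name, inputs, output, behavior

def simulate(components, signals, cycles):
    trace = []
    for _ in range(cycles):
        new_signals = dict(signals)
        for c in components:                      # evaluate every component
            args = [signals[i] for i in c.inputs] # read current signal values
            new_signals[c.output] = c.behavior(*args)
        signals = new_signals                     # commit all updates at once
        trace.append(dict(signals))
    return trace

# A small structure: an adder feeding a register.
comps = [
    Component("adder", ["a", "b"], "sum", lambda a, b: a + b),
    Component("reg",   ["sum"],    "out", lambda s: s),
]
state = {"a": 3, "b": 4, "sum": 0, "out": 0}
for cycle, snap in enumerate(simulate(comps, state, 3)):
    print(cycle, snap)
```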
|
30 |
NTSC Digital Video Decoder and Multi-Symbol Codec. Chen, Chun-Chih, 12 August 2004 (has links)
The first topic of this thesis proposes a digital video decoder for NTSC. The new, fully digital design employs a DDFS (direct digital frequency synthesizer) and an adaptive digital PLL to track and lock the demodulation carrier; the complexity of the digital video decoder is hence drastically reduced. The overall cost of the proposed design is 6.0 mm^2 (39K gates), and the maximum power dissipation is 86 mW at the highest clock rate of 21.48 MHz.
The second topic is a codec (encoder-decoder) design for interfacing variable-length and fixed-length data compression. Converting variable-length codewords into fixed-length packets, so that the compression can be processed in hardware and in parallel, normally causes poor memory efficiency; this is significantly improved by the proposed codec, which encodes additional symbols in the redundant padding bits of the fixed-length packets. This novel encoding scheme relaxes the intrinsically poor bit rate of traditional fixed-length data compression.
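The carrier-generation idea behind a DDFS can be sketched in a few lines: a phase accumulator advanced by a fixed tuning word indexes a sine lookup table, and changing the tuning word (as an adaptive digital PLL would when tracking the carrier) shifts the generated frequency. The accumulator width and table size below are arbitrary choices; the 21.48 MHz clock is taken from the abstract and 3.579545 MHz is the standard NTSC color subcarrier.

```python
import math

# DDFS sketch: a phase accumulator stepped by a tuning word indexes a sine
# lookup table. Accumulator width and table size are arbitrary choices; the
# clock (21.48 MHz) and subcarrier (3.579545 MHz) follow the NTSC numbers
# mentioned above.

ACC_BITS = 24
LUT_SIZE = 256
lut = [math.sin(2 * math.pi * k / LUT_SIZE) for k in range(LUT_SIZE)]

f_clk = 21.48e6
f_out = 3.579545e6
tuning_word = round(f_out / f_clk * 2**ACC_BITS)   # phase increment per clock

phase = 0
samples = []
for _ in range(16):
    index = phase >> (ACC_BITS - 8)                # top 8 bits address the LUT
    samples.append(lut[index])
    phase = (phase + tuning_word) & (2**ACC_BITS - 1)

print([f"{s:+.2f}" for s in samples])
```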
|