  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Design of low-power error-control code decoder architecture based on reference path generation

Lin, Wang-Ting 14 February 2011 (has links)
In this thesis, low-power designs for two popular error-control code decoders are presented. It first proposes a low-power Viterbi decoder based on an improved reference path generation method, which significantly reduces memory accesses during the trace-back operation of the survivor memory unit. The use of a reference path has been addressed in the past; this mechanism is further extended in this thesis to take into account the selection of starting states for the trace-back and path prediction operations. Our simulation results show that the saving ratio of memory accesses can reach 92% by choosing the state with the minimum state metric for both trace-back and path prediction. However, implementing look-ahead path prediction initiated from the minimum state incurs a large area overhead, especially for Viterbi applications with many states. Therefore, this thesis instead realizes a 64-state Viterbi decoder whose path prediction starts from the predicted state obtained in the previous prediction phase. Our implementation results show that the actual power reduction ratio ranges from 31% to 47% for various signal-to-noise ratio settings, while the area overhead is about 10%. The second major contribution of this thesis is to apply a similar low-power technique to the design of Soft-Output Viterbi Algorithm (SOVA) based Turbo code decoders. Our experimental results show that for an eight-state SOVA Turbo code, our reference path generation mechanism can eliminate more than 95% of memory accesses, which helps reduce overall power consumption by 15.6% with a slight area overhead of 3%.
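The survivor-memory trace-back that this abstract's technique optimizes can be illustrated with a generic textbook decoder. The sketch below is a plain hard-decision Viterbi decoder for the standard K=3, rate-1/2 (7,5) convolutional code, with an explicit survivor memory and trace-back from state 0; it is an illustrative reconstruction, not the reference-path architecture of the thesis.

```python
# Minimal hard-decision Viterbi decoder with explicit survivor memory and
# trace-back, for the K=3, rate-1/2 code with generators (7, 5) octal.
# Illustrative sketch only -- not the thesis's low-power architecture.

K = 3                      # constraint length
NSTATES = 1 << (K - 1)     # 4 trellis states
G = (0b111, 0b101)         # generator polynomials (7, 5) octal

def encode(bits):
    state = 0
    out = []
    for b in bits + [0] * (K - 1):           # zero-tail termination
        reg = (b << (K - 1)) | state
        out.extend([bin(reg & g).count("1") & 1 for g in G])
        state = reg >> 1
    return out

def viterbi_decode(rx):
    metrics = [0] + [1_000_000] * (NSTATES - 1)
    survivors = []                           # survivor memory: one row per stage
    for i in range(0, len(rx), 2):
        sym = rx[i:i + 2]
        new = [1_000_000] * NSTATES
        row = [0] * NSTATES
        for s in range(NSTATES):
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                ns = reg >> 1
                exp = [bin(reg & g).count("1") & 1 for g in G]
                m = metrics[s] + sum(x != y for x, y in zip(sym, exp))
                if m < new[ns]:
                    new[ns] = m
                    row[ns] = s              # remember predecessor state
        metrics = new
        survivors.append(row)
    # Trace-back from state 0 (zero-tail guarantees the path ends there).
    state, path = 0, []
    for row in reversed(survivors):
        path.append((state >> (K - 2)) & 1)  # input bit causing prev -> state
        state = row[state]
    path.reverse()
    return path[:len(path) - (K - 1)]        # drop the tail bits
```

The survivor memory grows with one row per received symbol, which is why reducing trace-back memory accesses, as the thesis does, directly cuts decoder power.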
12

Design of the Tail-biting Convolution Code Decoder with Error Detection Ability

Tseng, I-Ping 25 July 2012 (has links)
In wireless communication systems, convolutional codes are among the most popular error-correcting codes. To protect information from noise during transmission, the transmitter applies a convolutional encoder, and the receiver uses a Viterbi decoder to correct erroneous bits and lower the bit error rate. In 3G mobile communication, such decoders are applied between the base station and the handset as the decoding mechanism. Since the decoder in a conventional communication device consumes more than one third of the whole receiver's power, the present study focuses on effectively reducing the power consumption of the Viterbi decoder. Traditional convolutional encoders use zero-tailing, which lets the decoder resist the interference of noise; however, the extra tail bits lower the code rate and hurt transmission efficiency, especially for short messages such as packet headers. Tail-biting convolutional codes are an alternative that preserves the code rate, and they have been adopted in the LTE control channel; their decoding, however, is more complex than that of traditional codes. This thesis therefore modifies the Wrap-Around Viterbi Algorithm (WAVA) to greatly decrease power consumption while maintaining the bit error rate and the correctness of decoding. This is achieved by reducing the number of WAVA iterations, cutting about one fourth of the total power consumption. Moreover, if the received information is not corrupted by noise, there is no need to activate the tail-biting decoder at all; the present study therefore introduces an error detection circuit with which the received information can first be simply decoded and checked.
If no noise corruption is detected, the data can be output directly; otherwise, it is decoded by the tail-biting convolutional decoder. Experimental results show that the survivor memory unit saves more than 60% power compared with traditional decoders, and, combined with the error detection circuit, the design saves 55%-88% of power consumption. The proposed method is therefore indeed able to reduce the power consumption of tail-biting convolutional decoders. Keywords: wireless communication, tail-biting convolutional code, code rate, Viterbi decoder, power consumption
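The rate preservation that motivates tail-biting comes from preloading the encoder's shift register with the tail of the message, so that the trellis path starts and ends in the same state and no tail bits are appended. A minimal sketch under toy K=3, (7,5) parameters, not the LTE control-channel code (it assumes the message has at least K-1 bits):

```python
# Tail-biting convolutional encoding for a K=3, rate-1/2 (7,5) code:
# the shift register is preloaded with the last K-1 message bits, so the
# encoder starts and ends in the same state and appends no tail bits.
# Toy parameters for illustration only.

K = 3
G = (0b111, 0b101)

def tb_encode(bits):
    state = (bits[-1] << 1) | bits[-2]   # preload with the last K-1 bits
    out = []
    for b in bits:
        reg = (b << (K - 1)) | state
        out.extend([bin(reg & g).count("1") & 1 for g in G])
        state = reg >> 1
    return out, state                    # final state equals the preload
```

Because start and end states coincide, a tail-biting decoder such as WAVA must search over the unknown common boundary state, which is the extra complexity the abstract refers to.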
13

A Viterbi Decoder Using SystemC for Area-Efficient VLSI Implementation

Sozen, Serkan 01 September 2006 (has links) (PDF)
In this thesis, the VLSI implementation of a Viterbi decoder using a design and simulation platform called SystemC is studied. For this purpose, the architecture of the Viterbi decoder is optimized for VLSI implementation, and two novel area-efficient structures for reconfigurable Viterbi decoders are suggested. The traditional and SystemC design cycles are compared to show the advantages of SystemC; the C++ platforms supporting SystemC are listed, and installation issues and examples are discussed. The Viterbi decoder is widely used to estimate the message encoded by a convolutional encoder. In implementations reported in the literature, special structures called trellises are formed to decrease complexity and area. In this thesis, two new area-efficient reconfigurable Viterbi decoder approaches are suggested, based on rearranging the states of the trellis structures to eliminate switching and memory-addressing complexity. The first suggested reconfigurable architecture reduces switching and memory-addressing complexity: the states are reorganized, and the trellis structures are realized by reusing the same structures in subsequent instances. As a result, area is minimized and power consumption is reduced; since the addressing complexity is reduced, the speed is also expected to increase. The second area-efficient Viterbi decoder is an improved version of the first and adds the ability to configure the constraint length, code rate, transition probabilities, trace-back depth and generator polynomials.
14

A High Throughput Low Power Soft-Output Viterbi Decoder

Ouyang, Gan January 2011 (has links)
A high-throughput, low-power Soft-Output Viterbi decoder designed for the convolutional codes used in the ECMA-368 UWB standard is presented in this thesis. Ultra-wideband (UWB) wireless communication technology is intended for the physical layer of wireless personal area networks (WPAN) and next-generation Bluetooth. MB-OFDM is a very popular scheme for implementing UWB systems and is adopted in the ECMA-368 standard. To make high-speed data transferred over the channel reappear reliably at the receiver, error-correcting codes (ECC) are widely utilized in modern communication systems. The ECMA-368 standard uses concatenated convolutional and Reed-Solomon (RS) codes to encode the PLCP header, and convolutional codes alone to encode the PPDU payload. The Viterbi algorithm (VA) is a popular method of decoding convolutional codes owing to its fairly low hardware implementation complexity and relatively good performance. The Soft-Output Viterbi Algorithm (SOVA), proposed by J. Hagenauer in 1989, is a modified Viterbi algorithm: a SOVA decoder can not only take in soft quantized samples but also provide soft outputs by estimating the reliability of individual symbol decisions. These reliabilities can be passed to a subsequent decoder to improve the decoding performance of the concatenated decoder. The SOVA decoder is designed to decode the convolutional codes defined in the ECMA-368 standard; its code rate and constraint length are R=1/3 and K=7, respectively. Additional code rates derived from the "mother" rate R=1/3 code by puncturing, including 1/2, 3/4 and 5/8, can also be decoded. To speed up the add-compare-select unit (ACSU), which is always the speed bottleneck of the decoder, the modified CSA structure proposed by E. Yeo is adopted in place of the conventional ACS structure.
In addition, seven-level quantization is used in this decoder instead of the traditional eight-level quantization to further speed up the ACSU and reduce its hardware overhead. In a SOVA decoder, the delay line storing the path metric difference of every state accounts for the major portion of the overall required memory. A novel hybrid survivor path management architecture using a modified trace-forward method is proposed; it reduces the overall required memory and achieves high throughput without consuming much power. This thesis also shows how to optimize the other modules of the SOVA decoder: for example, the first K-1 stages in the Path Comparison Unit (PCU) and Reliability Measurement Unit (RMU) are removed without affecting the decoding results. The attractiveness of the SOVA decoder prompts us to find a way to deliver its soft output to the RS decoder. Bit reliabilities must be converted into symbol reliabilities, because the SOVA output is bit-oriented while the RS decoder requires a reliability per byte; no optimum transformation strategy exists because the SOVA outputs are correlated. This thesis compares two sub-optimum transformation strategies and proposes an easy-to-implement scheme to concatenate the SOVA and RS decoders under various convolutional code rates. Simulation results show that, using this scheme, the concatenated SOVA-RS decoder achieves about 0.35 dB of decoding performance gain over the conventional Viterbi-RS decoder.
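The puncturing that derives rates 1/2, 3/4 and 5/8 from the R=1/3 mother code can be sketched generically: coded bits are grouped into periods, and positions marked 0 in the puncturing pattern are deleted before transmission. The patterns below are hypothetical, chosen only to show the rate arithmetic; they are not the patterns defined in ECMA-368.

```python
# Deriving higher code rates from a rate-1/3 "mother" code by puncturing.
# Patterns are illustrative, not the ECMA-368 patterns.

def puncture(coded, pattern):
    """Delete the mother-code bits at positions marked 0 in the pattern."""
    reps = len(coded) // len(pattern)
    return [b for b, keep in zip(coded, pattern * reps) if keep]

# 1 info bit -> 3 mother bits; keeping 2 of every 3 gives rate 1/2
P_12 = [1, 1, 0]
# 3 info bits -> 9 mother bits; keeping 4 of every 9 gives rate 3/4
P_34 = [1, 1, 0, 1, 0, 0, 1, 0, 0]
# 5 info bits -> 15 mother bits; keeping 8 of every 15 gives rate 5/8
P_58 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0]
```

At the decoder, the deleted positions are re-inserted as erasures (zero-confidence soft values), so the same rate-1/3 trellis serves every punctured rate.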
15

Management d'opérateurs communs dans les architectures de terminaux multistandards. / Management of common operators in the architectures of multi-standard terminals.

Naoues, Malek 26 November 2013 (has links)
Les équipements de communications numériques intègrent de plus en plus de standards. La commutation d’un standard à l’autre doit pouvoir se faire au prix d’un surcoût matériel modéré, ce qui impose l’utilisation de ressources communes dans des instanciations différentes. La plateforme matérielle nécessaire à l’exécution d’une couche physique multistandard est le segment du système présentant le plus de contraintes par rapport à la reconfiguration : réactivité, consommation et occupation de ressources matérielles. Nos travaux se focalisent sur la paramétrisation qui vise une implémentation multistandards efficace. L’objectif de cette technique est d’identifier des traitements communs entre les standards, voire entre blocs de traitement au sein d’un même standard, afin de définir des blocs génériques pouvant être réutilisés facilement. Nous définissons le management d’opérateurs mutualisés (opérateurs communs) et nous étudions leur implémentation en se basant essentiellement sur des évaluations de complexité pour quelques standards utilisant la modulation OFDM. Nous proposons en particulier l’architecture d’un opérateur commun permettant la gestion efficace des ressources matérielles entre les algorithmes FFT et décodage de Viterbi. L’architecture, que nous avons proposé et implémenté sur FPGA, permet d’adapter le nombre d’opérateurs communs alloués à chaque algorithme et donc permet l’accélération des traitements. Les résultats montrent que l’utilisation de cette architecture commune offre des gains en complexité pouvant atteindre 30% dans les configurations testées par rapport à une implémentation classique avec une réduction importante de l’occupation mémoire. / Today's telecommunication systems require more and more flexibility, and reconfiguration mechanisms are becoming major topics especially when it comes to multistandard designs. 
In typical hardware designs, the communication standards are implemented separately using dedicated instantiations which are difficult to upgrade to support new features. To overcome these issues, we exploit a parameterization approach called the Common Operator (CO) technique, which can be used to build a generic terminal capable of supporting a large range of communication standards. The main principle of the CO technique is to identify common elements based on smaller structures that can be widely reused across signal processing functions. The technique aims at designing a scalable digital signal processing platform based on medium-granularity operators, larger than basic logic cells and smaller than signal processing functions. In this thesis, the CO technique is applied to two widely used algorithms in wireless communication systems: Viterbi decoding and the Fast Fourier Transform (FFT). Implementing the FFT and Viterbi algorithms in a multistandard context through a common architecture poses significant architectural constraints. We therefore focus on the design of a flexible processor that manages the COs and takes advantage of the structural similarities between the FFT and the Viterbi trellis. A flexible FFT/Viterbi processor was proposed, implemented on FPGA, and compared to dedicated hardware implementations. The results show a considerable gain in flexibility, achieved with no complexity overhead: complexity is even decreased by up to 30% in the considered configurations.
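The structural similarity such a processor exploits can be seen in the two basic butterflies: a radix-2 FFT butterfly and a Viterbi add-compare-select (ACS) butterfly both consume a pair of inputs and produce a pair of outputs over the same interconnect shape. The toy sketch below is ours (a simplified ACS form assuming symmetric branch metrics), not the operator architecture of the thesis.

```python
# Two butterflies with the same 2-in/2-out interconnect shape, which is the
# kind of structural overlap a common FFT/Viterbi operator can exploit.
# Simplified illustration; not the thesis's common-operator design.

def fft_butterfly(a, b, w):
    # Radix-2 decimation-in-time butterfly: twiddle multiply, then add/sub.
    return a + w * b, a - w * b

def acs_butterfly(m0, m1, bm0, bm1):
    # Two add-compare-select operations over the same pair of path metrics,
    # assuming the two branch metrics are simply swapped between outputs.
    return min(m0 + bm0, m1 + bm1), min(m0 + bm1, m1 + bm0)
```

One replaces (multiply, add, subtract) with (add, add, compare-select); the data movement between stages is what the two algorithms share.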
16

Rastreamento automático da bola de futebol em vídeos

Ilha, Gustavo January 2009 (has links)
A localização de objetos em uma imagem e acompanhamento de seu deslocamento numa sequência de imagens são tarefas de interesse teórico e prático. Aplicações de reconhecimento e rastreamento de padrões e objetos tem se difundido ultimamente, principalmente no ramo de controle, automação e vigilância. Esta dissertação apresenta um método eficaz para localizar e rastrear automaticamente objetos em vídeos. Para tanto, foi utilizado o caso do rastreamento da bola em vídeos esportivos, especificamente o jogo de futebol. O algoritmo primeiramente localiza a bola utilizando segmentação, eliminação e ponderação de candidatos, seguido do algoritmo de Viterbi, que decide qual desses candidatos representa efetivamente a bola. Depois de encontrada, a bola é rastreada utilizando o Filtro de Partículas auxiliado pelo método de semelhança de histogramas. Não é necessária inicialização da bola ou intervenção humana durante o algoritmo. Por fim, é feita uma comparação do Filtro de Kalman com o Filtro de Partículas no escopo do rastreamento da bola em vídeos de futebol. E, adicionalmente, é feita a comparação entre as funções de semelhança para serem utilizadas no Filtro de Partículas para o rastreamento da bola. Dificuldades, como a presença de ruído e de oclusão, tanto parcial como total, tiveram de ser contornadas. / Locating objects in an image and tracking their movement through a sequence of images are tasks of theoretical and practical interest. Applications for recognizing and tracking patterns and objects have spread widely of late, especially in the fields of control, automation and surveillance. This dissertation presents an effective method to automatically locate and track objects in videos. To that end, we used the case of tracking the ball in sports videos, specifically soccer matches.
The algorithm first locates the ball using segmentation, elimination and weighting of candidates, followed by a Viterbi algorithm, which decides which of these candidates is actually the ball. Once found, the ball is tracked using a Particle Filter aided by a histogram-similarity method. No ball initialization or human intervention is required during the algorithm. Finally, the Kalman Filter is compared with the Particle Filter for tracking the ball in soccer videos, and, in addition, the similarity functions to be used in the Particle Filter for ball tracking are compared. Difficulties such as noise and occlusion, both partial and total, had to be overcome.
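The candidate-selection stage described above is a Viterbi-style dynamic program over per-frame candidates: each frame contributes weighted detections, and the best-scoring candidate sequence under a motion smoothness cost is chosen. The sketch below is a generic reconstruction with an invented score/penalty trade-off, not the dissertation's actual weighting.

```python
# Viterbi-style selection of a ball track from per-frame candidates.
# Scores and the motion penalty are invented for illustration.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def best_track(frames, motion_penalty=1.0):
    """frames: one list per video frame of (position, score) candidates."""
    # Each survivor is (accumulated score, path of chosen positions).
    survivors = [(score, [pos]) for pos, score in frames[0]]
    for candidates in frames[1:]:
        new_survivors = []
        for pos, score in candidates:
            # Extend the predecessor maximizing score minus motion cost --
            # an add-compare-select over candidate "states".
            acc, path = max(
                survivors,
                key=lambda t: t[0] - motion_penalty * dist(t[1][-1], pos),
            )
            total = acc + score - motion_penalty * dist(path[-1], pos)
            new_survivors.append((total, path + [pos]))
        survivors = new_survivors
    return max(survivors, key=lambda t: t[0])[1]
```

Spurious high-scoring detections (e.g. a white sock) lose to the true ball because extending them requires implausible jumps, which the motion penalty punishes.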
19

Flexible Constraint Length Viterbi Decoders On Large Wire-area Interconnection Topologies

Garga, Ganesh 07 1900 (has links)
To achieve the goal of efficient "anytime, anywhere" communication, it is essential to develop mobile devices which can efficiently support multiple wireless communication standards. In order to accommodate the further evolution of these standards, it should also be possible to modify or upgrade the operation of mobile devices without recalling previously deployed ones. This is achievable if as much functionality of the mobile device as possible is provided through software; a mobile device which fits this description is called a Software Defined Radio (SDR). Reconfigurable hardware-based solutions are an attractive option for realizing SDRs, as they can potentially combine the flexibility of a DSP or GPP with the efficiency of an ASIC. The work presented in this thesis discusses the development of efficient reconfigurable hardware for one of the most energy-intensive functionalities in the mobile device, namely Forward Error Correction (FEC). FEC is required in order to achieve reliable transfer of information at minimal transmit power levels, and is achieved by encoding the information in a process called channel coding. Previous studies have shown that the FEC unit accounts for around 40% of the total energy consumption of the mobile unit. Modern wireless standards also place an additional requirement of flexibility on the FEC unit. Thus, the FEC unit of the mobile device represents a considerable amount of computing ability that must be accommodated within a very small power, area and energy budget. Two channel coding techniques have found widespread use in most modern wireless standards, namely convolutional coding and turbo coding. The Viterbi algorithm is most widely used for decoding convolutionally encoded sequences, and it can be used iteratively to decode turbo codes.
Hence, this thesis specifically focuses on developing architectures for flexible Viterbi decoders. Chapter 2 provides a description of the Viterbi and turbo decoding techniques. The flexibility requirements placed on the Viterbi decoder by modern standards can be divided into two types: code rate flexibility and constraint length flexibility. The code rate dictates the number of received bits which are handled together as a symbol at the receiver; hence, code rate flexibility needs to be built into the basic computing units used to implement the Viterbi algorithm. The constraint length dictates the number of computations required per received symbol as well as the manner in which results are transferred between these computations. Hence, assuming that multiple processing units perform the required computations, supporting constraint length flexibility necessitates changes in the interconnection network connecting the computing units. A constraint-length-K Viterbi decoder needs 2^(K-1) computations to be performed per received symbol. The results of the computations are exchanged among the computing units in order to prepare for the next received symbol. The communication pattern according to which these results are exchanged forms a graph called a de Bruijn graph, with 2^(K-1) nodes. This implies that providing constraint length flexibility requires realizing de Bruijn graphs of various sizes on the interconnection network connecting the processing units. This thesis focuses on providing constraint length flexibility in an efficient manner. Quite clearly, the topology employed for interconnecting the processing units has a huge effect on the efficiency with which multiple constraint lengths can be supported. This thesis aims to explore the usefulness of interconnection topologies similar to the de Bruijn graph for building constraint-length-flexible Viterbi decoders.
Five different topologies have been considered in this thesis, discussed under two headings, as done below.

De Bruijn network-based architectures

The interconnection network of chief interest in this thesis is the de Bruijn interconnection network itself, as it is identical to the communication pattern of a Viterbi decoder of a given constraint length. The problem of realizing flexible constraint length Viterbi decoders using a de Bruijn network has been approached in two different ways. The first is an embedding-theoretic approach, where supporting multiple constraint lengths on a de Bruijn network is seen as a problem of embedding smaller de Bruijn graphs on a larger de Bruijn graph. Mathematical manipulations are presented to show that this embedding can generally be accomplished with a maximum dilation bounded in terms of N, the number of computing nodes in the physical network, while simultaneously avoiding any congestion of the physical links. In this case, however, the mapping of the decoder states onto the processing nodes is assumed fixed. Another scheme is derived based on a variable assignment of decoder states onto computing nodes, which turns out to be more efficient than the embedding-based approach. For this scheme, the maximum number of cycles per stage is limited to 2, irrespective of the maximum constraint length to be supported. In addition, it is possible to execute multiple smaller decoders in parallel on the physical network for smaller constraint lengths. Consequently, post logic synthesis, this architecture is found to be more area-efficient than the embedding-theoretic architecture, and it also scales more efficiently.
Alternative architectures

Several interconnection topologies are closely related to the de Bruijn graph and hence are attractive alternatives for realizing flexible constraint length Viterbi decoders. We consider two more topologies from this class, namely the shuffle-exchange network and the flattened butterfly network. The variable state assignment scheme developed for the de Bruijn network is directly applicable to the shuffle-exchange network; the average number of clock cycles per stage is limited to 4 in this case, again independent of the constraint length to be supported. On the flattened butterfly (which is identical to the hypercube), a state scheduling scheme similar to that of bitonic sorting is used. This architecture offers the ideal throughput of one decoded bit every clock cycle, for any constraint length. For comparison with a more general-purpose topology, we consider a flexible constraint length Viterbi decoder architecture based on a 2D mesh, a popular choice for general-purpose applications as well as many signal processing applications; the state scheduling scheme used here is also similar to that used for bitonic sorting on a mesh. All the alternative architectures are capable of executing multiple smaller decoders in parallel on the larger interconnection network.

Inferences

Following logic synthesis and power estimation, it is found that the de Bruijn network-based architecture with the variable state assignment scheme yields the lowest (area)·(time) product, while the flattened butterfly network-based architecture yields the lowest (area)·(time)² product. This means that the de Bruijn network-based architecture is the best choice for moderate-throughput applications, while the flattened butterfly network-based architecture is the best choice for high-throughput applications.
However, as the flattened butterfly network is less scalable in terms of size compared to the de Bruijn network, it can be concluded that among the architectures considered in this thesis, the de Bruijn network-based architecture with the variable state assignment scheme is overall an attractive choice for realizing flexible constraint length Viterbi decoders.
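The de Bruijn exchange pattern underlying these architectures is compact to state: in an N = 2^(K-1) state trellis, state s forwards its result to states 2s mod N and (2s+1) mod N, which is exactly the edge set of the binary de Bruijn graph. A toy check of that pattern (our illustration, not the thesis's network implementation):

```python
# Successor pattern of the Viterbi trellis = binary de Bruijn graph edges.
# Each state shifts in the next hypothesized input bit (0 or 1).

def de_bruijn_successors(s, n_states):
    return ((2 * s) % n_states, (2 * s + 1) % n_states)
```

Every node has out-degree 2 and in-degree 2, which is why a physical de Bruijn network matches the decoder's communication pattern with no routing conflicts for the native constraint length.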
20

Low-power discrete Fourier transform and soft-decision Viterbi decoder for OFDM receivers

Suh, Sangwook 31 August 2011 (has links)
The purpose of this research is to present a low-power wireless communication receiver whose performance is enhanced by relieving the system complexity and performance degradation imposed by the quantization process. With the overwhelming demand for more reliable communication systems, the complexity required of modern communication systems has increased accordingly, and a byproduct of this increase is a commensurate increase in the power consumption of these systems. Since Shannon's era, the mainstream methodology for ensuring the reliability of communication systems has been based on the principle that the information signals flowing through the system are represented in digits. Consequently, systems have been heavily driven toward digital circuit implementations, which are generally beneficial over analog implementations when digitally stored information is locally accessible, as in memory systems. In communication systems, however, the receiver does not have direct access to the originally transmitted information. Since the signals received from a noisy channel are already continuous values with continuous probability distributions, we suggest a mixed-signal system in which the received continuous signals are fed directly into the analog demodulator and the subsequent soft-decision Viterbi decoder without any quantization. In this way, we claim, the redundant system complexity caused by the quantization process is eliminated, giving better power efficiency in wireless communication systems, especially battery-powered mobile devices. This is also beneficial from a performance perspective, as it takes full advantage of the soft information flowing through the system.
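The information loss that motivates the mixed-signal approach above shows up directly in the branch metrics: a hard decision collapses a barely-negative sample into a confident bit, while a soft (correlation) metric preserves the ambiguity. The sketch below uses invented sample values and a generic antipodal mapping; it does not model the thesis's analog implementation.

```python
# Hard vs. soft branch metrics on unquantized channel samples.
# Lower metric = better match. Sample values are invented.

def hard_metric(received, expected_bits):
    # Quantize each sample to a bit first, then count disagreements
    # (Hamming distance) -- the ambiguity of a near-zero sample is lost.
    hard = [1 if r > 0 else 0 for r in received]
    return sum(h != e for h, e in zip(hard, expected_bits))

def soft_metric(received, expected_bits):
    # Correlate directly with antipodal (+1/-1) expected symbols; a weak
    # sample contributes proportionally little, preserving its uncertainty.
    return -sum(r * (2 * e - 1) for r, e in zip(received, expected_bits))
```

For received samples [0.9, -0.1], both metrics prefer the branch [1, 0], but only the soft metric records that the preference over [1, 1] is marginal, which is exactly the information a soft-decision Viterbi decoder exploits.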
