101

Projeto e implementação de um novo algoritmo e protocolo de encaminhamento de pacotes baseado em códigos convolucionais usando TCNet: Trellis Coded Network. / Design and implementation of a new algorithm and packet forwarding protocol based on convolutional codes using TCNet: Trellis Coded Network.

Lima Filho, Diogo Ferreira 24 February 2015 (has links)
Wireless sensor networks (WSNs) evolved from the idea that small wireless sensors can be used to collect information from the physical environment in a wide variety of situations. Early work on WSNs was carried out by the Defense Advanced Research Projects Agency (DARPA) under the Smart Dust concept, based on microelectromechanical systems (MEMS): devices able to detect light, temperature, vibration, magnetism or chemicals, with embedded processing and the ability to transmit data wirelessly. Emerging technologies now take advantage of connectivity with the World Wide Web to broaden the range of applications of this technology, among them the Internet of Things (IoT). This research studies the implementation of a new algorithm and protocol for forwarding the data collected by microsensors in ad hoc network scenarios with sensors randomly distributed over an adverse area. Although several hardware devices have been developed by the WSN research community, there is an effort led by the Internet Engineering Task Force (IETF) to implement and standardize protocols suited to these devices, with their limited energy and processing resources. This work proposes new packet-forwarding algorithms based on the concept of convolutional codes. Results obtained through extensive simulations show gains in latency and energy consumption compared with the AODV protocol. The implementation complexity is extremely low and compatible with the scarce hardware resources of the elements that typically make up a wireless sensor network. The future-work section indicates a broad set of applications to which the developed concepts can be applied.
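To make the underlying coding primitive concrete, here is a minimal sketch of the building block TCNet draws on: a rate-1/2 convolutional encoder with constraint length 3 and generator polynomials 7 and 5 in octal. These generators are a standard textbook choice assumed for illustration; the thesis's mapping of trellis states to WSN nodes is its own contribution and is not reproduced here.

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Encode a bit sequence with a rate-1/2 convolutional code."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)  # shift the new bit into a k-bit register
        out.append(bin(state & g1).count("1") % 2)   # parity tap from generator 1
        out.append(bin(state & g2).count("1") % 2)   # parity tap from generator 2
    return out

print(conv_encode([1, 0, 1, 1]))  # 4 input bits -> 8 coded bits
```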
103

Rastreamento automático da bola de futebol em vídeos / Automatic tracking of the soccer ball in videos

Ilha, Gustavo January 2009 (has links)
Locating an object in an image and tracking its movement through a sequence of images are tasks of both theoretical and practical interest. Applications for recognizing and tracking patterns and objects have spread recently, especially in control, automation and surveillance. This dissertation presents an effective method to automatically locate and track objects in videos, using as its test case the tracking of the ball in sports videos, specifically soccer matches. The algorithm first locates the ball using segmentation, elimination and weighting of candidates, followed by the Viterbi algorithm, which decides which of these candidates actually represents the ball. Once found, the ball is tracked using a particle filter aided by histogram similarity. No ball initialization or human intervention is needed while the algorithm runs. Finally, the Kalman filter is compared with the particle filter in the context of ball tracking in soccer videos, and the similarity functions usable inside the particle filter are compared as well. Difficulties such as noise and both partial and total occlusion had to be overcome.
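A minimal sketch of the weighting step described above: particle-filter weights derived from histogram similarity via the Bhattacharyya coefficient. The `frame_hist_at` callback, which extracts a normalized color histogram around a particle's position in the current frame, is a hypothetical helper, not a function from the dissertation.

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Similarity of two normalized color histograms (1.0 = identical)."""
    return np.sum(np.sqrt(h1 * h2))

def reweight(particles, frame_hist_at, ref_hist, sigma=0.1):
    """One particle-filter weight update based on histogram similarity."""
    weights = np.array([
        np.exp(-(1.0 - bhattacharyya(frame_hist_at(p), ref_hist)) / sigma**2)
        for p in particles
    ])
    return weights / weights.sum()  # normalize so the weights sum to one
```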
104

Low-density parity-check codes : construction and implementation.

Malema, Gabofetswe Alafang January 2007 (has links)
Low-density parity-check (LDPC) codes have been shown to have good error-correcting performance approaching Shannon's limit. Good error-correcting performance enables efficient and reliable communication. However, an LDPC decoding algorithm needs to be executed efficiently to meet the cost, time, power and bandwidth requirements of target applications, and the constructed codes must also meet the error-rate requirements of those applications. Since their rediscovery, there has been much research on LDPC code construction and implementation. LDPC codes can be designed over a wide space with parameters such as girth, rate and length, and there is no unique construction method; existing methods are limited in some way in producing codes that perform well and are easily implementable for a given rate and length. There is a need for methods that construct codes over a wide range of rates and lengths with good performance and ease of hardware implementation. LDPC decoder design and implementation depend on the structure of the target code and are as varied as LDPC matrix designs and constructions. Several factors must be considered, including the decoding-algorithm computations, the processing-node interconnection network, the number of processing nodes, the amount of memory, the number of quantization bits and the decoding delay, each of which can be handled in several different ways. This thesis is about the construction of LDPC codes and their hardware implementation. The construction and implementation issues mentioned above are too many to address in one thesis; the main contribution is the development of construction methods for some classes of structured LDPC codes, together with techniques for reducing decoding time. We introduce two main methods for constructing structured codes. In the first, column-weight-two LDPC codes are derived from distance graphs, yielding a wider range of girths, rates and lengths than existing methods; the performance and implementation complexity of the obtained codes depend on the structure of their corresponding distance graphs. In the second, a search algorithm based on the bit-filling and progressive-edge-growth algorithms is introduced for constructing quasi-cyclic LDPC codes. The algorithm can be used to form either a distance or a Tanner graph of a code, and it also obtains codes over a wide range of parameters. Cycles of length four are avoided by observing the row-column constraint; row-column connections observing this condition are searched sequentially or randomly. Although the girth conditions are not sufficient beyond six, larger-girth codes were easily obtained, especially at low rates. The advantage of this algorithm over other methods is its flexibility: it can construct codes for a given rate and length with girth at least six for any sub-matrix configuration or rearrangement, and the code size is easily varied by increasing or decreasing the sub-matrix size. Codes obtained with a sequential search criterion show poor performance at low girths (6 and 8), while random searches yield well-performing codes. Quasi-cyclic codes can be implemented in a variety of decoder architectures; one of the many options is the choice of processing-node interconnect. We show how quasi-cyclic code processing can be scheduled through a multistage network.
Although these networks have more delay than other modes of communication, they offer more flexibility at reasonable cost; Banyan and Benes networks are suggested as the most suitable. Decoding delay is another issue considered in decoder design and implementation. In this thesis, we overlap check-node and variable-node computations to reduce decoding time. Three techniques are discussed, two of which are introduced in this thesis: code-matrix permutation, matrix-space restriction and sub-matrix row-column scheduling. Matrix permutation rearranges the parity-check matrix so that rows and columns with no connections in common are separated. This technique can be applied to any matrix; its effectiveness largely depends on the structure of the code, and we show that its success also depends on the row and column weights. Matrix-space restriction can likewise be applied to any code and gives a fixed reduction in time or amount of overlap; its success depends on the amount of restriction and may be traded against performance loss. The third technique, already suggested in the literature, relies on the internal cyclic structure of sub-matrices to achieve overlapping and is limited to LDPC matrices in which the number of sub-matrices equals the row and column weights. We show that it can be applied to codes with a larger number of sub-matrices than code weights, although maximum overlap is then not guaranteed, and we calculate the lower bound on the amount of overlapping. Overlapping can be applied to any sub-matrix configuration of quasi-cyclic codes by arbitrarily choosing the starting rows for processing. Overlapped decoding time depends on inter-iteration waiting times; we show that there are upper bounds on the waiting times which depend on the code weights, and that waiting times can be further reduced by restricting shifts in identity sub-matrices or by using smaller sub-matrices. This overlapping technique can reduce decoding time by up to 50% compared with conventional message and computation scheduling. The matrix-permutation and space-restriction techniques result in decoder architectures that are flexible in LDPC code design in terms of code weights and size, because rows and columns are processed in sequential order to achieve overlapping. In the existing technique, by contrast, all sub-matrices must be processed in parallel, which requires at least as many processing units as sub-matrices, with processing units and memory distributed among the sub-matrices according to their arrangement; this leads to high complexity or inflexibility in the decoder architecture. We propose a simple, programmable, high-throughput decoder architecture based on the matrix-permutation and space-restriction techniques. / Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 2007
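As a small illustration of the row-column constraint mentioned above (no two rows of the parity-check matrix may share a '1' in more than one column, which rules out cycles of length four), a sketch under the assumption of a dense binary matrix representation:

```python
import numpy as np

def satisfies_rc_constraint(H):
    """True if no pair of rows of H shares more than one '1' column."""
    H = np.asarray(H)
    overlap = H @ H.T              # entry (i, j) counts columns shared by rows i and j
    np.fill_diagonal(overlap, 0)   # a row trivially overlaps itself; ignore
    return overlap.max() <= 1

H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1]])
print(satisfies_rc_constraint(H))  # True: this small H has no length-4 cycle
```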
105

VLSI Implementation of Key Components in A Mobile Broadband Receiver

Huang, Yulin January 2009 (has links)
The digital front-end and the Turbo decoder are two key components of a digital wireless communication system; this thesis discusses implementation issues for both. The structure of a digital front-end for a multi-standard radio supporting wireless standards such as IEEE 802.11n, WiMAX and 3GPP LTE is investigated. Following a top-down design method, an 802.11n digital down-converter is taken from a Matlab model to a VHDL implementation, with both simulation and FPGA prototyping carried out. As the other significant part of the thesis, a parallel Turbo decoder is designed and implemented for 3GPP LTE. The supported block size ranges from 40 to 6144 and the maximum number of iterations is eight. The Turbo decoder uses eight parallel SISO units to reach a throughput of up to 150 Mbit/s.
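As a rough sanity check of the quoted 150 Mbit/s figure — not a calculation from the thesis — assume each of the eight SISO units advances one trellis step per clock cycle, every iteration makes two SISO half-passes over the block, and the clock runs at 300 MHz (the clock frequency is purely an assumption):

```python
P = 8        # parallel SISO units (from the abstract)
ITERS = 8    # maximum number of iterations (from the abstract)
F = 300e6    # assumed clock frequency in Hz

cycles_per_bit = 2 * ITERS / P   # two SISO passes per iteration, split across P units
print(F / cycles_per_bit / 1e6, "Mbit/s")  # -> 150.0 under these assumptions
```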
106

Décodage itératif pour les codes LDPC au-delà de la propagation de croyances. / Iterative decoding of LDPC codes beyond belief propagation.

Planjery, Shiva 05 December 2012 (has links) (PDF)
Low-Density Parity-Check (LDPC) codes are at the heart of error-correcting code research owing to their excellent decoding performance under the iterative belief-propagation (BP) decoding algorithm. This algorithm operates on the graphical representation of a code, its Tanner graph, and computes marginal functions on the graph. Although the computed inference is exact only on an acyclic graph (a tree), BP estimates the marginals on graphs with cycles very closely, and LDPC codes can asymptotically approach the Shannon capacity under this algorithm. However, for finite-length codes whose graph representation contains cycles, BP is suboptimal and gives rise to the error-floor phenomenon: a sudden flattening of the error-rate curve in the high signal-to-noise-ratio region, where the structures harmful to decoding, known as trapping sets in the code's Tanner graph, cause decoding failures. Moreover, the quantization effects introduced by hardware implementations of BP can amplify the error-floor problem. In this thesis we introduce a new paradigm for finite-precision iterative decoding of LDPC codes over the binary symmetric channel. These new decoders, called finite alphabet iterative decoders (FAIDs) because the messages belong to a finite alphabet, can outperform BP in the error-floor region. The messages exchanged by FAIDs are not quantized probabilities or likelihoods, and the variable-node update functions do not mimic BP decoding, in contrast with traditional quantized BP decoders. Instead, the update functions are simple lookup tables designed to provide greater error-correction capability by exploiting knowledge of potentially harmful topologies present in a given code. We show that on several column-weight-three codes there exist 3-bit-precision FAIDs that outperform BP (implemented in floating point) in the error-floor region with no compromise in decoding latency; FAIDs thus achieve superior performance at only a fraction of BP's complexity. We also propose an improved decimation scheme for FAIDs on LDPC codes within the variable-node update processing. Decimation consists of fixing certain code bits to a particular value during decoding; it can significantly reduce the number of iterations required to correct a fixed number of errors while maintaining good FAID performance, making the decoder more amenable to analysis. We illustrate this technique for 3-bit-precision FAIDs on column-weight-three codes, and show how decimation can be applied adaptively to further improve the error-correction capability of FAIDs. The proposed adaptive decimation scheme admittedly has somewhat higher complexity, but it significantly improves the slope of the error floor for a given FAID.
On certain high-rate codes, we show that adaptive decimation of FAIDs attains error-correction capability approaching the theoretical limit of maximum-likelihood decoding.
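A minimal sketch of the variable-node update in a FAID for a column-weight-three code over the binary symmetric channel: messages take values in a seven-level alphabet (3 bits of precision) and the update is a lookup table indexed by the two incoming check-node messages and the channel value. The saturating-sum rule below is only a placeholder; the thesis designs these tables specifically to correct harmful trapping-set topologies, so the actual tables differ.

```python
ALPHABET = [-3, -2, -1, 0, 1, 2, 3]   # 7 message levels, 3 bits of precision

def vn_update(m1, m2, channel):
    """Variable-node rule Phi(m1, m2, y): here a simple saturating sum."""
    return max(-3, min(3, m1 + m2 + channel))

# The full rule can be precomputed as a lookup table, one entry per
# incoming message pair and channel sign -- the form a FAID actually uses.
table = {(m1, m2, y): vn_update(m1, m2, y)
         for m1 in ALPHABET for m2 in ALPHABET for y in (-1, +1)}
print(table[(2, -1, +1)])  # -> 2
```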
107

Hardware Implementation Of Inverse Transform & Quantization And Deblocking Filter For Low Power H.264 Decoder

Onsay, Onder 01 September 2009 (has links) (PDF)
Mobile devices such as PDAs and cellular phones have become an indispensable part of the business and entertainment world. The number of applications running on these devices tends to increase day by day, causing the devices to consume ever more battery power. H.264/AVC is an emerging video compression standard that is likely to be used widely in multimedia environments. As a mobile application, the H.264 video compression algorithm has a complex structure that increases the power demand of the hardware realizing it. To reduce this demand, the power-consuming parts of the algorithm, such as the deblocking filter and transform & quantization, must be specifically redesigned for low-power operation. A low-power deblocking filter and inverse transform/quantization algorithm for an H.264/AVC decoder is proposed and implemented on an FPGA.
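A minimal sketch of the scalar rescaling at the heart of H.264 inverse quantization: the quantizer step size doubles for every increase of 6 in QP. The base step-size table below is the standard one; real H.264 dequantization additionally applies a position-dependent scaling matrix before the integer inverse transform, which is omitted here.

```python
QSTEP_BASE = [0.625, 0.6875, 0.8125, 0.875, 1.0, 1.125]  # Qstep for QP = 0..5

def inverse_quantize(level, qp):
    """Rescale one quantized coefficient: Qstep doubles every 6 QP steps."""
    return level * QSTEP_BASE[qp % 6] * (2 ** (qp // 6))

print(inverse_quantize(10, 28))  # QP 28 -> Qstep = 1.0 * 2^4 = 16, so 160.0
```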
108

Design and implementation of a test tool for the GSM traffic channel. / Design och implementation av ett testverktyg för GSM talkanal.

Öjerteg, Theo January 2002 (has links)
Today's telecommunication systems are becoming more and more complex, and automatic testing is required to guarantee the quality of the systems produced; a current example is the introduction of GPRS traffic in GSM network nodes. This thesis investigates the need for, and the requirements of, such automatic testing of the traffic channels in the GSM system, and proposes a solution intended to be part of the Ericsson TSS. One problem to be solved is that today's test tools do not support testing of speech channels with the speech transcoder unit installed. As part of the investigation, a speech codec was implemented for execution on the current hardware of the test platform. The selected codec is the enhanced full rate codec, which generates a bitstream of 12.2 kbit/s and gives a good trade-off between compression and speech quality. The report covers the design of the test tool and the implementation of the speech codec; performance problems in the implementation of the encoder are addressed in particular.
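As a quick check of the quoted 12.2 kbit/s: the enhanced full rate codec encodes each 20 ms speech frame into 244 bits, a widely documented figure stated here as an assumption rather than taken from the thesis.

```python
BITS_PER_FRAME = 244   # assumed EFR payload per speech frame
FRAME_SECONDS = 0.020  # one speech frame every 20 ms

rate_kbit = BITS_PER_FRAME / FRAME_SECONDS / 1000
print(rate_kbit, "kbit/s")  # -> 12.2
```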
109

Σχεδίαση κωδικοποιητή-αποκωδικοποιητή Reed-Solomon / Design of a Reed-Solomon encoder-decoder

Ρούδας, Θεόδωρος 03 August 2009 (has links)
This work concerns a particular family of error detection and correction codes, the Reed-Solomon codes. Such codes are used in telecommunications applications (wireline telephony, digital television, broadband wireless communications) and in digital storage systems (optical and magnetic disks). Reed-Solomon codes are based on a special category of numerical fields, the Galois fields. The work comprises a study of the properties of Galois fields and the design of an encoder-decoder for Reed-Solomon codes. The design was implemented in hardware in the Verilog HDL language, and the circuits were synthesized for both Field Programmable Gate Array (FPGA) and Application-Specific Integrated Circuit (ASIC) technologies. The design follows the Intellectual Property core (IP core) methodology for integrated circuits, under which a design is platform-independent and can be realized with minimal or no changes in different technologies; the IP-core model is widely applied in Systems on Chip.
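To make the Galois-field arithmetic concrete, here is a minimal software model of multiplication in GF(2^8) using the primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D), a common choice for Reed-Solomon codecs and an assumption here; the thesis implements the equivalent operation in Verilog.

```python
def gf256_mul(a, b, poly=0x11D):
    """Multiply a and b in GF(2^8): carry-less multiply, reduced modulo poly."""
    result = 0
    while b:
        if b & 1:
            result ^= a     # add (XOR) the current shifted copy of a
        a <<= 1
        if a & 0x100:       # a degree-8 term appeared: reduce it away
            a ^= poly
        b >>= 1
    return result

print(hex(gf256_mul(0x57, 0x83)))  # -> 0x31 with this polynomial
```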
