  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Universality for Multi-terminal Problems via Spatial Coupling

Yedla, Arvind 2012 August (has links)
Consider the problem of designing capacity-achieving codes for multi-terminal communication scenarios. For point-to-point communication problems, one can optimize a single code to approach capacity, but for multi-terminal problems this translates to optimizing a single code to perform well over the entire region of channel parameters. A coding scheme is called universal if it allows reliable communication over the entire achievable region promised by information theory. It was recently shown that terminated low-density parity-check convolutional codes (also known as spatially-coupled low-density parity-check ensembles) have belief-propagation thresholds that approach their maximum a-posteriori thresholds. This phenomenon, called "threshold saturation via spatial coupling," was proven first for binary erasure channels and then for binary memoryless symmetric channels. It provides a new paradigm for constructing capacity-approaching codes. It was also conjectured that the principle of spatial coupling is very general and that the phenomenon of threshold saturation applies to a very broad class of graphical models. In this work, we consider a noisy Slepian-Wolf problem (with erasure and binary symmetric channel correlation models) and the binary-input Gaussian multiple-access channel, which deal with correlation between sources and interference at the receiver, respectively. We derive an area theorem for the joint decoder and show empirically that threshold saturation occurs for these multi-user scenarios. We also show that the outer bound derived using the area theorem is tight for the erasure Slepian-Wolf problem and that this bound is universal for regular LDPC codes with large left degrees. As a result, we demonstrate near-universal performance for these problems using spatially-coupled coding systems.
32

Design of low-density parity-check Codes for multiple-input multiple-output wireless systems

Brown, Raymond January 2009 (has links)
Masters Research - Masters of Engineering / Mobile telephony, wireless networks and wireless telemetry systems have evolved from simple single-input single-output wireless architectures with low data transmission rates to complex systems employing multiple antennas and forward error correction algorithms capable of high data transmission rates over wireless channels. Claude Shannon provided the fundamental capacity limits for a communications system, and it can be shown that the capacity of a single-input single-output system is limited in its capability to provide for modern wireless applications. The introduction of multiple-input multiple-output systems employing multiple antenna elements and orthogonal coding structures proved beneficial and could provide the capacities required for modern wireless applications. This thesis begins with an introduction and overview of space-time coding and the codes of Tarokh, Jafarkhani and Alamouti. Further, this thesis provides an introduction and overview of the family of forward error correction codes known as low-density parity-check (LDPC) codes. LDPC codes, when employed over Gaussian channels, provide near-Shannon-limit performance, and the question is posed as to their suitability for a wireless multiple-input multiple-output system employing multiple antennas and space-time coding. This question is answered by the use and demonstration of LDPC codes as outer codes to a MIMO system employing space-time block codes and a modified maximum-likelihood decoder. By modifying the space-time block-code decoder to provide a soft-information output, iterative decoders such as the sum-product algorithm can be employed to provide significant performance gains over a Rayleigh flat-fading channel. Further, design tools such as EXIT charts can then be used to design codes.
The key to allowing the use of EXIT charts is the observation that a MIMO system employing orthogonal transmissions in a Rayleigh flat-fading channel is equivalent to a SISO channel employing Nakagami-m fading coefficients. The seemingly complex MIMO system can then be analyzed in the form of a simpler SISO equivalent, allowing techniques such as EXIT charts to be employed in order to design codes with known and predictable performance characteristics. This thesis demonstrates this technique, shows by example the performance gains that can be achieved for MIMO systems, and opens some further questions for future research.
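The MIMO-to-SISO equivalence invoked above can be checked numerically. The sketch below (an illustration, not code from the thesis) performs noiseless Alamouti combining for a 2×1 system, showing that the orthogonal structure collapses the channel to a scalar gain, and then estimates the Nakagami fading figure m of that gain over many Rayleigh realizations, which should come out close to m = 2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Alamouti combining for a 2x1 system (noiseless, to expose the structure)
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
s1, s2 = 1 + 1j, -1 + 1j                       # two example QPSK symbols
r1 = h1 * s1 + h2 * s2                         # received in slot 1
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)      # received in slot 2
g = abs(h1) ** 2 + abs(h2) ** 2                # effective scalar channel gain
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g

# Over many Rayleigh realizations, |h1|^2 + |h2|^2 is Gamma-distributed,
# i.e. the effective amplitude is Nakagami-m with fading figure m = 2.
h = (rng.normal(size=(200_000, 2)) + 1j * rng.normal(size=(200_000, 2))) / np.sqrt(2)
gains = np.sum(np.abs(h) ** 2, axis=1)
m_hat = gains.mean() ** 2 / gains.var()        # moment estimate of m
```

The combining step recovers both symbols exactly (scaled by g), and the estimated fading figure m_hat ≈ 2 matches the Nakagami-m equivalence used to justify the EXIT-chart analysis.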
34

Decodificação híbrida para códigos LDPC

Guimarães, Walter Prado de Souza 22 February 2013 (has links)
Low-Density Parity-Check (LDPC) codes are a family of codes defined by sparse parity-check matrices that achieve excellent performance on the additive white Gaussian noise (AWGN) channel. Owing to these properties, they have been widely adopted for channel coding in satellite transmission systems, mobile telephone systems and digital TV broadcasting. The success of these codes is due to their graph representation, to the use of simplified construction methods and to the iterative decoding process. This thesis introduces a hybrid iterative decoding method which, unlike most existing schemes, combines error correction with erasure correction on AWGN channels as a way to improve LDPC performance on such channels. The approach targets the error-floor region of LDPC codes, where most error patterns have small cardinality and result from what are known as trapping sets. Aspects of the behavior and optimized operation of hybrid iterative decoding are explored and discussed. To confirm the effectiveness of the proposed technique, computer simulation results for the LDPC codes of the IEEE 802.11n standard are presented, together with the corresponding analysis.
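A generic building block behind such hybrid error-and-erasure schemes is erasure (peeling) decoding on the parity-check matrix: any check touching exactly one erased bit determines that bit. The sketch below runs it on a toy (7,4) Hamming code; it illustrates only the erasure-correction component, not the thesis's actual hybrid algorithm:

```python
import numpy as np

# Parity-check matrix of a toy (7,4) Hamming code
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def peel(H, bits, erased):
    """Erasure (peeling) decoder: repeatedly find a check touching exactly
    one erased bit and solve that bit from the parity constraint."""
    bits, erased = bits.copy(), erased.copy()
    progress = True
    while erased.any() and progress:
        progress = False
        for row in H:
            hit = np.where((row == 1) & erased)[0]
            if len(hit) == 1:                       # uniquely solvable check
                known = (row == 1) & ~erased
                bits[hit[0]] = bits[known].sum() % 2
                erased[hit[0]] = False
                progress = True
    return bits, erased

codeword = np.array([1, 1, 1, 0, 0, 0, 0])          # satisfies H @ c = 0 (mod 2)
mask = np.zeros(7, dtype=bool)
mask[[0, 3]] = True                                 # two erased positions
received = np.where(mask, 0, codeword)              # erased values are unknown
decoded, still_erased = peel(H, received, mask)
```

In a hybrid decoder of the kind described above, unreliable bits identified during iterative decoding can be treated as erasures and resolved by exactly this kind of peeling step.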
35

Peak-to-Average Power Ratio Reduced Parallel Interference Cancellation Multicarrier-Code Division Multiple Access System with Anti-Interference Property

Luo, Jun 09 July 2008 (has links)
Orthogonal Frequency-Division Multiplexing (OFDM) has proved to be a promising technology enabling the transmission of higher data rates. Multicarrier Code-Division Multiple Access (MC-CDMA) is a transmission technique which combines the advantages of both OFDM and Code-Division Multiple Access (CDMA), so as to allow high transmission rates over severely time-dispersive multi-path channels without the need for a complex receiver implementation. MC-CDMA also exploits frequency diversity via the different subcarriers, and therefore allows high-code-rate systems to achieve good Bit Error Rate (BER) performance. Furthermore, the spreading in the frequency domain makes the time-synchronization requirement much lower than in traditional direct-sequence CDMA schemes. Some problems remain when using MC-CDMA. One is the high Peak-to-Average Power Ratio (PAPR) of the transmit signal. High PAPR leads to nonlinear distortion in the amplifier and results in inter-carrier self-interference plus out-of-band radiation. Suppressing the Multiple Access Interference (MAI) is another crucial problem in MC-CDMA systems. Imperfect cross-correlation characteristics of the spreading codes and multipath fading destroy the orthogonality among users and cause MAI, which produces serious BER degradation. Moreover, in the uplink the signals received at a base station are always asynchronous; this too destroys the orthogonality among users and generates MAI, degrading system performance. Beyond these two problems, external interference must always be considered seriously in any communication system. In this dissertation, we design a novel MC-CDMA system with low PAPR and mitigated MAI. New semi-blind channel estimation and multi-user data detection based on Parallel Interference Cancellation (PIC) are applied in the system.
Low-Density Parity-Check (LDPC) codes are also introduced into the system to improve its performance. Different interference models are analyzed in multi-carrier communication systems, and effective interference suppression for MC-CDMA systems is then employed in this dissertation. The experimental results indicate that our system not only significantly reduces the PAPR and MAI but also effectively suppresses outside interference with low complexity. Finally, we present a practical cognitive application of the proposed system on a software-defined radio platform.
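The PAPR problem mentioned above is easy to reproduce: a multicarrier symbol is an inverse FFT of the data, and its peak instantaneous power can far exceed its average power. A minimal sketch with illustrative parameters (64 QPSK-modulated subcarriers, not the system from the dissertation):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                         # number of subcarriers
# Random QPSK data on each subcarrier, unit average power
data = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(data) * np.sqrt(N)             # time-domain multicarrier symbol
papr = np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2)
papr_db = 10 * np.log10(papr)                  # typically several dB above 0
```

For random data the PAPR of such a symbol is routinely 6-12 dB (the worst case for N carriers is 10·log10(N) ≈ 18 dB here), which is what drives the amplifier nonlinearity issues and the PAPR-reduction design discussed in the abstract.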
36

Parallellisering i CUDA av LDPC-avkodningsalgoritmen MSA, för NVIDIA:s GPU:er / Parallellization of the LDPC decoding algorithm MSA, using CUDA for NVIDIA GPUs

Lindbom, David, Pettersson, Jonathan January 2023 (has links)
In today's society, most mobile devices are connected to a base station. Large amounts of information are expected to be transferred from the phone to the base station without any disturbance for the user. This can be facilitated by using a bit-error corrector such as the Min-Sum Algorithm (MSA) to decode Low-Density Parity-Check (LDPC) codes. The algorithm works by performing four steps: initialization, row operation, column operation and decision operation. Instead of performing these steps on a Central Processing Unit (CPU), the process is made more efficient by exploiting the Graphics Processing Unit's (GPU) ability to parallelize; the optimization is done using Compute Unified Device Architecture (CUDA). The results show an 89% improvement in execution time for bit-error correction by using GPUs instead of CPUs.
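The four steps of the Min-Sum Algorithm named in the abstract (initialization, row operation, column operation, decision) can be sketched on a CPU as follows. The parity-check matrix here is a toy (7,4) code chosen for brevity; it is not low-density and is not the code from the thesis, and the channel LLRs are illustrative:

```python
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def min_sum(H, llr, max_iter=20):
    m, n = H.shape
    V = H * llr                          # 1) initialization: V2C messages = channel LLRs
    for _ in range(max_iter):
        C = np.zeros_like(V)
        for i in range(m):               # 2) row operation (check-node update)
            idx = np.where(H[i])[0]
            for j in idx:
                others = V[i, idx[idx != j]]
                C[i, j] = np.prod(np.sign(others)) * np.min(np.abs(others))
        post = llr + C.sum(axis=0)       # 3) column operation (variable-node update)
        hard = (post < 0).astype(int)    # 4) decision
        if not (H @ hard % 2).any():     # syndrome check: stop when all parities hold
            return hard
        V = H * (post - C)               # extrinsic V2C: subtract own check's message
    return hard

# All-zero codeword sent; bit 2 received in error with a weak LLR
decoded = min_sum(H, np.array([4.0, 4.0, -1.0, 4.0, 4.0, 4.0, 4.0]))
```

The row and column operations above are independent across checks and variables, which is exactly the parallelism the thesis exploits by mapping them onto CUDA thread blocks.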
37

Iterative decoding beyond belief propagation for low-density parity-check codes / Décodage itératif pour les codes LDPC au-delà de la propagation de croyances

Planjery, Shiva Kumar 05 December 2012 (has links)
At the heart of modern coding theory lies the fact that low-density parity-check (LDPC) codes can be efficiently decoded by message-passing algorithms, traditionally based on the belief propagation (BP) algorithm.
The BP algorithm operates on a graphical model of a code known as the Tanner graph, and computes marginals of functions on the graph. While inference using BP is exact only on loop-free graphs (trees), BP still provides surprisingly close approximations to exact marginals on loopy graphs, and LDPC codes can asymptotically approach Shannon's capacity under BP decoding. However, on finite-length codes whose corresponding graphs are loopy, BP is sub-optimal and therefore gives rise to the error-floor phenomenon. The error floor is an abrupt degradation in the slope of the error-rate performance of the code in the high signal-to-noise regime, where certain harmful structures generically termed trapping sets, present in the Tanner graph of the code, cause the decoder to fail. Moreover, the effects of finite precision introduced during hardware realizations of BP can further contribute to the error-floor problem. In this dissertation, we introduce a new paradigm for finite-precision iterative decoding of LDPC codes over the binary symmetric channel (BSC). These novel decoders, referred to as finite alphabet iterative decoders (FAIDs) to signify that the message values belong to a finite alphabet, are capable of surpassing BP in the error-floor region. The messages propagated by FAIDs are not quantized probabilities or log-likelihoods, and the variable node update functions do not mimic the BP decoder, in contrast to traditional quantized BP decoders. Rather, the update functions are simple maps designed to ensure a higher guaranteed error-correction capability by using knowledge of potentially harmful topologies that could be present in a given code. We show that on several column-weight-three codes of practical interest, there exist 3-bit precision FAIDs that can surpass BP (floating-point) in the error floor without any compromise in decoding latency.
Hence, they achieve superior performance compared to BP with only a fraction of its complexity. Additionally, we propose decimation-enhanced FAIDs for LDPC codes, where the technique of decimation is incorporated into the variable node update function of FAIDs. Decimation, which involves fixing certain bits of the code to a particular value during the decoding process, can significantly reduce the number of iterations required to correct a fixed number of errors while maintaining the good performance of a FAID, thereby making such decoders more amenable to analysis. We illustrate this for 3-bit precision FAIDs on column-weight-three codes. We also show how decimation can be used adaptively to further enhance the guaranteed error-correction capability of FAIDs that are already good on a given code. The new adaptive decimation scheme has marginally added complexity but can significantly improve the slope of the error-floor performance of a particular FAID. On certain high-rate column-weight-three codes of practical interest, we show that adaptive decimation-enhanced FAIDs can achieve a guaranteed error-correction capability close to the theoretical limit achieved by maximum-likelihood decoding.
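As an illustration of update functions that are "simple maps" over a finite alphabet rather than quantized BP rules, the toy sketch below implements a saturating 2-bit variable-node map for a column-weight-three code. This is a hypothetical map chosen for illustration only; the actual FAID rules in the dissertation are lookup tables designed around harmful graph topologies:

```python
import numpy as np

ALPHABET = np.array([-3, -1, 1, 3])      # 2-bit message levels (toy choice)

def vn_update(channel, m1, m2):
    """Toy FAID-style variable-node map for a column-weight-three code:
    saturating sum of the channel value and the two other incoming
    check-node messages, snapped back onto the finite alphabet."""
    s = np.clip(channel + m1 + m2, -3, 3)
    return ALPHABET[np.argmin(np.abs(ALPHABET - s))]
```

Because the map takes only a handful of discrete inputs, it can be stored as a small lookup table in hardware, which is what makes 3-bit FAIDs so much cheaper than floating-point BP while still being tunable against trapping sets.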
38

Optimisation des stratégies de décodage des codes LDPC dans les environnements impulsifs : application aux réseaux de capteurs et ad hoc / LDPC strategy decoding optimization in impulsive environments : sensors and ad hoc networks application

Ben Maad, Hassen 29 June 2011 (has links)
The goal of this PhD is to study the performance of LDPC codes in an environment where the interference generated by the network is not Gaussian but presents an impulsive behavior. A rapid study shows that, without precautions, the codes' performance degrades significantly. In a first step, we study different approaches for impulsive noise modeling. In the case of the multiple-access interference that disturbs communications in ad hoc or sensor networks, the choice of alpha-stable distributions is appropriate.
They generalize Gaussian distributions, are stable under convolution and can be theoretically justified in several contexts. We then determine the capacity of the α-stable environment and show, using an asymptotic method, that LDPC codes in such an environment are efficient, but that a simple linear operation on the received samples at the decoder input does not yield the expected good performance. Consequently, we propose several methods to obtain the likelihood ratios needed at the decoder input. The optimal solution is highly complex to implement, so we studied several other approaches, in particular clipping, for which we proposed several ways to determine the optimal parameters.
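Why clipping helps can be seen from the exact LLR under heavy-tailed noise. For Cauchy noise (the α = 1 stable case, which has a closed-form density), the true LLR of a BPSK symbol decays back toward zero for large observations, so the Gaussian-style linear LLR wildly overweights impulsive samples; clipping the linear LLR is a cheap approximation of that saturation. A minimal sketch with an illustrative unit dispersion parameter (not the thesis's optimized values):

```python
import numpy as np

def cauchy_llr(y, gamma=1.0):
    """Exact BPSK LLR when the additive noise is Cauchy (alpha-stable, alpha = 1):
    log f(y - 1) / f(y + 1) for a Cauchy density with dispersion gamma."""
    return np.log((gamma**2 + (y + 1)**2) / (gamma**2 + (y - 1)**2))

def clipped_linear_llr(y, sigma2=1.0, T=2.0):
    """Gaussian-style linear LLR, clipped to [-T, T] to tame impulsive samples."""
    return np.clip(2 * y / sigma2, -T, T)
```

Note that the exact Cauchy LLR is largest for moderate observations and shrinks again for huge ones, so a very large received sample carries almost no reliability; an unclipped linear LLR would instead hand the decoder an enormous, and misleading, confidence value.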
39

Turbo égalisation à haute performance pour la transmission par satellite au-delà de la cadence de Nyquist / High performance turbo equalisation for faster-than-Nyquist satellite communications

Abelló Barberán, Albert 15 November 2018 (has links)
In order to increase the spectral efficiency of digital communications systems, the faster-than-Nyquist (FTN) approach increases the symbol rate beyond the occupied bandwidth of the transmitted signal, independently of the constellation type and size.
It has been shown that the information rates of FTN systems are greater than those of Nyquist systems. However, non-compliance with the Nyquist criterion causes inter-symbol interference to appear, and appropriate reception techniques must therefore be used. At reception, the channel-shortening approach consists of a receiving filter followed by a BCJR algorithm computing approximate a posteriori symbol probabilities over a modified channel response of reduced length. In the literature, the channel-shortening receiving filters are chosen to maximize the generalized mutual information (GMI); this optimization is performed using numerical methods. In this PhD thesis, we propose a closed-form solution for all channel-shortening filters under the GMI-maximization criterion. We show that the minimum mean square error (MMSE) equalizer is a particular case of the channel-shortening approach. Within the frame of turbo equalization, we then study an estimator allowing symbol a priori information to be obtained from the information provided by the decoder. Finally, we study the performance of the complete system with channel coding over an additive white Gaussian noise channel.
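The MMSE linear equalizer that the thesis identifies as a special case of channel shortening can be sketched directly from the Wiener solution. The channel taps and noise level below are illustrative stand-ins for an FTN-induced ISI response, not values from the thesis:

```python
import numpy as np

def mmse_equalizer(h, noise_var, taps=11, delay=5):
    """Wiener (MMSE) linear equalizer for a known FIR channel h,
    assuming unit-power uncorrelated transmitted symbols."""
    L = len(h)
    # Convolution matrix mapping the symbol vector onto the received vector
    H = np.zeros((taps, taps + L - 1))
    for k in range(taps):
        H[k, k:k + L] = h
    R = H @ H.T + noise_var * np.eye(taps)   # autocorrelation of the received vector
    p = H[:, delay]                          # cross-correlation with the target symbol
    return np.linalg.solve(R, p)             # w = R^{-1} p

h = np.array([1.0, 0.6, 0.3])    # hypothetical ISI response (e.g. FTN-induced)
w = mmse_equalizer(h, noise_var=0.1)
combined = np.convolve(w, h)     # overall symbol-to-output response
```

The combined response w * h concentrates its energy in a single dominant tap at the chosen delay; channel shortening generalizes this by shaping the residual response to a short target that a low-complexity BCJR can then handle.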
40

Codes LDPC multi-binaires hybrides et méthodes de décodage itératif

Sassatelli, Lucile 03 October 2008 (has links) (PDF)
This thesis deals with the analysis and design of channel codes defined by sparse graphs. The goal is to construct codes with very good performance over wide ranges of signal-to-noise ratio when decoded iteratively. In the first part, a new class of LDPC codes, called hybrid LDPC codes, is introduced. This class is analyzed for memoryless symmetric channels, leading to parameter optimization for the binary-input Gaussian channel. The resulting hybrid LDPC codes not only have good convergence properties but also a very low error floor for codeword lengths below three thousand bits, thus competing with multi-edge LDPC codes. Hybrid LDPC codes therefore achieve an interesting trade-off between convergence region and error floor using non-binary coding techniques. The second part of the thesis studies what machine-learning methods can contribute to the design of good codes and good iterative decoders for short codeword lengths. We first investigated how to construct a code by removing edges from the Tanner graph of a mother code, following a learning algorithm, with the aim of optimizing the minimum distance. We then considered designing an iterative decoder by machine learning, in order to obtain better results than the BP decoder, which becomes sub-optimal as soon as the code graph contains cycles. In the third part of the thesis, we turned to quantized decoding with the same goal: finding decoding rules capable of decoding difficult error configurations. We proposed a class of decoders using two quantization bits for the decoder messages.
We proved sufficient conditions for an LDPC code with column weight four, whose graph has girth at least six, to correct any pattern of three errors. These conditions show that decoding with this two-bit rule guarantees a three-error correction capability for codes of higher rate than a one-bit decoding rule allows.
