171

Model optického komunikačního systému na principu OFDM / Model of optical communication system based on OFDM

Fíla, Lukáš January 2012 (has links)
The work explores ways to generate the OFDM signal and LDPC channel coding methods. It describes the creation of the basic modules of the communication system in Matlab and the simulation methods for the atmospheric transmission environment, including the effects of turbulence, attenuation along the route, and weather conditions on the transmitted signal.
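As a point of reference only, the following Python/NumPy sketch shows the textbook core of OFDM signal generation that such Matlab modules typically build on: mapping data symbols onto subcarriers with an IFFT and prepending a cyclic prefix. The subcarrier count, cyclic-prefix length, and 4-QAM mapping are illustrative assumptions, not parameters taken from the thesis.

```python
import numpy as np

def ofdm_modulate(qam_symbols, n_subcarriers=64, cp_len=16):
    """Map QAM symbols onto subcarriers with an IFFT and prepend a cyclic prefix.

    A minimal sketch of the basic OFDM transmitter step; the subcarrier count
    and cyclic-prefix length are illustrative, not taken from the thesis.
    """
    # Zero-pad the block of data symbols up to one OFDM symbol
    block = np.zeros(n_subcarriers, dtype=complex)
    block[:len(qam_symbols)] = qam_symbols

    # The inverse FFT turns frequency-domain symbols into a time-domain waveform
    time_domain = np.fft.ifft(block)

    # Cyclic prefix: copy the tail of the symbol to its front to absorb channel delay spread
    return np.concatenate([time_domain[-cp_len:], time_domain])

# Example: one OFDM symbol carrying random 4-QAM data on 48 of the 64 subcarriers
bits = np.random.randint(0, 2, size=(48, 2))
qam = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
tx_symbol = ofdm_modulate(qam)
print(tx_symbol.shape)  # (80,) = 64 samples + 16-sample cyclic prefix
```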
172

Non-Binary Coded Modulation for FMF-Based Coherent Optical Transport Networks

Lin, Changyu January 2016 (has links)
The Internet has fundamentally changed the way of modern communication. Current trends indicate that high-capacity demands are not going to be saturated anytime soon. From Shannon's theory, we know that information capacity is a logarithmic function of signal-to-noise ratio (SNR), but a linear function of the number of dimensions. Ideally, we could increase the capacity by increasing the launch power; however, the nonlinear characteristics of silica optical fiber impose a constraint on the maximum achievable optical signal-to-noise ratio (OSNR). There thus exists a nonlinear capacity limit on the standard single-mode fiber (SSMF). In order to satisfy never-ending capacity demands, several attempts have been made to employ additional degrees of freedom in the transmission system, such as few-mode fibers (FMFs), which can dramatically improve the spectral efficiency. On the other hand, for given physical links and network equipment, an effective way to relax the OSNR requirement is forward error correction (FEC), in response to the demand for high-speed reliable transmission. In this dissertation, we first discuss a model of the FMF that takes nonlinear effects into account. Secondly, we simulate the FMF-based OFDM system with various compensation and modulation schemes. Thirdly, we propose tandem-turbo-product nonbinary byte-interleaved coded modulation (BICM) for next-generation high-speed optical transmission systems. Fourthly, we study the Q factor and mutual information as thresholds in the BICM scheme. Lastly, an experimental study of the limits of nonlinearity compensation with digital signal processing has been conducted.
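For context, the capacity argument in the abstract (logarithmic in SNR but linear in the number of dimensions) can be illustrated with a few lines of Python; the SNR values and mode count below are arbitrary examples, not figures from the dissertation.

```python
import numpy as np

def capacity_bits_per_channel_use(snr_db, n_dimensions=1):
    """Shannon capacity per channel use: linear in dimensions, logarithmic in SNR."""
    snr = 10 ** (snr_db / 10)
    return n_dimensions * np.log2(1 + snr)

# Doubling the SNR (+3 dB) adds at most ~1 bit; tripling the dimensions triples capacity.
print(capacity_bits_per_channel_use(20))      # ~6.7 bits, single mode at 20 dB SNR
print(capacity_bits_per_channel_use(23))      # ~7.7 bits, +3 dB of SNR
print(capacity_bits_per_channel_use(20, 3))   # ~20 bits, 3 spatial modes
```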
173

Nonbinary-LDPC-Coded Modulation Schemes for High-Speed Optical Communication Networks

Arabaci, Murat January 2010 (has links)
IEEE recently finished its ratification of the IEEE 802.3ba standard in June 2010, setting the target Ethernet speed at 100 Gbps. Studies of future trends in the ever-increasing demand for higher-speed optical fiber communications show no sign of decline. Constantly increasing Internet traffic and bandwidth-hungry multimedia services such as HDTV, YouTube, and voice-over-IP are the main driving forces. Indeed, discussions over future upgrades of the Ethernet speed have already been initiated. It is predicted that the next upgrade will enable 400 Gbps Ethernet and the one after will be toward enabling the astounding 1 Tbps Ethernet. Although such high and ultra-high transmission speeds are unprecedented over any transmission medium, the bottlenecks for achieving them over optical fiber remain fundamental. At such high operating symbol rates, the signal impairments due to inter- and intra-channel fiber nonlinearities and polarization mode dispersion are exacerbated to levels that cripple high-fidelity communication over optical fibers. Therefore, efforts should be exerted to provide solutions that not only answer the need for high-speed transmission but also maintain low operating symbol rates. In this dissertation, we contribute to these efforts by proposing nonbinary-LDPC-coded modulation (NB-LDPC-CM) schemes as enabling technologies that can meet both of the aforementioned goals. We show that our proposed NB-LDPC-CM schemes can outperform their prior-art binary counterparts, called bit-interleaved coded modulation (BI-LDPC-CM) schemes, while attaining the same aggregate bit rates at lower complexity and latency. We provide a comprehensive analysis of the computational complexity of both schemes to support our claims. We also compare the performance of both schemes in amplified spontaneous emission (ASE) noise-dominated and short- to medium-haul optical fiber transmission scenarios. In both applications the NB-LDPC-CM schemes outperform the prior-art BI-LDPC-CM schemes, with the coding-gain gap growing as the transmission speed increases. Furthermore, we present how a rate-adaptive NB-LDPC-CM can be employed to fully utilize the resources of a long-haul optical transport network throughout its service time.
174

Practical Robust MIMO OFDM Communication System for High-Speed Mobile Communication

Grabner, Mitchell John James 05 1900 (has links)
This thesis presents the design of a communication system (PRCS) that improves on the current state-of-the-art 4G communication system, Long Term Evolution (LTE), in all aspects, including peak-to-average power ratio (PAPR), data reliability, spectral efficiency, and complexity, by combining the most recent research in the field with novel implementations. This research is relevant and important to the field of electrical and communication engineering because it provides benefits to consumers in the form of more reliable data at higher speeds as well as a reduced burden on hardware original equipment manufacturers (OEMs). The results presented herein show up to a 3 dB reduction in PAPR, a bit error rate below 10^-5 at 7.5 dB signal-to-noise ratio (SNR) using 4-QAM, up to 3 times higher throughput in the uplink mode, and a 10-fold reduction in channel coding complexity.
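As an illustration of the PAPR metric quoted above (not of the PRCS reduction technique itself), the following sketch measures the peak-to-average power ratio of a single OFDM symbol; the subcarrier count and modulation are assumed for the example.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband waveform, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Illustrative measurement on one random 4-QAM OFDM symbol with 64 subcarriers
rng = np.random.default_rng(0)
qam = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
ofdm_symbol = np.fft.ifft(qam)
print(f"PAPR = {papr_db(ofdm_symbol):.2f} dB")
```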
175

Conception du décodeur NB-LDPC à débit ultra-élevé / Design of ultra high throughput rate NB-LDPC decoder

Harb, Hassan 08 November 2018 (has links)
Les codes correcteurs d'erreurs Non-Binaires Low Density Parity Check (NB-LDPC) sont connus pour avoir de meilleures performances que les codes LDPC binaires. Toutefois, la complexité de décodage des codes non-binaires est bien supérieure à celle des codes binaires. L'objectif de cette thèse est de proposer de nouveaux algorithmes et de nouvelles architectures matérielles pour le décodage des codes NB-LDPC. La première contribution de cette thèse consiste à réduire la complexité du nœud de parité en triant en amont ses messages d'entrée. Ce tri initial permet de rendre certains états très improbables et le matériel requis pour les traiter peut tout simplement être supprimé. Cette suppression se traduit directement par une réduction de la complexité du décodeur NB-LDPC, et ce, sans affecter significativement les performances de décodage. Un modèle d'architecture, appelé "architecture hybride", qui combine deux algorithmes de l'état de l'art ("l'Extended Min Sum" et le "Syndrome Based"), a été proposé afin d'exploiter au maximum le pré-tri. La thèse propose aussi de nouvelles méthodes pour traiter les nœuds de variable dans le contexte d'une architecture à pré-tri. Différents exemples d'implémentations sont donnés pour des codes NB-LDPC sur GF(64) et GF(256). En particulier, une architecture très efficace de décodeur pour un code de rendement 5/6 sur GF(64) est présentée. Cette architecture se caractérise par un nœud de parité entièrement parallèle. Enfin, une problématique récurrente dans les architectures NB-LDPC, la recherche des P minimums parmi une liste de taille Ns, est abordée. La thèse propose une architecture originale appelée "first-then-second minimum" pour une implantation efficace de cette tâche. / Non-Binary Low-Density Parity-Check (NB-LDPC) codes constitute an interesting class of error-correcting codes and are well known to outperform their binary counterparts. However, their non-binary nature makes the decoding process more complex. This PhD thesis aims at proposing new decoding algorithms for NB-LDPC codes and the resulting hardware architectures, which are expected to be of low complexity and high throughput. The first contribution of this thesis is to reduce the complexity of the Check Node (CN) by minimizing the number of messages being processed. This is done through a pre-sorting process that orders the messages entering the CN by their reliability values; the less likely messages are omitted, and the hardware dedicated to them can simply be removed. This reliability-based sorting, which restricts processing to the most reliable messages, yields a large reduction in the hardware complexity of the NB-LDPC decoder. Clearly, this hardware reduction must come at no significant performance degradation. A new hybrid architectural CN model (H-CN) combining two state-of-the-art algorithms - Forward-Backward CN (FB-CN) and Syndrome-Based CN (SB-CN) - has been proposed. This hybrid model makes it possible to effectively exploit the advantages of pre-sorting. The thesis also proposes new methods to perform the Variable Node (VN) processing in the context of a pre-sorting-based architecture. Different implementation examples are given for NB-LDPC codes defined over GF(64) and GF(256). For the decoder to run faster, it must become parallel; from this perspective, we propose a new efficient parallel decoder architecture for a rate-5/6 NB-LDPC code defined over GF(64). This architecture is characterized by a fully parallel CN architecture receiving all input messages in a single clock cycle. The proposed methodology for the parallel implementation of NB-LDPC decoders opens a new vein in the hardware design of ultra-high-throughput decoders. Finally, since NB-LDPC decoders require a sorting function that extracts the P minimum values from a list of size Ns, a chapter is dedicated to this problem, in which an original architecture called First-Then-Second-Extrema-Selection (FTSES) is proposed.
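Purely as an illustration of the pre-sorting idea described above (rank the messages entering the check node by reliability and drop the least likely ones so the hardware that would process them can be removed), here is a minimal Python sketch; the message representation and the number of retained candidates are assumptions, not the thesis's EMS/syndrome-based datapath.

```python
import random

def presort_and_truncate(candidates, n_keep=4):
    """Rank candidate messages by reliability and keep only the strongest ones.

    Mirrors, at an illustrative level only, the abstract's idea that the least
    likely messages entering the check node are omitted so the hardware that
    would process them can be removed. The message format is an assumption.
    """
    ranked = sorted(candidates, key=lambda m: m["reliability"], reverse=True)
    return ranked[:n_keep]

# Illustrative: candidate (symbol, reliability) messages arriving at one GF(64) check node
random.seed(1)
incoming = [{"edge": i, "symbol": random.randrange(64), "reliability": random.uniform(0.0, 10.0)}
            for i in range(12)]
kept = presort_and_truncate(incoming)
print([m["edge"] for m in kept])   # the 4 most reliable edges survive; the rest are dropped
```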
176

Correção de apagamentos em rajadas utilizando códigos LDPC gerados pela composição de matrizes bases e pelos movimentos de matrizes circulantes

SILVA, Cássio André Sousa da 21 October 2016 (has links)
Nesta tese são propostos procedimentos para a construção de matrizes de verificação de paridade para codificação e decodificação de códigos LDPC (low-density parity-check) na recuperação de bits apagados no canal com apagamentos em rajada. As matrizes de verificação de paridade são produzidas por concatenação das matrizes bases binárias justapostas por matrizes circulantes, sendo de fácil implementação e de menor aleatoriedade. As matrizes bases são desenvolvidas a partir de fundamentos da álgebra e da geometria. Para demonstrar o potencial da técnica foi elaborado um conjunto de simulações que usa codificação de baixa complexidade, bem como o algoritmo soma-produto para recuperar os apagamentos. Foram gerados vários códigos LDPC a partir das matrizes, e os resultados obtidos foram comparados com outros códigos LDPC obtidos da literatura. São ainda apresentados os resultados da simulação da recuperação de apagamentos resultantes da transmissão de uma imagem através de um canal ruidoso. / This thesis proposes procedures for constructing parity-check matrices for the encoding and decoding of LDPC codes to recover erased bits on the burst erasure channel. The parity-check matrices are produced by concatenating binary base matrices juxtaposed with circulant matrices, which makes them easy to implement and less random. The base matrices are developed from foundations of algebra and geometry. To demonstrate the potential of the technique, we developed a set of simulations using low-complexity encoding as well as the sum-product algorithm to recover the erasures. Several LDPC codes were generated from these matrices, and the results were compared with other LDPC codes from the literature. We also present simulation results for the recovery of erasures resulting from the transmission of an image through a noisy channel.
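The construction described above (a base matrix expanded with circulant blocks) can be sketched as follows; the example base matrix, circulant size, and shift values are illustrative assumptions, not the matrices developed in the thesis.

```python
import numpy as np

def expand_base_matrix(shifts, z):
    """Expand a base matrix of circulant shifts into a binary parity-check matrix.

    Each entry of `shifts` is either -1 (an all-zero z-by-z block) or a shift
    s in [0, z), which becomes the identity matrix cyclically shifted by s.
    The example base matrix below is illustrative, not taken from the thesis.
    """
    identity = np.eye(z, dtype=int)
    rows = []
    for row in shifts:
        blocks = [np.zeros((z, z), dtype=int) if s < 0 else np.roll(identity, s, axis=1)
                  for s in row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# Illustrative 2x4 base matrix with circulant size z = 4 -> an 8x16 sparse H
base = [[0, 1, -1, 2],
        [3, -1, 0, 1]]
H = expand_base_matrix(base, z=4)
print(H.shape)          # (8, 16)
print(H.sum(axis=1))    # each row carries one 1 per non-zero circulant block
```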
177

Conception d'architectures embarquées : des décodeurs LDPC aux systèmes sur puce reconfigurables

Verdier, François 05 December 2006 (has links) (PDF)
The research summarized in this document addresses two aspects of the design of embedded digital architectures for information-processing applications. The first axis concerns the study and design of architectural models for the channel decoders used in digital communications. The decoders studied are based on LDPC (Low Density Parity Check) codes, which in recent years have been adopted as error-correcting codes in several transmission standards. Particular attention is paid to the DVB-S2 standard for the broadcasting of multimedia programs. These decoder architectures implement algorithms whose hardware realizations rely on a fine balance between the degree of parallelism, the scheduling of the computations, and the amount of resources required. A study on reducing the complexity of non-binary LDPC decoding algorithms, as a prerequisite to defining an associated architecture, is also presented. The second research axis extends the problem to highly integrated system-on-chip (SoC) architectures offering flexibility, adaptability, and dynamic hardware reconfiguration. An embedded real-time operating system then becomes necessary to manage such architectures, making classical design methods unsuitable. This second axis therefore focuses on new methodologies for exploring and designing reconfigurable architectures. The modeling of embedded operating systems is addressed, as well as the design of applications and platforms for software-defined radio.
178

Efficient Decoding Algorithms for Low-Density Parity-Check Codes / Effektiva avkodningsalgoritmer för low density parity check-koder

Blad, Anton January 2005 (has links)
Low-density parity-check codes have recently received much attention because of their excellent performance and the availability of a simple iterative decoder. The decoder, however, requires large amounts of memory, which makes memory consumption a problem. We investigate a new decoding scheme for low-density parity-check codes to address this problem. The basic idea is to define a reliability measure and a threshold, and stop updating the messages for a bit whenever its reliability is higher than the threshold. We also consider some modifications to this scheme, including a dynamic threshold more suitable for codes with cycles, and a scheme with soft thresholds which allows the possibility of removing a decision that has proved wrong. By exploiting the bits' different rates of convergence, we are able to achieve an efficiency of up to 50% at a bit error rate of less than 10^-5. The efficiency should roughly correspond to the power consumption of a hardware implementation of the algorithm.
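A minimal sketch of the early-stopping idea described in the abstract, layered on a plain min-sum decoder: a bit is frozen (its messages stop being updated) once its reliability exceeds a threshold. The reliability measure (posterior LLR magnitude), the threshold value, and the toy code are assumptions for illustration, not the thesis's exact scheme.

```python
import numpy as np

def min_sum_decode_with_freezing(H, llr_in, threshold=10.0, max_iters=50):
    """Min-sum decoding in which a bit is frozen once its posterior LLR magnitude
    exceeds a threshold; frozen bits no longer update their outgoing messages."""
    m, n = H.shape
    edges = [(i, j) for i in range(m) for j in range(n) if H[i, j]]
    v2c = {e: llr_in[e[1]] for e in edges}            # variable-to-check messages
    frozen = np.zeros(n, dtype=bool)
    posterior = llr_in.astype(float)
    hard = (posterior < 0).astype(int)

    for _ in range(max_iters):
        # Check-to-variable messages (min-sum approximation)
        c2v = {}
        for i in range(m):
            row = [e for e in edges if e[0] == i]
            for e in row:
                others = [v2c[o] for o in row if o != e]
                sign = np.prod(np.sign(others))
                c2v[e] = sign * min(abs(x) for x in others)

        # Posterior LLRs; stop tracking bits whose reliability passes the threshold
        for j in range(n):
            if frozen[j]:
                continue
            posterior[j] = llr_in[j] + sum(c2v[e] for e in edges if e[1] == j)
            frozen[j] = abs(posterior[j]) > threshold

        hard = (posterior < 0).astype(int)
        if not (H @ hard % 2).any():                  # all parity checks satisfied
            break

        # Variable-to-check updates only for bits that are not frozen
        for e in edges:
            if not frozen[e[1]]:
                v2c[e] = posterior[e[1]] - c2v[e]
    return hard

# Tiny illustrative example: a (6,3) code with one weak, flipped channel value
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
llr = np.array([2.1, -0.4, 1.8, 2.5, 1.2, 0.9])
print(min_sum_decode_with_freezing(H, llr))           # recovers the all-zero codeword
```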
179

Joint Equalization and Decoding via Convex Optimization

Kim, Byung Hak 2012 May 1900 (has links)
The unifying theme of this dissertation is the development of new solutions for decoding and inference problems based on convex optimization methods. The first part considers the joint detection and decoding problem for low-density parity-check (LDPC) codes on finite-state channels (FSCs). Hard-disk drives (or magnetic recording systems), where the required error rate (after decoding) is too low to be verifiable by simulation, are the most important applications of this research. Recently, LDPC codes have attracted a lot of attention in the magnetic storage industry, and some hard-disk drives have started using iterative decoding. Despite progress in the area of reduced-complexity detection and decoding algorithms, there has been some resistance to the deployment of turbo-equalization (TE) structures (with iterative detectors/decoders) in magnetic-recording systems because of error floors and the difficulty of accurately predicting performance at very low error rates. To address this problem for channels with memory, such as FSCs, we propose a new decoding algorithm based on a well-defined convex optimization problem. In particular, it is based on the linear-programming (LP) formulation of the joint decoding problem for LDPC codes over FSCs. It exhibits two favorable properties: provable convergence and predictable error floors (via pseudo-codeword analysis). Since general-purpose LP solvers are too complex to make the joint LP decoder feasible for practical purposes, we develop an efficient iterative solver for the joint LP decoder by taking advantage of its dual-domain structure. The main advantage of this approach is that it combines the predictability and superior performance of joint LP decoding with the computational complexity of TE. The second part of this dissertation considers the matrix completion problem for the recovery of a data matrix from incomplete, or even corrupted, entries of an unknown matrix. Recommender systems are good representatives of this problem, and this research is important for the design of information retrieval systems which require very high scalability. We show that our IMP algorithm reduces the well-known cold-start problem associated with collaborative filtering systems in practice.
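For background on the LP formulation the abstract builds on, the following sketch implements plain Feldman-style LP decoding of a tiny LDPC code with scipy's linprog; it is not the joint detection/decoding over finite-state channels or the dual-domain iterative solver developed in the dissertation, and the code and LLR values are illustrative.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def lp_decode(H, llr):
    """Feldman-style LP decoding of a binary LDPC code.

    Minimizes sum_i llr_i * x_i over the fundamental polytope: box constraints
    plus, for every check and every odd-sized subset S of its neighborhood,
    sum_{i in S} x_i - sum_{i not in S} x_i <= |S| - 1.
    """
    m, n = H.shape
    A, b = [], []
    for chk in range(m):
        neigh = np.flatnonzero(H[chk])
        for size in range(1, len(neigh) + 1, 2):          # odd-sized subsets only
            for S in combinations(neigh, size):
                row = np.zeros(n)
                row[neigh] = -1.0
                row[list(S)] = 1.0
                A.append(row)
                b.append(len(S) - 1)
    res = linprog(c=llr, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, 1)] * n, method="highs")
    return res.x

# Tiny (6,3) example; an integral LP solution certifies the ML codeword,
# while a fractional one corresponds to a pseudo-codeword.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
llr = np.array([2.1, -0.4, 1.8, 2.5, 1.2, 0.9])
print(np.round(lp_decode(H, llr), 3))
```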
180

Vers une architecture optimisée d'ASIP pour turbo décodage multi-standard

AL KHAYAT, Rachid 16 November 2012 (has links) (PDF)
Systems-on-chip for digital communications are becoming extremely diverse and complex with the constant emergence of new standards and new applications. In this domain, the turbo decoder is one of the most demanding components in terms of computation, communication, and memory, and hence of power consumption. Besides ever-growing performance requirements, new digital communication systems impose multi-standard interoperability, which introduces the additional requirement of implementation flexibility. In this context, recent work has proposed the use of the Application-Specific Instruction-set Processor (ASIP) concept. Such an architectural model lets the designer freely tune the flexibility/performance trade-off required by the target application. However, the architectural efficiency of application-specific processors is directly tied to the defined instruction set and to the utilization rate of the pipeline stages. Most recently proposed works do not consider these aspects explicitly. This thesis therefore pursues the main objective of unifying the flexibility-oriented and the optimality-oriented approaches to channel decoder design. To this end, several contributions are proposed: (1) the design of a multi-standard ASIP-based turbo decoder achieving high architectural efficiency in bit/cycle/iteration/mm2; (2) optimization of the dynamic reconfiguration speed of the proposed ASIP, supporting all parameters specified in the 3GPP-LTE/WiMAX/DVB-RCS standards; (3) the design of low-complexity ARP and QPP interleavers for the butterfly decoding scheme with Radix-4 trellis compression; and (4) the proposal and implementation of an FPGA prototype of a complete communication system integrating the proposed multi-standard turbo decoder. In addition, a first contribution is made toward the design of a flexible and extensible multi-ASIP architecture supporting the decoding of both turbo codes and LDPC codes.
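As background on one of the components mentioned in item (3), a QPP interleaver is simply a quadratic permutation polynomial evaluated modulo the block length; the sketch below uses the (f1, f2) pair recalled for the smallest 3GPP LTE block size (K = 40), which should be treated as an assumption rather than a quotation of the standard.

```python
def qpp_interleave(f1, f2, K):
    """Quadratic permutation polynomial (QPP) interleaver: pi(i) = (f1*i + f2*i^2) mod K.

    QPP interleavers of this form are specified per block size in 3GPP LTE;
    the (f1, f2, K) triple used below is an illustrative choice quoted from
    memory for K = 40, not verified against the standard's tables.
    """
    return [(f1 * i + f2 * i * i) % K for i in range(K)]

perm = qpp_interleave(f1=3, f2=10, K=40)
print(sorted(perm) == list(range(40)))   # True: valid QPP parameters yield a permutation
print(perm[:8])                          # first few interleaved positions
```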
