  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Codage et traitements distribués pour les réseaux de communication / Distributed coding and computing for networks

Jardel, Fanny 11 January 2016 (has links)
This work is dedicated to the design, analysis, and performance evaluation of new coding schemes suitable for distributed storage systems. The first part is devoted to spatially coupled codes for erasure channels. A new method of spatial coupling for low-density parity-check (LDPC) ensembles is proposed, inspired by overlapped layered coding: edges of the local ensembles and edges defining the spatial coupling are built separately. We also propose to saturate the threshold of a Root-LDPC ensemble via spatial coupling of its parity bits to cope with quasi-static fading. Spatial coupling is then applied to a Root-LDPC ensemble with double diversity designed for a channel with four block-erasure states. In the second part of this work, we consider non-binary product codes with MDS components and their iterative row-column algebraic decoding on the erasure channel, under both independent and block erasures. A compact graph representation is introduced, on which we define double-diversity edge colorings via the rootcheck concept. Stopping sets are defined and fully characterized in the context of MDS components. A differential-evolution edge-coloring algorithm that produces colorings with a large population of minimal-rootcheck-order symbols is presented. The performance of MDS-based product codes with and without double-diversity coloring is analyzed in the presence of both block and independent erasures. Numerical results also show excellent performance under unequal erasure probability thanks to the double-diversity colorings.
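The iterative row-column algebraic decoding above admits a compact sketch on the erasure channel: an (n, k) MDS component fills in any pattern of at most n − k erasures, so decoding a product codeword reduces to peeling erasure flags off a grid. The following is our own minimal illustration (a square product code with identical row and column components; the function and variable names are ours, not from the thesis):

```python
def iterative_erasure_decode(erased, n, k, max_iters=100):
    """Peel erasures off an n x n product codeword with (n, k) MDS
    row and column components.  erased[r][c] is True when symbol (r, c)
    was erased; an MDS component corrects any pattern of at most n - k
    erasures, so a row or column with <= n - k holes can be filled in.
    Returns True when every erasure has been cleared."""
    t = n - k
    for _ in range(max_iters):
        progress = False
        for r in range(n):                      # row-decoding pass
            holes = [c for c in range(n) if erased[r][c]]
            if 0 < len(holes) <= t:
                for c in holes:
                    erased[r][c] = False
                progress = True
        for c in range(n):                      # column-decoding pass
            holes = [r for r in range(n) if erased[r][c]]
            if 0 < len(holes) <= t:
                for r in holes:
                    erased[r][c] = False
                progress = True
        if not progress:
            break
    return not any(any(row) for row in erased)
```

With (n, k) = (8, 6), a fully erased row is recovered by the column pass, while a 3 × 3 erased block is a stopping set in the sense characterized above: every row and column through it sees three erasures, one more than the components can fill.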
142

Simulations of photopumped x-ray lasers

Al'Miev, Il'dar Rifovich January 2000 (has links)
No description available.
143

Computer modelling of solidification of pure metals and alloys

Barkhudarov, Michael Rudolf January 1996 (has links)
Two numerical models have been developed to describe the volumetric changes during solidification in pure metals and alloys and to predict shrinkage defects in castings of general three-dimensional configuration. The first model is based on the full system of the Continuity, Navier-Stokes and Enthalpy Equations. Volumetric changes are described by introducing a source term in the Continuity Equation which is a function of the rate of local phase transformation. The model is capable of simulating both volumetric shrinkage and expansion. The second, simplified shrinkage model involves the solution of only the Enthalpy Equation. Simplifying assumptions, that the feeding flow is governed only by gravity and solidification rate and that phase transformation proceeds only from liquid to solid, allowed the fluid flow equations to be excluded from consideration. The numerical implementation of both models is based on an existing proprietary general-purpose CFD code, FLOW-3D, which already contains a numerical algorithm for incompressible fluid flow with heat transfer and phase transformation. An important part of the code is the Volume Of Fluid (VOF) algorithm for tracking multiple free surfaces. The VOF function is employed in both shrinkage models to describe shrinkage cavity formation. Several modifications to FLOW-3D have been made to improve the accuracy and efficiency of the metal/mould heat transfer and solidification algorithms. As part of the development of the upwind differencing advection algorithm used in the simulations, Leith's method is incorporated into the public-domain two-dimensional SOLA code. It is shown that the resulting scheme is unconditionally stable despite being explicit.
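For constant advection speed, Leith's method coincides with the familiar Lax-Wendroff update. A one-dimensional sketch of that constant-coefficient form follows (our own illustration, not code from the thesis; note that this plain form carries the usual |c| ≤ 1 CFL bound, whereas the thesis proves a stronger stability result for its variant):

```python
def leith_step(u, c):
    """One explicit step of Leith's advection scheme on a periodic 1-D
    grid.  For constant advection speed this reduces to Lax-Wendroff:
    second-order in space and time, with CFL number c = a*dt/dx."""
    n = len(u)
    out = [0.0] * n
    for i in range(n):
        up, um = u[(i + 1) % n], u[(i - 1) % n]
        out[i] = u[i] - 0.5 * c * (up - um) + 0.5 * c * c * (up - 2.0 * u[i] + um)
    return out
```

A quick sanity check: at c = 1 the update reproduces the exact upstream shift, and for any c the periodic sum of u is conserved, since the difference terms telescope.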
144

Low-density parity-check codes : construction and implementation.

Malema, Gabofetswe Alafang January 2007 (has links)
Low-density parity-check (LDPC) codes have been shown to have good error-correcting performance, approaching Shannon's limit. Good error-correcting performance enables efficient and reliable communication. However, an LDPC code decoding algorithm needs to be executed efficiently to meet the cost, time, power and bandwidth requirements of target applications. The constructed codes should also meet the error-rate performance requirements of those applications. Since their rediscovery, there has been much research work on LDPC code construction and implementation. LDPC codes can be designed over a wide space with parameters such as girth, rate and length. There is no unique method of constructing LDPC codes, and existing construction methods are limited in some way in producing codes that perform well and are easily implementable for a given rate and length. There is a need for methods of constructing codes over a wide range of rates and lengths with good performance and ease of hardware implementability. LDPC code hardware design and implementation depend on the structure of the target LDPC code and are as varied as LDPC matrix designs and constructions. Several factors must be considered, including decoding algorithm computations, the processing-node interconnection network, the number of processing nodes, the amount of memory, the number of quantization bits and the decoding delay; all of these issues can be handled in several different ways. This thesis is about the construction of LDPC codes and their hardware implementation. The construction and implementation issues mentioned above are too many to be addressed in one thesis; the main contribution of this thesis is the development of construction methods for some classes of structured LDPC codes, together with techniques for reducing decoding time. We introduce two main methods for constructing structured codes. In the first method, column-weight-two LDPC codes are derived from distance graphs. A wide range of girths, rates and lengths is obtained compared to existing methods. The performance and implementation complexity of the obtained codes depend on the structure of their corresponding distance graphs. In the second method, a search algorithm based on the bit-filling and progressive-edge-growth algorithms is introduced for constructing quasi-cyclic LDPC codes. The algorithm can be used to form a distance or Tanner graph of a code, and it also obtains codes over a wide range of parameters. Cycles of length four are avoided by observing the row-column constraint; row-column connections observing this condition are searched sequentially or randomly. Although the girth conditions are not sufficient beyond six, codes with larger girths were easily obtained, especially at low rates. The advantage of this algorithm compared to other methods is its flexibility: it can construct codes for a given rate and length with girth at least six for any sub-matrix configuration or rearrangement, and the code size is easily varied by increasing or decreasing the sub-matrix size. Codes obtained using a sequential search criterion show poor performance at low girths (6 and 8), while random searches result in good-performing codes. Quasi-cyclic codes can be implemented in a variety of decoder architectures; one of the many options is the choice of processing-node interconnect. We show how quasi-cyclic code processing can be scheduled through a multistage network. Although these networks have more delay than other modes of communication, they offer more flexibility at a reasonable cost; Banyan and Benes networks are suggested as the most suitable. Decoding delay is also one of several issues considered in decoder design and implementation. In this thesis, we overlap check-node and variable-node computations to reduce decoding time. Three techniques are discussed, two of which are introduced in this thesis: code matrix permutation, matrix space restriction and sub-matrix row-column scheduling. Matrix permutation rearranges the parity-check matrix such that rows and columns that have no connections in common are separated. This technique can be applied to any matrix; its effectiveness largely depends on the structure of the code, and we show that its success also depends on the size of the row and column weights. Matrix space restriction is another technique that can be applied to any code and has a fixed reduction in time or amount of overlap; its success depends on the amount of restriction and may be traded against performance loss. The third technique, already suggested in the literature, relies on the internal cyclic structure of the sub-matrices to achieve overlapping. That technique is limited to LDPC code matrices in which the number of sub-matrices is equal to the row and column weights; we show that it can be applied to other codes with a larger number of sub-matrices than code weights, although in this case maximum overlap is not guaranteed, and we calculate the lower bound on the amount of overlapping. Overlapping can be applied to any sub-matrix configuration of quasi-cyclic codes by arbitrarily choosing the starting rows for processing. Overlapping decoding time depends on inter-iteration waiting times; we show that there are upper bounds on the waiting times which depend on the code weights, and that waiting times can be further reduced by restricting shifts in identity sub-matrices or by using smaller sub-matrices. This overlapping technique can reduce the decoding time by up to 50% compared to conventional message and computation scheduling. The techniques of matrix permutation and space restriction result in decoder architectures that are flexible in LDPC code design in terms of code weights and size, because with these techniques rows and columns are processed in sequential order to achieve overlapping. In the existing technique, by contrast, all sub-matrices have to be processed in parallel to achieve overlapping, which requires the architecture to have at least as many processing units as sub-matrices; processing units and memory space must then be distributed among the sub-matrices according to their arrangement, leading to high complexity or inflexibility in the decoder architecture. We propose a simple, programmable and high-throughput decoder architecture based on the matrix permutation and space restriction techniques. / Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 2007
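The row-column constraint used above to avoid length-4 cycles is easy to state as a check on the parity-check matrix: no two columns may overlap in more than one row. A small sketch (our own helper, not from the thesis):

```python
from itertools import combinations

def satisfies_rc_constraint(H):
    """Row-column (RC) constraint on a binary parity-check matrix H
    (given as a list of rows): no pair of columns may share more than
    one row in which both contain a 1.  A matrix that passes contains
    no length-4 cycles, so its Tanner graph has girth at least 6."""
    cols = [{r for r, row in enumerate(H) if row[c]}
            for c in range(len(H[0]))]
    return all(len(a & b) <= 1 for a, b in combinations(cols, 2))
```

A construction algorithm of the kind described would add candidate row-column connections (sequentially or at random) and keep only those that preserve this property.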
145

Etudes de systèmes cryptographiques construits à l'aide de codes correcteurs, en métrique de Hamming et en métrique rang / Studies of cryptographic systems built from error-correcting codes, in the Hamming and rank metrics

Faure, Cédric 17 March 2009 (has links) (PDF)
This thesis studies two different approaches aimed at reducing the public-key size of cryptosystems based on error-correcting codes. A first idea in this direction is the use of code families with high correction capacity, such as algebraic-geometry codes. Since the attack of Sidelnikov and Shestakov, it has been known that an attacker can recover the structure of a Reed-Solomon code used in the public key. We have succeeded in adapting to hyperelliptic curves the attack developed by Minder against elliptic codes. In particular, we can attack in polynomial time the system of Janwa and Moreno built on algebraic-geometry codes of genus 2 or more. A second idea is the use of error-correcting codes for the rank metric. This considerably complicates decoding attacks, which can no longer use an information window to attempt decoding. One can thus guard against decoding attacks while using a small public key. With this in mind, we present a public-key cryptosystem based on the problem of reconstructing linear polynomials. We show that our system is fast, uses keys of reasonable size, and resists all attacks known in the state of the art.
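The rank metric mentioned here measures a vector over GF(2^m) not by how many symbols are nonzero but by the GF(2)-rank of its bit-expansion matrix, which is what deprives generic decoding attacks of a usable information window. A small illustration of computing rank weight (our own sketch; symbols are integers 0 ≤ s < 2^m under an arbitrary fixed basis):

```python
def gf2_rank(rows):
    """GF(2) rank of a matrix whose rows are given as integer bitmasks,
    by Gaussian elimination on leading bits."""
    pivots = {}                      # leading-bit position -> reduced row
    for row in rows:
        while row:
            lead = row.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = row
                break
            row ^= pivots[lead]      # eliminate the shared leading bit
    return len(pivots)

def rank_weight(symbols, m):
    """Rank weight of a vector over GF(2^m): expand each symbol into its
    m-bit column and return the GF(2) rank of the resulting m x n matrix."""
    rows = []
    for bit in range(m):
        mask = 0
        for j, s in enumerate(symbols):
            if (s >> bit) & 1:
                mask |= 1 << j
        rows.append(mask)
    return gf2_rank(rows)
```

For instance, a vector whose symbols are all equal has Hamming weight n but rank weight only 1, which hints at why rank-metric decoding problems behave so differently from Hamming-metric ones.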
147

Codigos convolucionais quanticos concatenados / Concatenated quantum convolutional codes

Almeida, Antonio Carlos Aido de 14 October 2004 (has links)
Advisor: Reginaldo Palazzo Junior / Thesis (doctorate) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação / Abstract: Decoherence is one of the major challenges facing the field of quantum computation. The field of quantum error correction has developed to meet this challenge. A group-theoretical structure and an associated class of quantum codes, the stabilizer codes, have proved particularly fruitful in producing codes and in understanding the structure of both specific codes and classes of codes. All stabilizer codes discovered so far are block codes. In this thesis we construct a class of concatenated quantum convolutional codes. We introduce the concept of quantum convolutional memory and some simple techniques to produce good quantum convolutional codes from classes of classical convolutional codes. / Doctorate in Electrical Engineering, Telecommunications and Telematics
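Since the construction starts from classical convolutional codes, it may help to recall what a classical convolutional encoder does. A minimal sketch follows (the standard rate-1/2 feedforward encoder with generators (7, 5) in octal; this is our illustration, not a code from the thesis):

```python
def conv_encode(bits, g1=0b111, g2=0b101, memory=2):
    """Rate-1/2 feedforward convolutional encoder with generator
    polynomials g1, g2 (default (7,5) in octal).  Each input bit shifts
    into the register and emits two parity bits; `memory` zero tail
    bits flush the register so the trellis ends in the all-zero state."""
    state, out = 0, []
    for b in list(bits) + [0] * memory:
        state = ((state << 1) | b) & ((1 << (memory + 1)) - 1)
        out.append(bin(state & g1).count("1") % 2)   # parity of g1 taps
        out.append(bin(state & g2).count("1") % 2)   # parity of g2 taps
    return out
```

Encoding a single 1 yields the impulse response 11 10 11, i.e. the generator taps themselves; the convolutional memory that the thesis lifts to the quantum setting is exactly the state carried across input bits here.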
148

Root LDPC Codes for Non Ergodic Transmission Channels

Bhutto, Tarique Inayat January 2011 (has links)
A tremendous amount of research has been conducted in modern coding theory in the past few years, much of it devoted to developing new coding techniques. Low-density parity-check (LDPC) codes are a class of linear block error-correcting codes that provide near-capacity performance on a large collection of data-transmission and storage channels, while the Root LDPC codes studied in this thesis admit implementable decoders with manageable complexity. Work has also been conducted on graphical methods to represent LDPC codes. This thesis implements one kind of LDPC code, the Root LDPC code, using an iterative method, and calculates its threshold level for binary and non-binary Root LDPC codes. This threshold value can serve as a starting point for further study of the topic. We use C++ to simulate the code structure and parameters. The results show that the non-binary Root LDPC code provides a higher threshold value than the binary Root LDPC code.
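The threshold notion used here is easiest to see in the simplest setting: on the binary erasure channel, density evolution for a regular (dv, dc) LDPC ensemble collapses to a one-dimensional recursion, and the threshold is the largest channel erasure probability for which the recursion drives the erasure fraction to zero. A sketch of that standard computation (textbook recursion, not the Root LDPC construction of the thesis):

```python
def bec_threshold(dv, dc, iters=2000, tol=1e-7):
    """Density-evolution erasure threshold of a regular (dv, dc) LDPC
    ensemble on the BEC: the largest channel erasure probability eps
    for which x -> eps * (1 - (1 - x)**(dc-1))**(dv-1) converges to 0.
    Found by bisection on eps."""
    def converges(eps):
        x = eps
        for _ in range(iters):
            x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
            if x < 1e-12:
                return True
        return False
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if converges(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

For the (3, 6) ensemble this gives a threshold near 0.429; the thesis computes analogous fixed-point thresholds for binary and non-binary Root LDPC ensembles.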
149

Full-Diversity Space-Time Trellis Codes For Arbitrary Number Of Antennas And State Complexity

Ananta Narayanan, T 01 1900 (has links) (PDF)
No description available.
150

Joint JPEG2000/LDPC Code System Design for Image Telemetry

Jagiello, Kristin, Aydin, Mahmut Zafer, Ng, Wei-Ren 10 1900 (has links)
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California / This paper considers the joint selection of the source code rate and channel code rate in an image telemetry system. Specifically considered are the JPEG2000 image coder and an LDPC code family. The goal is to determine the optimum apportioning of bits between the source and channel codes for a given channel signal-to-noise ratio and total bit rate R_total. Optimality is in the sense of maximum peak image SNR, and the tradeoff is between the JPEG2000 bit rate R_source and the LDPC code rate R_channel. For comparison, results are included for the industry-standard rate-1/2, memory-6 convolutional code.
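The apportioning described here can be phrased as a one-dimensional search: with the transmitted bit budget R_total fixed, each candidate channel rate R_channel leaves R_source = R_total × R_channel bits per pixel for JPEG2000. A sketch with deliberately toy quality and failure models (both models are our assumptions; the paper's optimization uses the actual PSNR and error-rate behavior of the codec and code family):

```python
def best_rate_split(r_total, channel_rates, psnr_of_rate, loss_prob):
    """Exhaustively apportion a fixed transmitted bit budget between the
    source coder and the channel code.  For each candidate channel rate
    rc, the source sees rs = r_total * rc bits/pixel; expected quality
    trades the clean-image PSNR against the decoder failure probability.
    psnr_of_rate and loss_prob are caller-supplied models (assumptions,
    not taken from the paper).  Returns (rc, rs, expected_quality)."""
    best = None
    for rc in channel_rates:
        rs = r_total * rc
        # crude proxy: a decoding failure falls back to a 20 dB image
        q = (1.0 - loss_prob(rc)) * psnr_of_rate(rs) + loss_prob(rc) * 20.0
        if best is None or q > best[2]:
            best = (rc, rs, q)
    return best
```

The shape of the tradeoff is visible even with linear toy models: a high channel rate buys source bits but fails often at low SNR, so an intermediate rate wins.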
