61 |
Reed-Solomon-koder i ett McElieceskryptosystem : En kodteoretisk genomgång / Reed-Solomon codes in a McEliece cryptosystem : A coding-theoretic survey. Henriksson, Magnus, January 2009 (has links)
This work is a bachelor's thesis in mathematics at Växjö University. It is a study of coding theory in general, with a focus on cyclic codes and Reed-Solomon codes in particular. Reed-Solomon codes are used to implement McEliece's cryptosystem. A short analysis of the security of McEliece's cryptosystem is also made, together with a description of some known ways to break this type of cryptosystem. It turns out that using Reed-Solomon codes weakens the cryptosystem compared to using the originally proposed Goppa codes. The security analysis also requires a short summary of complexity theory and of what it means for a problem to be NP-complete. Keywords: Coding theory, Cyclic codes, BCH codes, Reed-Solomon codes, McEliece's cryptosystem, Cryptography, Code breaking, Complexity theory, NP-complete
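To make the construction concrete, here is a minimal Python sketch of a McEliece-style encryption step built on a Reed-Solomon evaluation code over a small prime field. All parameters (field size, code length, the toy scrambler and permutation) are illustrative choices for this sketch, not values from the thesis, and the decryption side, which needs a full RS decoder, is only indicated in a comment.

```python
import random

p = 11                                # prime alphabet size, GF(p)
n, k = 10, 4                          # RS code length and dimension
t = (n - k) // 2                      # errors the legitimate receiver can correct
alphas = list(range(1, n + 1))        # n distinct evaluation points in GF(p)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) % p for col in zip(*B)]
            for row in A]

def rank(M):
    """Rank over GF(p) by Gaussian elimination (used to check S is invertible)."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [x * inv % p for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [(x - M[i][c] * y) % p for x, y in zip(M[i], M[r])]
        r += 1
    return r

# Secret key: RS generator matrix G (Vandermonde rows), invertible S, permutation P.
G = [[pow(a, i, p) for a in alphas] for i in range(k)]
while True:
    S = [[random.randrange(p) for _ in range(k)] for _ in range(k)]
    if rank(S) == k:
        break
perm = random.sample(range(n), n)

# Public key: G' = S * G with its columns permuted by P.
Gpub = [[row[j] for j in perm] for row in matmul(S, G)]

# Encryption: c = m * G' + e, where e has exactly t random non-zero symbols.
m = [random.randrange(p) for _ in range(k)]
c = matmul([m], Gpub)[0]
for pos in random.sample(range(n), t):
    c[pos] = (c[pos] + random.randrange(1, p)) % p
print("ciphertext:", c)

# Decryption (omitted) would undo the permutation, run a Reed-Solomon decoder to
# strip the t errors, and invert S -- the algebraic structure whose exposure the
# thesis's security discussion is concerned with.
```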
|
62 |
Sécurité des protocoles cryptographiques fondés sur la théorie des codes correcteurs d'erreurs / Security of cryptographic protocols based on coding theory. Tale kalachi, Herve, 05 July 2017 (has links)
Contrary to cryptosystems based on number theory, the security of cryptosystems based on error-correcting codes appears to resist the emergence of quantum computers. Another advantage of these systems is that encryption and decryption are very fast: about five times faster for encryption, and 10 to 100 times faster for decryption, compared with the RSA cryptosystem. Nowadays, the interest of the scientific community in code-based cryptography is strongly motivated by the latest announcement of the National Institute of Standards and Technology (NIST), which initiated the Post-Quantum Cryptography project. This project aims to define new standards for quantum-resistant cryptography, with the deadline for public-key cryptographic algorithm submissions set for November 2017. Such an announcement certainly motivates the proposal of new code-based protocols, but also a deep study of the security of existing schemes in order to rule out any surprises. This thesis presents several attacks that dismantle several code-based encryption schemes. We start with a cryptanalysis of a modified version of the Sidelnikov cryptosystem proposed by Gueye and Mboup [GM13], which is based on Reed-Muller codes. The modification consists in inserting random columns into the secret generator (or parity-check) matrix. The cryptanalysis relies on computing the square of the public code. The particular nature of Reed-Muller codes, which are defined by means of multivariate binary polynomials, makes it possible to predict the dimensions of the square codes and then to fully recover, in polynomial time, the secret positions of the random columns. Our work shows that inserting random columns into the Sidelnikov scheme brings no security improvement. The second result is an improved cryptanalysis of several variants of the GPT cryptosystem, a rank-metric encryption scheme based on Gabidulin codes. We prove that any variant of the GPT cryptosystem that uses a right column scrambler over the extension field, as advocated by the works of Gabidulin et al. [Gab08, GRH09, RGH11] with the goal of resisting Overbeck's structural attack [Ove08], is in fact still vulnerable to that attack. We show that by applying the Frobenius operator appropriately to the public key, it is possible to build a Gabidulin code having the same dimension as the secret Gabidulin code but a smaller length. The code obtained in this way corrects fewer errors than the secret one, but its error-correction capability exceeds the number of errors added by the sender; consequently, an attacker is able to decrypt any ciphertext with this degraded Gabidulin code. We also consider the case where an isometric transformation is applied in conjunction with a right column scrambler whose entries lie in the extension field, and prove that this protection is useless both in terms of performance and security. Our results therefore show that all existing techniques aiming to hide the inherent algebraic structure of Gabidulin codes have failed. Finally, we study the security of the Faure-Loidreau encryption scheme [FL05], which is also a rank-metric scheme based on Gabidulin codes. Inspired by the preceding work, and although the structure of this scheme differs considerably from the classical setting of the GPT cryptosystem, we show that, for a range of parameters, it is also vulnerable to a polynomial-time attack that recovers the private key by applying Overbeck's attack on an appropriate public code. As an example, we break in a few seconds parameters with an 80-bit security claim.
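The square-code computation driving the Reed-Muller cryptanalysis can be illustrated with a short sketch. The snippet below, written against toy parameters chosen only for this example, computes the dimension of the component-wise (Schur) square of a code: a Reed-Solomon code stands in for the structured code here, and its square stands out with dimension only 2k - 1, while the square of a random code of the same size almost surely fills up to min(n, k(k+1)/2).

```python
import itertools, random

p = 101                               # prime field for this toy example
n, k = 40, 8

def rank(rows):
    """Rank of a list of length-n vectors over GF(p), by Gaussian elimination."""
    rows = [r[:] for r in rows]
    rk = 0
    for c in range(n):
        piv = next((i for i in range(rk, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        inv = pow(rows[rk][c], p - 2, p)
        rows[rk] = [x * inv % p for x in rows[rk]]
        for i in range(len(rows)):
            if i != rk and rows[i][c]:
                rows[i] = [(x - rows[i][c] * y) % p
                           for x, y in zip(rows[i], rows[rk])]
        rk += 1
    return rk

def square_code_dim(G):
    """Dimension of the span of all component-wise products of pairs of rows of G."""
    prods = [[a * b % p for a, b in zip(u, v)]
             for u, v in itertools.combinations_with_replacement(G, 2)]
    return rank(prods)

alphas = random.sample(range(p), n)
G_rs = [[pow(a, i, p) for a in alphas] for i in range(k)]             # structured (RS)
G_rand = [[random.randrange(p) for _ in range(n)] for _ in range(k)]  # random code

print("square of the RS code:    ", square_code_dim(G_rs))      # 2k - 1 = 15
print("square of the random code:", square_code_dim(G_rand))    # almost surely min(n, k(k+1)/2) = 36
```

The distinguisher exploits exactly this dimension gap: a public code whose square is abnormally small betrays hidden algebraic structure.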
|
63 |
Design and Use of a CCSDS-Compatible Data Unit Decoder. O'Donnell, John; Ramirez, Jose, 10 1900 (has links)
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California / The Consultative Committee for Space Data Systems (CCSDS) formulates and publishes recommendations for space data system standards. CCSDS Recommendations define a layered data communications network along the lines of the OSI model. In the space link layer (OSI Data Link layer), fixed-length blocks of CCSDS Packets are generated and multiplexed into the data field of Virtual Channel Data Units (VCDUs) in the Virtual Channel Access Sublayer. VCDUs with error correction coding become CVCDUs (coded VCDUs). CVCDUs (or VCDUs) with an attached sync marker become Channel Access Data Units (CADUs), which are transmitted on the Physical Space Channel. This paper discusses AYDIN's DEC012 Data Unit Decoder, a VMEbus circuit card which recovers VCDUs from corrupted CADUs received on the Space Link Subnet of a CCSDS-compatible space datacomm link. The module's design and operation are described, along with its use in the X-ray Timing Explorer (XTE) and Tropical Rainfall Measuring Mission (TRMM) science satellite programs run by NASA Goddard Space Flight Center.
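As a rough illustration of the data-unit nesting described above, the sketch below assembles a CADU from a VCDU in Python. The field sizes, the dummy header and the placeholder Reed-Solomon check symbols are assumptions made for this sketch, not the DEC012's actual configuration.

```python
ASM = bytes.fromhex("1ACFFC1D")      # CCSDS attached sync marker

VCDU_LEN, RS_CHECK_LEN = 223, 32     # illustrative sizes: one RS(255,223) codeword per frame

def make_vcdu(header: bytes, packets: bytes) -> bytes:
    """Place multiplexed CCSDS packets into a fixed-length VCDU data field."""
    body = header + packets
    assert len(body) <= VCDU_LEN, "packets must fit in the data field"
    return body.ljust(VCDU_LEN, b"\x00")                   # pad to the fixed frame length

def make_cadu(vcdu: bytes, rs_check: bytes) -> bytes:
    """VCDU + RS check symbols = CVCDU; sync marker + CVCDU = CADU."""
    assert len(vcdu) == VCDU_LEN and len(rs_check) == RS_CHECK_LEN
    cvcdu = vcdu + rs_check
    return ASM + cvcdu

vcdu = make_vcdu(header=bytes(6), packets=b"multiplexed CCSDS packet bytes")
cadu = make_cadu(vcdu, rs_check=bytes(RS_CHECK_LEN))       # dummy check symbols here
print(len(cadu), "bytes per channel access data unit")     # 4 + 223 + 32 = 259
```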
|
64 |
Symbol level decoding of Reed-Solomon codes with improved reliability information over fading channels. Ogundile, Olanyika Olaolu, January 2016 (has links)
A thesis submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Doctor of Philosophy in the School of Electrical and Information Engineering, 2016 / Reliable and efficient data transmission has been the subject of current research, most especially in realistic channels such as Rayleigh fading channels. The focus of every new technique is to improve transmission reliability and to increase the transmission capacity of communication links so that more information can be transmitted. Modulation schemes such as M-ary Quadrature Amplitude Modulation (M-QAM) and Orthogonal Frequency Division Multiplexing (OFDM) were developed to increase the transmission capacity of communication links without additional bandwidth expansion, and to reduce the design complexity of communication systems.

On the other hand, due to the varying nature of communication channels, message transmission reliability is subject to a number of factors. These factors include the channel estimation techniques and the Forward Error Correction (FEC) schemes used to improve message reliability. Innumerable channel estimation techniques have been proposed, independently and in combination with different FEC schemes, in order to improve message reliability. The emphasis has been on improving channel estimation performance, bandwidth and power consumption, and the implementation time complexity of the estimation techniques.

Of particular interest, FEC schemes such as Reed-Solomon (RS) codes, Turbo codes, Low Density Parity Check (LDPC) codes, Hamming codes, and Permutation codes are proposed to improve the message transmission reliability of communication links. Turbo and LDPC codes have been used extensively to combat the varying nature of communication channels, most especially in joint iterative channel estimation and decoding receiver structures. In this thesis, attention is focused on using RS codes to improve the message reliability of a communication link, because RS codes have a good capability of correcting random and burst errors and are useful in different wireless applications.

This study concentrates on symbol level soft decision decoding of RS codes. In this regard, a novel symbol level iterative soft decision decoder for RS codes based on parity-check equations is developed. This Parity-check matrix Transformation Algorithm (PTA) is based on the soft reliability information derived from the channel output in order to perform syndrome checks in an iterative process. Performance analysis verifies that the developed PTA outperforms the conventional RS hard decision decoding algorithms and the symbol level Koetter-Vardy (KV) RS soft decision decoding algorithm.

In addition, this thesis develops an improved Distance Metric (DM) method of deriving reliability information over Rayleigh fading channels for combined demodulation with symbol level RS soft decision decoding algorithms. The newly proposed DM method incorporates the channel state information in deriving the soft reliability information over Rayleigh fading channels. Analysis verifies that the developed metric enhances the performance of symbol level RS soft decision decoders in comparison with the conventional method. Although the performance of the developed DM method is only verified for symbol level RS soft decision decoders in this thesis, it is applicable to any symbol level soft decision decoding FEC scheme.

Besides, the performance of all FEC decoding schemes plummets as a result of Rayleigh fading channels. This engenders the development of joint iterative channel estimation and decoding receiver structures in order to improve message reliability, most especially with Turbo and LDPC codes as the FEC schemes. As such, this thesis develops the first joint iterative channel estimation and Reed-Solomon decoding receiver structure. Essentially, the joint iterative channel estimation and RS decoding receiver is developed based on the existing symbol level soft decision KV algorithm. The joint iterative channel estimation and RS decoding receiver is then extended to the developed RS parity-check matrix transformation algorithm. The PTA provides design ease and flexibility, and lower computational time complexity, in an iterative receiver structure in comparison with the KV algorithm.

Generally, the findings of this thesis are relevant to improving the message transmission reliability of a communication link with RS codes. For instance, they are pertinent to numerous data transmission technologies such as Digital Audio Broadcasting (DAB), Digital Video Broadcasting (DVB), Digital Subscriber Line (DSL), WiMAX, and long distance satellite communications. Equally, the developed, less computationally intensive, and performance efficient symbol level decoding algorithm for RS codes can be used in consumer technologies such as compact discs and digital versatile discs. / GS2016
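The parity-check based decoding described in this abstract rests on symbol-level syndrome checks guided by channel reliability information. The snippet below is only a hedged sketch of that building block for a small Reed-Solomon evaluation code over a prime field: it checks H r^T = 0 and picks the least reliable position as the place an iterative decoder would attack first. It is not an implementation of the PTA itself, and the code parameters and reliability values are invented for the example.

```python
p = 13
n, k = 12, 8                          # toy RS(12, 8) over GF(13); n - k = 4 checks
alphas = list(range(1, n + 1))        # distinct evaluation points

def inv(x):
    return pow(x, p - 2, p)

# Parity-check matrix of the evaluation code: H[i][j] = u_j * alpha_j^i with
# column multipliers u_j = 1 / prod_{i != j} (alpha_j - alpha_i).
u = []
for j, aj in enumerate(alphas):
    prod = 1
    for i, ai in enumerate(alphas):
        if i != j:
            prod = prod * (aj - ai) % p
    u.append(inv(prod))
H = [[u[j] * pow(alphas[j], i, p) % p for j in range(n)] for i in range(n - k)]

def syndrome(r):
    return [sum(h * x for h, x in zip(row, r)) % p for row in H]

# Encode a message as evaluations of its polynomial, then corrupt one symbol.
msg = [3, 1, 4, 1, 5, 9, 2, 6]
codeword = [sum(c * pow(a, i, p) for i, c in enumerate(msg)) % p for a in alphas]
assert syndrome(codeword) == [0] * (n - k)          # a clean codeword passes the check

received = codeword[:]
received[5] = (received[5] + 7) % p                 # one symbol error
reliability = [0.9] * n
reliability[5] = 0.2                                # pretend the demodulator flagged it

print("syndrome:", syndrome(received))              # non-zero: error detected
suspect = min(range(n), key=lambda j: reliability[j])
print("least reliable symbol:", suspect)            # where an iterative decoder would start
```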
|
65 |
On algebraic geometric codes and some related codes. Guenda, Kenza, 12 1900 (has links)
Thesis (MSc (Mathematics))--University of Stellenbosch, 2006. / The main topic of this thesis is the construction of algebraic geometric codes (Goppa codes) and their decoding by list-decoding, which allows one to correct beyond half of the minimum distance. We also consider the list-decoding of Reed-Solomon codes, as they are a subclass of the Goppa codes, and the determination of the parameters of non-primitive BCH codes.

AMS Subject Classification: 94B05, 94B15, 94B35, 94B27, 11T71, 94B65, 94B70.

Keywords: Linear codes, cyclic codes, BCH codes, Reed-Solomon codes, list-decoding, algebraic geometric codes, decoding, bounds on codes, error probability.
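As a back-of-the-envelope illustration of "correcting beyond half of the minimum distance" for the Reed-Solomon subclass, the following sketch compares the unique-decoding radius floor((n-k)/2) with the roughly n - sqrt(nk) errors reachable by Guruswami-Sudan style list decoding, for a few illustrative parameter sets not taken from the thesis.

```python
import math

for n, k in [(255, 223), (255, 128), (64, 16)]:
    unique = (n - k) // 2                  # half-the-minimum-distance bound
    listdec = n - math.isqrt(n * k)        # rough Guruswami-Sudan list-decoding radius
    print(f"RS({n},{k}): unique decoding {unique} errors, list decoding ~{listdec}")
```

The gap is modest for high-rate codes and grows quickly as the rate drops.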
|
66 |
Performance of Single Layer H.264 SVC Video Over Error Prone Networks. January 2011 (has links)
abstract: With the tremendous increase in the popularity of networked multimedia applications, video data is expected to account for a large portion of the traffic on the Internet and, more importantly, on next-generation wireless systems. To satisfy a broad range of customer requirements, two major problems need to be solved. The first problem is the need for a scalable representation of the input video. The recently developed scalable extension of the state-of-the-art H.264/MPEG-4 AVC video coding standard, also known as H.264/SVC (Scalable Video Coding), provides a solution to this problem. The second problem is that wireless transmission media typically introduce errors in the bit stream due to noise, congestion and fading on the channel. Protection against these channel impairments can be realized by the use of forward error correcting (FEC) codes. In this research study, the performance of scalable video coding in the presence of bit errors is studied. The encoded video is channel coded using Reed-Solomon codes to provide acceptable performance in the presence of channel impairments. In the scalable bit stream, some parts are more important than others. In the unequal error protection scheme, parity bytes are assigned to the video packets based on their importance; in the equal error protection scheme, parity bytes are assigned based on the length of the message. A quantitative comparison of the two schemes, along with the case where no channel coding is employed, is performed. H.264 SVC single layer video streams for long video sequences of different genres are considered in this study, which serves as a means of effective video characterization. The JSVM reference software, in its current version, does not support decoding of erroneous bit streams, so a framework to obtain an H.264 SVC compatible bit stream is modeled in this study. It is concluded that assigning parity bytes based on the distribution of data for different types of frames provides optimum performance. Applying error protection to the bit stream enhances the quality of the decoded video with minimal overhead added to the bit stream. / Dissertation/Thesis / M.S. Electrical Engineering 2011
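A toy sketch of the two parity-allocation policies the study compares is given below: equal error protection spreads the Reed-Solomon parity-byte budget in proportion to packet length, while unequal error protection spreads it in proportion to an importance weight. The packet lengths, weights and budget are invented for illustration and are not the values used with the JSVM streams.

```python
packets = [                       # (description, length in bytes, importance weight)
    ("base layer, IDR/I-frame NAL units", 1200, 5.0),
    ("base layer, P-frame NAL units",      900, 3.0),
    ("enhancement layer NAL units",       1500, 1.0),
]
parity_budget = 240               # total Reed-Solomon parity bytes for this group

def allocate(weight_of):
    total = sum(weight_of(pkt) for pkt in packets)
    return [round(parity_budget * weight_of(pkt) / total) for pkt in packets]

eep = allocate(lambda pkt: pkt[1])    # equal error protection: proportional to length
uep = allocate(lambda pkt: pkt[2])    # unequal error protection: proportional to importance

for (name, length, _), n_eep, n_uep in zip(packets, eep, uep):
    print(f"{name:<36} len={length:>5}  EEP parity={n_eep:>3}  UEP parity={n_uep:>3}")
# Rounding can leave a byte or two of the budget unassigned; a real allocator would
# also cap each packet at the largest Reed-Solomon block length it fits into.
```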
|
67 |
Implementace vrstvy RS-FEC pro 400 Gb/s Ethernet / RS-FEC layer implementation for 400 Gb/s Ethernet. Zahálka, Patrik, January 2020 (has links)
This master's thesis deals with VLSI design and the implementation of the RS-FEC layer for 400 Gb/s Ethernet in the Intel® Stratix® 10 DX 2100 FPGA. The thesis characterizes the current state of Ethernet speeds and the role and context of forward error correction codes within the Ethernet protocol. It also describes the manufacturing of PLD chips and the mathematical foundations of Reed-Solomon error-correcting codes. The practical part presents a design of the RS-FEC system, realized generically in VHDL. Its components were implemented, and the concluding discussion describes the solution, the results achieved, and its future extension.
|
68 |
Variace Reed-Solomonových kódů nad jinými algebraickými strukturami / Variants of Reed-Solomon codes over other algebraic structures. Končický, Václav, January 2022 (has links)
Reed-Solomon codes are a well known family of error-correcting codes with many good properties. However, they require a finite field to operate, which limits the alphabet size to a prime power. In this work, we build a weaker algebraic structure which supports an alphabet of any integer size and requires only standard addition, multiplication and division to implement. We then study a family of error-correcting codes based on matrix multiplication over this structure. We also adapt the Reed-Solomon code principle to this code family and study its properties. We prove, and verify experimentally, that while a random code of this family has high distance, the Reed-Solomon adaptation fails to perform well.
|
69 |
Software based memory correction for a miniature satellite in low-Earth orbit / Mjukvarustyrd rättning av minnesfel för en miniatyrsatellit i låg omloppsbana. Wikman, John; Sjöblom, Johan, January 2017 (has links)
The harsh radiation environment of space is known to cause bit flips in computer memory. The conventional way to combat this is through error detection and correction (EDAC) circuitry, but for low-budget space missions software EDAC can be used. One such mission is the KTH project Miniature Student Satellite (MIST), which aims to send a 3U CubeSat into low-Earth orbit. To ensure a high level of data reliability on board MIST, this thesis investigates the performance of different types of EDAC algorithms. First, a prediction of the bit flip susceptibility of DRAM memory in the planned trajectory is made. After that, data reliability models of Hamming and Reed-Solomon (RS) codes are proposed, and their respective running times on the MIST onboard computer are approximated. Finally, the performance of the different codes is discussed with regard to data reliability, memory overhead, and CPU usage. The findings of this thesis suggest that using an EDAC algorithm would greatly increase data reliability. Among the codes investigated, three good candidates are RS(28,24), RS(196,192) and RS(255,251), depending on how much memory overhead can be accepted.
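For the Hamming option mentioned above, a software EDAC pass can be sketched in a few lines: the classic Hamming(7,4) code corrects any single bit flip per seven-bit word, with the syndrome value pointing directly at the flipped position. This is only an illustrative sketch; the RS(28,24)-style candidates recommended by the thesis work on bytes and require a full finite-field decoder, which is not shown here.

```python
def hamming74_encode(d):                 # d = 4 data bits, e.g. [1, 0, 1, 1]
    c = [0] * 8                          # 1-indexed codeword positions 1..7
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]            # parity over positions 1, 3, 5, 7
    c[2] = c[3] ^ c[6] ^ c[7]            # parity over positions 2, 3, 6, 7
    c[4] = c[5] ^ c[6] ^ c[7]            # parity over positions 4, 5, 6, 7
    return c[1:]

def hamming74_decode(bits):              # bits = 7 received bits
    c = [0] + list(bits)
    s = (c[1] ^ c[3] ^ c[5] ^ c[7]) \
        + 2 * (c[2] ^ c[3] ^ c[6] ^ c[7]) \
        + 4 * (c[4] ^ c[5] ^ c[6] ^ c[7])
    if s:                                # syndrome value = position of the flipped bit
        c[s] ^= 1
    return [c[3], c[5], c[6], c[7]], s

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                             # simulate a radiation-induced flip at position 5
data, flipped = hamming74_decode(word)
print(data, "corrected position:", flipped)   # -> [1, 0, 1, 1] corrected position: 5
```

A scrubbing task would periodically run this decode over protected memory regions and write back the corrected words before a second flip can accumulate in the same word.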
|
70 |
Cost Beneficial Solution for High Rate Data Processing. Mirchandani, Chandru; Fisher, David; Ghuman, Parminder, 10 1900 (has links)
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / GSFC, in keeping with the tenets of NASA, has been aggressively investigating new technologies for spacecraft and ground communications and processing. The application of these technologies, together with standardized telemetry formats, makes it possible to build systems that provide high performance at low cost in a short development cycle. The High Rate Telemetry Acquisition System (HRTAS) Prototype is one such effort that has validated Goddard's push towards faster, better and cheaper. The HRTAS system architecture is based on the Peripheral Component Interconnect (PCI) bus and VLSI Application-Specific Integrated Circuits (ASICs). These ASICs perform frame synchronization, bit-transition density decoding, cyclic redundancy code (CRC) error checking, Reed-Solomon error detection/correction, data unit sorting, packet extraction, annotation and other service processing. This processing is performed at sustained rates of up to and greater than 150 Mbps using a high-end workstation running a standard UNIX O/S (DEC 4100 with DEC UNIX or better). ASICs are also used for the digital reception of Intermediate Frequency (IF) telemetry as well as for the spacecraft command interface for commands and data simulations.

To improve the efficiency of the back-end processing, the level zero processing sorting element is being developed. This will provide a complete hardware solution to extracting and sorting source data units and making these available in separate files on a remote disk system. Research is ongoing to extend this development to higher levels of the science data processing pipeline. Because level 1 and higher processing is instrument dependent, an acceleration approach utilizing ASICs is not feasible. The advent of field programmable gate array (FPGA) based computing, referred to as adaptive or reconfigurable computing, provides processing performance close to ASIC levels while maintaining much of the programmability of traditional microprocessor based systems. This adaptive computing paradigm has been successfully demonstrated and its cost performance validated, making it a viable technology for the level one and higher processing element of the HRTAS.

Higher levels of processing are defined as the extraction of useful information from source telemetry data. This information has to be made available to the science data user in a very short period of time. This paper describes this low cost solution for high rate data processing at level one and higher processing levels. The paper further discusses the cost-benefit of this technology in terms of cost, schedule, reliability and performance.
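One of the per-frame checks listed above can be sketched compactly: a bit-serial CRC-16 using the CCITT polynomial x^16 + x^12 + x^5 + 1 (0x1021), the style of frame error control a CCSDS-type pipeline applies alongside frame synchronization and Reed-Solomon decoding. The register preset, byte order and sample frame below are assumptions of this sketch, not a description of the HRTAS ASICs.

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bit-serial CRC-16 with generator polynomial 0x1021 (x^16 + x^12 + x^5 + 1)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = bytes.fromhex("1ACFFC1D") + b"telemetry transfer frame payload"
fcs = crc16_ccitt(frame)                       # frame check sequence appended on transmit
print(f"FCS = 0x{fcs:04X}")
# On receive, running the same CRC over frame + FCS must return zero.
assert crc16_ccitt(frame + fcs.to_bytes(2, "big")) == 0
```

In a software model of the pipeline, a frame that fails this check would be flagged before the Reed-Solomon and sorting stages, mirroring the ordering of the ASIC functions listed in the abstract.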
|