271 |
On Network Coding and Network-Error Correction / Prasad, Krishnan. January 2013 (has links) (PDF)
The paradigm of network coding was introduced as a means to conserve bandwidth (or, equivalently, to increase throughput) in information flow networks. Network coding exploits the fact that, unlike physical commodities, information can be replicated and coded together at the nodes of a network. As a result, routing can be strictly suboptimal compared to network coding in many classes of information flow networks. Network-error correction is the art of designing network codes such that the sinks of the network can decode the required information in the presence of errors on the edges of the network, known as network-errors. The network coding problem on a network with given sink demands has the following three major subproblems, which naturally extend to the study of network-error correcting codes, since these can be viewed as a special class of network codes: (a) existence of a network code that satisfies the demands, (b) efficient construction of such a network code, and (c) the minimum alphabet size for the existence of such a network code.
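The bandwidth gain from coding at nodes can be seen on the classical butterfly network, where one bottleneck edge must serve two sinks. A minimal sketch over GF(2) (the function names are illustrative, not from the thesis):

```python
# Butterfly-network sketch: two source bits b1, b2 must both reach two sinks
# through a single shared bottleneck edge. Routing can forward only one bit
# over that edge, but coding the XOR lets each sink recover both bits.

def bottleneck_code(b1: int, b2: int) -> int:
    # The intermediate node codes its two incoming bits over GF(2).
    return b1 ^ b2

def sink_decode(direct_bit: int, coded_bit: int) -> int:
    # Each sink sees one source bit directly plus the coded bit, and XORs
    # them to recover the bit it did not receive directly.
    return direct_bit ^ coded_bit

b1, b2 = 1, 0
c = bottleneck_code(b1, b2)
assert sink_decode(b1, c) == b2   # sink 1 recovers b2
assert sink_decode(b2, c) == b1   # sink 2 recovers b1
```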
This thesis primarily considers linear network coding and error correction, and investigates solutions to these issues for certain classes of network coding and error correction problems in acyclic networks. Our contributions are broadly summarised as follows.
(1) We propose the use of convolutional codes for multicast network-error correction. Depending upon the number of network-errors required to be corrected in the network, convolutional codes are designed at the source of the multicast network so that these errors can be corrected at the sinks of the network, as long as they are separated by a certain number of time instants (for which we give a bound). In contrast to block codes for network-error correction, which require large field sizes, using convolutional codes enables the field size of the network code to be small. We discuss the performance of such networks under the BSC edge error model.
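As a rough illustration of source-side convolutional encoding (not the thesis's specific construction), a rate-1/2 binary encoder with the common (7, 5) octal generator pair can be sketched as:

```python
def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    """Rate-1/2 binary convolutional encoder with constraint length K.
    The (7, 5) octal generators are a textbook default, used here only to
    illustrate encoding at the source over a small (binary) field."""
    state = 0
    out = []
    for b in bits:
        # shift the new bit into the K-bit register
        state = ((state << 1) | b) & ((1 << K) - 1)
        # each generator taps the register and outputs the parity (GF(2) sum)
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out

print(conv_encode([1, 0, 1, 1]))
```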
(2) Existing construction algorithms for block network-error correcting codes require a rather large field size, which grows with the size of the network and the number of sinks, and can thereby be prohibitive in large networks. In our work, we give an algorithm which, starting from a given network-error correcting code, obtains another network code over a small field with the same error correcting capability as the original code. The major step in our algorithm is to find a least-degree irreducible polynomial which is coprime to another large-degree polynomial. We utilize the algebraic properties of finite fields to implement this step so that it becomes much faster than the brute-force method. A recently proposed algorithm for network coding using small fields can be seen as a special case of our algorithm for the case of no network-errors.
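The central step described above can be prototyped directly. The sketch below is the brute-force baseline the thesis improves upon: polynomials over GF(2) are stored as bit masks, and the smallest irreducible polynomial coprime to a given polynomial is found by trial (the thesis replaces this search with a faster field-theoretic method):

```python
def gf2_degree(p):
    return p.bit_length() - 1

def gf2_mod(a, b):
    # remainder of a divided by b; polynomials over GF(2) as bit masks
    db = gf2_degree(b)
    while gf2_degree(a) >= db:
        a ^= b << (gf2_degree(a) - db)
    return a

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def is_irreducible(p):
    # trial division by all polynomials of degree <= deg(p)/2
    # (fine for small degrees; illustration only)
    for q in range(2, 1 << (gf2_degree(p) // 2 + 1)):
        if gf2_mod(p, q) == 0:
            return False
    return True

def least_irreducible_coprime(big):
    # brute-force version of the step the thesis speeds up algebraically:
    # the smallest irreducible polynomial coprime to `big`
    p = 2  # start from x
    while True:
        if is_irreducible(p) and gf2_gcd(big, p) == 1:
            return p
        p += 1

# x^4 + x = x (x+1) (x^2+x+1), so the answer is the degree-3 polynomial x^3+x+1
assert least_irreducible_coprime(0b10010) == 0b1011
```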
(3) Matroids are discrete mathematical objects which generalize the notion of linear independence of sets of vectors. It has been observed recently that matroids and network coding share a deep connection, and several important results of network coding have been obtained using these connections from matroid theory. In our work, we establish that matroids with certain special properties correspond to networks with error detecting and correcting properties. We call such networks matroidal error detecting (or equivalently, correcting) networks. We show that networks have scalar linear network-error detecting (or correcting) codes if and only if they are associated with representable matroids having certain special properties. We also use these ideas to construct matroidal error correcting networks along with their associated matroids. In the case of representable matroids, these algorithms give rise to scalar linear network-error correcting codes on such networks. Finally, we also show that linear network coding is not sufficient for the general network-error detection (correction) problem with arbitrary demands.
(4) Problems related to network coding for acyclic, instantaneous networks have been extensively dealt with in the past. In contrast, not much attention has been paid to networks with delays. In our work, we elaborate on the existence, construction and minimum field size issues of network codes for networks with integer delays. We show that the delays associated with the edges of the network cannot be ignored, and in fact turn out to be advantageous, disadvantageous or immaterial, depending on the topology of the network and the network coding problem considered. In the process, we also exhibit multicast network codes which involve only delaying the symbols arriving at the nodes of the network and coding the delayed symbols over a binary field, thereby making the coding operations at the nodes less complex.
(5) In the usual network coding framework, for a given set of network demands over an arbitrary acyclic network with integer delays assumed for the links, the output symbols at the sink nodes at any given time instant are Fq-linear combinations of the input symbols generated at different time instants, where Fq denotes the field over which the network operates. The sinks therefore have to use sufficiently many memory elements in order to decode the entire stream of demanded information symbols simultaneously. We propose a scheme using a ν-point finite-field discrete Fourier transform (DFT) which converts the output symbols at the sink nodes at any given time instant into Fq-linear combinations of the input symbols generated during the same time instant, without making use of memory at the intermediate nodes. We call this transforming the acyclic network with delay into ν instantaneous networks (for ν sufficiently large). We show that, under certain conditions, there exists a network code satisfying the sink demands in the usual (non-transform) approach if and only if there exists a network code satisfying the sink demands in the transform approach.
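The transform idea can be pictured with a toy finite-field DFT. The sketch below works over F_5 with a 4-point transform (ν must divide q - 1 so that an element α of multiplicative order ν exists); the parameters are chosen for illustration and are not from the thesis:

```python
q, nu, alpha = 5, 4, 2   # toy field F_5; 2 has multiplicative order 4 = nu

def ff_dft(x, inverse=False):
    """nu-point DFT over F_q: X[k] = sum_n x[n] * alpha^(n k) (mod q).
    The inverse uses alpha^(-1) and divides by nu, both via Fermat inverses."""
    a = pow(alpha, q - 2, q) if inverse else alpha
    X = [sum(xn * pow(a, n * k, q) for n, xn in enumerate(x)) % q
         for k in range(nu)]
    if inverse:
        nu_inv = pow(nu, q - 2, q)
        X = [(nu_inv * Xk) % q for Xk in X]
    return X

# Round trip: the transform pair is invertible, which is what lets the
# delayed network be treated as nu parallel instantaneous networks.
x = [1, 3, 0, 2]
assert ff_dft(ff_dft(x), inverse=True) == x
```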
|
272 |
Accurate modeling of noise in quantum error correcting circuits / Gutierrez Arguedas, Mauricio. 07 January 2016 (has links)
A universal, scalable quantum computer will require the use of quantum error correction in order to achieve fault tolerance. The assessment and comparison of error-correcting strategies is performed by classical simulation. However, due to the prohibitive exponential scaling of general quantum circuits, simulations are restricted to specific subsets of quantum operations. This creates a gap between accuracy and efficiency which is particularly problematic when modeling noise, because most realistic noise models are not efficiently simulable on a classical computer. We have introduced extensions to the Pauli channel, the traditional error channel employed to model noise in simulations of quantum circuits. These expanded error channels are still computationally tractable to simulate, but result in more accurate approximations to realistic error channels at the single-qubit level. Using the Steane [[7,1,3]] code, we have also investigated the behavior of these expanded channels at the logical, error-corrected level. We have found that their behavior depends strongly on whether the error is incoherent or coherent. In general, the Pauli channel will be an excellent approximation to incoherent channels, but an unsatisfactory one for coherent channels, especially because it severely underestimates the magnitude of the error. Finally, we also studied the honesty and accuracy of the expanded channels at the logical level. Our results suggest that these measures can be employed to generate lower and upper bounds on a quantum code's threshold under the influence of a specific error channel.
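The gap between coherent errors and their Pauli approximation is visible already at the unencoded single-qubit level. The sketch below (an illustration consistent with, but not taken from, the thesis) compares n repetitions of a small coherent Z over-rotation with the corresponding Pauli-twirled dephasing channel: the coherent infidelity grows quadratically in n, while the Pauli approximation grows only linearly, underestimating the error:

```python
import numpy as np

theta = 0.02                      # small coherent over-rotation about Z
n = 50                            # identical error channels applied in a row
plus = np.array([1, 1]) / np.sqrt(2)

def rz(t):
    # single-qubit Z rotation by angle t
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

# Coherent case: n rotations compose into one rotation by n*theta,
# so the amplitudes (not probabilities) add up.
psi = np.linalg.matrix_power(rz(theta), n) @ plus
coherent_err = 1 - abs(plus.conj() @ psi) ** 2     # = sin^2(n*theta/2)

# Pauli-twirled approximation: each use applies Z with prob p = sin^2(theta/2);
# after n uses the |+> coherence is scaled by (1-2p)^n.
p = np.sin(theta / 2) ** 2
pauli_err = (1 - (1 - 2 * p) ** n) / 2

print(coherent_err, pauli_err)    # the coherent error is far larger
```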
|
273 |
The use of classification methods for gross error detection in process data / Gerber, Egardt. 12 1900 (has links)
Thesis (MScEng) -- Stellenbosch University, 2013. / ENGLISH ABSTRACT: All process measurements contain some element of error. Typically, a distinction is made between random errors, with zero expected value, and gross errors with non-zero magnitude. Data Reconciliation (DR) and Gross Error Detection (GED) comprise a collection of techniques designed to attenuate measurement errors in process data in order to reduce the effect of the errors on subsequent use of the data. DR proceeds by finding the optimum adjustments so that the reconciled measurement data satisfy imposed process constraints, such as material and energy balances. The DR solution is optimal under the assumed statistical random error model, typically Gaussian with zero mean and known covariance. The presence of outliers and gross errors in the measurements or imposed process constraints invalidates the assumptions underlying DR, so that the DR solution may become biased. GED is required to detect, identify and remove or otherwise compensate for the gross errors. Typically, GED relies on formal hypothesis testing of constraint residuals or measurement-adjustment-based statistics derived from the assumed random error statistical model.
Classification methodologies are methods by which observations are classified as belonging to one of several possible groups. For the GED problem, artificial neural networks (ANNs) have historically been applied to classify a data set as either containing or not containing a gross error. The hypothesis investigated in this thesis is that classification methodologies, specifically classification trees (CT) and linear or quadratic classification functions (LCF, QCF), may provide an alternative to the classical GED techniques.
This hypothesis is tested via the modelling of a simple steady-state process unit with associated simulated process measurements. DR is performed on the simulated process measurements in order to satisfy one linear and two nonlinear material conservation constraints. Selected features from the DR procedure and process constraints are incorporated into two separate input vectors for classifier construction. The performance of the classification methodologies developed on each input vector is compared with the classical measurement test in order to address the posed hypothesis. General trends in the results are as follows:
- The power to detect and/or identify a gross error is a strong function of the gross error magnitude as well as its location, for all the classification methodologies as well as the measurement test.
- For some locations there exist large differences between the power to detect a gross error and the power to identify it correctly. This is consistent over all the classifiers and their associated measurement tests, and indicates significant smearing of gross errors.
- In general, the classification methodologies have higher power for equivalent type I error than the measurement test.
- The measurement test is superior for small-magnitude gross errors, and for specific locations, depending on which classification methodology it is compared with.
There is significant scope to extend the work to more complex processes and constraints, including dynamic processes with multiple gross errors in the system. Further investigation into the optimal selection of input vector elements for the classification methodologies is also required. / AFRIKAANS ABSTRACT: All process measurements contain some degree of measurement error. The error component of a process measurement is often expressed as consisting of a random error with zero expected value together with a non-random (gross) error of significant magnitude. Data Reconciliation (DR) and Gross Error Detection (GED) are a collection of techniques whose aim is to reduce the effect of such errors in process data on the subsequent use of that data. DR is performed by making the optimal adjustments to the original process measurements so that the adjusted measurements obey certain process models, typically mass and energy balances. The DR solution is optimal provided the statistical assumptions regarding the random error component of the process data are valid. It is typically assumed that the error component is normally distributed, with zero expected value and a given covariance matrix. When gross errors are present in the data, the DR results can be biased. GED is therefore needed to find (detection) and identify (identification) gross errors. GED usually relies on the statistical properties of the measurement adjustments made by the DR procedure, or of the residuals of the model equations, to test formal hypotheses about the presence of gross errors.
Classification techniques are used to determine the class membership of observations. For the GED problem, artificial neural networks (ANNs) have historically been applied to solve the detection and identification problems. The hypothesis of this thesis is that classification techniques, specifically classification trees (CT) and linear as well as quadratic classification functions (LCF and QCF), can successfully be applied to solve the GED problem.
The hypothesis is investigated by means of a simulation of a simple steady-state process unit that is subject to one linear and two nonlinear constraint equations. Artificial process measurements are generated using random numbers so that the error component of each measurement is known. DR is applied to the artificial data, and the DR results are used to create two different input vectors for the classification techniques. The performance of the classification methods is compared with the measurement test of classical GED in order to answer the posed hypothesis. The underlying trends in the results are as follows:
- The ability to detect and identify a gross error is strongly dependent on the magnitude as well as the location of the error, for all the classification techniques as well as the measurement test.
- For certain locations of the gross error there is a large difference between the ability to detect the error and the ability to identify it, which points to smearing of the error. All the classification techniques as well as the measurement test exhibit this property.
- In general, the classification methods show greater success than the measurement test.
- The measurement test is more successful for relatively small gross errors, as well as for certain locations of the gross error, depending on the classification technique in question.
There are several ways to extend the scope of this investigation. More complex, non-steady-state processes with strongly nonlinear process models and multiple gross errors could be investigated. The possibility also exists to improve the performance of the classification methods through an appropriate choice of input vector elements.
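The DR step described above has a simple closed form for linear balance constraints. A minimal sketch (the process values and covariance are invented for illustration):

```python
import numpy as np

# Toy steady-state unit: feed F1 splits into F2 and F3, so F1 - F2 - F3 = 0.
A = np.array([[1.0, -1.0, -1.0]])      # linear mass-balance constraint A x = 0
V = np.eye(3)                          # assumed measurement error covariance
y = np.array([10.4, 6.1, 3.9])         # noisy measurements (imbalance = 0.4)

# Classical DR solution: minimise (x - y)' V^{-1} (x - y) subject to A x = 0,
# which projects y onto the constraint manifold.
x_hat = y - V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ y)

assert np.allclose(A @ x_hat, 0)       # reconciled values satisfy the balance

# The constraint residual r = A y is the kind of statistic classical GED
# subjects to a formal hypothesis test for gross errors.
r = A @ y
print(x_hat, r)
```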
|
274 |
Geometries of Binary Constant Weight Codes / Ekberg, Joakim. January 2006 (has links)
This thesis shows how certain classes of binary constant weight codes can be represented geometrically using linear structures in Euclidean space. The geometric treatment is concerned mostly with codes with minimum distance 2(w - 1), that is, where any two codewords coincide in at most one entry; an algebraic generalization of parts of the theory also applies to some codes without this property. The presented theorems lead to a total of 18 improvements of the table of lower bounds on A(n,d,w) maintained by E. M. Rains and N. J. A. Sloane. Additional improvements have been made by finding new lexicographic codes.
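The distance condition d = 2(w - 1) is equivalent to any two codeword supports meeting in at most one coordinate. A small sketch using the seven lines of the Fano plane as supports (a standard example, not one of the thesis's new codes):

```python
from itertools import combinations

# Fano-plane lines as codeword supports give a binary constant weight code
# with n = 7, w = 3 and minimum distance 2(w - 1) = 4: any two codewords
# coincide in exactly one entry. (Illustration of the distance condition.)
lines = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
         {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]

def distance(a, b):
    # Hamming distance between the 0/1 codewords = |a XOR b| of the supports
    return len(a ^ b)

for a, b in combinations(lines, 2):
    assert len(a & b) <= 1          # supports share at most one coordinate
    assert distance(a, b) >= 4      # hence d >= 2(w - 1) = 4
```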
|
275 |
Error-correcting codes and iris recognition (Samoopravné kódy a rozpoznávání podle duhovky) / Luhan, Vojtěch. January 2013 (has links)
Iris recognition constitutes one of the most powerful methods for the identification and authentication of people today. This thesis aims to describe the algorithms used in a sophisticated and mathematically correct way, while remaining comprehensible. The description of these algorithms is not the only objective of this thesis; the reason they were chosen and potential improvements or substitutions are also discussed. The background of iris recognition, its use in cryptosystems, and the application of error-correcting codes are investigated as well.
|
276 |
Error-correcting codes and iris recognition (Samoopravné kódy a rozpoznávání podle duhovky) / Luhan, Vojtěch. January 2014 (has links)
Iris recognition constitutes one of the most powerful methods for the identification and authentication of people today. This thesis aims to describe the algorithms used by means of a mathematical apparatus. The description of these algorithms is not the only objective of this thesis; the reason they were chosen and potential improvements or substitutions are also discussed. The background of iris recognition, its use in cryptosystems, and the application of error-correcting codes are investigated as well. The second version of the thesis eliminates errata and a number of inaccuracies discovered in the first version, especially in the ROI Definition, Hough Transform and Feature Extraction sections. Besides that, it also contains several new propositions. Last, but not least, it shows a potential implementation of the algorithms described by appending pseudocode to the relevant sections.
|
277 |
Comparative study of a time diversity scheme applied to G3 systems for narrowband power-line communications / Rivard, Yves-François. January 2016 (has links)
A dissertation submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Engineering (Electrical). Johannesburg, 2016 / Power-line communications can be used for the transfer of data across electrical networks in applications such as automatic meter reading in smart grid technology. As the power-line channel is harsh and plagued with non-Gaussian noise, robust forward error correction schemes are required. This research is a comparative study in which a Luby transform code is concatenated with power-line communication systems specified by an up-to-date standard, named G3-PLC, published by Électricité Réseau Distribution France. Decoding using both Gaussian elimination and belief propagation is implemented to investigate and characterise their behaviour through computer simulations in MATLAB. Results show that a bit error rate performance improvement is achievable under non-worst-case channel conditions using a Gaussian elimination decoder. An adaptive system is thus recommended which decodes using Gaussian elimination and which has the appropriate data rate. The added complexity can be well tolerated, especially on the receiver side in automatic meter reading systems, because the network structure is built around a centralised agent which possesses more resources. / MT2017
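A Luby transform code decoded by Gaussian elimination over GF(2) can be sketched in a few lines. This toy version draws uniform random subsets rather than the robust soliton degree distribution used in practice, and it is not the G3 concatenation studied in the dissertation:

```python
import random

def lt_encode(src_bits, n_out, rng):
    """Toy Luby-transform encoder over GF(2): each coded symbol is the XOR
    of a random nonempty subset of source bits. (A real LT code draws the
    subset size from the robust soliton distribution.)"""
    k = len(src_bits)
    coded = []
    for _ in range(n_out):
        mask = rng.randrange(1, 1 << k)           # which sources are XORed
        sym = 0
        for i in range(k):
            if mask >> i & 1:
                sym ^= src_bits[i]
        coded.append((mask, sym))
    return coded

def ge_decode(coded, k):
    """Decode by Gaussian elimination over GF(2); returns None while the
    received equations do not yet have full rank."""
    pivots = {}                                   # pivot column -> (mask, sym)
    for m, s in coded:
        for col in sorted(pivots, reverse=True):  # reduce against pivots
            if m >> col & 1:
                pm, ps = pivots[col]
                m, s = m ^ pm, s ^ ps
        if m:
            pivots[m.bit_length() - 1] = (m, s)
    if len(pivots) < k:
        return None
    src = [0] * k
    for col in sorted(pivots):                    # back-substitute, low to high
        m, s = pivots[col]
        for j in range(col):
            if m >> j & 1:
                s ^= src[j]
        src[col] = s
    return src

rng = random.Random(7)
src = [1, 0, 1, 1, 0, 1, 0, 0]
coded = lt_encode(src, 16, rng)     # 2x overhead: full rank w.h.p.
decoded = ge_decode(coded, len(src))
print(decoded)                      # equals src once the system has full rank
```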
|
278 |
Symbol level decoding of Reed-Solomon codes with improved reliability information over fading channels / Ogundile, Olanyika Olaolu. January 2016 (has links)
A thesis submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Doctor of Philosophy in the School of Electrical and Information Engineering, 2016 / Reliable and efficient data transmission has been the subject of current research, most especially in realistic channels such as Rayleigh fading channels. The focus of every new technique is to improve the transmission reliability and to increase the transmission capacity of the communication links, so that more information can be transmitted. Modulation schemes such as M-ary Quadrature Amplitude Modulation (M-QAM) and Orthogonal Frequency Division Multiplexing (OFDM) were developed to increase the transmission capacity of communication links without additional bandwidth expansion, and to reduce the design complexity of communication systems.
On the other hand, due to the varying nature of communication channels, the reliability of message transmission depends on a couple of factors. These factors include the channel estimation techniques and the Forward Error Correction (FEC) schemes used to improve the message reliability. Innumerable channel estimation techniques have been proposed, independently and in combination with different FEC schemes, in order to improve the message reliability. The emphasis has been on improving the channel estimation performance, the bandwidth and power consumption, and the implementation time complexity of the estimation techniques.
Of particular interest, FEC schemes such as Reed-Solomon (RS) codes, Turbo codes, Low Density Parity Check (LDPC) codes, Hamming codes, and Permutation codes have been proposed to improve the message transmission reliability of communication links. Turbo and LDPC codes have been used extensively to combat the varying nature of communication channels, most especially in joint iterative channel estimation and decoding receiver structures. In this thesis, attention is focused on using RS codes to improve the message reliability of a communication link, because RS codes have a good capability of correcting random and burst errors, and are useful in different wireless applications.
This study concentrates on symbol level soft decision decoding of RS codes. In this regard, a novel symbol level iterative soft decision decoder for RS codes based on parity-check equations is developed. This Parity-check matrix Transformation Algorithm (PTA) is based on the soft reliability information derived from the channel output, and performs syndrome checks in an iterative process. Performance analysis verifies that the developed PTA outperforms the conventional RS hard decision decoding algorithms and the symbol level Koetter-Vardy (KV) RS soft decision decoding algorithm.
In addition, this thesis develops an improved Distance Metric (DM) method of deriving reliability information over Rayleigh fading channels for combined demodulation with symbol level RS soft decision decoding algorithms. The newly proposed DM method incorporates the channel state information in deriving the soft reliability information over Rayleigh fading channels. Analysis verifies that the developed metric enhances the performance of symbol level RS soft decision decoders in comparison with the conventional method. Although the performance of the developed DM method is only verified in this thesis for symbol level RS soft decision decoders, it is applicable to any symbol level soft decision decoding FEC scheme.
Besides, the performance of all FEC decoding schemes plummets as a result of Rayleigh fading. This has engendered the development of joint iterative channel estimation and decoding receiver structures in order to improve the message reliability, most especially with Turbo and LDPC codes as the FEC schemes. This thesis therefore develops the first joint iterative channel estimation and Reed-Solomon decoding receiver structure. Essentially, the joint iterative channel estimation and RS decoding receiver is developed based on the existing symbol level soft decision KV algorithm. The joint iterative channel estimation and RS decoding receiver is then extended to the developed RS parity-check matrix transformation algorithm. The PTA provides design ease and flexibility, and lower computational time complexity, in an iterative receiver structure in comparison with the KV algorithm.
Generally, the findings of this thesis are relevant to improving the message transmission reliability of a communication link with RS codes. For instance, they are pertinent to numerous data transmission technologies such as Digital Audio Broadcasting (DAB), Digital Video Broadcasting (DVB), Digital Subscriber Line (DSL), WiMAX, and long distance satellite communications. Equally, the developed, less computationally intensive and performance-efficient symbol level decoding algorithm for RS codes can be used in consumer technologies such as compact discs and digital versatile discs. / GS2016
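One way to picture a channel-state-aware reliability metric for soft decision decoding is sketched below. The function and its parameters are hypothetical stand-ins, not the thesis's DM method: each candidate symbol is scored by its Euclidean distance to the faded constellation point, with the known fade gain h supplying the channel state information:

```python
import numpy as np

def symbol_reliabilities(r, h, constellation, noise_var):
    """Hypothetical distance-metric reliability sketch: for a received
    sample r with known fade gain h, score each candidate symbol by the
    squared distance to the faded constellation point, then normalise
    into a probability-like reliability vector."""
    d2 = np.abs(r - h * constellation) ** 2
    w = np.exp(-d2 / noise_var)          # channel state enters through h
    return w / w.sum()

# BPSK-like toy: two "symbols" mapped to +-1, Rayleigh fade, AWGN.
rng = np.random.default_rng(0)
constellation = np.array([1.0, -1.0])    # symbol 0 -> +1, symbol 1 -> -1
h = rng.rayleigh(scale=1 / np.sqrt(2))   # known (estimated) fade amplitude
tx = constellation[1]                    # transmit symbol 1
r = h * tx + rng.normal(scale=0.3)
rel = symbol_reliabilities(r, h, constellation, noise_var=2 * 0.3 ** 2)
print(rel)   # typically concentrates mass on the transmitted symbol
```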
|
279 |
Comparison of code rate and transmit diversity in MIMO systems / Churms, Duane. January 2016 (has links)
A thesis submitted in fulfilment of the requirements for the degree of Master of Science in the Centre of Excellence in Telecommunications and Software, School of Electrical and Information Engineering, March 2016 / In order to compare low rate error correcting codes to MIMO schemes with transmit diversity, two systems with the same throughput are compared. A VBLAST MIMO system with (15, 5) Reed-Solomon coding is compared to an Alamouti MIMO system with (15, 10) Reed-Solomon coding. The latter is found to perform significantly better, indicating that transmit diversity is a more effective technique for minimising errors than reducing the code rate. The Guruswami-Sudan/Koetter-Vardy soft decision decoding algorithm was implemented to allow decoding beyond the conventional error correcting bound of RS codes, and VBLAST was adapted to provide reliability information. Analysis is also performed to find the optimal code rate when using various MIMO systems. / MT2016
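The Alamouti scheme referred to above admits a compact sanity check: with perfect channel knowledge and no noise, linear combining at the receiver returns each symbol scaled by the total channel gain. A minimal sketch (the channel and symbol values are invented):

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Alamouti space-time block: over two symbol periods the two transmit
    antennas send [s1, s2], then [-conj(s2), conj(s1)]."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])   # rows: antennas, cols: time slots

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining at a single receive antenna with known channel;
    yields decision statistics proportional to s1 and s2."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat

# Noise-free check: combining recovers (|h1|^2 + |h2|^2) times each symbol,
# which is the transmit-diversity gain the dissertation exploits.
h = np.array([0.8 - 0.3j, 0.2 + 0.9j])
s1, s2 = 1 + 1j, -1 + 1j                  # e.g. QPSK symbols
X = alamouti_encode(s1, s2)
r1 = h @ X[:, 0]                          # received sample, slot 1
r2 = h @ X[:, 1]                          # received sample, slot 2
g = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2
s1_hat, s2_hat = alamouti_combine(r1, r2, h[0], h[1])
assert np.allclose([s1_hat, s2_hat], [g * s1, g * s2])
```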
|
280 |
An approach to check digits and error-correcting codes in middle school (Uma abordagem de dígitos verificadores e códigos corretores no ensino fundamental) / Machado, Daniel Alves. 19 May 2016 (has links)
This work, based on a bibliographic survey, presents an overview of the check digits used in the Brazilian individual taxpayer registry (CPF), in bar codes, and in the ISBN system; it gives an introduction to the Hamming metric and to error-correcting codes; it discusses the most widely used class of codes, the linear codes; and it offers a pedagogical proposal for mathematics teachers to apply in middle school, which can also be adjusted for high school. In Appendix A, some exercises are proposed that can be worked through with students in the classroom.
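The check digit schemes surveyed above are easy to reproduce. The sketch below implements the usual CPF rule (weighted sum mod 11, with remainders 0 and 1 mapped to digit 0) and the ISBN-10 rule, as commonly stated; the worked ISBN example is the standard one:

```python
def cpf_check_digits(first9):
    """CPF check digits: weight the digits by 10..2 (then 11..2 for the
    second digit), take the sum mod 11, and map remainders 0 or 1 to
    digit 0, otherwise to 11 - remainder."""
    def digit(ds):
        r = sum(d * w for d, w in zip(ds, range(len(ds) + 1, 1, -1))) % 11
        return 0 if r < 2 else 11 - r
    d10 = digit(first9)
    d11 = digit(first9 + [d10])
    return d10, d11

def isbn10_check_digit(first9):
    """ISBN-10: the weighted sum of position * digit must be 0 mod 11;
    a check value of 10 is written as 'X'."""
    r = sum(i * d for i, d in enumerate(first9, start=1)) % 11
    return 'X' if r == 10 else str(r)

# ISBN 0-306-40615-? -> check digit 2 (the standard worked example)
assert isbn10_check_digit([0, 3, 0, 6, 4, 0, 6, 1, 5]) == '2'
```

Both schemes work mod 11 precisely because 11 is prime, which is what lets a single-digit error or a transposition always change the check value.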
|