51 |
Design of effective decoding techniques in network coding networks / Von Solms, Suné January 2013 (has links)
Random linear network coding is widely proposed as the solution for practical network coding applications due to its robustness to random packet loss, packet delays, and changes in network topology and capacity. For random linear network coding to be implemented in practical scenarios where the encoding and decoding methods perform efficiently, the computationally complex coding algorithms associated with it must be overcome.
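For illustration only (this sketch is not taken from the thesis): in random linear network coding over GF(2), each coded packet is a random XOR of the source packets, and a receiver recovers the sources by Gaussian elimination on the received coefficient vectors, whose row operations dominate the decoding cost the thesis aims to reduce.

```python
import random

def rlnc_encode(source_packets, num_coded):
    """Each coded packet is a random GF(2) combination (XOR) of the source packets."""
    n = len(source_packets)
    coded = []
    for _ in range(num_coded):
        coeffs = [random.randint(0, 1) for _ in range(n)]
        payload = bytes(len(source_packets[0]))
        for c, pkt in zip(coeffs, source_packets):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, pkt))
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, n):
    """Gauss-Jordan elimination over GF(2); the cubic number of row operations
    is the computational burden referred to above."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(n):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None                      # coefficient matrix not yet full rank
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytearray(a ^ b for a, b in zip(rows[r][1], rows[col][1])))
    return [bytes(rows[i][1]) for i in range(n)]

packets = [bytes([i]) * 16 for i in range(4)]
coded = rlnc_encode(packets, 8)              # a few extra packets guard against loss
print(rlnc_decode(coded, 4) == packets)      # True whenever the random matrix has full rank
```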
This research contributes to the field of practical random linear network coding by presenting new, low-complexity coding algorithms with low decoding delay. We build on the solutions currently available in the literature by combining familiar coding schemes with methods from other research areas, and by developing innovative coding methods of our own.
We show that by transmitting source symbols from the source node in predetermined, constrained patterns, the causality of the random linear network coding network can be used to create structure at the receiver nodes. This structure enables an innovative decoding scheme with low decoding delay that is also resilient to the effect of packet loss on the structure of the received packets. Its low decoding delay and resilience to packet erasures make it an attractive option for multimedia multicasting.
We show that fountain codes can be implemented in RLNC networks without changing the complete coding structure of those networks. By implementing an adapted encoding algorithm at strategic intermediate nodes, the receiver nodes obtain encoded packets that approximate the degree distribution required for successful belief propagation decoding.
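For orientation, the belief propagation (peeling) decoding that this degree shaping targets can be sketched as a generic fountain-code decoder over GF(2); the data structures below are illustrative assumptions, not the adapted encoding algorithm of the thesis.

```python
def peel_decode(coded, n):
    """Generic LT/fountain peeling decoder. `coded` is a list of
    (set_of_source_indices, xor_of_those_source_symbols) pairs."""
    coded = [(set(idx), bytearray(sym)) for idx, sym in coded]
    recovered = {}
    progress = True
    while progress and len(recovered) < n:
        progress = False
        for idx, sym in coded:
            if len(idx) == 1:                         # degree-1 packet reveals a source symbol
                i = next(iter(idx))
                if i not in recovered:
                    recovered[i] = bytes(sym)
                    progress = True
        for idx, sym in coded:                        # substitute known symbols -> new degree-1 packets
            for i in [j for j in idx if j in recovered and len(idx) > 1]:
                idx.discard(i)
                for k, b in enumerate(recovered[i]):
                    sym[k] ^= b
    return [recovered.get(i) for i in range(n)]       # None where peeling got stuck

# Decoding succeeds only if enough low-degree packets arrive, hence the need to
# restore a suitable degree distribution at the receivers.
src = [b"AA", b"BB", b"CC"]
coded = [({0}, b"AA"),
         ({0, 1}, bytes(a ^ b for a, b in zip(b"AA", b"BB"))),
         ({1, 2}, bytes(a ^ b for a, b in zip(b"BB", b"CC")))]
print(peel_decode(coded, 3))                          # [b'AA', b'BB', b'CC']
```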
Previous work showed that the redundant packets generated by RLNC networks can be used for error detection at the receiver nodes. This error detection method works without an outer code and therefore requires no additional network resources. We analyse the method and show that it is effective only for single-error detection, not correction.
In this thesis, the current body of knowledge and technology in practical random linear network coding is extended through the contribution of effective decoding techniques for practical network coding networks. We present both analytical and simulation results to show that the developed techniques yield low-complexity coding algorithms with low decoding delay in RLNC networks. / Thesis (PhD (Computer Engineering))--North-West University, Potchefstroom Campus, 2013
|
52 |
An Unsupervised Approach to Detecting and Correcting Errors in Text / Islam, Md Aminul 01 June 2011 (has links)
In practice, most approaches to text error detection and correction are based on a conventional, domain-dependent background dictionary that represents a fixed and static collection of correct words of a given language; as a result, satisfactory correction can only be achieved if the dictionary covers most tokens of the underlying correct text. Moreover, most approaches to text correction address only one, or at best a very few, types of errors.
The purpose of this thesis is to propose an unsupervised approach to detecting and correcting text errors that can compete with supervised approaches, and to answer the following questions:
Can an unsupervised approach efficiently detect and correct a text containing multiple errors of both syntactic and semantic nature?
What is the magnitude of error coverage, in terms of the number of errors that can be corrected?
We conclude that (1) it is possible for an unsupervised approach to efficiently detect and correct a text containing multiple errors of both a syntactic and a semantic nature. Error types include real-word spelling errors, typographical errors, lexical choice errors, unwanted words, missing words, prepositional errors, article errors, punctuation errors, and many grammatical errors (e.g., errors in agreement and verb formation). (2) The magnitude of error coverage, in terms of the number of errors that can be corrected, is almost double the number of correct words of the text. Although this is not the upper limit, it is what is practically feasible.
We use engineering approaches to answer the first question and theoretical approaches to answer and support the second question. We show that finding inherent properties of a correct text using a corpus in the form of an n-gram data set is more appropriate and practical than using other approaches to detecting and correcting errors. Instead of using rule-based approaches and dictionaries, we argue that a corpus can effectively be used to infer the properties of these types of errors, and to detect and correct these errors. We test the robustness of the proposed approach separately for some individual error types, and then for all types of errors.
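A minimal sketch of the general idea, assuming a toy trigram table in place of a real corpus n-gram data set (the scoring, threshold, and candidate generator below are illustrative placeholders, not the algorithm proposed in the thesis): a word is suspicious when the n-grams containing it are rare in the corpus, and an orthographically similar real word that makes those contexts frequent is taken as the correction.

```python
from difflib import get_close_matches

# Toy trigram counts standing in for a large corpus n-gram data set.
TRIGRAMS = {
    ("a", "piece", "of"): 120000, ("piece", "of", "cake"): 9500,
    ("a", "peace", "of"): 40,     ("peace", "of", "cake"): 3,
}
VOCAB = ["piece", "peace", "of", "cake", "a"]

def context_score(words, i):
    """Total corpus frequency of the trigrams that contain position i."""
    return sum(TRIGRAMS.get(tuple(words[j:j + 3]), 0)
               for j in range(max(0, i - 2), min(len(words) - 3, i) + 1))

def correct(sentence, threshold=100):
    words = sentence.split()
    for i, w in enumerate(words):
        if context_score(words, i) < threshold:              # rare contexts -> suspected error
            for cand in get_close_matches(w, VOCAB, n=5):    # similar real words as candidates
                trial = words[:i] + [cand] + words[i + 1:]
                if context_score(trial, i) > context_score(words, i):
                    words = trial                            # keep the better-supported candidate
    return " ".join(words)

print(correct("a peace of cake"))   # -> "a piece of cake"
```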
The approach is language-independent: it can be applied to other languages as long as n-gram data are available.
The results of this thesis thus suggest that unsupervised approaches, which are often dismissed in favor of supervised ones in the context of many Natural Language Processing (NLP) related tasks, may present an interesting array of NLP-related problem solving strengths.
|
53 |
On Error Detection and Recovery in Elliptic Curve Cryptosystems / Alkhoraidly, Abdulaziz Mohammad January 2011 (has links)
Fault analysis attacks represent a serious threat to a wide range of cryptosystems including those based on elliptic curves. With the variety and demonstrated practicality of these attacks, it is essential for cryptographic implementations to handle different types of errors properly and securely. In this work, we address some aspects of error detection and recovery in elliptic curve cryptosystems. In particular, we discuss the problem of wasteful computations performed between the occurrence of an error and its detection and propose solutions based on frequent validation to reduce that waste. We begin by presenting ways to select the validation frequency in order to minimize various performance criteria including the average and worst-case costs and the reliability threshold. We also provide solutions to reduce the sensitivity of the validation frequency to variations in the statistical error model and its parameters. Then, we present and discuss adaptive error recovery and illustrate its advantages in terms of low sensitivity to the error model and reduced variance of the resulting overhead especially in the presence of burst errors. Moreover, we use statistical inference to evaluate and fine-tune the selection of the adaptive policy. We also address the issue of validation testing cost and present a collection of coherency-based, cost-effective tests. We evaluate variations of these tests in terms of cost and error detection effectiveness and provide infective and reduced-cost, repeated-validation variants. Moreover, we use coherency-based tests to construct a combined-curve countermeasure that avoids the weaknesses of earlier related proposals and provides a flexible trade-off between cost and effectiveness.
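The trade-off between validation cost and wasted computation can be illustrated with a deliberately simplified model (an assumption of my own with independent per-block errors and a single retry, not the statistical error model used in the thesis): validating every k blocks costs more validations for small k but wastes less recomputation when an error strikes, so an intermediate k minimizes the expected overhead.

```python
import math

def expected_overhead(k, n_blocks, p_block_error, validation_cost):
    """Expected extra work (in block units) when a computation of n_blocks blocks
    is validated every k blocks. Assumes i.i.d. block errors with probability
    p_block_error and that one re-execution of the failed interval suffices."""
    validations = math.ceil(n_blocks / k) * validation_cost
    p_interval_bad = 1.0 - (1.0 - p_block_error) ** k        # an error somewhere in the interval
    wasted = (n_blocks / k) * p_interval_bad * k             # redo the whole affected interval
    return validations + wasted

n, p, c = 256, 1e-3, 4.0                                      # placeholder parameters
best_k = min(range(1, n + 1), key=lambda k: expected_overhead(k, n, p, c))
print(best_k, round(expected_overhead(best_k, n, p, c), 2))
```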
|
54 |
cROVER: Context-augmented Speech Recognizer based on Multi-Decoders' Output / Abida, Mohamed Kacem 20 September 2011 (links)
The growing need for reliable voice-based human-machine interfaces has inspired intensive research in the field of voice-enabled systems, and greater robustness and reliability are being sought for those systems. Speech recognition has become ubiquitous: automated call centers, smart phones, and dictation and transcription software are among the many systems currently being designed that involve speech recognition, so the need for highly accurate and optimized recognizers has never been greater. The research community is actively developing powerful techniques to combine existing feature extraction methods for better and more reliable information capture from the analog signal, as well as enhancing language and acoustic modeling procedures to adapt better to unseen or distorted speech signal patterns. Most researchers agree that one of the most promising approaches to reducing the Word Error Rate (WER) in large-vocabulary speech transcription is to combine two or more speech recognizers and generate a new output, in the expectation that it provides a lower error rate. The work proposed here aims to further improve the performance of the well-known Recognizer Output Voting Error Reduction (ROVER) combination technique by integrating it with an error filtering approach. The proposed system is referred to as cROVER, for context-augmented ROVER. The principal idea is to flag erroneous words after the word transition networks have been combined, through a scanning process at each slot of the resulting network. This step aims at eliminating some transcription errors and thus facilitating the voting process within ROVER. The error detection technique consists of spotting semantic outliers in a given decoder's transcription output. Because most error detection techniques suffer from a high false-positive rate, we combine several error filtering techniques to compensate for the poor performance of each individual error classifier. Experimental results have shown that the proposed cROVER approach reduces the relative WER by almost 10% through an adequate combination of speech decoders. The approaches proposed here are generic enough to be used with any number of speech decoders and any type of error filtering technique. A novel voting mechanism, inspired by the cROVER approach, is also proposed: a confidence-based voting scheme whose main idea is to use the confidence scores collected from the contextual analysis when scoring each word in the transition network. The new voting scheme outperformed ROVER's original voting by up to 16% in terms of relative WER reduction.
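For orientation, the voting step at a single slot of the word transition network can be sketched as below; the linear mixing of vote counts and confidence scores follows the general ROVER formulation, while the weights and the example hypotheses are illustrative assumptions rather than cROVER's exact scheme.

```python
def vote_slot(candidates, alpha=0.5):
    """One slot of the aligned word transition network. `candidates` holds one
    (word, confidence) pair per decoder; score(w) mixes the vote fraction of w
    with its mean confidence, and the best-scoring word wins the slot."""
    n = len(candidates)
    scores = {}
    for word, _ in candidates:
        confs = [c for w, c in candidates if w == word]
        scores[word] = alpha * len(confs) / n + (1 - alpha) * sum(confs) / len(confs)
    return max(scores, key=scores.get)

# Three decoders disagree (a 1-1-1 tie on raw votes); confidences break the tie.
print(vote_slot([("ship", 0.92), ("sheep", 0.41), ("chip", 0.55)]))   # -> "ship"
```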
|
55 |
Robust Video Transmission Using Data Hiding / Yilmaz, Ayhan 01 January 2003 (links) (PDF)
Video transmission over noisy wireless channels leads to errors in the video, which degrade the visual quality notably and make error concealment indispensable. The literature contains several error concealment techniques based on estimating the lost parts of the video from the available data. Utilizing data hiding for this problem, which can be seen as an alternative to predicting the lost data, provides reserve information about the video to the receiver without changing the transmitted bit-stream syntax; hence, it improves the reconstructed video quality without significant extra channel utilization. A complete error-resilient video transmission codec is proposed, utilizing imperceptibly embedded information for combined detection, resynchronization and reconstruction of errors and lost data. The data, which is imperceptibly embedded into the video itself at the encoder, is extracted from the video at the decoder side and used for error concealment. A spatial-domain error recovery technique, which hides the edge orientation information of a block, and a resynchronization technique, which embeds the bit length of a block into other blocks, are combined, together with some parity information about the hidden data, to conceal channel errors in intra-coded frames of a video sequence. The errors in inter-coded frames are recovered mainly by hiding motion vector information, along with a checksum, in the next frames. The simulation results show that the proposed approach is superior to conventional approaches for concealing errors in binary symmetric channels, especially at higher bit rates and error rates.
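As a generic illustration of the data-hiding principle rather than the codec proposed here, a small checksum of one block can be hidden in the least-significant bits of pixels carried elsewhere, letting the decoder flag a corrupted block without any extra channel bits; the block values and embedding positions below are placeholders.

```python
def checksum8(block):
    """8-bit checksum of a block of pixel values."""
    return sum(block) & 0xFF

def embed_lsb(host_pixels, value, bits=8):
    """Hide `value` in the least-significant bits of the first `bits` host pixels."""
    out = list(host_pixels)
    for i in range(bits):
        out[i] = (out[i] & ~1) | ((value >> i) & 1)
    return out

def extract_lsb(host_pixels, bits=8):
    return sum((host_pixels[i] & 1) << i for i in range(bits))

block = [12, 200, 37, 90, 90, 41, 7, 255]            # one block of the current frame
host = embed_lsb([128] * 8, checksum8(block))        # checksum hidden in other pixels
received = block[:]
received[3] ^= 0x40                                  # simulate a channel error in the block
print(extract_lsb(host) == checksum8(received))      # False -> block flagged for concealment
```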
|
56 |
Hardware Error Detection Using AN-Codes / Schiffel, Ute 08 July 2011 (links) (PDF)
Due to the continuously decreasing feature sizes and the increasing complexity of integrated circuits, commercial off-the-shelf (COTS) hardware is becoming less and less reliable. However, dedicated reliable hardware is expensive and usually slower than commodity hardware. Thus, economic pressure will most likely result in the usage of unreliable COTS hardware in safety-critical systems.
The usage of unreliable COTS hardware in safety-critical systems results in the need for software-implemented solutions for handling execution errors caused by this unreliable hardware. In this thesis, we provide techniques for detecting hardware errors that disturb the execution of a program. The detection provided facilitates handling of these errors, for example, by retry or graceful degradation.
We realize the error detection by transforming unsafe programs, which are not guaranteed to detect execution errors, into safe programs that detect execution errors with a high probability. To this end, we use arithmetic AN-, ANB-, ANBD-, and ANBDmem-codes. These codes detect errors that modify data during storage or transport as well as errors that disturb computations. Furthermore, the error detection provided is independent of the hardware used.
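A minimal AN-code sketch for illustration (the constant A = 58659 is a value often quoted in the coded-processing literature; the ANB, ANBD and ANBDmem variants layer signatures and timestamps on top of this idea, and the SEP/CEP encoded operations are far more involved): every value x is stored as A*x, sums of encoded values remain encoded, and any result not divisible by A signals an error.

```python
A = 58659                       # code constant; any odd A detects every single bit flip,
                                # because a flip changes the value by 2**k, never a multiple of A

def encode(x):
    return A * x

def check(xc):
    if xc % A != 0:             # non-zero residue => the value was corrupted
        raise RuntimeError("hardware error detected")
    return xc

def decode(xc):
    return check(xc) // A

def add_encoded(xc, yc):
    return xc + yc              # sums of multiples of A stay multiples of A

a, b = encode(20), encode(22)
print(decode(add_encoded(a, b)))            # 42
faulty = add_encoded(a, b) ^ (1 << 13)      # bit flip during computation or storage
decode(faulty)                              # raises RuntimeError
```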
We present the following novel encoding approaches:
- Software Encoded Processing (SEP) that transforms an unsafe binary into a safe execution at runtime by applying an ANB-code, and
- Compiler Encoded Processing (CEP) that applies encoding at compile time and provides different levels of safety by using different arithmetic codes.
In contrast to existing encoding solutions, SEP and CEP make it possible to encode applications whose data and control flow are not completely predictable at compile time.
For encoding, SEP and CEP use our set of encoded operations, which is also presented in this thesis. To the best of our knowledge, we are the first to present the encoding of a complete RISC instruction set, including boolean and bitwise logical operations, casts, unaligned loads and stores, shifts, and arithmetic operations.
Our evaluations show that encoding with SEP and CEP significantly reduces the amount of erroneous output caused by hardware errors, and that, in contrast to replication-based approaches for detecting errors, arithmetic encoding also facilitates the detection of permanent hardware errors.
This increased reliability does not come for free. However, unexpectedly, the runtime costs of the different arithmetic codes supported by CEP increase only linearly compared to redundancy, while the safety gained increases exponentially.
|
57 |
Σχεδιασμός και υλοποίηση σε υλικό κρυπτογραφικών μηχανισμών με δυνατότητα ανίχνευσης σφαλμάτων / Design and hardware implementation of cryptographic mechanisms with error detection capability / Κοτσιώλης, Απόστολος 21 March 2011 (links)
The purpose of this thesis is the design and hardware implementation of cryptographic mechanisms in such a way that they acquire self-checking properties through the use of error detection mechanisms. To this end, we try to select, from the error detection mechanisms known in the literature, those that help us introduce the desired self-checking properties into our system while taking care to preserve its particular characteristics.
Because the encryption process is so critical, it is very important that it be carried out without errors. Possible errors could be exploited by an attacker to read the contents of the message during a transmission, or could cause errors in the message itself and in the corresponding hash value. For these reasons, we try to introduce error detection mechanisms into the hardware implementation of the hash algorithm so as to ensure trouble-free operation. At the same time, because of the particular requirements of an encryption system regarding high processing speed and the smallest possible integration area, we take special care that our system maintains these desirable characteristics.
|
58 |
Uma análise dos esquemas de dígitos verificadores usados no Brasil / An analysis of check digit schemes used in Brazil / Natália Pedroza de Souza 31 July 2013 (links)
Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro
In this work we discuss several check digit systems used in Brazil, many of them similar to schemes used worldwide, and we analyse their ability to detect the various types of errors that are common when data are entered into computer systems. The analysis shows that the chosen schemes constitute suboptimal decisions and almost never achieve the best possible error detection rate. Check digit systems are based on three algebraic structures: modular arithmetic, group theory and quasigroups. For the schemes based on modular arithmetic we present several improvements that can be introduced. We develop a new optimal scheme based on modulo-10 arithmetic with three permutations for identifiers longer than seven digits. We also describe the Verhoeff scheme, which is old but very rarely used and which is likewise an improved alternative for identifiers of up to seven digits. Furthermore, we develop optimal schemes for any prime modular base that detect all of the types of errors considered. The dissertation also makes use of elements of statistics, in the study of error detection probabilities, and of algorithms, in obtaining optimal schemes.
|
60 |
Turbine Generator Performance Dashboard for Predictive Maintenance Strategies / Emily R Rada (11813852) 19 December 2021 (links)
Equipment health is the root of productivity and profitability in a company; through the use of machine learning and advancements in computing power, a maintenance strategy known as Predictive Maintenance (PdM) has emerged. The predictive maintenance approach utilizes performance and condition data to forecast necessary machine repairs. Predicting maintenance needs reduces the likelihood of operational errors, aids in the avoidance of production failures, and allows for preplanned outages. The PdM strategy is based on machine-specific data, which proves to be a valuable tool. The machine data provide quantitative proof of operation patterns and production while offering machine health insights that may otherwise go unnoticed.

Purdue University's Wade Utility Plant is responsible for providing reliable utility services for the campus community. The Wade Utility Plant has invested in an equipment monitoring system for a thirty-megawatt turbine generator. The equipment monitoring system records operational and performance data as the turbine generator supplies the campus with electricity and high-pressure steam. Unplanned and surprise maintenance needs in the turbine generator hinder utility production and lessen the dependability of the system.

The work of this study leverages the turbine generator data that the Wade Utility Plant records and stores to justify equipment care and provide early error detection at an in-house level. The research collects and aggregates operational, monitoring and performance-based data for the turbine generator in Microsoft Excel, creating a dashboard that visually displays and statistically monitors variables for discrepancies. The dashboard records ninety days of data, tracked hourly, determines averages and extrema, and alerts the user as data approach recommended warning levels. Microsoft Excel offers a low-cost and accessible platform for data collection and analysis, providing an adaptable and comprehensible view of the turbine generator data. The dashboard presents visual trends, simple statistics, and status updates over 90 days of user-selected data, and it offers the ability to forecast maintenance needs, plan work outages, and adjust operations while continuing to provide reliable services that meet Purdue University's utility demands.
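A sketch of the kind of statistical monitoring the dashboard performs, written here in Python for brevity (the tag names, warning limits, and 90-day hourly window are placeholder assumptions; the abstract does not list the actual variables the plant records):

```python
import pandas as pd

WARNING_LEVELS = {"bearing_temp_C": 95.0, "vibration_mm_s": 7.1}    # placeholder limits

def dashboard_summary(df, window_hours=90 * 24, approach=0.90):
    """df: hourly readings. Report average, extrema and latest value over the last
    90 days, and raise an alert flag when the latest reading reaches 90% of the
    recommended warning level for that variable."""
    recent = df.tail(window_hours)
    rows = []
    for col, limit in WARNING_LEVELS.items():
        series = recent[col]
        rows.append({"variable": col,
                     "average": series.mean(),
                     "min": series.min(),
                     "max": series.max(),
                     "latest": series.iloc[-1],
                     "alert": series.iloc[-1] >= approach * limit})
    return pd.DataFrame(rows)

hours = pd.date_range("2021-09-01", periods=90 * 24, freq="h")
readings = pd.DataFrame({"bearing_temp_C": 80.0, "vibration_mm_s": 3.0}, index=hours)
readings.iloc[-1, 0] = 93.5                      # latest temperature creeping toward its limit
print(dashboard_summary(readings))
```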
|