About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

On Fault-based Attacks and Countermeasures for Elliptic Curve Cryptosystems

Dominguez Oviedo, Agustin January 2008 (has links)
For some applications, elliptic curve cryptography (ECC) is an attractive choice because it achieves the same level of security with a much smaller key size in comparison with other schemes such as those that are based on integer factorization or discrete logarithm. Unfortunately, cryptosystems, including those based on elliptic curves, have been subject to attacks. For example, fault-based attacks have been shown to be a real threat in today’s cryptographic implementations. In this thesis, we consider fault-based attacks and countermeasures for ECC. We propose a new fault-based attack against the Montgomery ladder elliptic curve scalar multiplication (ECSM) algorithm. For security reasons, especially to provide resistance against fault-based attacks, it is very important to verify the correctness of computations in ECC applications. We address protection against fault attacks on ECSM at two levels: module and algorithm. For protection at the module level, where the underlying scalar multiplication algorithm is not changed, a number of schemes and hardware structures are presented based on re-computation or parallel computation. It is shown that these structures can be used for detecting errors with a very high probability during the computation of ECSM. For protection at the algorithm level, we use the concepts of point verification (PV) and coherency check (CC). We investigate the error detection coverage of PV and CC for the Montgomery ladder ECSM algorithm. Additionally, we propose two algorithms based on the double-and-add-always method that are resistant to the safe error (SE) attack. We demonstrate that one of these algorithms also resists the sign change fault (SCF) attack.
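To make the algorithm-level countermeasures concrete, the following is a minimal sketch (over a deliberately tiny, insecure curve) of a double-and-add-always scalar multiplication with periodic point verification used as a fault check. The curve parameters, the verification interval, and the fault-handling policy are illustrative assumptions and not the exact algorithms proposed in the thesis.

```python
# Minimal sketch: double-and-add-always scalar multiplication over a toy curve
# y^2 = x^3 + a*x + b (mod p), with point verification (PV) as a fault check.
# Curve and parameters are illustrative only, NOT cryptographically secure.

p, a, b = 97, 2, 3

def on_curve(P):
    if P is None:                            # point at infinity
        return True
    x, y = P
    return (y * y - (x * x * x + a * x + b)) % p == 0

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    x1, y1 = P; x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ecsm_da_always(k, P, pv_every=4):
    """Double-and-add-always with periodic point verification."""
    R0, R1 = None, P                         # R1 is the dummy accumulator
    for i, bit in enumerate(bin(k)[2:]):
        R0 = add(R0, R0)                     # always double
        if bit == '1':
            R0 = add(R0, P)                  # real addition
        else:
            R1 = add(R1, P)                  # dummy addition (operation sequence is key-independent)
        if i % pv_every == 0 and not (on_curve(R0) and on_curve(R1)):
            raise RuntimeError("fault detected by point verification")
    return R0

P = (3, 6)                                   # 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
assert on_curve(P)
print(ecsm_da_always(25, P))
```

The dummy addition keeps the operation sequence independent of the key bits, which is what makes safe-error attacks harder, while the on-curve check catches many injected faults because a faulty intermediate point rarely still satisfies the curve equation.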
42

On Error Detection and Recovery in Elliptic Curve Cryptosystems

Alkhoraidly, Abdulaziz Mohammad January 2011 (has links)
Fault analysis attacks represent a serious threat to a wide range of cryptosystems including those based on elliptic curves. With the variety and demonstrated practicality of these attacks, it is essential for cryptographic implementations to handle different types of errors properly and securely. In this work, we address some aspects of error detection and recovery in elliptic curve cryptosystems. In particular, we discuss the problem of wasteful computations performed between the occurrence of an error and its detection and propose solutions based on frequent validation to reduce that waste. We begin by presenting ways to select the validation frequency in order to minimize various performance criteria including the average and worst-case costs and the reliability threshold. We also provide solutions to reduce the sensitivity of the validation frequency to variations in the statistical error model and its parameters. Then, we present and discuss adaptive error recovery and illustrate its advantages in terms of low sensitivity to the error model and reduced variance of the resulting overhead especially in the presence of burst errors. Moreover, we use statistical inference to evaluate and fine-tune the selection of the adaptive policy. We also address the issue of validation testing cost and present a collection of coherency-based, cost-effective tests. We evaluate variations of these tests in terms of cost and error detection effectiveness and provide infective and reduced-cost, repeated-validation variants. Moreover, we use coherency-based tests to construct a combined-curve countermeasure that avoids the weaknesses of earlier related proposals and provides a flexible trade-off between cost and effectiveness.
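As a rough illustration of the frequency-selection problem discussed above, the sketch below picks a validation block size that minimizes a simple expected-cost expression. The uniform per-iteration error rate, fixed validation cost, and single re-execution of an erroneous block are simplifying assumptions for illustration; the thesis analyses richer criteria (worst-case cost, reliability thresholds, burst errors, adaptive recovery).

```python
# Illustrative sketch only: trade validation overhead against wasted work.
# The cost model is an assumption, not the exact criteria analysed in the thesis.

def expected_cost(n_iters, block, q, c_v):
    """Expected iterations plus validation overhead for blocks of size `block`."""
    n_blocks = -(-n_iters // block)              # ceiling division
    p_block_err = 1 - (1 - q) ** block           # prob. a block must be redone
    rework = n_blocks * p_block_err * block      # each bad block re-run once (approx.)
    return n_iters + n_blocks * c_v + rework

def best_block_size(n_iters, q, c_v):
    return min(range(1, n_iters + 1),
               key=lambda m: expected_cost(n_iters, m, q, c_v))

if __name__ == "__main__":
    n, q, c_v = 256, 1e-3, 5.0                   # hypothetical parameters
    m = best_block_size(n, q, c_v)
    print(m, expected_cost(n, m, q, c_v))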
43

cROVER: Context-augmented Speech Recognizer based on Multi-Decoders' Output

Abida, Mohamed Kacem 20 September 2011 (has links)
The growing need for designing and implementing reliable voice-based human-machine interfaces has inspired intensive research work in the field of voice-enabled systems, and greater robustness and reliability are being sought for those systems. Speech recognition has become ubiquitous. Automated call centers, smart phones, and dictation and transcription software are among the many systems currently being designed that involve speech recognition. The need for highly accurate and optimized recognizers has never been more crucial. The research community is very actively involved in developing powerful techniques to combine the existing feature extraction methods for better and more reliable information capture from the analog signal, as well as enhancing the language and acoustic modeling procedures to better adapt to unseen or distorted speech signal patterns. Most researchers agree that one of the most promising approaches for the problem of reducing the Word Error Rate (WER) in large vocabulary speech transcription is to combine two or more speech recognizers and then generate a new output, in the expectation that it provides a lower error rate. The research work proposed here aims at enhancing and boosting even further the performance of the well-known Recognizer Output Voting Error Reduction (ROVER) combination technique. This is done through its integration with an error filtering approach. The proposed system is referred to as cROVER, for context-augmented ROVER. The principal idea is to flag erroneous words following the combination of the word transition networks through a scanning process at each slot of the resulting network. This step aims at eliminating some transcription errors and thus facilitating the voting process within ROVER. The error detection technique consists of spotting semantic outliers in a given decoder's transcription output. Because most error detection techniques suffer from a high false-positive rate, we propose combining error filtering techniques to compensate for the poor performance of each of the individual error classifiers. Experimental results have shown that the proposed cROVER approach is able to reduce the relative WER by almost 10% through adequate combination of speech decoders. The approaches proposed here are generic enough to be used by any number of speech decoders and with any type of error filtering technique. A novel voting mechanism has also been proposed. The new confidence-based voting scheme was inspired by the cROVER approach. The main idea consists of using the confidence scores collected from the contextual analysis during the scoring of each word in the transition network. The new voting scheme outperformed ROVER's original voting by up to 16% in terms of relative WER reduction.
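The following is a small, hypothetical sketch of slot-wise voting over an aligned word transition network in the spirit of ROVER/cROVER: each decoder contributes a word and a confidence for the slot, words flagged by an error filter as semantic outliers are down-weighted, and a combined vote/confidence score picks the winner. The weighting formula and the outlier penalty are assumptions, not the exact cROVER scoring.

```python
# Sketch of confidence-weighted slot voting; scoring formula is illustrative.
from collections import defaultdict

def vote_slot(candidates, alpha=0.7):
    """candidates: list of (word, confidence, is_outlier), one per decoder."""
    scores = defaultdict(float)
    votes = defaultdict(int)
    for word, conf, is_outlier in candidates:
        if is_outlier:
            conf *= 0.5                     # penalise words flagged by the error filter
        votes[word] += 1
        scores[word] += conf
    n = len(candidates)
    def combined(w):                        # mix vote share and mean confidence
        return alpha * votes[w] / n + (1 - alpha) * scores[w] / votes[w]
    return max(scores, key=combined)

# One slot where three decoders disagree:
slot = [("ship", 0.91, False), ("sheep", 0.40, True), ("ship", 0.85, False)]
print(vote_slot(slot))                      # -> "ship"
```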
44

Cost-effective Designs for Supporting Correct Execution and Scalable Performance in Many-core Processors

Romanescu, Bogdan Florin January 2010 (has links)
Many-core processors offer new levels of on-chip performance by capitalizing on the increasing rate of device integration. Harnessing the full performance potential of these processors requires that hardware designers not only exploit the advantages, but also consider the problems introduced by the new architectures. Such challenges arise from both the processor's increased structural complexity and the reliability issues of the silicon substrate. In this thesis, we address these challenges in a framework that targets correct execution and performance on three coordinates: 1) tolerating permanent faults, 2) facilitating static and dynamic verification through precise specifications, and 3) designing scalable coherence protocols.

First, we propose CCA, a new design paradigm for increasing the processor's lifetime performance in the presence of permanent faults in cores. CCA chips rely on a reconfiguration mechanism that allows cores to replace faulty components with fault-free structures borrowed from neighboring cores. In contrast with existing solutions for handling hard faults that simply shut down cores, CCA aims to maximize the utilization of defect-free resources and increase the availability of on-chip cores. We implement three-core and four-core CCA chips and demonstrate that they offer a cumulative lifetime performance improvement of up to 65% for industry-representative utilization periods. In addition, we show that CCA benefits systems that employ modular redundancy to guarantee correct execution by increasing their availability.

Second, we target the correctness of the address translation system. Current processors often exhibit design bugs in their translation systems, and we believe one cause for these faults is a lack of precise specifications describing the interactions between address translation and the rest of the memory system, especially memory consistency. We address this aspect by introducing a framework for specifying translation-aware consistency models. As part of this framework, we identify the critical role played by address translation in supporting correct memory consistency implementations. Consequently, we propose a set of invariants that characterizes address translation. Based on these invariants, we develop DVAT, a dynamic verification mechanism for address translation. We demonstrate that DVAT is efficient in detecting translation-related faults, including several that mimic design bugs reported in processor errata. By checking the correctness of the address translation system, DVAT supports dynamic verification of translation-aware memory consistency.

Finally, we address the scalability of translation coherence protocols. Current software-based solutions for maintaining translation coherence adversely impact performance and do not scale. We propose UNITD, a hardware coherence protocol that supports scalable performance and architectural decoupling. UNITD integrates translation coherence within the regular cache coherence protocol, such that TLBs participate in the cache coherence protocol similar to instruction or data caches. We evaluate snooping and directory UNITD coherence protocols on processors with up to 16 cores and demonstrate that UNITD reduces the performance penalty of translation coherence to almost zero. / Dissertation
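The sketch below is a toy software model of the core idea behind hardware translation coherence of the kind UNITD describes: TLB entries are tagged with the physical address of the page-table entry they were filled from, and TLBs observe the same invalidation traffic as caches, so a write to a page-table entry removes both the cached block and any stale translations. The classes, addresses, and broadcast mechanism are invented for illustration and do not reflect the actual protocol or its evaluation.

```python
# Toy model of TLBs snooping the same invalidation traffic as caches.
# All structures and addresses are hypothetical.

class Cache:
    def __init__(self):
        self.lines = set()                      # physical block addresses held
    def snoop_invalidate(self, addr):
        self.lines.discard(addr)

class SnoopingTLB:
    def __init__(self):
        self.entries = {}                       # vpn -> (ppn, pte_addr)
    def insert(self, vpn, ppn, pte_addr):
        self.entries[vpn] = (ppn, pte_addr)
    def snoop_invalidate(self, addr):
        # Drop any translation whose page-table entry lives in the written block.
        stale = [v for v, (_, pte) in self.entries.items() if pte == addr]
        for v in stale:
            del self.entries[v]

def broadcast_write(addr, sharers):
    """On a write, invalidate the block in every snooper (caches and TLBs alike)."""
    for unit in sharers:
        unit.snoop_invalidate(addr)

tlb, cache = SnoopingTLB(), Cache()
tlb.insert(vpn=0x10, ppn=0x80, pte_addr=0x4000)
cache.lines.add(0x4000)
broadcast_write(0x4000, [cache, tlb])           # OS updates the page table
print(tlb.entries, cache.lines)                 # both stale copies are gone
```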
45

Improving Data Quality: Development and Evaluation of Error Detection Methods

Lee, Nien-Chiu 25 July 2002 (has links)
High-quality data are essential to decision support in organizations. However, estimates have shown that 15-20% of the data within an organization's databases can be erroneous. Some databases contain a large number of errors, leading to a large potential problem if they are used for managerial decision-making. To improve data quality, data cleaning endeavors are needed and have been initiated by many organizations. Broadly, data quality problems can be classified into three categories: incompleteness, inconsistency, and incorrectness. Among the three data quality problems, data incorrectness represents the major source of low-quality data. Thus, this research focuses on error detection for improving data quality. In this study, we developed a set of error detection methods based on the semantic constraint framework. Specifically, we proposed a set of error detection methods including uniqueness detection, domain detection, attribute value dependency detection, attribute domain inclusion detection, and entity participation detection. Empirical evaluation results showed that some of our proposed error detection techniques (i.e., uniqueness detection) achieved low miss rates and low false alarm rates. Overall, our error detection methods together could identify around 50% of the errors introduced by subjects during experiments.
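As a simple illustration of the semantic-constraint approach, the sketch below implements two of the checks named above (uniqueness detection and domain detection) over a made-up relation; the records and constraints are hypothetical.

```python
# Minimal sketch of uniqueness and domain checks over an in-memory relation.

records = [
    {"emp_id": 1, "dept": "HR",    "age": 34},
    {"emp_id": 2, "dept": "IT",    "age": 290},   # domain violation
    {"emp_id": 2, "dept": "Sales", "age": 41},    # duplicate key
]

def uniqueness_violations(rows, key):
    """Return indices of rows whose key value repeats an earlier row."""
    seen, bad = set(), []
    for i, r in enumerate(rows):
        if r[key] in seen:
            bad.append(i)
        seen.add(r[key])
    return bad

def domain_violations(rows, field, valid):
    """Return indices of rows whose field value falls outside the allowed domain."""
    return [i for i, r in enumerate(rows) if not valid(r[field])]

print(uniqueness_violations(records, "emp_id"))                      # -> [2]
print(domain_violations(records, "age", lambda a: 16 <= a <= 100))   # -> [1]
```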
46

The constructivist learning architecture: a model of cognitive development for robust autonomous robots

Chaput, Harold Henry 28 August 2008 (has links)
Not available / text
47

A Penalty Function-Based Dynamic Hybrid Shop Floor Control System

Zhao, Xiaobing January 2006 (has links)
To cope with dynamics and uncertainties, a novel penalty function-based hybrid, multi-agent shop floor control system is proposed in this dissertation. The key characteristic of the proposed system is the capability of adaptively distributing decision-making power across different levels of control agents in response to different levels of disturbance. The subordinate agent executes tasks based on the schedule from the supervisory level agent in the absence of disturbance. Otherwise, it optimizes the original schedule before execution by revising it with regard to supervisory level performance (via a penalty function) and the disturbance. Penalty functions, mathematical programming formulations, and quantitative metrics are presented to indicate the disturbance levels and levels of autonomy. These formulations are applied to diverse performance measurements such as completion-time-related metrics, makespan, and number of late jobs. The proposed control system is illustrated, tested with various job shop problems, and benchmarked against other shop floor control systems. In today's manufacturing systems, humans still play an important role alongside the control system. Therefore, better coordination of humans and control systems is essential. A novel BDI agent-based software model is proposed in this work to replace the partial decision-making function of a human. This proposed model is capable of 1) generating plans in real-time to adapt the system to a changing environment, 2) supporting not only reactive, but also proactive decision-making, 3) maintaining situational awareness in human language-like logic to facilitate real human decision-making, and 4) adapting the commitment strategy to historical performance. The general-purpose human operator model is then customized and integrated with an automated shop floor control system to serve as the error detection and recovery system. This model has been implemented in JACK software; however, JACK does not support real-time generation of a plan. Therefore, the planner sub-module has been developed in Java and then integrated with JACK. To facilitate integration of agents, real humans, and the environment, a distributed computing platform based on the DoD High Level Architecture has been used. The effectiveness of the proposed model is then tested in several scenarios in a simulated automated manufacturing environment.
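A minimal sketch of the penalty-function idea is shown below: after a disturbance inflates one job's processing time, the subordinate agent re-sequences jobs to minimise its own objective plus a penalty for deviating from the supervisory schedule. The total-completion-time objective, the position-deviation penalty, the weight, and the brute-force search over a three-job example are all assumptions for illustration, not the dissertation's mathematical programming formulations.

```python
# Illustrative sketch of penalty-function-based rescheduling after a disturbance.
from itertools import permutations

proc_time = {"J1": 4, "J2": 9, "J3": 3}          # J2 delayed by a disturbance: 2 -> 9
supervisory_order = ("J1", "J2", "J3")

def total_completion_time(order):
    t, total = 0, 0
    for job in order:
        t += proc_time[job]
        total += t
    return total

def deviation_penalty(order, reference):
    # Count jobs whose position differs from the supervisory schedule.
    return sum(1 for i, job in enumerate(order) if reference[i] != job)

def revise_schedule(reference, lam=2.0):
    # Local objective plus weighted penalty for deviating from the supervisor.
    return min(permutations(reference),
               key=lambda o: total_completion_time(o) + lam * deviation_penalty(o, reference))

print(revise_schedule(supervisory_order))        # -> ('J1', 'J3', 'J2'): delayed job moves last
```

A larger penalty weight keeps the subordinate agent closer to the supervisory schedule; a smaller weight gives it more autonomy, which mirrors the adaptive distribution of decision-making power described above.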
48

Software Architecture-Based Failure Prediction

Mohamed, ATEF 28 September 2012 (has links)
Given the role of software in everyday life, the cost of a software failure can sometimes be unaffordable. During system execution, errors may occur in system components and failures may be manifested due to these errors. These errors differ in their effects on system behavior and in the manner in which the resulting failures manifest. Predicting failures before their manifestation is important to assure system resilience. It helps avoid the cost of failures and enables systems to perform corrective actions prior to failure occurrences. However, effective runtime error detection and failure prediction techniques encounter a prohibitive challenge with respect to the control flow representation of large software systems with intricate control flow structures. In this thesis, we provide a technique for failure prediction from runtime errors of large software systems. Aiming to avoid the possible difficulties and inaccuracies of the existing Control Flow Graph (CFG) structures, we first propose a Connection Dependence Graph (CDG) for control flow representation of large software systems. We describe the CDG structure and explain how to derive it from program source code. Second, we utilize the proposed CDG to provide a connection-based signature approach for control flow error detection. We describe the monitor structure and present the error checking algorithm. Finally, we utilize the detected errors and erroneous state parameters to predict failure occurrences and modes during system runtime. We craft runtime signatures based on these errors and state parameters. Using system error and failure history, we determine a predictive function (an estimator) for each failure mode based on these signatures. Our experimental evaluation for these techniques uses a large open-source software system (the PostgreSQL 8.4.4 database system). The results show highly efficient control flow representation, error detection, and failure prediction techniques. This work contributes to software reliability by providing a simple and accurate control flow representation and utilizing it to detect runtime errors and predict failure occurrences and modes with high accuracy. / Thesis (Ph.D, Computing) -- Queen's University, 2012-09-25 23:44:12.356
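To make the connection-based checking idea concrete, the following is a generic sketch (not the thesis's CDG construction or signature encoding): a monitor holds the set of legal connections between code regions, the instrumented program reports each transition it takes, and any transition outside the allowed set is recorded as a runtime error that a predictor could later use. The block names and graph are hypothetical.

```python
# Generic sketch of connection-based control-flow checking with a runtime monitor.

ALLOWED = {                       # hypothetical connection graph: block -> legal successors
    "entry":   {"parse", "error"},
    "parse":   {"execute", "error"},
    "execute": {"commit", "error"},
    "commit":  {"exit"},
    "error":   {"exit"},
}

class ControlFlowMonitor:
    def __init__(self, start="entry"):
        self.current = start
        self.violations = []       # detected errors, usable as failure-prediction features
    def transition(self, nxt):
        if nxt not in ALLOWED.get(self.current, set()):
            self.violations.append((self.current, nxt))
        self.current = nxt

mon = ControlFlowMonitor()
for block in ["parse", "commit", "exit"]:      # "parse" -> "commit" is illegal here
    mon.transition(block)
print(mon.violations)                          # -> [('parse', 'commit')]
```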
49

On Real Time Digital Phase Locked Loop Implementation with Application to Timing Recovery

Kippenberger, Roger Miles January 2006 (has links)
In digital communication systems, symbol timing recovery is of fundamental importance. The accuracy of symbol timing estimation has a direct effect on received data error rates. The primary objective of this thesis is to implement a practical Digital Phase Locked Loop capable of accurately synchronising symbols suffering channel corruption typical of modern mobile communications. This thesis describes an all-software implementation of a Digital Phase Locked Loop in a real-time system, with a timing error detection (TED) algorithm implemented and optimized on a Digital Signal Processor. A real-time transmitter and receiver system is implemented in order to measure performance when the received signal is corrupted by both Additive White Gaussian Noise and flat fading. The timing error detection algorithm implemented is a discrete-time maximum likelihood detector known as FFML1, developed at Canterbury University. FFML1, along with the other components of the Digital Phase Locked Loop, is implemented entirely in software using Motorola 56321 assembly language.
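For readers unfamiliar with the structure of such a loop, the sketch below shows a generic second-order digital PLL (phase detector, proportional-integral loop filter, NCO) tracking the phase of a tone. It is offered purely to illustrate the DPLL structure; it is not the FFML1 symbol-timing detector or the DSP assembly implementation described in the thesis, and the gains and signal parameters are arbitrary.

```python
# Generic second-order DPLL sketch: phase detector -> PI loop filter -> NCO.
import numpy as np

fs, f0, n = 8000.0, 440.0, 4000           # sample rate, tone frequency, sample count
theta = 1.0                               # unknown phase offset to be tracked
x = np.cos(2 * np.pi * f0 * np.arange(n) / fs + theta)

kp, ki = 0.1, 0.005                       # PI loop-filter gains (illustrative)
phase, integ = 0.0, 0.0
nominal = 2 * np.pi * f0 / fs             # nominal phase increment per sample

for sample in x:
    err = sample * -np.sin(phase)         # phase detector: ~0.5*sin(true - estimate)
    integ += ki * err                     # integral path
    phase += nominal + kp * err + integ   # NCO update with loop-filter correction

# Residual phase error should settle near zero once the loop locks.
residual = (phase - (nominal * n + theta) + np.pi) % (2 * np.pi) - np.pi
print(f"residual phase error after {n} samples: {residual:.3f} rad")
```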
50

Design of effective decoding techniques in network coding networks / Suné von Solms

Von Solms, Suné January 2013 (has links)
Random linear network coding is widely proposed as the solution for practical network coding applications due to its robustness to random packet loss, packet delays, and changes in network topology and capacity. In order to implement random linear network coding in practical scenarios where the encoding and decoding methods perform efficiently, the computationally complex coding algorithms associated with random linear network coding must be overcome. This research contributes to the field of practical random linear network coding by presenting new, low-complexity coding algorithms with low decoding delay. In this thesis we contribute to this research field by building on the current solutions available in the literature through the utilisation of familiar coding schemes combined with methods from other research areas, as well as developing innovative coding methods. We show that by transmitting source symbols in predetermined and constrained patterns from the source node, the causality of the random linear network coding network can be used to create structure at the receiver nodes. This structure enables us to introduce an innovative decoding scheme of low decoding delay. This decoding method also proves to be resilient to the effects of packet loss on the structure of the received packets; its low decoding delay and resilience to packet erasures make it an attractive option for use in multimedia multicasting. We show that fountain codes can be implemented in RLNC networks without changing the complete coding structure of RLNC networks. By implementing an adapted encoding algorithm at strategic intermediate nodes in the network, the receiver nodes can obtain encoded packets that approximate the degree distribution of encoded packets required for successful belief propagation decoding. Previous work showed that the redundant packets generated by RLNC networks can be used for error detection at the receiver nodes. This error detection method can be implemented without implementing an outer code; thus, it does not require any additional network resources. We analyse this method and show that it is only effective for single error detection, not correction. In this thesis the current body of knowledge and technology in practical random linear network coding is extended through the contribution of effective decoding techniques in practical network coding networks. We present both analytical and simulation results to show that the developed techniques can render low-complexity coding algorithms with low decoding delay in RLNC networks. / Thesis (PhD (Computer Engineering))--North-West University, Potchefstroom Campus, 2013
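As background for readers new to the area, the sketch below shows the basic RLNC mechanics the thesis builds on: a source emits random GF(2) combinations of its packets and a receiver decodes by Gaussian elimination once enough linearly independent combinations arrive. The field choice, packet representation, and collect-until-decodable loop are simplifications for illustration; the decoding schemes developed in the thesis exploit additional structure (constrained transmission patterns, fountain-code degree distributions) not shown here.

```python
# Minimal RLNC sketch over GF(2): random XOR combinations + Gaussian elimination.
import random

def encode(packets, rng):
    """Return (coefficient_vector, coded_payload) for one coded packet."""
    coeffs = [rng.randint(0, 1) for _ in packets]
    if not any(coeffs):
        coeffs[rng.randrange(len(packets))] = 1      # avoid the all-zero combination
    payload = 0
    for c, p in zip(coeffs, packets):
        if c:
            payload ^= p
    return coeffs, payload

def decode(coded, k):
    """Gaussian elimination over GF(2); coded = list of (coeffs, payload)."""
    rows = [(list(c), p) for c, p in coded]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            raise ValueError("not enough independent packets")
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:         # eliminate this column elsewhere
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           rows[r][1] ^ rows[col][1])
    return [rows[i][1] for i in range(k)]

rng = random.Random(1)
source = [0b10110010, 0b01100111, 0b11110000, 0b00001111]    # four "packets"
coded = []
while True:                                          # collect until decodable
    coded.append(encode(source, rng))
    try:
        recovered = decode(coded, len(source))
        break
    except ValueError:
        continue
print(recovered == source, "using", len(coded), "coded packets")
```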
