31

Error Detection Techniques Against Strong Adversaries

Akdemir, Kahraman D. 01 December 2010 (has links)
"Side channel attacks (SCA) pose a serious threat on many cryptographic devices and are shown to be effective on many existing security algorithms which are in the black box model considered to be secure. These attacks are based on the key idea of recovering secret information using implementation specific side-channels. Especially active fault injection attacks are very effective in terms of breaking otherwise impervious cryptographic schemes. Various countermeasures have been proposed to provide security against these attacks. Double-Data-Rate (DDR) computation, dual-rail encoding, and simple concurrent error detection (CED) are the most popular of these solutions. Even though these security schemes provide sufficient security against weak adversaries, they can be broken relatively easily by a more advanced attacker. In this dissertation, we propose various error detection techniques that target strong adversaries with advanced fault injection capabilities. We first describe the advanced attacker in detail and provide its characteristics. As part of this definition, we provide a generic metric to measure the strength of an adversary. Next, we discuss various techniques for protecting finite state machines (FSMs) of cryptographic devices against active fault attacks. These techniques mainly depend on nonlinear robust codes and physically unclonable functions (PUFs). We show that due to the nonuniform behavior of FSM variables, securing FSMs using nonlinear codes is an important and difficult problem. As a solution to this problem, we propose error detection techniques based on nonlinear codes with different randomization methods. We also show how PUFs can be utilized to protect a class of FSMs. This solution provides security on the physical level as well as the logical level. In addition, for each technique, we provide possible hardware realizations and discuss area/security performance. Furthermore, we provide an error detection technique for protecting elliptic curve point addition and doubling operations against active fault attacks. This technique is based on nonlinear robust codes and provides nearly perfect error detection capability (except with exponentially small probability). We also conduct a comprehensive analysis in which we apply our technique to different elliptic curves (i.e. Weierstrass and Edwards) over different coordinate systems (i.e. affine and projective). "
32

Tamper-Resistant Arithmetic for Public-Key Cryptography

Gaubatz, Gunnar 01 March 2007 (has links)
Cryptographic hardware has found use in many ubiquitous and pervasive security devices with a small form factor, e.g. SIM cards, smart cards, electronic security tokens, and soon even RFIDs. With applications in banking, telecommunication, healthcare, e-commerce and entertainment, these devices use cryptography to provide security services like authentication, identification and confidentiality to the user. However, the widespread adoption of these devices into the mass market, and the lack of a physical security perimeter, have increased the risk of theft, reverse engineering, and cloning. Despite the use of strong cryptographic algorithms, these devices often succumb to powerful side-channel attacks. These attacks provide a motivated third party with access to the inner workings of the device and therefore the opportunity to circumvent the protection of the cryptographic envelope. Apart from passive side-channel analysis, which has been the subject of intense research for over a decade, active tampering attacks like fault analysis have recently gained increased attention from the academic and industrial research community. In this dissertation we address the question of how to protect cryptographic devices against this kind of attack. More specifically, we focus our attention on public key algorithms like elliptic curve cryptography and their underlying arithmetic structure. In our research we address challenges such as the cost of implementation, the level of protection, and the error model in an adversarial situation. The approaches we investigated all apply concepts from coding theory, in particular the theory of cyclic codes. This seems intuitive, since both public key cryptography and cyclic codes share finite field arithmetic as a common foundation. The major contributions of our research are (a) a generalization of cyclic codes that allows embedding of finite fields into redundant rings under a ring homomorphism, (b) a new family of non-linear arithmetic residue codes with very high error detection probability, (c) a set of new low-cost arithmetic primitives for optimal extension field arithmetic based on robust codes, and (d) design techniques for tamper-resilient finite state machines.
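The arithmetic residue codes in contribution (b) build on a simple principle: run a cheap shadow computation modulo a check base M alongside the main datapath and compare residues. Below is a minimal sketch assuming a plain linear check with M = 251; the thesis's non-linear robust variants strengthen this against adversarial faults.

```python
# Minimal sketch of an arithmetic residue check for a multiplier.
# The modulus and the injected fault are illustrative assumptions.

M = 251  # check modulus (real designs choose M for cost and coverage)

def residue_check(result: int, a: int, b: int) -> bool:
    """True iff `result` is consistent with a*b modulo M."""
    return result % M == ((a % M) * (b % M)) % M

product = 1234 * 5678
print(residue_check(product, 1234, 5678))          # True: fault-free run
print(residue_check(product ^ 0x80, 1234, 5678))   # False: the fault shifts
                                                   # the result by 128, which
                                                   # is not a multiple of 251
```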
33

Uma análise dos esquemas de dígitos verificadores usados no Brasil / An analysis of check digit schemes used in Brazil

Natália Pedroza de Souza 31 July 2013 (has links)
Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro / In this work we discuss several check digit systems used in Brazil, many of them similar to schemes used worldwide, and analyze their ability to detect the various types of errors that are common in data entry to computer systems. The analysis shows that the chosen schemes are suboptimal and almost never achieve the best possible error-detection rate. Check digit systems are based on three algebraic theories: modular arithmetic, group theory, and quasigroups. For the schemes based on modular arithmetic, we present several possible improvements. We develop a new optimal scheme based on modulo-10 arithmetic with three permutations for identifiers longer than seven digits. We also describe the Verhoeff scheme, old but very rarely used, which is likewise a good alternative for identifiers of up to seven digits. We further develop optimal schemes for any prime modular base that detect all of the error types considered. The dissertation also draws on statistics, in studying error-detection probabilities, and on algorithms, in obtaining optimal schemes.
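As a concrete illustration, here is a minimal sketch of a weighted mod-11 check digit of the kind analyzed (Brazilian identifiers such as the CPF use similar conventions). The weights and the remainder-to-digit rule are assumptions chosen for illustration; note that a rule collapsing two remainders into one digit is a typical source of suboptimality in such schemes.

```python
# Minimal sketch of a weighted mod-11 check digit scheme.
# Weights and the remainder convention are illustrative assumptions.

def check_digit_mod11(digits: str) -> int:
    """Weighted sum with weights n+1 .. 2, taken mod 11."""
    n = len(digits)
    total = sum(int(d) * w for d, w in zip(digits, range(n + 1, 1, -1)))
    r = total % 11
    return 0 if r < 2 else 11 - r   # common convention: remainders 0,1 -> 0

def validate(number: str) -> bool:
    body, given = number[:-1], int(number[-1])
    return check_digit_mod11(body) == given

num = "123456789"
full = num + str(check_digit_mod11(num))
print(validate(full))                               # True
swapped = full[:3] + full[4] + full[3] + full[5:]   # adjacent transposition
print(validate(swapped))                            # False: detected here
```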
34

A study of single laser interferometry-based sensing and measuring technique in robot manipulator control and guidance. Volume 1

Teoh, Pek Loo January 2003 (has links)
Abstract not available
35

A practical approach to detection of plant model mismatch for MPC

Carlsson, Rickard January 2010 (has links)
The number of MPC installations in industry is growing in response to demands for increased efficiency. An MPC controller uses an internal plant model to run real-time predictive optimization of future inputs. If a discrepancy exists between the internal plant model and the plant, control performance will be affected. Model accuracy tends to deteriorate as time from commissioning increases; this is natural, as the plant changes over time. It is important to detect these changes and re-identify the plant model to maintain control performance. A method for identifying Model Plant Mismatch in MPC applications is developed, with a focus on being simple to implement yet robust. The method can run in parallel with the process in real time, and its efficiency is demonstrated on representative simulation examples. An extension to detection of nonlinear mismatch is also considered, which is important since linear plant models are often used within a small operating range; since most processes are nonlinear, this discrepancy is inevitable and should be detected.
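A minimal sketch of the underlying idea — monitoring the internal model's one-step-ahead prediction residuals on routine operating data — is shown below. The first-order plant and model, the noise level, and the correlation threshold are all illustrative assumptions, not the thesis's method.

```python
# Toy illustration: with a perfect model, one-step-ahead residuals are
# white noise; correlation between residuals and past outputs signals
# model-plant mismatch. All parameters are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)

a_true, b_true = 0.90, 1.00      # "plant" dynamics (unknown in practice)
a_model, b_model = 0.80, 1.00    # internal MPC model, mismatched in a

u = rng.normal(size=500)         # excitation from routine input moves
y = np.zeros(501)
for k in range(500):             # plant: y[k+1] = a*y[k] + b*u[k] + noise
    y[k + 1] = a_true * y[k] + b_true * u[k] + 0.05 * rng.normal()

y_hat = a_model * y[:-1] + b_model * u   # model's one-step-ahead predictions
residuals = y[1:] - y_hat

corr = np.corrcoef(residuals, y[:-1])[0, 1]
print(f"residual std: {residuals.std():.3f}, residual/output corr: {corr:.2f}")
if abs(corr) > 0.2:              # crude threshold, assumed for illustration
    print("mismatch detected: re-identify the plant model")
```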
36

An Unsupervised Approach to Detecting and Correcting Errors in Text

Islam, Md Aminul 01 June 2011 (has links)
In practice, most approaches to text error detection and correction are based on a conventional, domain-dependent background dictionary that represents a fixed and static collection of correct words of a given language; as a result, satisfactory correction can only be achieved if the dictionary covers most tokens of the underlying correct text. Moreover, most approaches to text correction handle only one, or at best a very few, types of errors. The purpose of this thesis is to propose an unsupervised approach to detecting and correcting text errors that can compete with supervised approaches, and to answer the following questions: Can an unsupervised approach efficiently detect and correct a text containing multiple errors of both syntactic and semantic nature? What is the magnitude of error coverage, in terms of the number of errors that can be corrected? We conclude that (1) an unsupervised approach can efficiently detect and correct a text containing multiple errors of both syntactic and semantic nature. Error types include real-word spelling errors, typographical errors, lexical choice errors, unwanted words, missing words, prepositional errors, article errors, punctuation errors, and many grammatical errors (e.g., errors in agreement and verb formation). (2) The magnitude of error coverage, in terms of the number of errors that can be corrected, is almost double the number of correct words of the text; although this is not the upper limit, it is what is practically feasible. We use engineering approaches to answer the first question and theoretical approaches to answer and support the second. We show that finding the inherent properties of a correct text using a corpus in the form of an n-gram data set is more appropriate and practical than other approaches to detecting and correcting errors. Instead of using rule-based approaches and dictionaries, we argue that a corpus can effectively be used to infer the properties of these types of errors, and to detect and correct them. We test the robustness of the proposed approach separately for some individual error types, and then for all types together. The approach is language-independent and can be applied to other languages as long as n-gram data are available. The results of this thesis thus suggest that unsupervised approaches, which are often dismissed in favor of supervised ones in many Natural Language Processing (NLP) tasks, may present an interesting array of NLP-related problem-solving strengths.
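A toy sketch of the core mechanism follows, using a trigram table to flag and correct a real-word error. The counts and vocabulary are invented for illustration and stand in for a web-scale n-gram data set; a real system would also restrict candidates by string similarity.

```python
# Toy sketch: n-gram context statistics for real-word error correction.
# The trigram counts and vocabulary below are invented assumptions.

trigram_counts = {
    ("error", "detection", "and"): 950,
    ("error", "detection", "of"): 120,
    ("error", "direction", "and"): 2,   # rare context: likely real-word error
}
vocabulary = ["detection", "direction", "correction"]

def score(left: str, word: str, right: str) -> int:
    return trigram_counts.get((left, word, right), 0)

def suggest(left: str, word: str, right: str, threshold: int = 10) -> str:
    """Flag `word` if its context is rare; prefer a candidate whose
    context is common. A real system would limit candidates to words
    similar to `word` (edit distance, shared letters, etc.)."""
    if score(left, word, right) >= threshold:
        return word                     # context is common enough: accept
    best = max((w for w in vocabulary if w != word),
               key=lambda w: score(left, w, right))
    return best if score(left, best, right) >= threshold else word

print(suggest("error", "direction", "and"))   # -> "detection"
```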
37

On Fault-based Attacks and Countermeasures for Elliptic Curve Cryptosystems

Dominguez Oviedo, Agustin January 2008 (has links)
For some applications, elliptic curve cryptography (ECC) is an attractive choice because it achieves the same level of security with a much smaller key size than schemes based on integer factorization or discrete logarithms. Unfortunately, cryptosystems, including those based on elliptic curves, have been subject to attacks; for example, fault-based attacks have been shown to be a real threat to today’s cryptographic implementations. In this thesis, we consider fault-based attacks and countermeasures for ECC. We propose a new fault-based attack against the Montgomery ladder elliptic curve scalar multiplication (ECSM) algorithm. For security reasons, and especially to provide resistance against fault-based attacks, it is very important to verify the correctness of computations in ECC applications. We address protection against fault attacks on ECSM at two levels: module and algorithm. At the module level, where the underlying scalar multiplication algorithm is not changed, a number of schemes and hardware structures are presented based on re-computation or parallel computation; we show that these structures can detect errors during the computation of ECSM with very high probability. At the algorithm level, we use the concepts of point verification (PV) and coherency check (CC) and investigate their error detection coverage for the Montgomery ladder ECSM algorithm. Additionally, we propose two algorithms based on the double-and-add-always method that are resistant to the safe-error (SE) attack, and we demonstrate that one of them also resists the sign-change fault (SCF) attack.
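Point verification itself is simple to state: after the scalar multiplication, check that the result still satisfies the curve equation, so a fault that pushed the computation onto a different (possibly weaker) curve is rejected. A minimal sketch over a toy short-Weierstrass curve, with all parameters assumed for illustration:

```python
# Minimal sketch of the point-verification (PV) countermeasure.
# Toy curve over a small prime; parameters are illustrative only.

p = 97                 # toy field; real ECC uses ~256-bit primes
a, b = 2, 3            # assumed curve y^2 = x^3 + 2x + 3 over GF(97)

def on_curve(P) -> bool:
    """Accept only points satisfying the curve equation
    (the point at infinity, here None, is always valid)."""
    if P is None:
        return True
    x, y = P
    return (y * y - (x * x * x + a * x + b)) % p == 0

Q = (0, 10)            # 10^2 = 100 = 3 (mod 97) and 0^3 + 2*0 + 3 = 3
print(on_curve(Q))       # True: a fault-free result passes

faulty = (0, 11)       # a fault corrupts the y-coordinate
print(on_curve(faulty))  # False: PV rejects the faulty result
```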
38

Effects of Wide Reading Vs. Repeated Readings on Struggling College Readers' Comprehension Monitoring Skills

Ari, Omer 26 October 2009 (has links)
Fluency instruction has had limited effects on reading comprehension relative to reading rate and prosodic reading (Dowhower, 1987; Herman, 1985; National Institute of Child Health and Human Development, 2000a). More specific components of comprehension (i.e., error detection) may yield larger effects through exposure to a wider range of materials than repeated readings (Kuhn, 2005b). Thirty-three students reading below college level were randomly assigned to a Repeated Readings (RR), a Wide Reading (WR), or a Vocabulary Study (VS) condition and received training in nine 30-minute sessions at a Southeastern community college. RR students read an instructional-level text four times consecutively before answering comprehension questions about it; WR students read four instructional-level texts once each and answered questions, while the VS group studied and took a quiz on academic vocabulary. An additional 13 students reading at college level provided comparison data. At pretest, all participants completed the Nelson Denny Reading Test, the Test of Word Reading Efficiency, an error detection task (Albrecht & O'Brien, 1993), a working memory test, the Metacognitive Awareness of Reading Strategies Inventory (MARSI; Mokhtari & Reichard, 2002), a maze test, the Author Recognition Test (ART), and a reading survey. All pretest measures except the ART and the reading survey were re-administered to the training groups at posttest. Paired-samples t-test analyses revealed (a) significant gains for the WR condition in vocabulary (p = .043), silent reading rate (p < .05), maze (p < .05), and working memory (p < .05); (b) significant gains for the RR students in silent reading rate (p = .05) and maze (p = .006); and (c) significant increases in vocabulary (p < .05), maze (p = .005), and MARSI (p < .005) for the VS group at posttest. Unreliable patterns of error detection were observed for all groups at pretest and posttest. The results suggest that the effects of fluency instruction should be sought in local-level reading processes using the maze test, which reliably detected reading improvements from fluency instruction (RR, WR) and vocabulary study (VS) in only nine sessions. With significant gains on more reading measures, the WR condition appears superior to the RR condition as a fluency program for struggling college readers. Combining the WR condition with vocabulary study may augment students’ gains.
