1

On cyclotomic primality tests

Boucher, Thomas Francis 01 August 2011
In 1980, L. Adleman, C. Pomerance, and R. Rumely invented the first cyclotomic primality test, and shortly after, in 1981, a simplified and more efficient version was presented by H. W. Lenstra for the Bourbaki Seminar. Later, in 2008, René Schoof presented an updated version of Lenstra's primality test. This thesis presents a detailed description of the cyclotomic primality test as described by Schoof, along with suggestions for implementation. The cornerstone of the test is a prime congruence relation similar to Fermat's little theorem that involves Gauss or Jacobi sums calculated over cyclotomic fields. The algorithm runs in very nearly polynomial time. This primality test is currently one of the most computationally efficient tests and is used by default for primality proving by the open source mathematics systems Sage and PARI/GP. It can quickly test numbers with thousands of decimal digits.
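The prime congruence at the core of the test generalizes Fermat's little theorem from integers to Gauss and Jacobi sums over cyclotomic fields. As a point of reference, here is a minimal sketch of the plain integer version, the classical Fermat pseudoprimality check (an illustration only, not the cyclotomic test itself):

```python
# Minimal sketch of the Fermat congruence a^(n-1) ≡ 1 (mod n) that underlies
# many primality tests; the cyclotomic test replaces this with congruences
# for Gauss/Jacobi sums computed over cyclotomic fields.
import random

def fermat_test(n: int, rounds: int = 20) -> bool:
    """Return False if n is certainly composite, True if n is probably prime."""
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:   # a is a witness: n is composite
            return False
    return True                     # no witness found: n is probably prime
```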
2

Error Detection in Number-Theoretic and Algebraic Algorithms

Vasiga, Troy Michael John January 2008
CPUs are unreliable: at any point in a computation, a bit may be altered with some (small) probability. This probability may seem negligible, but for large calculations (e.g., months of CPU time), the likelihood of an error being introduced becomes increasingly significant. Relying on this fact, this thesis defines a statistical measure called robustness, and measures the robustness of several number-theoretic and algebraic algorithms. Consider an algorithm A that implements function f, such that f has range O and algorithm A has range O' where O ⊆ O'. That is, the algorithm may produce results which are not in the possible range of the function. Specifically, given an algorithm A and a function f, this thesis classifies the output of A into one of three categories:

1. Correct and feasible -- the algorithm computes the correct result;
2. Incorrect and feasible -- the algorithm computes an incorrect result, and this output is in O;
3. Incorrect and infeasible -- the algorithm computes an incorrect result, and the output is in O' \ O.

Using probabilistic measures, we apply this classification scheme to quantify the robustness of algorithms for computing primality (i.e., the Lucas-Lehmer and Pépin tests), group order, and quadratic residues. Moreover, we show that typically there will be an "error threshold" above which the algorithm is unreliable (that is, it will rarely give the correct result).
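For concreteness, the Lucas-Lehmer test whose robustness the thesis measures can be sketched as follows (a textbook formulation, not code from the thesis). A transient bit flip in the running value s inside the loop is exactly the kind of fault whose effect on the verdict the robustness measure captures:

```python
def lucas_lehmer(p: int) -> bool:
    """Lucas-Lehmer test: for odd prime p, the Mersenne number M_p = 2^p - 1
    is prime iff s_{p-2} ≡ 0 (mod M_p), where s_0 = 4 and s_{k+1} = s_k^2 - 2."""
    m = (1 << p) - 1              # the Mersenne number M_p
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m       # a bit flip here may change the final verdict
    return s == 0

# M_7 = 127 is prime; M_11 = 2047 = 23 * 89 is not.
assert lucas_lehmer(7) and not lucas_lehmer(11)
```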
3

Elliptic curves

Jensen, Crystal Dawn 05 January 2011
This report discusses the history, use, and future of elliptic curves. Uses of elliptic curves in various number theory settings are presented. Fermat's Last Theorem is shown to have been proven using elliptic curves. Finally, the future of elliptic curves with respect to cryptography and primality testing is discussed.
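The structure underlying all of these applications is the group law on the points of a curve. A minimal sketch of point addition over a prime field F_p, assuming the short Weierstrass form y^2 = x^3 + ax + b and an odd prime p (an illustration, not code from the report):

```python
def ec_add(P, Q, a, p):
    """Add points P and Q on y^2 = x^3 + a*x + b over F_p.
    Points are (x, y) tuples; None denotes the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                      # P + (-P) = infinity
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

# On y^2 = x^3 + 2x + 3 over F_97, P = (3, 6) lies on the curve;
# doubling it gives (80, 10), which also lies on the curve.
assert ec_add((3, 6), (3, 6), 2, 97) == (80, 10)
```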
4

Novel Methods for Primality Testing and Factoring

Hammad, Yousef Bani January 2005
From the time of the Greeks, primality testing and factoring have fascinated mathematicians, and for centuries following the Greeks primality testing and factorization were pursued by enthusiasts and professional mathematicians for their intrinsic value. There was little practical application. One example application was to determine whether or not the Fermat numbers, that is, numbers of the form F_n = 2^(2^n) + 1, were prime. Fermat conjectured that for all n they were prime. For n = 1, 2, 3, 4, the Fermat numbers are prime, but Euler showed that F_5 was not prime, and to date no F_n with n ≥ 5 has been found to be prime. Thus, for nearly 2000 years primality testing and factorization was largely pure mathematics. This all changed in the mid-1970s with the advent of public key cryptography. Large prime numbers are used in generating keys in many public key cryptosystems, and the security of many of these cryptosystems depends on the difficulty of factoring numbers with large prime factors. Thus, the race was on to develop new algorithms to determine the primality or otherwise of a given large integer and to determine the factors of given large integers. The development of such algorithms continues today. This thesis develops both of these themes. The first part of this thesis deals with primality testing and, after a brief introduction to primality testing, a new probabilistic primality algorithm, ALI, is introduced. It is analysed in detail and compared to the Fermat and Miller-Rabin primality tests. It is shown that the ALI algorithm is more efficient than the Miller-Rabin algorithm in some aspects. The second part of the thesis deals with factoring and, after looking closely at various types of algorithms, a new algorithm, RAK, is presented. It is analysed in detail and compared with Fermat factorization. The RAK algorithm is shown to be significantly more efficient than the Fermat factoring algorithm. A number of enhancements are made to the basic RAK algorithm in order to improve its performance. The RAK algorithm with its enhancements is known as IMPROVEDRAK. In conjunction with this work on factorization, an improvement to Shor's factoring algorithm is presented. For many integers Shor's algorithm uses a quantum computer multiple times to factor a composite number into its prime factors. It is shown that Shor's algorithm can be modified in such a way that the use of a quantum computer is required just once. The common thread throughout this thesis is the application of factoring and primality testing techniques to integer types which commonly occur in public key cryptosystems. Thus, this thesis contributes not only in the area of pure mathematics but also in the very contemporary area of cryptology.
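For context on the comparison, Fermat factorization, the baseline against which RAK is measured, searches for a representation n = x^2 - y^2 = (x - y)(x + y). A minimal sketch, assuming n is odd and composite:

```python
import math

def fermat_factor(n: int) -> tuple[int, int]:
    """Fermat's method: find x, y with n = x^2 - y^2 = (x - y)(x + y).
    Assumes n is odd; fast only when n has two factors near sqrt(n)."""
    x = math.isqrt(n)
    if x * x < n:
        x += 1
    while True:
        y2 = x * x - n
        y = math.isqrt(y2)
        if y * y == y2:             # x^2 - n is a perfect square
            return (x - y, x + y)
        x += 1

print(fermat_factor(5959))  # 5959 = 59 * 101 -> (59, 101)
```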
5

Primality Testing

Siracusa, Mia 01 January 2017
In this thesis, I review the problem of primality testing. More specifically, I review the AKS algorithm and the theorems and problems leading up to the proof of this algorithm.
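The identity at the core of AKS says that n > 1 is prime iff (x + a)^n ≡ x^n + a (mod n) as polynomials for every a coprime to n; expanding the left-hand side reduces this to the requirement that n divide every interior binomial coefficient C(n, k). The sketch below checks that requirement directly (an exponential-time illustration of the identity, not the polynomial-time AKS algorithm, which works modulo x^r - 1 for a suitable r):

```python
from math import comb

def prime_by_binomials(n: int) -> bool:
    """n > 1 is prime iff (x + 1)^n ≡ x^n + 1 (mod n), i.e. iff n divides
    C(n, k) for all 0 < k < n. An exponential-time illustration of the
    identity behind AKS, not the AKS algorithm itself."""
    return n > 1 and all(comb(n, k) % n == 0 for k in range(1, n))

assert prime_by_binomials(13) and not prime_by_binomials(15)
```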
6

Commutative n-ary Arithmetic

Bingham, Aram 15 May 2015
Motivated by primality and integer factorization, this thesis introduces generalizations of standard binary multiplication to commutative n-ary operations based upon geometric construction and representation. This class of operations is constructed to preserve commutativity and identity so that binary multiplication is included as a special case, in order to preserve relationships with ordinary multiplicative number theory. This leads to a study of their expression in terms of elementary symmetric polynomials, and connections are made to results from the theory of polyadic (n-ary) groups. Higher order operations yield wider factorization and representation possibilities which correspond to reductions in the set of primes as well as tiered notions of primality. This comes at the expense of familiar algebraic properties such as associativity and unique factorization. Criteria for primality and a naive testing algorithm are given for the ternary arithmetic, drawing heavily upon modular arithmetic. Finally, connections with the theory of partitions of integers and quadratic forms are discussed in relation to questions about the cardinality of primes.
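As a toy illustration of the structural requirements, commutativity plus an identity that recovers binary multiplication as a special case, consider the simplest commutative ternary product, the elementary symmetric monomial e_3. This only demonstrates the constraints; it is not the geometric construction developed in the thesis:

```python
from itertools import permutations

def ternary(a: int, b: int, c: int) -> int:
    """The degree-3 elementary symmetric monomial e_3(a, b, c) = a*b*c:
    the simplest commutative ternary operation extending multiplication."""
    return a * b * c

# Commutative in all three arguments...
assert len({ternary(*perm) for perm in permutations((2, 3, 5))}) == 1
# ...and fixing one argument to the identity 1 recovers binary multiplication.
assert ternary(6, 7, 1) == 6 * 7
```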
7

Parallelization of Integer Factorization from the View of RSA Breaking

Breitenbacher, Dominik January 2015
This thesis deals with the factorization of integers. Factorization is the most popular and widely used method for RSA cryptanalysis. SIQS (the self-initializing quadratic sieve) was chosen as the factorization method used in this work. Although SIQS is the fastest known method for numbers of up to about 100 digits, it cannot be computed in polynomial time, so it is necessary to look for ways to speed the method up as much as possible. One possible way is parallelization; in this case, OpenMP was used. Another possible way is optimization. The goal of this work is also to show how easily parallelization can be applied and how, thanks to a detailed analysis of the source code, a relatively large speedup can be achieved. The method of iterative optimization proved to be a very effective tool: using it, the SIQS implementation achieved almost a 100-fold speedup, and in some parts of the code even more.
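The parallelization is coarse-grained: SIQS sieves many polynomials independently, so the work distributes naturally across threads. Below is a rough Python analogue of the OpenMP loop-level approach (the thesis parallelizes a C implementation; sieve_polynomial is a hypothetical stand-in for the per-polynomial sieving step):

```python
from multiprocessing import Pool

def sieve_polynomial(poly_index: int) -> list[tuple[int, int]]:
    """Hypothetical stand-in: sieve one SIQS polynomial and return the
    smooth relations found (empty here; a real version does the sieving)."""
    return []

def collect_relations(num_polynomials: int, workers: int = 8):
    # Polynomials are independent, so the loop parallelizes trivially;
    # this mirrors what '#pragma omp parallel for' does in the C code.
    with Pool(workers) as pool:
        chunks = pool.map(sieve_polynomial, range(num_polynomials))
    return [rel for chunk in chunks for rel in chunk]

if __name__ == "__main__":
    relations = collect_relations(1024)
```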
