
Error Detection in Number-Theoretic and Algebraic Algorithms

CPUs are unreliable: at any point in a computation, a bit may be altered with some small probability. This probability may seem negligible, but for large calculations (e.g., months of CPU time), the likelihood that an error is introduced becomes significant. Motivated by this, this thesis defines a statistical measure called robustness and measures the robustness of several number-theoretic and algebraic algorithms.
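As a back-of-the-envelope illustration (not taken from the thesis; the per-operation error probability below is purely hypothetical): if each of n operations independently introduces an error with probability p, the chance that at least one error occurs is 1 - (1 - p)^n, which approaches 1 as n grows.

```python
# Chance that at least one error occurs in n independent operations,
# each with a hypothetical per-operation error probability p.
def error_probability(n, p):
    return 1 - (1 - p) ** n

print(error_probability(10**9,  1e-12))   # roughly 0.001
print(error_probability(10**13, 1e-12))   # essentially 1
```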

Consider an algorithm A that implements a function f, where f has range O and A has range O' with O⊆O'. That is, the algorithm may produce results outside the possible range of the function. Specifically, given an algorithm A and a function f, this thesis classifies the output of A into one of three categories (a code sketch of this classification follows the list):
1. Correct and feasible -- the algorithm computes the correct result,

2. Incorrect and feasible -- the algorithm computes an incorrect result and this output is in O,

3. Incorrect and infeasible -- the algorithm computes an incorrect result and the output is in O'\O.
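A minimal sketch of this three-way classification, assuming we can compare against the true value f(x) and test membership in the range O; the function and parameter names are illustrative, not from the thesis.

```python
def classify(output, true_value, in_range):
    """Classify algorithm A's output relative to the function f it implements.

    output     -- the value produced by A
    true_value -- the correct value f(x)
    in_range   -- predicate testing membership in O, the range of f
    """
    if output == true_value:
        return "correct and feasible"
    if in_range(output):
        return "incorrect and feasible"
    return "incorrect and infeasible"

# Example: f maps an integer to its parity, so O = {"even", "odd"},
# while a faulty implementation might return anything (O' is larger).
is_parity = lambda v: v in ("even", "odd")
print(classify("odd", "odd", is_parity))      # correct and feasible
print(classify("even", "odd", is_parity))     # incorrect and feasible
print(classify("banana", "odd", is_parity))   # incorrect and infeasible
```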

Using probabilistic measures, we apply this classification scheme to quantify the robustness of algorithms for primality testing (i.e., the Lucas-Lehmer and Pepin tests), for computing group order, and for computing quadratic residues. Moreover, we show that there is typically an "error threshold" above which the algorithm is unreliable (that is, it will rarely give the correct result).
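As a concrete illustration of the kind of experiment this enables, here is a minimal sketch (not the thesis's implementation) of the Lucas-Lehmer test with a crude single-bit fault model; the names lucas_lehmer and flip_prob, and the fault model itself, are assumptions for illustration only.

```python
import random

def lucas_lehmer(p, flip_prob=0.0):
    """Lucas-Lehmer test for the Mersenne number M_p = 2**p - 1 (p an odd prime).

    With probability flip_prob per iteration, one bit of the running
    residue is flipped, crudely modelling a transient hardware fault.
    """
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
        if flip_prob and random.random() < flip_prob:
            s ^= 1 << random.randrange(p)  # inject a single-bit error into the residue
    return s == 0  # True is reported as "M_p is prime"

# Fault-free runs: M_7 = 127 is prime, M_11 = 2047 = 23 * 89 is not.
print(lucas_lehmer(7), lucas_lehmer(11))   # True False
# Faulty runs may change either verdict; repeating them estimates robustness.
print(lucas_lehmer(7, flip_prob=0.5))
```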

Identifier oai:union.ndltd.org:WATERLOO/oai:uwspace.uwaterloo.ca:10012/3895
Date January 2008
Creators Vasiga, Troy Michael John
Source Sets University of Waterloo Electronic Theses Repository
Language English
Detected Language English
Type Thesis or Dissertation
