1

Analysis of Another Left Shift Binary GCD Algorithm

Chen, Yan-heng 14 July 2009 (has links)
Computing modular inverses is very important in information security: many encryption/decryption and signature algorithms rely on it. In 2007, Liu, Horng, and Liu proposed a variation of the Euclidean algorithm that can compute modular inverses as simply as it computes GCDs. This thesis analyzes another type of left-shift binary GCD algorithm, which is suitable for that variation and requires fewer bit operations than the LSBGCD algorithm analyzed by Shallit and Sorenson.
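As a rough illustration of the left-shift idea (a generic sketch, not the specific variant analyzed in this thesis nor the exact LSBGCD of Shallit and Sorenson), each step reduces the larger operand against the nearest power-of-two multiple of the smaller one:

```python
def left_shift_binary_gcd(u, v):
    """Generic left-shift binary GCD sketch: reduce the larger operand
    against the nearest power-of-two multiple of the smaller one."""
    u, v = abs(u), abs(v)
    while u and v:
        if u < v:
            u, v = v, u                      # keep u >= v > 0
        e = 0
        while (v << (e + 1)) <= u:           # largest e with (v << e) <= u
            e += 1
        t = v << e
        u = min(u - t, (t << 1) - u)         # either choice preserves gcd(u, v)
    return u or v

assert left_shift_binary_gcd(48, 18) == 6
```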
2

Computer Architectures for Cryptosystems Based on Hyperelliptic Curves

Wollinger, Thomas Josef 04 May 2001 (has links)
Security issues play an important role in almost all modern communication and computer networks. As Internet applications continue to grow dramatically, security requirements have to be strengthened. Hyperelliptic curve cryptosystems (HECC) allow for shorter operands at the same level of security as other public-key cryptosystems, such as RSA or Diffie-Hellman. These shorter operands appear promising for many applications. Hyperelliptic curves are a generalization of elliptic curves and can also be used for building discrete-logarithm public-key schemes. A major part of this work is the development of computer architectures for the different algorithms needed for HECC. The architectures are developed for a reconfigurable platform based on Field Programmable Gate Arrays (FPGAs). FPGAs combine the flexibility of software solutions with the security of traditional hardware implementations; in particular, all algorithm parameters, such as the curve coefficients and the underlying finite field, can easily be changed. In this work we first summarized the theoretical background of hyperelliptic curve cryptosystems. In order to realize the group operations of addition and doubling on the Jacobian, we developed architectures for the composition and reduction steps. These in turn are based on architectures for arithmetic in the underlying finite field and in the polynomial ring. The architectures are described in VHDL (VHSIC Hardware Description Language) and the code was functionally verified. Some of the arithmetic modules were also synthesized. We provide estimates of the clock cycle count for a group operation in the Jacobian. The targeted system was an HECC of genus four over GF(2^41).
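The group operations on the Jacobian ultimately rest on arithmetic in the underlying binary field. As a rough software sketch of one such primitive (the field size and reduction polynomial below are toy placeholders; the thesis targets GF(2^41) in hardware, and its specific reduction polynomial is not given here), polynomial-basis multiplication in GF(2^m) might look like this:

```python
def gf2m_mul(a, b, f, m):
    """Polynomial-basis multiplication in GF(2^m): carry-less multiply,
    then reduce modulo the irreducible polynomial f of degree m.
    Field elements are packed into ints, bit i = coefficient of x^i."""
    r = 0
    while b:                                 # carry-less (XOR) multiplication
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    for i in range(r.bit_length() - 1, m - 1, -1):   # cancel terms of degree >= m
        if (r >> i) & 1:
            r ^= f << (i - m)
    return r

# toy check in GF(2^3) with f = x^3 + x + 1 (0b1011):
# (x + 1) * (x^2 + 1) = x^3 + x^2 + x + 1 = x^2  (mod f)
assert gf2m_mul(0b011, 0b101, 0b1011, 3) == 0b100
```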
3

Fast Order Basis and Kernel Basis Computation and Related Problems

Zhou, Wei 28 November 2012 (has links)
In this thesis, we present efficient deterministic algorithms for polynomial matrix computation problems, including the computation of order bases, minimal kernel bases, matrix inverses, column bases, unimodular completions, determinants, Hermite normal forms, rank and rank profile for matrices of univariate polynomials over a field. The algorithm for kernel basis computation also immediately provides an efficient deterministic algorithm for solving linear systems. The algorithm for column bases likewise gives efficient deterministic algorithms for computing matrix GCDs, column reduced forms, and Popov normal forms for matrices of any dimension and any rank. We reduce all these problems to polynomial matrix multiplications. The computational costs of our algorithms are then similar to the costs of multiplying matrices whose dimensions match those of the input matrices in the original problems and whose degrees, in most cases, equal the average column degrees of the original input matrices. Using the average column degrees instead of the commonly used matrix degrees (equivalently, the maximum column degrees) makes our cost bounds more precise and tighter. In addition, the shifted minimal bases computed by our algorithms are more general than the standard minimal bases.
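As a naive illustration of what a right kernel basis of a polynomial matrix looks like (on a hypothetical toy matrix, via SymPy's fraction-field nullspace with denominators cleared, not the fast minimal kernel basis algorithm developed in the thesis):

```python
import sympy as sp

x = sp.symbols('x')

# hypothetical toy polynomial matrix with a nontrivial right kernel
F = sp.Matrix([[1, x, x**2],
               [x, x**2, x**3]])

kernel = []
for v in F.nullspace():                        # nullspace over the fraction field QQ(x)
    den = sp.Integer(1)
    for entry in v:
        den = sp.lcm(den, sp.denom(sp.together(entry)))
    kernel.append(v * den)                     # clear denominators -> polynomial vector

for k in kernel:
    assert (F * k).expand() == sp.zeros(2, 1)  # each kernel vector satisfies F * k == 0
    print(k.T)
```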
4

Process capability assessment for univariate and multivariate non-normal correlated quality characteristics

Ahmad, Shafiq, Shafiq.ahmad@rmit.edu.au January 2009 (has links)
In today's competitive business and industrial environment, it is becoming more crucial than ever to assess precisely the process losses due to non-compliance with customer specifications. To assess these losses, industry extensively uses process capability indices (PCIs) for performance evaluation of its processes. Determining the performance capability of a stable process using the standard process capability indices requires that the underlying quality characteristics data follow a normal distribution. However, real processes very often produce non-normal quality characteristics data, and these quality characteristics are often correlated with each other. For such non-normal and correlated multivariate quality characteristics, applying standard capability measures using conventional methods can lead to erroneous results. The research undertaken in this PhD thesis presents several capability assessment methods to estimate process performance more precisely and accurately, based on univariate as well as multivariate quality characteristics. The proposed capability assessment methods also take into account the correlation, variance and covariance as well as the non-normality of the quality characteristics data. A comprehensive review of the existing univariate and multivariate PCI estimation methods is provided. We propose fitting Burr XII distributions to continuous positively skewed data. The proportion of nonconformance (PNC) for process measurements is then obtained from the fitted Burr XII distribution, rather than through the traditional practice of fitting different distributions to the real data. The maximum likelihood method is deployed to improve the accuracy of PCI estimation based on the Burr XII distribution. Different numerical methods, such as evolutionary and simulated annealing algorithms, are deployed to estimate the parameters of the fitted Burr XII distribution. We also introduce a new transformation method, called the Best Root Transformation approach, to transform non-normal data to normal data and then apply the traditional PCI method to estimate the proportion of non-conforming data. Another approach introduced in this thesis is to deploy the Burr XII cumulative distribution function for PCI estimation using the CDF technique. The proposed approach is in contrast to the approach adopted in the research literature, i.e. the use of the best-fitting density function from known distributions for non-normal data in PCI estimation. The proposed CDF technique has also been extended to estimate process capability for bivariate non-normal quality characteristics data. A new multivariate capability index based on the Generalized Covariance Distance (GCD) is proposed. This novel approach reduces the dimension of multivariate data by transforming correlated variables into univariate ones through a metric function, and it evaluates process capability for correlated non-normal multivariate quality characteristics. Unlike the Geometric Distance approach, the GCD approach takes into account the scaling effect of the variance-covariance matrix and produces a covariance-distance variable that is based on the Mahalanobis distance. Another novelty introduced in this research is to approximate the distribution of these distances by a Burr XII distribution and then estimate its parameters using a numerical search algorithm. It is demonstrated that the proportion of nonconformance obtained using the proposed method is very close to the actual PNC value.
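To make the Burr XII route concrete, here is a small Python sketch using SciPy. The data and specification limits are hypothetical, the parameters are estimated with SciPy's generic maximum likelihood fit rather than the evolutionary or simulated annealing search used in the thesis, and the percentile-based index is a Clements-style analogue rather than the thesis's exact formulation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=0.4, size=500)   # stand-in skewed measurements

LSL, USL = 0.4, 3.0                                    # hypothetical specification limits

# fit a Burr XII distribution by (SciPy's generic) maximum likelihood, location fixed at 0
c, d, loc, scale = stats.burr12.fit(data, floc=0)
fitted = stats.burr12(c, d, loc=loc, scale=scale)

# proportion of nonconformance from the fitted model
pnc = fitted.cdf(LSL) + fitted.sf(USL)

# Clements-style percentile analogue of Cp for a non-normal process
p_lo, p_hi = fitted.ppf([0.00135, 0.99865])
cp_like = (USL - LSL) / (p_hi - p_lo)

print(f"PNC = {pnc:.4%}, percentile-based capability = {cp_like:.3f}")
```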
5

Charge Transfer and Capacitive Properties of Polyaniline/ Polyamide Thin Films

Abrahams, Dhielnawaaz January 2018 (has links)
Magister Scientiae - MSc (Chemistry) / Blending polymers offers researchers the ability to create novel materials that combine the desired properties of the individual polymers for a variety of functions, as well as improving specific properties. The behaviour of the resulting blend is determined by the interactions between the two polymers. An intrinsically conducting polymer such as polyaniline (PANI) possesses the electrical, electronic, magnetic and optical properties of a metal, but also the poor mechanical properties, solubility and processability commonly associated with conventional conducting polymers; blending it with a suitable partner is intended to provide the desired synergy. Aromatic polyamic acid has outstanding thermal, mechanical, electrical, and solvent-resistance properties that can overcome the poor mechanical properties and instability of conventional conducting polymers such as polyaniline.
6

Sur des méthodes préservant les structures d'une classe de matrices structurées / On structure-preserving methods of a class of structured matrices

Ben Kahla, Haithem 14 December 2017 (has links)
The classical linear algebra methods for computing eigenvalues and eigenvectors of a matrix, low-rank approximations of a solution, etc., do not take the structure of matrices into account; such structures are usually destroyed during the numerical process. Alternative structure-preserving methods are the subject of considerable interest in the community, and this thesis is a contribution to this field. The SR decomposition can be computed via the symplectic Gram-Schmidt algorithm. As in the classical case, a loss of orthogonality can occur. To remedy this, we propose two algorithms, RSGSi and RMSGSi, in which the re-orthogonalization of the current set of vectors against the previously computed set is performed twice. The loss of J-orthogonality is improved very significantly. A direct rounding-error analysis of the symplectic Gram-Schmidt algorithms is very hard to carry out; we manage to work around this difficulty and give bounds on the loss of J-orthogonality and on the factorization error. Another way to compute the SR decomposition is based on symplectic Householder transformations. An optimal choice of the free parameters leads to the SROSH algorithm. However, the latter may be subject to numerical instability. We propose a new modified version, SRMSH, which has the advantage of being as numerically stable as possible; a detailed study presents the different versions, SRMSH and SRMSH2. In order to build an SR algorithm of complexity O(n³), where 2n is the size of the matrix, an appropriate reduction of the matrix to a condensed form (upper J-Hessenberg form) via adequate similarity transformations is crucial. This reduction can be carried out with the JHESS algorithm. We show that it is possible to reduce a general matrix to upper J-Hessenberg form using only symplectic Householder transformations. The new algorithm, called JHSH, is based on an adaptation of the SRSH algorithm. We derive two new variants, JHMSH and JHMSH2, which are as numerically stable as possible and behave quite similarly to the JHESS algorithm. An important characteristic of all these algorithms is that they may encounter a fatal breakdown, or suffer from a severe form of near breakdown, either making further computation impossible or leading to a numerical instability that deprives the final result of any meaning. This phenomenon has no equivalent in the Euclidean case. We develop a very efficient strategy for curing fatal breakdowns and treating near breakdowns. The new algorithms incorporating this strategy are referred to as MJHESS, MJHSH, JHM²SH and JHM²SH2. These strategies were then incorporated into the implicit version of the SR algorithm, allowing it to overcome the difficulties encountered in the event of a fatal breakdown or near breakdown; without them, the SR algorithm simply stops.
Finally, in another setting of structured matrices, we present a robust algorithm, via the FFT and a Hankel matrix, based on computing approximate greatest common divisors (GCDs) of polynomials, for solving the problem of blind image deconvolution. Specifically, we design an algorithm for computing the GCD of two bivariate polynomials. The new approach is based on a fast GCD algorithm for univariate polynomials of quadratic complexity O(n²) flops. The complexity of our algorithm is O(n² log(n)), where the size of the blurred images is n x n. Experimental results with synthetically blurred images illustrate the effectiveness of our approach.
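As a toy one-dimensional analogue of the GCD-based deconvolution idea (the thesis works with bivariate polynomials and a fast approximate FFT/Hankel method; here exact arithmetic on a made-up signal and kernels), two differently blurred observations of the same signal share the unblurred signal as a common polynomial factor:

```python
import numpy as np
import sympy as sp

x = sp.symbols('x')

def as_poly(coeffs):
    """View a coefficient sequence (highest degree first) as a polynomial in x."""
    return sp.Poly([int(c) for c in coeffs], x)

signal = [1, 2, 3, 2, 1]            # stand-in 1-D "image"
h1, h2 = [1, 1], [2, -1]            # two coprime blur kernels (hypothetical)

b1 = np.convolve(signal, h1)        # first blurred observation
b2 = np.convolve(signal, h2)        # second blurred observation

# the common factor of the two observation polynomials is the unblurred signal
g = sp.gcd(as_poly(b1), as_poly(b2))
print(g.all_coeffs())               # [1, 2, 3, 2, 1], up to a constant factor
```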
