Fast Order Basis and Kernel Basis Computation and Related Problems
Zhou, Wei. 28 November 2012.
In this thesis, we present efficient deterministic algorithms
for polynomial matrix computation problems, including the computation
of order basis, minimal kernel basis, matrix inverse, column basis,
unimodular completion, determinant, Hermite normal form, rank and
rank profile for matrices of univariate polynomials over a field.
The algorithm for kernel basis computation also immediately provides
an efficient deterministic algorithm for solving linear systems. The
algorithm for column basis also gives efficient deterministic algorithms
for computing matrix GCDs, column reduced forms, and Popov normal
forms for matrices of any dimension and any rank.
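To make the central object concrete, the following small sketch (illustrative only, not the thesis algorithm, and using a hypothetical example matrix F) computes a right kernel vector of a univariate polynomial matrix with SymPy's generic nullspace routine over the fraction field and clears denominators to recover a polynomial kernel vector. The result is not guaranteed to be a minimal or shifted-minimal basis in the sense studied in the thesis.

```python
# Illustration of a polynomial kernel basis computation via generic linear
# algebra over Q(x); the matrix F is a made-up example, not from the thesis.
from functools import reduce
import sympy as sp

x = sp.symbols('x')

# A 2x3 matrix over Q[x] of rank 2, so its right kernel has dimension 1.
F = sp.Matrix([
    [1,     x,    x**2],
    [x + 1, x**2, x**3 - 1],
])

kernel_basis = []
for v in F.nullspace():                       # vectors over the fraction field Q(x)
    entries = [sp.together(e) for e in v]
    # Clear the common denominator to obtain a polynomial vector.
    common = reduce(sp.lcm, [sp.denom(e) for e in entries], sp.Integer(1))
    w = sp.Matrix([sp.cancel(e * common) for e in entries])
    kernel_basis.append(w)

for w in kernel_basis:
    print("kernel vector:", list(w))
    # Sanity check: F * w should simplify to the zero vector.
    print("F * w =", list((F * w).applyfunc(sp.simplify)))
```

This generic approach costs far more than the multiplication-time bounds described below and ignores degree minimality; it is included only to pin down what a kernel basis of a polynomial matrix is.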
We reduce all of these problems to polynomial matrix multiplication.
The computational cost of each of our algorithms is then essentially
that of multiplying matrices whose dimensions match those of the input
matrix in the original problem and whose degree, in most cases, equals
the average column degree of the original input matrix. Using the
average column degree instead of the commonly used matrix degree, or
equivalently the maximum column degree, yields tighter and more precise
cost bounds. In addition, the shifted minimal bases computed by our
algorithms are more general than standard minimal bases.
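As an indicative illustration of what the average-degree refinement means (the notation n, omega, s, d below is chosen here for exposition; the precise statements, shifts, and ceilings are as given in the thesis), consider an n x n input matrix with column degrees d_1, ..., d_n, where omega is the exponent of matrix multiplication and the soft-O notation hides logarithmic factors. Cost bounds of the two kinds then take roughly the following shape:

```latex
\[
  \underbrace{\tilde{O}\!\bigl(n^{\omega}\, s\bigr)}_{\text{bound in the average column degree}}
  \quad\text{versus}\quad
  \underbrace{\tilde{O}\!\bigl(n^{\omega}\, d\bigr)}_{\text{bound in the matrix degree}},
  \qquad
  s \;=\; \frac{1}{n}\sum_{i=1}^{n} d_i
  \;\le\;
  d \;=\; \max_{1 \le i \le n} d_i .
\]
```

When the column degrees are unbalanced, for instance when a single column carries almost all of the degree while the remaining columns have constant degree, the average s can be smaller than the maximum d by a factor of roughly n, which is why bounds stated in terms of s are the sharper ones.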