1

移位QR算則在三對角矩陣上之收斂 / Convergence of the Shifted QR Algorithm on Tridiagonal Matrices

蔡淑芬, Tsai, Shu-Fen Unknown Date (has links)
The QR algorithm is a popular method for computing all the eigenvalues of a dense matrix. With a properly chosen shift, the convergence of the iterative process can be accelerated. We therefore design a new shift strategy, which takes as its shift an eigenvalue of the trailing principal 3-by-3 submatrix of the tridiagonal matrix, and we prove the global convergence of the new strategy. In other words, the purpose of this thesis is to establish the convergence of a new shifted QR algorithm.
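To make the strategy concrete, here is a minimal sketch (not the author's code) of one explicitly shifted QR sweep on a symmetric tridiagonal matrix, with the shift drawn from the trailing 3-by-3 principal submatrix. The abstract does not say which of the three eigenvalues is used; the sketch assumes the one closest to the bottom-right entry, so it should be read as an illustration only.

```python
# Minimal sketch of one explicitly shifted QR sweep on a symmetric
# tridiagonal matrix T.  The shift is taken from the trailing 3x3
# principal submatrix as the abstract describes; picking the eigenvalue
# closest to T[-1, -1] is an assumption, not the author's stated rule.
import numpy as np

def shifted_qr_sweep(T):
    """One QR step: factor T - mu*I = Q R, return R Q + mu*I (similar to T)."""
    mu_candidates = np.linalg.eigvalsh(T[-3:, -3:])          # trailing 3x3 block
    mu = mu_candidates[np.argmin(np.abs(mu_candidates - T[-1, -1]))]
    Q, R = np.linalg.qr(T - mu * np.eye(len(T)))
    return R @ Q + mu * np.eye(len(T))

# Toy example: iterate until the last off-diagonal entry is negligible.
rng = np.random.default_rng(0)
a, b = rng.standard_normal(6), rng.standard_normal(5)
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
for _ in range(50):
    T = shifted_qr_sweep(T)
    if abs(T[-1, -2]) < 1e-12:
        break
print(T[-1, -1])   # converged eigenvalue in the bottom-right corner
```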
2

The QR Algorithm

Chu, Hsiao-yin Edith 01 May 1979 (has links)
In this section, we consider two methods for computing an eigenvector and the associated eigenvalue of a matrix A.
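The abstract does not name the two methods. Purely as background, the sketch below shows one standard way of computing an eigenvector and its associated eigenvalue, namely inverse iteration with a Rayleigh-quotient estimate; it is not claimed to be either of the methods treated in the thesis.

```python
# Illustrative only: inverse iteration returns an eigenvector of A whose
# eigenvalue is nearest a guess sigma, with the Rayleigh quotient as the
# associated eigenvalue estimate.
import numpy as np

def inverse_iteration(A, sigma, iters=50):
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    M = A - sigma * np.eye(n)            # fixed shift
    for _ in range(iters):
        x = np.linalg.solve(M, x)        # amplifies the eigenvector nearest sigma
        x /= np.linalg.norm(x)
    lam = x @ A @ x                      # Rayleigh quotient
    return lam, x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(inverse_iteration(A, sigma=3.5))   # approximates the larger eigenvalue
```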
3

Sobre um método assemelhado ao de Francis para a determinação de autovalores de matrizes / On a method similar to that of Francis for determining the eigenvalues of matrices

Oliveira, Danilo Elias de. January 2006 (has links)
Advisor: Eliana Xavier Linhares de Andrade / Committee: Roberto Andreani / Committee: Cleonice Fátima Bracciali / Abstract: The main purpose of this work is to present and discuss the qualities and performance of, and to prove the convergence of, an iterative method for the numerical solution of the matrix eigenvalue problem, which we call the Método Assemelhado ao de Francis (MAF). This method differs from the QR method of Francis in providing a simpler and faster way of obtaining the orthogonal matrices Qk, k = 1, 2. We also present a comparison between the MAF and the QR algorithm of Francis and the LR algorithm of Rutishauser. / Master's
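The MAF itself is not specified in this abstract, so no attempt is made to reproduce it here. For reference, the baseline it is compared against, a plain (unshifted) QR iteration in which every iterate stays similar to the original matrix, can be sketched as follows.

```python
# Baseline only: a plain (unshifted) QR iteration.  Each iterate
# A_{k+1} = R_k Q_k = Q_k^T A_k Q_k is orthogonally similar to A, so the
# eigenvalues are unchanged while the iterates approach diagonal form.
import numpy as np

def unshifted_qr(A, iters=200):
    A = A.copy()
    for _ in range(iters):
        Q, R = np.linalg.qr(A)    # A_k = Q_k R_k
        A = R @ Q                 # A_{k+1} = R_k Q_k
    return np.diag(A)             # eigenvalue approximations (real-spectrum case)

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.sort(unshifted_qr(A)))
print(np.sort(np.linalg.eigvalsh(A)))   # reference values
```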
4

Sobre um método assemelhado ao de Francis para a determinação de autovalores de matrizes / On a method similar to that of Francis for determining the eigenvalues of matrices

Oliveira, Danilo Elias de [UNESP] 23 February 2006 (has links) (PDF)
Abstract: The main purpose of this work is to present and discuss the qualities and performance of, and to prove the convergence of, an iterative method for the numerical solution of the matrix eigenvalue problem, which we call the Método Assemelhado ao de Francis (MAF). This method differs from the QR method of Francis in providing a simpler and faster way of obtaining the orthogonal matrices Qk, k = 1, 2. We also present a comparison between the MAF and the QR algorithm of Francis and the LR algorithm of Rutishauser.
5

QR與LR算則之位移策略 / On the shift strategies for the QR and LR algorithms

黃義哲, HUANG, YI-ZHE Unknown Date (has links)
In the QR and LR iterations for computing the eigenvalues and eigenvectors of a matrix, shift strategies have long been used to accelerate convergence; the most effective of these is the Wilkinson shift. Here we look for shifts that make convergence even faster. We first try using an eigenvalue of a trailing 3-by-3 submatrix as the shift for a QR iteration, choosing among its eigenvalues the one closest to the Wilkinson shift, in the hope of faster convergence. Another strategy is to compute the eigenvalues of the matrix first, with a faster and cheaper algorithm, and then use these computed values as shifts in the QR iteration when computing the more expensive eigenvectors, so that the desired eigenvalues and eigenvectors are obtained sooner. Among algorithms for the eigenvalues alone, we choose the Cholesky iteration for its simplicity and speed. Numerical experiments show that these two strategies save about 10% and 30%, respectively, of the arithmetic work of the EISPACK routines. We compare these strategies and report the results in this thesis. / Abstract: The QR and LR algorithms are the general methods for computing the eigenvalues and eigenvectors of a dense matrix. In this thesis, we propose shift strategies that increase the efficiency of the QR algorithm by first computing the eigenvalues of the matrix (or of its trailing submatrix) in a fast and economical way, and then using them as shifts to find the eigenvalues and their corresponding eigenvectors. When incorporated into the QR algorithm, these shift strategies save about 10 to 30 percent of the work in arithmetic operations.
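As background for the shift strategies discussed above, the sketch below computes the Wilkinson shift from the trailing 2-by-2 block of a symmetric tridiagonal matrix, using the standard textbook formula; the thesis's own strategies build on and are compared against this shift, and the sketch is not taken from the thesis itself.

```python
# The Wilkinson shift: the eigenvalue of the trailing 2x2 block
# [[a, b], [b, c]] that is closer to c, computed in the standard
# stable form (background material, not the thesis's new strategy).
import numpy as np

def wilkinson_shift(T):
    a, c = T[-2, -2], T[-1, -1]
    b = T[-1, -2]
    delta = (a - c) / 2.0
    sign = 1.0 if delta >= 0 else -1.0
    return c - sign * b**2 / (abs(delta) + np.hypot(delta, b))

T = np.diag([4.0, 3.0, 2.0]) + np.diag([1.0, 1.0], 1) + np.diag([1.0, 1.0], -1)
print(wilkinson_shift(T))   # eigenvalue of [[3, 1], [1, 2]] closest to 2
```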
6

Isospectral algorithms, Toeplitz matrices and orthogonal polynomials

Webb, Marcus David January 2017 (has links)
An isospectral algorithm is one which manipulates a matrix without changing its spectrum. In this thesis we study three interrelated examples of isospectral algorithms, all pertaining to Toeplitz matrices in some fashion, and one directly involving orthogonal polynomials. The first set of algorithms we study comes from discretising a continuous isospectral flow designed to converge to a symmetric Toeplitz matrix with prescribed eigenvalues. We analyse constrained, isospectral gradient flow approaches and an isospectral flow studied by Chu in 1993. The second set of algorithms computes the spectral measure of a Jacobi operator, which is the weight function for the associated orthogonal polynomials and can include a singular part. The connection coefficients matrix, which converts between different bases of orthogonal polynomials, is shown to be a useful new tool in the spectral theory of Jacobi operators. When the Jacobi operator is a finite rank perturbation of Toeplitz, here called pert-Toeplitz, the connection coefficients matrix produces an explicit, computable formula for the spectral measure. Generalisation to trace class perturbations is also considered. The third algorithm is the infinite dimensional QL algorithm. In contrast to the finite dimensional case in which the QL and QR algorithms are equivalent, we find that the QL factorisations do not always exist, but that it is possible, at least in the case of pert-Toeplitz Jacobi operators, to implement shifts to generate rapid convergence of the top left entry to an eigenvalue. A fascinating novelty here is that the infinite dimensional matrices are computed in their entirety and stored in tailor made data structures. Lastly, the connection coefficients matrix and the orthogonal transformations computed in the QL iterations can be combined to transform these pert-Toeplitz Jacobi operators isospectrally to a canonical form. This allows us to implement a functional calculus for pert-Toeplitz Jacobi operators.
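A finite-dimensional toy version of the spectral measure mentioned here: for an n-by-n Jacobi matrix, the spectral measure with respect to the first basis vector consists of point masses at the eigenvalues, with weights equal to the squared first components of the normalised eigenvectors (the Golub-Welsch construction). The thesis works with infinite-dimensional operators and connection coefficient matrices; the sketch below is only this elementary finite analogue.

```python
# Spectral measure of a finite Jacobi matrix with respect to e_1:
# point masses at the eigenvalues, weights = squared first components
# of the orthonormal eigenvectors (toy analogue of the thesis's setting).
import numpy as np

def jacobi_spectral_measure(alpha, beta):
    """alpha: diagonal, beta: off-diagonal of a symmetric Jacobi matrix."""
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    nodes, V = np.linalg.eigh(J)
    weights = V[0, :] ** 2        # nonnegative, sum to 1
    return nodes, weights

# Chebyshev (second kind) recurrence coefficients: alpha_k = 0, beta_k = 1/2.
nodes, weights = jacobi_spectral_measure(np.zeros(6), 0.5 * np.ones(5))
print(nodes)     # support points of the discrete measure
print(weights)
```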
7

利用計算矩陣特徵值的方法求多項式的根 / Finding the Roots of a Polynomial by Computing the Eigenvalues of a Related Matrix

賴信憲 Unknown Date (has links)
Given a polynomial $p_n(x)$ of degree n with only real roots, we transform the problem of finding all the roots of $p_n(x)$ into the problem of finding the eigenvalues of a companion matrix or of a symmetric tridiagonal matrix, which can be done with the QR algorithm. Numerical testing shows that finding the roots of a polynomial with standard root-finding algorithms is less efficient than computing the eigenvalues of the related matrix.
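The construction described in this abstract is easy to state in code: build the companion matrix of a monic polynomial and pass it to a QR-based eigenvalue solver (numpy's eigvals calls LAPACK's Hessenberg-QR routine). The sketch below is a generic illustration, not the particular variants studied in the thesis.

```python
# Roots of a monic polynomial via the eigenvalues of its companion matrix.
import numpy as np

def roots_via_companion(coeffs):
    """coeffs = [c0, c1, ..., c_{n-1}] of p(x) = x^n + c_{n-1} x^{n-1} + ... + c0."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)        # ones on the subdiagonal
    C[:, -1] = -np.asarray(coeffs)    # last column carries the coefficients
    return np.linalg.eigvals(C)

# p(x) = (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
print(np.sort(roots_via_companion([-6.0, 11.0, -6.0])))   # approximately [1, 2, 3]
```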
8

計算一個逆特徵值問題 / Computing an Inverse Eigenvalue Problem

范慶辰, Fan, Ching chen Unknown Date (has links)
In this thesis three methods, LMGS, TQR and GR, are applied to solve an inverse eigenvalue problem. We list the numerical results and compare the accuracy of the computed Jacobi matrix $T$ and the associated orthogonal matrix $Q$, where the columns of $Q^T$ are the eigenvectors of $T$. In the application of this inverse eigenvalue problem, the Fourier coefficients of $h(x)=e^x$ relative to the orthonormal polynomials associated with $T$ are evaluated, and these values are used to compute the least squares coefficients of $h$ relative to the Chebyshev polynomials. We list these numerical results and compare them as our conclusion.
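The LMGS, TQR and GR methods are not spelled out in this abstract, so they are not reproduced here. The sketch below only illustrates the finite-dimensional identity behind the application mentioned at the end: if $T$ is a symmetric Jacobi matrix and the orthonormal polynomials are those associated with its spectral measure (normalised so that $p_0 = 1$), then the Fourier coefficients of $h$ relative to these polynomials are the first column of $h(T)$, which can be evaluated from an eigendecomposition of $T$.

```python
# Fourier coefficients of h relative to the orthonormal polynomials
# associated with a Jacobi matrix T: they equal the first column of h(T)
# (finite-dimensional identity; the thesis's LMGS/TQR/GR methods are not
# reproduced here).
import numpy as np

def fourier_coefficients(T, h):
    lam, V = np.linalg.eigh(T)          # T = V diag(lam) V^T
    hT = V @ np.diag(h(lam)) @ V.T      # h(T) via the spectral theorem
    return hT[:, 0]                     # c_k = integral of h * p_k d(mu)

T = np.diag([0.0, 0.0, 0.0]) + np.diag([0.5, 0.5], 1) + np.diag([0.5, 0.5], -1)
print(fourier_coefficients(T, np.exp))  # coefficients of h(x) = e^x
```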
9

Analytic and numerical aspects of isospectral flows

Kaur, Amandeep January 2018 (has links)
In this thesis we address the analytic and numerical aspects of isospectral flows. Such flows occur in mathematical physics and numerical linear algebra. Their main structural feature is to retain the eigenvalues in the solution space. We explore the solution of isospectral flows and their stochastic counterparts using an explicit generalisation of the Magnus expansion.
In the first part of the thesis we expand the solution of the Bloch--Iserles equations, the matrix ordinary differential system of the form $ X'=[N,X^{2}],\ \ t\geq0, \ \ X(0)=X_0\in \textrm{Sym}(n),\ N\in \mathfrak{so}(n), $ where $\textrm{Sym}(n)$ denotes the space of real $n\times n$ symmetric matrices and $\mathfrak{so}(n)$ denotes the Lie algebra of real $n\times n$ skew-symmetric matrices. This system is endowed with a Poisson structure and is integrable. Various important properties of the flow are discussed. The flow is solved using an explicit Magnus expansion, and the terms of the expansion are represented as binary rooted trees, yielding an explicit formalism for constructing the trees recursively. Unlike classical numerical methods, e.g. Runge--Kutta and multistep methods, the Magnus expansion respects the isospectrality of the system, and the shorthand of binary rooted trees reduces the computational cost of the exponentially growing number of terms. The desired structure of the solution (also with large time steps) is demonstrated.
Building on the promising results of the first part, we extend the technique to the generalised double bracket flow $ X^{'}=[[N,X]+M,X], \ \ t\geq0, \ \ X(0)=X_0\in \textrm{Sym}(n),$ where $N\in \textrm{diag}(n)$ and $M\in \mathfrak{so}(n)$, which is also an isospectral flow. In the second part of the thesis we define the generalised double bracket flow and discuss its dynamics. It is noted that $N=0$ reduces it to an integrable flow, while for $M=0$ it results in a gradient flow. We analyse the flow for various non-zero values of $N$ and $M$ by assigning different weights and observe Hopf bifurcation in the system. The discretisation is done using the Magnus series, and the expansion terms are represented using binary rooted trees. Although this matrix system is more complex and leads to trees with tri-coloured leaves, it is still possible to formulate an explicit recursive rule. The desired structure of the solution is obtained, leaving the eigenvalues invariant in the solution space.
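The Magnus-expansion schemes and binary-tree bookkeeping are the substance of the thesis and are not reproduced here. As a small illustration of what an isospectral integrator for the Bloch-Iserles system looks like, the sketch below performs a first-order Lie-Euler step: using the rewriting [N, X^2] = [NX + XN, X] and the fact that NX + XN is skew-symmetric, each step is an orthogonal similarity transform, so the eigenvalues of X are preserved up to round-off.

```python
# A first-order "Lie-Euler" isospectral step for X' = [N, X^2], X symmetric,
# N skew-symmetric.  This is an illustration of isospectral integration,
# not the Magnus-expansion scheme developed in the thesis.
import numpy as np
from scipy.linalg import expm

def lie_euler_step(X, N, h):
    A = N @ X + X @ N        # skew-symmetric, and [A, X] = [N, X^2]
    Q = expm(h * A)          # orthogonal, so the step preserves the spectrum
    return Q @ X @ Q.T

rng = np.random.default_rng(1)
S = rng.standard_normal((4, 4))
X = (S + S.T) / 2            # X(0) symmetric
K = rng.standard_normal((4, 4))
N = (K - K.T) / 2            # N skew-symmetric

ev0 = np.sort(np.linalg.eigvalsh(X))
for _ in range(1000):
    X = lie_euler_step(X, N, h=1e-2)
drift = np.max(np.abs(np.sort(np.linalg.eigvalsh(X)) - ev0))
print(drift)                 # eigenvalue drift stays at round-off level
```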
10

Eigenvalue Algorithms for Symmetric Hierarchical Matrices / Eigenwert-Algorithmen für Symmetrische Hierarchische Matrizen

Mach, Thomas 05 April 2012 (has links) (PDF)
This thesis is on the numerical computation of eigenvalues of symmetric hierarchical matrices. The numerical algorithms used for this computation are derivations of the LR Cholesky algorithm, the preconditioned inverse iteration, and a bisection method based on LDLT factorizations. The investigation of QR decompositions for H-matrices leads to a new QR decomposition. It has some properties that are superior to the existing ones, which is shown by experiments. Using the HQR decomposition to build a QR (eigenvalue) algorithm for H-matrices, however, does not lead to an algorithm more efficient than the LR Cholesky algorithm. The implementation of the LR Cholesky algorithm for hierarchical matrices, together with deflation and shift strategies, yields an algorithm that requires O(n) iterations to find all eigenvalues. Unfortunately, the local ranks of the iterates show a strong growth in the first steps. These H-fill-ins make the computation expensive, so that O(n³) flops and O(n²) storage are required. Theorem 4.3.1 explains this behavior and shows that the LR Cholesky algorithm is efficient for the simply structured Hl-matrices. There is an exact LDLT factorization for Hl-matrices and an approximate LDLT factorization for H-matrices in linear-polylogarithmic complexity. These factorizations can be used to compute the inertia of an H-matrix. With the knowledge of the inertia for arbitrary shifts, one can compute an eigenvalue by bisection. The slicing-the-spectrum algorithm can compute all eigenvalues of an Hl-matrix in linear-polylogarithmic complexity. A single eigenvalue can be computed in O(k²n log^4 n). Since the LDLT factorization for general H-matrices is only approximate, the accuracy of the LDLT slicing algorithm is limited. The local ranks of the LDLT factorization for indefinite matrices are generally unknown, so that there is no statement on the complexity of the algorithm besides the numerical results in Table 5.7. The preconditioned inverse iteration computes the smallest eigenvalue and the corresponding eigenvector. This method is efficient, since the number of iterations is independent of the matrix dimension. If eigenvalues other than the smallest are sought, then preconditioned inverse iteration cannot simply be applied to the shifted matrix, since positive definiteness is necessary. The squared and shifted matrix (M - mu I)² is positive definite. Inner eigenvalues can be computed by the combination of the folded spectrum method and PINVIT. Numerical experiments show that the approximate inversion of (M - mu I)² is more expensive than the approximate inversion of M, so that the computation of inner eigenvalues is more expensive. We compare the different eigenvalue algorithms. The preconditioned inverse iteration for hierarchical matrices is better than the LDLT slicing algorithm for the computation of the smallest eigenvalues, especially if the inverse is already available. The computation of inner eigenvalues with the folded spectrum method and preconditioned inverse iteration is more expensive. The LDLT slicing algorithm is competitive with H-PINVIT for the computation of inner eigenvalues. In the case of large, sparse matrices, specially tailored algorithms for sparse matrices, like the MATLAB function eigs, are more efficient. If one wants to compute all eigenvalues, then the LDLT slicing algorithm seems to be better than the LR Cholesky algorithm.
If the matrix is small enough to be handled in dense arithmetic (and is not an Hl(1)-matrix), then dense eigensolvers, like the LAPACK function dsyev, are superior. The H-PINVIT and the LDLT slicing algorithm require only an almost linear amount of storage, so they can handle larger matrices than eigenvalue algorithms for dense matrices. For Hl-matrices of local rank 1, the LDLT slicing algorithm and the LR Cholesky algorithm need almost the same time for the computation of all eigenvalues. For large matrices, both algorithms are faster than the dense LAPACK function dsyev.
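As a dense stand-in for the LDLT slicing idea described above (the thesis uses hierarchical-matrix LDLT factorizations; plain scipy.linalg.ldl is used here instead), the sketch below counts eigenvalues below a shift via Sylvester's law of inertia and then bisects to isolate the k-th eigenvalue.

```python
# Slicing the spectrum with LDL^T factorizations: by Sylvester's law of
# inertia, the number of eigenvalues of A below mu equals the number of
# negative eigenvalues of the D factor of A - mu*I, so bisection on mu
# isolates the k-th eigenvalue.  Dense scipy.linalg.ldl replaces the
# hierarchical-matrix factorization used in the thesis.
import numpy as np
from scipy.linalg import ldl

def count_below(A, mu):
    _, D, _ = ldl(A - mu * np.eye(len(A)))
    return int(np.sum(np.linalg.eigvalsh(D) < 0))   # inertia of block-diagonal D

def kth_eigenvalue(A, k, lo, hi, tol=1e-10):
    """k-th smallest eigenvalue (1-based); [lo, hi] must bracket the spectrum."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if count_below(A, mid) >= k:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

rng = np.random.default_rng(2)
S = rng.standard_normal((8, 8))
A = (S + S.T) / 2
print(kth_eigenvalue(A, k=3, lo=-20.0, hi=20.0))
print(np.sort(np.linalg.eigvalsh(A))[2])            # reference value
```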
