1 |
Parallel Sparse Linear Algebra for Homotopy Methods. Driver, Maria Sosonkina Jr. (19 September 1997)
Globally convergent homotopy methods are used to solve difficult nonlinear systems of equations by tracking the zero curve of a homotopy map. Homotopy curve tracking involves solving a sequence of linear systems, which often vary greatly in difficulty. In this research, a popular iterative solution tool, GMRES(k), is adapted to deal with the sequence of such systems. The proposed adaptive strategy of GMRES(k) allows tuning of the restart parameter k based on the GMRES convergence rate for the given problem. Adaptive GMRES(k) is shown to be superior to several other iterative techniques on analog circuit simulation problems and on postbuckling structural analysis problems.
Developing parallel techniques for robust but expensive sequential computations, such as globally convergent homotopy methods, is important. The design of these techniques retains the functionality of the sequentially implemented iterative method (adaptive GMRES(k)) and is based on a parallel performance analysis of several implementations. An implementation of adaptive GMRES(k) with Householder reflections in its orthogonalization phase is developed. It is shown that the efficiency of linear system solution by the adaptive GMRES(k) algorithm depends on the change in problem difficulty when the problem is scaled. In contrast, a standard GMRES(k) implementation using Householder reflections maintains constant efficiency as the problem size and the number of processors increase, as concluded analytically and experimentally. The supporting numerical results are obtained on three distributed-memory homogeneous parallel architectures: CRAY T3E, Intel Paragon, and IBM SP2. / Ph. D.
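To make the restart-tuning idea concrete, the following is a minimal NumPy sketch of a restarted GMRES driver that enlarges the restart parameter k whenever a cycle makes little progress. The doubling rule, the `slow` threshold, and the use of modified Gram-Schmidt (rather than the Householder orthogonalization developed in the thesis) are illustrative assumptions, not the thesis algorithm.

```python
import numpy as np

def gmres_cycle(A, b, x0, k):
    """One GMRES(k) cycle: Arnoldi with modified Gram-Schmidt, followed by a
    small least-squares solve for the correction in the Krylov subspace."""
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta == 0.0:
        return x0, 0.0
    n = len(b)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = r0 / beta
    m = k
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):                   # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                  # happy breakdown
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    x = x0 + Q[:, :m] @ y
    return x, np.linalg.norm(b - A @ x)

def adaptive_gmres(A, b, k=10, k_max=50, tol=1e-8, slow=0.9, max_cycles=200):
    """Restarted GMRES that doubles the restart value k whenever a cycle fails
    to reduce the residual norm by at least the factor `slow` -- a simplified
    stand-in for the convergence-rate test of adaptive GMRES(k)."""
    x = np.zeros(len(b))
    bnorm = np.linalg.norm(b)
    res = bnorm
    for _ in range(max_cycles):
        if res <= tol * bnorm:
            break
        x, new_res = gmres_cycle(A, b, x, k)
        if new_res > slow * res and k < k_max:   # stagnation: grow the subspace
            k = min(2 * k, k_max)
        res = new_res
    return x, k
```

Any criterion based on the observed per-cycle residual reduction can be substituted for the `slow` test.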
|
2 |
Iterative Methods for Computing Eigenvalues and Exponentials of Large Matrices. Zhang, Ping (01 January 2009)
In this dissertation, we study iterative methods for computing eigenvalues and exponentials of large matrices. These types of computational problems arise in a large number of applications, including mathematical models in economics and physical and biological processes. Although numerical methods for computing eigenvalues and matrix exponentials have been well studied in the literature, there is a lack of analysis of inexact iterative methods for eigenvalue computation and of certain variants of the Krylov subspace methods for approximating matrix exponentials. In this work, we propose an inexact inverse subspace iteration method that generalizes the inexact inverse iteration for computing multiple and clustered eigenvalues of a generalized eigenvalue problem. Compared with other methods, the inexact inverse subspace iteration method is generally more robust. Convergence analysis shows that the linear convergence rate of the exact case is preserved. The second part of the work presents an inverse Lanczos method to approximate the product of a matrix exponential and a vector, which allows the use of larger time steps in a time-propagation scheme for solving linear initial value problems. Error analysis is given for the inverse Lanczos method, the standard Lanczos method, and the shift-and-invert Lanczos method. The analysis demonstrates the different behaviors of these variants and helps in choosing which variant to use in practice.
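As a concrete reference point, here is a minimal NumPy/SciPy sketch of the standard (non-inverted) Lanczos approximation of a matrix exponential acting on a vector; the step count `m`, the breakdown tolerance, and the sign convention exp(-tau*A)v are illustrative assumptions, and the inverse and shift-and-invert variants analyzed in the dissertation are not reproduced.

```python
import numpy as np
from scipy.linalg import expm

def lanczos_expv(A, v, tau, m=30):
    """Approximate exp(-tau*A) @ v for symmetric A with m Lanczos steps:
    project A onto the Krylov subspace, exponentiate the small tridiagonal
    matrix, and map back.  The inverse / shift-and-invert variants would run
    the same recurrence with A replaced by A^{-1} or (I + gamma*A)^{-1}."""
    n = len(v)
    beta0 = np.linalg.norm(v)
    V = np.zeros((n, m))
    T = np.zeros((m, m))
    V[:, 0] = v / beta0
    beta = 0.0
    steps = m
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w = w - beta * V[:, j - 1]           # three-term Lanczos recurrence
        alpha = V[:, j] @ w
        w = w - alpha * V[:, j]
        T[j, j] = alpha
        beta = np.linalg.norm(w)
        if j + 1 < m:
            if beta < 1e-14:                     # invariant subspace found early
                steps = j + 1
                break
            T[j, j + 1] = T[j + 1, j] = beta
            V[:, j + 1] = w / beta
    e1 = np.zeros(steps)
    e1[0] = 1.0
    # exp(-tau*A) v  ~  ||v|| * V_m * exp(-tau*T_m) * e1
    return beta0 * V[:, :steps] @ (expm(-tau * T[:steps, :steps]) @ e1)
```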
|
3 |
Migration preconditioning with curvelets. Moghaddam, Peyman P.; Herrmann, Felix J. (January 2004)
In this paper, the use of the curvelet transform for preconditioning the migration and normal operators is investigated. These operators belong to the classes of Fourier integral operators and pseudo-differential operators, respectively. The effect of this preconditioner is shown in terms of improved sparsity, convergence rate, number of iterations of the Krylov-subspace solver, and clustering of singular (eigen)values. The migration operator employed in this work is the common-offset Kirchhoff-Born migration.
|
4 |
Teoretické otázky popisu chování krylovovských metod / Theoretical questions of describing the behaviour of Krylov methods. Strnad, Otto (January 2011)
This thesis focuses on the convergence analysis of GMRES. The basic principles of CG, MINRES and GMRES are briefly explained, and some known convergence results for these methods are summarized, together with the known characterizations of the matrices and right-hand sides that generate the same Krylov residual spaces. Connections and differences between the various points of view on GMRES convergence analysis are shown. We conjecture that if the convergence curve of GMRES applied to a nonnormal matrix A and right-hand side b appears to be determined by the eigenvalues of A, then there exists a matrix that is close to normal, has the same spectrum as A, and for the right-hand side b produces the same GMRES convergence curve (assuming the initial approximation x0 = 0). Several numerical experiments are performed to examine this conjecture. The thesis also describes, with its derivation, an unpublished result of Gérard Meurant: a formula for the norm of the k-th error of GMRES applied to a given matrix and right-hand side. An upper estimate of the k-th GMRES error is derived, and this estimate is minimized via the spectrum.
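The conjecture rests on the fact that, for nonnormal matrices, the spectrum alone does not determine GMRES convergence. A small self-contained NumPy experiment, not taken from the thesis, illustrates this; the matrices, dimensions, and iteration count below are arbitrary choices for illustration.

```python
import numpy as np

def gmres_residuals(A, b, m):
    """Residual norms of full (unrestarted) GMRES for iterations 0..m,
    computed via Arnoldi and small least-squares problems (x0 = 0)."""
    n = len(b)
    beta = np.linalg.norm(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / beta
    res = [beta]
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                  # exact solution reached
            res.append(0.0)
            break
        Q[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        x = Q[:, :j + 1] @ y
        res.append(np.linalg.norm(b - A @ x))
    return np.array(res)

# A normal (diagonal) matrix and a highly nonnormal upper-triangular matrix
# with the very same eigenvalues can produce very different GMRES curves.
rng = np.random.default_rng(0)
n = 100
eigs = np.linspace(1.0, 10.0, n)
A_normal = np.diag(eigs)
A_nonnormal = A_normal + 5.0 * np.triu(rng.standard_normal((n, n)), 1)
b = rng.standard_normal(n)
print(gmres_residuals(A_normal, b, 30)[-1])
print(gmres_residuals(A_nonnormal, b, 30)[-1])
```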
|
5 |
Analysis and Implementation Considerations of Krylov Subspace Methods on Modern Heterogeneous Computing Architectures. Higgins, Andrew (ORCID: 0009-0007-5527-9263) (05 1900)
Krylov subspace methods are the state-of-the-art iterative algorithms for solving large, sparse systems of equations, which are ubiquitous throughout scientific computing. Even with Krylov methods, these problems are often infeasible to solve on standard workstation computers and must be solved instead on supercomputers. Most modern supercomputers fall into the category of “heterogeneous architectures”, typically meaning a combination of CPU and GPU processors. Thus, development and analysis of Krylov subspace methods on these heterogeneous architectures is of fundamental importance to modern scientific computing.
This dissertation focuses on how this relates to several specific problems. The first analyzes the performance of block GMRES (BGMRES) compared to GMRES for linear systems with multiple right-hand sides (RHS) on both CPUs and GPUs, and models when BGMRES is most advantageous over GMRES on the GPU. On CPUs, the current paradigm is that if one wishes to solve a system of equations with multiple RHS, BGMRES can indeed outperform GMRES, but not always. Our original goal was to see whether there are cases for which BGMRES is slower in execution time than GMRES on the CPU, while on the GPU the reverse holds. This is true, and we generally observe much faster execution times and larger improvements for BGMRES on the GPU. We also observe that, for any fixed matrix, as the number of RHS increases there is a point at which the improvements start to decrease and eventually any advantage of the (unrestarted) block method is lost. We present a new computational model which helps explain why this is so. The significance of this analysis is that it demonstrates greater potential for block Krylov methods on heterogeneous architectures than on the previously studied CPU-only machines. Moreover, the theoretical runtime model can be used to identify an optimal partitioning strategy of the RHS for solving systems with many RHS.
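The kernel behind the GPU advantage of BGMRES is that each iteration applies the sparse matrix to a block of vectors at once. A minimal NumPy sketch of one block Arnoldi step follows; the names and the simple Gram-Schmidt scheme are chosen only for illustration and are not taken from the dissertation.

```python
import numpy as np

def block_arnoldi_step(A, V_block, Q_blocks):
    """One block Arnoldi step: multiply A by a block of p vectors at once
    (one sparse matrix-matrix product instead of p separate matrix-vector
    products), orthogonalize block-wise against the existing basis blocks,
    and orthonormalize what remains with a QR factorization."""
    W = A @ V_block                              # one SpMM for all RHS directions
    H_blocks = []
    for Q in Q_blocks:                           # block Gram-Schmidt sweep
        H = Q.T @ W
        W = W - Q @ H
        H_blocks.append(H)
    V_next, H_next = np.linalg.qr(W)             # orthonormalize the new block
    return V_next, H_blocks, H_next
```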
The second problem studies the s-step GMRES method, which is an implementation of GMRES that attains high performance on modern heterogeneous machines by generating s Krylov basis vectors per iteration, and then orthogonalizing the vectors in a block-wise fashion. The use of s-step GMRES is currently limited because the algorithm is prone to numerical instabilities, partially due to breakdowns in a tall-and-skinny QR subroutine. Further, a conservatively small step size must be used in practice, limiting the algorithm’s performance. To address these issues, first a novel randomized tall-and-skinny QR factorization is presented that is significantly more stable than the current practical algorithms without sacrificing performance on GPUs. Then, a novel two-stage block orthogonalization scheme is introduced that significantly improves the performance of the s-step GMRES algorithm when small step sizes are used. These contributions help make s-step GMRES a more practical method in heterogeneous, and therefore exascale, environments. / Mathematics
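A simplified sketch of the s-step idea, generating s basis vectors before any orthogonalization and then orthogonalizing block-wise, is given below; the monomial basis and the single classical Gram-Schmidt projection are illustrative simplifications of the Newton/Chebyshev bases and the two-stage scheme discussed above.

```python
import numpy as np

def s_step_basis(A, q_start, s, Q_prev=None):
    """Generate s Krylov vectors at once (a monomial basis, chosen only for
    brevity; practical s-step codes use Newton or Chebyshev bases), then
    orthogonalize block-wise: one projection against the previous basis
    followed by one tall-and-skinny QR of the new block."""
    W = np.empty((len(q_start), s))
    W[:, 0] = A @ q_start
    for j in range(1, s):
        W[:, j] = A @ W[:, j - 1]
    if Q_prev is not None:
        W = W - Q_prev @ (Q_prev.T @ W)          # block classical Gram-Schmidt
    Q_new, R_new = np.linalg.qr(W)               # the TSQR slot of an s-step method
    return Q_new, R_new
```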
|
6 |
Recycling Bi-Lanczos Algorithms: BiCG, CGS, and BiCGSTAB. Ahuja, Kapil (21 September 2009)
Engineering problems frequently require solving a sequence of dual linear systems. This paper introduces recycling BiCG, which recycles the Krylov subspace from one pair of linear systems to the next pair. An augmented bi-Lanczos algorithm and a modified two-term recurrence are developed to use the recycle space, which is built from the approximate invariant subspace corresponding to eigenvalues close to the origin. The recycling approach is extended to the CGS and BiCGSTAB algorithms. Experiments on a convection-diffusion problem give promising results. / Master of Science
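One ingredient of such recycling can be sketched compactly: using a retained approximate invariant subspace to deflate the initial guess for the next system. The NumPy sketch below covers only this start-up step, not the augmented bi-Lanczos recurrence of the thesis; the function and variable names are illustrative assumptions.

```python
import numpy as np

def deflated_initial_guess(A, b, U):
    """Deflate the initial guess for the next system in the sequence using a
    recycled subspace U (an approximate invariant subspace, kept from the
    previous system, associated with eigenvalues close to the origin): solve
    the small projected least-squares problem and start the Krylov iteration
    from the deflated residual."""
    C = A @ U                                    # image of the recycle space
    y, *_ = np.linalg.lstsq(C, b, rcond=None)    # min ||b - C y||
    x0 = U @ y
    r0 = b - A @ x0                              # deflated initial residual
    return x0, r0
```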
|
7 |
Generalizations of an Inverse Free Krylov Subspace Method for the Symmetric Generalized Eigenvalue Problem. Quillen, Patrick D. (01 January 2005)
Symmetric generalized eigenvalue problems arise in many physical applications and frequently only a few of the eigenpairs are of interest. Typically, the problems are large and sparse, and therefore traditional methods such as the QZ algorithm may not be considered. Moreover, it may be impractical to apply shift-and-invert Lanczos, a favored method for problems of this type, due to difficulties in applying the inverse of the shifted matrix. With these difficulties in mind, Golub and Ye developed an inverse free Krylov subspace algorithm for the symmetric generalized eigenvalue problem. This method does not rely on shift-and-invert transformations for convergence acceleration, but rather a preconditioner is used. The algorithm suffers, however, in the presence of multiple or clustered eigenvalues. Also, it is only applicable to the location of extreme eigenvalues. In this work, we extend the method of Golub and Ye by developing a block generalization of their algorithm which enjoys considerably faster convergence than the usual method in the presence of multiplicities and clusters. Preconditioning techniques for the problems are discussed at length, and some insight is given into how these preconditioners accelerate the method. Finally we discuss a transformation which can be applied so that the algorithm extracts interior eigenvalues. A preconditioner based on a QR factorization with respect to the B⁻¹ inner product is developed and applied in locating interior eigenvalues.
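A bare-bones sketch of one inverse-free Krylov iteration for the pencil (A, B), without the preconditioning and block generalization developed in this work, might look as follows; the subspace dimension m and the use of dense NumPy/SciPy routines are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import eigh

def inverse_free_step(A, B, x, m=5):
    """One step of a basic inverse-free Krylov iteration for A x = lambda B x:
    build a small Krylov subspace of C = A - rho*B at the current iterate and
    perform a Rayleigh-Ritz projection of the pencil (A, B) onto it."""
    x = x / np.linalg.norm(x)
    rho = (x @ (A @ x)) / (x @ (B @ x))          # current Rayleigh quotient
    C = A - rho * B
    V = np.zeros((len(x), m + 1))
    V[:, 0] = x
    for j in range(m):
        V[:, j + 1] = C @ V[:, j]
    Q, _ = np.linalg.qr(V)                       # orthonormal basis of the subspace
    Am, Bm = Q.T @ A @ Q, Q.T @ B @ Q            # projected small pencil
    vals, vecs = eigh(Am, Bm)                    # dense generalized eigensolve
    return vals[0], Q @ vecs[:, 0]               # Ritz pair for the smallest eigenvalue
```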
|
8 |
Nonstandard inner products and preconditioned iterative methods. Pestana, Jennifer (January 2011)
By considering Krylov subspace methods in nonstandard inner products, we develop in this thesis new methods for solving large sparse linear systems and examine the effectiveness of existing preconditioners. We focus on saddle point systems and systems with a nonsymmetric, diagonalizable coefficient matrix. For symmetric saddle point systems, we present a preconditioner that renders the preconditioned saddle point matrix nonsymmetric but self-adjoint with respect to an inner product and for which scaling is not required to apply a short-term recurrence method. The robustness and effectiveness of this preconditioner, when applied to a number of test problems, are demonstrated. We additionally utilize combination preconditioning (Stoll and Wathen. SIAM J. Matrix Anal. Appl. 2008; 30:582-608) to develop three new combination preconditioners. One of these is formed from two preconditioners for which only a MINRES-type method can be applied, and yet a conjugate-gradient type method can be applied to the combination preconditioned system. Numerical experiments show that application of these preconditioners can result in faster convergence. When the coefficient matrix is diagonalizable, but potentially nonsymmetric, we present conditions under which the pseudospectra of a preconditioner and coefficient matrix are identical and characterize the pseudospectra when this condition is not exactly fulfilled. We show that when the preconditioner and coefficient matrix are self-adjoint with respect to nearby symmetric bilinear forms the convergence of a particular minimum residual method is bounded by a term that depends on the spectrum of the preconditioned coefficient matrix and a constant that is small when the symmetric bilinear forms are close. An iteration-dependent bound for GMRES in the Euclidean inner product is presented that shows precisely why a standard bound can be pessimistic. We observe that, for certain problems, known effective preconditioners are either self-adjoint with respect to the same symmetric bilinear form as the coefficient matrix or one that is nearby.
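The central structural property used here, self-adjointness with respect to a nonstandard inner product, is easy to test numerically. A small NumPy helper (illustrative only) is:

```python
import numpy as np

def is_self_adjoint(M, H, tol=1e-12):
    """Test whether M is self-adjoint in the inner product <x, y>_H = x^T H y,
    i.e. whether H @ M is symmetric. This is the property that allows a
    short-term-recurrence (CG/MINRES-type) method to be applied to a
    preconditioned matrix M = inv(P) @ A even when M itself is nonsymmetric."""
    HM = H @ M
    return np.linalg.norm(HM - HM.T) <= tol * np.linalg.norm(HM)
```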
|
9 |
Analýza Krylovovských metod / Analysis of Krylov subspace methods. Gergelits, Tomáš (January 2013)
Title: Analysis of Krylov subspace methods Author: Tomáš Gergelits Department: Department of Numerical Mathematics Supervisor: prof. Ing. Zdeněk Strakoš, DrSc. Abstract: After the derivation of the Conjugate Gradient method (CG) and a short review of its relationship with other fields of mathematics, this thesis focuses on its convergence behaviour in both exact and finite precision arithmetic. The fundamental difference between CG and the Chebyshev semi-iterative method is described in detail. We then investigate the use of the widespread linear convergence bound based on Chebyshev polynomials. Using the example of composite polynomial convergence bounds, it is shown that the effects of rounding errors must be included in any consideration of the CG rate of convergence relevant to practical computations. Furthermore, the close correspondence between the trajectories of the CG approximations generated in finite precision and exact arithmetic is studied. The thesis concludes with a discussion of the sensitivity of the closely related Gauss-Christoffel quadrature. The last two topics may motivate our further research. Keywords: Conjugate Gradient Method, Chebyshev semi-iterative method, finite precision computations, delay of convergence, composite polynomial convergence...
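For reference, the widespread linear bound discussed above can be written as a one-line function; kappa denotes the condition number of the SPD matrix, and the bound applies to exact-arithmetic CG.

```python
import numpy as np

def cg_chebyshev_bound(kappa, k):
    """Classical Chebyshev-based bound for CG in exact arithmetic:
    ||x - x_k||_A / ||x - x_0||_A <= 2 * ((sqrt(kappa) - 1) / (sqrt(kappa) + 1))**k.
    The thesis argues that such bounds, and composite polynomial bounds, can be
    misleading for finite-precision CG, where rounding errors delay convergence."""
    q = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)
    return 2.0 * q ** k
```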
|
10 |
Méthodes tangentielles pour les réductions de modèles et applications / Tangential methods for model reductions and applications. Kaouane, Yassine (31 December 2018)
Large-scale simulations play a crucial role in the study of a great variety of complex physical phenomena, often leading to overwhelming demands on computational resources. Managing these demands constitutes the main motivation for model reduction: produce simpler reduced-order models that allow for faster and cheaper simulation while accurately approximating the behaviour of the original model. The presence of systems with multiple inputs and multiple outputs (MIMO) makes the reduction process even more challenging. In this thesis we are interested in methods for reducing large-scale models using projection onto tangential Krylov subspaces. We focus on the development of techniques that use tangential interpolation. These present an effective and interesting alternative to balanced truncation, which is considered a reference in the field, especially for the reduction of linear time-invariant systems. Finally, special attention is devoted to the development of new efficient algorithms and to their application to practical problems.
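As an illustration of tangential interpolation (not of the specific algorithms developed in the thesis), a right projection basis can be assembled as follows; the interpolation points and tangential directions are assumed given, and the construction of the left basis W is omitted.

```python
import numpy as np

def tangential_krylov_basis(A, B, sigmas, directions):
    """Assemble a right projection basis for tangential interpolation of a MIMO
    system (A, B, C): each column is (sigma_i * I - A)^{-1} B r_i for an
    interpolation point sigma_i and a right tangential direction r_i. The
    reduced model W^T A V, W^T B, C V then matches the transfer function along
    those directions; IRKA-type schemes would update points and directions
    iteratively."""
    n = A.shape[0]
    cols = [np.linalg.solve(sigma * np.eye(n) - A, B @ r)
            for sigma, r in zip(sigmas, directions)]
    V, _ = np.linalg.qr(np.column_stack(cols))   # orthonormalize the basis
    return V
```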
|